2025-12-10 06:39:09
By Dev Makai
Hey builders, 👋
Today I want to dive into one of those "why didn't anyone tell me this sooner?" moments that cost me two days of debugging and nearly cost me real revenue. We're talking about webhooks—specifically, why your payment integration might be failing silently while everything looks green on the dashboard.
The Setup That Betrays You
Picture this: You've integrated a payment gateway. You set up webhooks. You test with a few transactions. Everything works. You deploy to production. Weeks later, you notice discrepancies in your accounting. Some transactions processed but never triggered fulfillment. Sound familiar?
Here's the brutal truth I learned: Webhooks fail silently more often than they fail loudly.
Two Critical Workarounds That Should Be Requirements
"Set up a re-query service that polls for transaction status at regular intervals."
This isn't a suggestion—it's their admission that webhook delivery isn't guaranteed. Your server goes down? Network hiccup? Rate limit hit? Webhooks get lost in the void.
My Implementation Strategy:
javascript
// Simple poller service (Node.js example)
async function reconcileTransactions() {
  const pending = await getPendingTransactions();
  for (const tx of pending) {
    const status = await paymentGateway.checkStatus(tx.reference);
    if (status.hasChanged()) {
      await processWebhookPayload(status);
      await markAsReconciled(tx.id);
    }
  }
}

// Run every 15 minutes
setInterval(reconcileTransactions, 15 * 60 * 1000);
This poller acts as your safety net. It catches what webhooks miss. Without it, you're trusting external systems with your business logic—a dangerous gamble.
The second documented workaround: "Add a trailing slash to your URL."
Why does that matter? Without the trailing slash, many servers answer the webhook POST with a 301 redirect, and most gateways either don't follow redirects or drop the payload when they do, so the delivery silently fails.
The proper solution? Fix your server config:
apache
# .htaccess - The RIGHT way
RewriteEngine On
RewriteCond %{REQUEST_METHOD} POST
RewriteRule ^webhook$ /webhook/ [L]
Or better yet:
apache
DirectorySlash Off
Real Damage I've Seen
SaaS Company: Lost $14K in monthly recurring revenue because canceled subscriptions kept charging (failed webhooks)
E-commerce Store: 200+ digital products never delivered despite successful payments
Booking Platform: Double-bookings when webhooks arrived out of order
My Webhook Checklist
After getting burned, here's my non-negotiable checklist:
Before Go-Live
Idempotency Keys: Process the same webhook multiple times safely (see the sketch after this list)
Dead Letter Queue: Store failed webhooks for manual review
Signature Verification: Never trust incoming requests without validation
Complete Logging: Request body, headers, processing result, timestamp
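Here's a minimal sketch of how the first three items fit together in an Express handler. The x-webhook-signature header, the WEBHOOK_SECRET variable, and the event shape are assumptions for illustration; your gateway's docs define the real header names and signing scheme, and the in-memory Set stands in for a durable store.
typescript
import crypto from "crypto";
import express from "express";

const app = express();
const processedEvents = new Set<string>(); // stand-in for a durable store (DB/Redis)

// Use the raw body so the HMAC is computed over exactly what the gateway signed
app.post("/webhook", express.raw({ type: "application/json" }), (req, res) => {
  // 1. Signature verification: never trust the request without it
  const secret = process.env.WEBHOOK_SECRET ?? "";
  const expected = crypto.createHmac("sha256", secret).update(req.body).digest("hex");
  const received = String(req.headers["x-webhook-signature"] ?? ""); // hypothetical header name
  if (received.length !== expected.length ||
      !crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(received))) {
    return res.status(401).send("invalid signature");
  }

  // 2. Idempotency: the same event can arrive more than once
  const event = JSON.parse(req.body.toString());
  if (processedEvents.has(event.id)) {
    return res.status(200).send("already processed");
  }
  processedEvents.add(event.id);

  // 3. Log everything before doing the actual fulfillment work
  console.log(new Date().toISOString(), event.id, "processing");
  // ... fulfillment logic ...
  res.status(200).send("ok");
});

app.listen(3000);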
Production Monitoring
Success Rate Dashboard: Track delivery failures in real-time
Automated Reconciliation: Daily checks comparing gateway vs your database
Alerting: Get notified when failure rate exceeds 1% (a rough sketch follows this list)
Manual Trigger: Ability to resend webhooks from gateway dashboard
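For the alerting item, here's a rough sketch assuming you record each delivery attempt somewhere you can query; recordWebhookOutcome and the console.error call are hypothetical stand-ins for your own logging and pager/Slack plumbing.
typescript
type WebhookOutcome = { eventId: string; ok: boolean; receivedAt: number };

const outcomes: WebhookOutcome[] = []; // stand-in for a real log store

function recordWebhookOutcome(eventId: string, ok: boolean): void {
  outcomes.push({ eventId, ok, receivedAt: Date.now() });
}

function checkFailureRate(windowMs = 60 * 60 * 1000, threshold = 0.01): void {
  const since = Date.now() - windowMs;
  const recent = outcomes.filter((o) => o.receivedAt >= since);
  if (recent.length === 0) return;
  const failed = recent.filter((o) => !o.ok).length;
  const rate = failed / recent.length;
  if (rate > threshold) {
    // Replace with your pager/Slack integration
    console.error(`Webhook failure rate ${(rate * 100).toFixed(1)}% over the last hour`);
  }
}

setInterval(checkFailureRate, 15 * 60 * 1000); // check every 15 minutes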
The Mindset Shift
The biggest lesson? Treat webhooks as "best effort" notifications, not reliable triggers. Build your system to survive their failure.
Your payment integration shouldn't be a house of cards. That trailing slash workaround? The polling recommendation? They're band-aids on deeper architectural issues.
Your Turn
What webhook horror stories have you survived? What silent failures did you discover way too late? Hit reply or find me on [Twitter/LinkedIn]—I read every response.
And if you're implementing payment integration this week, do me a favor: Add that polling service BEFORE you go live. Your future self will thank you when the server decides to reboot during peak hours.
Stay building (and keep those webhooks honest),
Makai
2025-12-10 06:34:25
On December 3rd, 2025, React disclosed CVE-2025-55182, a critical remote code execution vulnerability with a CVSS score of 10.0, the maximum possible severity. Within hours, attackers were exploiting it in the wild. Nearly a million servers running React 19 and Next.js were vulnerable to unauthenticated remote code execution. For a framework that had maintained a remarkably clean security record over 13 years (a single minor XSS vulnerability, CVSS 6.1, in 2018), this represented a catastrophic failure.
The exploit exists in React's "Flight" protocol, a custom serialization format introduced with React Server Components. Flight handles the transfer of data and execution context between client and server. The vulnerability allowed attackers to craft malicious payloads that, when deserialized by the server, could execute arbitrary code. The attack required no authentication, just network access to send a crafted HTTP request to any Server Components endpoint.
The technical root cause was unsafe deserialization of untrusted client data. The server accepted serialized objects from clients, deserialized and executed code based on their contents, including accessing object properties like .then and .constructor that allowed attackers to reach JavaScript's code execution primitives. React's defenses relied on the assumption that the serialization format itself would prevent malicious inputs, rather than treating all client data as untrusted by default.
React Server Components (RSC) represent a fundamental shift in React's architecture. Traditionally, React was a client-side library that ran in the browser, rendering user interfaces and talking to backend APIs via standard REST or GraphQL endpoints. Your backend could be written in any language: Python, Go, Ruby, Java, whatever made sense for your use case.
Server Components change this model. They allow React components to execute on the server, access databases directly, and serialize their results, including promises and complex state, to the client using the Flight protocol. Functions marked with 'use server' become server-side endpoints automatically, no explicit API routes required. The framework handles routing these "Server Actions" and serializing the data flow between client and server.
The pitch is seductive: write your frontend and backend in the same files, using the same language, with "seamless" data flow between them. No API boilerplate, no context switching, just components that "magically" know whether to run on client or server.
This is where React abandoned decades of hard-won security wisdom. The fundamental principle of secure systems is simple: never trust client input. Every mature framework and language ecosystem has learned this lesson through painful experience:
Java serialization vulnerabilities plagued the ecosystem for years, leading to remote code execution in countless applications. The Java security team eventually concluded that deserializing untrusted data was simply too dangerous, leading to deprecation warnings and architectural guidance to avoid it entirely.
PHP's unserialize() function became the attack vector for thousands of WordPress compromises. The PHP community learned to treat deserialization of user input as an anti-pattern to be avoided.
Python's pickle module documentation explicitly warns: "The pickle module is not secure. Only unpickle data you trust." It's considered unsafe for any data that might come from untrusted sources.
Ruby's Marshal has the same warnings and the same history of vulnerabilities.
React looked at this 50-year history and decided to build a custom serialization protocol that deserializes client data into server execution contexts. The Flight protocol needed to be "smarter" than JSON, capable of serializing promises, closures, and complex object graphs. This meant it needed to be more complex, more powerful, and inevitably, more dangerous.
The vulnerability wasn't an implementation bug that slipped through code review. It was the predictable consequence of violating a fundamental security principle: complex deserialization of untrusted data leads to remote code execution. If you can't do it perfectly, don't do it at all.
Traditional REST APIs avoid this entire class of vulnerabilities by using JSON, a deliberately limited data format that carries no execution context, no code, no object methods. JSON is "dumb" in exactly the right way: it's just data structures. The server receives JSON, validates it against expected schemas, and explicitly routes it to the appropriate handler. There's no deserialization of execution contexts, no automatic invocation of client specified code paths, no blurred boundaries between data and code.
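For contrast, here is a minimal sketch of that explicit approach in a Node/Express service; the db object is a stand-in for whatever data layer you use, and a schema library would normally replace the hand-written checks:

import express from "express";

// Stand-in for a real data layer
const db = {
  posts: {
    create: async (post: { title: string; body: string }) => ({ id: 1, ...post }),
  },
};

const app = express();
app.use(express.json()); // the body is parsed as plain data: no code, no prototypes, no promises

app.post("/api/posts", async (req, res) => {
  const { title, body } = req.body ?? {};

  // Explicit validation against the expected shape
  if (typeof title !== "string" || typeof body !== "string" || title.length === 0) {
    return res.status(400).json({ error: "title and body must be non-empty strings" });
  }

  // Explicit routing: nothing in the payload chooses which code runs
  const post = await db.posts.create({ title, body });
  res.status(201).json(post);
});

app.listen(3000);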
React Server Components don't just introduce security risks; they eliminate architectural flexibility. When you mark a function with 'use server', you haven't created an API. You've created a React specific endpoint that can only be called by React clients using the Flight protocol.
Consider a traditional REST API:
@app.post('/api/posts')
def create_post(data):
    return db.posts.create(data)
This endpoint can be called by a web frontend, a mobile app, a CLI tool, another backend service, or a plain curl command. It can be documented with OpenAPI/Swagger. It can be monitored with standard HTTP tooling. It can be secured with standard WAF rules. It works with every language's HTTP library.
Now consider a Server Action:
'use server'
async function createPost(data) {
  return await db.posts.create(data);
}
This can be called by... your React frontend. That's it. It uses a proprietary protocol (Flight) that only React understands. It can't be documented in a language-agnostic way. Standard HTTP monitoring tools can't parse the payloads. Security tools can't inspect the traffic. If you want to build a mobile app, you'll need to create a separate REST API anyway.
You haven't eliminated API boilerplate, you've just hidden it behind framework magic while simultaneously limiting who can use it. When your application inevitably needs to support multiple client types such as web, mobile, and CLI, you'll end up maintaining two parallel systems: Server Actions for your React web app, and a proper REST API for everything else. The "convenience" of Server Components becomes technical debt the moment you need to integrate with anything outside the React ecosystem.
The reusability problem extends beyond just multiple clients. Modern applications often need to expose webhooks for third-party services, integrate with partner APIs, or provide data to analytics platforms. None of these can consume React Server Actions. You're forced back to building traditional API endpoints, making the Server Actions redundant: a solution in search of a problem that only created more problems.
Perhaps the most insidious aspect of React Server Components is the way they eliminate architectural choice. For 13 years, React worked with any backend. Your API server could be written in Python for data science and machine learning, Go for high-performance services, Rust for systems programming, Java for enterprise integration, or Ruby for rapid development. The choice was yours, based on your team's expertise and your application's requirements.
Server Components change this equation fundamentally. To use them, your server must be JavaScript—specifically, Node.js or a compatible runtime. The Flight protocol, the Server Actions routing, the serialization and deserialization: all of it requires a JavaScript runtime on the server.
This matters more than React advocates want to admit. JavaScript is a fine language, but it's not the right tool for every job:
Machine Learning and AI: Python dominates this space with mature ecosystems (PyTorch, TensorFlow, scikit-learn) and tools that don't have JavaScript equivalents. If your application needs to serve ML models, you'll need Python services anyway.
High-Performance Computing: For CPU-intensive work, systems programming, or services requiring fine-grained control over memory and concurrency, languages like Rust, Go, or C++ are simply better suited. JavaScript's single-threaded nature and garbage collection can be limiting factors.
Enterprise Integration: Many organizations have existing investments in Java or .NET ecosystems, with established patterns, libraries, and expertise. Forcing a JavaScript backend means either maintaining parallel systems or abandoning these investments.
Data Processing: For heavy data processing, languages like Python (with NumPy/Pandas), R, or even Julia provide better ergonomics and performance than JavaScript.
Traditional REST APIs let you choose the right tool for each job. Your frontend can be React (or Vue, or Svelte) while your backend leverages Python's data science libraries, Go's performance, or Java's enterprise ecosystem. Each layer uses the language that makes the most sense for its requirements.
Server Components eliminate this flexibility. Your entire stack must be JavaScript, regardless of whether it's the best choice for your backend requirements. This isn't just a technical limitation. It's an architectural straitjacket that forces technical decisions based on framework constraints rather than application needs.
The irony is that React's original success came partly from its flexibility. It was just a view library that worked with any backend. Server Components abandoned this principle in pursuit of "full-stack" integration. It traded away the architectural freedom that made React attractive in the first place.
CVE-2025-55182 isn't an isolated incident. It's a symptom of a broader problem in the JavaScript ecosystem. There's a pattern of frameworks prioritizing developer convenience over architectural soundness, of "innovation" that ignores lessons learned in other ecosystems, of complexity marketed as simplicity.
React had a good thing. It was a solid client-side library with a clean security record. Then it tried to own the full stack, invented custom protocols to blur client-server boundaries, and ended up with CVSS 10.0 remote code execution vulnerabilities affecting nearly a million servers.
The traditional approach, with its clear separation between frontend and backend, standard protocols like HTTP and JSON, and explicit API boundaries, might seem old-fashioned, but it works. It's secure. It's flexible. It doesn't lock you into a single language or ecosystem. And it doesn't require inventing custom serialization protocols that recreate vulnerabilities we learned to avoid decades ago.
Sometimes the boring solution is the right solution. Sometimes the old way was better. And sometimes "seamless developer experience" is just another way of saying "we hid the complexity until it exploded."
React Server Components represent a fundamental architectural mistake. Patching one exploit doesn't fix the underlying problem: you're still deserializing untrusted client data into server execution contexts. The next vulnerability is already there, waiting to be discovered. Because when you violate basic security principles in pursuit of convenience, vulnerabilities aren't bugs, they're features.
2025-12-10 06:31:44
That’s the question I will try to answer in this blog post.
But first, let me tell you a story.
Once upon a time, there was a small team working on a critical application. The deadline was tight, and management was pushing for a release “as soon as possible.” One of the engineers hesitated: some of the automated tests were failing — minor edge cases, nothing too urgent, they argued. “We’ll fix them later,” they said. The pressure to deliver trumped the red test suite. The code was shipped. In production, the very edge cases that those failing tests covered caused a subtle bug that corrupted user data. It took weeks to identify and resolve the issue, causing the company significant reputational damage and eroding user trust. Ultimately, the “minor” failing tests cost far more than the extra time needed to get them green.
That engineer’s hesitation — and ultimately the team’s decision to ship with failing tests — was not just a matter of expediency. It was a failure of professional ethics.
A professional software engineer does more than “just write code.” According to Robert C. Martin (often referred to as “Uncle Bob”), programming is an act of responsibility, and professionals must take full ownership of the consequences of their work. In The Clean Coder, he makes it clear that professionalism is not about compliance, but about ethics: doing the right thing even when it is uncomfortable.
Accepting a failing test — even “just for now” — violates that responsibility. It knowingly delivers sub-par or unsafe software, relying on the hope that the missing fix will come “later.” That violates the very core of professionalism.
When you accept failing tests, you are not just betting on luck — you are betraying the craft.
This ethical stance is not unique to one author.
Kent Beck, the creator of Extreme Programming, placed “Respect” at the core of XP. Respect for your teammates means never checking in code that breaks the build or leaves tests failing. If your changes cause others pain, you are not acting professionally.
Martin Fowler has consistently argued that working software is not enough if the internal quality of the system is decaying. He emphasizes that design and refactoring are continuous responsibilities — not something to postpone indefinitely in favor of “temporary” defects.
John Ousterhout, in A Philosophy of Software Design, strongly warns against “tactical programming,” where engineers optimize for speed today and leave complexity and fragility behind for tomorrow. He argues that real engineering is strategic: reducing complexity over time and refusing to accept unstable or fragile systems.
These authors approach the topic from different angles, but they converge on the same principle: professionals think long-term rather than short-term.
Based on both ethics and practice, there is a rule that should be treated as non-negotiable:
A professional software engineer must never accept or ship code with failing (red) tests — under any circumstances.
Red tests are a signal of broken promises. They tell you the system is not behaving as expected. To ignore that signal is to normalize dishonesty in your own work.
Robert C. Martin describes professionalism as the courage to say “no” — even to managers — when quality is at risk. Refusing to ship with broken tests is not stubbornness; it is ethical courage.
Software increasingly controls critical aspects of our lives: money, health, safety, communication, transportation, and many other domains. The idea that it is “okay for now” to ship broken software is not just naive — it is dangerous.
Accepting failing tests teaches teams to accept uncertainty as normal. It teaches them to postpone discipline. It teaches them to gamble with other people’s safety.
That is not engineering.
That is not craftsmanship.
That is not professionalism.
Professional software engineering is not about being brilliant. It is about being disciplined.
It is about refusing to lower the bar, even when deadlines are tight.
It is about refusing to normalize brokenness.
It is about choosing integrity over convenience.
If you want to call yourself a professional, then this is the standard:
Never accept red tests.
Never ship with red tests.
Never normalize brokenness.
Demand green tests. Ship only with green tests. Do it always.
2025-12-10 06:31:15
In this series, I'll share my progress with the 2025 version of Advent of Code.
Check the first post for a short intro to this series.
You can also follow my progress on GitHub.
The puzzle of day 9 was again a hairpuller. And again, I missed the deadline of completing the puzzle on the same day it was released, as I've only done part one so far.
My pitfall for this puzzle: TBD
Solution below, don't read on if you want to solve the puzzle yourself first
#include <cassert>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
struct Point {
    int x;
    int y;
};

std::vector<Point> loadInput(const std::string &filename) {
    std::vector<Point> result;
    std::ifstream file(filename);
    std::string line;
    while (std::getline(file, line)) {
        auto comma = line.find(',');
        int x = std::stoi(line.substr(0, comma));
        int y = std::stoi(line.substr(comma + 1));
        result.push_back({x, y});
    }
    return result;
}
void partOne() {
    const auto points = loadInput("/Users/rob/projects/robvanderleek/adventofcode/2025/09/input.txt");
    unsigned long long result = 0;
    for (int i = 0; i < points.size(); ++i) {
        for (int j = i + 1; j < points.size(); ++j) {
            auto p1 = points[i];
            auto p2 = points[j];
            long length = (p1.x > p2.x ? p1.x - p2.x : p2.x - p1.x) + 1;
            long height = (p1.y > p2.y ? p1.y - p2.y : p2.y - p1.y) + 1;
            long area = length * height;
            if (area > result) {
                result = area;
            }
        }
    }
    std::cout << result << std::endl;
    assert(result == 4729332959);
}
// Checks if the axis-aligned segment A-B crosses the axis-aligned segment C-D
// (one must be horizontal and the other vertical for them to intersect).
bool intersect(Point pA, Point pB, Point pC, Point pD) {
    auto line1Horizontal = (pA.y == pB.y);
    auto line2Horizontal = (pC.y == pD.y);
    if ((line1Horizontal && line2Horizontal) || (!line1Horizontal && !line2Horizontal)) {
        return false;
    }
    if (line1Horizontal) {
        return pC.x >= (pA.x < pB.x ? pA.x : pB.x) && pC.x <= (pA.x > pB.x ? pA.x : pB.x) &&
               pA.y > (pC.y < pD.y ? pC.y : pD.y) && pA.y < (pC.y > pD.y ? pC.y : pD.y);
    }
    return pA.x >= (pC.x < pD.x ? pC.x : pD.x) && pA.x <= (pC.x > pD.x ? pC.x : pD.x) &&
           pC.y > (pA.y < pB.y ? pA.y : pB.y) && pC.y < (pA.y > pB.y ? pA.y : pB.y);
}

// Checks if any edge of the rectangle with opposite corners p1/p2 (and the other
// two corners p3/p4) crosses an edge of the polygon formed by the input points.
bool rectIntersects(const Point p1, const Point p2, const Point p3, const Point p4, const std::vector<Point> &points) {
    for (int k = 0; k < points.size(); ++k) {
        auto p = points[k];
        auto q = (k == points.size() - 1 ? points[0] : points[k + 1]);
        if (intersect(p1, p3, p, q) || intersect(p3, p2, p, q) || intersect(p2, p4, p, q) || intersect(p4, p1, p, q)) {
            return true;
        }
    }
    return false;
}
void partTwo() {
    const auto points = loadInput("/Users/rob/projects/robvanderleek/adventofcode/2025/09/input.txt");
    unsigned long long result = 0;
    for (int i = 0; i < points.size(); ++i) {
        for (int j = i + 1; j < points.size(); ++j) {
            auto p1 = points[i];
            auto p2 = points[j];
            long length = (p1.x > p2.x ? p1.x - p2.x : p2.x - p1.x) + 1;
            long height = (p1.y > p2.y ? p1.y - p2.y : p2.y - p1.y) + 1;
            long area = length * height;
            if (area > result) {
                auto p3 = Point{p1.x, p2.y};
                auto p4 = Point{p2.x, p1.y};
                if (rectIntersects(p1, p2, p3, p4, points))
                    continue;
                result = area;
            }
        }
    }
    std::cout << result << std::endl;
    assert(result == 24);
}

int main() {
    partOne();
    partTwo();
    return 0;
}
That's it! See you again tomorrow!
2025-12-10 06:27:35
This is a submission for the Mux challenge: https://dev.to/challenges/mux
My Senior Dev is a web app that provides deep, multi-perspective code reviews using specialized AI agents. It optimizes for efficiency, thoroughness, and insight.
It uses three specialized AI agents working in concert:
Instead of surface-level suggestions like "Consider using const instead of let on line 42", you get the kind of thorough, multi-angle code review that senior developers provide:
"# Security: This endpoint lacks rate limiting (DoS risk)
# Code Quality: Database transaction isn't properly scoped (race conditions)
# Architecture: Violates repository pattern from auth-service.ts
# Recommendation: Add rate limiting, wrap in transaction, move to UserRepository"
My Pitch Video
Demo
As a senior developer, I spent hours on thorough reviews for complex
PRs - checking security, architecture, performance, and
maintainability. While AI tools were getting good at surface
suggestions, none provided the multi-perspective, comprehensive
analysis that complex changes actually need.
My Senior Dev fills this gap. When you have a critical database
migration, security-sensitive auth flow, or complex algorithm, you want
more than "use const instead of let." You want to be able to chat with an AI persona
on a file by file, line by line basis.
Multi-Agent Architecture: Specialized agents with specific expertise
rather than one general-purpose AI.
Configurable Depth: Teams can tune intensity - staff_engineer for deep
architectural review, junior_engineer for standard checks,
product_manager for business impact.
GitHub-Native: Seamless webhook integration, bi-directional comment
sync, three feedback types (line-specific, file-level, PR-wide).
Power User Features: Chat interface with agents, historical trend
analysis, custom per-repo instructions, selective triggering for
critical PRs only.
Tech Stack:
What Makes It Unique
It is intended to work alongside other AI review tools, but it is optimized for the human doing the review, making it fast and even fun to review code. With the volume of AI-generated code being produced, engineers will spend more and more of their time reviewing it. The tools for reviewing code should be optimized so a human can understand the change, with AI used in a complementary way to assist that understanding.
In addition, the interface should make it dead simple and fast to go through the files of a pull request.
The future isn't replacing human judgment - it's augmenting
expert-level analysis and making senior engineer insights consistently
available for every critical code change.
2025-12-10 06:24:13
We've all been there. Your application crashes in production with a cryptic error, and after hours of debugging, you discover it's because someone forgot to set an environment variable. Or worse, they set PORT=three-thousand instead of PORT=3000.
Today, I'm excited to introduce EnvGuard - a zero-dependency, type-safe environment variable validator for Node.js that catches these issues at startup, not at 3 AM.
Environment variables are the standard way to configure applications, but they come with challenges:
// The old way - error-prone and repetitive
const port = process.env.PORT ? parseInt(process.env.PORT) : 3000;
const apiKey = process.env.API_KEY; // undefined? empty string? who knows!
const debug = process.env.DEBUG === 'true'; // what about 'yes', '1', 'on'?
This approach has several problems: the parsing is manual and repetitive, nothing validates the values, undefined and empty strings slip through silently, boolean handling is ambiguous, and mistakes only surface at runtime instead of at startup.
EnvGuard solves all of these problems with a clean, declarative API:
import { cleanEnv, str, num, bool, url } from '@opensourcesforge/envguard';
const env = cleanEnv({
  PORT: num({ default: 3000 }),
  DATABASE_URL: url(),
  API_KEY: str({ secret: true }),
  DEBUG: bool({ default: false }),
});

// Full TypeScript inference!
console.log(env.PORT); // number
console.log(env.DATABASE_URL); // string (validated URL)
console.log(env.API_KEY); // string (masked in errors)
console.log(env.DEBUG); // boolean

// Built-in environment helpers
if (env.isProduction) {
  enableCaching();
}
I built EnvGuard because existing solutions were missing features I needed. Here's what sets EnvGuard apart:
EnvGuard has no runtime dependencies. Your node_modules stays lean, and you don't inherit security vulnerabilities from transitive dependencies.
When validation fails, sensitive values are automatically masked:
const env = cleanEnv({
  API_KEY: str({ secret: true }),
});
// If API_KEY is invalid, error shows:
// ERROR: API_KEY - Invalid value (received: "sk_l****_xxx")
No more accidentally exposing secrets in CI logs!
Unlike other libraries that only have devDefault, EnvGuard supports separate defaults for test environments:
const env = cleanEnv({
  DATABASE_URL: url({
    testDefault: 'postgres://localhost/test_db', // NODE_ENV=test
    devDefault: 'postgres://localhost/dev_db', // NODE_ENV=development
    // Required in production
  }),
});
Not every missing variable should crash your app. Use warnOnly for optional features:
const env = cleanEnv({
  // Critical - will fail if missing
  DATABASE_URL: url(),

  // Optional - logs warning but continues
  ANALYTICS_ID: str({
    warnOnly: true,
    desc: 'Google Analytics ID (optional)',
  }),
});
Catch typos before they cause problems:
const env = cleanEnv(
  { PORT: num() },
  { warnOnExtra: true }
);
// If PROT=3000 is set (typo), you'll see:
// WARNING: PROT - Unknown environment variable
Sometimes a variable is only required based on other configuration:
const env = cleanEnv({
  USE_SMTP: bool({ default: false }),
  SMTP_HOST: str({
    requiredWhen: (env) => env.USE_SMTP === true,
  }),
  SMTP_PASSWORD: str({
    requiredWhen: (env) => env.USE_SMTP === true,
    secret: true,
  }),
});
EnvGuard includes validators you won't find elsewhere:
import {
  str, num, bool, // Basics
  url, email, host, port, // Network
  json, array, uuid, // Data
  duration, bytes, // Special
  enums, regex, // Validation
  makeValidator, // Custom
} from '@opensourcesforge/envguard';

const env = cleanEnv({
  // Parse duration strings
  CACHE_TTL: duration({ default: 300000 }), // '5m' -> 300000ms

  // Parse byte sizes
  MAX_UPLOAD: bytes({ unit: 'MB' }), // '50MB' -> 50

  // Comma-separated arrays
  ALLOWED_ORIGINS: array({ default: ['localhost'] }),

  // Type-safe enums
  LOG_LEVEL: enums({
    values: ['debug', 'info', 'warn', 'error'] as const,
    default: 'info',
  }),
});
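The import list also mentions makeValidator for custom validators, which the post doesn't show. Here is only a rough sketch of how such a factory is typically used, assuming it takes a function that parses the raw string and throws on invalid input; check the EnvGuard docs for the actual signature:

// Hypothetical custom validator built with makeValidator (signature assumed)
const hexColor = makeValidator((raw: string) => {
  if (!/^#[0-9a-fA-F]{6}$/.test(raw)) {
    throw new Error(`Expected a hex color like #ff8800, got "${raw}"`);
  }
  return raw;
});

const themeEnv = cleanEnv({
  BRAND_COLOR: hexColor({ default: '#336699' }),
});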
Document your environment variables automatically:
import { writeEnvExample, str, num, url } from '@opensourcesforge/envguard';
const spec = {
  PORT: num({ default: 3000, desc: 'Server port' }),
  DATABASE_URL: url({ desc: 'PostgreSQL connection URL' }),
  API_KEY: str({ desc: 'API authentication key', secret: true }),
};
writeEnvExample(spec);
Generates:
# Server port
PORT=3000
# PostgreSQL connection URL
DATABASE_URL=
# API authentication key
API_KEY=
Here's how I use EnvGuard in a typical Express application:
// src/env.ts
import { cleanEnv, str, num, bool, url, email, duration } from '@opensourcesforge/envguard';
export const env = cleanEnv({
  // Server
  PORT: num({ default: 3000, desc: 'HTTP server port' }),
  HOST: str({ default: '0.0.0.0' }),

  // Database
  DATABASE_URL: url({ secret: true }),
  DB_POOL_SIZE: num({ default: 10 }),

  // Redis
  REDIS_URL: url({ devDefault: 'redis://localhost:6379' }),
  REDIS_TTL: duration({ default: 3600000 }), // 1 hour

  // Auth
  JWT_SECRET: str({ secret: true }),
  JWT_EXPIRES_IN: str({ default: '7d' }),

  // Email (optional)
  SMTP_HOST: str({ warnOnly: true }),
  SMTP_PORT: num({ default: 587 }),
  FROM_EMAIL: email({ default: '[email protected]' }),

  // Features
  ENABLE_SWAGGER: bool({ default: true }),
  LOG_LEVEL: str({
    choices: ['debug', 'info', 'warn', 'error'],
    default: 'info',
  }),
});
// src/app.ts
import express from 'express';
import { env } from './env';
const app = express();

if (env.ENABLE_SWAGGER && !env.isProduction) {
  // Setup Swagger
}

app.listen(env.PORT, env.HOST, () => {
  console.log(`Server running on ${env.HOST}:${env.PORT}`);
});
Install EnvGuard:
npm install @opensourcesforge/envguard
Create your environment configuration:
// src/env.ts
import { cleanEnv, str, num, bool } from '@opensourcesforge/envguard';
export const env = cleanEnv({
  NODE_ENV: str({ choices: ['development', 'test', 'production'] }),
  PORT: num({ default: 3000 }),
  DEBUG: bool({ default: false }),
});
That's it! Your environment variables are now validated at startup with full TypeScript support.
If you're using envalid, migration is straightforward:
- import { cleanEnv, str, num } from 'envalid';
+ import { cleanEnv, str, num } from '@opensourcesforge/envguard';

- const env = cleanEnv(process.env, {
+ const env = cleanEnv({
    PORT: num({ default: 3000 }),
    API_KEY: str(),
  });
Environment variable validation shouldn't be an afterthought. With EnvGuard, you get startup-time validation, full TypeScript inference, secret masking, conditional and test-specific defaults, and zero runtime dependencies.
Give it a try and let me know what you think! I'd love to hear your feedback and feature requests.
EnvGuard is open source under the MIT license. Contributions are welcome!