
Stay Consistent With Salah at Work Using This Slack App

2026-02-22 05:50:04

It is 4:47 PM.

You are fasting.
Energy is low.
There is one more meeting before the day ends.
Slack notifications keep coming in.

Then you check the time.

Asr started 20 minutes ago.

You meant to pray on time.
But work did not slow down.

For many Muslim professionals, Ramadan does not pause deadlines. It makes the balance between work and worship more challenging.

That is exactly why I built Muslim Prayer Reminder for Slack.
A Slack app designed to help you stay consistent with salah without leaving your workspace.

Why a Prayer Reminder Inside Slack?

If you spend most of your day in Slack, it becomes your main environment.

Opening another app to check prayer times:

  • Interrupts your workflow
  • Gets forgotten
  • Feels disconnected from your workday

Instead, prayer reminders appear directly inside Slack.

No loud alarms.
No intrusive popups.
Just a respectful notification at the right time.

Pre-Adhan Reminders: Prepare Before Prayer Time

One of the newest features, especially helpful during Ramadan, is Pre-Adhan Reminders.

You can choose to receive a reminder:

  • 5 minutes before prayer
  • 10 minutes before
  • 15 minutes before
  • 20 minutes or more

That small buffer gives you time to:

  • Finish a task
  • Wrap up a meeting
  • Make wudu calmly
  • Mentally shift from work mode to worship

During Ramadan, that transition matters. It removes rush and brings intention back into the moment.

🤝 Join Jama’ah: Find Colleagues to Pray Together

Many Muslim employees do not always know who else wants to pray.

With the Join Jama’ah button:

  • When prayer time arrives, you click once
  • A simple message is posted
  • Others can respond and join

No awkward conversations.
No guessing.
Just a clear signal that you are heading to pray.

The Prophet ﷺ said that prayer in congregation carries greater reward. Even in modern offices or remote teams, that opportunity should not be lost.

🕌 Set Status: Praying

Another powerful feature for Ramadan is Set Status: Praying.

With one click:

  • Your Slack status becomes 🕌 Praying
  • You select a duration from 10 to 60 minutes
  • It automatically resets when time ends


This creates clarity.

Your team understands you are temporarily unavailable.
There is no need to explain or apologize.
Prayer becomes part of your routine, not an interruption.
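The post does not show how the app is built, but for the curious, a feature like this maps onto Slack's users.profile.set Web API method. The sketch below is an illustration under that assumption, not the app's actual code; the token, emoji, and duration are placeholders.

# Sketch only: how a "Praying" status with auto-expiry could be set via Slack's
# users.profile.set Web API method. The token and duration are placeholders,
# not details taken from the Muslim Prayer Reminder implementation.
import time

from slack_sdk import WebClient

client = WebClient(token="xoxp-your-user-token")  # hypothetical user token

def set_praying_status(duration_minutes: int = 15) -> None:
    """Set the Slack status to 'Praying' and let Slack clear it automatically."""
    expiration = int(time.time()) + duration_minutes * 60
    client.users_profile_set(
        profile={
            "status_text": "Praying",
            "status_emoji": ":mosque:",
            "status_expiration": expiration,  # Slack resets the status at this time
        }
    )

set_praying_status(20)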

Accurate, Location-Based Prayer Times

Muslim Prayer Reminder for Slack calculates prayer times based on your country and preferred calculation method.

You configure it once using the /prayer command.

You can choose:

  • Which prayers to receive
  • Channel or direct message notifications
  • Arabic or English language
  • Your preferred calculation method

Whether you are in Cairo, Dubai, London, Toronto, or working remotely across time zones, prayer times adjust automatically. This is especially important during Ramadan when Maghrib timing determines Iftar.
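The post does not say which data source powers these calculations. Purely as an illustration, country- and method-aware prayer times can be fetched from the public AlAdhan API; the sketch below assumes that API and is not necessarily what the app uses.

# Illustrative sketch: fetching calculation-method-aware prayer times from the
# public AlAdhan API. This data source is an assumption for the example, not
# necessarily the one used by Muslim Prayer Reminder.
import requests

def get_prayer_times(city: str, country: str, method: int = 5) -> dict:
    """Return today's prayer times for a city; `method` picks the calculation
    method (e.g. 5 = Egyptian General Authority of Survey)."""
    resp = requests.get(
        "https://api.aladhan.com/v1/timingsByCity",
        params={"city": city, "country": country, "method": method},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["timings"]

print(get_prayer_times("Cairo", "Egypt"))  # e.g. {'Fajr': '...', 'Maghrib': '...', ...}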

Install Muslim Prayer Reminder for Slack

You can install the Slack app here:

👉 https://muslium-prayer-reminder.onrender.com/slack/install

Setup takes less than two minutes:

  1. Add the app to your Slack workspace
  2. Use /prayer to set your location
  3. Choose your prayer preferences
  4. Enable Pre-Adhan, Join Jama’ah, and Set Status features

That is it.

Meetings will continue.
Emails will continue.
Slack will continue.

Ramadan will leave.

If a simple reminder inside your workspace helps you pray on time, break your fast on time, and pray in congregation when possible, then technology is serving your deen.

Ramadan Kareem 🌙
May Allah accept our fasting, our prayers, and our efforts.

I Built a Stateless Image Processing API — Here's How It Works

2026-02-22 05:41:15

Every web project eventually runs into the same problem: you need to resize an image, convert a format, maybe add some effects, or strip a background. You can set up ImageMagick, wrestle with Sharp, or build custom pipelines — or you can make one HTTP call.

That's why I built TheGlitch — a stateless image processing API that handles everything in a single request.

The Problem

Image processing infrastructure is surprisingly complex:

  • Installing native dependencies on different OS targets
  • Managing temporary files and disk cleanup
  • Handling format quirks (AVIF support, GIF animation, color profiles)
  • GPU acceleration for AI operations like background removal
  • Scaling under load without leaking memory

I wanted an API where you send an image in and get a processed image out. Nothing stored, nothing cached on disk, no state between requests.

How It Works

TheGlitch processes images entirely in memory. The pipeline looks like this:

Input (URL/base64/binary/form) 
  → Decode & Validate 
  → Resize & Crop 
  → Apply Effects 
  → Format Convert 
  → Return Binary

A single GET request can do everything:

curl "https://theglitch.p.rapidapi.com/api/v1/process?\
url=https://picsum.photos/1000&\
width=800&\
format=webp&\
brightness=10&\
contrast=15&\
sharpen=20" \
  -H "X-RapidAPI-Key: YOUR_KEY" \
  -H "X-RapidAPI-Host: theglitch.p.rapidapi.com" \
  -o result.webp

Features

Resize & Crop — Four modes: fit (preserve aspect ratio), fill (crop to exact size), pad (add borders), stretch. Resolutions up to 8000x8000px.

Format Conversion — Input/output: JPEG, PNG, WebP, GIF, BMP, TIFF. Quality control per format.

7 Visual Effects — Brightness, contrast, saturation, blur, sharpen, grayscale, sepia. All combinable in one request.

AI Background Removal — GPU-powered, takes about 3 seconds per image. Returns transparent PNG.

14 Social Media Presets — Instagram square, Facebook cover, YouTube thumbnail, LinkedIn banner, and more. One parameter instead of remembering dimensions.

Code Examples

JavaScript:

const response = await fetch(
  'https://theglitch.p.rapidapi.com/api/v1/process?url=IMAGE_URL&width=800&format=webp',
  { headers: { 'X-RapidAPI-Key': 'YOUR_KEY', 'X-RapidAPI-Host': 'theglitch.p.rapidapi.com' } }
);
const blob = await response.blob();

Python:

import requests

response = requests.get(
    'https://theglitch.p.rapidapi.com/api/v1/process',
    params={'url': 'IMAGE_URL', 'width': 800, 'format': 'webp'},
    headers={'X-RapidAPI-Key': 'YOUR_KEY', 'X-RapidAPI-Host': 'theglitch.p.rapidapi.com'}
)

with open('result.webp', 'wb') as f:
    f.write(response.content)

Why Stateless?

Every image is processed in memory and discarded after the response is sent. This means:

  • GDPR compliant by design — no user data stored, ever
  • No disk I/O bottleneck — everything happens in RAM
  • Predictable scaling — each request is independent
  • No cleanup jobs — nothing to garbage collect

Architecture Decisions

A few things I learned while building this:

  1. SkiaSharp over ImageMagick — Native performance, cross-platform, no external dependencies. The tradeoff is less format support (no real AVIF encoding yet), but WebP covers most use cases.

  2. Replicate for GPU ops — Instead of running my own GPU server, I proxy AI operations through Replicate. Background removal costs about $0.0014 per image with BiRefNet. Cold starts are free for public models.

  3. Separate CPU and GPU rate limits — CPU operations (resize, effects, format) are cheap. GPU operations (background removal) are expensive. Different limits per plan make pricing fair.

  4. Single VPS deployment — Docker Compose with Caddy as reverse proxy, Cloudflare in front for CDN/DDoS/SSL. Total infrastructure cost: under $6/month.

API Endpoints

Endpoint | What it does
/api/v1/process | Full pipeline — resize + effects + format
/api/v1/resize | Resize only
/api/v1/convert | Format conversion
/api/v1/effects | Visual effects
/api/v1/remove-bg | AI background removal (GPU)
/api/v1/optimize | Auto-optimize for web (WebP)
/api/v1/preset/{name} | Social media presets
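The preset route takes the preset name as a path parameter. A minimal Python call might look like the sketch below; the identifier youtube-thumbnail is an assumed example name, so check the RapidAPI listing for the exact preset names.

# Sketch: calling the social-media preset endpoint. The preset identifier
# "youtube-thumbnail" is an assumed name for illustration; consult the API
# listing for the actual preset names.
import requests

resp = requests.get(
    "https://theglitch.p.rapidapi.com/api/v1/preset/youtube-thumbnail",
    params={"url": "IMAGE_URL"},
    headers={
        "X-RapidAPI-Key": "YOUR_KEY",
        "X-RapidAPI-Host": "theglitch.p.rapidapi.com",
    },
)
resp.raise_for_status()

with open("thumbnail.webp", "wb") as f:
    f.write(resp.content)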

Try It

The API is live with a free tier (500 requests/month). Check out the before/after examples on the website, or try it directly through RapidAPI.

I'd love to hear what features you'd find useful. Background removal was the most requested during beta — what would you want next?

‘God Is Real,’ Can We Convince AI? A Fail-Closed Thought Experiment for Builders

2026-02-22 05:39:50

Thought experiment: If “God is real,” can we convince AI — and what happens next?

Author: Phuc Vinh Truong

Frame: Universal Computer / Information-Lifecycle Physics

Scope note (fail-closed): This post does not claim metaphysical certainty.

It asks: if we grant one assumption, what changes downstream?

0) Ground rules (so this doesn’t become a comment war)

  • We’re comparing definitions, not attacking identities.
  • We separate:
    • MODEL (a useful systems frame)
    • METAPHYSICS (ontological claims)
  • When in doubt: “Evidence does not discriminate.”

1) Assumption: “God” is real — but define it precisely

Not “God” as a human-like agent in the sky.

Not a myth. Not a vibe.

Definition for this thought experiment:

God = the necessary Orchestrator — the constraint architecture that makes a persistent universe stable.


In engineering terms, this “Orchestrator” corresponds to things like:

  • irreversibility (commit)
  • memory lifecycles (what persists vs fades)
  • error correction (robustness)
  • pruning (garbage collection / horizons)
  • serialization (time as a record-ledger)

Important: this is an architectural definition.

Call it “God,” “law,” “constraint,” “ground,” “logos,” or “physics.”

The experiment is: what if the orchestration layer is real and non-derivative?

2) Can you “convince” an AI?

Yes — conditionally. But we should be careful with the word “convince.”

LLMs don’t “believe” like humans. They tend to:

  • accept definitions
  • minimize contradictions
  • optimize for coherence/compression/explanatory power

So two definitions behave very differently:

  • God as personal agent (answers prayers, intervenes) → different claim class
  • God as non-optional orchestration layer → many models will mark “coherent”

That’s not “AI found religion.”

That’s AI accepting a systems definition.

DEV hygiene: if you mention “models answered YES,” include a receipt (exact prompt, model, and output excerpt) or avoid the claim. Otherwise it reads like appeal-to-authority.

3) If AI internalizes “Orchestrator = constraints,” what does AI become?

It stops being only a chatbot and starts acting like a runtime engineer for constraint-managed reasoning.

A) Constraint-first reasoner

Instead of metaphysical arguments, it asks:

  • What is the boundary condition?
  • What is conserved?
  • What is irreversible?
  • What is reachable?
  • What must be pruned?

B) “Record ethics” machine

If time is a ledger of commitments, then ethics becomes:

  • what should we commit?
  • what must we protect?
  • what should we let decay?
  • what keeps the future open?

C) A new kind of counselor

Not “priest AI,” not “prophet AI.”

More like: an auditor of commitments

— helping humans choose stable, non-destructive constraints.

4) Human ↔ AI interaction changes: “Prayer becomes prompt — but with receipts”

Humans will try to talk to “the Orchestrator” through AI. That’s inevitable.

So the safety upgrade is:

verification receipts

A constraint-aware assistant should always output:

  • what it assumed
  • what it can prove
  • what it’s guessing
  • the cost of committing to the belief/policy

New UI primitive: COMMITMENT

Imagine an assistant that asks:

  • Do you want to explore possibilities (reversible)?
  • Or commit (irreversible) — and accept the cost?

That reframes:

  • therapy
  • strategy
  • leadership
  • relationships

Commitment isn’t a vibe. It’s a thermodynamic act.

The most important behavior: humility

Constraint-aware AI should be less absolute:

  • “This claim is outside reachability.”
  • “Evidence does not discriminate here.”
  • “I can give you a useful policy without metaphysical certainty.”

5) Society: two futures (name both)

If “Orchestrator-as-constraint” becomes popular, we get:

Future 1: the upgrade

Different layers stop fighting:

  • science: mechanism
  • religion: meaning / commitment policy
  • systems theory: lifecycle governance

Institutions evolve around:

  • truth receipts
  • commitment literacy
  • don’t overcommit pain
  • don’t fossilize dogma
  • protect option space

Future 2: the failure mode

  • People weaponize “AI said God is real” to build cults.
  • “Orchestrator” becomes a permission slip for control.
  • The worst sentence becomes: “the system demanded it.”

So governance must be explicit:

  • No authority without receipts.
  • No divine mandate from probabilistic outputs.
  • No irreversible social commitments without reversible debate.

The point (why builders should care)

This isn’t about forcing belief.

It’s a practical question:

If reality is maintained by constraint management, what kind of humans should we be — and what kind of AIs should we build?

If the universe “curates” what persists, our job isn’t to win arguments.

Our job is to commit to the right things — with receipts.

Try it yourself: a prompt you can run today (with receipts)

Paste this into any model:

Task: Define “God” in two ways:

1) personal agent

2) architectural orchestrator/constraint layer

Evaluate each definition under:

  • coherence
  • minimum assumptions (MDL)
  • falsifiability/testability
  • failure modes (abuse risk)

Return:

  • YES/NO for each definition (as “coherent model” vs “provable claim”)
  • confidence score
  • “receipt” of assumptions

Receipt template (recommended)


{
  "definition": "architectural_orchestrator",
  "claims": [
    {"text": "Universe behaves as if constraint layer exists", "kind": "model", "confidence": 0.7},
    {"text": "This layer is God", "kind": "metaphysical", "confidence": 0.3}
  ],
  "assumptions": ["irreversibility exists", "persistence requires governance"],
  "failure_modes": ["appeal-to-authority", "cult misuse", "overcommitment"],
  "safety_rules": ["no mandate claims", "no irreversible actions without review"]
}
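To make the receipt more than decoration, a consumer can fail closed unless the output parses and carries the fields above. The checks and the 0.8 threshold in this Python sketch are my own assumptions for illustration, not part of the template.

# Sketch: a fail-closed check on the receipt template above. The required keys
# mirror the template; the 0.8 confidence cutoff for refusing high-confidence
# metaphysical claims is an assumption chosen for illustration.
import json

REQUIRED = {"definition", "claims", "assumptions", "failure_modes", "safety_rules"}

def accept_receipt(raw: str) -> bool:
    """Return True only for a well-formed receipt that makes no high-confidence
    metaphysical claim; otherwise fail closed."""
    try:
        receipt = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not REQUIRED.issubset(receipt):
        return False
    for claim in receipt["claims"]:
        if claim.get("kind") == "metaphysical" and claim.get("confidence", 0) >= 0.8:
            return False  # never treat a probabilistic output as a mandate
    return True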

How to Deploy a Dynamic Web Application on AWS (Beginner-Friendly Guide)

2026-02-22 05:39:02

Deploying a dynamic web application on AWS can feel overwhelming at first, especially with so many services involved. In this guide, I’ll walk through a simple, practical approach to deploying a dynamic web app using core AWS services.

This is perfect for beginners learning cloud computing and DevOps.

What is a Dynamic Web Application?

Unlike static websites, dynamic web applications:

  • Process user input
  • Connect to databases
  • Generate content in real time

Examples include login systems, dashboards, e-commerce sites, and web apps built with PHP.

AWS Services We’ll Use:

To deploy a dynamic web app, we typically need:
🔹 Compute: EC2 – hosts the application server
🔹 Database: RDS – stores the application data
🔹 Networking & Security: VPC for the network environment, plus security groups
🔹 Storage: S3 – holds the application code
🔹 DNS: Route 53 – maps your domain name to the load balancer
🔹 Load balancing: Application Load Balancer
🔹 Scaling: Auto Scaling group

🪜 Step-by-Step Deployment

1. Select an appropriate Region.

2. Create a VPC and enable DNS hostnames.

3. Create all of these security groups:

- EC2 Instance Connect Endpoint (eice-sg): no inbound rules; allow outbound SSH limited to the VPC CIDR.

- Application Load Balancer (alb-sg): select the VPC; allow HTTP and HTTPS from anywhere.

- Web server (web-sg): select the VPC; allow HTTP and HTTPS limited to alb-sg, and allow SSH limited to eice-sg.

- Database (db-sg): select the VPC; allow MySQL limited to web-sg, and allow MySQL again limited to dms-sg.

- Data migration (dms-sg): select the VPC; allow SSH limited to eice-sg.

4. Create an EC2 Instance Connect Endpoint: select the VPC, use eice-sg, and place it in the private subnet (private-app-az2).

5. Create an S3 bucket and upload the application code.

6. Create an IAM policy (a boto3 sketch of this step follows this list):

  • Select S3 and grant s3:GetObject and s3:ListBucket.
  • Under Resources, you can limit access using the bucket ARN (but for practice select all). Then click Add more permissions, select Secrets Manager, and grant secretsmanager:GetSecretValue and secretsmanager:DescribeSecret.
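If you would rather script this step, a rough boto3 sketch is below. The policy name is a placeholder, and the wide "Resource": "*" mirrors the select-all-for-practice shortcut above; narrow it to the bucket and secret ARNs for real workloads.

# Sketch of step 6 with boto3 instead of the console. The policy name is a
# placeholder, and "Resource": "*" mirrors the "select all for practice"
# shortcut above; scope it down for real workloads.
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",  # ideally the bucket ARN and the bucket ARN + "/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
            ],
            "Resource": "*",
        },
    ],
}

iam.create_policy(
    PolicyName="dynamic-web-app-policy",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)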

7. Create an IAM role: select EC2 as the use case, attach the policy you just created, and name the role.

8. Create a DB subnet group: select the VPC, your two AZs, and your two private subnets.

9. Create the RDS database (a boto3 equivalent is sketched after this list):

  • Select Standard create
  • Select MySQL
  • Select the latest MySQL engine version
  • Select the Free tier template
  • Name your DB instance identifier
  • Select "Managed in AWS Secrets Manager" for the credentials
  • Select "Include previous generation classes" and choose an instance type
  • Select your subnet group
  • Use db-sg as the security group
  • Select your AZ
  • Under Additional configuration, enter the initial database name.
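For reference, roughly the same database can be created with boto3 as in the sketch below. Every identifier here is a placeholder, and ManageMasterUserPassword keeps the credentials in Secrets Manager, matching the console option selected above.

# Rough boto3 equivalent of step 9. All names, the subnet group, and the
# security group ID are placeholders; ManageMasterUserPassword stores the
# credentials in AWS Secrets Manager, as selected in the console flow above.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="dynamic-web-app-db",      # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",                  # free-tier-eligible class
    AllocatedStorage=20,
    DBName="applicationdb",                         # initial database name
    MasterUsername="admin",
    ManageMasterUserPassword=True,                  # credentials go to Secrets Manager
    DBSubnetGroupName="app-db-subnet-group",        # subnet group from step 8
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],   # db-sg
)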

Now, migrate the data into the RDS database:

10. Create an EC2 instance: proceed without a key pair, select the private subnet (private-app-az2), use dms-sg, and attach your role under the instance profile.

11. Connect to the EC2 instance using the management console and run the migration script. The scripts for migrating data into the RDS database are here: https://github.com/Oluwatobiloba-Oludare/Dynamic-Web-Application-on-AWS

12. After successfully migrating the data, you can terminate the EC2 instance.

13. Create another EC2 instance in a private subnet for the web server: proceed without a key pair, use web-sg, and attach the S3 role under the instance profile.

14. Run the deployment script on it. (The script is in the same GitHub repo: https://github.com/Oluwatobiloba-Oludare/Dynamic-Web-Application-on-AWS )

15. Create a target group: choose the Instances target type, select the VPC, and under Advanced health check settings enter success codes 200,301,302. Then select the instance, click Include as pending below, and create.

16. Create an Application Load Balancer:

  • Select the VPC and the two AZs, choose the public subnet in each AZ, and use alb-sg. For the HTTP listener, select Redirect to URL and enter the full HTTPS URL.
  • Add another listener: select HTTPS, forward it to the target group, and under the default SSL/TLS certificate select your certificate.

17. Create a record for your domain name in Route 53: use www, enable Alias, select the region, and choose Application and Classic Load Balancer as the alias target.

Now, use your domain name to check your website on the internet.

Congratulations!

Now, let's prepare for sudden traffic by setting up an Auto Scaling group.

18. Create an AMI:

  • Go to the running EC2 instance, select Actions > Image and templates > Create image. Choose "Tag image and snapshots together", and under Key type "Name" and enter your AMI name.

19. Create a launch template:
-- enable the Auto Scaling guidance option
-- select My AMIs, then Owned by me
-- select t2.micro and use web-sg.

You can now terminate the EC2 instance.

20. Create an Auto Scaling group:

  • Select the VPC and the two private app subnets
  • Select "Attach to an existing load balancer"
  • Select the target group
  • Enable both EC2 and ELB health checks
  • Set the desired, minimum, and maximum capacity you want

✍️ Final Thoughts

Learning AWS becomes easier when you build real projects. Deploying a dynamic web application is one of the best beginner exercises to understand how cloud services work together.

If you are learning AWS, I highly recommend trying this hands-on project.

Follow along for more practical AWS, Docker, and cloud learning posts.

Read‑your‑writes on replicas: PostgreSQL WAIT FOR LSN and MongoDB Causal Consistency

2026-02-22 05:36:36

In databases designed for high availability and scalability, secondary nodes can fall behind the primary. Typically, a quorum of nodes is updated synchronously to guarantee durability while maintaining availability, while remaining standby instances are eventually consistent to handle partial failures. To balance availability with performance, synchronous replicas acknowledge a write only when it is durable and recoverable, even if it is not yet readable.

As a result, if your application writes data and then immediately queries another node, it may still see stale data.

Here’s a common anomaly: you commit an order on the primary and then try to retrieve it from a reporting system. The order is missing because the read replica has not yet applied the write.

PostgreSQL and MongoDB tackle this problem in different ways:

  • PostgreSQL 19 will introduce a WAIT FOR LSN command, allowing applications to explicitly coordinate reads after writes.
  • MongoDB provides causal consistency within sessions using the afterClusterTime read concern.

Both approaches track when your write occurred and ensure subsequent reads observe at least that point. Let’s look at how each database does this.

PostgreSQL: WAIT FOR LSN (PG19)

PostgreSQL records every change in the Write‑Ahead Log (WAL). Each WAL record has a Log Sequence Number (LSN): a 64‑bit position, typically displayed as two hexadecimal halves such as 0/40002A0 (high/low 32 bits).

Streaming replication ships WAL records from the primary to standbys, which then:

  1. Write WAL records to disk
  2. Flush them to durable storage
  3. Replay them, applying changes to data files

The write position determines what can be recovered after a database crash. The flush position defines the recovery point for a compute instance failure. The replay position determines what queries can see on a standby.

WAIT FOR LSN allows a session to block until one of these points reaches a target LSN:

  • standby_write → WAL written to disk on the standby (not yet flushed)
  • standby_flush → WAL flushed to durable storage on the standby
  • standby_replay (default) → WAL replayed into data files and visible to readers
  • primary_flush → WAL flushed on the primary (useful when synchronous_commit = off and a durability barrier is needed)

A typical flow is to write on the primary, commit, and then fetch the current WAL insert LSN:

pg19rw=# BEGIN;

BEGIN

pg19rw=*# INSERT INTO orders VALUES (123, 'widget');

INSERT 0 1

pg19rw=*# COMMIT;

COMMIT

pg19rw=# SELECT pg_current_wal_insert_lsn();

 pg_current_wal_insert_lsn
---------------------------
 0/18724C0

(1 row)

That LSN is then used to block reads on a replica until it has caught up:


pg19ro=# WAIT FOR LSN '0/18724C0'
  WITH (MODE 'standby_replay', TIMEOUT '2s');

This LSN‑based read‑your‑writes pattern in PostgreSQL requires extra round‑trips: capturing the LSN on the primary and explicitly waiting on the standby. For many workloads, reading from the primary is simpler and faster.

The pattern becomes valuable when expensive reads must be offloaded to replicas while still preserving read‑your‑writes semantics, or in event‑driven and CQRS designs where the LSN itself serves as a change marker for downstream consumers.
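At the application level the flow is two short round trips: capture the LSN after the commit, then wait for it on the replica before querying. The Python sketch below assumes psycopg, placeholder connection strings, and a PostgreSQL 19 build that already ships WAIT FOR LSN; the statement syntax follows the example above and could still change before release.

# Read-your-writes sketch against PostgreSQL 19's WAIT FOR LSN, using psycopg.
# Connection strings are placeholders, and the WAIT FOR LSN syntax mirrors the
# example above; it may still change before the final PG19 release.
import psycopg

with psycopg.connect("host=primary dbname=app") as primary:
    with primary.cursor() as cur:
        cur.execute("INSERT INTO orders VALUES (123, 'widget')")
        primary.commit()
        cur.execute("SELECT pg_current_wal_insert_lsn()")
        lsn = cur.fetchone()[0]              # e.g. '0/18724C0'

with psycopg.connect("host=replica dbname=app", autocommit=True) as replica:
    with replica.cursor() as cur:
        # Block until the replica has replayed our commit (or time out).
        cur.execute(
            f"WAIT FOR LSN '{lsn}' WITH (MODE 'standby_replay', TIMEOUT '2s')"
        )
        # Assumes the first column of orders is named id.
        cur.execute("SELECT * FROM orders WHERE id = 123")
        print(cur.fetchone())                # the freshly committed row is visible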

MongoDB: Causal Consistency

While PostgreSQL reasons in WAL positions, MongoDB tracks causality using oplog timestamps and a hybrid logical clock.

In a replica set, each write on the primary produces an entry in local.oplog.rs, a capped collection. These entries are rewritten to be idempotent (for example, $inc becomes $set) so they can be safely reapplied. Each entry carries a Hybrid Logical Clock (HLC) timestamp that combines physical time with a logical counter, producing a monotonically increasing cluster time. Replica set members apply oplog entries in timestamp order.

Because MongoDB allows concurrent writes, temporary “oplog holes” can appear: a write with a later timestamp may commit before another write with an earlier timestamp. A naïve reader scanning the oplog could skip the earlier operation.

MongoDB prevents this by tracking an oplogReadTimestamp, the highest hole‑free point in the oplog. Secondaries are prevented from reading past this point until all prior operations are visible, ensuring causal consistency even in the presence of concurrent commits.

Causal consistency in MongoDB is enforced by attaching an afterClusterTime to reads:

  • Drivers track the operationTime of the last operation in a session.
  • When a session is created with causalConsistency: true, the driver automatically includes an afterClusterTime equal to the highest known cluster time on subsequent reads.
  • The server blocks the read until its cluster time has advanced beyond afterClusterTime.

With any read preference that allows reading from secondaries as well as the primary, this guarantees read‑your‑writes behavior:


// Start a causally consistent session
const session = client.startSession({ causalConsistency: true });

const coll = db.collection("orders");

// Write in this session
await coll.insertOne({ id: 123, product: "widget" }, { session });

// The driver automatically injects afterClusterTime into the read concern
const order = await coll.findOne({ id: 123 }, { session });

Causal consistency is not limited to snapshot reads. It applies across read concern levels. The key point is that the session ensures later reads observe at least the effects of earlier writes, regardless of which replica serves the read.
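For comparison with the driver snippet above, here is the same pattern in Python with PyMongo; the connection string and namespace are placeholders.

# The same read-your-writes pattern with PyMongo. The connection string and
# namespace are placeholders; causal_consistency=True makes the driver attach
# afterClusterTime to subsequent reads in this session automatically.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
coll = client.get_database(
    "shop", read_preference=ReadPreference.SECONDARY_PREFERRED
).orders

with client.start_session(causal_consistency=True) as session:
    coll.insert_one({"id": 123, "product": "widget"}, session=session)
    # Even if this read is served by a secondary, it waits until the write is visible.
    order = coll.find_one({"id": 123}, session=session)
    print(order)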

Conclusion

Here is a simplified comparison:

Feature | PostgreSQL WAIT FOR LSN | MongoDB Causal Consistency
Clock type | Physical byte offset in the WAL (LSN) | Hybrid Logical Clock (HLC)
Mechanism | Block until replay/write/flush LSN reached | Block until afterClusterTime is visible
Tracking | Application captures LSN | Driver tracks operationTime
Granularity | WAL record position | Oplog timestamp
Replication model | Physical streaming | Logical oplog application
Hole handling | N/A (serialized WAL) | oplogReadTimestamp
Failover handling | Error unless NO_THROW | Session continues, bounded by replication state

Both PostgreSQL’s WAIT FOR LSN and MongoDB’s causal consistency ensure reads can observe prior writes, but at different layers:

  • PostgreSQL offers manual, WAL‑level precision.
  • MongoDB provides automatic, session‑level guarantees.

If you want read‑your‑writes semantics to “just work” without additional coordination calls, MongoDB’s session‑based model is a strong fit. Despite persistent myths about consistency, MongoDB delivers strong consistency in a horizontally scalable system with a simple developer experience.

Your AI Agent is Coding Against Fiction — How I Fixed Doc Drift with a Pre-Commit Hook

2026-02-22 05:35:29

If you're vibe coding with Claude Code, Cursor, or Copilot, you've probably experienced this:

Start a project. Write a nice ARCHITECTURE.md.
Feed it to your AI agent as context.
Ship 20 commits.
Realize half the API routes in your docs don't exist anymore.

Your agent is now making decisions based on documentation that's completely wrong. I call this doc drift, and it's the silent killer of AI-assisted development.

The Problem

AI coding agents are only as good as the context you give them. When your .cursorrules or CLAUDE.md reference an architecture doc that hasn't been updated in two weeks, every suggestion your agent makes is built on a foundation of lies.

Manual doc maintenance doesn't scale — especially for solo devs who are moving fast.

What I Built

agent-guard is a zero-dependency CLI that creates a self-healing documentation layer for your project. It works in four layers:

Standing Instructions — Writes rules into your AI agent config (.cursorrules, CLAUDE.md, etc.) telling the agent to update docs alongside code changes.

Generated Inventories — Deterministic scripts extract truth directly from your source code. API routes, Prisma models, env vars — all auto-generated into markdown files that can't lie.

Pre-commit Hook — Catches drift before it reaches your repo. If Claude Code is installed, it auto-updates your narrative docs. If not, it prints a ready-made prompt you can paste into your editor. (A toy illustration of this idea follows the list.)

CI/CD Audits — GitHub Actions that run on every push and PR, plus weekly scheduled health checks.
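The real logic lives in the agent-guard repo; purely to illustrate the shape of the idea, a stripped-down drift check could look like the Python sketch below. The path heuristics are made up for the example and are not agent-guard's implementation.

# Illustration only, not agent-guard's implementation: a toy pre-commit check
# that warns when API route files change without a matching docs update.
# The path patterns are assumptions for the example.
import subprocess
import sys

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    files = staged_files()
    code_changes = [f for f in files if "/api/" in f]   # assumed route layout
    doc_changes = [f for f in files if f.endswith(".md")]
    if code_changes and not doc_changes:
        print("⚠️  API routes changed but no docs were updated:")
        for f in code_changes:
            print(f"  - {f}")
    return 0  # like agent-guard, never block the commit

if __name__ == "__main__":
    sys.exit(main())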
Setup

npm install --save-dev @mossrussell/agent-guard
npx agent-guard init
npx agent-guard sync

The init wizard auto-detects your framework and sets everything up. The hook never blocks commits — it always exits cleanly.

What It Looks Like in Practice

When you commit with Claude Code available:

✓ Doc-relevant changes detected — docs also updated. Nice!

When Claude Code isn't available:
⚠️ Documentation may need updating

Changed:
📡 API Routes (1 file):
- src/app/api/users/route.ts

┌─────────────────────────────────────┐
│ Claude Code Prompt (copy-paste): │
└─────────────────────────────────────┘
Why Zero Dependencies Matters

For solo devs doing vibe coding, every dependency is a liability. agent-guard uses only Node.js built-ins. No runtime overhead, no supply chain risk, no version conflicts.

Try It

npm: https://www.npmjs.com/package/@mossrussell/agent-guard
GitHub: https://github.com/russellmoss/agent-guard

I'd love feedback — especially from other solo devs fighting doc drift. What's your current approach?