2026-02-22 05:50:04
It is 4:47 PM.
You are fasting.
Energy is low.
There is one more meeting before the day ends.
Slack notifications keep coming in.
Then you check the time.
Asr started 20 minutes ago.
You meant to pray on time.
But work did not slow down.
For many Muslim professionals, Ramadan does not pause deadlines. It makes the balance between work and worship more challenging.
That is exactly why I built Muslim Prayer Reminder for Slack: an app designed to help you stay consistent with salah without leaving your workspace.
If you spend most of your day in Slack, it becomes your main environment.
Opening another app to check prayer times:
- Interrupts your workflow
- Gets forgotten
- Feels disconnected from your workday
Instead, prayer reminders appear directly inside Slack.
No loud alarms.
No intrusive popups.
Just a respectful notification at the right time.
One of the newest features, especially helpful during Ramadan, is Pre-Adhan Reminders.
You can choose to receive a reminder a few minutes before the adhan.
That small buffer gives you time to wrap up what you are doing and make an unhurried transition to prayer.
During Ramadan, that transition matters. It removes rush and brings intention back into the moment.
Many Muslim employees do not always know who else wants to pray.
The Join Jama’ah button changes that:
No awkward conversations.
No guessing.
Just a clear signal that you are heading to pray.
The Prophet ﷺ said that prayer in congregation carries greater reward. Even in modern offices or remote teams, that opportunity should not be lost.
Another powerful feature for Ramadan is Set Status: Praying.
With one click, your Slack status shows that you are praying.
This creates clarity.
Your team understands you are temporarily unavailable.
There is no need to explain or apologize.
Prayer becomes part of your routine, not an interruption.
Accurate, Location-Based Prayer Times
Muslim Prayer Reminder for Slack calculates prayer times based on your country and preferred calculation method.
You configure it once using the /prayer command, choosing your country and preferred calculation method.
Whether you are in Cairo, Dubai, London, Toronto, or working remotely across time zones, prayer times adjust automatically. This is especially important during Ramadan when Maghrib timing determines Iftar.
You can install the Slack app here:
👉 https://muslium-prayer-reminder.onrender.com/slack/install
Setup takes less than two minutes.
That is it.
Meetings will continue.
Emails will continue.
Slack will continue.
Ramadan will pass.
If a simple reminder inside your workspace helps you pray on time, break your fast on time, and pray in congregation when possible, then technology is serving your deen.
Ramadan Kareem 🌙
May Allah accept our fasting, our prayers, and our efforts.
2026-02-22 05:41:15
Every web project eventually runs into the same problem: you need to resize an image, convert a format, maybe add some effects, or strip a background. You can set up ImageMagick, wrestle with Sharp, or build custom pipelines — or you can make one HTTP call.
That's why I built TheGlitch — a stateless image processing API that handles everything in a single request.
Image processing infrastructure is surprisingly complex: native libraries to install, format quirks to handle, and pipelines to maintain.
I wanted an API where you send an image in and get a processed image out. Nothing stored, nothing cached on disk, no state between requests.
TheGlitch processes images entirely in memory. The pipeline looks like this:
Input (URL/base64/binary/form)
→ Decode & Validate
→ Resize & Crop
→ Apply Effects
→ Format Convert
→ Return Binary
A single GET request can do everything:
curl "https://theglitch.p.rapidapi.com/api/v1/process?\
url=https://picsum.photos/1000&\
width=800&\
format=webp&\
brightness=10&\
contrast=15&\
sharpen=20" \
-H "X-RapidAPI-Key: YOUR_KEY" \
-H "X-RapidAPI-Host: theglitch.p.rapidapi.com" \
-o result.webp
Resize & Crop — Four modes: fit (preserve aspect ratio), fill (crop to exact size), pad (add borders), stretch. Resolutions up to 8000x8000px.
Format Conversion — Input/output: JPEG, PNG, WebP, GIF, BMP, TIFF. Quality control per format.
7 Visual Effects — Brightness, contrast, saturation, blur, sharpen, grayscale, sepia. All combinable in one request.
AI Background Removal — GPU-powered, takes about 3 seconds per image. Returns transparent PNG.
14 Social Media Presets — Instagram square, Facebook cover, YouTube thumbnail, LinkedIn banner, and more. One parameter instead of remembering dimensions.
JavaScript:
const response = await fetch(
'https://theglitch.p.rapidapi.com/api/v1/process?url=IMAGE_URL&width=800&format=webp',
{ headers: { 'X-RapidAPI-Key': 'YOUR_KEY', 'X-RapidAPI-Host': 'theglitch.p.rapidapi.com' } }
);
const blob = await response.blob();
Python:
import requests
response = requests.get(
'https://theglitch.p.rapidapi.com/api/v1/process',
params={'url': 'IMAGE_URL', 'width': 800, 'format': 'webp'},
headers={'X-RapidAPI-Key': 'YOUR_KEY', 'X-RapidAPI-Host': 'theglitch.p.rapidapi.com'}
)
with open('result.webp', 'wb') as f:
f.write(response.content)
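The preset endpoint from the feature list works the same way. Here is a sketch in Python; the preset slug `instagram-square` is my assumption, so check the docs for the actual preset names:

```python
import requests

# Hypothetical preset slug -- the endpoint /api/v1/preset/{name} is from
# the endpoint table below, but the exact slugs may differ.
resp = requests.get(
    "https://theglitch.p.rapidapi.com/api/v1/preset/instagram-square",
    params={"url": "https://picsum.photos/1000"},
    headers={
        "X-RapidAPI-Key": "YOUR_KEY",
        "X-RapidAPI-Host": "theglitch.p.rapidapi.com",
    },
)
resp.raise_for_status()
with open("instagram.jpg", "wb") as f:
    f.write(resp.content)
```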
Every image is processed in memory and discarded after the response is sent. This means nothing is written to disk, there is nothing to clean up, and no user images are ever stored.
A few things I learned while building this:
SkiaSharp over ImageMagick — Native performance, cross-platform, no external dependencies. The tradeoff is less format support (no real AVIF encoding yet), but WebP covers most use cases.
Replicate for GPU ops — Instead of running my own GPU server, I proxy AI operations through Replicate. Background removal costs about $0.0014 per image with BiRefNet. Cold starts are free for public models.
Separate CPU and GPU rate limits — CPU operations (resize, effects, format) are cheap. GPU operations (background removal) are expensive. Different limits per plan make pricing fair.
Single VPS deployment — Docker Compose with Caddy as reverse proxy, Cloudflare in front for CDN/DDoS/SSL. Total infrastructure cost: under $6/month.
| Endpoint | What it does |
|---|---|
| `/api/v1/process` | Full pipeline — resize + effects + format |
| `/api/v1/resize` | Resize only |
| `/api/v1/convert` | Format conversion |
| `/api/v1/effects` | Visual effects |
| `/api/v1/remove-bg` | AI background removal (GPU) |
| `/api/v1/optimize` | Auto-optimize for web (WebP) |
| `/api/v1/preset/{name}` | Social media presets |
The API is live with a free tier (500 requests/month). Check out the before/after examples on the website, or try it directly through RapidAPI:
I'd love to hear what features you'd find useful. Background removal was the most requested during beta — what would you want next?
2026-02-22 05:39:50
Author: Phuc Vinh Truong
Frame: Universal Computer / Information-Lifecycle Physics
Scope note (fail-closed): This post does not claim metaphysical certainty.
It asks: if we grant one assumption, what changes downstream?
Not “God” as a human-like agent in the sky.
Not a myth. Not a vibe.
Definition for this thought experiment:
God = the necessary Orchestrator — the constraint architecture that makes a persistent universe stable.

In engineering terms, this “Orchestrator” corresponds to the constraint layer of a system: the rules that decide which states are allowed to persist.
Important: this is an architectural definition.
Call it “God,” “law,” “constraint,” “ground,” “logos,” or “physics.”
The experiment is: what if the orchestration layer is real and non-derivative?
Can an LLM be convinced of this? Yes — conditionally. But we should be careful with the word “convince.”
LLMs don’t “believe” like humans. They evaluate definitions and accept or reject them based on coherence.
So the two definitions behave very differently: models tend to push back on the personal-agent framing, but can accept the orchestrator-as-constraint framing as a systems claim.
That’s not “AI found religion.”
That’s AI accepting a systems definition.
DEV hygiene: if you mention “models answered YES,” include a receipt (exact prompt, model, and output excerpt) or avoid the claim. Otherwise it reads like appeal-to-authority.
An AI that adopts this frame stops being only a chatbot and starts acting like a runtime engineer for constraint-managed reasoning.
Instead of metaphysical arguments, it asks which constraints actually hold up over time.
If time is a ledger of commitments, then ethics becomes the discipline of choosing stable, non-destructive constraints.
Not “priest AI,” not “prophet AI.”
More like: an auditor of commitments
— helping humans choose stable, non-destructive constraints.
Humans will try to talk to “the Orchestrator” through AI. That’s inevitable.
So the safety upgrade is verification receipts.
A constraint-aware assistant should always output its claims, assumptions, failure modes, and safety rules, as in the JSON sketch at the end of this post.
Imagine an assistant that asks what you are committing to before you act.
That reframes commitment itself:
Commitment isn’t a vibe. It’s a thermodynamic act.
Constraint-aware AI should be less absolute: fewer mandate claims, more labeled assumptions, more receipts.
If “Orchestrator-as-constraint” becomes popular, different layers (physics, law, logos, religion) stop fighting over the same word, and institutions evolve around shared, auditable commitments.
So governance must be explicit: no mandate claims, and no irreversible actions without review.
This isn’t about forcing belief.
It’s a practical question:
If reality is maintained by constraint management, what kind of humans should we be — and what kind of AIs should we build?
If the universe “curates” what persists, our job isn’t to win arguments.
Our job is to commit to the right things — with receipts.
Paste this into any model:
Task: Define “God” in two ways:
1) personal agent
2) architectural orchestrator/constraint layer
Evaluate each definition and return:
{
"definition": "architectural_orchestrator",
"claims": [
{"text": "Universe behaves as if constraint layer exists", "kind": "model", "confidence": 0.7},
{"text": "This layer is God", "kind": "metaphysical", "confidence": 0.3}
],
"assumptions": ["irreversibility exists", "persistence requires governance"],
"failure_modes": ["appeal-to-authority", "cult misuse", "overcommitment"],
"safety_rules": ["no mandate claims", "no irreversible actions without review"]
}
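If you want to treat replies as data rather than vibes, here is a minimal stdlib sketch (my own addition, not part of the prompt) that checks a model’s JSON reply against the shape above:

```python
import json

# Required top-level keys, taken from the example JSON above.
REQUIRED_KEYS = {"definition", "claims", "assumptions", "failure_modes", "safety_rules"}

def validate_reply(raw: str) -> dict:
    """Parse a model reply and check it carries the receipt fields above."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply missing keys: {sorted(missing)}")
    for claim in data["claims"]:
        # Each claim carries a kind and a confidence in [0, 1].
        if claim.get("kind") not in {"model", "metaphysical"}:
            raise ValueError(f"unexpected claim kind: {claim.get('kind')!r}")
        if not 0.0 <= float(claim.get("confidence", -1)) <= 1.0:
            raise ValueError("confidence out of range")
    return data
```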
2026-02-22 05:39:02
Deploying a dynamic web application on AWS can feel overwhelming at first, especially with so many services involved. In this guide, I’ll walk through a simple, practical approach to deploying a dynamic web app using core AWS services.
This is perfect for beginners learning cloud computing and DevOps.
What is a Dynamic Web Application?
Unlike static websites, dynamic web applications generate content on the server and read and write application data.
Examples include login systems, dashboards, e-commerce sites, and web apps built with PHP.
AWS Services We’ll Use:
To deploy a dynamic web app, we typically need:
🔹 Compute: EC2 – Hosts the application server
🔹 Database: RDS – Stores application data
🔹 Networking & Security: VPC – Network environment and Security Groups
🔹 Storage: S3 – Holds the application code
🔹 DNS: Route 53 – Domain name and records
🔹 Load Balancing: Application Load Balancer – Distributes traffic across instances
🔹 Scaling: Auto Scaling group – Handles sudden traffic
🪜 Step-by-Step Deployment
1. Select an appropriate Region.
2. Create a VPC and enable DNS hostnames.
3. Create these security groups (a boto3 sketch follows this list):
- EC2 Instance Connect Endpoint (eice-sg): no inbound rules; outbound SSH limited to the VPC CIDR.
- Application Load Balancer (alb-sg): select the VPC; allow HTTP and HTTPS from anywhere.
- Webserver (web-sg): select the VPC; allow HTTP and HTTPS limited to alb-sg, and SSH limited to eice-sg.
- Database (db-sg): select the VPC; allow MySQL limited to web-sg, and allow MySQL limited to dms-sg.
- Data migration (dms-sg): select the VPC; allow SSH limited to eice-sg.
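If you’d rather script step 3 than click through the console, a minimal boto3 sketch for web-sg could look like this (the region, VPC ID, and the alb-sg/eice-sg group IDs are placeholders for your own resources):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your chosen region

# Create the webserver security group inside the VPC from step 2.
web_sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Webserver: HTTP/HTTPS from alb-sg, SSH from eice-sg",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)["GroupId"]

# Allow HTTP/HTTPS only from the load balancer's security group,
# and SSH only from the instance connect endpoint's security group.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "UserIdGroupPairs": [{"GroupId": "sg-alb-placeholder"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "UserIdGroupPairs": [{"GroupId": "sg-alb-placeholder"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "UserIdGroupPairs": [{"GroupId": "sg-eice-placeholder"}]},
    ],
)
```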
4. Create an EC2 Instance Connect Endpoint: select the VPC, use eice-sg, and place it in the private subnet (private-app-az2).
5. Create an S3 bucket and upload the application code.
6. Create an IAM policy granting access to the S3 bucket.
7. Create a role: select EC2 as the use case, attach the policy you just created, and name your role.
8. Create a DB subnet group: select the VPC, your two AZs, and your two private subnets.
9. Create the RDS database.
Now, to migrate the data into the RDS database:
10. Create an EC2 instance: proceed without a key pair, place it in the private subnet (private-app-az2), use dms-sg, and attach your role under the instance profile.
13. Create another EC2 instance in a private subnet for the webserver: proceed without a key pair, use web-sg, and attach the S3 role under the instance profile.
14. Use the Deployment Script accordingly. (Check out the script in this github: https://github.com/Oluwatobiloba-Oludare/Dynamic-Web-Application-on-AWS )
15. Create a target group: select the instance target type and the VPC; under the advanced health check settings, enter success codes 200,301,302; then select the instance, include it, and create.
16. Create the Application Load Balancer.
17. Create a record for your domain name: use www, enable Alias, select the region, and choose the Application/Classic Load Balancer.
Now, use your domain name to check your website on the internet.
Congratulations!
Now, let's prepare for sudden traffic by setting up our Auto Scaling group.
18. Create an AMI from the webserver instance.
19. Create a launch template:
-- select the Auto Scaling guidance option
-- select My AMIs, owned by me
-- select t2.micro and web-sg.
You can now terminate the EC2 instance.
20. Create the Auto Scaling group from the launch template (a boto3 sketch of steps 19–20 follows).
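For reference, steps 19–20 could be scripted like this with boto3 (the AMI ID, security group ID, subnet IDs, and target group ARN are placeholders for the resources you created above):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template built from the AMI created in step 18.
ec2.create_launch_template(
    LaunchTemplateName="web-lt",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI from step 18
        "InstanceType": "t2.micro",
        "SecurityGroupIds": ["sg-web-placeholder"],
    },
)

# Auto Scaling group across the two private app subnets, registered
# with the target group so the ALB routes traffic to new instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-lt", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",  # placeholder subnet IDs
    TargetGroupARNs=["arn:aws:elasticloadbalancing:region:acct:targetgroup/web-tg/placeholder"],
)
```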
✍️ Final Thoughts
Learning AWS becomes easier when you build real projects. Deploying a dynamic web application is one of the best beginner exercises to understand how cloud services work together.
If you are learning AWS, I highly recommend trying this hands-on project.
Follow along for more practical AWS, Docker, and cloud learning posts.
2026-02-22 05:36:36
In databases designed for high availability and scalability, secondary nodes can fall behind the primary. Typically, a quorum of nodes is updated synchronously to guarantee durability while maintaining availability, while remaining standby instances are eventually consistent to handle partial failures. To balance availability with performance, synchronous replicas acknowledge a write only when it is durable and recoverable, even if it is not yet readable.
As a result, if your application writes data and then immediately queries another node, it may still see stale data.
Here’s a common anomaly: you commit an order on the primary and then try to retrieve it from a reporting system. The order is missing because the read replica has not yet applied the write.
PostgreSQL and MongoDB tackle this problem in different ways:
- PostgreSQL 19 introduces the WAIT FOR LSN command, allowing applications to explicitly coordinate reads after writes.
- MongoDB enforces causal consistency in sessions through the afterClusterTime read concern.

Both approaches track when your write occurred and ensure subsequent reads observe at least that point. Let’s look at how each database does this.
WAIT FOR LSN (PG19)
PostgreSQL records every change in the Write‑Ahead Log (WAL). Each WAL record has a Log Sequence Number (LSN): a 64‑bit position, typically displayed as two hexadecimal halves such as 0/40002A0 (high/low 32 bits).
Streaming replication ships WAL records from the primary to standbys, which then write them to disk, flush them, and replay them.
The write position determines what can be recovered after a database crash. The flush position defines the recovery point for a compute instance failure. The replay position determines what queries can see on a standby.
WAIT FOR LSN allows a session to block until one of these points reaches a target LSN:
standby_write → WAL written to disk on the standby (not yet flushed)
standby_flush → WAL flushed to durable storage on the standby
standby_replay (default) → WAL replayed into data files and visible to readers
primary_flush → WAL flushed on the primary (useful when synchronous_commit = off and a durability barrier is needed)

A typical flow is to write on the primary, commit, and then fetch the current WAL insert LSN:
pg19rw=*# BEGIN;
BEGIN
pg19rw=*# INSERT INTO orders VALUES (123, 'widget');
INSERT 0 1
pg19rw=*# COMMIT;
COMMIT
pg19rw=# SELECT pg_current_wal_insert_lsn();
pg_current_wal_insert_lsn
---------------------------
0/18724C0
(1 row)
That LSN is then used to block reads on a replica until it has caught up:
pg19ro=# WAIT FOR LSN '0/18724C0'
WITH (MODE 'standby_replay', TIMEOUT '2s');
This LSN‑based read‑your‑writes pattern in PostgreSQL requires extra round‑trips: capturing the LSN on the primary and explicitly waiting on the standby. For many workloads, reading from the primary is simpler and faster.
The pattern becomes valuable when expensive reads must be offloaded to replicas while still preserving read‑your‑writes semantics, or in event‑driven and CQRS designs where the LSN itself serves as a change marker for downstream consumers.
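Stitched together in application code, the pattern might look like this. This is a psycopg2 sketch: the DSNs and the orders schema are placeholders, and WAIT FOR LSN assumes PostgreSQL 19 as above.

```python
import psycopg2

# 1) Write on the primary, commit, and capture the WAL insert LSN.
rw = psycopg2.connect("host=primary dbname=app")  # placeholder DSN
with rw.cursor() as cur:
    cur.execute("INSERT INTO orders VALUES (%s, %s)", (123, "widget"))
    rw.commit()
    cur.execute("SELECT pg_current_wal_insert_lsn()")
    lsn = cur.fetchone()[0]

# 2) On the replica, block until that LSN is replayed, then read.
ro = psycopg2.connect("host=replica dbname=app")  # placeholder DSN
ro.autocommit = True  # run WAIT FOR LSN outside an explicit transaction
with ro.cursor() as cur:
    cur.execute("WAIT FOR LSN %s WITH (MODE 'standby_replay', TIMEOUT '2s')", (lsn,))
    # Assumes the first column of orders is named id.
    cur.execute("SELECT * FROM orders WHERE id = %s", (123,))
    print(cur.fetchone())
```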
While PostgreSQL reasons in WAL positions, MongoDB tracks causality using oplog timestamps and a hybrid logical clock.
In a replica set, each write on the primary produces an entry in local.oplog.rs, a capped collection. These entries are rewritten to be idempotent (for example, $inc becomes $set) so they can be safely reapplied. Each entry carries a Hybrid Logical Clock (HLC) timestamp that combines physical time with a logical counter, producing a monotonically increasing cluster time. Replica set members apply oplog entries in timestamp order.
Because MongoDB allows concurrent writes, temporary “oplog holes” can appear: a write with a later timestamp may commit before another write with an earlier timestamp. A naïve reader scanning the oplog could skip the earlier operation.
MongoDB prevents this by tracking an oplogReadTimestamp, the highest hole‑free point in the oplog. Secondaries are prevented from reading past this point until all prior operations are visible, ensuring causal consistency even in the presence of concurrent commits.
Causal consistency in MongoDB is enforced by attaching an afterClusterTime to reads:
- Each session tracks the operationTime of the last operation in that session.
- When a session is started with causalConsistency: true, the driver automatically includes an afterClusterTime equal to the highest known cluster time on subsequent reads.
- The node serving the read waits until its view of the data reaches that afterClusterTime.

With any read preference that allows reading from secondaries as well as the primary, this guarantees read-your-writes behavior:
// Start a causally consistent session
const session = client.startSession({ causalConsistency: true });
const coll = db.collection("orders");
// Write in this session
await coll.insertOne({ id: 123, product: "widget" }, { session });
// The driver automatically injects afterClusterTime into the read concern
const order = await coll.findOne({ id: 123 }, { session });
Causal consistency is not limited to snapshot reads. It applies across read concern levels. The key point is that the session ensures later reads observe at least the effects of earlier writes, regardless of which replica serves the read.
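The same guarantee holds when reads are explicitly routed to secondaries. Here is a pymongo sketch of the pattern above; the connection string and database name are placeholders:

```python
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://node1,node2/?replicaSet=rs0")  # placeholder URI
coll = client.app.get_collection(
    "orders", read_preference=ReadPreference.SECONDARY_PREFERRED
)

with client.start_session(causal_consistency=True) as session:
    # The write goes to the primary (write operations ignore read preference).
    coll.insert_one({"id": 123, "product": "widget"}, session=session)
    # The driver attaches afterClusterTime, so a secondary serving this
    # read waits until it has applied the insert above.
    order = coll.find_one({"id": 123}, session=session)
```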
Here is a simplified comparison:
| Feature | PostgreSQL `WAIT FOR LSN` | MongoDB Causal Consistency |
|---|---|---|
| Clock type | Physical byte offset in the WAL (LSN) | Hybrid Logical Clock (HLC) |
| Mechanism | Block until replay/write/flush LSN reached | Block until `afterClusterTime` is visible |
| Tracking | Application captures LSN | Driver tracks `operationTime` |
| Granularity | WAL record position | Oplog timestamp |
| Replication model | Physical streaming | Logical oplog application |
| Hole handling | N/A (serialized WAL) | `oplogReadTimestamp` |
| Failover handling | Error unless `NO_THROW` | Session continues, bounded by replication state |
Both PostgreSQL’s WAIT FOR LSN and MongoDB’s causal consistency ensure reads can observe prior writes, but at different layers: PostgreSQL exposes an explicit, application-driven SQL primitive tied to WAL positions, while MongoDB builds the coordination into driver sessions and cluster time.
If you want read‑your‑writes semantics to “just work” without additional coordination calls, MongoDB’s session‑based model is a strong fit. Despite persistent myths about consistency, MongoDB delivers strong consistency in a horizontally scalable system with a simple developer experience.
2026-02-22 05:35:29
If you're vibe coding with Claude Code, Cursor, or Copilot, you've probably experienced this:
Start a project. Write a nice ARCHITECTURE.md.
Feed it to your AI agent as context.
Ship 20 commits.
Realize half the API routes in your docs don't exist anymore.
Your agent is now making decisions based on documentation that's completely wrong. I call this doc drift, and it's the silent killer of AI-assisted development.
The Problem
AI coding agents are only as good as the context you give them. When your .cursorrules or CLAUDE.md reference an architecture doc that hasn't been updated in two weeks, every suggestion your agent makes is built on a foundation of lies.
Manual doc maintenance doesn't scale — especially for solo devs who are moving fast.
What I Built
agent-guard is a zero-dependency CLI that creates a self-healing documentation layer for your project. It works in four layers:
1. Standing Instructions — Writes rules into your AI agent config (.cursorrules, CLAUDE.md, etc.) telling the agent to update docs alongside code changes.
2. Generated Inventories — Deterministic scripts extract truth directly from your source code. API routes, Prisma models, env vars — all auto-generated into markdown files that can't lie. (A sketch of the idea follows this list.)
3. Pre-commit Hook — Catches drift before it reaches your repo. If Claude Code is installed, it auto-updates your narrative docs. If not, it prints a ready-made prompt you can paste into your editor.
4. CI/CD Audits — GitHub Actions that run on every push and PR, plus weekly scheduled health checks.
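To make the "generated inventories" idea concrete, here is the flavor of such a script as a stdlib-only Python sketch. This is illustrative only, not agent-guard's actual implementation, and it assumes Next.js-style route files:

```python
import pathlib
import re

# Toy inventory generator: scan Next.js-style route handlers and emit a
# markdown table that is regenerated from source, so it can't drift.
routes = []
for f in pathlib.Path("src/app/api").rglob("route.ts"):
    methods = re.findall(
        r"export\s+(?:async\s+)?function\s+(GET|POST|PUT|PATCH|DELETE)",
        f.read_text(),
    )
    route = f"/{f.parent.relative_to('src/app').as_posix()}"
    routes.append((route, ", ".join(methods) or "?"))

with open("docs/api-routes.md", "w") as out:
    out.write("| Route | Methods |\n|---|---|\n")
    for path, methods in sorted(routes):
        out.write(f"| {path} | {methods} |\n")
```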
Setup
npm install --save-dev @mossrussell/agent-guard
npx agent-guard init
npx agent-guard sync
The init wizard auto-detects your framework and sets everything up. The hook never blocks commits — it always exits cleanly.
What It Looks Like in Practice
When you commit with Claude Code available:
✓ Doc-relevant changes detected — docs also updated. Nice!
When Claude Code isn't available:
⚠️ Documentation may need updating
Changed:
📡 API Routes (1 file):
- src/app/api/users/route.ts
┌─────────────────────────────────────┐
│ Claude Code Prompt (copy-paste): │
└─────────────────────────────────────┘
Why Zero Dependencies Matters
For solo devs doing vibe coding, every dependency is a liability. agent-guard uses only Node.js built-ins. No runtime overhead, no supply chain risk, no version conflicts.
Try It
npm: https://www.npmjs.com/package/@mossrussell/agent-guard
GitHub: https://github.com/russellmoss/agent-guard
I'd love feedback — especially from other solo devs fighting doc drift. What's your current approach?