The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

I built a free shift scheduler that checks Taiwan labor law compliance

2026-05-03 05:45:06

Why I built this

I used to manage scheduling using Excel.

At first it worked fine, but as the team grew, problems started to appear:

  • Missing shifts
  • Employees scheduled on leave days
  • Too many consecutive working days
  • Hard to check labor law compliance

It became clear that Excel is good for recording schedules, but not for validating them.

So I built a simple web-based shift scheduler.

What it does

This tool helps you:

  • Create employee schedules
  • Define shift types (morning, evening, night)
  • Set required staff per shift
  • Detect missing coverage
  • Check basic labor constraints

It’s especially useful for small teams that need quick validation before publishing schedules.

Special focus: labor law compliance

One thing I noticed is that many scheduling tools focus on UI, but not on rules.

In Taiwan, scheduling mistakes can easily violate labor laws:

  • Working too many consecutive days
  • Not enough rest between shifts
  • Exceeding weekly working hours

So I added a basic validation layer to detect these issues early.
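The checks themselves are small enough to sketch. The following is a hypothetical illustration of what such a validation layer can look like, not the tool's actual code; the six-day limit mirrors a common reading of Taiwan's seven-day-cycle rest rule, but treat the threshold as an assumption:

```javascript
// Hypothetical validation sketch (not the scheduler's real code).
// days: array of booleans for one employee, true = scheduled to work.
function maxConsecutiveDays(days) {
  let run = 0;
  let max = 0;
  for (const worked of days) {
    run = worked ? run + 1 : 0;
    if (run > max) max = run;
  }
  return max;
}

// Flags schedules that exceed the consecutive-day limit.
function validateSchedule(days, { maxRun = 6 } = {}) {
  const violations = [];
  if (maxConsecutiveDays(days) > maxRun) {
    violations.push(`more than ${maxRun} consecutive working days`);
  }
  return violations;
}
```

The rest-between-shifts and weekly-hours checks follow the same shape: derive a number from the schedule, compare it against a legal threshold, report the violation.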

Demo

You can try it here:

👉 https://zeppelintsai.github.io/shift-scheduler/

Use cases

This tool is useful for:

  • Restaurants (peak hour scheduling)
  • Security teams (12-hour shifts)
  • Retail stores
  • Small businesses still using Excel

Next steps

I’m planning to add:

  • Auto scheduling
  • Better labor law rules
  • Multi-location support

If you have feedback, feel free to share!

Final thoughts

This is a small project, but I wanted to solve a real problem:

👉 Turning scheduling from "manual input" into "validated planning"

If you're still using Excel for scheduling, maybe this can help.

Keywords

This project can be described as:

  • free shift scheduler
  • employee scheduling tool
  • Taiwan labor law compliance scheduler
  • security shift scheduling (12-hour rotation)
  • restaurant shift scheduling

If you're searching for a simple scheduling tool with validation, this might help.

AGENTS.md, SKILL.md, DESIGN.md: How AI Instructions Split into Three Layers

2026-05-03 05:35:11

In April 2026, Google Labs released a spec called DESIGN.md. It's a design system specification readable by AI agents, packaged with a CLI validator: npx @google/design.md lint.

With DESIGN.md in the picture, we now have three different file types for instructing AI agents. AGENTS.md has been spreading as an industry standard since 2025 (jointly developed by OpenAI, Google, Sourcegraph, Cursor, and Factory; donated to the Linux Foundation in December 2025). SKILL.md sits at the core of Anthropic's Claude Skills. And now DESIGN.md. The three handle different concerns and don't overlap.

This article is for developers using coding agents like Claude Code, Cursor, or Codex in their work, and for tech leads operating natural-language instruction files like CLAUDE.md and style guides. If your team is doing Spec-Driven Development (SDD), this should also reach you.

What I want to lay out is two things: how AI instructions are starting to split across three layers — behavior, individual tasks, and visual appearance — and how that connects with SDD as a parallel movement.

The Old Pattern: Natural-Language Documents

A few years into the ChatGPT era, most engineers have written some form of "rules I want the AI to follow" in a Markdown file. CLAUDE.md, styleguide.md, CONTRIBUTING.md, internal coding conventions. The locations vary, but the format is roughly the same: unstructured natural language.

A writing-style-guide.md file I've been building over the past few months is a typical example. It's a style guide I use when writing technical articles with Claude — a list of patterns common in AI-generated text, written down as forbidden phrases. By making Claude Desktop read it every session, the tone of my output stays consistent. It's part of a personal repository (ikenyal-ai-agents) I use as the harness for my business automation agents — the one I covered in my previous post.

https://dev.to/aws-builders/harness-engineering-with-nothing-but-markdown-g6b

The file contains roughly 150 lines: rules like "don't use em dashes," "avoid invitations like 'let's try…!'," "drop AI-style preambles like 'what's interesting is…'." The same repository has 15 instruction files under agents/, organized by team and role: executive-assistant.md, sre-support.md, qa-support.md, accounting.md. Each describes "the assumptions to operate under as this role" in plain natural language.

This approach has clear benefits. You can articulate tone, stance, and implicit rules. New team members can read the files and pick up the expectations. With CLAUDE.md, Claude Code reads it every session, so persona-level instructions land consistently.

There are limits, too. First, validation falls on humans. Whether a rule was followed or not gets decided by a human reading the output. Second, individual judgment leaks in. "Write politely" means different things to different reviewers.

The third limit is the actual subject of this article. Rules that are formally verifiable (forbidden phrases, em-dash usage, specific pattern matches) and rules that require judgment (tone, structural choices, how to open with empathy) sit in the same file. So even the verifiable parts end up depending on human review. That's the problem the three new file types are addressing.

New Type 1: How DESIGN.md (Google Labs) Specifies Visual Appearance

On April 10, 2026, Google Labs published the DESIGN.md specification at google-labs-code/design.md. As of early May, the repo has over 11,000 stars. It's the reference implementation for Google Stitch (stitch.withgoogle.com), an AI-driven UI generation product.

https://github.com/google-labs-code/design.md

The specification doc lives on the Stitch side.

https://stitch.withgoogle.com/docs/design-md/specification

What DESIGN.md covers is the design system specification. You write machine-readable design tokens in YAML at the top of the file (colors, typography, spacing, components), and human-readable design intent in the Markdown body underneath. Both live in the same file.

---
name: Heritage
colors:
  primary: "#1A1C1E"
  tertiary: "#B8422E"
typography:
  h1:
    fontFamily: Public Sans
    fontSize: 3rem
---

## Overview

Architectural Minimalism meets Journalistic Gravitas.

## Colors

- Primary (#1A1C1E): Deep ink for headlines and core text.
- Tertiary (#B8422E): "Boston Clay", the sole driver for interaction.

The headline feature of this format is the CLI validator that ships with it.

npx @google/design.md lint DESIGN.md

This checks token reference integrity, WCAG contrast ratios, and structural rule compliance, returning the result as JSON. Wire it into CI and you can verify design system consistency on every pull request. There's also a diff command that compares two DESIGN.md files and returns token-level changes in a structured form. Design system version control — historically a manual process — gains a verifiable layer.

For Japanese UIs, the Google Labs spec alone falls short. It doesn't define the typography requirements specific to Japanese (CJK font fallback chains, line height, letter-spacing, kinsoku shori, mixed typesetting). The gap is filled by kzhrknt/awesome-design-md-jp, which publishes Japan-localized DESIGN.md files for over 10 services including Apple Japan, SmartHR, freee, note, MUJI, Mercari, LINE, and Toyota. For Japanese products, using both the Google Labs spec and the Japan edition together is the practical approach.

https://github.com/kzhrknt/awesome-design-md-jp

What DESIGN.md carries is the design system that used to be scattered across Figma files and style guide PDFs, now consolidated into a single file with both machine-readable and human-readable parts. Think of it as the spec foundation that lets AI agents generate UIs with a consistent look every time.

New Type 2: How SKILL.md (Anthropic) and AGENTS.md Specify Behavior

While DESIGN.md covers "appearance," SKILL.md and AGENTS.md cover "behavior" — defining what the agent is trying to do, how it should proceed, and what it must not do.

SKILL.md is the file format standardized by agentskills.io as part of the Agent Skills open standard. Anthropic's Claude Skills is one implementation of this standard; the same SKILL.md works across Claude Code, Claude.ai, and the Agent SDK. Because it's standards-compliant, the same file is also readable by other agents like OpenClaw and Hermes. The structure: declare metadata (skill name, description, allowed tools) in the YAML at the top of the file, and write the task procedure or domain knowledge in the Markdown body below.

https://agentskills.io/home
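As a concrete illustration of that structure, a minimal SKILL.md skeleton might look like the following. The metadata fields follow the spec's description; the skill name and task steps are invented for the example:

```markdown
---
name: monthly-aggregation
description: Aggregate monthly sales CSVs and draft a summary report.
allowed-tools: Read, Write, Bash
---

# Monthly aggregation

1. Read every CSV under reports/YYYY-MM/.
2. Sum the amount column per category.
3. Write the result to reports/YYYY-MM/summary.md.
```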

A clear example of SKILL.md is conorbronsdon/avoid-ai-writing. It's an English-only skill that detects and rewrites AI patterns in English text — transition phrases like "Moreover," significance inflation like "watershed moment," and roundabout verb constructions like "serves as." It uses a 100+ word replacement table organized into 3 tiers (Tier 1 always replaces, Tier 2 flags when 2+ words appear in the same paragraph, Tier 3 flags only at high density), and audits 36 pattern categories. Two modes: detect and rewrite.

https://github.com/conorbronsdon/avoid-ai-writing

What sets it apart from a one-shot prompt is the structured audit it returns. In rewrite mode, you get four discrete sections: identified issues, the rewritten text, a summary of changes, and a second-pass audit. What changed and why becomes transparent.
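That tiering is straightforward to sketch. The word lists and thresholds below are invented stand-ins for illustration, not the skill's actual 100+ word tables:

```javascript
// Illustrative tiered audit, loosely modeled on the approach described above.
const TIER1 = { leverage: "use", utilize: "use" };    // always replace
const TIER2 = ["moreover", "furthermore", "notably"]; // flag at 2+ per paragraph

function auditParagraph(text) {
  // Tier 1: unconditional word substitutions.
  let rewritten = text;
  for (const [bad, good] of Object.entries(TIER1)) {
    rewritten = rewritten.replace(new RegExp(`\\b${bad}\\b`, "gi"), good);
  }
  // Tier 2: only flagged when multiple words co-occur in one paragraph.
  const hits = TIER2.filter((w) => new RegExp(`\\b${w}\\b`, "i").test(text));
  return { rewritten, flagged: hits.length >= 2 ? hits : [] };
}
```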

AGENTS.md covers the agent's overall behavior. Project assumptions, roles, prohibitions, escalation rules. As I mentioned at the top, it started with the Amp team at Sourcegraph; today OpenAI, Google, Cursor, and Factory jointly drive it, and it was donated to the Linux Foundation in December 2025. Think of CLAUDE.md as the Claude-specific version of AGENTS.md. Claude Code reads CLAUDE.md rather than AGENTS.md in its spec, but the pattern recommended by agents.md is to make AGENTS.md the actual file and symlink CLAUDE.md to it. In the personal repository I introduced earlier, the files under agents/ belong to this layer.
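In practice, the recommended symlink setup is a two-liner (the file contents here are placeholders):

```shell
# AGENTS.md is the canonical file; CLAUDE.md becomes an alias to it,
# so Claude Code and AGENTS.md-aware agents read the same instructions.
printf '# Project rules for agents\n' > AGENTS.md
ln -sf AGENTS.md CLAUDE.md
```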

SKILL.md and AGENTS.md cover different ranges. AGENTS.md handles "overall context and boundaries." SKILL.md handles "an executable unit for a specific task."

The avoid-ai-writing English style auditor I mentioned is a specific task, so it ships as SKILL.md. A file like agents/genda/qa-support.md, which describes the assumptions and engagement style of a QA role, defines the agent's boundary — that goes on the AGENTS.md side.

The shared concern of these formats is "behavior and procedure," not visual appearance. What the agent knows, what it's tasked with, what it must avoid. That's a movement to fix these in a verifiable form.

The Three-Layer Split

Lining up the three file types, the layers each one handles become clear.

| Layer | Format | What it carries | Examples |
| --- | --- | --- | --- |
| Behavior | AGENTS.md / CLAUDE.md (natural language + rules) | Overall context, roles, prohibitions | CLAUDE.md, role-specific files like agents/genda/qa-support.md |
| Individual task | SKILL.md (YAML at top + Markdown body) | Reusable tasks, procedures, domain knowledge | avoid-ai-writing, in-house procedure skills |
| Appearance | DESIGN.md (YAML at top + Markdown body) | Design system spec, verifiable visual rules | The Google Labs reference, individual service files in kzhrknt/awesome-design-md-jp |

The three are complementary, not competing. CLIs like bergside/typeui are emerging as tools that can generate or update either SKILL.md or DESIGN.md, depending on what you choose — a sign of tooling that assumes the division of labor.

https://github.com/bergside/typeui

What's actually different across the layers is "where to place the balance between machine-readable and human-readable." AGENTS.md skews almost entirely human-readable; over-structuring it would block the contextual judgment and nuance it needs to convey. SKILL.md is partially structured by the YAML at the top, but the body stays human-readable — task granularity has to be readable by humans before it can be instructed. DESIGN.md puts machine-readable design tokens in the top YAML and human-readable design intent in the body, with the two cleanly separated.

The center of gravity between "machine-readable" and "human-readable" sits in different places per layer. That's just the standard structuring principle — "manage things at different layers in different files" — applied to AI agents. The file names themselves spell out the division: AGENTS.md ("instructions to the agent"), SKILL.md ("a reusable skill"), DESIGN.md ("the design system"). The names match what each one carries.

Teams that have been packing all their "AI rules" into a single CLAUDE.md now face a split decision. Open up your CLAUDE.md and run these questions against it — splits start to surface:

  • Is there a section writing design system rules? → If yes, that goes to DESIGN.md
  • Are specific task procedures in there (monthly aggregation, test review, contract review)? → If yes, those go to SKILL.md
  • What's left is overall agent context and boundaries (roles, prohibitions, escalation criteria) → that's the AGENTS.md equivalent that stays

The three-layer split works as a framework for splitting your file.

Connecting with SDD

Stepping back to look at the bigger picture: how does the three-layer split relate to the broader movement of "specs for AI"?

SDD is a development style where you write the spec — requirements, design, tasks, implementation — before generating the code. The underlying idea: "specs aren't disposable scaffolding, they're executable artifacts that produce code." AWS's Kiro provides a workflow that generates requirements.md, design.md, and tasks.md in order under .kiro/specs/{feature}/. GitHub's Spec Kit (over 90,000 stars) supports the same flow with slash commands like /specify, /plan, /tasks, /implement. The EARS notation (Easy Approach to Requirements Syntax) used by Kiro reduces ambiguity by formatting requirements into 5 fixed templates. SDD has spread quickly between 2025 and 2026.

https://kiro.dev/

https://github.com/github/spec-kit

The three-layer split (AGENTS.md / SKILL.md / DESIGN.md) and SDD look like separate movements on the surface. The SDD community concentrates on Kiro and spec-kit usage; the DESIGN.md side concentrates on formal specs and validation tooling. You don't see many articles bridging the two.

But put their philosophies side by side and the overlap is striking.

| # | Shared philosophy | SDD (Kiro etc.) | DESIGN.md / SKILL.md / AGENTS.md |
| --- | --- | --- | --- |
| 1 | Specify before implementing | requirements → design → tasks → implementation | behavior → implementation, appearance → implementation |
| 2 | Mix machine-readable + human-readable | requirements.md (EARS notation) + natural language | YAML at top + Markdown body |
| 3 | Persistent context for the AI | reference .kiro/specs/{feature}/ every time | reference DESIGN.md / AGENTS.md every time |
| 4 | Reduce ambiguity through structured syntax | EARS notation structures requirements (5 templates) | lint validates WCAG contrast ratios and structural rules |
| 5 | Give decisions a fixed home | spec files are where decisions live | spec files are where decisions live |

Both sit inside the larger "specs for AI" movement and share the same underlying philosophy.

That said, they're not the same thing. The biggest difference, in one phrase: time horizon.

| # | Axis | SDD | DESIGN.md / SKILL.md / AGENTS.md |
| --- | --- | --- | --- |
| 1 | Time horizon | Describes "what to build next" | Describes "rules that already exist" |
| 2 | Scope | Single feature / project lifecycle | Persistent rules and styles |
| 3 | Update rhythm | New per feature → consume → archive | Long-term maintenance, gradual growth |
| 4 | Subject | Requirements, design, tasks (procedure for action) | Rules for behavior, individual tasks, appearance |

SDD specs describe "what we're going to build." requirements.md is "what this feature needs to satisfy"; design.md is "how to implement this feature"; tasks.md is "how to break the feature into work." Once the feature ships, they finish their job and get archived.

The three-layer specs describe "what should always hold." DESIGN.md provides the color and typography rules every time you generate a UI; AGENTS.md provides the agent's assumptions across every session. They get maintained long-term and grow incrementally.

This time-horizon difference is why the two don't compete. Transient specs and persistent specs coexist in the same project. They can also reference each other. Imagine writing "use {colors.tertiary} for the button" inside .kiro/specs/checkout-feature/design.md — that lets a transient feature spec reference a color token from a persistent DESIGN.md. The pattern isn't widely established yet, but the structure fits cleanly.

One thing worth noting: as of May 2026, the active areas of SDD (the Kiro community and similar) and the active areas of DESIGN.md / SKILL.md / AGENTS.md haven't really crossed paths. The SDD side concentrates on "how to build a feature"; the three-layer side concentrates on "how to deliver the rules."

You don't have to be doing SDD to start with the three-layer split — the split alone gets you to the door of "specs for AI." If your team is already on SDD, start referencing DESIGN.md tokens from inside your feature specs and you avoid maintaining the same rules in two places. The two movements look set to converge in the next phase.

Not Everything Becomes a Spec

The discussion of the three-layer split tends to drift toward "shouldn't we just spec everything," but in practice, that doesn't happen.

Rules that can't be formally verified stay as natural-language documents. Tone, structural choices, cultural nuance. Things like "how to open an article with empathy" or "how to give an ending the right amount of resonance" — judgment-based qualities. The cost of speccing them isn't the issue; the essence falls out when you try.

The judgment is straightforward: "is this formally verifiable?"

  • Color contrast ratios (verifiable) → DESIGN.md
  • Word substitutions like "leverage → use" (verifiable) → SKILL.md
  • Tone (soft assertions, not textbook-sounding), overall stance (not teaching, just organizing) and similar (not verifiable) → stays in AGENTS.md / CLAUDE.md
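"Formally verifiable" is not an abstraction; the contrast check that DESIGN.md's linter performs is plain arithmetic. A minimal sketch of the WCAG 2.x ratio (my own illustration, not the linter's code):

```javascript
// Relative luminance of a "#RRGGBB" color, per the WCAG 2.x definition.
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB linearization
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio ranges from 1 (identical) to 21 (black on white).
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}
```

A rule like "body text must reach 4.5:1" then becomes a mechanical pass/fail, which is exactly what separates the DESIGN.md layer from the judgment-based rules above it.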

For small teams, "one natural-language file" is often enough. If CLAUDE.md alone is keeping things running, there's no need to force a split. The trade-off between the cost of speccing and the load of operating it depends on team size and how long the operation has to last.

The three-layer split is something you adopt incrementally, just like SDD — you don't need to spec everything at once. Start with the complex areas, the areas where verification helps most.

In other words, the three-layer split isn't a goal. It's an option you adopt when the situation calls for it.

Where to Start

A few options come into view from this overview.

A reasonable first move is to open your CLAUDE.md or style guide and sort it into "formally verifiable" and "judgment-based" sections. Color and typography rules, word substitution lists, structural rules. If a useful amount of verifiable content sits there, pick one to break out into either DESIGN.md (appearance) or SKILL.md (task). Don't try to split everything at once — start with the most independent piece.

Pulling in external skills is another route. Drop a ready-made SKILL.md like avoid-ai-writing into ~/.claude/skills/ and your stance as a writer doesn't change — only the verification gets handed off to the machine.

Teams already running Kiro or spec-kit are probably at the stage where they could try referencing DESIGN.md tokens from inside .kiro/specs/{feature}/design.md. The cross-reference between feature specs and persistent specs is still a thin area in terms of public examples.

The shared stance: don't try to spec everything at once. Document split → operational trial → speccing — staged migration is the realistic path. The three-layer split isn't a finished form. It's a movement still in progress, and that's the safer way to read it.

AI rules started splitting from a single natural-language document into three spec formats. That's another side of the same movement as SDD.

Not everything becomes a spec, but managing different roles in different files — that ordinary structuring is starting to apply to AI agents, too.

Stop Using Firebase for Everything: Why I Switched to PostgreSQL (Supabase)

2026-05-03 05:34:14

1. The Hook
We’ve all been there. You start a new project, you want to move fast, so you plug in Firebase. It feels like magic at first. But what happens 6 months down the line when your app scales, your data becomes relational, and suddenly you are paying a premium just to run complex queries? You hit the NoSQL wall.

2. The Problem
The biggest trap modern developers fall into is treating a NoSQL document store (like Firebase/Firestore) as the ultimate solution for every single app.
Here is where the pain starts:
  • Vendor lock-in: Moving away from Firebase once your app is live is a nightmare.
  • Complex queries: Need to filter data based on multiple nested conditions? Good luck writing that without fetching half your database.
  • Pricing surprises: It's free until you accidentally run a bad query in an infinite loop and wake up to a massive bill.

3. The 'StackByUjjwal' Solution
It's time to respect Relational Databases again. If you love the ease of Firebase but need the power of PostgreSQL, the industry is rapidly shifting towards tools like Supabase (the open-source Firebase alternative).
Here is how beautifully simple it is to fetch relational data using Supabase in your JavaScript app, without losing the power of SQL:

// 📁 db-service.js
import { createClient } from '@supabase/supabase-js'

// Initialize the PostgreSQL connection via Supabase
const supabaseUrl = 'https://your-project-id.supabase.co'
const supabaseKey = 'YOUR_ANON_KEY'
const supabase = createClient(supabaseUrl, supabaseKey)

// Fetching Relational Data
export const getActiveUsersWithProfiles = async () => {
  try {
    const { data, error } = await supabase
      .from('users')
      .select(`
        id, 
        username, 
        profiles (avatar_url, bio)
      `)
      .eq('status', 'active');

    if (error) throw error;
    return data;

  } catch (error) {
    console.error("Database Query Failed:", error.message);
    return []; // return an empty array so callers never receive undefined
  }
}

4. The Link Dump

Written by Ujjwal Sharma, Founder of @stackbyujjwal. Passionate about Full-Stack Web Development, building scalable architectures, and sharing code that makes sense.
Let's connect and build something awesome:
🔗 Linktree: https://linktr.ee/stackbyujjwal
🐙 GitHub: https://github.com/stackbyujjwal
📺 YouTube: https://youtube.com/@stackbyujjwal?si=mRRyKaWoZ-xbXQWp

I rewrote mp3gain in Rust — 'compatible' turned out to be three different things

2026-05-03 05:29:13

If you maintain a podcast, music server, or any audio pipeline that needs consistent volume across files, there's a non-trivial chance you have one of these somewhere:

RUN apt-get install -y mp3gain

Or in a beets config:

replaygain:
  backend: command
  command: mp3gain

Or buried in a cron job from 2014.

mp3gain was written in C by Glen Sawyer in 2003. Upstream development stopped around 2009. Distributors (Debian, Ubuntu, Homebrew) keep it alive with security patches, but no new features have shipped in 15+ years. Its AAC counterpart aacgain died around the same time and doesn't even build cleanly on modern 64-bit systems.

People keep using both because the popular alternatives — loudgain, rsgain, ffmpeg loudnorm — solve a related problem (writing ReplayGain tags) but not the same problem. A tag-only tool doesn't help when your players ignore tags entirely: DJ hardware, smart speakers, most car audio, podcast publishing pipelines that bake volume into the file. For those, you need the bitstream itself rewritten — losslessly, reversibly, fast.

Rather than CVE-patch a 22-year-old C codebase one more time, I spent the last year writing mp3rgain, a Rust implementation that reads and writes the same files mp3gain does. Halfway through, I realized the word "compatible" was hiding three completely different things.

Layer 1 — byte-identical output

The strictest compatibility claim is that the output file is bit-for-bit identical:

cp original.mp3 a.mp3 && cp original.mp3 b.mp3
mp3gain  -g 2 a.mp3
mp3rgain -g 2 b.mp3
sha256sum a.mp3 b.mp3
# → same hash

To get there, the Rust implementation has to match every detail of the C version's bitstream rewrite: synchronization word detection, MPEG version dispatch, side-information size calculation (which differs by MPEG version × channel mode), and bit-level reads/writes that span byte boundaries.
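Of those details, the side-information size is the easiest to show. These are the standard MPEG Layer III sizes; the function is my illustration, not mp3rgain's source:

```javascript
// Side-information size in bytes for a Layer III frame. Getting this wrong
// by even one byte breaks byte-identical output.
function sideInfoSize(mpegVersion, channelMode) {
  const mono = channelMode === "mono";
  if (mpegVersion === 1) return mono ? 17 : 32; // MPEG-1
  return mono ? 9 : 17;                         // MPEG-2 / MPEG-2.5
}
```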

I wanted to "clean up" something the C code did awkwardly more than once. Every time I had to remind myself: the moment I lose byte-identical output, I lose the right to call this a drop-in replacement. There's a CI script (scripts/compatibility-test.sh) that diffs SHA-256 hashes between both tools across MPEG1/2/2.5, mono/stereo/joint stereo, CBR/VBR, and a range of gain values. If even one case mismatches, the PR doesn't merge.

Layer 2 — tag interoperability

mp3gain stores undo information in APEv2 tags:

mp3gain_undo: -3,-2,N
mp3gain_minmax: 100,148

If I run mp3gain -g 2, then later mp3rgain -u, the undo has to work — and vice versa. This is a different layer from byte-identical output: it's about the metadata block, not the audio frame data.

mp3rgain reads and writes the same APEv2 fields with the same string format. There's one intentional break: after -u, mp3gain leaves an empty APEv2 tag block in place (probably because rewriting it would shift downstream frame offsets). mp3rgain removes the tag completely. The audio data is identical either way and the bidirectional undo property still holds, so I judged this as still "compatible enough."
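For anyone scripting against those tags, the undo value splits cleanly. A tiny illustrative parser; reading the two numbers as left/right global_gain deltas and the third field as a flag is my interpretation of the format shown above:

```javascript
// Parse an mp3gain_undo APEv2 value like "-3,-2,N" (illustrative, not
// mp3rgain's code). The deltas are applied in reverse to undo a gain change.
function parseUndo(value) {
  const [left, right, flag] = value.split(",");
  return { left: parseInt(left, 10), right: parseInt(right, 10), flag };
}
```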

Layer 3 — text protocol

mp3gain -o (no argument) prints a tab-separated table:

File    MP3 gain    dB gain Max Amplitude   Max global_gain Min global_gain
song.mp3    0   0.0 17234   148 100

beets parses this with regex. So do an unknown number of personal scripts that have run unmodified for a decade. Change the column order, the header text, or the separator, and you break all of them silently.

mp3rgain emits the exact same header — one println! line at main.rs:1275:

println!("File\tMP3 gain\tdB gain\tMax Amplitude\tMax global_gain\tMin global_gain");

New structured output lives behind -o json, opt-in, never the default.

What I deliberately didn't keep

Compatibility isn't free, and not every quirk is worth preserving:

  • AAC support: mp3gain has none. mp3rgain rewrites AAC global_gain in place (the same idea aacgain used) and stores undo info in MP4 freeform metadata atoms because APEv2 doesn't fit MP4 containers.
  • -o json and --dry-run: new flags for automated pipelines. Preview safely, then apply — something the original CLI didn't really support.
  • ID3v2 RVA2 / TXXX ReplayGain tags (-s i): opt-in. foobar2000, mpd, and other ReplayGain-aware players read these; APEv2 tags are invisible to them.

Migrating: what it actually looks like

For most pipelines, migration is one substitution.

Shell scripts:

sed -i 's/\bmp3gain\b/mp3rgain/g' your_pipeline.sh

Dockerfile — replace the apt-installed binary with a 2 MB static image:

- RUN apt-get install -y mp3gain && rm -rf /var/lib/apt/lists/*
- ENTRYPOINT ["mp3gain"]
+ FROM ghcr.io/m-igashi/mp3rgain:latest

That's it. The image is FROM scratch with a musl-static binary: no shell, no glibc, no apt cache to clean.

beets — change one line in ~/.config/beets/config.yaml:

replaygain:
  backend: command
- command: mp3gain
+ command: mp3rgain

The full migration guide is at docs/migrating-from-mp3gain.md, with sed patterns, CI snippets, and the apt/dnf/pacman/brew/winget/cargo install matrix.

Why bother

Three reasons that mattered to me:

  1. Memory safety. mp3gain's history includes a stream of CVEs — heap overflows in the side-info parser, mostly. Patching those in 2025 means tracking down a long-quiet maintainer's intent. A Rust rewrite removes the whole class from the picture.
  2. AAC. Most personal libraries on Apple platforms are AAC, and there's been no working tool to volume-normalize them losslessly since aacgain stopped building. DJ hardware, car audio, and smart speakers all ignore ReplayGain tags, so tag-only tools don't help.
  3. Distribution. Static binary in a 2 MB image, plus packages on Homebrew / Winget / AUR / PPA / Docker / Cargo. No "build from source on this niche distro" required.

GitHub: M-Igashi / mp3rgain

Lossless MP3/AAC volume adjustment - a modern mp3gain / aacgain replacement written in Rust

mp3rgain adjusts MP3 and AAC volume without re-encoding by modifying the global_gain field in each frame. This preserves audio quality while achieving permanent volume changes.

The only actively maintained tool that performs lossless AAC/M4A bitstream gain adjustment. aacgain has been unmaintained since ~2009 and rarely builds on modern 64-bit systems. mp3rgain is the only practical option today for re-encode-free AAC volume normalization.

Features

  • Only tool with lossless AAC bitstream gain: re-encode-free global_gain rewrite for AAC/M4A — a capability previously only available in the long-abandoned aacgain
  • Lossless & Reversible: No re-encoding, all changes can be undone (MP3 and AAC)
  • ReplayGain: Track and album gain analysis for MP3 and AAC/M4A
  • Zero dependencies: Single static binary (no ffmpeg, no mp3gain, no aacgain)
  • Cross-platform: macOS, Linux, Windows (x86_64 and ARM64)
  • mp3gain / aacgain compatible

What I'd love to hear

If you have mp3gain or aacgain in a pipeline somewhere, I'd be curious which of the three compatibility layers actually matters to you in practice — and whether anyone is relying on -s i ID3v2 ReplayGain tags I should know about. Issues and migration reports welcome.

Disclosure: the prose was drafted, and the descriptive cover.png generated, with AI editorial assistance. Code, design decisions, and the SHA-256 verification setup are my own.

Dockerizing Next.js for production

2026-05-03 05:19:38

Most Dockerfiles for Next.js you'll find online ship a 1.2 GB image, leak environment variables at build time, and rebuild every layer on a one-line change. They work on the demo. They don't work in production.

This is the Dockerfile I actually run. Multi-stage, ~150 MB final image, build-time and runtime env vars cleanly separated, layer caching that survives a package.json change. I'll walk through every line, explain why each stage exists, and call out the four gotchas that account for most "it worked locally" production failures.

The full setup (Dockerfile plus docker-compose, GitHub Actions deploy pipeline, auth, testing) is in a production-grade Next.js + NestJS starter I'm building. Free for email subscribers — subscribe at mahmoud-mokaddem.com.

The Dockerfile, up front

If you're in a hurry, copy this and skip to Common gotchas. The rest of the post explains every line.

# Stage 1: deps
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: builder
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build

# Stage 3: runner
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000 HOSTNAME=0.0.0.0
CMD ["node", "server.js"]

Three stages: deps, builder, runner. The first two do work; only the third ships.

Why multi-stage

A naïve Dockerfile copies your source, installs dependencies, builds, and runs — all in one stage. The image you ship to production carries everything that helped you build it: the full Node toolchain, npm's cache, dev dependencies, build artifacts you don't need at runtime, your .git directory if you weren't careful with .dockerignore. Easily 1+ GB.

Multi-stage builds let you do all that work in a "fat" intermediate image, then copy only the artifacts that need to ship into a clean final image. Each FROM starts a fresh image; COPY --from= reaches back into a previous stage to grab specific files.

For Next.js, the practical result: ~150 MB final image vs ~1.2 GB single-stage. Why this matters in production:

  • Faster registry pulls on small VPSes or autoscaling platforms. Pulling 1.2 GB on a 100 Mbps link takes ~96 seconds; pulling 150 MB takes ~12.
  • Faster cold starts on platforms like Fly.io and Cloud Run, where containers start on demand.
  • Lower registry cost when you push every commit.
  • Smaller security surface — fewer packages carrying potential CVEs in production.
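
Those pull-time figures are plain bandwidth arithmetic; a quick sanity check (sizes in MB, link speed in Mbps):

```shell
# seconds ≈ (size_MB × 8 bits/byte) / link_Mbps; integer math is close enough here
for size_mb in 1200 150; do
  echo "${size_mb} MB over 100 Mbps: $(( size_mb * 8 / 100 ))s"
done
# 1200 MB over 100 Mbps: 96s
# 150 MB over 100 Mbps: 12s
```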

The mental shortcut: do the messy work in a fat intermediate image, ship only the artifacts that need to run.

Stage 1 — Dependencies

FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

node:20-alpine is a deliberate trade-off. Alpine Linux is ~50 MB; node:20-slim is ~340 MB; node:20 (Debian-based) is ~1 GB. Alpine wins on size and is fine for almost every Next.js app.

The catch: Alpine uses musl libc instead of glibc. Some npm packages with prebuilt native binaries (historically canvas, sharp, certain database drivers) ship glibc binaries that don't load on Alpine. If you hit a binary-compatibility error during npm ci, the fix is usually to switch this stage's base to node:20-slim and accept the larger image. For a vanilla Next.js app, you'll never see this.

Notice we copy only package.json and package-lock.json, not the source. This is layer-caching discipline. Docker caches each layer; if a layer's input hasn't changed, it reuses the cached output. By isolating the dependency install to the lockfile, we get full cache reuse on every commit that doesn't touch dependencies — which is most of them. If we copied the source first, every code change would re-run npm ci from scratch.

About npm ci vs npm install: ci is deterministic, installs exactly what's in the lockfile, fails if the lockfile is out of date, and is faster. Always ci in Docker. (Yarn: yarn install --frozen-lockfile. pnpm: pnpm install --frozen-lockfile.)

Stage 2 — Builder

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build

Fresh stage, fresh Alpine, node_modules pulled forward from stage 1. COPY . . brings in the source tree (filtered by .dockerignore, covered below).

The standalone output mode is the one Next.js config flag you actually need. Add it to your next.config.js:

module.exports = {
  output: 'standalone',
};

Without this flag, npm run build produces the standard Next.js build output and your final image has to ship the entire node_modules tree (~300 MB+). With it, Next.js traces every dependency actually used by your built routes and emits a self-contained server.js plus only those traced packages in .next/standalone/node_modules, typically ~15 MB. That one flag is the biggest size win in this Dockerfile.

npm run build produces three things we care about:

  • .next/standalone/ — the self-contained server plus traced node_modules
  • .next/static/ — built static assets (JS bundles, CSS) for _next/static/* routes
  • public/ — static files you put in the public folder, which Next.js doesn't bundle into standalone

Stage 3 copies these three things and nothing else.

Build-time vs runtime env vars

This is the most common Next.js + Docker bug I see, so it gets its own callout.

Variables prefixed NEXT_PUBLIC_ are baked into the client-side JavaScript bundle at build time. They are not read at runtime from the container's environment. If you set NEXT_PUBLIC_API_URL only at runtime via docker run -e, your client code will see whatever value it had at build time (usually empty), not what you set at runtime.

Two ways to handle it.

(a) Pass NEXT_PUBLIC_* as --build-arg and rebuild per environment. In the Dockerfile, before the build step:

ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
RUN npm run build

Then at build time:

docker build \
  --build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
  -t my-app .

(b) Keep NEXT_PUBLIC_* for things that don't change per deploy (your domain, public Stripe key, public Sentry DSN), and put environment-specific config behind server-side data fetching where you can read process.env at runtime.

I prefer (b). Fewer images, simpler pipeline. Use (a) only when you genuinely need the value baked into the client bundle.

Stage 3 — Runner

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000 HOSTNAME=0.0.0.0
CMD ["node", "server.js"]

Final stage. Fresh Alpine, no toolchain, no dev dependencies. This is what ships.

NODE_ENV=production matters. Next.js skips dev-only logging and telemetry, and many libraries optimize behavior based on it.

The non-root user: addgroup creates a system group, adduser creates a user in it, USER nextjs switches the runtime to that user. Many container platforms (Kubernetes, ECS, Fly with strict modes) refuse to run containers as root by default. Even when they don't, running as root expands the impact of any container-escape CVE. This costs nothing; do it now.

The three copies are where the standalone output pays off:

  1. COPY --from=builder /app/public ./public — public/ is not part of the standalone output. Forget this line and all your favicons, robots.txt, and static images return 404. The first time. Always.
  2. COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ — the actual server. --chown makes the non-root user own the files, otherwise it can't read its own runtime.
  3. COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static — also not in standalone. Forgetting this gives you a site with no JS or CSS.

EXPOSE 3000 is documentation, not a port-open. It tells docker run -p and orchestrators "this app expects to be reachable on 3000."

HOSTNAME=0.0.0.0 is required to accept connections from outside the container. Next.js's standalone server defaults to localhost, which means your container would only accept traffic from itself.

Use CMD ["node", "server.js"], not npm start. npm wraps the process and intercepts signals, so your container won't gracefully shut down on SIGTERM. Orchestrator-driven restarts hang for 30+ seconds before the kernel kills it. node server.js handles signals correctly.

The .dockerignore file

This file gets skipped a lot, and it's often the answer to "why is my build context 2 GB?"

node_modules
.next
.git
.env*
README.md
*.log
coverage
.vscode
.idea
.DS_Store

Why each entry:

  • node_modules — gets reinstalled in the deps stage.
  • .next — build artifacts get rebuilt; carrying old ones in confuses Next.js's cache.
  • .git — your version history shouldn't ship in the container.
  • .env* — never bake secrets into images. Pass at runtime.
  • Logs, IDE folders, coverage reports — clutter.

The .env* line is a security concern worth dwelling on. If you've ever had an .env.local sitting in your working directory, .dockerignore is what keeps it out of the image. An image with .env.production baked in can be pulled by anyone with read access to your registry. Put real secrets in your runtime environment, not in the image.
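
One way to spot-check this after a build (hypothetical image tag my-app):

```shell
# If .dockerignore is doing its job, no .env files should exist inside the image.
docker run --rm my-app sh -c 'ls .env* 2>/dev/null && echo "LEAKED" || echo "clean: no .env files in image"'
```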

Image size walkthrough

Approach | Final image
--- | ---
Naïve single-stage on node:20 | ~1.2 GB
Multi-stage on node:20 (no standalone) | ~600 MB
Multi-stage on node:20-alpine (no standalone) | ~400 MB
Multi-stage on node:20-alpine + standalone | ~150 MB

Numbers are approximate; your app's specific dependencies move them ±20%.

What this saves you: deploy time drops from ~96 seconds to ~12 on a 100 Mbps registry pull. Cold start time on Fly or Cloud Run becomes meaningful at the standalone size. The biggest win is the standalone output flag. The Alpine base is second. Multi-stage is the structural decision that makes both composable.

docker-compose for local dev

This Dockerfile builds the production image. For local dev you usually want hot reload, a local Postgres, maybe Redis. A minimal compose file:

services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/myapp
    depends_on: [db]
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes: ['db_data:/var/lib/postgresql/data']
volumes:
  db_data:

This runs the production build locally, which is useful for catching prod-only bugs but not for hot reload. For real dev work you want a separate docker-compose.dev.yml with the source mounted as a volume and next dev running. That's a full post in itself — coming next in this series.

Common gotchas

The four bugs that account for most "it worked locally" production failures with this setup:

1. Public folder not appearing. You forgot COPY --from=builder /app/public ./public. Symptom: 404s on every static asset. Fix: add the line.

2. NEXT_PUBLIC_* env vars not reaching the client. They were set at runtime, not build time. Symptom: client-side code reads undefined or stale values. Fix: pass via --build-arg (per the Stage 2 section) or restructure so the value isn't needed in the client bundle.

3. Container exits immediately, no logs. You're using npm start instead of node server.js. npm wraps the process and hides what's happening. Fix: CMD ["node", "server.js"].

4. OOM during npm run build on a small VPS. Hetzner CX11 / DigitalOcean $4 droplets often can't fit a Next.js build in RAM. Symptom: build fails with JavaScript heap out of memory or gets SIGKILLed. Fix: build in CI/CD and push the image to your registry, then pull on the VPS; or add a swap file on the VPS.
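
For the swap-file route, this is the usual sketch on an ext4-based Debian/Ubuntu VPS; run it as root, and note that some filesystems need dd instead of fallocate:

```shell
# Give the VPS 2 GB of swap so `npm run build` survives its memory peak.
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```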

Each one has happened to me. Each one looks unrelated to Docker until you find it.

Where to deploy this

The Dockerfile doesn't change; the deploy target does.

  • Hetzner VPS + Coolify or Dokploy — cheapest, most control. What I'd pick for indie projects. Push the image to GitHub Container Registry; Coolify pulls and runs it.
  • DigitalOcean App Platform — push the Dockerfile, get a URL. Good middle ground.
  • Fly.io — global edge deploy, generous free tier for hobby work. fly launch auto-detects Next.js and writes a fly.toml for you.
  • AWS ECS / Fargate — enterprise default. More setup overhead, but the right call if you're already in AWS.

Each of these gets its own deploy walkthrough later in this series. The Dockerfile above works on all of them unchanged.

What's next

Two follow-ups in this series:

  • docker-compose for Next.js + NestJS local dev — the full dev-mode compose file with hot reload, Postgres, and Redis.
  • How I structure a NestJS project for production — the architecture conventions in the starter, with rationale.

If you've shipped this Dockerfile to a deploy target I didn't cover, I'd be curious what platform you picked and what bit you.

The full setup (Dockerfile, .dockerignore, docker-compose, GitHub Actions deploy pipeline, auth, testing) is in a production-grade Next.js + NestJS starter I'm building. Free for email subscribers — subscribe at mahmoud-mokaddem.com.

TestSprite MCP Server — Quickstart Guide

2026-05-03 05:17:54

TestSprite MCP Server — Quickstart Guide

TestSprite MCP is a powerful AI-driven testing server that helps you automatically test frontend and backend applications directly from your favorite IDE. With TestSprite you can easily generate and run test plans and have failures fixed automatically, all within your existing development workflow.

Installation

Prerequisites

Get your API key

  1. Log in to your TestSprite dashboard at https://www.testsprite.com/dashboard
  2. Navigate to API Keys under Settings
  3. Click "New API Key"
  4. Copy your API key

IDE Configuration

Cursor (One-Click)

  1. Get your API key
  2. Click the one-click install link for Cursor
  3. Enter your API key in Cursor
  4. Start testing

Cursor (Manual)

  1. Open Cursor settings
  2. Navigate to Tools & Integration
  3. Click "Add Custom MCP"
  4. Add the following configuration:
{
  "mcpServers": {
    "TestSprite": {
      "command": "npx",
      "args": ["@testsprite/testsprite-mcp@latest"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
  5. Verify that a green dot appears on the TestSprite MCP server icon

Important — Cursor sandbox mode: By default, Cursor runs MCP tools in sandbox mode, which restricts TestSprite. To fix this:

  • Go to Cursor → Settings → Cursor Settings
  • Go to Chat → Auto-Run → Auto-Run Mode
  • Change it to "Ask Every Time" or "Run Everything"

Claude Code

cd /path/to/your/project
claude mcp add TestSprite --env API_KEY=your_api_key -- npx @testsprite/testsprite-mcp@latest

Verify with:

claude mcp list

Expected output: TestSprite: npx @testsprite/testsprite-mcp@latest - ✓ Connected

VSCode

  1. Open the Command Palette
  2. Run MCP: Add Server
  3. Choose Command (stdio) as the type
  4. Enter npx @testsprite/testsprite-mcp@latest as the command
  5. Name it TestSprite
  6. Add the following env configuration:
{
  "servers": {
    "testsprite": {
      "command": "npx",
      "args": ["-y", "@testsprite/testsprite-mcp@latest"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}

Other IDEs

{
  "mcpServers": {
    "TestSprite": {
      "command": "npx",
      "args": ["@testsprite/testsprite-mcp@latest"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}

Verification

  • Your AI assistant can see the TestSprite MCP tools
  • No "command not found" errors
  • Try: "Help me test this project with TestSprite."

Your First Test

Step 1: Prepare your project

Start your application:

# For frontend applications
npm run dev          # Usually runs on port 3000, 5173, or 8080

# For backend applications
node index.js        # Usually runs on port 8000, 3001, or 4000
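
Before kicking off a test run, it can save a round trip to confirm both servers are actually reachable. The ports below are common defaults, so adjust them to your setup:

```shell
# Prints one line per URL; a "NOT reachable" line means fix the server first.
for url in http://localhost:3000 http://localhost:8000; do
  curl -sf -o /dev/null "$url" && echo "reachable: $url" || echo "NOT reachable: $url"
done
```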

Example project structure:

my-project/
├── frontend/          # React, Vue, Angular, etc.
│   ├── src/
│   ├── package.json
│   └── ...
├── backend/           # Node.js, Python, etc.
│   ├── app.py
│   ├── requirements.txt
│   └── ...
├── README.md
└── package.json

Step 2: The Magic Command

Open your IDE's chat and:

  1. Open a new chat window
  2. Type the magic command:
Can you test this project with TestSprite?
  3. Optionally drag your project folder into the chat window
  4. Press Shift+Enter

Your AI assistant takes over and guides you through the entire testing process.

Step 3: Configuration (Required)

When the test configuration page opens in your browser, fill in the following:

  1. Test type

    • Frontend mode: Tests UI and user flows (buttons, forms, navigation)
    • Backend mode: Tests APIs, services, server logic
    • Codebase scope: Full project scan
    • Code Diff scope: Only uncommitted Git changes (faster for iteration)
  2. Test account credentials (if your app requires login)

    • Frontend: Username: [email protected] / Password: your-test-password
    • Backend: Basic / Bearer token / API key / No authentication
  3. Application URLs:

   Frontend: http://localhost:5173
   Backend: http://localhost:4000
  4. Product Requirements Document (PRD): Upload your existing PRD (even a draft is enough). TestSprite AI generates a normalized PRD from it.

Step 4: Automated Workflow

TestSprite handles the entire testing process automatically:

  • Analyzes your project code
  • Generates a comprehensive test plan
  • Runs tests in the cloud
  • Produces detailed reports
  • Suggests automatic fixes for failures

Step 5: Reviewing Test Results

After testing, you'll find these files in your project:

testsprite_tests/
├── tmp/
│   ├── prd_files/                 # Uploaded PRD files
│   ├── config.json               # Test configuration
│   ├── code_summary.json         # Code analysis
│   ├── report_prompt.json        # AI analysis data
│   └── test_results.json         # Detailed test results
├── standard_prd.json             # Normalized PRD
├── TestSprite_MCP_Test_Report.md # Human-readable report
├── TestSprite_MCP_Test_Report.html # HTML report
├── TC001_Login_Success_with_Valid_Credentials.py
├── TC002_Login_Failure_with_Invalid_Credentials.py
└── ...                           # Additional test files

The test report shows: total coverage, pass rate, failed tests with failure analysis, and categories (Functional, UI/UX, Security, Performance).

Example test plan:

{
  "testCases": [
    {
      "id": "TC001",
      "title": "User Authentication Login",
      "description": "Test user login with valid credentials",
      "category": "Functional",
      "priority": "High",
      "steps": [
        "Navigate to login page",
        "Enter valid username and password",
        "Click login button",
        "Verify successful login"
      ]
    }
  ]
}

Example test report summary:

{
  "summary": {
    "totalTests": 18,
    "passed": 12,
    "failed": 6,
    "passRate": "67%",
    "coverage": "85%"
  },
  "failures": [
    {
      "testId": "TC005",
      "title": "Admin Panel Access",
      "error": "Button not found: #admin-delete-btn",
      "recommendation": "Add missing delete button in admin panel"
    }
  ]
}

Step 6: Automatic Bug Fixes

After reviewing the results, ask:

Please fix the codebase based on TestSprite testing results.

The AI will:

  • Analyze the failed tests
  • Identify the problematic code
  • Apply targeted fixes
  • Re-run the tests to verify the fixes
  • Iterate until all issues are resolved

Tips for Success

  • Make sure your frontend and backend are running and reachable before you start
  • Even a minimal PRD (one paragraph describing your app) is enough
  • Use Code Diff scope for quick validation of individual changes
  • TestSprite works best when your app is in a stable, runnable state

With this guide you can get started right away with automated testing in your IDE using TestSprite MCP. Happy testing!