The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

Lutherstadt Wittenberg is Reforming Again – This Time It's Software

2026-04-06 22:17:31

An invitation to those who think differently.

The Eddy Current Principle

Kayakers know: behind every rock in the river, an eddy forms.
A quiet pocket where the water curves back upstream, calm inside the chaos.
Those who know this don't avoid obstacles, they use them to rest, regroup, and move forward.
We build software the same way.

The Obstacles We Use

Gradle pulls in hundreds of libraries, nobody knows what they all do.

WordPress installs updates with Google Analytics quietly bundled in.

Windows is always watching.

App Stores take 30%, for nothing.

These aren't problems to route around.

They're rocks. And behind every rock is an eddy.

What We're Building

An operating system that can be trusted.

  • No package manager - apps delivered via HTTP, running natively, cached locally
  • No corporation - fully Open Source, anyone can host it
  • No hidden dependencies - AI-assisted code reviews as a trust layer
  • No middleman taking a cut - Bitcoin Lightning micropayments flow directly to developers

Stack: SML/SMS for UI, Go for services, Linux kernel as foundation.

Anyone who compiles themselves pays nothing.

Those who use the distribution let Satoshis flow, including backwards, to everyone who ever contributed.

Even Linus Torvalds gets a share for the kernel.

Who We're Looking For

Not finished developers.

People who are curious.

Who prefer understanding over consuming.

Who find an idea more important than a CV.

All disciplines welcome:

C++, Go, Design, Philosophy, Journalism, Law, whatever you bring.

How It Works – Three Circles

Local – Wittenberg Saturdays

Full day, €80 per person. Room costs covered from just 2 participants.

Hands-on, in person, real coffee.
I show you my way of doing things efficiently using AI - and Ahimsa, Gandhi's principle of do no harm, as the ethical foundation beneath every line of code we write.

Regional – Day Trip

Berlin and Leipzig are 1–2 hours by train.

Come for the Saturday, go home the same evening.

Global – X Spaces / Online

Free or small ticket via X.

Africa, Russia, Americas, Asia – wherever you are.

The global circle feeds the local one.

How It Begins

We don't wait for perfect conditions.

Two people signing up is enough – the circle begins.

In the first circle we decide together: Where do we meet? When? What do we build first?

From there the circle grows to seven. The project name gets chosen. First steps get defined.

The circle can grow to thirteen – larger spaces available in Wittenberg as needed.

Those who stay longer become mentors, running their own circles.

No big deal – it's about passing on tools.

Once it was Google and Stack Overflow. Today it's AI.

The principle stays the same: learn to find solutions, not memorize them.

Coding in the background is helpful – but no longer mandatory.

The AI handles the code. We handle the thinking.

Why Wittenberg?

In 1517, Martin Luther changed the frame here.

He didn't fight the system – he changed the frame.

That's second-order thinking.

That's what we do.

Recognition

Every contribution gets an Atesti para Dana – a physical certificate of participation, signed by witnesses.

Not money. Memory. Frameable, collectible, tradeable like art.

In twenty years, an Atesti saying "I helped build this OS"

might be worth more than you think.

Interested? Get in touch.

CrowdWare - Lutherstadt Wittenberg, Germany

Codeberg: codeberg.org/crowdware

Aho 🌱

Webinar: Integrating SAST into DevSecOps — Key Points

2026-04-06 22:14:33

Today, we'd like to share with you our full video from the webinar 'Integrating SAST into DevSecOps'.

About the speaker

Anton Tretyakov is an experienced DevOps engineer at PVS-Studio who builds and maintains the static analyzer infrastructure. He also writes about C++ in his spare time. During the webinar, Anton shares his insights on modernizing code security, seamlessly integrating static analysis tools into security workflows, and optimizing existing pipelines.

Key points

What is SAST?

Static application security testing (SAST) is a security check that automatically analyzes your code for errors and weak points without executing it. Unlike regular static analysis, a SAST tool detects potential vulnerabilities, not regular bugs.

Bugs vs Vulnerabilities

It's impossible to predict whether a bug will affect a program's behavior. A bug becomes a potential vulnerability when possible consequences of its presence in the source code are clearly defined. A potential vulnerability turns into a real one when it slips into real software and is exploited by a malicious user.

How does SAST work?

A SAST tool automates the tedious process of manually looking through every line of code. To do this, it parses the program into a syntax tree that captures its structure, then scans that representation for errors and issues a warning when it detects a flaw or potential undefined behavior that may harm the application.
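As a toy illustration of the idea (real SAST tools layer data-flow and taint analysis on top of this kind of tree traversal), here is a Python `ast` walk that flags calls to `eval`, a classic injection risk:

```python
import ast

# Toy SAST-style check: walk the syntax tree and flag calls to eval(),
# which executes arbitrary expressions and is a classic injection risk.
def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers where eval() is called."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            warnings.append(node.lineno)
    return warnings

code = "user_input = input()\nresult = eval(user_input)"
print(find_eval_calls(code))  # [2]: a warning on the eval() line
```

The same shape scales up: parse once, walk the tree, match patterns that encode known weakness classes.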

How to integrate SAST into DevSecOps?

Early detection reduces the cost of fixing an error, so it is crucial to integrate a SAST tool into the development workflow as early as possible.

To check code, a SAST tool needs only a compilable project. Compared to other usual checks in a pipeline, static analysis provides comprehensive coverage of the whole program codebase. It works best when used together with other tests and checks.

Here's an example of how you can integrate static analysis into your pipeline:

build the project > pass credentials to the analyzer > run the analysis > get the analyzer report > extract metrics information from the report > export the metrics file to the Merge Request > move on to other development stages.
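The "extract metrics information from the report" step can be sketched as follows, assuming a hypothetical JSON report format (real analyzer reports, such as SARIF or PVS-Studio's own formats, are richer, but the reduction is the same idea):

```python
import json

# Hypothetical report: a JSON list of warnings, each with a severity field.
# The extraction step reduces the full report to a few numbers that the
# Merge Request page can display at a glance.
def extract_metrics(report_json: str) -> dict[str, int]:
    warnings = json.loads(report_json)
    metrics = {"total": len(warnings)}
    for w in warnings:
        sev = w.get("severity", "unknown")
        metrics[sev] = metrics.get(sev, 0) + 1
    return metrics

report = json.dumps([
    {"id": "V501", "severity": "high"},
    {"id": "V576", "severity": "medium"},
    {"id": "V1001", "severity": "medium"},
])
print(extract_metrics(report))  # {'total': 3, 'high': 1, 'medium': 2}
```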

How to use SAST on legacy code

Static analysis detects various types of errors and can be used as the quality gate mechanism. When you work on a big project that also contains legacy code, you can see thousands of warnings issued by the analyzer. Sifting through them all is a really time-consuming and tedious task. This is where the one-direction approach comes into play.

Learn more: How to introduce a static code analyzer in a legacy project and not to discourage the team

How to use SAST as a quality gate

The PVS-Studio static analyzer lets you run a first analysis and save the issued warnings as a baseline database. Check your project regularly so that the tool can determine whether any changes have occurred: if the number of warnings increases, the changes to the repository are denied.
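The baseline idea can be sketched in a few lines; this is a simplified model of such a gate, using warning identifiers as opaque strings, not PVS-Studio's actual mechanism:

```python
# Simplified quality gate: pass only if no warnings beyond the saved
# baseline appear in the current analysis run.
def quality_gate(baseline: set[str], current: set[str]) -> bool:
    new = current - baseline
    for w in sorted(new):
        print(f"NEW WARNING: {w}")
    return not new

baseline = {"src/a.c:42:V501", "src/b.c:7:V576"}

current = {"src/a.c:42:V501"}              # one old warning fixed
assert quality_gate(baseline, current)      # gate passes

current = baseline | {"src/c.c:3:V1004"}    # a new warning appears
assert not quality_gate(baseline, current)  # gate fails, changes denied
```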

Learn more: Static analysis for pull requests. Another step towards regularity.

Want more?

That's only part of what was covered during the session. If you want to learn more, see the whole webinar, and take a closer look at the slides, follow the link: Integrating SAST into DevSecOps.

You can also sign up for our upcoming webinars, for example: Let's make a programming language. Lexer. During the webinar, the speaker will walk you through how a lexer is actually implemented in code.

We hope to see you there!

The trick to AI coding memory isn't a bigger instruction file — it's smaller, layered knowledge

2026-04-06 22:13:31

Something I see constantly is people trying to solve the "my AI forgets everything" problem by making their instruction file bigger. 500 lines, 1,000 lines, 2,000 lines of CLAUDE.md (or .cursorrules, or whatever your tool uses).

It doesn't work. Research backs this up — AI accuracy drops when context gets too long, and instructions in the middle of large files get ignored entirely. You end up with a bloated file that eats your context window before you've even asked a question.

What actually works is the opposite: small, targeted files loaded only when relevant.

After 1,500+ sessions across 60+ projects, here's the structure I settled on:

Tier 1 — Constitution (~200 lines, always loaded)
Your standing orders. Preferences, hard rules, and a routing table pointing to everything else. "Always use TypeScript strict mode." "Never mock the database in tests." That's it. If your global file is over 200 lines, you're putting things in the wrong place.

Tier 2 — Living Memory (~50 lines, always loaded)
A short list of corrections and gotchas — things the AI keeps getting wrong. "This table stores deltas, not cumulative values." "The VS Code extension doesn't fire CLI hooks." Every entry directly prevents a repeated mistake. This is the tier that shows value fastest.

Tier 3 — Project Brains (loaded per-project)
One file per project with deep context: business rules, schemas, key files, decision log, changelog. Only loads when you're working in that directory. If you have 5 projects, 80% of knowledge is only relevant to one of them — why load it all every time?

Tier 4 — Knowledge Store (queried on demand)
A searchable database (SQLite + FTS5 or just markdown files) for reference data: full schemas, API docs, terminology. The AI searches it when it needs something specific, instead of having it crammed into the instruction file.
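A minimal sketch of such a store, using Python's built-in sqlite3 with FTS5 (table and column names are invented for the example; FTS5 must be compiled into your SQLite build, as it is in most):

```python
import sqlite3

# Tier-4 knowledge store: an FTS5 full-text table the AI queries on demand
# instead of having reference data crammed into the instruction file.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE knowledge USING fts5(topic, body)")
db.executemany("INSERT INTO knowledge VALUES (?, ?)", [
    ("billing schema", "invoices table stores deltas, not cumulative values"),
    ("auth api", "tokens expire after 15 minutes; refresh via /auth/refresh"),
])

# Full-text search pulls in only the entries relevant to the question.
rows = db.execute(
    "SELECT topic, body FROM knowledge WHERE knowledge MATCH ?", ("deltas",)
).fetchall()
print(rows)  # only the billing-schema entry matches
```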

Session Memory (the continuity layer)
A SQLite database that logs what happened in each conversation. At the start of a new session, the AI queries it for a project briefing — recent work, decisions, open issues. No more "where were we?" dance.
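One possible shape for this layer (schema and names are invented for the example):

```python
import sqlite3
import time

# Session memory: every session appends a summary row, and a new session
# opens by querying a briefing of the most recent work on the project.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (project TEXT, summary TEXT, ts REAL)")

def log_session(project: str, summary: str) -> None:
    db.execute("INSERT INTO sessions VALUES (?, ?, ?)",
               (project, summary, time.time()))

def briefing(project: str, limit: int = 3) -> list[str]:
    """Most recent session summaries for a project, newest first."""
    rows = db.execute(
        "SELECT summary FROM sessions WHERE project = ? "
        "ORDER BY rowid DESC LIMIT ?", (project, limit))
    return [r[0] for r in rows]

log_session("shop", "migrated cart API to v2; open issue: flaky checkout test")
log_session("shop", "fixed checkout test; decided to drop legacy coupon codes")
print(briefing("shop")[0])  # the most recent summary
```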

The key insight is the routing table in Tier 1. Instead of stuffing everything into one file, Tier 1 just says "here's where to find X" and the AI loads the right context at the right time.
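For illustration, a Tier 1 routing table might look like this (topics and paths are invented for the example):

```markdown
| Topic                       | Where to load it from              |
|-----------------------------|------------------------------------|
| Billing business rules      | projects/shop/BRAIN.md (Tier 3)    |
| Full DB schema, API docs    | query the knowledge store (Tier 4) |
| Recent work, open decisions | query session memory               |
```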

A couple things I learned the hard way:

  • Budget every tier strictly. 200 lines for Tier 1, 50 for Tier 2. Constraints force quality. When you hit the limit, you're forced to move things to the right tier instead of dumping them in the always-loaded file.

  • Don't store what the AI can derive. File structure, code patterns visible in the source, git history — the AI can read all of that. Only store what it would get wrong without the instruction.

  • If you use AI to summarize sessions, add safeguards. We had a summarizer with no batch limits that tried to process 50 sessions at once, hit an API error, retried the full batch in a loop, and burned through a third of a week's token budget. Batch cap of 5, a processed flag so it never retouches completed sessions, and a lock file to prevent concurrent runs. Learned that one the expensive way.
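Those three safeguards (batch cap, processed flag, lock file) can be sketched like this; names and schema are illustrative:

```python
import os
import sqlite3
import tempfile

BATCH_CAP = 5  # hard limit: never summarize more than 5 sessions per run

def summarize_batch(db: sqlite3.Connection, lock_path: str) -> int:
    """Process at most BATCH_CAP unprocessed sessions; return how many ran."""
    # Lock file prevents two summarizer runs from overlapping.
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return 0  # another run is already in progress; bail out
    try:
        rows = db.execute(
            "SELECT id FROM sessions WHERE processed = 0 LIMIT ?",
            (BATCH_CAP,)).fetchall()
        for (sid,) in rows:
            # ... call the AI summarizer for session `sid` here ...
            # Processed flag: a completed session is never retouched.
            db.execute("UPDATE sessions SET processed = 1 WHERE id = ?", (sid,))
        db.commit()
        return len(rows)
    finally:
        os.close(fd)
        os.remove(lock_path)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY, processed INTEGER)")
db.executemany("INSERT INTO sessions (processed) VALUES (?)", [(0,)] * 12)
lock = os.path.join(tempfile.gettempdir(), "summarizer.lock")
print(summarize_batch(db, lock))  # 5: capped, even with 12 sessions pending
```

Each run makes bounded progress, never reprocesses completed work, and can be retried safely after a failure.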

This works with Claude Code, Cursor, Copilot, Codex, Aider — anything that reads instruction files. The filenames differ but the architecture is the same.

I wrote up the full system with templates and an automated setup script and put it on GitHub if anyone wants the details: https://github.com/sms021/SuperContext

Happy to go deeper on any of this — the architecture, session memory, how to migrate an existing giant instruction file, whatever.

GitHub Store: Your One-Stop Shop for Open-Source Apps!

2026-04-06 22:09:08

Quick Summary: 📝

GitHub Store is a cross-platform application store that allows users to discover and install open-source software directly from GitHub releases. It supports automatic detection of various executable formats and provides a streamlined one-click installation process with update tracking.

Key Takeaways: 💡

  • ✅ GitHub Store simplifies the discovery, installation, and management of open-source software from GitHub releases.

  • ✅ It offers one-click installation for various binary formats (APK, EXE, DMG, AppImage, DEB, RPM) across platforms.

  • ✅ The application automatically tracks and notifies users about updates for installed GitHub-hosted software.

  • ✅ It provides a familiar, user-friendly app-store style interface for managing open-source projects.

  • ✅ Built with Kotlin Multiplatform and Compose Multiplatform, ensuring a consistent experience on Android and Desktop.

Project Statistics: 📊

  • ⭐ Stars: 10,840
  • 🍴 Forks: 383
  • 🐛 Open Issues: 26

Tech Stack: 💻

  • ✅ Kotlin

As developers, we all love open-source software. GitHub is a treasure trove of incredible projects, but let's be honest, finding, downloading, and keeping track of updates for all those cool applications can sometimes feel like a scavenger hunt. You navigate to a repository, dig through the releases section, figure out which file to download for your OS, and then manually install it. And repeat that process every time there's an update. Sound familiar? That's exactly the problem GitHub Store swoops in to solve.

Imagine having an app store, but exclusively for the fantastic projects hosted on GitHub. That's precisely what GitHub Store delivers. This brilliant cross-platform application simplifies the entire lifecycle of discovering, installing, and managing open-source software from GitHub releases. It transforms the often-clunky process into a smooth, familiar, app-store-like experience.

So, how does this magic happen? GitHub Store intelligently scans GitHub repositories and automatically detects installable binaries. Whether you're on Android and need an APK, on Windows looking for an EXE, or on macOS needing a DMG, it identifies the correct file format. What's even better is the one-click installation feature. No more manual downloads and installations; just a single tap or click, and you're good to go. Beyond initial installation, it keeps an eye on your installed applications, tracking updates and notifying you when a new version is available. This means your favorite open-source tools are always up-to-date without you having to lift a finger.

From a developer's perspective, this project is a game-changer. It's built using modern technologies like Kotlin Multiplatform and Compose Multiplatform, making it available seamlessly across Android and Desktop platforms. This not only showcases the power of these frameworks but also means developers can benefit from this tool regardless of their primary development environment. It frees up valuable time that would otherwise be spent on administrative tasks like package management, allowing you to focus on what you do best: coding. Think about the sheer convenience of having all your essential GitHub-hosted tools, from utility apps to dev environment enhancements, managed in one central, easy-to-use interface. It truly streamlines your workflow and makes exploring the vast open-source ecosystem a joy rather than a chore.

Learn More: 🔗

View the Project on GitHub


The Axios Supply Chain Attack: What Happened, How to Check, and What to Do Next

2026-04-06 22:09:00

Two malicious versions of Axios were published to npm on March 31, 2026, hiding a dependency that deployed a Remote Access Trojan to developer machines and CI/CD servers. Here is what happened, how the attacker covered their tracks, and exactly what to do if your environment was exposed.

On March 31, 2026, two versions of Axios were pulled from npm after security researchers confirmed they contained a hidden dependency that deployed a Remote Access Trojan to any machine that ran npm install during a window of under three hours. Axios, a JavaScript HTTP client with over 100 million weekly downloads, had been poisoned. The malicious code inside Axios itself was zero lines. The weapon was a dependency that nobody invited, delivered through a mechanism every JavaScript developer trusts: the npm install lifecycle.

What a Supply Chain Attack Is

A supply chain attack targets the tools and dependencies developers use to build software rather than the deployed application itself. Instead of trying to exploit a production server directly, the attacker inserts malicious code into a package that developers install as part of their normal workflow. Every npm install runs with implicit trust that the resulting dependency tree is safe. Supply chain attacks exploit that trust by corrupting a link in the chain well before the code reaches any deployment environment. Because the attack executes at install time on the developer's own machine or CI/CD runner, it can access secrets, credentials, and file system contents that production servers would never expose.

Axios became a target because of its reach. A package downloaded 100 million times per week appears in frontend frameworks, backend services, and enterprise codebases across every industry. A successful compromise of even a two-hour window touches hundreds of thousands of installs. The axios attack was not the first npm supply chain incident, but the scale of the target, the operational sophistication of the execution, and the completeness of the self-cleanup make it one of the most significant ever documented.

How the Attack Was Staged: A Timeline

The operation shows clear evidence of deliberate preparation. Forensic analysis by StepSecurity traced the groundwork back more than 18 hours before any malicious Axios version appeared on npm. The attacker executed each step in sequence, with each one making the next harder to detect.

  1. March 30, 2026, early morning: An attacker-controlled account published a package called plain-crypto-js@4.2.0 to npm. This version was a clean, bit-for-bit copy of the legitimate crypto-js library. No malicious code was present. Its sole purpose was to establish publishing history for the account so it would not trigger "brand-new account" warnings in security scanners during later steps.

  2. Later that same day, late evening: The same account published plain-crypto-js@4.2.1. This version introduced two new files: an obfuscated JavaScript dropper named setup.js, and a pre-staged clean replacement file used for anti-forensic cover after execution. It also added a postinstall hook to the package manifest pointing to setup.js. Socket's automated scanner flagged this version as malicious within six minutes of publication.

  3. March 31, 2026, 12:21 AM UTC: The primary Axios maintainer account, which the attacker had compromised, published axios@1.14.1 to npm. The only difference from the previous clean version (1.14.0) was a single addition to package.json: plain-crypto-js listed as a runtime dependency. No corresponding commit or release tag exists in the Axios GitHub repository for this version. Every legitimate Axios release uses npm's OIDC Trusted Publisher mechanism, tying each publish cryptographically to a verified GitHub Actions workflow. The malicious version had no such verification.

  4. 39 minutes later: axios@0.30.4 was published using the same method. Both the 1.x and 0.x release branches were now compromised. Any project using a caret range on either branch would pull the poisoned version on its next npm install.

  5. Approximately 3:15 AM UTC: npm unpublished both malicious Axios versions and placed a security hold on plain-crypto-js. Between the first malicious publish and the removal, less than three hours elapsed.

What the Dropper Did Once Installed

When a developer ran npm install and resolved either malicious Axios version, npm automatically installed plain-crypto-js@4.2.1 as a transitive dependency, then executed the postinstall hook, launching setup.js. The dropper used a two-layer obfuscation scheme to hide its strings from static analysis tools: module names, the command-and-control server address, shell commands, and file paths were all encoded behind a combination of reversed Base64 and an XOR cipher keyed to a string embedded in the script.
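To make the scheme concrete, here is an illustrative model of reversed-Base64-plus-XOR obfuscation with made-up strings; this reconstructs the described technique, and is not the dropper's actual code:

```python
import base64

# Two-layer string obfuscation as described: XOR with an embedded key
# underneath, Base64 reversed on the outside. Key and payload are made up.
def obfuscate(plain: str, key: str) -> str:
    xored = bytes(b ^ ord(key[i % len(key)])
                  for i, b in enumerate(plain.encode()))
    return base64.b64encode(xored).decode()[::-1]

def deobfuscate(blob: str, key: str) -> str:
    xored = base64.b64decode(blob[::-1])
    return bytes(b ^ ord(key[i % len(key)])
                 for i, b in enumerate(xored)).decode()

blob = obfuscate("c2.example.invalid:8000", "k3y")
print(blob)                      # looks like noise to a naive string scanner
print(deobfuscate(blob, "k3y"))  # the original string, recovered at runtime
```

A plain `strings`-style scan of the script sees only the scrambled blob; the cleartext exists only in memory after decoding.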

Once the strings were decoded at runtime, the dropper detected the operating system and routed to a platform-specific payload delivery path. On macOS, it wrote an AppleScript to a temporary file, executed it silently, and instructed it to download a compiled binary to /Library/Caches/com.apple.act.mond, a path deliberately constructed to resemble a legitimate Apple system daemon. On Windows, it located PowerShell, copied it to a path inside Program Data under the name wt.exe (mimicking Windows Terminal), then used a hidden VBScript wrapper to fetch and run a PowerShell payload without opening any visible window. On Linux and other Unix-like systems, it fetched a Python script, saved it to the /tmp directory, and launched it as a background process orphaned from the npm process tree.

Supply chain attacks insert malicious code at the dependency level, where it runs before reaching any production environment.

All three platform payloads contacted the same command-and-control server at sfrclak.com on port 8000. The server returned a second-stage Remote Access Trojan tailored to each platform. The macOS RAT was captured and reverse-engineered by researchers before the server went offline. It was a fully functional C++ binary that fingerprinted the victim system, collected running processes and directory listings, executed arbitrary shell commands, deployed further payloads, and beaconed back to the attacker every 60 seconds. Each platform variant sent a distinct identifier in its POST body so the server could route the correct second-stage payload.

“There are zero lines of malicious code inside Axios itself. Both poisoned releases inject a fake dependency whose sole purpose is to run a postinstall script that deploys a cross-platform remote access trojan.”

— StepSecurity, March 2026

After launching the platform-specific payload, setup.js performed three cleanup steps in sequence: it deleted itself, deleted the malicious package.json (which contained the postinstall hook), and replaced it with the pre-staged clean version that reported version 4.2.0 rather than 4.2.1. Running npm audit after the fact would raise no flags. Inspecting the package directory in node_modules would show what appeared to be a clean, ordinary package. The only forensic indicator that survived the cleanup was the presence of the plain-crypto-js directory itself, because that package has never appeared as a dependency in any legitimate Axios release.

The Windows payload left one additional persistent artifact: the copied PowerShell binary at %PROGRAMDATA%\wt.exe. This file would survive dependency reinstalls and system reboots. Even after removing plain-crypto-js and installing a clean version of Axios, wt.exe would remain on the machine as long as the operating system was not reformatted.

How to Determine Whether You Were Affected

The malicious versions were live between approximately 12:21 AM and 3:15 AM UTC on March 31, 2026. Any environment where npm install ran during that window and could have resolved axios@1.14.1 or axios@0.30.4 through a caret or wildcard range warrants a full investigation. Work through the following checks in order.

  1. Search your dependency lockfiles. Open package-lock.json, yarn.lock, or pnpm-lock.yaml in every project that uses Axios and search for references to axios 1.14.1, axios 0.30.4, and the string plain-crypto-js. Any match in a lockfile means the compromised version was resolved during an install. If your repositories live on GitHub, the code search interface lets you scan your entire organization's lockfiles at once.

  2. Check the installed node_modules directory. In any project where you suspect the package was installed, look inside node_modules for a folder named plain-crypto-js. Its presence confirms the dropper ran, regardless of the version number reported inside the folder. Because the dropper replaces the manifest with a downgraded version number as part of its cleanup, the folder's existence is a more reliable indicator than any version string.

  3. Review CI/CD pipeline logs. Identify every npm install or npm ci run that executed between 12:21 AM and 3:15 AM UTC on March 31. Search those logs for plain-crypto-js appearing in the dependency resolution output. If you have outbound network monitoring on your CI runners, look for connections to the domain sfrclak.com or the IP address 142.11.206.73 on port 8000.

  4. Check developer machines for RAT artifacts. The dropper writes a payload file at a predictable location on each operating system. On macOS, look for a file named com.apple.act.mond inside the Library/Caches directory. On Windows, look for a file named wt.exe inside the Program Data directory. On Linux, look for a file named ld.py inside the tmp directory. Finding any of these files confirms that the dropper reached its second stage and established a RAT on the machine.

  5. Search corporate network and DNS logs. If your organization runs DNS filtering, a proxy, or a firewall with logging, search for any queries or connections to the domain sfrclak.com. A hit from a developer machine or CI runner in the early hours of March 31 confirms that the RAT dropper contacted the attacker's server successfully.
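Checks 1 and 2 above can be automated with a short script. This sketch searches only for the plain-crypto-js indicator (grep your lockfiles separately for the axios 1.14.1 and 0.30.4 version strings):

```python
from pathlib import Path

# Scan a directory tree for (1) lockfiles that reference plain-crypto-js
# and (2) the dropper package directory inside node_modules.
LOCKFILES = ("package-lock.json", "yarn.lock", "pnpm-lock.yaml")

def scan(root: str) -> list[str]:
    hits = []
    root_path = Path(root)
    for name in LOCKFILES:
        for lock in root_path.rglob(name):
            if "plain-crypto-js" in lock.read_text(errors="ignore"):
                hits.append(f"{lock}: lockfile references plain-crypto-js")
    # The folder's presence is more reliable than any version string,
    # because the dropper rewrites its own manifest during cleanup.
    for d in root_path.rglob("node_modules/plain-crypto-js"):
        hits.append(f"{d}: dropper package directory present")
    return hits

for hit in scan("."):
    print(hit)
```

Run it from the parent directory of your checkouts; any output at all warrants the full investigation described below.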

What to Do Based on What You Found

The Compromised Version Appeared Only in Source Files

If you found the version in a lockfile or branch that was never actually installed in a live environment, the exposure is limited to source. Update your version specification to a known-safe release on your release line; the previous clean 1.x version was 1.14.0. Add an overrides or resolutions block to your package.json to prevent future installs from resolving to the compromised versions through a transitive dependency. Delete the node_modules/plain-crypto-js directory if it exists, run a clean install, and regenerate your lockfile. Review any open pull requests that reference the compromised versions and close them without merging, even if CI passed. The CI runner may have executed the dropper on those branches.
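As a sketch of the overrides mechanism: npm (8.3+) reads an `overrides` block in package.json, while yarn uses `resolutions`. The pinned version here is illustrative; pin whichever release you have verified as safe:

```json
{
  "overrides": {
    "axios": "1.14.0"
  }
}
```

After adding the block, delete node_modules and the lockfile, reinstall, and confirm the resolved version with `npm ls axios`.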

The Package Was Installed on an Ephemeral CI Runner

GitHub-hosted runners and similar ephemeral environments are destroyed after each job executes, so the runner itself is not a persistent threat. The concern is the credentials that were accessible to the workflow during the compromised run. Rotate every secret that was injected into that environment: npm tokens, cloud provider access keys and session tokens for AWS, GCP, and Azure, SSH keys, API keys stored as environment variables, and any values sourced from the repository's secrets store. After rotating, review the access logs for each service those credentials could reach and look for activity in the window following the compromised install.

The Dropper Ran on a Developer Machine or Self-Hosted Runner

Treat any machine where you confirmed the dropper ran as fully compromised. A loaded Remote Access Trojan can execute arbitrary commands, deploy additional payloads, exfiltrate files, and establish persistence mechanisms beyond the initial install. Local inspection alone cannot determine what the attacker accessed, exfiltrated, or modified. The response path has four steps: isolate the machine from the network immediately; document every credential that was accessible on that machine before wiping it, including API keys, SSH keys, cloud access tokens, npm tokens, database connection strings, and any .env files across local projects; reformat the operating system and rebuild from a known-good baseline; rotate every documented credential from a separate, clean machine before reconnecting anything to internal services. Review the access logs for those services covering the period during and after the compromise window.

Hardening Your Workflow to Limit Exposure Going Forward

The axios attack exploited a detection window measured in hours. Several controls at the package manager level narrow or close that window for future attacks. Most major JavaScript package managers now support a minimum release age policy, a setting that prevents newly published packages from being automatically resolved until they have been live for a configured number of days. This policy would have blocked plain-crypto-js@4.2.1, which was published only hours before the compromised axios versions appeared. The documentation for npm, pnpm, yarn, and bun each covers this setting directly. Running installs with the ignore-scripts flag in CI/CD pipelines prevents postinstall hooks from executing at all, removing the specific mechanism the dropper relied on. Combining a minimum release age policy with script suppression in CI provides two independent controls against this class of attack.
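As a concrete starting point, suppressing install scripts is a one-line npm config; the exact name of the release-age setting varies by package manager (for example, pnpm calls it `minimumReleaseAge`), so check your tool's documentation:

```ini
# .npmrc used by CI runners: lifecycle scripts, including postinstall, never run
ignore-scripts=true
```

The same effect is available per-invocation with `npm ci --ignore-scripts`.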

The Agent Harness: Turning AI Slop Into Shipping Software

2026-04-06 22:08:30

Thesis: You can use AI coding agents on real production codebases and get predictable, high-quality results if you treat the agent like a junior engineer who needs guardrails, not a magic wand.

Audience: Mid-level and senior software engineers curious about AI-assisted development but skeptical of the hype.

Series Arc

The story follows a real Laravel + React codebase over ~3 months and ~258 commits from a legacy monolith with no tests to a well-structured application with automated quality gates, a React SPA migration in progress, and an AI agent that reliably ships production code with minimal supervision.

Stage 1: The Foundation — Tests First, Everything Else Second

Post 1: "Before You Let an Agent Touch Your Code, Write the Tests"

Why tests are the prerequisite for AI-assisted development, not an afterthought. How wrapping a legacy codebase in characterization tests creates the safety net that makes everything else possible. Covers the move from SQLite to MySQL in tests, the UserFactory pattern, and why test infrastructure is the highest-leverage investment.

  • The codebase before: no tests, no linting, no CI
  • Why tests come first (they're the reins, not the saddle)
  • Characterization tests for legacy code
  • The UserFactory facade pattern
  • TDD as a communication protocol with agents

Post 2: "Linting, Static Analysis, and the Pre-Commit Hook That Saved My Sanity"

Adding Pint, Psalm, Prettier, ESLint, and pre-commit hooks, not as busywork, but as machine-checkable standards the agent can verify its own work against. Why "make lint" is more important than code review when an agent is writing the code.

  • The tooling stack: Pint, Psalm, Prettier, ESLint, TypeScript
  • Pre-commit hooks as automated code review
  • CI as the final gate
  • Why agents need checkable standards, not style guides
  • The compound effect: each tool narrows the failure space

Stage 2: The Refactoring — Making Change Safe

Post 3: "Traits to Services: Refactoring for Testability (and for Agents)"

How extracting six traits into service classes with contracts created clean boundaries the agent could work within. The strategy: plan all six, execute one at a time, keep the app running in production throughout.

  • The trait problem: global state, hidden dependencies, untestable
  • Contract-first design: interfaces before implementations
  • The extraction sequence (Chat, CRM, OCR, Document Conversion, External API, Calculator)
  • Each extraction behind enough abstraction to keep production running
  • Why clear boundaries help agents more than documentation

Post 4: "Actions, Policies, and the Art of Obvious Code"

Extracting controller logic into Action classes with Result DTOs, and adding Laravel Policies for authorization. Why making the architecture explicit and boring is the best thing you can do for an AI agent.

  • Fat controllers → thin controllers + Actions
  • The Action pattern: one execute(), one Result DTO
  • Laravel Policies for authorization (replacing inline checks)
  • Role-scoped query builders
  • How this accidentally created the perfect migration bridge (web → API)

Stage 3: The Migration — Two Frontends, One Source of Truth

Post 5: "No Big-Bang Rewrites: Running Two Frontends Without Losing Your Mind"

The strategy for migrating from Blade to React without ever stopping feature delivery. The two-frontend architecture, environment gating, and the interim wrapper pattern that lets SPA pages ship to production inside legacy Blade shells.

  • Why big-bang rewrites fail (and what to do instead)
  • The two-path architecture: Legacy (Blade) + SPA (React)
  • Environment gating: SPA only in local/staging/test
  • The Interim wrapper pattern: SPA components inside Blade shells
  • Feature flags and analytics for gradual rollout
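
The gating logic is deliberately boring. A sketch in TypeScript of the decision it has to make (environment names follow the bullets above; the function itself is an illustration, not project code):

```typescript
// Environments where the SPA path is allowed to serve at all.
const SPA_ENVIRONMENTS = new Set(["local", "staging", "test"]);

type Frontend = "spa" | "blade";

// Two gates in order: environment first (production always gets legacy
// Blade), then a per-route feature flag for gradual rollout.
function resolveFrontend(env: string, featureFlagOn: boolean): Frontend {
  if (!SPA_ENVIRONMENTS.has(env)) {
    return "blade";
  }
  return featureFlagOn ? "spa" : "blade";
}
```

The interim wrapper is the same idea one level down: when the answer is "spa", the React component still renders inside the legacy Blade shell until the surrounding chrome migrates.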

Post 6: "Trunk-Based Development with Short-Lived Branches"

How trunk-based development, conventional commits, small PRs, and CI/CD create the fast feedback loop that makes AI-assisted development practical. Why the branch should live for hours, not days.

  • Trunk-based development: why and how
  • Conventional commits as a communication protocol
  • The CI pipeline: build → lint → test → deploy
  • Forge deployment via webhook
  • Small batches, continuous integration, always releasable
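
A communication protocol can be enforced mechanically. A minimal validator sketch in TypeScript (the type list follows the Conventional Commits convention; whether the project allows exactly these types is an assumption):

```typescript
// Commit types accepted by this sketch (an assumption, not the project's list).
const COMMIT_TYPES = ["feat", "fix", "docs", "refactor", "test", "chore", "ci"];

// Matches "type(optional-scope)!: subject", e.g. "feat(auth): add SSO".
const CONVENTIONAL = new RegExp(
  `^(${COMMIT_TYPES.join("|")})(\\([a-z0-9-]+\\))?(!)?: .+`,
);

function isConventional(message: string): boolean {
  return CONVENTIONAL.test(message);
}
```

Wired into a commit-msg hook or a CI step, this turns the commit convention into one more machine-checkable standard the agent can verify itself against.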

Stage 4: The Harness — From Micromanaging to Managing

Post 7: "In-the-Loop to On-the-Loop: How I Stopped Micromanaging My AI Agent"

The turning point. Moving from approving every line to trusting the guardrails. What "on-the-loop" means in practice, why it requires everything from stages 1–3, and how the CLAUDE.md harness files made it work.

  • In-the-loop: reviewing every line, giving real-time direction
  • The frustration: micromanaging defeats the purpose
  • On-the-loop: setting direction, reviewing output, trusting guardrails
  • Why this only works with tests + linting + CI + clear architecture
  • The feedback loop: harness → agent output → review → update harness

Post 8: "Building the Agent Harness: Subdirectory CLAUDE.md Files"

The technical deep-dive on the harness system. Why one big instruction file doesn't work, how subdirectory CLAUDE.md files control context loading, and what goes in each one. Includes real examples from the codebase.

  • The context window problem: one big file blows up
  • Subdirectory CLAUDE.md files: lazy-loaded, scoped guidance
  • What each harness file covers (Actions, Services, Tests, SPA, etc.)
  • The harness table: mapping areas to guidance files
  • The feedback protocol: update the harness, reload, re-apply
  • How the harness checks its own work (make lint, make test, make test-js)
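
For a feel of the shape, a subdirectory harness file might look like this (the contents below are invented to illustrate the idea, not the post's actual files):

```markdown
<!-- app/Actions/CLAUDE.md — loaded only when work touches this directory -->

- One public execute() per Action; return a Result DTO, never throw for
  expected failures.
- Authorization belongs in Policies, not inside Actions.
- Every new Action ships with a test; run `make lint` and `make test`
  before committing.
```

Because the file sits next to the code it governs, the agent loads it lazily and the global context stays small.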

Post 9: "The Curator's Role: Managing a Codebase With an Agent"

The philosophical methodology. Your role shifts from writing every line to curating the repository and the agent. How Modern Software Engineering principles (Dave Farley) map perfectly to AI-assisted development. Why every project's harness is different because you're codifying your judgment. The engineer's role in the age of agents.

  • Simplicity wins: nine Markdown files, not a custom AI platform
  • Guardrails first — the case for doing this work up front
  • You're codifying yourself: the harness is your preferences made explicit
  • Every project is different: why generic AI advice falls short
  • The engineer's three roles: curator of design, guardrails, and documentation
  • Modern Software Engineering: optimize for feedback, work in small batches, empiricism over dogma
  • The harness as a living document (it evolves with every review)
  • Results: velocity, quality, confidence

Post 10: "Custom Skills: The End-to-End Workflow Made Executable"

How two custom slash commands turned a repeatable workflow into a consistent, end-to-end process — from Jira ticket to merged PR, with TDD and harness feedback at every step. Why automating the ceremony lets you focus on the judgment calls.

  • The repetition problem: typing the same instructions every session
  • Skills as Markdown files: plain English workflows in .claude/skills/
  • Two skills: /implement-jira-card (from Jira) and /implement-change (ad-hoc)
  • The eight-phase workflow: scope → requirements → plan → branch → TDD → commit → CI review → refactor
  • Eight feedback checkpoints: on-the-loop made concrete and repeatable
  • Harness feedback built into every checkpoint
  • TDD as a structural phase, not a preference
  • Separating ceremony from judgment: automate the sequence, keep the decisions
  • Structure scales, discipline doesn't

Gist Strategy

Each post links to 1–3 GitHub gists showing real code from the project:

  • Harness files (CLAUDE.md examples)
  • Action class + Result DTO pattern
  • Service contract + implementation
  • Test patterns (UserFactory, API tests)
  • CI workflow excerpts
  • Interim wrapper component
  • Pre-commit hook configuration
  • Makefile targets