The Practical Developer

A constructive and inclusive social network for software developers.
RSS preview of the blog of The Practical Developer

What are your goals for the week? #166

2026-02-16 23:28:30

What are your goals for the week?

  • What are you building this week?
  • What do you want to learn?
  • What events are you attending this week?

This Week's Goals.

  • Job Search.
    • Network
    • Apply
  • Project work.
    • Content for side project.
    • Work on my own project.
    • Follow Content & Project Calendar.
  • Blog
    • Make new header for this series.
  • Events.
    • Thursday Virtual Coffee
    • Axe-Con is next week. I need to read talk descriptions and set my agenda.
  • Run a goal-setting thread on Virtual Coffee (VC) Slack.

How I did last week.

  • Job Search.
    • ✅ Network
    • ✅ Apply
  • Project work.
    • ✅ Content for side project.
    • Work on my own project.
    • ✅ Fill Content & Project Calendar.
  • Blog
    • ❌ Make new header for this series.
      • Tried a couple but not what I wanted.
  • Events.
    • ❌ Thursday Virtual Coffee
    • ✅ Had a job coaching seminar at the same time as VC. It was basically about reframing how to think about applications: change "I don't have that skill" to "I can learn that skill."
    • ✅ Thursday Night Dallas Software Developers (Virtual)
  • ✅ Run a goal-setting thread on Virtual Coffee (VC) Slack.

Your turn, what do you plan to do this week?

  • What are you building this week?
  • What do you want to learn?
  • What events are you attending this week?

Cover image is my LEGO photography. Stitch with four arms. He's holding a laptop, phone, cookie, and a mug. He's next to a desk with a CRT monitor and keyboard.

-$JarvisScript git commit -m "edition 166"

🚀 Day 28 – Three-Tier Highly Available AWS Architecture with Terraform

2026-02-16 23:25:27

🏗 Architecture Overview

The project implements a classic three-tier architecture:

Presentation Layer (Frontend)

  • EC2 instances in private subnets
  • Behind an Internet-facing Application Load Balancer
  • Dockerized frontend container

Logic Layer (Backend)

  • EC2 instances in private subnets
  • Behind an Internal Load Balancer
  • Dockerized backend container

Data Layer (Database)

  • Amazon RDS (Multi-AZ enabled)
  • Private subnet deployment
  • Credentials stored securely in AWS Secrets Manager

Additional components:

  • Bastion Host for secure SSH access
  • NAT Gateway for outbound internet from private subnets
  • Auto Scaling Groups across multiple Availability Zones
  • Strict Security Groups & IAM Roles
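
A minimal Terraform sketch of this network layout, with assumed names, CIDR ranges, and Availability Zones (the project's actual code is not reproduced here):

```hcl
# Illustrative sketch only: resource names, CIDRs, and AZs are assumptions.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# Public subnet hosts the internet-facing ALB, bastion host, and NAT Gateway.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

# Private subnets (one per AZ) host the frontend, backend, and RDS tiers.
resource "aws_subnet" "private_app_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.10.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_app_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.11.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# NAT Gateway gives private instances outbound access (Docker pulls, updates).
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

# Private route table sends outbound traffic through the NAT Gateway;
# associate it with each private subnet (only one association shown).
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private_app_a" {
  subnet_id      = aws_subnet.private_app_a.id
  route_table_id = aws_route_table.private.id
}
```

A production layout would repeat the public subnet (and optionally the NAT Gateway) per Availability Zone for full fault tolerance.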

🌐 High-Level Traffic Flow

Internet
  → Internet Gateway
  → External ALB (port 80)
  → Frontend EC2 (Docker container, port 3000)
  → Internal ALB
  → Backend EC2 (Docker container, port 8080)
  → RDS (Multi-AZ)

This ensures:

  • High availability
  • Fault tolerance
  • Secure network segmentation
  • Scalability under load

🔐 Security Architecture

Security was a major focus in this implementation.

🔹 Private Subnets
Frontend, Backend, and RDS are not publicly accessible.

🔹 Bastion Host
Used as a secure jump server to SSH into private instances.

🔹 NAT Gateway
Allows private instances to:

  • Pull Docker images
  • Install updates
  • Access AWS services

without exposing them to inbound traffic.

🔹 Security Groups

  • The external ALB allows HTTP from the internet
  • Frontend instances allow traffic only from the external ALB
  • Backend instances allow traffic only from the internal ALB
  • The database allows traffic only from the backend security group
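
A hedged Terraform sketch of that chain, building on the VPC from the earlier sketch (names and ports are assumptions; egress rules are omitted for brevity, but real configs need them):

```hcl
# External ALB: HTTP from anywhere on the internet.
resource "aws_security_group" "alb_external" {
  name   = "alb-external-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Frontend: only from the external ALB's security group.
resource "aws_security_group" "frontend" {
  name   = "frontend-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 3000
    to_port         = 3000
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_external.id]
  }
}

# Internal ALB: only from the frontend tier.
resource "aws_security_group" "alb_internal" {
  name   = "alb-internal-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.frontend.id]
  }
}

# Backend: only from the internal ALB.
resource "aws_security_group" "backend" {
  name   = "backend-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_internal.id]
  }
}

# Database: only from the backend tier (5432 assumes PostgreSQL).
resource "aws_security_group" "db" {
  name   = "db-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.backend.id]
  }
}
```

Chaining security groups by referencing the upstream group's ID (rather than CIDR ranges) is what keeps each tier reachable only from the tier directly in front of it.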

🔹 IAM Roles
EC2 instances have least-privilege access for:

  • CloudWatch
  • Session Manager
  • Secrets Manager
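
A sketch of the instance-role wiring; the managed policies shown are standard AWS ones, and the exact policies used in the project aren't specified:

```hcl
# Instance role assumed by EC2; attach only what the tier actually needs.
resource "aws_iam_role" "app" {
  name = "app-instance-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# AWS managed policies for the CloudWatch agent and Session Manager.
resource "aws_iam_role_policy_attachment" "cloudwatch" {
  role       = aws_iam_role.app.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}

resource "aws_iam_role_policy_attachment" "ssm" {
  role       = aws_iam_role.app.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# A scoped inline policy granting secretsmanager:GetSecretValue on the DB
# secret would be attached the same way (omitted here).
resource "aws_iam_instance_profile" "app" {
  name = "app-instance-profile"
  role = aws_iam_role.app.name
}
```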

📦 Docker Deployment on EC2

Frontend and Backend applications are:

  • Built as Docker images
  • Pushed to Docker Hub
  • Pulled automatically on EC2 via user-data scripts
  • Started during instance launch

This ensures:

  • Consistent deployments
  • Easy scaling via ASG
  • Reproducible environments
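
A sketch of what the user-data piece might look like in Terraform (image name, ports, and the AMI lookup are illustrative assumptions; the post doesn't include its actual scripts):

```hcl
# Look up a recent Amazon Linux 2023 AMI (assumed base image).
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

resource "aws_launch_template" "frontend" {
  name_prefix            = "frontend-"
  image_id               = data.aws_ami.al2023.id
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.frontend.id]

  iam_instance_profile {
    name = aws_iam_instance_profile.app.name
  }

  # Runs at first boot: install Docker, pull the image, start the container.
  user_data = base64encode(<<-EOT
    #!/bin/bash
    dnf install -y docker
    systemctl enable --now docker
    docker pull example/frontend:latest
    docker run -d --restart unless-stopped -p 3000:3000 example/frontend:latest
  EOT
  )
}
```

Because the ASG launches every instance from this template, any replacement instance comes up with the same container, which is what makes the scaling story reproducible.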

📈 Auto Scaling & Load Balancing

External ALB

  • Internet-facing
  • Health checks on frontend instances
  • Routes traffic dynamically

Internal ALB

  • Handles communication between frontend and backend

Auto Scaling Groups

  • Multi-AZ deployment
  • Configured min, max, desired capacity
  • Scales based on CPU utilization
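
A sketch of the frontend half of this in Terraform (the aws_lb and aws_lb_listener resources are omitted for brevity; sizes and the CPU target are illustrative):

```hcl
# Target group the external ALB forwards to; health checks hit the app port.
resource "aws_lb_target_group" "frontend" {
  name     = "frontend-tg"
  port     = 3000
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/"
  }
}

# Multi-AZ Auto Scaling Group running the frontend launch template.
resource "aws_autoscaling_group" "frontend" {
  name                = "frontend-asg"
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.private_app_a.id, aws_subnet.private_app_b.id]
  target_group_arns   = [aws_lb_target_group.frontend.arn]

  launch_template {
    id      = aws_launch_template.frontend.id
    version = "$Latest"
  }
}

# Target-tracking policy: scale in and out on average CPU utilization.
resource "aws_autoscaling_policy" "frontend_cpu" {
  name                   = "frontend-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.frontend.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```

The backend tier mirrors this pattern behind the internal ALB, just with its own target group, launch template, and security group.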

🗄 Database Configuration

RDS setup includes:

  • Multi-AZ deployment
  • Custom DB subnet group
  • Parameter group configuration
  • Engine version specification
  • Secrets stored in AWS Secrets Manager

During provisioning, an issue occurred due to PostgreSQL parameter incompatibility, which was resolved by adjusting the engine version — a real-world debugging scenario.
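
For reference, a hedged sketch of the RDS piece (engine version, parameter-group family, and the secret name are assumptions; a real project would typically give the data tier its own subnets):

```hcl
# DB subnet group keeps RDS inside private subnets.
resource "aws_db_subnet_group" "main" {
  name       = "app-db-subnets"
  subnet_ids = [aws_subnet.private_app_a.id, aws_subnet.private_app_b.id]
}

# The parameter group family must agree with the engine version; a mismatch
# here is exactly the kind of provisioning error described above.
resource "aws_db_parameter_group" "main" {
  name   = "app-db-params"
  family = "postgres16"
}

# Credentials are read from Secrets Manager rather than hard-coded.
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "app/db-credentials" # assumed secret name
}

resource "aws_db_instance" "main" {
  identifier             = "app-db"
  engine                 = "postgres"
  engine_version         = "16.3"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  multi_az               = true
  db_subnet_group_name   = aws_db_subnet_group.main.name
  parameter_group_name   = aws_db_parameter_group.main.name
  vpc_security_group_ids = [aws_security_group.db.id]

  username = "appuser"
  password = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)["password"]

  skip_final_snapshot = true
}
```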

I Built a Grail Finder That Searches Vinted, eBay, Depop and More at Once

2026-02-16 23:20:52

If you've ever obsessively hunted for a specific item on resale apps, you know the pain.

You want a pair of Nike Dunk Low "Ceramic" in size 10. Or a vintage Carhartt Detroit jacket in navy. Or that one discontinued Arc'teryx shell that everyone pretends they don't want.

So what do you do? You open Vinted. Search. Nothing good. Open eBay. Search. Too expensive. Open Depop. Search. Wrong size. Open Grailed. Vestiaire Collective. Poshmark. Rinse and repeat. Every. Single. Day.

My co-founder Thomas finally snapped. He was tired of checking six apps every morning for the same three items. He looked at me and said: "Why isn't there one search that checks all of them?"

Good question. So I built it.

Meet GrailSearch

GrailSearch is a unified search engine for secondhand fashion. Type in what you're looking for, and it searches across eBay, Vinted, Depop, Grailed, Vestiaire Collective, and Poshmark — all at once, from one page.

No more app-switching. No more repeating the same search six times. Just results, aggregated and sorted.

[Screenshot: GrailSearch]

Why I Built This

I've built dev tools before. CLIs, APIs, things developers use. This is different. GrailSearch is consumer-facing, born out of a real personal frustration.

The resale market is massive — over $200 billion globally — but the experience is fragmented. Every platform is its own silo. Sellers list on one app. Buyers search on another. The best deal might be sitting on Vinted while you're doom-scrolling Depop.

I realized the problem wasn't that good items don't exist. It's that finding them is a full-time job.

How It Works

The core is simple: you search, we fan out to multiple marketplaces in parallel and merge the results.

Under the hood:

  • eBay — Uses the official Browse API. Clean, well-documented, reliable.
  • Vinted — No public API, so we reverse-engineered their internal endpoints. This was the fun part. Their search API is actually pretty good once you figure out the auth flow.
  • Depop, Grailed, Vestiaire, Poshmark — A mix of APIs and scraping, each with their own quirks.

The whole thing is built with Ruby on Rails. I know, not the trendy choice. But Rails lets me move fast, and for a product like this — server-rendered pages, background jobs for alerts, user accounts, payments — it's perfect. No need for a separate frontend framework when Hotwire does the job.

Saved Searches & Alerts

Searching once is nice. But the real power is saved searches.

You save a search — say, "Jordan 1 Bred size 10 under €200" — and GrailSearch checks all six platforms periodically. When a new listing matches, you get an email alert.

This is the feature Thomas actually wanted. He didn't want to check apps. He wanted apps to check for him.

Free tier: 3 saved searches. Enough to track a few grails.

Pro ($8/mo): Unlimited saved searches + email alerts. If you're seriously hunting, it pays for itself the first time you catch a deal before someone else does.

The Technical Bits

For the devs reading this, here's what's under the hood:

  • Rails 7 with Hotwire/Turbo for reactive UI without JavaScript framework overhead
  • Sidekiq for background jobs (running saved searches on a schedule)
  • PostgreSQL for storage
  • Fly.io for hosting (deployed to the CDG region in Paris)
  • Stripe for payments
  • Rate limiting and caching to be respectful to the platforms we're searching

The trickiest part was Vinted. They don't have a public API, and their auth tokens rotate. Getting reliable, consistent results required some creative session management. I won't go into the details here, but if you're curious, the source is on GitHub.

Open Source

Yeah, it's open source. The whole thing.

👉 github.com/mack-moneymaker/grailsearch

I believe in building in the open. If you want to self-host it, fork it, or just poke around the code — go for it. PRs welcome.

What's Next

This is v1. Here's what I'm working on:

  • More platforms — Mercari, Facebook Marketplace, and others
  • Price history — See if an item is a good deal based on past sales
  • Smarter alerts — Filter by condition, seller rating, location
  • Mobile app — Push notifications instead of email

Try It

If you've ever spent 20 minutes checking six apps for the same pair of sneakers, give GrailSearch a try. It's free to search, free to save 3 searches, and if you're a heavy hunter, Pro is $8/mo.

And if you're a dev who's into resale, fashion, or just wants to see how a Rails app talks to six different marketplace APIs — check out the repo.

I built this because I was tired of the grind. Hope it saves you some too.

What items are you hunting for? Drop a comment — I'm curious what people's grails are.

How Claude Code’s creator ships 50–100 PRs per week

2026-02-16 23:20:46

When Boris Cherny, the creator of Claude Code, shared his daily workflow on X, the development community took notice. Not because of some revolutionary technique or complex setup, but for the opposite reason: his approach is remarkably straightforward. And that’s precisely why it works.

If you’re looking to supercharge your development workflow with AI, here’s how one of the people building these tools actually uses them.

The core philosophy: Parallelism over speed

Boris runs approximately 10 Claude sessions in parallel at any given time. Instead of babysitting each one, he relies on system notifications to alert him when human input is needed. This approach transforms AI from a back-and-forth conversation tool into something more akin to having a team of junior developers working simultaneously on different tasks.

The key insight here is understanding that AI assistance doesn’t have to be sequential. You’re not limited to one task at a time. By spinning up multiple sessions, you’re essentially multiplying your capacity to tackle problems.


Opus 4.5 with thinking: Slower tokens, faster results

While many developers gravitate toward faster models for quick iterations, Boris exclusively uses Claude Opus 4.5 with thinking enabled. Yes, it’s slower per token. But here’s the counterintuitive truth: it’s faster overall.

Why? Because it requires far less human steering. When you use a more capable model that can reason through problems, you spend less time course-correcting, clarifying, and re-explaining. The AI gets it right more often on the first try, which means you’re not stuck in revision loops.

Think of it like hiring: you might pay more per hour for a senior developer, but they’ll complete the project in a fraction of the time a junior would need.

The 2,500-token secret weapon

The Claude Code team maintains a single shared Claude.md file, checked directly into their Git repository. Whenever Claude behaves incorrectly or misunderstands something, they add a new instruction to this file.

The surprising part? After continuous refinement, this file sits at just around 2,500 tokens. That’s remarkably concise for a document that encodes the team’s entire workflow, conventions, and common pitfalls.

This approach is brilliant in its simplicity. Instead of re-explaining your coding standards, architectural decisions, and preferences in every new conversation, you build up a knowledge base that travels with your code. It’s version-controlled, it’s collaborative, and it evolves with your project.


Plan first, execute once

Boris starts most sessions in plan mode. He iterates on the plan until it feels solid and well-thought-out. Only then does he switch to autonomous mode.

The result? The task usually gets done in one shot.

This two-phase approach mirrors how experienced developers naturally work. You don’t just start coding immediately. You think through the problem, consider edge cases, map out the architecture, and only then do you write code. By forcing this separation with AI, you ensure the execution phase has clear direction.

Sub-agents for specialized tasks

Rather than using Claude as a monolithic tool, Boris employs specialized sub-agents for specific purposes:

Code Simplifier handles post-generation cleanup, refactoring verbose code into something more maintainable.

Verify App runs end-to-end testing to ensure the generated code actually works in practice.


This division of labor is another stroke of genius. Different tasks require different contexts and goals. By creating specialized agents, you’re optimizing each one for its specific role rather than trying to make a single prompt do everything.

Let Claude verify its own work

Perhaps the most interesting aspect of Boris’s workflow is his strong belief in letting Claude verify its own work. He allows it to use a Chrome extension to open a browser, test the UI, and iterate until the code actually functions correctly.

This creates a feedback loop that’s incredibly powerful. Instead of you manually testing and reporting back what’s broken, the AI can see the results of its work and self-correct. It’s the difference between giving someone directions and letting them use GPS.

The output: 50–100 pull requests per week

With this setup, Boris completes approximately 50 to 100 pull requests per week. Let that sink in. That’s anywhere from 10 to 20 PRs per day.

For context, many developers consider 5–10 meaningful PRs per week to be highly productive. Boris is operating at roughly 10x that rate.

Why the vanilla workflow works

In the comments on his original post, many people pointed out that his workflow is pretty vanilla and straightforward. There’s no exotic prompting technique, no complex orchestration system, no proprietary tooling.

And that’s exactly why it works so well.

The most sustainable, scalable workflows aren’t built on clever hacks or cutting-edge techniques that might break with the next model update. They’re built on solid principles: parallelism, proper planning, specialization, and verification.

Key takeaways for your workflow

If you want to adopt Boris’s approach, here are the core principles to implement:

Run multiple sessions in parallel. Don’t wait for one task to complete before starting another. Let the AI work on several things simultaneously while you focus on high-level orchestration.

Invest in better models upfront. The most capable model with extended reasoning might seem slower, but it saves time by getting things right the first time.

Build and maintain a project-specific instruction file. Create your own Claude.md that captures your conventions, common issues, and preferences. Keep it concise and version-controlled.

Separate planning from execution. Get the plan right before switching to autonomous mode. A solid plan executed once beats a vague idea iterated ten times.

Create specialized agents. Instead of one general-purpose prompt, build focused sub-agents for cleanup, testing, and other recurring tasks.

Enable self-verification. Give your AI the tools to check its own work and iterate without constant human intervention.

The future is already here

What’s remarkable about Boris’s workflow isn’t that it’s revolutionary. It’s that it’s achievable today, with tools that are publicly available. You don’t need to wait for the next model release or some breakthrough in AI capabilities.

The gap between where most developers are and where they could be isn’t technological. It’s methodological. Boris has simply figured out how to structure his work to leverage AI’s strengths while minimizing its weaknesses.

The best part? His approach isn’t proprietary or complex. It’s vanilla. It’s straightforward. And that means you can start using it today.

What aspects of this workflow are you most excited to try?
Have you experimented with running multiple AI sessions in parallel?

Share your thoughts and experiences in the comments.

Why Agent Skills Aren't Called Automatically: An Anti-Pattern in Agent Skills

2026-02-16 23:19:51

Reasons

  • A single context can legitimately map to multiple skills
  • Spec-driven planning does not reason over the available skill space

Source Code Used in the Experiment

To reproduce the behavior, I used a minimal repository containing only two skills.

  • lint
  • planner

A single context can legitimately map to multiple skills

I wanted to create a plan, so I entered "Create a plan to lint." However, this directly triggered the /lint skill.

This happened even though the planner skill description explicitly says:
“Use when the user says things like ‘create a plan’.”

In practice, explicitly calling /planner turns out to be far more reliable.

Spec-driven planning does not reason over the available skill space

When I enter /planner lint, the planner sometimes discovers skills/lint/SKILL.md and incorporates it into the plan. However, this behavior is not guaranteed.

The problem becomes more obvious with inputs like:
/planner lint src directory

In this case, the planner restricts its file exploration to the src directory.
As a result, it does not discover other skills.

The planner then reasons about how to lint in general and produces a plan that ignores the lint skill’s defined procedure entirely.

I stopped using skills implicitly

When I asked Claude to fix the planner so that it considers available skills, it actually worked.

However, I was using a get-shit-done style workflow, where the system assumes that everything belongs in .planning.

At that point, I realized the core issue:

I should not be using skills to encode project-specific behavior.
Those behaviors belong in the spec, not in skills.

How Many Rs Are There Really In Strawberry? AI Is So Stupid

2026-02-16 23:16:12

How many Rs are there in the word strawberry? AI can’t tell you. Apparently. You’ve all seen it. Screenshots, Reddit threads, smug tweets. Models tripping over letters like toddlers. Everyone pointing and laughing. Reassuring stuff.

Wind the clock back a little.

It’s 2023. Image generation is exploding. It’s magical. Also: why does that hand have five fingers and a thumb?

A year later and we’ve uncovered a new, devastating limitation. AI cannot render a wine glass completely full. Half the internet concludes: preposterous technology, case closed.

By 2025 things are truly dire. Models still can’t reliably count the Rs in strawberry. Ask for a seahorse emoji and they spiral into what looks suspiciously like an existential crisis.

These examples don’t matter. Not really.

What’s interesting is how obsessively we return to them.

It Will Never Be Able To Code Though

The memes are obvious if you use AI regularly. But this reflex isn’t limited to casual users. Technical people do it too and often more loudly.

Early 2023: ChatGPT can spit out a half-decent for loop. Sometimes it even answers technical questions correctly. Incredible. But obviously it can’t build an app.

Late 2024: we’ve got basic code-generation tools. Still, no danger. It makes too many mistakes. Barely junior level.

2025: the year of the vibe coder. Suddenly everyone can spin up a website. Sure, it’s riddled with security holes and questionable decisions. So again: no threat. We’ll just clean it up. AI is junk.

For years now, we’ve watched models repeatedly blow past their previous ceilings. Each time, the criticism simply slides sideways to the next obvious limitation.

Reddit is still full of people pointing out how stupid AI is. They’re not wrong. They’re just always late and missing the important part.

Why Is AI Stupid Though?

Before getting philosophical, it’s worth grounding this in reality. These glitches exist for reasons. If you’re building with AI, you need to understand them.

How often have you seen a photograph of a wine glass filled perfectly to the brim? Until recently: almost never. That means the model hasn't seen one either. It's not failing, it's interpolating from a deeply human dataset.

Why do seahorse emojis cause chaos? Because at some point the internet collectively decided a seahorse emoji existed. Reddit talked about it. Joked about it. Imagined it. The model learns that a seahorse emoji is plausible and goes to insert it. Then, mid-generation, it realizes the emoji doesn't exist and starts chasing its own tail ad infinitum.

Why does AI-generated code contain errors? Because it’s trained on Stack Overflow, blogs, gists, half-finished examples and heroic hacks. You didn’t ask it to be secure. You didn’t constrain it. It’s doing exactly what humanity taught it to do.

People say AI is a mirror to the user. It’s also a mirror to humanity… and a lot of what we’re seeing reflected back isn’t flattering.

Why Does It Matter?

Because this isn’t abstract. It has real consequences – for society and for anyone building real products with AI baked in. If you’re developing on top of AI and you don’t understand how it fails, you’re already in trouble.

At Brunelly we assume AI is an intern who found a 20-year-old Stack Overflow answer and ran with it. We prompt heavily, guide explicitly, and still don’t trust the output. Everything passes through multiple agents to surface bugs, performance issues, and security concerns.

The only viable starting point is: it will underperform… so how do we correct it?

But this misunderstanding goes wider than product design.

Stack Overflow is effectively dead. Let that sink in. Once the backbone of developer knowledge, now barely visited. Why? Because ChatGPT gives faster, better, contextual answers.

Music, images, stock photography – already flooded. Half of the lo-fi playlists on Spotify are AI-generated. We just stopped calling it slop.
Remember when everyone complained about AI slop in early 2025? Bad news: it’s still AI. It’s just a lot less sloppy.

Jobs are changing. Trust is changing. Evidence is changing. When you can't trust photos, videos, reviews, or faces, everything downstream shifts with it.

If you’re focused on strawberries, you’re going to wake up one day and wonder when the world quietly re-organised itself.

Why Do We Fixate Though?

Because known failure modes are comforting.

They give us a boundary. Something to point at. Something to laugh at. A place where we still feel safely on top.

Finding a bug in YouTube is annoying. Finding a bug in AI is reassuring.
The problem is that these failures don’t last.

Our mental model of AI already lags reality, and that gap is widening. Even if AI progress stopped tomorrow, it would take years for organisations to fully exploit what already exists. Orchestration is immature. Skills are scarce. Understanding is shallow.

This isn’t about whether LLMs lead to AGI or consciousness. It doesn’t matter. The systems we already have are enough to reshape everything if we actually learn how to use them.

What Does It Mean For Builders?

It means stability is gone. The model you used last month is obsolete. The workaround you wrote last week no longer applies. Every solved edge case is replaced by three new ones.

This isn’t like JavaScript frameworks. This is orders of magnitude faster.
You have to design for an environment that mutates continuously. Trust becomes a UX problem, not a marketing one. AI labels actively reduce confidence.

Textbox-and-send is not a product strategy.

Trust nothing. Convert outputs into constrained state machines. Design experiences that absorb failure gracefully.

We didn’t build Brunelly because AI is magical. We built it because AI is a tool that can be harnessed and nobody else was doing it right. And the orchestrator underneath it evolves almost as fast as the models themselves – because it has to.

And What Does It Mean For All Of Us?

That’s the real question.

I was coding in the 90s during the original internet boom and bust. It wasn’t like this. Code lasted years. Systems were stable. Patterns endured.

This time is different – not because the tech is smarter, but because the pace is relentless.

Laughing at AI’s mistakes is fine. It is funny. But it’s also a distraction.

Assume the world is changing before you notice it.

If you’re building: design for failure, assume the system will outgrow you mid-flight, and plan accordingly.

And maybe stop counting Rs.