The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

How I Shipped 3 Production SaaS Backends in 30 Days Using Claude Code (Without Context Loss Destroying Everything)

2026-02-15 03:08:30

I've been using Claude Code for the last 4 months to build SaaS backends. Love it. Until I don't.

You know the pattern. Day 1: Claude writes beautiful auth logic. You're impressed. Day 3: Ask it to add Stripe webhooks. Day 5: Auth is broken. No idea what changed. Day 7: Context window full. Start new session. Day 8: "Wait, what database schema are we using again?"

Every. Single. Time.

I'd spend more time re-explaining my project than actually building it. The "brilliant colleague with amnesia" metaphor is painfully accurate.

The Context Loss Problem Nobody's Solving

Here's what I kept hitting:

Mid-session drift. Claude would start with async/await, then randomly switch to .then() chains 200 lines later. Why? Context degradation. The model "forgets" earlier patterns as the conversation grows.

Schema amnesia. I'd define a users table with specific columns in message 5. By message 40, Claude's suggesting queries for columns that don't exist.

Security regression. RLS policies carefully set up in Phase 1? Completely ignored when adding features in Phase 3.

The Groundhog Day effect. Close laptop Friday. Open Monday. Spend 30 minutes re-explaining the entire project before Claude can write a single line.

I tried everything the internet suggested:

  • ✗ Longer prompts with full context (hit token limits, quality degraded anyway)
  • ✗ Custom instructions (too vague, didn't persist across sessions)
  • ✗ Separate chats for each feature (lost the big picture, broke dependencies)
  • ✗ Manual "memory dumps" (exhausting, error-prone)

Nothing worked. The fundamental issue is that LLMs have working memory, not long-term memory. They're brilliant in the moment, terrible at maintaining state.

What Actually Fixed It: Multi-Agent Orchestration

I realized the problem isn't the AI. It's the workflow.

Human developers don't keep entire codebases in their heads either. They use documentation. Design docs. Database schemas. API specs. External references that persist.

So I built a system that orchestrates Claude through specialized agents, each with fresh context windows and specific jobs.

The Four Files That Maintain State

1. PROJECT.md - The vision document

  • Problem being solved (plain English)
  • Target users and workflows
  • Core value proposition
  • Success criteria

2. REQUIREMENTS.md - Traceable feature definitions

  • Every requirement has a unique ID (AUTH-01, PAY-02, etc.)
  • v1 scope (must have), v2 scope (future), out-of-scope (won't do)
  • Acceptance criteria for each

3. ROADMAP.md - Phased execution plan

  • Phase 0: Infrastructure
  • Phase 1: Core feature
  • Phase 2: Supporting features
  • Phase 3: Polish
  • Each requirement mapped to specific phases

4. STATE.md - The living memory

  • Completed phases (locked from modification)
  • Current phase (only modifiable code)
  • Database schema (exact DDL)
  • API routes built (paths, methods, business logic)
  • Architectural decisions

These files are sized to avoid context degradation (under 10k tokens each) and serve as a single source of truth for both humans and AI.
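
To make this concrete, here is a minimal sketch of what STATE.md might look like for one of the projects described later in this post; the section names and contents are illustrative, not a prescribed format:

  # STATE.md

  ## Completed phases (locked)
  - Phase 0: Infrastructure (Supabase project, auth wiring, CI)
  - Phase 1: Core feature (metrics ingestion API)

  ## Current phase
  - Phase 2: Supporting features (CSV export, date filtering)

  ## Database schema (exact DDL)
  create table metrics (id uuid primary key, org_id uuid not null, name text not null, created_at timestamptz not null default now());

  ## API routes built
  - POST /api/metrics: validates payload, inserts a data point
  - GET /api/metrics/summary: daily/weekly/monthly aggregates

  ## Architectural decisions
  - RLS on every table, keyed by org_id
  - All route handlers use async/await (no .then() chains)

Because the file stays under the degradation threshold, it can be read by an agent (or a human) at the start of any session to restore the exact project state.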

The Multi-Agent System

Instead of one long Claude conversation, the system spawns specialized parallel agents:

Research agents (4 running in parallel before coding):

  • Stack researcher → best technologies for your domain
  • Features researcher → table stakes vs differentiators
  • Architecture researcher → system design patterns
  • Pitfalls researcher → common mistakes to avoid

Execution agents:

  • Planner → creates verified task plans
  • Executor → runs plans with atomic commits
  • Verifier → tests and auto-debugs
  • Mapper → analyzes existing codebase

Each agent gets fresh context. No degradation. No drift.
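
PropelKit's internals aren't published here, but the core idea (independent prompts, each starting from an empty context, run in parallel) can be sketched with the Anthropic TypeScript SDK. The model id, prompts, and helper function below are illustrative assumptions, not PropelKit's actual code:

  import Anthropic from '@anthropic-ai/sdk';

  const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

  // Each research agent gets its own message list, i.e. a fresh context window.
  async function runAgent(role: string, task: string): Promise<string> {
    const res = await client.messages.create({
      model: 'claude-sonnet-4-5', // illustrative model id
      max_tokens: 2000,
      system: `You are the ${role} research agent. Reply with concise findings.`,
      messages: [{ role: 'user', content: task }],
    });
    // Keep only the text blocks of the response.
    return res.content.map((block) => (block.type === 'text' ? block.text : '')).join('');
  }

  // The four research agents run in parallel; none of them share conversation state.
  const [stack, features, architecture, pitfalls] = await Promise.all([
    runAgent('stack', 'Recommend technologies for a feedback-widget SaaS backend.'),
    runAgent('features', 'List table-stakes features vs differentiators for feedback widgets.'),
    runAgent('architecture', 'Propose a multi-tenant design for feedback ingestion.'),
    runAgent('pitfalls', 'List common mistakes when building embeddable widgets.'),
  ]);

The orchestration state lives in the four files above, not in any single chat, which is what keeps later phases from drifting.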

The Workflow Cycle

1. INITIALIZE
   Describe vision → AI creates PROJECT.md, REQUIREMENTS.md, ROADMAP.md

2. DISCUSS (each phase)
   Shape implementation preferences before committing

3. PLAN  
   Research domain patterns → create verified execution plan

4. EXECUTE
   Run plans in parallel waves with fresh contexts → atomic git commits

5. VERIFY
   User acceptance testing with automatic debugging

Repeat 2-5 for each phase

Critical rule: Completed phases are locked. The AI can only modify code in the current phase. This prevents the "adding payments breaks auth" problem entirely.

Boilerplate-Aware Intelligence

The AI knows what's already built in the boilerplate (auth, Stripe, Razorpay, Supabase, multi-tenancy, emails, admin panel). It only plans what's custom to your domain.

This means:

  • Zero time wiring auth to database
  • Zero time setting up payment webhooks
  • Zero time building admin panels
  • Pure focus on your unique business logic

The Results (Why I'm Sharing This)

Last 30 days, I built 3 production SaaS backends using this system:

Analytics Dashboard (13 hours total, across 4 sessions)

  • Custom analytics schema (metrics, data_points, aggregations)
  • Ingestion API with validation
  • Time-series calculations (daily, weekly, monthly)
  • CSV export with date filtering
  • Now has 8 paying users making $96/month

Feedback Widget (11 hours, 3 sessions)

  • Feedback schema with metadata
  • Widget embedding API (iframe + script tag)
  • Admin CRUD with filtering
  • Email notifications on submission
  • Webhook system for integrations
  • 5 signups in first week

Content Calendar (9 hours, 2 sessions)

  • Content schema with scheduling
  • CRUD API with role-based access
  • Publishing logic with timezone handling
  • Calendar view backend
  • En route to production

All production-ready. All built with AI orchestration. All using persistent state across weeks.

Commands That Run It

After building this system for myself, I packaged it:

/propelkit:new-project

This master command:

  • Asks deep questions about your project
  • Spawns research agents for your domain
  • Creates PROJECT.md, REQUIREMENTS.md, ROADMAP.md
  • Generates phased execution plan
  • Hands you off to phase-by-phase building

Then for each phase:

/propelkit:discuss-phase 1    # Shape your preferences
/propelkit:plan-phase 1       # Research + create execution plan  
/propelkit:execute-phase 1    # Build with parallel agents
/propelkit:verify-work        # Test with auto-debugging

The system maintains STATE.md automatically. Close laptop. Come back days later. Resume exactly where you left off.

PropelKit - The Packaged System

After the third project, I productized it.

What you get:

Production Next.js boilerplate (saves 100+ hours):

  • Auth (email, OAuth, sessions)
  • Stripe + Razorpay payments
  • Supabase (PostgreSQL with RLS)
  • Multi-tenancy (organizations, teams, roles)
  • Credits system (usage-based billing)
  • Email templates (8 pre-built)
  • Admin panel (user management, analytics)
  • 26 AI PM commands

Stack: Next.js 16, TypeScript, Supabase, Stripe, Razorpay

One-time purchase. You own the code. Build unlimited products.

The AI PM uses the exact multi-agent orchestration system described above. Persistent state. Parallel research. Boilerplate-aware. Atomic commits.

Demo: propelkit.dev (watch the AI questioning, research, roadmap generation, and execution)

Why This Approach Works

Context engineering - Separate files under degradation thresholds, not one massive chat

Multi-agent orchestration - Fresh contexts per agent, no drift accumulation

Boilerplate awareness - AI knows what exists, only builds what's custom

Atomic commits - One feature per commit, precision rollback

Phase locking - Completed code stays completed, no random rewrites

Domain research - AI understands your industry before writing code

This isn't just for PropelKit. The principles work anywhere: persistent state files and fresh context windows per task.

What's your experience with AI code context loss? Have you found other systems that work?

Beyond console.log: Building a Robust Observability Pipeline in 2026

2026-02-15 03:07:40

The days of relying solely on application logs to debug complex, distributed systems are over. With microservices architectures and serverless functions becoming the standard, understanding the state of your application requires more than just knowing what happened—it requires knowing where, why, and how it happened across a sprawling infrastructure.

In 2026, observability isn't just about logs; it's about the three pillars: Metrics, Logs, and Distributed Tracing.

The Shift from Monitoring to Observability

Monitoring tells you when a system is broken (e.g., CPU > 90%). Observability tells you why it is broken (e.g., "Service A is slow because Service B is taking 500ms to query PostgreSQL").

An effective observability pipeline must be proactive, not reactive. If you are waiting for a user to report an error before you see it, your observability pipeline has failed.

Designing the Pipeline: The OpenTelemetry Standard

OpenTelemetry (OTel) has emerged as the industry-standard framework for instrumenting, generating, collecting, and exporting telemetry data. By adopting OTel, you avoid vendor lock-in and create a unified, standard data format for your traces and metrics.

  1. Instrumentation: Use OpenTelemetry auto-instrumentation libraries to collect data from your applications without changing your code (a minimal Node.js setup is sketched after this list).

  2. Collection: Deploy an OpenTelemetry Collector as a sidecar or agent. This component is crucial because it decouples your application from the backend monitoring tool.

  3. Backend: Send the data to a backend of your choice (e.g., Grafana Tempo, Honeycomb, Datadog).
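
For a Node.js service, steps 1 and 2 usually amount to one small bootstrap module loaded before the rest of the app. Here is a minimal sketch using the official OpenTelemetry packages; the service name and Collector URL are assumptions about your setup:

  // tracing.ts: load this before any other application module
  import { NodeSDK } from '@opentelemetry/sdk-node';
  import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
  import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

  const sdk = new NodeSDK({
    serviceName: 'checkout-api', // illustrative service name
    traceExporter: new OTLPTraceExporter({
      // Export to the OpenTelemetry Collector, not directly to a vendor backend,
      // so you can swap Tempo/Honeycomb/Datadog without touching application code.
      url: 'http://localhost:4318/v1/traces',
    }),
    instrumentations: [getNodeAutoInstrumentations()],
  });

  sdk.start();

Import this module before your HTTP framework so the auto-instrumentations can patch it; the exporter then sends spans to the Collector, which forwards them to whichever backend you configure.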

Key 2026 Trends: Distributed Tracing & Edge Computing

  • Context Propagation: When a request flows through multiple microservices, you must ensure the same trace ID accompanies it. This allows you to visualize the entire request journey (see the sketch after this list).

  • Edge Functions: With more logic moving to the edge (e.g., Vercel, Cloudflare Workers), your traces must span from the edge function down to your backend APIs, giving a complete picture of latency.
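
Auto-instrumented HTTP clients forward the trace context automatically, but when you hand-roll a request (or push a message onto a queue) you can propagate it explicitly with the OpenTelemetry API. A small sketch; the downstream URL is illustrative:

  import { context, propagation } from '@opentelemetry/api';

  // Inject the active trace context (the W3C traceparent header) into outgoing
  // headers so the downstream service joins the same trace.
  const headers: Record<string, string> = {};
  propagation.inject(context.active(), headers);

  await fetch('https://orders.internal.example/api/orders', { headers });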

Implementing Real-time Alerts

Don't alert on everything. Alert on symptoms, not causes. A high CPU is a cause; a high 5xx error rate is a symptom. Use tools like Prometheus for metrics and Grafana for visualization to set up SLI/SLO (Service Level Indicator/Objective) alerting.

Conclusion

Building a robust observability pipeline takes time, but it is an investment in stability. It turns debugging from a frantic guessing game into a methodical investigation.

bilingual_pdf, an app by @rudifa

2026-02-15 03:06:08

If you are learning another (human) language, you might wish to create bilingual documents where a text in a language you know and its translation into the language you are learning are displayed in two columns, side by side.

bilingual_pdf is a CLI application for Mac, Linux and Windows that creates a two-column bilingual PDF document from your input Markdown document.

As an example, here is the bilingual_pdf project's README.md used as input to bilingual_pdf, and the resulting English + Spanish version, README.en.es.pdf.

By default, bilingual_pdf translates your input automatically using Google Translate. You can also get the resulting translation as a Markdown document, which you can edit and use instead of the automatic translation.

Enjoy!

RepoHealth-AI: Made using Github Copilot CLI

2026-02-15 03:04:13

This is a submission for the GitHub Copilot CLI Challenge

What I Built

RepoHealth AI is a full-stack web application that analyzes any public GitHub repository and gives it a comprehensive health score powered by real data and AI-generated insights.

The idea came from a simple frustration — when you discover a new open source repo, it's hard to quickly judge whether it's well-maintained, properly documented, or worth depending on. RepoHealth AI solves that in seconds.

Here's what it does:

You paste any GitHub repository URL and the app instantly fetches live data from the GitHub API and displays a full dashboard including:

  • Repository Health Score — A calculated score out of 100 based on README quality, license presence, commit frequency, issue activity, and contributor count (a rough sketch of this kind of scoring follows this list)
  • Key Stats — Stars, forks, open/closed issues, watchers, contributors, last commit date, and primary language
  • Commit Activity Chart — Visual representation of commits over the last 30 days
  • Language Breakdown — Pie chart showing language distribution with official language colors
  • Top Contributors — Bar chart of the most active contributors by commit count
  • Issues Overview — Visual breakdown of open vs closed issues
  • AI-Powered Insights — On-demand AI analysis (powered by Google Gemini) that provides a repo summary, key strengths, and prioritized improvement suggestions with High/Medium/Low priority labels
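
The app's exact formula isn't published in this post, so the following is only a rough sketch of what a weighted 0-100 score of this kind could look like; the weights, thresholds, and field names are assumptions for illustration:

  interface RepoSignals {
    readmeLength: number;      // characters in the README
    hasLicense: boolean;
    commitsLast30Days: number;
    openIssues: number;
    closedIssues: number;
    contributors: number;
  }

  // Hypothetical weighted score out of 100; not RepoHealth AI's actual algorithm.
  function healthScore(s: RepoSignals): number {
    const readme = Math.min(s.readmeLength / 2000, 1) * 25;        // README quality (length as a proxy)
    const license = s.hasLicense ? 15 : 0;                         // license presence
    const commits = Math.min(s.commitsLast30Days / 30, 1) * 25;    // commit frequency
    const totalIssues = s.openIssues + s.closedIssues;
    const issues = totalIssues === 0 ? 10 : (s.closedIssues / totalIssues) * 20; // issue activity
    const people = Math.min(s.contributors / 10, 1) * 15;          // contributor count
    return Math.round(readme + license + commits + issues + people);
  }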

The app features a full dark/light theme toggle, responsive design across all screen sizes, smooth animations, and a clean professional UI built to feel like a real developer tool.

Live App: https://repohealth-ai.vercel.app

GitHub Repo: https://github.com/Muhammad-Ahmed-Rayyan/RepoHealth-AI

Demo

🔗 Live Demo: https://repohealth-ai.vercel.app

How to test:

  1. Visit the live app
  2. Paste any public GitHub repository URL (e.g., https://github.com/facebook/react)
  3. Hit the search button and wait for the dashboard to load
  4. Scroll down and click "Generate AI Insights" to get AI-powered recommendations

Screenshots:

Landing Page — Clean search interface with dark theme

Dashboard — Full repository health analysis with charts and stats (six dashboard views)

My Experience with GitHub Copilot CLI

GitHub Copilot CLI was not just a helper in this project — it was essential to the entire build. From the very first line of code to the final deployment, every part of this project was built using Copilot CLI directly in the terminal.

How I used it:

The entire project architecture was scaffolded using Copilot CLI. I described what I wanted — a full-stack app with a React frontend, Node.js/Express backend, GitHub API integration, and AI-powered analysis — and Copilot CLI generated the complete project structure, boilerplate, and initial implementations.

Where it made the biggest difference — Debugging:

The most impressive part of working with Copilot CLI was how fast it helped resolve bugs. For example:

  • When the GitHub API was returning 401 Unauthorized errors, Copilot CLI diagnosed the root cause immediately — the process.env.GITHUB_TOKEN was being read at module import time before dotenv.config() had run. It suggested converting the static axios instance into a factory function that reads the token at runtime. Problem solved in minutes (sketched after this list).

  • When the AI analysis was failing with "Cannot read properties of undefined (reading 'exists')", Copilot CLI traced the issue to a data flow problem — the repo data wasn't being passed correctly to the AI service after separating AI generation into its own triggered flow. It restructured the API call chain and added optional chaining (?.) throughout the AI service.

  • When Google Gemini returned a 404 model not found error, Copilot CLI suggested running a curl command to list available models from the API, then immediately updated the model name to the correct one.
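
The first fix above, reading the token at call time instead of at import time, looks roughly like this; a sketch based on the description rather than the project's exact code:

  import axios, { AxiosInstance } from 'axios';

  // Before: created at import time, before dotenv.config() ran, so the token was undefined.
  // const github = axios.create({
  //   baseURL: 'https://api.github.com',
  //   headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` },
  // });

  // After: a factory that reads the token at runtime, once the environment is loaded.
  export function githubClient(): AxiosInstance {
    return axios.create({
      baseURL: 'https://api.github.com',
      headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` },
    });
  }

  // Usage: const { data } = await githubClient().get('/repos/facebook/react');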

Other highlights:

  • Generated all Recharts chart components with proper dark theme configurations
  • Built the health score calculation algorithm from scratch based on my requirements
  • Helped design the responsive Tailwind CSS layout and card system
  • Implemented the dark/light theme toggle with localStorage persistence
  • Wrote the complete prompt engineering for the Gemini AI analysis section

Working with Copilot CLI felt like having a senior developer pair programming with me directly in the terminal. The speed at which it could read existing code, understand context, and apply targeted fixes was genuinely impressive. Tasks that would have taken hours of Stack Overflow searching were resolved in a matter of minutes.

This project genuinely would not have been completed in the available time without Copilot CLI.

Team Members: @ahmed_waseem_ec0fb5a03620 and @waleed_zaidi_92fd4c9733df

GitHub Copilot CLI in Action

(Screenshots 1–5 of GitHub Copilot CLI in action during the build.)

Destructive Reader LLM — When an Author Gets Tired of Reddit's Gatekeeping

2026-02-15 03:03:40

This is a submission for the GitHub Copilot CLI Challenge

What I Built

I'm a systems administrator who dabbles in programming — and an author with two published novels. I'm working on my third book, and I needed honest, structured feedback on my chapters.

I found r/DestructiveReaders, a Reddit community known for "brutal but loving" literary critique. The concept is exactly what I wanted: direct, specific feedback that doesn't sugarcoat problems but always offers solutions. The reality was different. The community requires extensive karma-building before you can receive a critique — other authors report spending days earning enough credit. And after all that effort, the critiques I read varied wildly in quality.

So I built my own.

Destructive Reader LLM is a Python CLI tool that takes a fiction chapter and delivers structured literary critique in the r/DestructiveReaders style. It uses NVIDIA Nemotron Nano 30B via Ollama — a free cloud model — guided by a carefully crafted system prompt that captures the community's ethos: be brutal, be loving, be specific, always offer a fix.

The critique follows a consistent structure:

  • Opening Hook — one thing that works, the biggest problem, overall take
  • The Big Issues (2-3 max) — quoted from your text, explained, with concrete fixes
  • Reader Journey — where the critic was hooked, lost, confused, or kept reading
  • Quick Fixes — ranked actionable changes with before/after examples
  • What's Working — genuine positives with quoted evidence

This isn't a toy project. I use it on my actual manuscript chapters. The critique below was generated from a chapter of my published novel in 15 seconds.

Demo

GitHub Repository: github.com/aweussom/DestructiveReader-LLM

Running the tool against a chapter from my published novel:

python destructive-reader-llm.py Markdown/01-AWAKENING.md

The generated critique is saved as Markdown alongside the chapter file, ready to reference during revision.

My Experience with GitHub Copilot CLI

I used GitHub Copilot CLI (v0.0.410, running on the free Claude Haiku 4.5 model) as my development partner for the entire build. The whole tool went from idea to working software in a single session.

Step 1: Describe the project and test connectivity

I opened Copilot CLI and described what I needed — a test script to verify I could connect to Ollama cloud and the Nemotron model. Copilot CLI generated a working test_ollama.py on the first attempt. It worked after I corrected the model name from nemotron-3-nano:latest to nemotron-3-nano:30b-cloud.

Step 2: Build the main tool

I gave Copilot CLI a clear spec: read INSTRUCTIONS.md, accept a chapter filename as argument, build a combined prompt, send to Ollama, save the critique as <chapter-name>-critique-<timestamp>.md. Copilot CLI read my instructions file to understand the context, then generated the complete destructive-reader-llm.py — 145 lines covering argument parsing, file loading, prompt construction, API calls, and output saving. It worked on first run.

Step 3: Refine the output

The critique was truncated on console but saved correctly to disk. I asked Copilot CLI to print the full response, add timing, and display the output filename. Two targeted edits, done.

Step 4: Evaluate the results

Here's where it got interesting. I asked Copilot CLI to read the original chapter, the instructions, and the generated critique — then tell me whether the Nemotron critique was any good and how it would compare to Claude Sonnet 4.5. Copilot CLI gave a thoughtful assessment: Nemotron nails the brutal-but-constructive voice but misses some thematic subtlety that a larger model would catch. Its recommendation — stick with Nemotron for the punchy r/DestructiveReaders style, consider a second model for deeper thematic analysis.

Overall impression

The free tier Haiku 4.5 model in Copilot CLI was more than capable for this kind of structured code generation. Copilot handled the boilerplate and let me focus on what actually matters — the critique prompt and the workflow design. From first prompt to working tool: one session, no debugging required beyond correcting a model name.

Going Back to My School: Where My Habits, Goals, and Dreams Were Born

2026-02-15 02:51:19

Today, I went to my school for the 81st Annual Day celebration — the same school where I studied from 1st to 8th standard. Walking into that place after so many years felt like stepping back into my childhood.

I met my teachers — my English teacher, my 5th standard teacher, science teachers, and our headmistress. I also met my school friends from classes 1 to 8. Seeing everyone again brought back so many memories.

I met Muthammal, a classmate who studied with me from 6th to 8th standard. She is now married and has two children. Life has changed so much.
One of the most emotional moments was seeing the tree we planted together during our 6th–8th standard days. That tree is still there, growing strong — just like the memories we created in that school. I also met Anand after a long time.

Some buildings are no longer there, but many things remain exactly the same as they were during our school days. As we walked around, we spoke about our memories — how we participated in annual day programs, how excited we were to stand on stage, and how proud we felt when we won prizes.

Goals I Set as a School Student

During my school days (1st to 8th standard), I always had three goals:

  1. To win a sports prize
  2. To get a no-leave prize
  3. To become a rank holder (1st rank)

I set these goals seriously during my 6th and 7th standards, but I couldn’t achieve them then. I felt disappointed — but I didn’t give up.

In 8th standard, everything changed.

I won three prizes in one year:

  1. 1st prize in long jump

  2. No-leave prize

  3. 1st rank

From 1st to 8th standard, I studied only in this school. I don’t know if anyone else achieved all these together — maybe someone did, maybe not — but for me, it was a huge personal victory.

I remember telling myself:

“Before I leave this school, I must win all three prizes.”

And it really happened.

Lessons That Stayed in My Blood

During my last few school days, I stopped talking to some boys, and the teachers tried to convince me to speak to them again.
I also used to come late regularly — and honestly, even today, these habits are still with me. Some things never change; they become part of our blood.

But this school also gave me discipline, confidence, and short-term goal setting — habits that still guide my life.

Teachers Who Changed My Life

We used to call our English teacher “Blade” because her advice was always sharp and honest. One day she said:

“I am a sharp blade, not an unsharp blade.”

That line is still fresh in my mind.

When I spoke with my teachers this time, they told me something very special:

My 5th standard teacher (Vassuki) tried to recall my name through this:

I was from the first batch of students who won prizes through the NMMS scholarship, and many students were selected after that.
At first, my teacher couldn’t remember my name — but suddenly she remembered me while talking. That moment touched my heart deeply.

Because of the NMMS scholarship, I was able to buy the mobile phone I am using today. That support changed my life.

When I won, the school even put up a banner in the village. Seeing that banner pushed me strongly to believe:

“One day, I must give something back — especially through education.”

Proud to Be a Government School Student

Maybe because I studied in a government school, I always feel a strong responsibility to guide others. Even today, I believe that my future — and my children’s future — should stay connected to government education, just like this school where I studied.

One sentence my teacher once corrected still stays with me:

“You should say ‘at least’, not ‘as atleast’.”

I will never forget that.

A Funny Memory I’ll Never Forget 😄

The day after Annual Day, I came late again — but still won three prizes.
One of my prizes got stuck to another prize that had broken, and everyone tried to split them apart. In the end I split them myself by wedging the prize into the gap between the window’s iron bars; I pulled it and took it.
And my English teacher said,

"athu avanoda price avan vitruvana atha" (that’s his prize; would he ever let it go?).

That moment still makes me smile.

Little Details I’ll Never Forget

My birthday is on 17th September, and my English teacher’s birthday is on 18th September. That closeness always felt special to me.

My English teacher taught me something very important:

Present: read (pronounced "reed")
Past Tense: read (pronounced "red")
Past Participle: read (pronounced "red")..

Past Tense: Yesterday, I read (red) that book.
Past Participle: I have read (red) that book already.

Small corrections like this shaped my confidence in English, and they still help me with my college papers and interviews.

One day, I thought I had a fever and sat quietly in class. My English teacher suddenly asked everyone:

“What is your aim?”

Everyone went to the front and said something.
When my turn came, I went up and said "............." Do you remember, teacher?
After I answered, you told everyone to clap. Everyone asked why and what I had said, and you explained it.

Discipline That Shaped Me

During my school days, I once used bad words at home. My mother came to school and complained to the teacher.

In front of the whole class, my teacher said:

"everyone should call him as "..bad person..""

That moment was painful and embarrassing — but it changed me forever. It taught me self-control and respect.

A Flashback From College Life

When I moved to college, I spoke freely with everyone and did many things. There, bad words were spoken very casually by many people.

One day, my friend George told me:

“I like you because even though you mix with everyone, you never use bad words.”

That was a very special moment for me. I realized that my school and teachers had quietly shaped my character.

Teachers I Still Miss

I miss my Maths teacher and 4th standard teacher — I’m sorry I’ve forgotten her name.
I also miss Saraswathy teacher and Latha teacher, and the drawing sir, the music teacher, and the sewing and tailoring teacher.

About Our Music Teacher

One of my friends, Anand, recently met our music teacher by chance.

Even after so many years, she recognized him by his voice and asked him,

“How are you?”

She may be blind in her eyes, but not in her heart.

That moment reminded me how deeply teachers touch our lives — sometimes in ways that go beyond sight and time.

In school, we always used to call them just “Teacher, Teacher”. Remembering teachers’ names later feels like a big task — but the values they taught us are unforgettable.

A Memory from My 8th Standard – Writing a Poem About My Mother

During my 8th standard, our teacher gave us an English assignment to write a poem about our mother.

Every Sunday, a newspaper used to come to our house. In that paper, Dinamalar, there was a section where poems were published. I think it was a Mother’s Day special, and there they had printed a poem about a mother.

I read one poem about a mother in that newspaper, and I liked it very much. I copied that poem from the paper onto my assignment, and I also cut that piece out of the paper and pasted it onto my assignment.

Later, the teacher asked everyone to read their poems in front of the class. When my turn came, I read it.

After listening, the teacher said,

“I feel like I have read this poem somewhere before.”

I became a little scared. I thought,

“Maybe they think I copied it, and they may reduce my marks.”

But the teacher smiled and said,

“It’s okay. At least you read it well and understood it.”

Then she appreciated me in front of the whole class, and everyone clapped. That moment made me very happy.

After that day, I started going regularly to the nearby mud-pot shop to read newspapers. I thought,

“Maybe this will help me for my next assignment.”

That habit stayed with me for a long time.

Prize Distribution Function – A Moment of Surprise

During the prize distribution function, I was truly shocked.

I could hear the names of so many different games being announced — games that didn’t exist during our time.

I also noticed that the new generation students’ names sounded very different from ours.

Listening to all this made me realize how times change — new games, new names, new generations — but the spirit of school celebrations remains the same.