2026-02-15 03:08:30
I've been using Claude Code for the last 4 months to build SaaS backends. Love it. Until I don't.
You know the pattern. Day 1: Claude writes beautiful auth logic. You're impressed. Day 3: Ask it to add Stripe webhooks. Day 5: Auth is broken. No idea what changed. Day 7: Context window full. Start new session. Day 8: "Wait, what database schema are we using again?"
Every. Single. Time.
I'd spend more time re-explaining my project than actually building it. The "brilliant colleague with amnesia" metaphor is painfully accurate.
Here's what I kept hitting:
Mid-session drift. Claude would start with async/await, then randomly switch to .then() chains 200 lines later. Why? Context degradation. The model "forgets" earlier patterns as the conversation grows.
Schema amnesia. I'd define a users table with specific columns in message 5. By message 40, Claude's suggesting queries for columns that don't exist.
Security regression. RLS policies carefully set up in Phase 1? Completely ignored when adding features in Phase 3.
The Groundhog Day effect. Close laptop Friday. Open Monday. Spend 30 minutes re-explaining the entire project before Claude can write a single line.
I tried everything the internet suggested:
Nothing worked. The fundamental issue is that LLMs have working memory, not long-term memory. They're brilliant in the moment, terrible at maintaining state.
I realized the problem isn't the AI. It's the workflow.
Human developers don't keep entire codebases in their heads either. They use documentation. Design docs. Database schemas. API specs. External references that persist.
So I built a system that orchestrates Claude through specialized agents, each with fresh context windows and specific jobs.
1. PROJECT.md - The vision document
2. REQUIREMENTS.md - Traceable feature definitions
3. ROADMAP.md - Phased execution plan
4. STATE.md - The living memory
These files are sized to avoid context degradation (under 10k tokens each) and serve as a single source of truth for both humans and AI.
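That token budget can even be checked mechanically. A minimal sketch, assuming the four file names above and a rough 4-characters-per-token heuristic (an approximation, not a real tokenizer):

```python
from pathlib import Path

# Rough heuristic: ~4 characters per token for English prose.
# An approximation only; a real tokenizer would give exact counts.
CHARS_PER_TOKEN = 4
TOKEN_BUDGET = 10_000

def estimate_tokens(text: str) -> int:
    """Estimate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def check_state_files(project_dir: str) -> dict[str, bool]:
    """Return {filename: within_budget} for each state file that exists."""
    results = {}
    for name in ("PROJECT.md", "REQUIREMENTS.md", "ROADMAP.md", "STATE.md"):
        path = Path(project_dir) / name
        if path.exists():
            results[name] = estimate_tokens(path.read_text()) <= TOKEN_BUDGET
    return results
```

In practice you would use a real tokenizer, but even this is enough to flag a state file drifting past its budget.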
Instead of one long Claude conversation, the system spawns specialized parallel agents:
Research agents (4 running in parallel before coding):
Execution agents:
Each agent gets fresh context. No degradation. No drift.
1. INITIALIZE
Describe vision → AI creates PROJECT.md, REQUIREMENTS.md, ROADMAP.md
2. DISCUSS (each phase)
Shape implementation preferences before committing
3. PLAN
Research domain patterns → create verified execution plan
4. EXECUTE
Run plans in parallel waves with fresh contexts → atomic git commits
5. VERIFY
User acceptance testing with automatic debugging
Repeat 2-5 for each phase
Critical rule: Completed phases are locked. The AI can only modify code in the current phase. This prevents the "adding payments breaks auth" problem entirely.
The AI knows what's already built in the boilerplate (auth, Stripe, Razorpay, Supabase, multi-tenancy, emails, admin panel). It only plans what's custom to your domain.
This means:
Last 30 days, I built 3 production SaaS backends using this system:
Analytics Dashboard (13 hours total, across 4 sessions)
Feedback Widget (11 hours, 3 sessions)
Content Calendar (9 hours, 2 sessions)
All production-ready. All built with AI orchestration. All using persistent state across weeks.
After building this system for myself, I packaged it:
/propelkit:new-project
This master command:
Then for each phase:
/propelkit:discuss-phase 1 # Shape your preferences
/propelkit:plan-phase 1 # Research + create execution plan
/propelkit:execute-phase 1 # Build with parallel agents
/propelkit:verify-work # Test with auto-debugging
The system maintains STATE.md automatically. Close laptop. Come back days later. Resume exactly where you left off.
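To make "maintains STATE.md automatically" concrete, here is a hypothetical sketch of the kind of bookkeeping involved; the file format and function name are my illustration, not PropelKit's actual implementation:

```python
import re
from pathlib import Path

def update_state(state_path: str, phase: int, status: str) -> None:
    """Rewrite the 'Current phase' line in STATE.md (hypothetical format)."""
    path = Path(state_path)
    # Seed a minimal file on first use so resuming always has something to read.
    text = path.read_text() if path.exists() else "# STATE\n\nCurrent phase: 0 (none)\n"
    new_line = f"Current phase: {phase} ({status})"
    text, n = re.subn(r"Current phase: .*", new_line, text)
    if n == 0:
        text += new_line + "\n"
    path.write_text(text)
```

Because the file is rewritten after every step, a fresh session only has to read one small file to know exactly where work left off.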
After the third project, I productized it.
What you get:
Production Next.js boilerplate (saves 100+ hours):
Stack: Next.js 16, TypeScript, Supabase, Stripe, Razorpay
One-time purchase. You own the code. Build unlimited products.
The AI PM uses the exact multi-agent orchestration system described above. Persistent state. Parallel research. Boilerplate-aware. Atomic commits.
Demo: propelkit.dev (watch the AI questioning, research, roadmap generation, and execution)
Context engineering - Separate files under degradation thresholds, not one massive chat
Multi-agent orchestration - Fresh contexts per agent, no drift accumulation
Boilerplate awareness - AI knows what exists, only builds what's custom
Atomic commits - One feature per commit, precision rollback
Phase locking - Completed code stays completed, no random rewrites
Domain research - AI understands your industry before writing code
This isn't just for PropelKit. The principles work anywhere: all you need are persistent state files and fresh context windows per task.
What's your experience with AI code context loss? Have you found other systems that work?
2026-02-15 03:07:40
The days of relying solely on application logs to debug complex, distributed systems are over. With microservices architectures and serverless functions becoming the standard, understanding the state of your application requires more than just knowing what happened—it requires knowing where, why, and how it happened across a sprawling infrastructure.
In 2026, observability isn't just about logs; it's about the three pillars: Metrics, Logs, and Distributed Tracing.
The Shift from Monitoring to Observability
Monitoring tells you when a system is broken (e.g., CPU > 90%). Observability tells you why it is broken (e.g., "Service A is slow because Service B is taking 500ms to query PostgreSQL").
An effective observability pipeline must be proactive, not reactive. If you are waiting for a user to report an error before you see it, your observability pipeline has failed.
Designing the Pipeline: The OpenTelemetry Standard
OpenTelemetry (OTel) has emerged as the industry-standard framework for instrumenting, generating, collecting, and exporting telemetry data. By adopting OTel, you avoid vendor lock-in and create a unified, standard data format for your traces and metrics.
Instrumentation: Use OpenTelemetry auto-instrumentation libraries to collect data from your applications without changing your code.
Collection: Deploy an OpenTelemetry Collector as a sidecar or agent. This component is crucial because it decouples your application from the backend monitoring tool.
Backend: Send the data to a backend of your choice (e.g., Grafana Tempo, Honeycomb, Datadog).
Key 2026 Trends: Distributed Tracing & Edge Computing
Context Propagation: When a request flows through multiple microservices, you must ensure the same trace ID accompanies it. This allows you to visualize the entire request journey.
Edge Functions: With more logic moving to the edge (e.g., Vercel, Cloudflare Workers), your traces must span from the edge function down to your backend APIs, giving a complete picture of latency.
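The mechanics of context propagation can be sketched in a few lines of Python. This toy follows the W3C `traceparent` header format that OpenTelemetry's propagators emit, but it is an illustration, not the OTel API (which handles this for you):

```python
import secrets

# W3C Trace Context "traceparent" header: version-traceid-spanid-flags.
# Every hop keeps the trace-id and mints a fresh span-id.

def new_traceparent() -> str:
    """Start a new trace at the first service (e.g. an edge function)."""
    trace_id = secrets.token_hex(16)  # 32 hex chars, shared by the whole journey
    span_id = secrets.token_hex(8)    # 16 hex chars, unique to this hop
    return f"00-{trace_id}-{span_id}-01"

def propagate(incoming: str) -> str:
    """Forward the caller's trace-id, but mint a new span-id for this hop."""
    version, trace_id, _parent_span, flags = incoming.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

# A request crossing edge -> backend keeps one trace-id end to end:
edge = new_traceparent()
backend = propagate(edge)
assert edge.split("-")[1] == backend.split("-")[1]
```

A tracing backend groups spans by that shared trace-id, which is what lets you visualize the entire request journey.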
Implementing Real-time Alerts
Don't alert on everything. Alert on symptoms, not causes. A high CPU is a cause; a high 5xx error rate is a symptom. Use tools like Prometheus for metrics and Grafana for visualization to set up SLI/SLO (Service Level Indicator/Objective) alerting.
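As a toy illustration of alerting on the symptom, here is a rolling 5xx error-rate SLI compared against an error budget. In production this would be a PromQL recording/alerting rule rather than application code; the class and thresholds below are illustrative only:

```python
from collections import deque

class ErrorRateSLI:
    """Rolling 5xx error-rate over the last N requests (a symptom, not a cause)."""

    def __init__(self, window: int = 1000, slo: float = 0.01):
        self.statuses = deque(maxlen=window)  # sliding window of status codes
        self.slo = slo  # e.g. a 99% availability target leaves a 1% error budget

    def record(self, status_code: int) -> None:
        self.statuses.append(status_code)

    def error_rate(self) -> float:
        if not self.statuses:
            return 0.0
        errors = sum(1 for s in self.statuses if s >= 500)
        return errors / len(self.statuses)

    def should_alert(self) -> bool:
        # Alert only when the user-visible symptom exceeds the budget.
        return self.error_rate() > self.slo
```

Note what is absent: no CPU, no memory, no queue depth. Those are causes you investigate after the symptom fires.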
Conclusion
Building a robust observability pipeline takes time, but it is an investment in stability. It turns debugging from a frantic guessing game into a methodical investigation.
2026-02-15 03:06:08
If you are learning another (human) language, you might wish to create bilingual documents where a text in a language you know and its translation into the language you are learning are displayed in two columns, side by side.
bilingual_pdf is a CLI application for Mac, Linux and Windows that will create a two-column bilingual PDF document, from your input Markdown document.
As an example, here is the bilingual_pdf project's own README.md used as input, and the resulting English + Spanish version README.en.es.pdf.
By default, bilingual_pdf translates your input automatically using Google Translate. You can also get the resulting translation as a Markdown document, which you can edit and use instead of the automatic translation.
Enjoy!
2026-02-15 03:04:13
This is a submission for the GitHub Copilot CLI Challenge
RepoHealth AI is a full-stack web application that analyzes any public GitHub repository and gives it a comprehensive health score powered by real data and AI-generated insights.
The idea came from a simple frustration — when you discover a new open source repo, it's hard to quickly judge whether it's well-maintained, properly documented, or worth depending on. RepoHealth AI solves that in seconds.
Here's what it does:
You paste any GitHub repository URL and the app instantly fetches live data from the GitHub API and displays a full dashboard including:
The app features a full dark/light theme toggle, responsive design across all screen sizes, smooth animations, and a clean professional UI built to feel like a real developer tool.
Live App: https://repohealth-ai.vercel.app
GitHub Repo: https://github.com/Muhammad-Ahmed-Rayyan/RepoHealth-AI
How to test:
https://github.com/facebook/react
Screenshots:
Landing Page — Clean search interface with dark theme
Dashboard — Full repository health analysis with charts and stats
GitHub Copilot CLI was not just a helper in this project — it was essential to the entire build. From the very first line of code to the final deployment, every part of this project was built using Copilot CLI directly in the terminal.
How I used it:
The entire project architecture was scaffolded using Copilot CLI. I described what I wanted — a full-stack app with a React frontend, Node.js/Express backend, GitHub API integration, and AI-powered analysis — and Copilot CLI generated the complete project structure, boilerplate, and initial implementations.
Where it made the biggest difference — Debugging:
The most impressive part of working with Copilot CLI was how fast it helped resolve bugs. For example:
When the GitHub API was returning 401 Unauthorized errors, Copilot CLI diagnosed the root cause immediately — the process.env.GITHUB_TOKEN was being read at module import time before dotenv.config() had run. It suggested converting the static axios instance into a factory function that reads the token at runtime. Problem solved in minutes.
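That bug class is language-agnostic. Here is a hypothetical Python analogue of the same import-time capture (the real fix was in the project's Node.js/axios code; the names below are illustrative only):

```python
import os

# Buggy pattern: the token is captured when the module is imported,
# which may run before your .env loader has populated the environment.
GITHUB_TOKEN_AT_IMPORT = os.environ.get("GITHUB_TOKEN")

def make_client_buggy() -> dict:
    """Uses the value frozen at import time; stale if config loaded later."""
    return {"Authorization": f"Bearer {GITHUB_TOKEN_AT_IMPORT}"}

# Fixed pattern: a factory that reads the token at call time,
# after configuration has definitely been loaded.
def make_client() -> dict:
    token = os.environ.get("GITHUB_TOKEN")
    return {"Authorization": f"Bearer {token}"}
```

The factory-function fix works because it defers the environment read until the first request is actually built.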
When the AI analysis was failing with "Cannot read properties of undefined (reading 'exists')", Copilot CLI traced the issue to a data flow problem — the repo data wasn't being passed correctly to the AI service after separating AI generation into its own triggered flow. It restructured the API call chain and added optional chaining (?.) throughout the AI service.
When Google Gemini returned a 404 model not found error, Copilot CLI suggested running a curl command to list available models from the API, then immediately updated the model name to the correct one.
Other highlights:
Working with Copilot CLI felt like having a senior developer pair programming with me directly in the terminal. The speed at which it could read existing code, understand context, and apply targeted fixes was genuinely impressive. Tasks that would have taken hours of Stack Overflow searching were resolved in a matter of minutes.
This project genuinely would not have been completed in the available time without Copilot CLI.
Team Members: @ahmed_waseem_ec0fb5a03620 and @waleed_zaidi_92fd4c9733df
2026-02-15 03:03:40
This is a submission for the GitHub Copilot CLI Challenge
I'm a systems administrator who dabbles in programming — and an author with two published novels. I'm working on my third book, and I needed honest, structured feedback on my chapters.
I found r/DestructiveReaders, a Reddit community known for "brutal but loving" literary critique. The concept is exactly what I wanted: direct, specific feedback that doesn't sugarcoat problems but always offers solutions. The reality was different. The community requires extensive karma-building before you can receive a critique — other authors report spending days earning enough credit. And after all that effort, the critiques I read varied wildly in quality.
So I built my own.
Destructive Reader LLM is a Python CLI tool that takes a fiction chapter and delivers structured literary critique in the r/DestructiveReaders style. It uses NVIDIA Nemotron Nano 30B via Ollama, a free cloud model, guided by a carefully crafted system prompt that captures the community's ethos: be brutal, be loving, be specific, always offer a fix.
The critique follows a consistent structure:
This isn't a toy project. I use it on my actual manuscript chapters. The critique below was generated from a chapter of my published novel in 15 seconds.
GitHub Repository: github.com/aweussom/DestructiveReader-LLM
Running the tool against a chapter from my published novel:
python destructive-reader-llm.py Markdown/01-AWAKENING.md
The generated critique is saved as Markdown alongside the chapter file, ready to reference during revision.
I used GitHub Copilot CLI (v0.0.410, running on the free Claude Haiku 4.5 model) as my development partner for the entire build. The whole tool went from idea to working software in a single session.
I opened Copilot CLI and described what I needed — a test script to verify I could connect to Ollama cloud and the Nemotron model. Copilot CLI generated a working test_ollama.py on the first attempt. It worked after I corrected the model name from nemotron-3-nano:latest to nemotron-3-nano:30b-cloud.
I gave Copilot CLI a clear spec: read INSTRUCTIONS.md, accept a chapter filename as argument, build a combined prompt, send to Ollama, save the critique as <chapter-name>-critique-<timestamp>.md. Copilot CLI read my instructions file to understand the context, then generated the complete destructive-reader-llm.py — 145 lines covering argument parsing, file loading, prompt construction, API calls, and output saving. It worked on first run.
The critique was truncated on console but saved correctly to disk. I asked Copilot CLI to print the full response, add timing, and display the output filename. Two targeted edits, done.
Here's where it got interesting. I asked Copilot CLI to read the original chapter, the instructions, and the generated critique — then tell me whether the Nemotron critique was any good and how it would compare to Claude Sonnet 4.5. Copilot CLI gave a thoughtful assessment: Nemotron nails the brutal-but-constructive voice but misses some thematic subtlety that a larger model would catch. Its recommendation — stick with Nemotron for the punchy r/DestructiveReaders style, consider a second model for deeper thematic analysis.
The free tier Haiku 4.5 model in Copilot CLI was more than capable for this kind of structured code generation. Copilot handled the boilerplate and let me focus on what actually matters — the critique prompt and the workflow design. From first prompt to working tool: one session, no debugging required beyond correcting a model name.
2026-02-15 02:51:19
Today, I went to my school for the 81st Annual Day celebration — the same school where I studied from 1st to 8th standard. Walking into that place after so many years felt like stepping back into my childhood.
I met my teachers — my English teacher, my 5th standard teacher, science teachers, and our headmistress. I also met my school friends from classes 1 to 8. Seeing everyone again brought back so many memories.
I met Muthammal, a classmate who studied with me from 6th to 8th standard. She is now married and has two children. Life has changed so much.
One of the most emotional moments was seeing the tree we planted together during our 6th–8th standard days. That tree is still there, growing strong — just like the memories we created in that school. I also met Anand after a long time.
Some buildings are no longer there, but many things remain exactly the same as they were during our school days. As we walked around, we spoke about our memories — how we participated in annual day programs, how excited we were to stand on stage, and how proud we felt when we won prizes.
During my school days (1st to 8th standard), I always had three goals:
I set these goals seriously during my 6th and 7th standards, but I couldn’t achieve them then. I felt disappointed — but I didn’t give up.
In 8th standard, everything changed.
I won three prizes in one year:
1st prize in long jump
No-leave prize
1st rank
From 1st to 8th standard, I studied only in this school. I don't know if anyone else achieved all these together — maybe someone did, maybe not — but for me, it was a huge personal victory.
I remember telling myself:
“Before I leave this school, I must win all three prizes.”
And it really happened.
During my last few school days, I stopped talking to some boys, and the teachers tried to convince me to speak to them.
I also used to come late regularly — and honestly, even today, these habits are still with me. Some things never change; they become part of our blood.
But this school also gave me discipline, confidence, and short-term goal setting — habits that still guide my life.
Teachers Who Changed My Life
We used to call our English teacher “Blade” because her advice was always sharp and honest. One day she said:
“I am a sharp blade, not an unsharp blade.”
That line is still fresh in my mind.
My 5th standard teacher (Vassuki) tried to remember my name through this:
I was from the first batch of students who won prizes through the NMMS scholarship, and many students were selected after that.
At first, my teacher couldn’t remember my name — but suddenly she remembered me while talking. That moment touched my heart deeply.
Because of the NMMS scholarship, I was able to buy the mobile phone I am using today. That support changed my life.
When I won, the school even put up a banner in the village. Seeing that banner pushed me strongly to believe:
“One day, I must give something back — especially through education.”
Maybe because I studied in a government school, I always feel a strong responsibility to guide others. Even today, I believe that my future — and my children’s future — should stay connected to government education, just like this school where I studied.
One sentence my teacher once corrected still stays with me:
“You should say ‘at least’, not ‘as atleast’.”
I will never forget that.
The day after Annual Day, I came late again — but still won three prizes.
One prize got stuck to another and broke, and everyone tried to separate them. Then I tried to split them apart by placing them in the gap between the iron bars of the window. I pulled, and it came free.
And my English teacher said,
"That's his prize. Would he ever let it go?"
That moment still makes me smile.
My birthday is on 17th September, and my English teacher’s birthday is on 18th September. That closeness always felt special to me.
My English teacher taught me something very important:
Present: read (pronounced "reed")
Past Tense: read (pronounced "red")
Past Participle: read (pronounced "red")
Past Tense: Yesterday, I read (red) that book.
Past Participle: I have read (red) that book already.
Small corrections like this shaped my confidence in English, and they still help me in my college papers and interviews.
One day, I thought I had fever and sat quietly in class. My English teacher suddenly asked everyone:
“What is your aim?”
Everyone came to the front and said something. When my turn came, I went up and said "............"
Do you remember? (I'm asking my English teacher.)
Then, after I answered, you told everyone to clap. Everyone asked, "Why? What did he say?" and you explained it.
During my school days, I once used bad words at home. My mother came to school and complained to the teacher.
In front of the whole class, my teacher said:
"Everyone should call him "..bad person..""
That moment was painful and embarrassing — but it changed me forever. It taught me self-control and respect.
When I moved to college, I spoke freely with everyone and did many things. There, bad words were spoken very casually by many people.
One day, my friend George told me:
“I like you because even though you mix with everyone, you never use bad words.”
That was a very special moment for me. I realized that my school and teachers had quietly shaped my character.
I miss my Maths teacher and 4th standard teacher — I’m sorry I’ve forgotten her name.
I also miss Saraswathy teacher, Latha teacher, our drawing sir, our music teacher, and our sewing and tailoring teacher.
One of my friends, Anand, recently met our music teacher by chance.
Even after so many years, she recognized him by his voice and asked him,
“How are you?”
She may be blind in her eyes, but not in her heart.
That moment reminded me how deeply teachers touch our lives — sometimes in ways that go beyond sight and time.
In school, we always used to call them just "Teacher, Teacher". Remembering teachers' names later feels like a big task — but the values they taught us are unforgettable.
During my 8th standard, our teacher gave us an English assignment to write a poem about our mother.
Every Sunday, a newspaper used to come to our house. In that paper, Dinamalar had a section where poems were published, I think as a Mother's Day special, and they had printed a poem about a mother.
I read one poem about a mother in that newspaper, and I liked it very much. I copied that poem for my assignment, cutting it out of the paper and pasting it in.
Later, the teacher asked everyone to read their poems in front of the class. When my turn came, I read it.
After listening, the teacher said,
“I feel like I have read this poem somewhere before.”
I became a little scared. I thought,
“Maybe they think I copied it, and they may reduce my marks.”
But the teacher smiled and said,
“It’s okay. At least you read it well and understood it.”
Then she appreciated me in front of the whole class, and everyone clapped. That moment made me very happy.
After that day, I started going regularly to the nearby mad Pot shop to read newspapers. I thought,
“Maybe this will help me for my next assignment.”
That habit stayed with me for a long time.
During the prize distribution function, I was truly shocked.
I could hear the names of so many different games being announced — games that didn’t exist during our time.
I also noticed that the new generation students’ names sounded very different from ours.
Listening to all this made me realize how times change — new games, new names, new generations — but the spirit of school celebrations remains the same.