2026-05-01 02:06:49

If you’ve ever felt stuck while learning Data Structures and Algorithms (DSA), you’re not alone. The real challenge isn’t intelligence or talent—it’s consistency. The good news? Science has a lot to say about how to build it, and the insights are surprisingly practical.
Research in Deliberate Practice, popularized by Anders Ericsson, shows that expertise isn’t about mindless repetition—it’s about focused, structured practice over time. In DSA, this means solving problems regularly, analyzing mistakes, and gradually increasing difficulty.
Another key idea comes from the Spacing Effect. Studies show that learning is stronger when practice is spread out over time instead of crammed into long sessions. So solving 2–3 problems daily is far more effective than doing 20 in one day and burning out.
Consistency is essentially a habit, and habits are governed by the brain’s reward system. According to research on Dopamine, small wins—like solving a problem or understanding a concept—release dopamine, reinforcing the behavior.
This connects with the Habit Loop, introduced by Charles Duhigg: a cue triggers a routine, and a reward reinforces it. For DSA, the cue can be a fixed study time, the routine is solving a problem, and the reward is the small win of getting it right. Over time, this loop becomes automatic.
The Kaizen approach suggests making tiny improvements daily. Start small: one easy problem a day, or even fifteen focused minutes. This reduces resistance and builds momentum.
Instead of passively reading solutions, try recalling approaches. This leverages Active Recall, proven to strengthen memory retention.
Revisit problems after 1 day, 3 days, and 1 week. This uses Spaced Repetition, which significantly improves long-term retention.
Humans are motivated by visible progress. Even a simple tracker works: a streak calendar, a spreadsheet of solved problems, or a checklist you tick off daily. This ties back to dopamine reinforcement—progress itself becomes rewarding.
Struggling isn’t failure—it’s part of learning. Studies in cognitive science show that “desirable difficulty” improves understanding. When a problem feels hard, your brain is literally forming stronger connections.
This aligns with both the spacing effect and habit formation science.
Consistency in DSA isn’t about motivation—it’s about systems. When you combine deliberate practice, spaced learning, and habit loops, progress becomes almost inevitable.
You don’t need to be extraordinary. You just need to show up—regularly, intentionally, and patiently.
2026-05-01 01:57:45
The ASP.NET Core middleware pipeline is the backbone of every HTTP request your application processes. Yet most developers only scratch the surface—registering UseAuthentication() and UseAuthorization() without understanding what actually happens in between. This comprehensive guide takes you from fundamental concepts to advanced patterns, performance optimization, and real-world usage scenarios.
Middleware in ASP.NET Core is software that's assembled into an app pipeline to handle requests and responses. Each middleware component chooses whether to pass the request to the next component in the pipeline, and can perform work both before and after the next component runs.
Think of it as an assembly line: each station does one specific job (logging, authentication, CORS headers, compression), and the request moves from station to station. The response then travels back through the same chain in reverse order.
At the core of every middleware component are request delegates—functions that process HTTP requests. There are three ways to define them:
app.Use() - runs logic, then passes control to the next middleware
app.Run() - terminal delegate that short-circuits the pipeline
app.Map() - branches the pipeline based on the request path
The HttpContext object carries everything about the current HTTP request and response. Every middleware receives it, and it's how components communicate:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    Console.WriteLine($"Before: {context.Request.Path}");
    await next(); // Call next middleware
    Console.WriteLine($"After: {context.Response.StatusCode}");
});

app.Run(async context =>
{
    await context.Response.WriteAsync("Hello, World!");
});

app.Run();
Registration order = execution order. This is the most critical concept. The request flows forward through middleware, the response flows backward. Get the order wrong, and your application silently misbehaves.
Here's what happens when you register three middleware components:
app.Use(async (context, next) =>
{
    Console.WriteLine("Middleware 1: Incoming request");
    await next();
    Console.WriteLine("Middleware 1: Outgoing response");
});

app.Use(async (context, next) =>
{
    Console.WriteLine("Middleware 2: Incoming request");
    await next();
    Console.WriteLine("Middleware 2: Outgoing response");
});

app.Run(async context =>
{
    Console.WriteLine("Middleware 3: Terminal - handling request");
    await context.Response.WriteAsync("Hello, world!");
});
Console output:
Middleware 1: Incoming request
Middleware 2: Incoming request
Middleware 3: Terminal - handling request
Middleware 2: Outgoing response
Middleware 1: Outgoing response
Notice the nesting pattern—it's like Russian dolls. Middleware 1 wraps Middleware 2, which wraps Middleware 3.
ASP.NET Core ships with middleware for the most common cross-cutting concerns:
Catches unhandled exceptions and returns structured error responses.
app.UseExceptionHandler();
Maps incoming requests to endpoints. In .NET 10 with minimal APIs, routing is implicit.
app.UseRouting();
Establishes user identity and enforces access policies. Always register authentication before authorization.
app.UseAuthentication();
app.UseAuthorization();
Redirects HTTP requests to HTTPS.
app.UseHttpsRedirection();
Controls cross-origin request access for browser-based clients.
app.UseCors("AllowFrontend");
Compresses responses using gzip or Brotli to reduce bandwidth.
app.UseResponseCompression();
Server-side response caching with tag-based invalidation.
app.UseOutputCache();
Built-in rate limiting with fixed window, sliding window, token bucket, and concurrency algorithms.
app.UseRateLimiter();
This is the most common approach for custom middleware. A convention-based middleware class must:
Accept a RequestDelegate in its constructor (represents the next middleware)
Expose a public InvokeAsync method that takes HttpContext and returns Task
Call await _next(context) to pass control (or skip it to short-circuit)
Request Logging Middleware Example:
public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next, ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var startTime = DateTime.UtcNow;
        _logger.LogInformation("Incoming {Method} {Path}",
            context.Request.Method, context.Request.Path);

        await _next(context);

        var elapsed = DateTime.UtcNow - startTime;
        _logger.LogInformation("Completed {Method} {Path} with {StatusCode} in {Elapsed}ms",
            context.Request.Method, context.Request.Path,
            context.Response.StatusCode, elapsed.TotalMilliseconds);
    }
}
Registration:
// Option 1: Direct registration
app.UseMiddleware<RequestLoggingMiddleware>();

// Option 2: Extension method (preferred)
public static class MiddlewareExtensions
{
    public static IApplicationBuilder UseRequestLogging(this IApplicationBuilder app)
    {
        return app.UseMiddleware<RequestLoggingMiddleware>();
    }
}

app.UseRequestLogging();
Convention-based middleware has a limitation: it's constructed once at startup, so it behaves like a singleton. If you inject scoped services (like DbContext) through its constructor, you'll encounter lifetime mismatch issues. The IMiddleware interface solves this with per-request activation:
public class CorrelationIdMiddleware : IMiddleware
{
    private readonly ILogger<CorrelationIdMiddleware> _logger;

    public CorrelationIdMiddleware(ILogger<CorrelationIdMiddleware> logger)
    {
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        var correlationId = context.Request.Headers["X-Correlation-Id"].FirstOrDefault()
            ?? Guid.NewGuid().ToString();

        context.Items["CorrelationId"] = correlationId;
        context.Response.Headers["X-Correlation-Id"] = correlationId;

        using (_logger.BeginScope(new Dictionary<string, object>
            { ["CorrelationId"] = correlationId }))
        {
            _logger.LogInformation("Request {Path} assigned CorrelationId {CorrelationId}",
                context.Request.Path, correlationId);
            await next(context);
        }
    }
}
Registration:
// Step 1: Register in DI container
builder.Services.AddTransient<CorrelationIdMiddleware>();
// Step 2: Add to pipeline
app.UseMiddleware<CorrelationIdMiddleware>();
When to use which: convention-based middleware suits stateless components with singleton-safe dependencies and avoids per-request resolution overhead; IMiddleware is the right choice when you need scoped dependencies, per-request activation, or easier unit testing.
ASP.NET Core provides three methods for branching the middleware pipeline:
Creates a separate pipeline for matching paths:
app.Map("/health", async ctx =>
{
await ctx.Response.WriteAsync("OK");
});
Creates a permanent fork based on a predicate:
app.MapWhen(
    context => context.Request.Query.ContainsKey("branch"),
    appBuilder => appBuilder.Run(async context =>
    {
        var branchVer = context.Request.Query["branch"];
        await context.Response.WriteAsync($"Branch used = '{branchVer}'");
    })
);
Runs middleware for matching requests, then rejoins the main pipeline:
app.UseWhen(
    context => context.Request.Path.StartsWithSegments("/api"),
    appBuilder => appBuilder.UseMiddleware<RequestLoggingMiddleware>()
);
Key Difference: Map and MapWhen create a permanent fork—requests that enter the branch never return to the main pipeline. UseWhen rejoins the main pipeline after the conditional middleware runs (provided the branch doesn't short-circuit).
Short-circuiting means returning a response without calling next(), preventing downstream middleware from executing. This is crucial for performance.
public class MaintenanceMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IConfiguration _configuration;

    public MaintenanceMiddleware(RequestDelegate next, IConfiguration configuration)
    {
        _next = next;
        _configuration = configuration;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var isMaintenanceMode = _configuration.GetValue<bool>("MaintenanceMode");
        if (isMaintenanceMode)
        {
            context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsJsonAsync(new
            {
                Status = 503,
                Message = "Service is under maintenance. Please try again later."
            });
            return; // Short-circuit - do not call next
        }

        await _next(context);
    }
}
Register this before other middleware so it can block all requests during maintenance.
If you can return a response without calling next(), do it immediately. Every skipped middleware saves CPU cycles.
app.Use(async (context, next) =>
{
    // Cheap check first — skip entire pipeline for OPTIONS
    if (context.Request.Method == HttpMethods.Options)
    {
        context.Response.StatusCode = StatusCodes.Status204NoContent;
        return; // no next() — short-circuit
    }
    await next();
});
Put cheap, high-skip-rate middleware first. Authentication, CORS, and routing should run before expensive operations.
Recommended Order:
app.UseExceptionHandler();       // Only fires on errors
app.UseHttpsRedirection();       // Cheap redirect
app.UseRouting();                // Essential
app.UseAuthentication();         // May short-circuit unauthorized
app.UseAuthorization();          // Same
app.UseResponseCompression();    // Compress responses
app.MapControllers();            // Execute endpoints
Using .Result, .Wait(), or GetAwaiter().GetResult() inside middleware risks deadlocks and starves the thread pool.
// ❌ BAD — blocks thread pool
app.Use(async (context, next) =>
{
    var data = GetSomethingAsync().Result; // Deadlock risk!
    await next();
});

// ✅ GOOD — truly asynchronous
app.Use(async (context, next) =>
{
    var data = await GetSomethingAsync(); // Proper async
    await next();
});
Middleware that reads context.Request.Body into memory forces the entire body to be buffered. For file uploads or large payloads, process the stream directly or set reasonable body size limits.
Every layer of your stack should be async—database calls, HTTP client requests, serialization. If one piece sync-blocks, it bottlenecks the whole pipeline.
MapStaticAssets() (introduced in .NET 9) replaces UseStaticFiles() with built-in compression and ETag support:
// Old approach
app.UseStaticFiles();
// .NET 9 — auto-compresses, adds ETags, better caching
app.MapStaticAssets();
.NET 9 also added a StatusCodeSelector option to the exception handler middleware; the same exception-to-status-code mapping can be implemented with a custom IExceptionHandler:
builder.Services.AddExceptionHandler<AppExceptionHandler>();
public class AppExceptionHandler : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(
        HttpContext httpContext, Exception exception,
        CancellationToken cancellationToken)
    {
        httpContext.Response.StatusCode = exception switch
        {
            NotFoundException => StatusCodes.Status404NotFound,
            ValidationException => StatusCodes.Status422UnprocessableEntity,
            _ => StatusCodes.Status500InternalServerError
        };

        await httpContext.Response.WriteAsJsonAsync(
            new { error = exception.Message }, cancellationToken);

        return true; // handled
    }
}
Route groups now support ProducesProblem consistently for uniform error shapes.
app.UseExceptionHandler();        // Global exception handling, first so it wraps everything
app.UseHttpsRedirection();        // Force HTTPS
app.UseStaticFiles();             // Serve static assets
app.UseRouting();                 // Route matching
app.UseCors("AllowFrontend");     // CORS headers (after routing, before auth)
app.UseAuthentication();          // Establish identity
app.UseAuthorization();           // Check permissions
app.UseRateLimiter();             // Rate limiting
app.UseRequestLogging();          // Capture request/response details
app.UseCorrelationId();           // Add correlation IDs for tracing
app.UseSerilogRequestLogging();   // Structured logging
app.UseResponseCompression();     // Compress responses
app.UseOutputCache();             // Server-side caching
Problem: Registering authorization before authentication causes seemingly random failures.
Solution: Always follow: Authentication → Authorization → Endpoint execution.
Problem: Using .Result or .Wait() in middleware blocks thread pool.
Solution: Always use await for async operations.
Problem: Convention-based middleware is effectively a singleton; injecting scoped services through its constructor creates captive dependencies.
Solution: Use IMiddleware interface for per-request activation.
Problem: Middleware calls next() even when it shouldn't, wasting resources.
Solution: Short-circuit early when possible (auth failures, maintenance mode).
Problem: Middleware buffering large request bodies into memory.
Solution: Process streams directly or set body size limits.
Problem: Exception middleware registered after other middleware misses exceptions.
Solution: Register exception handling middleware first in the pipeline.
The ASP.NET Core middleware pipeline is where your application does its real work. Understanding the fundamentals—execution order, branching, short-circuiting—is essential for building robust, performant applications. Follow the recommended patterns, avoid common pitfalls, and leverage .NET 9 improvements for optimal results.
Remember: middleware is the backbone of every HTTP request. Get it right, and your application runs smoothly. Get it wrong, and you'll spend hours debugging subtle issues.
2026-05-01 01:56:45
A vector database stores vectors (points in a high-dimensional space) so that data with similar meanings is positioned close together. These vectors are generated using embedding models, such as nomic-embed-text, which can be downloaded and run locally with Ollama.
One-hot encoding is a technique used to convert categorical data (like words) into binary vectors.
How it works:
Each unique word in a vocabulary is mapped to a vector that is mostly zeros except for a single 1 at a specific index. A sentence can then be represented by marking a 1 for every vocabulary word it contains.
Example:
Today is Wednesday
Tomorrow is Thursday
I am travelling Today
Wednesday is a nice series
Vocabulary values:
[Today, is, Wednesday, Tomorrow, Thursday, I, am, Travelling, a, nice, series]
Vector representation:
Line 1 = [1,1,1,0,0,0,0,0,0,0,0]
Line 2 = [0,1,0,1,1,0,0,0,0,0,0]
Line 3 = [1,0,0,0,0,1,1,1,0,0,0]
Line 4 = [0,1,1,0,0,0,0,0,1,1,1]
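In code, this encoding takes only a few lines. A minimal Python sketch (matching words case-insensitively):

vocab = ["today", "is", "wednesday", "tomorrow", "thursday",
         "i", "am", "travelling", "a", "nice", "series"]

def encode(sentence: str) -> list[int]:
    # Mark 1 for every vocabulary word that appears in the sentence.
    words = {w.lower() for w in sentence.split()}
    return [1 if term in words else 0 for term in vocab]

print(encode("Today is Wednesday"))  # [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]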
Disadvantages:
No semantic meaning
High dimensionality
Not scalable
Because of these limitations, modern RAG systems use vector databases where chunks are converted into vectors in a high-dimensional space, where similar meanings are positioned close together.
Documents will be split into chunks. Each chunk will be converted into a vector using an embedding model. The resulting vector will be stored in the vector DB. Chunks with similar semantic meaning are stored closer together in vector space.
When a user query arrives, it is converted into a vector, and the vector DB searches for the stored vectors closest to the query vector by distance.
To calculate the distance, we can use:
Euclidean Distance (based on the Pythagorean theorem)
Manhattan method
Cosine similarity (measures the angle between two vectors; a smaller angle means higher similarity)
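Here is a minimal Python sketch of all three metrics on plain lists, with no external libraries:

import math

def euclidean(a, b):
    # Straight-line distance (Pythagorean theorem in n dimensions).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences.
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 1, 1, 0], [1, 1, 0, 0]))  # ~0.816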
Calculating similarity against every stored vector (exact k-NN search) becomes computationally expensive at scale. To avoid this, vector databases use Approximate Nearest Neighbor (ANN) algorithms.
Some of the popular vector databases are:
Chroma
FAISS
Pinecone
Qdrant – commonly used for embeddings, semantic search, and image similarity search.
MongoDB – also offers vector search support
Data Ingestion
Data or documents will be split into chunks.
Each chunk will be converted into vectors using embedding models
Stored in the vector DB.
Data Retrieval
User query will be converted into a vector using an embedding model. Semantically related vectors will be obtained using search algorithms in the vector DB. Along with the user query, the retrieved chunks are provided to the LLM as context to get output in human-readable format.
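Putting both phases together, here is a minimal end-to-end sketch. The embed() function is a toy stand-in for a real embedding model (e.g. nomic-embed-text served by Ollama), the in-memory list stands in for a vector DB, and cosine_similarity is the function from the sketch above:

def embed(text: str) -> list[float]:
    # Toy stand-in: hash words into a small bag-of-words vector.
    # A real system would call an embedding model here.
    vec = [0.0] * 16
    for word in text.lower().split():
        vec[hash(word) % 16] += 1.0
    return vec

# Ingestion: split into chunks, embed, store (vector, text) pairs.
chunks = ["Paris is the capital of France.",
          "The Eiffel Tower is in Paris.",
          "Python is a programming language."]
index = [(embed(chunk), chunk) for chunk in chunks]

# Retrieval: embed the query, rank stored chunks by similarity.
def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine_similarity(q, item[0]),
                    reverse=True)
    return [chunk for _, chunk in ranked[:k]]

# The retrieved chunks become context for the LLM.
def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Where is the Eiffel Tower?"))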
2026-05-01 01:55:48
We had seven clusters, sixty developers, and a $40K/month AWS bill no one could explain. Here's the architecture that fixed it — and what we'd do differently.
Three days. That's how long a mid-level engineer waited for a staging environment last year while a Friday release deadline approached.
Not because we were negligent. Because staging environment provisioning required a senior engineer to manually wire Postgres, Redis, ingress config, RBAC bindings, and namespace allocation — while that same senior engineer was handling an active incident and two other identical requests. The environment was ready Thursday. The feature shipped late.
We had a platform engineering problem. What took us longer to admit was that the obvious solutions were going to make it worse.
Sprawl is insidious because it looks like growth. Namespaces accumulate. Engineers spin up test environments, finish the work, move on. The namespace stays. The Postgres pod stays. The load balancer stays. Nobody deletes things they didn't explicitly create.
When finance flagged a $40K month-over-month spike, we spent a week cross-referencing AWS Cost Explorer with Slack history trying to figure out which team owned what. We couldn't. Cost attribution was aspirational. The actual state of our clusters was known only approximately, by the people who'd been there long enough to remember what they'd provisioned.
Flexera's State of the Cloud 2025 puts industry-wide cloud waste at up to 32% from idle and overprovisioned resources. We were running hotter than that.
The YAML problem compounded everything. Junior engineers couldn't self-serve — every new service needed a senior engineer to write Deployment manifests, configure resource limits, set up HPA, wire RBAC, and identify the right ServiceAccount for private registry access. We'd built an architecture that required senior engineers for routine operations. That's not a staffing problem. That's a design problem.
Measured honestly: 20–35% of our engineering hours were going to infrastructure toil. That's consistent with IDC's research on how developers actually spend their time. It's also roughly 1.5 FTE per month doing work that, in theory, shouldn't require human judgment.
We ran a two-month Backstage proof of concept. Here's what we learned.
Backstage is a React application that your team owns. That's the thing nobody says clearly upfront. The plugin ecosystem is real. The software catalog concept is good. But operating Backstage in production means maintaining a React app, a Node backend, a Postgres database, and a plugin integration layer — in addition to the clusters you're trying to simplify. Cortex's analysis of real deployments puts the staffing requirement at 3–12 engineers. For a three-person platform team, that math doesn't work. And Backstage ships with no AI features. Every AI capability is a plugin you build and maintain yourself.
We looked at Humanitec and Port. Both are genuinely capable. Both have a structural problem: your infrastructure state lives in their cloud. Environment definitions, deployment configs, service topology — all stored externally. When we asked both vendors what a migration away would look like, neither gave a satisfying answer. That's not a knock on them — it's the inherent tension of a SaaS IDP. To give you a good product, they need to own your state.
Humanitec's pricing at the time: $2,199/month for five users. We had sixty developers.
The constraint we set: all state lives in the cluster, in standard Kubernetes primitives. No external services storing our data. Migrate away by running kubectl get.
Fortem is a Kubernetes Operator with a UI layer. When a developer requests an environment, they create a FortemEnvironment custom resource. The Operator's reconciliation loop provisions the constituent resources — Deployments, Services, PVCs, ConfigMaps, RBAC bindings — and writes status conditions back to the CRD.
apiVersion: fortem.dev/v1alpha1
kind: FortemEnvironment
metadata:
  name: feature-payments-v2
  namespace: team-backend
spec:
  template: microservice-stack
  services:
    - name: payments-api
      image: registry.internal/payments:pr-442
    - name: postgres
      preset: postgres-15-small
    - name: redis
      preset: redis-7-ephemeral
  ttl: 72h
The spec is declarative and portable. Put it in Git. Apply it with kubectl. The TTL field handles cleanup — when it expires, the Operator tears down the environment and releases the resources. No manual deletion. No orphaned namespaces.
Three AI integrations sit on top of the Operator:
NL-to-manifest. Engineers describe an environment in plain English and get a FortemEnvironment manifest back, with dry-run preview before anything is applied. This works well for templated environments. It's less reliable for novel configurations — the LLM occasionally generates plausible-looking but invalid resource specs, which the dry-run catches. We treat it as a starting point, not a final answer.
Idle detection. The Operator tracks inbound traffic and deployment activity per namespace. Zero traffic + zero deploys for 48 hours (configurable) triggers an idle flag. Auto-shutdown or manual review, your choice. The first month caught 23 abandoned environments. A typical idle environment — Postgres, a few services, load balancer — runs $180–250/month. We recovered roughly $4,200/month from that initial pass.
Incident diagnosis. On crash loop or unexpected HPA trigger, the Operator aggregates recent logs, events, and resource metrics into a structured prompt and runs it through the configured LLM. Output is a root cause summary and a suggested fix. It's correct often enough to cut mean-time-to-understand significantly — not correct enough to act on without review.
Install is a single Helm chart, runs entirely inside your cluster:
helm install fortem fortem/fortem \
  --namespace fortem-system \
  --create-namespace \
  --set ai.provider=anthropic \
  --set ai.apiKey=$ANTHROPIC_API_KEY
No egress requirements beyond your LLM provider. No Fortem infrastructure touches your data.
Migrating away: kubectl get fortemenv -A -o yaml > environments.yaml. The underlying resources are all native K8s objects. They exist independently of Fortem. The migration path is real because we tested it — we ran the export against a staging cluster before committing to the architecture.
Environment provisioning: 2–3 days to under 8 minutes. This is the number that gets cited, and it's accurate, but it understates the change. The bigger shift is that provisioning no longer requires senior engineer involvement. Junior engineers self-serve. The senior engineers work on things that need senior judgment.
Cloud spend: down 55% from the baseline we measured at the start of the idle detection project. The idle environment reclamation accounts for most of it. Right-sizing recommendations from the AI layer account for the rest.
Cost attribution: automatic. Every FortemEnvironment carries team and namespace labels that flow through to cost metering. The monthly finance conversation is now a dashboard, not a spreadsheet archaeology project.
What didn't get better: the Operator model trades one kind of complexity for another. You're maintaining CRD schemas, managing controller health, and debugging reconciliation loops when the Operator gets into a bad state. We've had three incidents where the Operator's reconciler got stuck on a malformed resource and stopped processing the queue. That's recoverable, but it requires understanding the Operator internals. The abstraction has a floor.
Community tier is free — one cluster, three environments, basic AIOps. The docs walk through a working environment in about 20 minutes on an existing cluster.
The engineer who waited three days for a staging environment hasn't waited more than 10 minutes since we shipped this. That outcome isn't because we built something clever. It's because environment provisioning is now a reconciliation loop — deterministic, auditable, and not dependent on a senior engineer being available.
2026-05-01 01:54:27
Part 1: Why Vibe Coding Breaks Down
Last month, I spent an afternoon building a URL shortener with Claude.
The first prompt worked beautifully. Code appeared. Tests passed. I felt like a wizard.
By the fifth prompt, things got weird. The AI added features I did not ask for. It changed the database schema twice. It could not remember what the API endpoints were supposed to be.
By the tenth prompt, I was not coding. I was negotiating with a machine that had forgotten the conversation we had twenty minutes ago.
This is not a failure of AI. This is the failure of the method.
Most of us use AI the same way we use a search engine: we ask for something, we get an answer, we move on. That works for recipes and trivia. It does not work for building software.
Because AI has a hidden limit. Not a technical limit — a structural one. And once you see it, you cannot unsee it.
In this series, I will show you that limit. Then I will show you a simple way around it.
The solution is not a better prompt. It is a different way to think about the work.
What if the problem was never the AI?
1.1 The Pattern: It Starts So Well
You have felt this. Everyone who has built anything non-trivial with AI has felt this.
You sit down with Claude, GPT, or Copilot. You start a new project. The first few exchanges feel like magic.
"Build me a URL shortener API."
Code appears. Beautiful, working code. You test it. It works. You are elated.
"Now add analytics so I can see how many times each short link was clicked."
The AI adds it. Still works. Two features done.
"Add user accounts so people can manage their own links."
The AI adds it. But now things start breaking.
The database schema from feature 1 does not quite fit. The authentication logic from feature 3 conflicts with the analytics query from feature 2. The AI tries to fix it, but introduces regressions. You try to clarify. The context window fills up. The AI loses track of earlier decisions.
You are no longer coding. You are herding cats with a keyboard.
This is "vibe coding" — casually prompting an AI to generate code, hoping it all holds together. It works for tiny scripts. It works for demos. But for real projects? It collapses under its own weight.
Why?
1.2 The Hidden Limit You Have Probably Noticed But Never Named
Here is a simple experiment you can run yourself.
Task A (Small and Focused):
Write a Python function that takes a URL string and returns True if it is a valid URL, False otherwise. Handle edge cases like missing protocols, invalid characters, and malformed domains.
The AI will likely produce a clean, correct, well-commented function in seconds. Quality: Excellent.
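For concreteness, here is the kind of function Task A reliably yields. This is a hand-written sketch, not verbatim AI output, and it deliberately simplifies (for instance, it rejects dotless hosts like localhost):

from urllib.parse import urlparse

def is_valid_url(url: str) -> bool:
    """Return True if url is a well-formed http(s) URL, else False."""
    if not url or any(ch.isspace() for ch in url):
        return False  # empty, or contains invalid whitespace characters
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # missing or unsupported protocol
    if not parsed.netloc or "." not in parsed.netloc:
        return False  # missing or malformed domain
    return True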
Task B (Large and Ambiguous):
Build a complete URL shortener service with a web interface, a REST API, user accounts, click analytics, and a database. Use Python.
The AI will produce something. But the result will be fragile. The authentication piece might not integrate cleanly with the analytics piece. The database schema might be inconsistent. Quality: Poor to Mediocre.
Same AI. Same model. Same capabilities.
The only difference is the size and scope of the task.
LLMs have a context window — a limit to how much information they can hold in working memory at once. Give them a small, self-contained problem, and they operate within that window comfortably. Give them a large system design, and they spill over the edges. They forget earlier decisions. They contradict themselves.
But there is a deeper insight here. A more important one.
1.3 The New Design Method That Changes Everything
AI does not think like a system architect. It does not hold a complete mental model of your project, with all its interconnections and trade-offs.
AI thinks in functions.
It understands:
"Given input X, produce output Y"
"Validate this string"
"Query that database table"
"Format this response"
It does not naturally understand:
How the URL validator should coordinate with the key generator
Whether the storage manager should be called before or after analytics logging
What happens when the redirect handler cannot find a key
These are integration concerns — and they are not the AI's strength.
Most developers, when encountering AI's limitations on large tasks, do one of two things:
Give up — "AI is not ready for real projects"
Push harder — "Let me write a longer, more detailed prompt"
Both miss the point.
The correct response is to change how you structure the work:
Do not ask the AI to build the system. Ask the AI to build the functions. You connect the functions into the system.
1.4 What This Unlocks
An AI can write an excellent URL validator. An AI can write an excellent key generator. An AI can write an excellent storage manager. An AI can write an excellent redirect handler.
Each of these is a functional block — a self-contained unit with a clear input, a clear output, and no hidden dependencies.
The AI's job is to implement each block.
Your job — the human's job — is to:
Define what the blocks are
Specify how they connect
Validate that the connections work
This is not "AI does everything." This is AI and human working symbiotically. Each doing what they do best.
Before (Vibe Coding) → After (FBD)
One giant prompt for the whole system → Many small prompts, one per block
AI loses context, contradicts itself → Each block's context is small and focused
Integration happens accidentally → Integration is explicit and designed
Hard to debug (which block failed?) → Each block can be tested in isolation
Changes require regenerating large portions → Change one block, leave others untouched
1.5 What Is Coming in Part 2 and Part 3
In Part 2, we will introduce Functional Block Design — a simple, repeatable methodology that turns this insight into practice. You will learn:
How to decompose a system into functional blocks
How to write a Block Spec (Description for humans, Prompt for AI)
How to generate code from the Prompt
How to store everything in one .py file
In Part 3, we will build the complete URL shortener — all six blocks, integrated into a working system.
By the end of this series, you will have a repeatable way to turn AI-generated code into systems that actually work together.
No more herding cats.
, "Cover image generated with Grok / xAI".2026-05-01 01:54:14
The Roxnor Git mishap story!
When git reset and refspecs had my head spinning 😅
📌 I was working on Role Based Access Control for a plugin. I pushed my last commit and sat back to relax for a bit. Suddenly my team lead appeared: the last commit had to go out as a separate pull request, because the files in that commit added role-permission checks to the existing APIs, so the PR was showing a lot of changed files. I figured I'd roll back a step with git reset --mixed HEAD~1. Now about 40 files landed in the unstaged state at once, and force-pushing from that state could be risky. So I decided to keep my changes safe, clean everything up, and then open a PR. So:
1. First, I ran git stash to move all the unstaged files somewhere safe.
2. Then I ran git push origin shishir-xs --force.
That's when I got: error: src refspec shishir-xs does not match any.
Running git branch confirmed that I had mistakenly given shishir-xs as the source branch, while my current branch was actually role-based-access-control 😵‍💫.
Finally, pushing with the correct branch, git push origin role-based-access-control --force, got the remote repository perfectly synced with my local state.
3. Then I ran git stash pop to bring the files back, created a new branch on top of the development branch, committed the changes, opened a Pull Request, and shut my laptop.