2026-02-01 22:25:31
Why is reading Markdown still so... ugly?
As developers, we live in Markdown. We write documentation, READMEs, notes, and blog posts in it. But whenever I opened a local .md file in my browser, I was greeted with the same raw, unstyled text, or a 90s-style HTML render that looked like it was designed by a backend engineer in 1998 (no offense, backend friends!).
I wanted something better. I wanted an experience that felt like reading a premium documentation site—something with:
✨ Beautiful Typography (Inter & JetBrains Mono)
🌫️ Glassmorphism UI
🌓 Perfect Dark Mode
📊 Rich Diagram Support
So, I built Markdown Viewer Premium, and I want to share how it transforms the way you view documentation.
The Problem with Default Viewers
Most browser extensions for Markdown are:
Outdated: Built years ago, using old styling.
Basic: Just standard HTML, no syntax highlighting or diagrams.
Ugly: No attention to whitespace, readability, or aesthetics.
Meet Markdown Viewer Premium 🚀
This isn't just a parser; it's a rendering engine designed for aesthetics and readability.
Key Features
🎨 Stunning "Glass" Aesthetics
I spent a lot of time tweaking the CSS variables (using Tailwind) to get that perfect "frosted glass" look. The sidebar, table of contents, and code blocks all have a subtle transparency that blurs the background, giving it a modern OS feel.
🧜‍♀️ Mermaid Diagrams with Zoomable Popup
Markdown Viewer Premium automatically renders Mermaid diagrams into clear SVGs. Click any diagram to open it in a zoomable lightbox—perfect for complex flowcharts that need inspection.
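For readers who haven't used Mermaid: a diagram is just a fenced code block in the markdown source. The snippet below is an illustrative example of my own, not one from the extension's docs:

```mermaid
flowchart LR
    A[Draft] --> B{Review}
    B -->|approved| C[Published]
    B -->|rejected| A
```

Paste that into any .md file and the viewer renders it as an SVG flowchart.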
💻 Rich Code, Math & GFM Support
Syntax Highlighting: Beautiful, up-to-date highlighting for all major languages.
GFM Support: Full support for Task Lists [ ], Tables, and Strikethrough.
LaTeX Support: Write $ E = mc^2 $ and see it rendered instantly with KaTeX.
Copy Button: One-click copy for code snippets.
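To make the GFM and math support concrete, here is a small sample I wrote (illustrative, not taken from the extension's docs) that exercises task lists, strikethrough, tables, and KaTeX:

```markdown
- [x] Write the docs
- [ ] ~~Procrastinate~~ Ship it

| Feature       | Status |
|---------------|--------|
| Tables        | yes    |
| Strikethrough | yes    |

Inline math: $E = mc^2$ renders instantly via KaTeX.
```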
🖨️ Professional PDF Export
Need to share? Hit Ctrl+P (or Cmd+P) to export your beautifully rendered markdown as a clean, styled PDF.
🛠 Power User Tools
Raw View: Instant toggle between rendered view and raw markdown text.
Theme Switching: Seamless toggle between Light and Dark modes.
Smart TOC: Sticky table of contents for easy navigation.
🐙 Seamless GitHub Integration
Directly paste a GitHub file URL (blob or raw), and the viewer will automatically fetch and render it. No need to download the file first. Perfect for reading documentation directly from repositories.
Why Use This?
I built this to fix the pain points of existing tools:
Readability: Most viewers ignore line height and typography. This one prioritizes them.
Completeness: It doesn't break on complex diagrams or math equations.
Speed: It loads instantly, even for large documents.
Try it out!
I built this to scratch my own itch, but I think you'll love it too. If you care about the aesthetics of your tools, give it a try.
Link to Chrome Web Store
https://chromewebstore.google.com/detail/markdown-viewer-premium/abnpdibfmmdcjhdakgjeiepimokkhjjo?authuser=0&hl=en
Let me know what you think in the comments! 👇
2026-02-01 22:22:46
A few weeks ago, while reviewing a Spring Boot codebase, I came across a service that injected HttpServletRequest as a field into a singleton bean. This bean was called from the service layer to construct and publish domain events, where the events needed information from the incoming request. The code “worked,” and at first glance even looked convenient and served its purpose. But the deeper I looked, the more uneasy I felt. A request-scoped concern was quietly living inside a singleton bean, and a transport-layer abstraction had leaked straight into business logic. That small design choice opened the door to a larger conversation about how easily Spring’s magic can blur architectural boundaries if we’re not careful.
@Component
public class EventFactory {

    @Autowired
    private HttpServletRequest request;

    public void prepareAndSendEvent() {
        OrderEvent ordEvent = new OrderEvent();
        ordEvent.setClientIP(request.getRemoteAddr());
    }
}
Why does this feel off?
Anybody familiar with Spring bean scopes can tell that this is a singleton bean. Singleton beans are created and initialised only once, during the startup phase. As the name suggests, the same object serves every HTTP request that comes in. HttpServletRequest, on the other hand, is a request-scoped bean: a new instance is created for every incoming HTTP request. Two natural questions follow: how is the request field initialised at startup, when no HTTP requests are expected? And will this even work without mixing up information between different HTTP requests? This is where Spring's magic steps in: Spring injects a proxy as a placeholder for request-scoped beans. This is not a flaw, but a powerful tool that Spring relies on to manage request processing.
Bridging the difference in scope
During bean initialisation, Spring realises that it cannot bind any concrete object to the request field (in the code snippet above). But it knows that HttpServletRequest is a request-scoped bean, managed for the lifecycle of an HTTP request, whose current value is stored in a ThreadLocal (scoped values in future Java versions) for as long as the thread processing the request is alive. So instead of a real object, Spring injects a proxy into the request field. Whenever the business logic reads from request, the proxy resolves the actual HttpServletRequest bound to the current thread and delegates the call to it. This is Spring's standard way of dealing with a scope mismatch between a bean and its injected dependencies.
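The mechanism above can be sketched in plain Java. This is a deliberately simplified illustration of how a scoped proxy can route calls to a per-thread target, NOT Spring's actual implementation; all names here are made up:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Simplified sketch of a scoped proxy: the singleton field holds one proxy
// object forever, and every method call is routed to whatever "request"
// object is bound to the current thread at call time.
public class ScopedProxySketch {

    // Minimal stand-in for HttpServletRequest.
    interface Request {
        String remoteAddr();
    }

    // Stand-in for Spring's per-request ThreadLocal storage.
    static final ThreadLocal<Request> CURRENT_REQUEST = new ThreadLocal<>();

    // Create a proxy that resolves the real target on every invocation.
    static Request scopedProxy() {
        InvocationHandler handler = (proxy, method, args) ->
                method.invoke(CURRENT_REQUEST.get(), args);
        return (Request) Proxy.newProxyInstance(
                Request.class.getClassLoader(),
                new Class<?>[]{Request.class},
                handler);
    }

    public static void main(String[] args) {
        Request injectedField = scopedProxy();  // "injected" once, at startup
        CURRENT_REQUEST.set(() -> "10.0.0.1");  // first request arrives
        System.out.println(injectedField.remoteAddr());
        CURRENT_REQUEST.set(() -> "10.0.0.2");  // next request, same proxy
        System.out.println(injectedField.remoteAddr());
    }
}
```

The same proxy object returns a different address per "request", because resolution happens on every call rather than at injection time.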
Why is this a smell?
With its one-request-per-thread execution model and its stereotype annotations ("@Component", "@Service", "@Repository"), Spring makes it a breeze to build a three-layered architecture with stateless singleton beans encapsulating business logic. The separation of concerns works like this: HTTP requests land in the controller layer and go no further; the service layer receives a DTO (Data Transfer Object) constructed from the request body, request headers, and session object, applies business logic, and calls the gateway or repository layer. In the same manner, responses from downstream clients end at the gateway layer, where a DTO is created from the received response for the service layer, if needed.
It is the developers' responsibility to maintain and enforce this separation of concerns in the application. But because Spring makes it possible to inject a request-scoped bean into a singleton bean through the mechanism of scoped proxies, it is tempting to (mis)use this magic. Since it works flawlessly, many developers may argue for it. However, the price we pay is a silent degradation of clean code and a compromised layered architecture, in exchange for very little convenience.
What problems can this cause?
Apart from the smells mentioned above, this approach poses challenges to the maintainability and testing strategy of the application.
Falls apart in asynchronous execution
The reason the use of the current HTTP request works in the method that prepares and sends the event is that it runs synchronously, on the same thread where the ThreadLocal created by Spring is available for resolving the real object. It is not uncommon that, over the lifetime of an application, such a method ends up being executed asynchronously to speed up response times. In that case the ThreadLocal is no longer available, because asynchronous execution happens on a different thread outside the request context, and the call breaks with an IllegalStateException.
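To see the underlying mechanics without Spring, the following plain-Java sketch (my own illustration; all names invented) shows that a ThreadLocal set on one thread is simply invisible from a worker thread. In real Spring this surfaces as the IllegalStateException mentioned above; here we just observe the empty ThreadLocal directly:

```java
// Demonstrates why request data held in a ThreadLocal is lost when work
// moves to another thread: each thread has its own ThreadLocal slot.
public class ThreadLocalAsyncDemo {

    static final ThreadLocal<String> REQUEST_IP = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        REQUEST_IP.set("10.0.0.1");                        // bound on the "request" thread
        System.out.println("sync:  " + REQUEST_IP.get());  // value is visible here

        Thread worker = new Thread(() ->
                // a fresh thread has an empty ThreadLocal slot, so this prints null
                System.out.println("async: " + REQUEST_IP.get()));
        worker.start();
        worker.join();
    }
}
```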
Testing complexity increases
Writing unit tests for EventFactory becomes complex because HttpServletRequest has to be mocked. Normally, a mocked HttpServletRequest is only needed when testing the controller beans of a Spring Boot application. With this approach, the focus shifts from testing business logic to mocking HttpServletRequest, resulting in convoluted tests.
Recommended approach
The cleanest way to make information from the HTTP request available in the business layer is context extraction and method injection: the required information is extracted into a DTO and passed as a method parameter to the business-layer beans.
@RestController
@RequestMapping("/orders")
public class OrderController {

    private final OrderService orderService;

    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @GetMapping
    public List<OrderDto> getOrders(
            HttpServletRequest request,
            @RequestHeader("X-Tenant-Id") String tenantId) {
        // extract everything the business layer needs at the web boundary
        String userAgent = request.getHeader("User-Agent");
        String clientIp = request.getRemoteAddr();
        RequestContext ctx = new RequestContext(tenantId, clientIp);
        return orderService.getOrders(ctx);
    }
}
@Component
public class EventFactory {

    public void prepareAndSendEvent(EventInfo eventInfo, RequestContext requestContext) {
        // requestContext carries the needed request information
    }
}
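The RequestContext type used above is not defined in the post; a minimal sketch, assuming a plain immutable carrier implemented as a Java 16+ record, could be:

```java
// Hypothetical immutable carrier for request-derived data (the name and
// fields come from the controller snippet above; this type is not shown in
// the original post). Built once at the web boundary, passed by parameter.
public record RequestContext(String tenantId, String clientIp) {
}
```

Because it is a plain value object with no servlet dependency, it is trivial to construct in unit tests.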
TL;DR
Injecting the request-scoped HttpServletRequest into the business layer works thanks to Spring magic, but it comes with hidden costs and risks. It is the developer's responsibility to avoid it when a cleaner alternative is available.
2026-02-01 22:19:46
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
2026-02-01 22:19:14
A practical guide to designing AI-ready codebases that actually understand your project.
"Create a lease activation endpoint."
I typed this instruction to Claude for the 47th time. I know it was the 47th because I'd started keeping a tally in frustration.
Each time, I'd watch it generate code that looked right but missed critical details.
So I'd correct it. Explain the state machine. Remind it about organization_id. Paste in example code from other files. Twenty minutes later, we'd have working code.
The next day? Same conversation. Same corrections. Same twenty minutes burned.
I estimated I was losing 6-8 hours per week just re-explaining context. That's a full workday, every week, saying the same things to an AI that couldn't remember yesterday.
Something had to change.
Before I found what worked, I tried several approaches that didn't:
Attempt #1: The mega-prompt. I wrote a 3,000-word system prompt covering everything. Result? Claude got confused, cherry-picked random details, and the token cost was brutal. The AI couldn't distinguish what mattered for the current task.
Attempt #2: Code comments everywhere. I added extensive JSDoc comments and inline documentation. But Claude still couldn't see the big picture—it would read one file and miss how it connected to everything else.
Attempt #3: A single massive README. 50 pages of documentation in one file. Claude would quote irrelevant sections while missing the specific thing I needed. It's like giving someone an encyclopedia when they asked for directions.
The breakthrough came when I stopped thinking about documentation for humans and started thinking about documentation for AI consumption.
Week six of the project. I was explaining the lease state machine for the dozenth time when I noticed something: I always explained it the same way. Same states, same transitions, same edge cases.
What if I just... wrote it down once? In a format optimized for Claude to parse quickly?
I created a simple markdown file:
## State Transitions
DRAFT → PENDING_APPROVAL, CANCELLED
PENDING_APPROVAL → APPROVED, DRAFT (rejected)
APPROVED → ACTIVE, CANCELLED
ACTIVE → EXPIRED, TERMINATED, RENEWED
Next time I needed lease work, I told Claude: "Read this file first, then implement the activation endpoint."
It worked. First try. No corrections.
That file became the seed of a system that would eventually grow to 90+ files—and save me an estimated 300+ hours over the next three months.
Context Engineering is the practice of designing documentation systems that give AI assistants deep, persistent understanding of your codebase.
Think of it like this: instead of training a new developer from scratch every morning, you hand them a comprehensive onboarding guide that contains everything they need to be productive immediately.
The key insight is that AI assistants don't need less documentation—they need structured documentation designed for quick comprehension and targeted loading.
Layered Depth: Not everything needs full context. Simple tasks need simple context; complex tasks need comprehensive context.
Targeted Loading: Load only what's relevant to the current task. Token budgets are real constraints.
Single Source of Truth: Define patterns once, reference everywhere. No more "which version is correct?"
After weeks of iteration (and plenty of wrong turns), I settled on a three-tier system:
┌─────────────────────────────┐
│       Quick Reference       │  ~100 lines
│    (Fields, Enums, APIs)    │  Load for simple tasks
└──────────────┬──────────────┘
               │
┌──────────────▼──────────────┐
│       Domain Context        │  ~400 lines
│   (State Machines, Rules)   │  Load for complex features
└──────────────┬──────────────┘
               │
┌──────────────▼──────────────┐
│       Pattern Guides        │  ~300 lines
│     (How to implement)      │  Load for new code
└─────────────────────────────┘
The first tier, the quick references, is optimized for fast lookups: entity fields, enums, API signatures, basic relationships. When I just need to know "what are the lease statuses?" I load the quickref—nothing more.
# Leasing Quick Reference
## Entity: Lease
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| id | UUID | auto | PK |
| lease_number | string(50) | auto | Format: LS-{YYYY}-{SEQ} |
| status | enum | yes | LeaseStatus |
| start_date | date | yes | |
| end_date | date | yes | Must be > start_date |
| rent_amount | decimal(12,2) | yes | |
| security_deposit | decimal(12,2) | yes | Default: 0 |
## Enums
enum LeaseStatus {
  DRAFT, PENDING_APPROVAL, APPROVED, ACTIVE,
  EXPIRED, TERMINATED, RENEWED, CANCELLED
}
## State Transitions (Summary)
DRAFT → PENDING_APPROVAL, CANCELLED
PENDING_APPROVAL → APPROVED, DRAFT
APPROVED → ACTIVE, CANCELLED
ACTIVE → EXPIRED, TERMINATED, RENEWED
## API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | /leases/:id/activate | Activate lease |
| POST | /leases/:id/terminate | Terminate lease |
| POST | /leases/:id/renew | Renew lease |
~95 lines. Everything Claude needs for simple CRUD work. Nothing it doesn't.
The second tier, the domain context files, contains the full business logic: state machine diagrams, transition rules, workflow implementations, validation logic. When I'm implementing a complex feature, I load the domain file.
# Leasing Domain - Full Context
## State Machine
### State Diagram
             ┌──────────────┐
             │    DRAFT     │
             └──────┬───────┘
                    │ submit()
                    ▼
      ┌──────────────────────────────┐
      │       PENDING_APPROVAL       │
      └──────┬────────────┬──────────┘
             │            │
    approve()│            │reject()
             ▼            ▼
       ┌──────────┐  ┌──────────┐
       │ APPROVED │  │  DRAFT   │
       └────┬─────┘  └──────────┘
            │ activate()
            ▼
       ┌──────────┐
       │  ACTIVE  │
       └────┬─────┘
            │
     ┌──────┼───────────────┐
     ▼      ▼               ▼
┌────────┐ ┌──────────┐ ┌─────────┐
│EXPIRED │ │TERMINATED│ │ RENEWED │
└────────┘ └──────────┘ └─────────┘
## Lease Activation Workflow
async activate(tenant, leaseId) {
  return this.dataSource.transaction(async (manager) => {
    // 1. Fetch with tenant isolation (CRITICAL)
    const lease = await manager.findOne(Lease, {
      where: {
        id: leaseId,
        organization_id: tenant.organizationId // Never skip this
      },
      relations: ['unit', 'tenant'],
    });

    // 2. Validate state transition
    if (lease.status !== LeaseStatus.APPROVED) {
      throw new BusinessRuleException('INVALID_STATUS',
        `Cannot activate lease in ${lease.status} status`);
    }

    // 3. Check for overlapping leases on same unit
    const overlapping = await this.checkOverlappingLeases(
      lease.unit_id, lease.start_date, lease.end_date, lease.id
    );
    if (overlapping) {
      throw new BusinessRuleException('LEASE_OVERLAP',
        'Unit has overlapping lease');
    }

    // 4. Update lease
    lease.status = LeaseStatus.ACTIVE;
    lease.activated_at = new Date();
    await manager.save(lease);

    // 5. Update unit status (side effect)
    await manager.update(Unit, lease.unit_id, {
      status: UnitStatus.OCCUPIED,
      current_lease_id: lease.id,
      current_tenant_id: lease.tenant_id,
    });

    // 6. Generate first invoice (side effect)
    await this.invoiceService.generateForLease(lease, manager);

    // 7. Log activity (side effect)
    await this.activityService.log({
      entity_type: 'lease',
      entity_id: lease.id,
      action: 'ACTIVATED',
      actor_id: tenant.userId,
    }, manager);

    // 8. Queue notification (side effect)
    await this.notificationService.queue({
      type: NotificationType.LEASE_ACTIVATED,
      recipient_id: lease.tenant_id,
    });

    return lease;
  });
}
Now Claude understands not just what to build, but how it should work—including all five side effects that my early attempts kept missing.
The third tier, the pattern guides, is a set of implementation how-tos: API structure, database patterns, testing approaches. Framework-specific enough to be useful, but focused on our conventions.
# API Patterns
## Controller Structure
@Controller('resources')
export class ResourceController {
  @Get()
  @Permissions('resource:view')
  async findAll(
    @Tenant() tenant: TenantContext, // Always inject tenant
    @Query() query: QueryDto,
  ): Promise<Paginated<ResourceDto>> {
    return this.service.findAll(tenant, query);
  }

  @Post(':id/activate')
  @Permissions('resource:activate')
  @HttpCode(HttpStatus.OK) // Not 201 for state changes
  async activate(
    @Tenant() tenant: TenantContext,
    @Param('id', ParseUUIDPipe) id: string,
  ): Promise<ResourceDto> {
    return this.service.activate(tenant, id);
  }
}
## Error Handling
// Framework exceptions for HTTP errors
throw new NotFoundException('Lease not found');
throw new ForbiddenException('Insufficient permissions');

// Custom exception for business rules
throw new BusinessRuleException('LEASE_OVERLAP',
  'Unit already has active lease');

// Always include error code for client handling
If your project has significant frontend work, the three-tier system benefits from UI-specific additions:
┌─────────────────────────────┐
│     UI Quick Reference      │  ~100 lines
│   (Which pattern to use?)   │  Decision tree for forms, pages
└──────────────┬──────────────┘
               │
┌──────────────▼──────────────┐
│     Focused UI Patterns     │  ~150 lines each
│   (Forms, Modals, Tables)   │  One file per UI pattern
└──────────────┬──────────────┘
               │
┌──────────────▼──────────────┐
│    Component Methodology    │  ~400 lines
│   (Where components live)   │  Package structure, composition rules
└─────────────────────────────┘
I added these files to handle frontend complexity:
docs/patterns/ui/
├── FORM_PATTERNS.md     # Modal vs page vs wizard decision tree
├── DETAIL_PAGES.md      # Full detail vs focused detail templates
├── EMPTY_STATES.md      # Empty state patterns and copy
├── SKELETON_LOADERS.md  # Loading state patterns
└── MODALS_TOASTS.md     # Confirmation dialogs, notifications

docs/quickref/UI_PATTERNS.qr.md         # Quick decision tree
docs/patterns/COMPONENT_METHODOLOGY.md  # Where to put components
The key insight: UI decisions are recursive. Before building a page, you need to decide which pattern. Before building a form, you need to decide modal vs page vs wizard. Having these decisions documented prevents inconsistent UX across features.
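As a hedged sketch of what such a decision tree might contain (the file name comes from the listing above, but these contents are my illustration, not the author's actual file):

```markdown
# UI Patterns Quick Reference
> Full details: docs/patterns/COMPONENT_METHODOLOGY.md

## Which form pattern?
- 1-3 fields, no dependencies → modal form
- 4+ fields or file uploads   → dedicated page
- Multi-step with branching   → wizard

## Which detail page?
- Entity with tabs/relations  → full detail template
- Simple lookup record        → focused detail template
```

Because the tree is loaded before any UI work, the same inputs always lead to the same pattern choice.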
My CLAUDE.md now includes a critical rule:
### Component-First Development (CRITICAL)
**BEFORE building ANY UI feature, you MUST:**
1. Load `docs/patterns/COMPONENT_METHODOLOGY.md`
2. Decompose the design into component hierarchy
3. Check `packages/ui` for existing components
4. Build missing components in shared package FIRST
5. THEN compose the page in the app
**Import rule:**
// ✅ ALWAYS import from package
import { Button, Card } from '@myproject/ui';
// ❌ NEVER create duplicates in apps/
This single rule eliminated the "duplicate component" problem where Claude would create a new Button in every feature folder.
One pattern that emerged later—and should have been there from the start—is a formal clarification phase before writing code.
The problem: Claude would jump into implementation with reasonable-sounding assumptions that didn't match my actual requirements. I'd end up with working code that solved the wrong problem.
The fix: A structured question phase in CLAUDE.md:
### Pre-Implementation Clarification Phase
**For non-trivial tasks**: After loading context but BEFORE writing code,
ask clarifying questions.
**Skip for trivial tasks**: Typo fixes, single-line changes, simple renames.
#### Question Categories
| Category | Questions to Consider |
|----------|----------------------|
| **User Value & UX** | Does this solve a real user problem? Is the UX optimal? Could it be simpler? |
| **Edge Cases & Errors** | What happens at boundaries? Empty states? Max limits? Rollback needed? |
| **Conflicts & Coherence** | Does this contradict existing behavior? Fits existing patterns? |
| **Security & Performance** | Auth concerns? Data access controls? N+1 queries? Will this scale? |
#### Before Proceeding, Ensure:
- All ambiguities resolved
- Edge cases identified and handling agreed
- No contradictions with existing system
This added maybe 2-3 minutes to each feature but saved 20+ minutes of rework. Claude now asks "Should the delete button require confirmation?" before implementing, rather than after I notice it's missing.
At the root sits CLAUDE.md—the file that's always loaded. This is your AI's "home base" containing:
The crucial section is the Context Selection Matrix:
## Context Loading Guide
| Task Type | Load These Files |
|-----------|------------------|
| Quick lookup | docs/quickref/{DOMAIN}.qr.md |
| Simple CRUD | quickref + docs/patterns/API.md |
| Complex feature | docs/domains/{DOMAIN}.md + patterns/API.md |
| State machine | docs/domains/{DOMAIN}.md (required!) |
| Writing tests | docs/patterns/TESTING.md + quickref |
This tells Claude exactly what to read before starting any task. No guessing, no missing context, no wasted tokens loading irrelevant files.
Building this system wasn't smooth. Here are three mistakes that cost me time:
Early on, I made all my context files comprehensive 400-line domain documents. Every simple CRUD task loaded full state machines and workflow code.
The problem? Token waste and context confusion. Claude would reference complex workflow logic when I just wanted a simple GET endpoint.
The fix: Create the quickref tier. Simple questions get simple context. Complex questions get comprehensive context. Match the depth to the task.
My first quickref files duplicated content from domain files. When I updated the state machine, I'd forget to update the quickref summary. Claude would get conflicting information.
The fix: Quickrefs now summarize and reference. At the top of every quickref: > **Full details**: docs/domains/LEASING.md. The quickref has enough for simple tasks; complex work loads the source of truth.
I kept seeing the same mistakes: any types, missing tenant filters, console.logs left in code. I'd correct them, but they'd reappear.
The fix: An explicit "DO NOT" section in CLAUDE.md:
## Anti-Patterns (DO NOT)
// ❌ Don't use `any`
const data: any = response;
// ❌ Don't skip tenant filtering
return this.repo.find(); // Security bug!
// ❌ Don't hardcode IDs
const adminId = '550e8400-e29b-41d4-a716-446655440000';
// ❌ Don't use console.log in production code
console.log('debug:', data);
Negative examples are as valuable as positive ones. Maybe more.
Here's something I learned the hard way: context isn't free. Every token of context is a token that can't be used for reasoning or code generation.
I hit this wall when Claude started truncating responses mid-function. Too much context, not enough room for output.
Now I think about context loading like database queries—load only what you need:
| Context Load | ~Tokens | Use Case |
|---|---|---|
| CLAUDE.md only | 3,500 | General questions |
| + 1 quickref | 5,000 | Simple CRUD |
| + 1 domain | 8,500 | Complex feature |
| + 1 domain + 1 pattern | 11,000 | Complex new feature |
| + 2 domains (cross-cutting) | 13,500 | Multi-domain work |
My rule: Stay under 15,000 tokens to leave headroom for code generation.
This is why the three-tier system matters. I don't load full domain files for simple questions—I load the quickref. The full context exists when I need it, but I'm not paying the token cost when I don't.
One pattern that paid off unexpectedly: Architecture Decision Records (ADRs). These document why we made specific choices.
# ADR-003: Multi-tenancy Approach
## Context
We need to isolate data between organizations...
Options considered:
1. Database per tenant
2. Schema per tenant
3. Shared schema with organization_id
## Decision
We chose **shared schema with organization_id**.
## Rationale
- Simpler infrastructure (one database)
- Easier migrations (one schema)
- Works at our scale (hundreds of orgs, not thousands)
- Team has experience with this pattern
## Consequences
- Every query MUST include organization_id filter
- Risk of cross-tenant data leaks if filter forgotten
- Added TenantContext to enforce this at framework level
Without this ADR, Claude would suggest "why not use separate databases?" every few weeks. Now I point to the decision and its rationale. The AI understands the tradeoffs and doesn't re-litigate settled architecture.
I have ADRs for: monorepo structure, ORM choice, authentication strategy, state machine implementation, and file storage approach. Six documents that prevent dozens of circular conversations.
Let me show you what changed.
Me: Create a lease activation endpoint
Claude: I'll create a POST /leases/:id/activate endpoint:
@Post(':id/activate')
async activate(@Param('id') id: string) {
  const lease = await this.leaseRepo.findOne(id);
  lease.status = 'ACTIVE';
  return this.leaseRepo.save(lease);
}
What's wrong: no tenant filter, no status validation, no transaction, and none of the side effects.
Time to fix: 20+ minutes of back-and-forth
Me: Create a lease activation endpoint
@docs/domains/LEASING.md @docs/patterns/API.md
Claude: Looking at the leasing domain, activation requires an APPROVED status check, tenant isolation, an overlap check, and the documented side effects (unit update, invoice, activity log, notification).
Here's the implementation following your API patterns:
// Controller
@Post(':id/activate')
@Permissions('lease:activate')
@HttpCode(HttpStatus.OK)
async activate(
  @Tenant() tenant: TenantContext,
  @Param('id', ParseUUIDPipe) id: string,
): Promise<LeaseResponseDto> {
  return this.leaseService.activate(tenant, id);
}

// Service
async activate(tenant: TenantContext, leaseId: string): Promise<Lease> {
  return this.dataSource.transaction(async (manager) => {
    const lease = await manager.findOne(Lease, {
      where: {
        id: leaseId,
        organization_id: tenant.organizationId
      },
      relations: ['unit', 'tenant'],
    });

    if (!lease) {
      throw new NotFoundException('Lease not found');
    }

    if (lease.status !== LeaseStatus.APPROVED) {
      throw new BusinessRuleException('INVALID_STATUS',
        `Cannot activate lease in ${lease.status} status`);
    }

    // Check overlapping leases
    const overlapping = await this.checkOverlappingLeases(
      lease.unit_id,
      lease.start_date,
      lease.end_date,
      lease.id,
      manager
    );
    if (overlapping) {
      throw new BusinessRuleException('LEASE_OVERLAP',
        'Unit has overlapping lease');
    }

    // Update lease
    lease.status = LeaseStatus.ACTIVE;
    lease.activated_at = new Date();
    await manager.save(lease);

    // Update unit (side effect)
    await manager.update(Unit, lease.unit_id, {
      status: UnitStatus.OCCUPIED,
      current_lease_id: lease.id,
      current_tenant_id: lease.tenant_id,
    });

    // Generate invoice (side effect)
    await this.invoiceService.generateForLease(lease, manager);

    // Log activity (side effect)
    await this.activityService.log({
      organization_id: tenant.organizationId,
      entity_type: 'lease',
      entity_id: lease.id,
      action: 'ACTIVATED',
      actor_id: tenant.userId,
    }, manager);

    // Queue notification (side effect)
    await this.notificationService.queue({
      type: NotificationType.LEASE_ACTIVATED,
      recipient_id: lease.tenant_id,
    });

    return lease;
  });
}
Time to working code: First try. Maybe minor tweaks.
That's the difference. Not "slightly better"—categorically different. The AI understood the requirements, the patterns, the side effects, and the error handling. All from two file references.
The template is generic. Here's how to make it yours, using a SaaS billing system as an example.
List your major feature areas: for a SaaS billing system, think auth, subscriptions, invoices.
Copy docs/quickref/_TEMPLATE.qr.md to docs/quickref/SUBSCRIPTIONS.qr.md:
# Subscriptions Quick Reference
> **Full details**: docs/domains/SUBSCRIPTIONS.md
## Entity: Subscription
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| id | UUID | auto | PK |
| organization_id | UUID | yes | Tenant isolation |
| plan_id | UUID | yes | FK → plans |
| status | enum | yes | SubscriptionStatus |
| current_period_start | timestamp | yes | |
| current_period_end | timestamp | yes | |
| cancel_at_period_end | boolean | yes | Default: false |
## Enums
enum SubscriptionStatus {
  TRIALING = 'TRIALING',
  ACTIVE = 'ACTIVE',
  PAST_DUE = 'PAST_DUE',
  CANCELED = 'CANCELED',
  UNPAID = 'UNPAID'
}
## State Transitions
TRIALING → ACTIVE, CANCELED
ACTIVE → PAST_DUE, CANCELED
PAST_DUE → ACTIVE, UNPAID, CANCELED
UNPAID → ACTIVE, CANCELED
CANCELED → (terminal)
## API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | /subscriptions | Create subscription |
| POST | /subscriptions/:id/cancel | Cancel subscription |
| POST | /subscriptions/:id/reactivate | Reactivate canceled |
~80 lines. Enough for Claude to handle basic subscription work.
Update CLAUDE.md:
## Domain Quick Reference
| Domain | Quick Ref | Full Context |
|--------|-----------|--------------|
| Auth | quickref/AUTH.qr.md | domains/AUTH.md |
| Subscriptions | quickref/SUBSCRIPTIONS.qr.md | domains/SUBSCRIPTIONS.md |
| Invoices | quickref/INVOICES.qr.md | domains/INVOICES.md |
Now Claude knows where to find subscription context.
You don't need full domain files immediately. Create them when you hit complex logic:
domains/SUBSCRIPTIONS.md
Build what you need, when you need it.
As the project grew, I needed more than context files—I needed to track what to build and in what order.
The task system has three components:
The master task index (docs/tasks/TASK_INDEX.md):
# Task Index
**Total**: 127 tasks | **Complete**: 0 | **In Progress**: 0
## Features
| # | Feature | Tasks | Complete | Status |
|---|---------|-------|----------|--------|
| 0 | Foundation | 18 | 0 | 🔴 Not Started |
| 1 | Portfolio | 18 | 0 | 🔴 Not Started |
| 2 | Properties | 15 | 0 | 🔴 Not Started |
| 3 | Units | 13 | 0 | 🔴 Not Started |
...
## Recommended Order
0.x Foundation → 2.x Properties → 3.x Units → 5.x Tenants → 4.x Leases
Per-feature task files (e.g. docs/tasks/01-portfolio/tasks.md):
Each feature gets a task file with atomic, testable tasks:
# Portfolio Tasks
**Context**: Load `docs/quickref/PORTFOLIO.qr.md`
## Tasks
### 1.1 Create Portfolio Entity
- [ ] Define entity with fields from quickref
- [ ] Add TypeORM decorators
- [ ] Create migration
- **Tests**: Entity creates, validates required fields
- **Blocks**: 1.2, 1.3
### 1.2 Portfolio CRUD Endpoints
- [ ] GET /portfolios (list with pagination)
- [ ] GET /portfolios/:id (single with relations)
- [ ] POST /portfolios (create)
- [ ] PATCH /portfolios/:id (update)
- **Tests**: Each endpoint, validation errors, tenant isolation
- **Blocked by**: 1.1
Context slices (docs/tasks/context-slices/README.md):
Maps task types to required context files:
## Context Slices
| Task Pattern | Load These Files |
|--------------|------------------|
| `*.1` Entity tasks | DATABASE.md + quickref |
| `*.2` CRUD tasks | API.md + quickref |
| `*.3` State machine tasks | Full domain file |
| `*.4` UI tasks | COMPONENT_METHODOLOGY.md + domain |
This system lets me (or Claude) pick up any task and know exactly what to build, which context files to load, and which tests define done.
I've packaged this system into a template repository you can fork and customize.
context-engineering-template/
├── CLAUDE.md                    # Customize this first
├── .cursorrules                 # Cursor IDE variant
├── docs/
│   ├── PROGRESS.md              # Project status
│   ├── CONTEXT_INDEX.md         # Navigation guide
│   ├── quickref/
│   │   ├── _TEMPLATE.qr.md      # Copy for each domain
│   │   └── EXAMPLE.qr.md        # Annotated example
│   ├── domains/
│   │   ├── _TEMPLATE.md         # Copy when needed
│   │   └── EXAMPLE.md           # Full example with state machine
│   ├── patterns/
│   │   ├── API.md               # REST patterns
│   │   ├── DATABASE.md          # Database patterns
│   │   └── TESTING.md           # Testing patterns
│   ├── decisions/
│   │   └── _TEMPLATE.md         # ADR template
│   └── tasks/
│       ├── TASK_INDEX.md        # Master tracker
│       ├── README.md            # How to use tasks
│       ├── context-slices/      # Task → context mapping
│       └── 00-example/          # Example feature tasks
All templates include annotations explaining each section. Delete them once you understand the system.
Let me be concrete about what this cost and what it returned.
**Investment**: about a week of focused documentation work, spread over three months of iteration.
**Return**: hundreds of hours saved on repetitive explanation and correction, by my estimate.
But the bigger win isn't time—it's quality. The code Claude generates now fits naturally into the codebase. Fewer bugs. Less refactoring. More consistency. The patterns I defined in documentation become the patterns in the code.
One thing I underestimated: Claude doesn't naturally verify its work. It generates code and moves on. I added an explicit quality checklist to CLAUDE.md:
```markdown
## Quality Checklist (MANDATORY)

**Before marking ANY task complete:**

- [ ] Type check → `pnpm typecheck` → ZERO errors
- [ ] Lint → `pnpm lint` → ZERO errors
- [ ] Format → `pnpm format` → Applied
- [ ] Tests pass → `pnpm test` → ALL green
- [ ] Tests written → New code has tests → See coverage rules
- [ ] Build works → `pnpm build` → Succeeds
```
More importantly, I require Claude to output the results:
```markdown
### Report Format

After completing work, output:

## Quality Checklist
- [x] Type check passed
- [x] Lint passed (0 errors)
- [x] Formatted
- [x] Tests passed (12 passed, 0 failed)
- [x] New tests: lease.service.spec.ts (8 tests)
- [x] Build succeeded
```
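That report can even be produced mechanically rather than hand-written. As a sketch (the `CheckResult` shape and the `renderReport` name are my own, not part of the actual CLAUDE.md setup), a small helper could turn check outcomes into exactly the markdown Claude is asked to emit:

```typescript
// Hypothetical report renderer: turns pass/fail results into the
// markdown checklist format the Report Format section requires.
interface CheckResult {
  label: string;  // e.g. "Type check passed"
  passed: boolean;
}

function renderReport(results: CheckResult[]): string {
  const lines = results.map((r) => `- [${r.passed ? "x" : " "}] ${r.label}`);
  return ["## Quality Checklist", ...lines].join("\n");
}

// Example: feed it the outcome of each pnpm command.
const report = renderReport([
  { label: "Type check passed", passed: true },
  { label: "Lint passed (0 errors)", passed: true },
]);
```

The same idea works as a CI step: run each check, collect results, and fail the build if any box is unchecked.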
This seemingly small addition had an outsized impact. Claude now runs the checks because it knows it needs to report them. The verification step went from "often skipped" to "always done."
Context Engineering isn't about writing more documentation—it's about writing smarter documentation. Documentation designed for AI consumption, structured for quick loading, and organized for targeted retrieval.
The 90+ markdown files I created represent about a week of focused work, spread over three months of iteration. They've saved me hundreds of hours of repetitive explanation and correction.
But more importantly, they changed the nature of my work with AI. Instead of fighting against amnesia every session, I have a collaborator that understands my project deeply—every time, from the first message.
If you're using AI coding assistants more than a few times per week, this investment pays for itself fast. The question isn't whether to build a context system, but how soon you'll start.
Fork the template. Customize it for your project. And stop repeating yourself.
Template Repository: github.com/daylay92/context-engineering-template
Have questions or built something with this? I'd love to hear about it. Open an issue on the template repo or reach out on Twitter/X.
This article was written while building Qace Homes PMS, a property management system. The context engineering system described here evolved from real frustration into a real solution—90+ files that fundamentally changed how I work with AI.
Tags: #ai #claude #cursor #copilot #productivity #developer-tools #programming #context-engineering
2026-02-01 22:16:59
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
I work at the unique intersection of AI, art, and design: I design AI & 3D fashion and study at Parsons School of Design, exploring how AI can reshape fashion and creative workflows. As a Google Developer Expert (GDE) in AI and Cloud, I regularly contribute to open source projects.
For this challenge, I wanted to build a portfolio that breaks down the silos between my "developer" life, my "artist & designer" life, and my role as a leader in the Google Developers communities. Usually, developers have a GitHub and designers have a Behance. I needed one space that could showcase both my code and my designs.
My goals for the portfolio site were to: 1) showcase my multi-dimensional skill set, and 2) explore how quickly I can prototype with Google's GenAI tools.
I built my portfolio website using Google AI:
Step 1. NotebookLM: First, I explored the hackathon rules and brainstormed ideas by giving NotebookLM the links to the "New Year, New You Portfolio Challenge", my 2025 Year in Review blog post, and a few of my YouTube videos.
Step 2. Stitch: I went through a few iterations of UI/UX design in Stitch, then selected a design and exported its code to Google AI Studio.
Step 3. Google AI Studio: I spent a few hours coding there and deployed my portfolio to Cloud Run by pressing the rocket icon.
Throughout the entire process, I used Gemini to guide me through tools and processes, brainstorm designs, and help write this blog post.
My `/portfolio` assets directory acts as a Content Delivery Network (CDN) for my assets, serving "Web Quality" (600x800) images to keep the Cloud Run container lightweight and fast.

One of the interesting challenges was intellectual property. As an open-source advocate, I wanted to share my code, but as a designer, I wanted to protect my artwork. I implemented a Hybrid Licensing model.
As a designer, I want to emphasize aesthetics, and I want my portfolio to look fashion-forward and editorial.
Instead of the standard "About" and "Projects" tabs, I structured my portfolio site around the three dimensions of my skills. I used Gemini to help me refine this information architecture so it tells a story rather than just listing links:
I am most proud of the fact that I put together my portfolio within two days, with limited web development knowledge. It was also great that I successfully deployed to Cloud Run from Google AI Studio. Seeing my "Art & Design" work live on a scalable Google Cloud URL, without having to configure a Dockerfile manually, was a fantastic "New Year" win.
Now that I have a beautiful portfolio website deployed to Cloud Run from Google AI Studio, I plan to transition to Antigravity to build the backend, adding a few interactive features with Gemini and generative media models on Google Cloud.