2026-03-15 06:28:42
When I started building PropFirm Key, I quickly realized that comparing proprietary trading firms is far more complex than just looking at sticker prices. A $200 challenge from Firm A might actually cost more than a $300 challenge from Firm B once you factor in activation fees, profit splits, reset costs, and available discounts.
That insight led me to build the True Cost Calculator — a tool that computes the real, all-in cost of prop firm challenges. In this article, I'll walk through the technical implementation: the data model, the calculation engine, the React UI, and the performance considerations that come with processing 370+ challenges in the browser.
Live demo: propfirmkey.com/en/tools/true-cost-calculator
Most prop firm comparison sites show the listed price. But traders actually pay more (or less) depending on activation fees, profit splits, reset costs, and available discount codes.
The True Cost Calculator normalizes all of these into a single comparable metric: cost per dollar of funded capital, expressed as a percentage.
I chose SQLite for this project because the dataset is read-heavy and relatively small (35 firms, 370 challenges, 19 active offers). SQLite eliminates the need for a database server, simplifies Docker deployment, and provides incredible read performance.
Here's the core schema using Drizzle ORM:
// src/lib/db/schema.ts
import { sqliteTable, text, integer, real } from 'drizzle-orm/sqlite-core';
export const firms = sqliteTable('firms', {
id: text('id').primaryKey(),
slug: text('slug').notNull(),
name: text('name').notNull(),
logo: text('logo'),
country: text('country').notNull(),
rating: real('rating').default(0),
trustpilotRating: real('trustpilot_rating'),
totalReviews: integer('total_reviews').default(0),
profitSplit: integer('profit_split').default(80),
minPrice: integer('min_price').default(99),
marketType: text('market_type').default('forex'),
isActive: integer('is_active').default(1),
// ... ~87 columns total covering features, rules, payouts
});
export const challenges = sqliteTable('challenges', {
id: text('id').primaryKey(),
firmId: text('firm_id').notNull().references(() => firms.id),
name: text('name'),
accountSize: integer('account_size').notNull(),
steps: text('steps').notNull(), // "1-step", "2-step", "3-step"
originalPrice: real('original_price').notNull(),
discountedPrice: real('discounted_price'),
activationFee: real('activation_fee'),
profitSplit: real('profit_split').default(80),
profitTargetPhase1: real('profit_target_phase1').default(0),
profitTargetPhase2: real('profit_target_phase2'),
dailyLossLimit: real('daily_loss_limit').default(0),
maxLoss: real('max_loss').default(0),
maxLossType: text('max_loss_type').default('static'),
minTradingDays: integer('min_trading_days').default(0),
});
export const offers = sqliteTable('offers', {
id: text('id').primaryKey(),
firmId: text('firm_id').notNull().references(() => firms.id),
promoCode: text('promo_code').notNull(),
discountPercent: real('discount_percent').default(0),
isExclusive: integer('is_exclusive').default(0),
isNew: integer('is_new').default(0),
title: text('title').notNull(),
});
The key design decision here is separating challenges from firms. A single firm might offer 15-20 different challenge configurations (varying account sizes, step counts, and rule sets). This one-to-many relationship is essential for accurate comparisons.
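For Drizzle's relational query API (`db.query.challenges.findMany({ with: { firm } })`) to resolve that one-to-many relationship, the relations also need to be declared alongside the tables. A sketch of what those declarations look like (the file path is an assumption):

```typescript
// src/lib/db/relations.ts — assumed location; Drizzle's relational queries
// only know about foreign keys that are declared via relations().
import { relations } from 'drizzle-orm';
import { firms, challenges, offers } from './schema';

// Each challenge belongs to exactly one firm
export const challengesRelations = relations(challenges, ({ one }) => ({
  firm: one(firms, {
    fields: [challenges.firmId],
    references: [firms.id],
  }),
}));

// Each offer also belongs to one firm
export const offersRelations = relations(offers, ({ one }) => ({
  firm: one(firms, { fields: [offers.firmId], references: [firms.id] }),
}));

// A firm has many challenges and many offers
export const firmsRelations = relations(firms, ({ many }) => ({
  challenges: many(challenges),
  offers: many(offers),
}));
```

These declarations are pure configuration: they don't create database-level constraints, they just tell the query builder how to join.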
The server-side data fetching uses Drizzle's relational queries:
// src/lib/db/queries.ts
import { db } from './index';
import { challenges, firms, offers } from './schema';
import { eq, desc } from 'drizzle-orm';
export async function getChallengesWithFirm({ limit = 200 }) {
return db.query.challenges.findMany({
limit,
with: {
firm: {
columns: {
id: true,
slug: true,
name: true,
logo: true,
countryFlag: true,
rating: true,
totalReviews: true,
isVerified: true,
marketType: true,
},
},
},
});
}
export async function getOffersWithFirm(limit = 30) {
return db.query.offers.findMany({
limit,
with: {
firm: {
columns: {
id: true,
name: true,
slug: true,
},
},
},
});
}
Using with for eager loading avoids N+1 queries. The entire dataset (370 challenges + 19 offers) loads in a single round trip, typically under 5ms with SQLite.
This is the heart of the tool. The calculation needs to be client-side because users interact with filters and parameters in real time.
// True Cost = (Effective Price + Activation Fee) / Account Size * 100
interface ChallengeData {
originalPrice: number;
discountedPrice?: number;
activationFee?: number;
accountSize: number;
profitSplit: number;
steps: string;
dailyLossLimit: number;
maxLoss: number;
}
interface OfferData {
firmId: string;
promoCode: string;
discountPercent: number;
}
function calculateTrueCost(
challenge: ChallengeData,
offer?: OfferData
): {
effectivePrice: number;
trueCostPercent: number;
savingsAmount: number;
totalCost: number;
} {
// Step 1: Determine base price
const basePrice = challenge.discountedPrice || challenge.originalPrice;
// Step 2: Apply promo discount if available
const discountMultiplier = offer
? (100 - offer.discountPercent) / 100
: 1;
const effectivePrice = basePrice * discountMultiplier;
// Step 3: Add activation fee (charged after passing)
const activationFee = challenge.activationFee || 0;
const totalCost = effectivePrice + activationFee;
// Step 4: Calculate true cost as percentage of funded capital
const trueCostPercent = (totalCost / challenge.accountSize) * 100;
// Step 5: Calculate savings vs. original price
const savingsAmount = challenge.originalPrice - effectivePrice;
return {
effectivePrice: Math.round(effectivePrice * 100) / 100,
trueCostPercent: Math.round(trueCostPercent * 1000) / 1000,
savingsAmount: Math.round(savingsAmount * 100) / 100,
totalCost: Math.round(totalCost * 100) / 100,
};
}
Consider a few real examples from our database:
| Firm | Account Size | Original Price | Discount | Effective Price | Activation Fee | True Cost % |
|---|---|---|---|---|---|---|
| Maven Trading | $10,000 | $13 | — | $13 | $0 | 0.130% |
| The5ers | $6,000 | $22 | 5% (PFKEY) | $20.90 | $0 | 0.348% |
| Blue Guardian | $10,000 | $27 | 50% (PFK) | $13.50 | $0 | 0.135% |
| FXIFY | $5,000 | $39 | 28% (PFK) | $28.08 | $0 | 0.562% |
Without the True Cost calculation, you might think $13 and $27 are wildly different. But with a 50% discount code, Blue Guardian becomes nearly identical to Maven Trading per dollar of funded capital.
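The arithmetic behind the table is easy to check by hand. A standalone sketch using the Blue Guardian row ($27 list price, 50% promo, no activation fee, $10,000 account):

```typescript
// Standalone check of the true-cost math for the Blue Guardian row.
const basePrice = 27;
const discountPercent = 50;
const activationFee = 0;
const accountSize = 10_000;

const effectivePrice = basePrice * (100 - discountPercent) / 100; // $13.50
const totalCost = effectivePrice + activationFee;
const trueCostPercent = (totalCost / accountSize) * 100;          // ≈ 0.135

console.log(effectivePrice.toFixed(2), trueCostPercent.toFixed(3));
```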
Next.js 16's App Router with Server Components is perfect for this use case. The data fetching happens server-side, reducing the JavaScript bundle and providing instant SEO-friendly HTML.
// src/app/[locale]/(public)/tools/true-cost-calculator/page.tsx
import { getChallengesWithFirm, getOffersWithFirm } from '@/lib/db/queries';
import { TrueCostBloomberg } from '@/components/tools/TrueCostBloomberg';
export const revalidate = 3600; // ISR: revalidate every hour
export default async function TrueCostCalculatorPage() {
// Server-side data fetching — zero client-side API calls
const [challenges, offers] = await Promise.all([
getChallengesWithFirm({ limit: 200 }),
getOffersWithFirm(30),
]);
// Transform and serialize data for the client component
const challengesData = challenges.map((c) => ({
id: c.id,
firmId: c.firmId,
accountSize: c.accountSize,
steps: c.steps,
originalPrice: c.originalPrice,
discountedPrice: c.discountedPrice || undefined,
activationFee: c.activationFee || undefined,
profitSplit: c.profitSplit || 80,
dailyLossLimit: c.dailyLossLimit || 0,
maxLoss: c.maxLoss || 0,
firm: c.firm ? {
id: c.firm.id,
slug: c.firm.slug,
name: c.firm.name,
logo: c.firm.logo || undefined,
rating: c.firm.rating || 0,
} : undefined,
}));
const offersData = offers.map((o) => ({
id: o.id,
firmId: o.firmId,
promoCode: o.promoCode,
discountPercent: o.discountPercent || 0,
title: o.title,
isExclusive: o.isExclusive || false,
}));
return (
<section className="container py-8">
<TrueCostBloomberg
challenges={challengesData}
offers={offersData}
/>
</section>
);
}
The key architectural decisions:
- `Promise.all` for parallel data fetching — challenges and offers are independent queries
- Strip `null` values and normalize types before serializing to the client
- `revalidate = 3600` — ISR gives us fresh data hourly without hammering SQLite on every request

The TrueCostBloomberg component handles filtering, sorting, and the Bloomberg terminal-inspired UI. Here's the simplified structure:
'use client';
import { useMemo, useState } from 'react';
import { useTranslations } from 'next-intl';
interface TrueCostProps {
challenges: ChallengeData[];
offers: OfferData[];
}
export function TrueCostBloomberg({ challenges, offers }: TrueCostProps) {
const t = useTranslations('tools');
// Filters state
const [accountSizeRange, setAccountSizeRange] = useState<[number, number]>([0, 500000]);
const [selectedSteps, setSelectedSteps] = useState<string[]>([]);
const [selectedMarket, setSelectedMarket] = useState<string>('all');
const [sortBy, setSortBy] = useState<'trueCost' | 'price' | 'savings'>('trueCost');
// Build offers lookup map: firmId -> best offer
const offersMap = useMemo(() => {
const map = new Map<string, OfferData>();
for (const offer of offers) {
const existing = map.get(offer.firmId);
if (!existing || offer.discountPercent > existing.discountPercent) {
map.set(offer.firmId, offer);
}
}
return map;
}, [offers]);
// Calculate true cost for each challenge and apply filters
const processedChallenges = useMemo(() => {
return challenges
.map((challenge) => {
const offer = offersMap.get(challenge.firmId);
const result = calculateTrueCost(challenge, offer);
return { ...challenge, ...result, offer };
})
.filter((c) => {
if (c.accountSize < accountSizeRange[0] || c.accountSize > accountSizeRange[1]) return false;
if (selectedSteps.length > 0 && !selectedSteps.includes(c.steps)) return false;
if (selectedMarket !== 'all' && c.firm?.marketType !== selectedMarket) return false;
return true;
})
.sort((a, b) => {
switch (sortBy) {
case 'trueCost': return a.trueCostPercent - b.trueCostPercent;
case 'price': return a.effectivePrice - b.effectivePrice;
case 'savings': return b.savingsAmount - a.savingsAmount;
default: return 0;
}
});
}, [challenges, offersMap, accountSizeRange, selectedSteps, selectedMarket, sortBy]);
return (
<div className="glass-card p-6">
{/* Filters bar */}
<div className="flex flex-wrap gap-4 mb-6">
<AccountSizeSlider value={accountSizeRange} onChange={setAccountSizeRange} />
<StepFilter selected={selectedSteps} onChange={setSelectedSteps} />
<MarketFilter selected={selectedMarket} onChange={setSelectedMarket} />
<SortSelect value={sortBy} onChange={setSortBy} />
</div>
{/* Results table */}
<div className="table-premium">
<table>
<thead>
<tr>
<th>{t('firm')}</th>
<th>{t('account_size')}</th>
<th>{t('original_price')}</th>
<th>{t('discount')}</th>
<th>{t('effective_price')}</th>
<th>{t('true_cost')}</th>
<th>{t('savings')}</th>
</tr>
</thead>
<tbody>
{processedChallenges.map((c) => (
<tr key={c.id} className="premium-row">
<td>
<FirmIdentity firm={c.firm} />
</td>
<td className="font-data tabular-nums">
${c.accountSize.toLocaleString()}
</td>
<td className="font-data tabular-nums text-muted-foreground line-through">
${c.originalPrice}
</td>
<td>
{c.offer && (
<span className="savings-badge">
-{c.offer.discountPercent}% ({c.offer.promoCode})
</span>
)}
</td>
<td className="font-data tabular-nums text-status-success">
${c.effectivePrice}
</td>
<td className="font-data tabular-nums font-bold">
{c.trueCostPercent.toFixed(3)}%
</td>
<td className="font-data tabular-nums text-status-success">
{c.savingsAmount > 0 && `$${c.savingsAmount}`}
</td>
</tr>
))}
</tbody>
</table>
</div>
</div>
);
}
Processing 370 challenges with calculations, filtering, and sorting on every render would be wasteful. The useMemo hook ensures we only recompute when the inputs actually change:

- `offersMap` — rebuilt only when the offers array changes (effectively never after initial render)
- `processedChallenges` — recalculated only when filters change or the base data changes

In practice, this means the calculation runs ~5-10 times during a typical user session, not 370 times per keystroke.
PropFirm Key serves traders globally, so every page — including the True Cost Calculator — is available in 10 languages: English, French, Spanish, German, Portuguese, Italian, Russian, Chinese, Japanese, and Dutch.
We use next-intl with URL-based routing:
// src/i18n/config.ts
export const locales = ['en', 'fr', 'es', 'de', 'pt', 'it', 'ru', 'zh', 'ja', 'nl'] as const;
export type Locale = (typeof locales)[number];
// URL structure: /en/tools/true-cost-calculator, /fr/tools/true-cost-calculator, etc.
The translation files contain all UI strings:
// src/messages/en.json (tools namespace)
{
"tools": {
"true_cost_meta_title": "True Cost Calculator — Compare Real Prop Firm Prices",
"true_cost_h1_prefix": "Calculate the",
"true_cost_h1_highlight": "True Cost",
"true_cost_live_badge": "{count} challenges analyzed live",
"firm": "Firm",
"account_size": "Account Size",
"original_price": "Original Price",
"effective_price": "Effective Price",
"true_cost": "True Cost %",
"savings": "Savings"
}
}
Server Components use getTranslations() (async), while Client Components use the useTranslations() hook. This separation is critical — mixing them causes hydration mismatches.
Running SQLite in Docker requires careful volume management:
# Dockerfile
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
# SQLite WAL mode needs the DIRECTORY mounted, not the file
# This prevents WAL corruption from cross-filesystem access
VOLUME ["/app/data"]
EXPOSE 3000
CMD ["node", "server.js"]
# docker-compose.yml
services:
app:
build: .
ports:
- "3001:3000"
volumes:
- ./data:/app/data # SQLite directory (NOT file)
- static-chunks:/app/.next/static # Persistent chunks across rebuilds
restart: unless-stopped
volumes:
static-chunks:
We had a production incident where accessing the SQLite database from both the host and the container simultaneously corrupted the WAL (Write-Ahead Log). The fix: always access the database through docker exec, never directly from the host while the container is running.
# WRONG — causes WAL corruption
sqlite3 data/propfirmkey.db "SELECT * FROM firms"
# CORRECT — same process as the app
docker exec propfirmkey-app sqlite3 /app/data/propfirmkey.db "SELECT * FROM firms"
A tool page needs extra SEO attention because search engines can't "use" the interactive tool. We compensate with:
// Metadata with full SEO configuration
export async function generateMetadata({ params }): Promise<Metadata> {
const { locale } = await params;
const t = await getTranslations({ locale, namespace: 'tools' });
const canonical = `https://propfirmkey.com/${locale}/tools/true-cost-calculator`;
return {
title: t('true_cost_meta_title'),
description: t('true_cost_meta_description'),
keywords: ['prop firm true cost', 'prop firm calculator', 'cheapest prop firm'],
openGraph: {
title: t('true_cost_meta_title'),
description: t('true_cost_meta_description'),
url: canonical,
images: [{ url: 'https://propfirmkey.com/og-image.png', width: 1200, height: 630 }],
},
alternates: {
canonical,
languages: generateHreflangLinks('/tools/true-cost-calculator').languages,
},
};
}
The current Lighthouse scores for the True Cost Calculator page:
| Metric | Score |
|---|---|
| Performance | 92 |
| Accessibility | 97 |
| Best Practices | 95 |
| SEO | 100 |
Key factors:
- `content-visibility: auto` on below-the-fold sections for paint optimization
- `tabular-nums` and `font-data` (JetBrains Mono) — pre-loaded to prevent layout shift on numeric data
- `revalidate = 3600` — cached at the edge, rebuilt hourly

SQLite is production-ready for read-heavy, single-server applications. Our 370-challenge dataset loads in under 5ms.
Server Components + Client interactivity is the sweet spot. Fetch data server-side, calculate client-side. Best of both worlds.
True cost > listed price. If you're building any comparison tool, think about what "cost" really means to your users. The compound metric (cost/funded capital) revealed insights that raw prices obscured.
i18n from day one. Retrofitting 10 locales is painful. We started with next-intl early and now serve traders in their native language across 10 locales.
SQLite + Docker needs care. Mount the directory, not the file. Never access from both host and container. Use WAL mode but respect its constraints.
The True Cost Calculator is live at propfirmkey.com/en/tools/true-cost-calculator — processing 370+ challenges from 35 firms with real-time discount calculations. If you're building fintech tools with Next.js, I hope this architectural breakdown was useful.
The full platform at PropFirm Key covers firm comparisons, detailed reviews, and the latest discount codes for prop trading challenges.
Built with Next.js 16, TypeScript, SQLite, Drizzle ORM, and Tailwind CSS. Deployed on Docker with Caddy reverse proxy.
2026-03-15 06:22:13
Modern AI coding assistants like Claude, GitHub Copilot, and ChatGPT can dramatically accelerate development. Recently, while working on a feature update, I had to modify an existing API to fetch data from a new system while maintaining backward compatibility.
The migration was gradual. Some clients would continue using the old system for a while, while others would start using the new one. Because of that, the implementation had to support both behaviors during the transition period.
Like many developers today, I used an AI coding assistant to speed up the implementation.
At first, it seemed straightforward.
But the process turned out to be more interesting than expected.
The AI-generated code worked functionally. It handled the new system integration, preserved backward compatibility, and integrated with the existing service.
But after reviewing the code carefully, a few issues surfaced: unnecessary abstractions, leftover logic from earlier iterations, and patterns that didn't quite fit the codebase.
In other words, the code worked, but it wasn't clean.
It took multiple iterations and careful review before the implementation reached a version that was both correct and maintainable.
This experience reinforced something important.
AI assistants are excellent at generating working code quickly. They help remove boilerplate, explore possible implementations, and reduce the time spent writing repetitive logic.
However, they do not fully understand the context of your system.
They don't know your team's conventions, your architectural constraints, or the history behind past design decisions.
Because of this, AI often generates code that is technically correct but contextually imperfect.
And that is where code review becomes critical.
When development speed increases, the risk of suboptimal code entering the codebase also increases.
If developers accept AI-generated code in the first iteration, teams may gradually accumulate unneeded abstractions, dead code paths, and patterns that clash with the existing architecture.
Over time, these small issues compound, making systems harder to maintain.
This means developers must become even better reviewers than before.
Good code review is no longer just about catching bugs. It is about evaluating whether the code truly fits the system.
When reviewing AI-assisted code, I now intentionally ask a few additional questions.
AI often introduces extra flexibility that the use case does not require.
Extra flexibility today can easily become unnecessary complexity tomorrow.
Because AI suggestions often evolve across multiple prompts, some intermediate logic can remain even after the final version is generated.
This can result in code paths that are never actually used.
AI may suggest patterns that work in general but do not align with your system's architecture.
Examples include abstractions that duplicate existing utilities, or error-handling styles that differ from the rest of the codebase.
Maintainability matters.
Even if the code works, the question remains:
Would another engineer understand this logic six months from now?
One interesting shift I’ve noticed is that development is starting to look more like collaboration between a developer and an AI assistant.
The workflow increasingly looks like this: the developer frames the problem, the AI generates code, and the developer reviews, refines, and iterates before the usual code review.
The developer's role shifts slightly from writing every line of code to evaluating, refining, and validating generated code.
As AI becomes more capable at writing code, the value of engineers will increasingly come from their ability to:
In other words, the skill of thinking about code becomes even more important than writing code.
Here is a quick comparison between the traditional and the AI-assisted development flows.
Traditional:

Developer
↓
Write Code
↓
Code Review
↓
Merge
AI-assisted:

Developer Idea
↓
AI Generates Code
↓
Developer Reviews & Refines
↓
AI Iteration
↓
Code Review
↓
Merge
AI accelerates code generation, but developers are still responsible for validating the architecture, reducing unnecessary complexity, and ensuring maintainability.
AI tools are powerful accelerators for software development. They can help teams move faster and explore solutions more quickly.
But speed should not come at the cost of quality.
If anything, the rise of AI in development makes strong code review practices more important than ever.
Ultimately, AI can generate code, but engineers are still responsible for the systems they build.
If you've read this far, I've done a satisfactory job of keeping you reading. Please leave a comment or share any corrections.
2026-03-15 06:17:27
Temporary post to validate the updated n8n → blog workflow. If this renders on the site it confirms fresh content is being processed correctly.
The workflow that pushes GitHub content into the blog collection was updated and retested. The source for the site is at https://github.com/COMMENTERTHE9/Cx_lang and the deployed site lives at https://cx-lang.com — this post exercises the end-to-end path from repo to published blog content.
The change was made to verify the end-to-end automation. This is a functional validation: push content to the repo, let the workflow (n8n) pick it up, and confirm the content appears on the blog.
Confirm this post appears on https://cx-lang.com. Delete this post after the workflow test is complete.
Originally published at https://cx-lang.com/blog/2026-03-14-n8n-test-post-9
2026-03-15 06:17:03
Let's be real: treating a Large Language Model (LLM) provider like a highly available, always-on utility is a massive architectural risk. We've all experienced it. You deploy a sophisticated agentic workflow, and suddenly the primary API goes down, gets aggressively rate-limited, or starts throwing 5xx errors.
Relying on a single provider—even an industry giant—creates a systemic vulnerability. To build true enterprise-grade AI applications, we have to decouple the application layer from specific vendors. The goal is to engineer a resilient "intelligence backbone" that autonomously shifts traffic based on availability, latency, and unit economics.
Instead of wrestling with half a dozen different SDKs and writing custom retry loops for OpenAI, Anthropic, Meta, and DeepSeek, modern architectures are shifting toward unified routing planes.
By using an API gateway like OpenRouter, your application interfaces with just one endpoint. The complexity is handled entirely behind the scenes: the gateway uses built-in fallback logic to automatically reroute failed requests to secondary models, or to alternative infrastructure providers hosting the exact same open-weight model.
The cleanest way to manage routing at scale is by externalizing your logic into a declarative JSON configuration. This keeps your application code lean and allows Platform or FinOps teams to adjust routing priorities dynamically without triggering a full code deployment.
Here is what a production-ready routing payload looks like:
{
"model": "meta-llama/llama-3.3-70b-instruct",
"messages": [{"role": "user", "content": "Analyze this dataset..."}],
"provider": {
"order": ["deepinfra/turbo", "fireworks"],
"allow_fallbacks": true,
"sort": "latency",
"zdr": true,
"max_price": {"prompt": 1, "completion": 2}
}
}
Beyond provider fallbacks, OpenRouter supports model-level fallbacks using the models array. This is a game-changer for resilience—if your primary model is completely unavailable across all providers, the gateway can automatically fall back to semantically similar models:
{
"models": [
"anthropic/claude-sonnet-4.5",
"openai/gpt-5-mini",
"google/gemini-3-flash-preview"
],
"messages": [{"role": "user", "content": "Analyze this dataset..."}],
"provider": {
"sort": {"by": "throughput", "partition": "none"},
"zdr": true
}
}
Setting partition: "none" removes model grouping, allowing the router to sort endpoints globally across all models. This means if Claude is slow or down, your request automatically routes to the fastest available alternative—whether that's GPT-5-mini or Gemini—without any code changes.
For enterprise applications with strict latency requirements, you can set explicit performance thresholds using preferred_max_latency and preferred_min_throughput. These work with percentile statistics (p50, p75, p90, p99) calculated over a rolling 5-minute window:
{
"model": "deepseek/deepseek-v3.2",
"messages": [{"role": "user", "content": "Generate report..."}],
"provider": {
"sort": "price",
"preferred_max_latency": {
"p90": 2,
"p99": 5
},
"preferred_min_throughput": {
"p90": 50
}
}
}
Providers not meeting these thresholds are deprioritized (moved to fallback positions) rather than excluded entirely. This ensures your requests always execute while preferring endpoints that meet your SLA requirements.
Why this configuration is powerful:
order): We explicitly target optimized endpoints first, like DeepInfra's high-speed turbo instances.sort): Setting this to "latency" instructs the gateway to actively seek out the fastest responding provider for your chosen model.zdr): A non-negotiable flag for enterprise compliance, ensuring your chosen providers do not log your sensitive prompts.max_price): Prevents automated fallovers from accidentally defaulting to a premium, budget-draining endpoint during a weekend outage.Your application code remains blissfully simple. You just inject this JSON into a standard REST call:
import requests, json, os
# API key, e.g. loaded from the environment
API_KEY = os.environ["OPENROUTER_API_KEY"]
# Load declarative routing policy
config = json.load(open("routing_config.json"))
# A single API call handles all fallbacks and routing internally
response = requests.post(
"https://openrouter.ai/api/v1/chat/completions",
headers={"Authorization": f"Bearer {API_KEY}"},
json=config
)
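One practical detail worth logging: the completion response reports which model ultimately served the request (the `model` field of the standard chat-completions response shape), so you can monitor how often fallbacks actually fire. A tiny helper:

```python
def served_model(response_body: dict) -> str:
    """Return the model that actually handled the request, if reported."""
    return response_body.get("model", "unknown")

# With model-level fallbacks enabled, this may differ from the first
# model you asked for.
print(served_model({"model": "openai/gpt-5-mini"}))
```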
Running complex Retrieval-Augmented Generation (RAG) pipelines or large-context reasoning models gets expensive fast. A mature FinOps strategy requires strict controls, and centralizing your routing makes this vastly easier to manage.
You can establish cost-aware routing dynamically. By setting the provider.sort key to "price", the gateway automatically hunts down the cheapest inference provider currently hosting your requested open-source model. The max_price parameter ensures your AI spend remains entirely predictable, even when fallback chains are triggered.
To understand the savings potential, consider the price variance across providers for the same model. For example, Llama 3.3 70B pricing varies significantly from one hosting provider to another.
For a workload processing 100 million tokens monthly, switching from the most expensive to the most affordable provider saves ~$57,000 per month. The max_price parameter acts as a circuit breaker—if no compliant provider is available under your ceiling, the request fails gracefully rather than silently draining your budget.
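As a back-of-envelope sketch, the sensitivity to unit price is easy to model. The prices below are placeholders for illustration, not quotes from any provider:

```python
def monthly_cost(tokens: int, usd_per_million: float) -> float:
    """Monthly inference spend for a given token volume and unit price."""
    return tokens / 1_000_000 * usd_per_million

tokens = 100_000_000                 # 100M tokens per month
cheap, pricey = 0.25, 0.90           # hypothetical USD per 1M tokens
savings = monthly_cost(tokens, pricey) - monthly_cost(tokens, cheap)
print(f"${savings:,.2f} saved per month")
```

The same function makes it easy to sanity-check any max_price ceiling you set in the routing config.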
This architecture is incredibly powerful, but it's not a silver bullet. The biggest trade-off is centralization. By moving away from individual provider SDKs, you are trading multiple potential points of failure for a single, massive one: the routing gateway itself.
If the unified API's load balancers fail, your entire stack loses access to external AI simultaneously. It's a calculated risk—you're betting that a dedicated routing platform will maintain better aggregate uptime than any individual LLM provider.
Relying on a solitary API endpoint is no longer acceptable for modern, mission-critical systems. It exposes your business to unpredictable vendor rate limits, unannounced deprecations, and frustrating outages.
By adopting a centralized routing plane with declarative JSON configurations, engineering teams can cleanly abstract away the chaos of the AI provider ecosystem. You gain the ability to orchestrate dynamic fallback arrays and latency-based routing without constantly rewriting application logic. This pattern definitively hardens your application, creating a robust foundation for the next generation of autonomous agents.
2026-03-15 06:15:52
Spoiler: 497 commits, three sleepless nights with SQLite, and one very stubborn race condition that refused to die.
Reading time: ~12 minutes · For: AI agent developers, architecture drama enthusiasts
If you missed the last few months of LedgerMind's life, here's the short version: we took a system that in version 3.0 simply worked, and turned it into a system that works fast, reliably, and with elements of artificial intelligence.
Sounds like marketing bullshit? I get it. So let's jump straight to the facts:
| Metric | v3.0 | v3.3.2 | Change |
|---|---|---|---|
| Search (OPS) | ~2,000 | 5,500+ | +175% |
| Write (latency) | ~500ms | 14ms | -97% |
| Commits between versions | — | 497 | 😅 |
| Critical bugs in production | Had them | Zero now | 🎉 |
But let's start from the beginning. Because behind these numbers lies a real engineering drama.
We had a problem. A beautiful, classic TOCTOU race (Time-Of-Check-To-Time-Of-Use). Two agents simultaneously decide to write a decision for the same target. First checks — no conflicts. Second checks — no conflicts. First writes. Second writes. Boom. Metadata corrupted.
"This rarely happens," someone said.
"Rarely isn't never," replied CI/CD at 3 AM.
The fix: real ACID transactions with BEGIN IMMEDIATE, a global lock registry, and automatic stale lock cleanup after 10 minutes. Now you can run ten agents on one project — they'll figure it out.
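A minimal sketch of that write path, assuming a simplified decisions table (the real schema and lock registry are more involved). The point is that BEGIN IMMEDIATE takes the write lock before the conflict check, so check and write happen atomically:

```python
import sqlite3

def record_decision(db_path: str, target: str, text: str) -> bool:
    """Atomically check for an open decision on `target` and write one."""
    # isolation_level=None gives us explicit transaction control
    conn = sqlite3.connect(db_path, timeout=10, isolation_level=None)
    try:
        # Take the write lock up front: no TOCTOU window between the
        # SELECT (check) and the INSERT (use).
        conn.execute("BEGIN IMMEDIATE")
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM decisions WHERE target = ? AND open = 1",
            (target,),
        ).fetchone()
        if count:
            conn.execute("ROLLBACK")
            return False  # conflict: another agent already decided
        conn.execute(
            "INSERT INTO decisions (target, text, open) VALUES (?, ?, 1)",
            (target, text),
        )
        conn.execute("COMMIT")
        return True
    finally:
        conn.close()
```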
SQLite is a wonderful thing until you try to write to it from a background worker and a user request simultaneously. Then it becomes... less wonderful.
sqlite3.OperationalError: database is locked
This error haunted us like a ghost, and we tried one workaround after another.
What worked: splitting enrichment batches into per-proposal transactions + worker.pid for detecting stuck workers + automatic stale lock cleanup.
Now the background worker calmly runs every 5 minutes, and users don't even notice it exists. As it should be.
Before, knowledge in LedgerMind was static. You wrote it — it sat there until you deleted it. Boring.
Now every piece of knowledge has three life phases:
PATTERN → EMERGENT → CANONICAL
LifecycleEngine manages transitions automatically. You do nothing — the system decides when knowledge has "grown up."
Why? Because after a month of operation, you accumulate hundreds of decisions. And you want to see current ones in search, not those you wrote on day one and forgot.
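The promotion logic can be pictured as a small state machine. The thresholds below are made up for illustration; the real LifecycleEngine weighs usage and recency:

```python
PHASES = ("PATTERN", "EMERGENT", "CANONICAL")

def next_phase(phase: str, confirmations: int) -> str:
    """Promote knowledge once later decisions have confirmed it enough."""
    if phase == "PATTERN" and confirmations >= 3:
        return "EMERGENT"
    if phase == "EMERGENT" and confirmations >= 10:
        return "CANONICAL"
    return phase  # CANONICAL (or not yet eligible) stays put
```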
This is probably my favorite feature of v3.3.
Before: you record decisions, the system stores them.
After: the system analyzes sequences of your decisions and identifies thinking patterns.
# You just record decisions
memory.record_decision("Use PostgreSQL", target="db")
memory.record_decision("Add JSONB", target="db")
memory.record_decision("Migrations via Alembic", target="db")
# The system notices the pattern:
# "When user builds API → PostgreSQL + JSONB + Alembic"
# And next time will suggest this stack automatically
This isn't magic. It's the Trajectory-based Reflection Engine, which builds graphs of your decisions and finds repeating paths in them.
We added Gemini CLI support and brought VS Code integration to "Hardcore" level:
| Client | Automation Level |
|---|---|
| VS Code | Hardcore — shadow context, terminal, chats |
| Claude Code | Full — auto-record + RAG |
| Cursor | Full — auto-record + RAG |
| Gemini CLI | Full — auto-record + RAG |
What does this mean in practice? You don't think about LedgerMind at all. It just works. Before every LLM request, the system injects context from memory. After every response — it writes the result automatically.
You work as usual. The system works for you.
Early in v3.3 development, we noticed something unpleasant: search slowed down. From ~4,000 OPS to ~2,000 OPS.
Cause: added linked_id validation for connections between events and decisions. Every search did a full table scan.
Fix: index on linked_id + fast-path heuristics for simple queries + metadata batching.
-- Before: slow JOIN without index
SELECT * FROM decisions WHERE linked_id IN (...)
-- After: fast lookup by index
CREATE INDEX idx_linked_id ON episodic_events(linked_id);
Result: 5,500+ OPS for semantic search, 14,000+ OPS for keyword-only.
Writing a decision to LedgerMind isn't just INSERT INTO. It's:
And all this fits into 8 operations per second. For comparison: v3.0 had ~2 OPS.
How? Deferred VectorStore loading, splitting transactions into proposals, path validation caching.
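Deferred loading is easy to sketch with `functools.cached_property`: the expensive index is only built on first access, so plain writes never pay its startup cost. This is a stand-in illustration, not LedgerMind's actual code.

```python
from functools import cached_property

class Memory:
    """Sketch of deferred VectorStore loading."""
    def __init__(self):
        self.load_count = 0  # track how many times the expensive init ran

    @cached_property
    def vector_store(self):
        self.load_count += 1  # the real expensive initialization goes here
        return {}             # stand-in for an embedding index
```

A write path that never touches `memory.vector_store` never triggers the load; repeated reads trigger it exactly once.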
Yes, LedgerMind now runs on Android via Termux. With a 4-bit quantized model.
| Metric | Mobile (GGUF) | Server (MiniLM) |
|---|---|---|
| Search (latency) | 0.13ms | 0.05ms |
| Write (latency) | 142.7ms | 14.1ms |
| Search (OPS) | 5,153 | 11,019 |
Why? Because sometimes you need to prototype on the go. And because we can.
Symptom: when merging duplicates, the system returned "at least 2 targets required" even when duplicates existed.
Cause: group size validation happened after transaction start, when data was already partially modified.
Fix: validate group size before transaction + randomize candidates to prevent infinite merge loops.
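The fix pattern generalizes: check preconditions before the transaction opens, so a bad call can never leave data half-modified. A minimal sqlite3 sketch (the `decisions` table and the `merge_duplicates` API are hypothetical, not LedgerMind's actual schema):

```python
import random
import sqlite3

def merge_duplicates(conn, targets):
    """Merge duplicate rows: keep one, delete the rest, atomically."""
    # Validate BEFORE the transaction: a failure here touches nothing.
    if len(targets) < 2:
        raise ValueError("at least 2 targets required")
    random.shuffle(targets)  # randomized candidates prevent repeating merge loops
    keep, *drop = targets
    with conn:  # transaction: all deletes commit together, or not at all
        for dup in drop:
            conn.execute("DELETE FROM decisions WHERE id = ?", (dup,))
    return keep
```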
vitality Field
Symptom: CANONICAL knowledge ranks lower than fresh PATTERNs.
Cause: the vitality field needed for lifecycle ranking wasn't loaded in search fast-path.
Fix: add vitality calculation to fast-path + fix transitions in LifecycleEngine.
Symptom: worker processes the same proposals over and over. Tokens disappear. Time disappears.
Cause: SQL query didn't exclude already-processed records.
Fix: add enrichment_status field with pending → completed transition + stuck record detection.
Before: one huge Memory class at 2,000+ lines.
After: nine specialized services:
Memory (coordinator)
├── EpisodicStore # short-term events
├── SemanticStore # long-term decisions + Git
├── VectorStore # embeddings
├── ConflictEngine # conflict detection
├── ResolutionEngine # supersede validation
├── DecayEngine # pruning old data
├── ReflectionEngine # pattern discovery
└── LifecycleEngine # phase management
Why? Each component can be tested, optimized, and replaced independently. And when a new developer arrives in six months, they won't run away in horror.
We removed:
- preferred_language → now enrichment_language
- arbitration_mode → replaced with intelligent conflict resolution
- lite mode → completely cut from the architecture

Why does this matter? Less dead code = fewer bugs = fewer questions like "what does this setting do?".
Why:
# Backup
ledgermind-mcp run --path /path/to/v3.0/memory
# Upgrade
pip install --upgrade ledgermind
# Initialize
ledgermind init
Judging by commits and TODOs in the code:
| Feature | Confidence | Evidence |
|---|---|---|
| Real-time collaboration (CRDT) | Medium | Multi-agent namespacing groundwork |
| Cloud hosting | Medium | Docker + REST gateway ready |
| Knowledge graph visualization | High | DecisionStream ontology enables graph queries |
| LangChain/LlamaIndex integration | High | MCP protocol compatibility |
When we started v3.3, I thought: "A few features, some optimizations, release in a month."
Reality: 497 commits, three critical bugs in production, one night debugging SQLite locking, and lots of coffee.
But when I see search running at 5,500+ OPS, the background worker doing its job without a single lock, the system automatically "understanding" patterns in my decisions — I realize: it was worth it.
LedgerMind v3.3.2 isn't just "a new version." It's a system you can trust.
Now go build something awesome.
Article written based on analysis of 497 commits between v3.0.0 and v3.3.2. The author didn't sleep for two nights but will catch up tomorrow.
P.S. If you find a bug — open an issue. We're fast. Promise.
P.P.S. You can watch a video tutorial on my X.com.
2026-03-15 06:12:18
Focus: Terminal-first, step-by-step command explanations for practical exam prep
Linux is an open-source operating system kernel. A Linux Distribution (distro) is a collection of software built around the Linux kernel, typically under open-source licenses such as the GPL. Roughly 90% of cloud infrastructure runs on Linux, making it the foundation of DevOps.
- Debian-based distros: `.deb` packages and the `apt` package manager
- RedHat-based distros: `.rpm` packages and the `yum`/`dnf` package manager

apt update # Update the package index (local DB of available repos)
apt upgrade # Upgrade installed packages to latest versions
apt install <pkg_name> # Install a package
apt remove <pkg_name> # Remove a package
apt search <pkg_name> # Search for a package
apt show <pkg_name> # Show package details
apt list --installed # List all installed packages
🔥 EXAM TIP: Always run `apt update` BEFORE `apt install`. `update` refreshes the package list; `upgrade` actually updates the software.
Linux organizes everything in a tree structure starting from / (root):
| Directory | Purpose |
|---|---|
| `/` | Root — top of the entire filesystem tree |
| `/home` | User home directories (e.g., `/home/daniyal`) |
| `/etc` | System configuration files (ssh config, apache config, etc.) |
| `/var` | Variable data — logs (`/var/log`), web files (`/var/www`) |
| `/usr` | User programs and utilities (`/usr/bin`, `/usr/lib`) |
| `/bin` | Essential command binaries (`ls`, `cp`, `mv`, `cat`) |
| `/sbin` | System binaries (`systemctl`, `fdisk`) — need root |
| `/tmp` | Temporary files — cleared on reboot |
| `/opt` | Optional/third-party software |
| `/root` | Home directory of the root user |
| `/dev` | Device files (hardware represented as files) |
| `/proc` | Virtual filesystem — running process info |
🔥 EXAM TIP: Know these paths by heart: `/etc/ssh/sshd_config`, `/var/www/html`, `/var/log/apache2/`, `/etc/apache2/sites-available/`
uname -a # Full system info (kernel name, version, architecture)
lsb_release -a # Distribution-specific info
cat /etc/os-release # OS release details
A Linux command has three parts: command [options] [arguments]
wc -l devops_course_outline.txt
#^ ^  ^
#| |  +-- argument (what to operate on)
#| +----- option (modifies behavior, starts with - or --)
#+------- command (program to execute)
sudo (superuser do) lets you run commands with root (administrator) privileges:
sudo ls -la /root # List root's home directory with elevated privileges
sudo apt install nginx # Install software (needs root)
| Command | What It Does | Example |
|---|---|---|
| `pwd` | Print current working directory | `pwd` → `/home/daniyal` |
| `cd` | Change directory | `cd /var/www/html` |
| `cd ~` or `cd` | Go to home directory | `cd ~` |
| `cd ..` | Go up one level (parent dir) | `cd ..` |
| `cd -` | Go to previous directory | `cd -` |
| `ls` | List directory contents | `ls /etc` |
| `ls -la` | List ALL files (hidden + details) | `ls -la` |
| `tree` | Show directory tree structure | `tree /var/www` |
| `mkdir` | Create a directory | `mkdir myproject` |
| `mkdir -p` | Create nested directories | `mkdir -p a/b/c` |
| `rmdir` | Remove empty directory | `rmdir myproject` |
| `rm -r` | Remove directory and contents | `rm -r myproject` |
- Absolute path: starts from the root `/`. Example: `/usr/bin/python3`
- Relative path: relative to the current directory. Example: `../usr/bin`
cd /usr/bin # Absolute path
cd ../usr/bin # Relative path
| Command | Purpose |
|---|---|
| `cat file.txt` | Display entire file at once (not ideal for large files) |
| `cat -n file.txt` | Display with line numbers |
| `tac file.txt` | Display file in reverse (last line first) |
| `less file.txt` | Page through large files (scroll up/down, search with `/`) |
| `head file.txt` | Show first 10 lines (use `-n 20` for first 20) |
| `tail file.txt` | Show last 10 lines (use `-n 20` for last 20) |
| `tail -f file.txt` | Follow file in real-time (great for logs!) |
| `wc file.txt` | Count lines, words, characters |
| `wc -l file.txt` | Count only lines |
touch newfile.txt # Create empty file (or update timestamp)
touch -t 12091600 file.txt # Set specific timestamp (Dec 9, 4PM)
echo "hello" > file.txt # Write to file (OVERWRITES!)
echo "world" >> file.txt # Append to file
cp source.txt dest.txt # Copy file
mv old.txt new.txt # Rename/move file
rm file.txt # Delete file
rm -rf directory/ # Force delete directory and contents
find Command
The find command searches the filesystem in real-time. Extremely useful in DevOps for automation, cleanup, backups, and CI/CD scripts.
find /var/log -name "*.log" # Find all .log files in /var/log
find /usr -name gcc # Find files/dirs named gcc
find /usr -type d -name gcc # Find only DIRECTORIES named gcc
find /usr -type f -name gcc # Find only regular FILES named gcc
find / -size +10M # Find files larger than 10MB
find / -size 0 # Find empty files
find / -mtime 3 # Files modified 3 days ago
find / -user root # Files owned by root
find -name "*.jpg" -exec rm {} ';' # Find and DELETE all .jpg files
find / -size +10M -exec command {} ';' # Find large files and run command
Key find Options:
| Option | Purpose |
|---|---|
| `-name` | Match filename (case-sensitive) |
| `-iname` | Match filename (case-insensitive) |
| `-type f` | Regular files only |
| `-type d` | Directories only |
| `-size` | File size (`+100M` = over 100MB, `-10k` = under 10KB) |
| `-mtime` | Modified time in days |
| `-exec` | Execute a command on each result |
| `-user` | Files owned by specific user |
📝 NOTE: The `{}` is a placeholder filled with each file found. The `';'` or `\;` terminates the `-exec` command.
locate Command
locate zip # Fast search using pre-built database
# Database is updated by 'updatedb' (runs daily automatically)
which & whereis
which python3 # Shows path of executable: /usr/bin/python3
whereis python3 # Shows binary, source, and man page locations
echo "hello" > file.txt # Redirect stdout to file (overwrite)
echo "world" >> file.txt # Redirect stdout to file (append)
command < input.txt # Feed file as stdin to command
command 2> errors.txt # Redirect stderr to file
command > out.txt 2>&1 # Redirect BOTH stdout AND stderr to file
cat /etc/passwd | wc -l # Pipe: send output of cat as input to wc
cat /etc/passwd | cut -d: -f1 | sort | uniq # Chain multiple pipes
🔥 EXAM TIP: `>` overwrites, `>>` appends. `2>&1` redirects errors to the same place as stdout. The pipe `|` chains commands together.
kill -SIGKILL <PID> # Force kill a process (same as kill -9)
kill -9 <PID> # Force kill by PID
node server.js & # Run process in background (& suffix)
lsof -i :80 # Find what process is using port 80
📝 NOTE: You can only kill your own processes (unless you are root).
man apache2 # Full manual page for a command
man -k compress # Search man pages by keyword
info make # GNU Info documentation (alternative to man)
command --help # Quick help/usage summary (fastest option)
| Permission | Symbol | Number | On Files | On Directories |
|---|---|---|---|---|
| Read | `r` | 4 | View/copy file contents | List directory contents (`ls`) |
| Write | `w` | 2 | Modify file contents | Create/delete files inside |
| Execute | `x` | 1 | Run the file as program | Enter the directory (`cd`) |
| Number | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| Permission | `---` | `--x` | `-w-` | `-wx` | `r--` | `r-x` | `rw-` | `rwx` |
| Meaning | None | Exec only | Write only | Write+Exec | Read only | Read+Exec | Read+Write | ALL |
-rw-r--r-- 1 sajid devops_students 13 2022-05-11 08:29 lecture.txt
^^^^^^^^^^   ^     ^
|            |     +-- group owner
|            +-------- file owner (user)
+--------------------- permission string
First char: - = file, d = directory, l = symlink
Next 9 chars: rwx for user | rwx for group | rwx for others
Example: -rw-r--r-- means:
User: rw- (read + write) = 4+2+0 = 6
Group: r-- (read only) = 4+0+0 = 4
Other: r-- (read only) = 4+0+0 = 4
Octal: 644
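You can verify the symbolic-to-octal mapping yourself with `stat` (GNU coreutils; `perm_demo.txt` is just a throwaway file for the demo):

```shell
# Create a file, set the standard 644 permission, and read it back both ways.
touch perm_demo.txt
chmod 644 perm_demo.txt
ls -l perm_demo.txt              # symbolic form: -rw-r--r-- ...
stat -c '%A %a' perm_demo.txt    # prints: -rw-r--r-- 644
```

`%A` gives the symbolic string and `%a` the octal number, so `stat` is a quick self-check when you're unsure of a conversion.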
chmod — Change Permissions
chmod 644 file.txt # rw-r--r-- (standard file permission)
chmod 755 directory/ # rwxr-xr-x (standard directory permission)
chmod 700 secret.txt # rwx------ (owner only, full access)
chmod 666 file.txt # rw-rw-rw- (everyone can read/write)
chmod +x script.sh # Add execute permission for everyone
chmod u+x script.sh # Add execute for user only
chmod g-w file.txt # Remove write from group
chmod o-rwx file.txt # Remove all permissions from others
chown — Change Ownership
sudo chown root:root file.txt # Change owner AND group to root
sudo chown daniyal file.txt # Change owner only
sudo chown daniyal:www-data file.txt # Change owner to daniyal, group to www-data
sudo chown -R www-data:www-data /var/www/html/ # Recursive (all files inside)
| Rule | Permission | Why |
|---|---|---|
| Directories | `755` (rwxr-xr-x) | Owner full access, others can read/enter |
| Regular files | `644` (rw-r--r--) | Owner read/write, others read only |
| Scripts (.sh) | `755` (rwxr-xr-x) | Must be executable |
| SSH private key | `600` (rw-------) | ONLY owner can read (SSH refuses otherwise) |
| `~/.ssh/` directory | `700` (rwx------) | ONLY owner can access |
| `authorized_keys` | `600` (rw-------) | ONLY owner can read/write |
| Web files owner | `www-data:www-data` | Web server user must be able to read |
| NEVER use | `777` | SECURITY RISK — everyone has full access! |
🔥 EXAM TIP: If a web page shows '403 Forbidden', first check permissions and ownership! Owner should be `www-data`, dirs `755`, files `644`.
cat /etc/passwd # List ALL users on the system
sudo adduser newuser # Create new user (interactive - sets password)
sudo useradd newuser # Create user (non-interactive - no password set)
sudo usermod -aG sudo newuser # Add user to sudo group (give admin rights)
sudo userdel newuser # Delete a user
sudo userdel -r newuser # Delete user AND their home directory
su - newuser # Switch to another user
whoami # Show current username
id # Show user ID, group ID, and groups
cat /etc/group # List all groups
sudo groupadd devopsgroup # Create a new group
sudo usermod -aG devopsgroup user1 # Add user1 to devopsgroup
sudo groupdel devopsgroup # Delete a group
groups username # Show which groups a user belongs to
📝 NOTE: The `-aG` flag is critical! Without `-a`, the user gets REMOVED from all other groups. Always use `-aG` together.
- `/etc/passwd` — Contains user account info (`username:x:UID:GID:info:home:shell`)
- `/etc/shadow` — Contains encrypted passwords (only root can read)
- `/etc/group` — Contains group definitions

SSH (Secure Shell) allows you to connect and log into remote systems securely. It encrypts ALL communication between client and server. SSH runs on port 22 by default. The server must have sshd (SSH daemon) installed and running.
ssh username@remote_server_ip # Connect to remote machine
ssh [email protected] # Example with IP
whoami # Verify you are logged in
hostname # Verify which machine you are on
exit # Disconnect from remote
sudo systemctl status ssh # Check if SSH is running
sudo systemctl start ssh # Start SSH service
sudo systemctl stop ssh # Stop SSH service
sudo systemctl restart ssh # Restart (apply config changes)
sudo systemctl enable ssh # Start automatically on boot
sudo systemctl disable ssh # Don't start on boot
🔥 EXAM TIP: `systemctl enable` makes a service start on boot. `systemctl start` starts it NOW. They are different! You often need BOTH.
The main SSH server config file is: /etc/ssh/sshd_config
sudo nano /etc/ssh/sshd_config # Edit SSH server configuration
# Key settings to know:
Port 22 # Default SSH port
PermitRootLogin yes/no # Allow/block root login via SSH
PasswordAuthentication yes/no # Allow/block password-based login
PubkeyAuthentication yes # Allow key-based authentication
# IMPORTANT: After ANY change, restart SSH!
sudo systemctl restart ssh
Key-based auth is more secure than passwords. It uses a pair of keys:
- Private key (`~/.ssh/id_rsa`): Stays on YOUR machine. NEVER share this!
- Public key (`~/.ssh/id_rsa.pub`): Goes to the REMOTE server.

Step-by-step setup:
# STEP 1: Generate key pair on YOUR machine (client)
ssh-keygen -t rsa
# Press Enter for default location (~/.ssh/id_rsa)
# Press Enter for no passphrase (or set one for extra security)
# Creates: ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public)
# STEP 2: Copy public key to the REMOTE server
ssh-copy-id username@remote_server
# OR manually:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # On the remote server
# STEP 3: Set correct permissions (CRITICAL!)
chmod 700 ~/.ssh # Directory: owner only
chmod 600 ~/.ssh/id_rsa # Private key: owner only
chmod 600 ~/.ssh/authorized_keys # Auth keys: owner only
# STEP 4: Now you can login without password!
ssh username@remote_server
📝 NOTE: The remote server stores public keys in `~/.ssh/authorized_keys` (one key per line). If permissions on this file are wrong, SSH will REFUSE the key and fall back to password auth.
Create ~/.ssh/config to avoid typing username and IP every time:
# File: ~/.ssh/config
Host myserver
HostName 192.168.1.100
User daniyal
IdentityFile ~/.ssh/id_rsa
# Now you can just type:
ssh myserver
# Instead of: ssh [email protected]
Run commands on a remote server without opening a full session:
ssh user@server "whoami" # Run whoami on remote
ssh user@server "mkdir ~/test_dir" # Create dir on remote
ssh user@server "cat /etc/hostname" # Read remote file
ssh user@server "systemctl restart apache2" # Restart service remotely
🔥 EXAM TIP: Non-interactive SSH is key for scripting — you can loop over servers: `for server in list; do ssh $server "command"; done`
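Spelled out as a runnable script, the loop looks like this. The host names are made up, and `echo` stands in for the real `ssh` call so you can dry-run it anywhere; delete the `echo` once your keys are in place.

```shell
# Dry-run a fleet-wide restart: prints the command it would run per host.
for server in web1 web2 web3; do
  echo ssh "$server" "systemctl restart apache2"
done
```

Because each iteration is an independent `ssh` invocation, a failure on one host doesn't stop the loop from reaching the rest.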
# Edit /etc/ssh/sshd_config:
PermitRootLogin no # Block root from logging in via SSH
PasswordAuthentication no # Force key-based auth only
# Restart to apply:
sudo systemctl restart ssh
# Test: trying ssh root@server should now be DENIED
sudo apt update
sudo apt install apache2 # Install Apache
sudo systemctl start apache2 # Start the service
sudo systemctl stop apache2 # Stop the service
sudo systemctl restart apache2 # Restart (apply changes)
sudo systemctl reload apache2 # Reload config without downtime
sudo systemctl status apache2 # Check if running
sudo systemctl enable apache2 # Start on boot
curl localhost # Test: should show Apache default page
| Path | Purpose |
|---|---|
| `/var/www/html/` | Default web root (put your website files here) |
| `/etc/apache2/` | Main configuration directory |
| `/etc/apache2/sites-available/` | Virtual host config files (available) |
| `/etc/apache2/sites-enabled/` | Symlinks to active virtual hosts |
| `/etc/apache2/apache2.conf` | Main Apache configuration file |
| `/etc/apache2/ports.conf` | Which ports Apache listens on |
| `/var/log/apache2/error.log` | Error log file |
| `/var/log/apache2/access.log` | Access log file |
# Step 1: Clone or copy your website files
sudo git clone https://github.com/user/repo.git /var/www/html/mysite
# Step 2: Set correct ownership
sudo chown -R www-data:www-data /var/www/html/mysite
# Step 3: Set correct permissions
sudo chmod -R 755 /var/www/html/mysite/ # Directories
sudo find /var/www/html/mysite -type f -exec chmod 644 {} \; # Files
# Step 4: Verify
curl localhost
Virtual hosts let you run MULTIPLE websites on one server, each on a different port or domain.
# Step 1: Create config file in sites-available
sudo nano /etc/apache2/sites-available/mysite.conf
Content of mysite.conf:
<VirtualHost *:8080>
ServerName mysite.local
DocumentRoot /var/www/mysite
ErrorLog /var/log/apache2/mysite-error.log
CustomLog /var/log/apache2/mysite-access.log combined
</VirtualHost>
# Step 2: Add Listen directive for new port
# Edit /etc/apache2/ports.conf and add: Listen 8080
# Step 3: Enable the site
sudo a2ensite mysite.conf
# Step 4: Test config and reload
sudo apache2ctl configtest # Check for syntax errors
sudo systemctl reload apache2
# Step 5: Verify
curl localhost:8080
🔥 EXAM TIP: `sites-available` = config files stored here. `sites-enabled` = symlinks to active configs. Use `a2ensite`/`a2dissite` to enable/disable.
sudo apt update
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
sudo systemctl status nginx
sudo systemctl enable nginx
curl localhost # Test
- Default web root: `/var/www/html/` (same as Apache)
- Site configs: `/etc/nginx/sites-available/` and `/etc/nginx/sites-enabled/`
- Main config file: `/etc/nginx/nginx.conf`
- Default site config: `/etc/nginx/sites-available/default`
# File: /etc/nginx/sites-available/mysite
server {
listen 80;
server_name mysite.local;
root /var/www/mysite;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
# Enable the site by creating a symlink:
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/
# Test config:
sudo nginx -t
# Reload:
sudo systemctl reload nginx
📝 NOTE: Both Apache and Nginx use port 80 by default. They CANNOT run on the same port at the same time!
If Apache fails to start with "Address already in use" error, another process (like Nginx) is already using port 80.
# Check what's using a specific port
sudo lsof -i :80 # Show process using port 80
# Production-friendly command (all open network connections)
sudo lsof -i -P -n
# -i = open network connections
# -P = show port numbers (not service names)
# -n = don't resolve hostnames
# Check service logs for errors
sudo journalctl -xe # Show recent system logs with details
sudo journalctl -u apache2 # Show only Apache logs
# Monitor log files in real-time
sudo tail -f /var/log/apache2/error.log
sudo tail -f /var/log/nginx/error.log
# Step 1: Check which service is using port 80
sudo lsof -i :80
# Output shows: nginx (PID 1234)
# Step 2: Stop the conflicting service
sudo systemctl stop nginx
# Step 3: Start the service you want
sudo systemctl start apache2
# Step 4: Verify
sudo systemctl status apache2
curl localhost
| Error | Likely Cause | Fix |
|---|---|---|
| 403 Forbidden | Wrong permissions/ownership | `chown www-data:www-data`, `chmod 755/644` |
| 404 Not Found | Wrong DocumentRoot path | Check config file, verify path exists |
| Address in use | Port conflict | `lsof -i :80`, stop conflicting service |
| Service won't start | Config syntax error | `apache2ctl configtest` or `nginx -t` |
| Connection refused | Service not running | `systemctl start apache2`/`nginx` |
#!/bin/bash
# The first line is the SHEBANG — tells the system to use bash
# Every script MUST start with #!/bin/bash
# Make script executable:
chmod +x script.sh
# Run script:
./script.sh
# OR
bash script.sh
#!/bin/bash
NAME="Daniyal" # Assign (NO spaces around =)
echo "Hello $NAME" # Use variable with $
echo "Hello ${NAME}" # Curly braces version (safer)
# Read user input
echo "Enter your name:"
read USERNAME # Stores input in USERNAME
echo "Welcome $USERNAME"
# Command substitution — store command output in variable
TODAY=$(date) # Run date command, store result
echo "Today is: $TODAY"
HOSTNAME=$(hostname)
echo "Machine: $HOSTNAME"
🔥 EXAM TIP: NO SPACES around `=` when assigning variables! `NAME='value'` is correct. `NAME = 'value'` is WRONG and will cause an error.
| Variable | Meaning |
|---|---|
| `$0` | Script name itself |
| `$1`, `$2`, `$3`... | First, second, third argument... |
| `$#` | Total number of arguments passed |
| `$@` | All arguments as separate words |
| `$?` | Exit code of the LAST command (0=success, non-0=failure) |
#!/bin/bash
# Example: ./myscript.sh hello world
echo "Script name: $0" # ./myscript.sh
echo "First arg: $1" # hello
echo "Second arg: $2" # world
echo "Total args: $#" # 2
echo "All args: $@" # hello world
#!/bin/bash
# Every command returns an exit code
# 0 = SUCCESS, anything else = FAILURE
ls /tmp
echo $? # 0 (success — /tmp exists)
ls /nonexistent
echo $? # 2 (failure — directory doesn't exist)
# Set your own exit code
exit 0 # Exit script with success
exit 1 # Exit script with failure
#!/bin/bash
# Basic if/else
if [ -d "/var/www/html" ]; then
echo "Directory exists"
else
echo "Directory does NOT exist"
fi
# Check if file exists
if [ -f "/etc/ssh/sshd_config" ]; then
echo "SSH config found"
fi
# Compare strings
if [ "$1" == "install" ]; then
echo "Installing..."
elif [ "$1" == "remove" ]; then
echo "Removing..."
else
echo "Usage: $0 install|remove"
exit 1
fi
# Check if command succeeded
apt install nginx -y
if [ $? -eq 0 ]; then
echo "Installation successful"
else
echo "Installation FAILED"
exit 1
fi
Common test operators:
| Operator | Meaning |
|---|---|
| `-f file` | File exists and is a regular file |
| `-d dir` | Directory exists |
| `-e path` | Path exists (file or directory) |
| `-z string` | String is empty (zero length) |
| `-n string` | String is NOT empty |
| `str1 == str2` | Strings are equal |
| `str1 != str2` | Strings are NOT equal |
| `num1 -eq num2` | Numbers are equal |
| `num1 -ne num2` | Numbers are NOT equal |
| `num1 -gt num2` | Greater than |
| `num1 -lt num2` | Less than |
#!/bin/bash
# Great for handling command-line flags like -i, -r, -h
case "$1" in
-i|--install)
echo "Installing..."
;;
-r|--remove)
echo "Removing..."
;;
-h|--help)
echo "Usage: $0 [-i install] [-r remove] [-h help]"
;;
*)
echo "Invalid option: $1"
echo "Use -h for help"
exit 1
;;
esac
#!/bin/bash
# Loop through a list
for server in server1 server2 server3; do
echo "Restarting $server..."
ssh $server "systemctl restart apache2"
done
# Loop through files
for file in *.sh; do
echo "Found script: $file"
done
# Loop with numbers
for i in 1 2 3 4 5; do
echo "Number: $i"
done
# C-style loop
for ((i=1; i<=5; i++)); do
echo "Count: $i"
done
tar Command (Archives/Backups)
# Create a compressed archive
tar -czf backup.tar.gz /path/to/directory
# -c = create
# -z = compress with gzip
# -f = filename follows
# Create with timestamp in name
tar -czf backup_$(date +%Y%m%d_%H%M%S).tar.gz /var/www/html/
# Extract an archive
tar -xzf backup.tar.gz
# -x = extract
# List contents without extracting
tar -tzf backup.tar.gz
awk Command
# awk processes text line by line, splitting into fields
# Default field separator is whitespace
# Print second column of output
df -h | awk '{print $2}'
# Print specific fields with custom format
free -m | awk '/Mem:/ {print "Total: "$2" MB, Used: "$3" MB"}'
# Use custom field separator
cat /etc/passwd | awk -F: '{print $1}' # Print usernames (field 1, : separator)
# Filter lines matching a pattern
df -h | awk '!/tmpfs/ {print $0}' # Exclude lines containing tmpfs
#!/bin/bash
# Heredoc writes multi-line content to a file
cat << EOF > /var/www/html/index.html
<!DOCTYPE html>
<html>
<head><title>System Report</title></head>
<body>
<h1>Server: $(hostname)</h1>
<p>Date: $(date)</p>
</body>
</html>
EOF
# Variables inside heredoc get expanded!
# Use << 'EOF' (with quotes) to PREVENT variable expansion
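The quoting difference is easy to verify with two side-by-side heredocs:

```shell
NAME="Daniyal"
cat << EOF
expanded: $NAME
EOF
cat << 'EOF'
literal: $NAME
EOF
# Output:
# expanded: Daniyal
# literal: $NAME
```

Use the unquoted form to generate files with live values (hostnames, dates), and the quoted form when the file itself must contain `$` syntax, e.g. a script or an Nginx config with `$uri`.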
#!/bin/bash
# Redirect command output to log file
apt install nginx -y > nginx.log 2>&1
# > nginx.log = redirect stdout to file
# 2>&1 = redirect stderr (2) to same place as stdout (1)
# Combined: ALL output (normal + errors) goes to nginx.log
# Append to log
echo "$(date) - Installation complete" >> nginx.log
backup.sh — Automated Directory Backup
#!/bin/bash
echo "Enter directory path to backup:"
read DIR_PATH
if [ ! -d "$DIR_PATH" ]; then
echo "ERROR: Directory $DIR_PATH does not exist!"
exit 1
fi
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${TIMESTAMP}.tar.gz"
tar -czf "$BACKUP_FILE" "$DIR_PATH"
if [ $? -eq 0 ]; then
echo "SUCCESS: Backup created: $BACKUP_FILE"
echo "$(date) - SUCCESS - $BACKUP_FILE" >> backup.log
else
echo "FAILED: Backup failed!"
echo "$(date) - FAILED" >> backup.log
exit 1
fi
#!/bin/bash
echo "Enter domain/directory name:"
read DOMAIN
sudo mkdir -p /var/www/$DOMAIN
cat << EOF | sudo tee /var/www/$DOMAIN/index.html > /dev/null  # sudo tee writes as root; plain 'sudo cat >' fails because the redirect runs unprivileged
<html><body><h1>Welcome to $DOMAIN</h1></body></html>
EOF
cat << EOF | sudo tee /etc/apache2/sites-available/$DOMAIN.conf > /dev/null  # same sudo tee trick for the root-owned config dir
<VirtualHost *:80>
ServerName $DOMAIN
DocumentRoot /var/www/$DOMAIN
ErrorLog /var/log/apache2/${DOMAIN}-error.log
CustomLog /var/log/apache2/${DOMAIN}-access.log combined
</VirtualHost>
EOF
sudo chown -R www-data:www-data /var/www/$DOMAIN
sudo chmod -R 755 /var/www/$DOMAIN
sudo a2ensite $DOMAIN.conf
sudo systemctl reload apache2
echo "Website $DOMAIN deployed successfully!"
wserver.sh — Nginx Install/Remove with Flags
#!/bin/bash
case "$1" in
-i)
if dpkg -l | grep -q nginx; then
echo "Nginx is already installed."
exit 0
fi
sudo apt install nginx -y > nginx.log 2>&1
if [ $? -eq 0 ]; then
echo "Nginx installed successfully."
else
echo "ERROR: Installation failed. Check nginx.log"
exit 1
fi
;;
-r)
if ! dpkg -l | grep -q nginx; then
echo "Nginx is not installed."
exit 0
fi
sudo apt remove nginx -y > nginx.log 2>&1
if [ $? -eq 0 ]; then
echo "Nginx removed successfully."
else
echo "ERROR: Removal failed. Check nginx.log"
exit 1
fi
;;
*)
echo "Usage: $0 [-i install] [-r remove]"
exit 1
;;
esac
system_health.sh — HTML Report
#!/bin/bash
TOTAL_MEM=$(free -m | awk '/Mem:/ {print $2}')
USED_MEM=$(free -m | awk '/Mem:/ {print $3}')
AVAIL_MEM=$(free -m | awk '/Mem:/ {print $7}')
TOTAL_SWAP=$(free -m | awk '/Swap:/ {print $2}')
USED_SWAP=$(free -m | awk '/Swap:/ {print $3}')
DISK_TOTAL=$(df -h --total --exclude-type=tmpfs | awk '/total/ {print $2}')
DISK_USED=$(df -h --total --exclude-type=tmpfs | awk '/total/ {print $3}')
DISK_AVAIL=$(df -h --total --exclude-type=tmpfs | awk '/total/ {print $4}')
cat << EOF > system_health.html
<!DOCTYPE html>
<html>
<head><title>System Health Report</title></head>
<body>
<h1>System Health Report</h1>
<p>Date: $(date)</p>
<h2>Memory Usage</h2>
<p>Total: ${TOTAL_MEM}MB | Used: ${USED_MEM}MB | Available: ${AVAIL_MEM}MB</p>
<h2>Disk Usage</h2>
<p>Total: $DISK_TOTAL | Used: $DISK_USED | Available: $DISK_AVAIL</p>
</body>
</html>
EOF
sudo cp system_health.html /var/www/html/index.html
if [ $? -eq 0 ]; then
echo "SUCCESS: Report hosted at http://localhost"
else
echo "ERROR: Failed to deploy report"
exit 1
fi
systeminfo.sh
#!/bin/bash
echo "Kernel Name: $(uname -s)"
echo "Kernel Release: $(uname -r)"
echo "Processor: $(uname -p)"
echo "OS: $(cat /etc/os-release | grep PRETTY_NAME | cut -d= -f2)"
EDITOR="nano"
echo "Favorite Editor: $EDITOR"
echo "Editor Location: $(which $EDITOR)"
echo "Documentation: $(whereis $EDITOR)"
Docker is a platform that lets you package, ship, and run applications in isolated environments called containers. A container is like a lightweight virtual machine that shares the host OS kernel. Think of it as: your app + all its dependencies, packaged together so it runs the same everywhere.
Images are versioned with tags, e.g. `ubuntu:22.04` vs `ubuntu:latest`; omitting the tag defaults to `latest`.
docker --version # Check Docker version
docker version # Detailed client + server version
docker run hello-world # Test Docker works
# PULL an image from Docker Hub
docker pull ubuntu # Download ubuntu image (latest)
docker pull ubuntu:22.04 # Download specific version
docker pull nginx # Download nginx image
# RUN a container from an image
docker run ubuntu # Runs and exits immediately (no process)
docker run ubuntu sleep 5 # Runs, sleeps 5 seconds, then exits
# RUN in interactive mode (get a shell inside container)
docker run -it ubuntu bash # -i=interactive, -t=terminal
# You are now INSIDE the container!
echo "Hello Docker"
ls
exit # Exit container (container stops)
# RUN with a custom name
docker run --name myubuntu ubuntu sleep 30
# RUN in detached mode (background)
docker run -d ubuntu sleep 300 # -d = detached (runs in background)
docker run -dit --name test ubuntu bash # Detached + interactive
# LIST containers
docker ps # Show RUNNING containers only
docker ps -a # Show ALL containers (running + stopped)
# STOP a container
docker stop <container_id> # Graceful stop
docker stop myubuntu # Stop by name
# REMOVE a container
docker rm <container_id> # Remove stopped container
docker rm myubuntu # Remove by name
# STOP ALL containers at once
docker stop $(docker ps -q) # -q = quiet (only IDs)
# REMOVE ALL containers
docker rm $(docker ps -aq) # Remove all (running + stopped)
🔥 EXAM TIP:
docker run ubuntuexits immediately because there is no long-running process. Usesleep,-it bash, or-dto keep it alive.
# LIST downloaded images
docker images
# REMOVE an image
docker rmi ubuntu # Remove ubuntu image
docker rmi <image_id> # Remove by ID
# Note: Must remove all containers using that image first!
# ATTACH to a running container (connects to main process)
docker attach test
# CTRL+P then CTRL+Q = detach WITHOUT stopping container
# If you type 'exit', container STOPS
# EXEC runs a NEW process inside a running container
docker exec -it test bash
# If you type 'exit', only the exec process ends
# The container keeps running!
📝 NOTE: Use `exec` when you want to inspect a running container without affecting it. Use `attach` to reconnect to the main process.
docker logs <container_id> # View container logs
docker logs myubuntu
docker inspect test # Full container details (JSON)
docker inspect --format='{{.State.Status}}' test # Get specific info
Port Mapping (`-p`)
Port mapping connects a port on your HOST machine to a port inside the container.
# Map host port 8081 to container port 80
docker run -d -p 8081:80 nginx
# ^ ^
# | +-- container port (nginx listens on 80)
# +-------- host port (access via localhost:8081)
# Run multiple web servers on different ports
docker run -d --name web1 -p 8081:80 ubuntu/apache2
docker run -d --name web2 -p 8082:80 ubuntu/apache2
# Access in browser:
# http://localhost:8081 --> web1
# http://localhost:8082 --> web2
🔥 EXAM TIP: The format is `-p HOST_PORT:CONTAINER_PORT`. The HOST port is what you type in the browser; the CONTAINER port is what the app listens on inside.
Volumes (`-v`) / Bind Mounts
Volumes let you share files between your host machine and the container.
# Mount current directory into container
docker run -it -v $(pwd):/scripts ubuntu bash
# $(pwd) = your current directory on host
# /scripts = path inside the container
# Mount a specific file into nginx
echo '<h1>Hello Docker!</h1>' > index.html
docker run -d -p 8081:80 -v $(pwd)/index.html:/usr/share/nginx/html/index.html nginx
# You have just hosted a simple web app!
# Mount a Python script
echo 'print("Hello, world!")' > hello.py
docker run -it --rm -v $(pwd)/hello.py:/hello.py python:3.10-slim bash
# Then inside: python hello.py
📝 NOTE: The `--rm` flag automatically removes the container when it exits. Great for temporary containers.
| Command | Purpose |
|---|---|
| `docker version` | Check Docker client and server version |
| `docker run <image>` | Start a container from an image |
| `docker run --name <name> <img>` | Run with a custom name |
| `docker run <img>:<tag>` | Run a specific version |
| `docker run -it <img> bash` | Run interactively with terminal |
| `docker run -d <img>` | Run in detached (background) mode |
| `docker run -p H:C <img>` | Map host port H to container port C |
| `docker run -v H:C <img>` | Mount host path H to container path C |
| `docker run --rm <img>` | Auto-remove container on exit |
| `docker ps` | List running containers |
| `docker ps -a` | List ALL containers |
| `docker stop <id/name>` | Stop a running container |
| `docker rm <id/name>` | Remove a stopped container |
| `docker images` | List downloaded images |
| `docker rmi <image>` | Remove an image |
| `docker pull <image>` | Download image from Docker Hub |
| `docker exec -it <id> bash` | Open shell in running container |
| `docker attach <id>` | Attach to main process of container |
| `docker logs <id>` | View container output logs |
| `docker inspect <id>` | Detailed container info (JSON) |
| `docker stop $(docker ps -q)` | Stop ALL running containers |
| `docker rm $(docker ps -aq)` | Remove ALL containers |
Users & Groups — list, create, add to sudo, switch user, create group
systemctl start <service> # Start a service NOW
systemctl stop <service> # Stop a service NOW
systemctl restart <service> # Restart a service
systemctl reload <service> # Reload config without stopping
systemctl status <service> # Check if service is running
systemctl enable <service> # Start on boot
systemctl disable <service> # Don't start on boot
| Path | What |
|---|---|
| `/etc/ssh/sshd_config` | SSH server configuration |
| `~/.ssh/id_rsa` | Your SSH private key |
| `~/.ssh/id_rsa.pub` | Your SSH public key |
| `~/.ssh/authorized_keys` | Public keys allowed to login |
| `~/.ssh/config` | Client SSH shortcuts |
| `/var/www/html/` | Default web root (Apache + Nginx) |
| `/etc/apache2/sites-available/` | Apache virtual host configs |
| `/etc/apache2/sites-enabled/` | Active Apache sites (symlinks) |
| `/etc/nginx/sites-available/` | Nginx server block configs |
| `/etc/nginx/sites-enabled/` | Active Nginx sites (symlinks) |
| `/var/log/apache2/error.log` | Apache error log |
| `/var/log/nginx/error.log` | Nginx error log |
| `/etc/passwd` | User account information |
| `/etc/group` | Group definitions |
| `/etc/shadow` | Encrypted passwords (root only) |
644 = rw-r--r-- # Standard file permission
755 = rwxr-xr-x # Standard directory / script permission
700 = rwx------ # Private (owner only)
600 = rw------- # SSH keys
777 = rwxrwxrwx # NEVER USE (security risk)
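A quick way to verify these octal modes yourself: set one with `chmod`, then read it back with GNU `stat` (`%a` prints the octal mode, `%A` the `rwx` string). The temp file name is arbitrary:

```shell
# Set a mode, then confirm the octal and symbolic forms match the table.
f=$(mktemp)

chmod 644 "$f"
stat -c '%a %A' "$f"    # 644 -rw-r--r--

chmod 700 "$f"
stat -c '%a %A' "$f"    # 700 -rwx------

rm -f "$f"
```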
docker run -it ubuntu bash # Interactive shell
docker run -d -p 8080:80 nginx # Background + port map
docker run -v $(pwd):/app ubuntu # Volume mount
docker ps -a # List all containers
docker stop $(docker ps -q) # Stop all running
docker rm $(docker ps -aq) # Remove all containers
docker exec -it <id> bash # Shell into running container
CTRL+P then CTRL+Q # Detach without stopping
#!/bin/bash # Shebang (always first line)
$1 $2 $# $@ $? # Args, count, all, exit code
read VAR # Read user input
if [ condition ]; then ... fi # Conditional
case "$1" in pattern) ... ;; esac # Flag handling
for item in list; do ... done # Loop
tar -czf name.tar.gz dir/ # Create archive
awk '{print $1}' file # Text processing
cat << EOF > file ... EOF # Heredoc
command > file 2>&1 # Redirect all output
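The scripting constructs above can be tied together in one tiny script: shebang, positional args, a `case` for flag handling, a heredoc to write the file, and `> file 2>&1` redirection. The `/tmp/greet.sh` path and the `-u`/`-l` flags are made up for this sketch:

```shell
# Write a small demo script via a quoted heredoc (no expansion inside).
cat << 'EOF' > /tmp/greet.sh
#!/bin/bash
# $1 = flag, $2 = name
case "$1" in
  -u) echo "Hello, ${2^^}!" ;;            # ${2^^} = uppercase $2 (bash 4+)
  -l) echo "Hello, ${2,,}!" ;;            # ${2,,} = lowercase $2
   *) echo "Usage: $0 -u|-l <name>"; exit 1 ;;
esac
EOF
chmod +x /tmp/greet.sh

/tmp/greet.sh -u world                    # Hello, WORLD!
/tmp/greet.sh -l WORLD                    # Hello, world!
/tmp/greet.sh > /tmp/greet.log 2>&1 || echo "exit code: $?"   # exit code: 1
```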
Good luck on your exam! Practice these commands in the terminal — reading is not enough, you need muscle memory! 🚀