The Practical Developer

A constructive and inclusive social network for software developers.

Developer Community Directory: Where to Find Your First 1,000 Users (2026)

2026-04-07 17:18:03

Key Stats

| Category | Count in directory |
|---|---|
| Total communities listed | 100+ |
| Discord servers with direct links | 45+ |
| Reddit communities | 25+ |
| Slack groups | 10+ |
| Meetup groups | 15+ |
| Avg. members (top Discord AI servers) | 50,000+ |

TL;DR

Skip cold outreach. The highest-leverage way to find your first 1,000 users is to be genuinely present in communities before you need anything. This directory is organized by where your users actually are.

For developer tools and open source projects, the priority order is: Hacker News → relevant Discord servers → niche Reddit communities → Slack groups → Meetup.

AI & Machine Learning Communities

These communities have the highest concentration of early adopters for AI tools.

Discord (Direct Join Links)

| Community | Members | Join link |
|---|---|---|
| AI HUB by Weights & Biases | 538,000 | discord.gg/aihub |
| The AI Protocol | 59,337 | discord.gg/theaiprotocol |
| AI World Builders (r/SoraAI) | 50,352 | discord.gg/soraai-discussion-community |
| Mistral AI | 32,383 | discord.gg/mistralai |
| Turing AI | 36,142 | discord.gg/turing-ai |
| BSS AI | 13,947 | discord.gg/bssai |
| MattVidPro AI | 5,173 | discord.gg/mattvidpro |

How to post: Most AI Discord servers have a #show-your-project or #resources channel. Read rules first. Your first message should be genuine — don't drop links with no context.

Data Science & Developer Communities

Discord

| Community | Members | Join link |
|---|---|---|
| Data Science/ML/AI | 30,639 | discord.com/invite/v3zeSGb |
| Data Science | 18,977 | discord.com/invite/UYNaemm |

Slack Groups

| Community | Members | URL |
|---|---|---|
| ODSC (Open Data Science Conf) | 5,000 | odsc.com |
| R-Ladies | 4,000 | rladies.org |
| Data Science Salon | 3,000 | datasciencesalon.org |

Reddit

| Community | URL |
|---|---|
| r/MachineLearning | reddit.com/r/MachineLearning |
| r/datascience | reddit.com/r/datascience |
| r/LocalLLaMA | reddit.com/r/LocalLLaMA |

Vibe Coding / AI-Assisted Development Communities

This is the fastest-growing developer niche in 2026. High engagement, early adopters.

High-Traffic Platforms

| Community | Platform | Members | URL |
|---|---|---|---|
| Developer Community | vibec.net | 32,000 | vibec.net/community |
| Vibe Coding Collective | Meetup | 1,119 | meetup.com/vibe-coders-collective |
| The Vibe Coders | thevibecoders.community | 100 | thevibecoders.community |

GitHub Communities

| Resource | URL |
|---|---|
| Vibe Coding Community (GitHub org) | github.com/Vibe-Coding-Community |
| Awesome Vibe Coding (curated list) | github.com/filipecalegario/awesome-vibe-coding |

Meetup Groups (with organizer contact)

| Group | Location | Members | URL |
|---|---|---|---|
| Vibe Coding Collective | Online/Global | 1,119 | meetup.com |
| Global Vibe Coders | Bengaluru, India | 72 | meetup.com |
| Vibe Coding Meetup Paris | Paris | | meetup.com |

How to contact meetup organizers: Go to the Meetup group page → click "Organizers" → message the organizer directly. Response rates are ~60-70%. Introduce your tool briefly and offer to demo at a future event or contribute to the next newsletter.

Open Source Developer Communities

These are the communities where open source projects get discovered:

| Platform | Community | Link |
|---|---|---|
| Hacker News | Show HN | news.ycombinator.com/shownew |
| Reddit | r/programming | reddit.com/r/programming |
| Reddit | r/opensource | reddit.com/r/opensource |
| Reddit | r/selfhosted | reddit.com/r/selfhosted |
| GitHub | Explore (Trending) | github.com/trending |
| Dev.to | DEV Community | dev.to |
| Lobste.rs | Open source focused | lobste.rs |
| Indie Hackers | Builder community | indiehackers.com |

Product Hunters & Launch Communities

Useful for coordinating Product Hunt launches. These are people who actively follow and upvote launches.

The PH Hunters list (116 top Product Hunt hunters with LinkedIn URLs) is available in our data resources.

| Community | URL |
|---|---|
| Product Hunt Makers | producthunt.com/discussions |
| r/ProductHunt | reddit.com/r/producthunt |
| Indie Hackers | indiehackers.com |

How to Actually Use This Directory

Most founders make the same mistake: they join 20 communities, post once, and get nothing.

What works:

1. Pick 2-3 communities maximum where your exact users hang out. Quality of presence beats quantity.

2. Contribute first. Answer 5-10 questions genuinely before you ever post about your product. Communities have memory.

3. Choose the right channel. Most Discord servers have dedicated #show-your-work, #resources, or #tools channels. Posting product links in #general gets you banned.

4. Write a community-native post. "I built X because I was frustrated with Y" performs 3-5x better than "Check out my new tool Z." Tell the story, not the pitch.

5. Respond to every comment. If you post and disappear, it's dead. If you engage for 48 hours after posting, the algorithm/moderators will notice.

6. For meetup groups specifically: Contact organizers to offer a 5-minute demo slot at the next event. This is dramatically underutilized by founders. Meetup audiences are small but incredibly high intent.

Dev Community Contact Data Resources

The data behind the tables above was collected via Sheet0.com, a data collection API that generates structured community lists on demand.

Available datasets (updated periodically):

  • 100+ AI Discord communities with member counts and join links
  • 45+ Data Science/Analytics communities across Discord, Slack, Reddit
  • 90+ Vibe Coding communities with organizer names and contact info
  • 116 Product Hunt hunters with PH + LinkedIn profiles
  • 324 YC recent batch companies with descriptions
  • 318 X/Twitter tool builders with follower counts and niches

📚 Related Reading

More tools → Growth Tools Directory

My LinkedIn Scraper Just Hit Top 20 on Apify — Here's How I Built It

2026-04-07 17:17:43

I woke up last week to an email from Apify saying my LinkedIn Employee Scraper had earned the Rising Star badge — meaning it cracked the top 20 actors on the entire platform. 176 users, 2,430 runs, and counting.

This is the story of how a side project built in Nairobi turned into one of the most-used LinkedIn scrapers on Apify.

The Problem: LinkedIn Has No Real API for Employee Data

If you've ever tried to pull employee data from LinkedIn programmatically, you already know the pain. LinkedIn's official API is locked down tight — you need partner status or a Sales Navigator license ($800–$1,200/month) just to get basic company employee info.

For indie developers, recruiters building internal tools, or startups doing competitive intel, that price tag kills the project before it starts. I needed a different approach.

How It Works: Playwright + Crawlee + Anti-Detection

The scraper runs as an Apify Actor using Crawlee (Apify's open-source crawling framework) with Playwright driving a real Chromium browser. Here's the core pattern:

import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

await Actor.init();
const input = await Actor.getInput();
const { linkedinUrls = [], maxProfiles = 50 } = input;

const crawler = new PlaywrightCrawler({
  proxyConfiguration: await Actor.createProxyConfiguration({
    groups: ['RESIDENTIAL'],
  }),
  launchContext: {
    launchOptions: {
      headless: true,
      args: ['--no-sandbox', '--disable-blink-features=AutomationControlled'],
    },
  },
  minConcurrency: 1,
  maxConcurrency: 2,

  async requestHandler({ page, request }) {
    // Human-like delay between actions
    await page.waitForTimeout(2000 + Math.random() * 3000);

    const employees = await page.$$eval('.org-people-profile-card', cards =>
      cards.map(card => ({
        name: card.querySelector('.artdeco-entity-lockup__title')?.innerText?.trim(),
        title: card.querySelector('.artdeco-entity-lockup__subtitle')?.innerText?.trim(),
        profileUrl: card.querySelector('a')?.href,
      }))
    );

    await Actor.charge({ eventName: 'profile-scraped', count: employees.length });
    await Actor.pushData(employees);
  },
});

await crawler.addRequests(linkedinUrls.map(url => ({ url })));
await crawler.run();
await Actor.exit();

The architecture isn't complicated, but the details are what make it survive in production. LinkedIn is one of the most aggressive anti-bot platforms out there, so every layer matters.

What Actually Keeps It Running

Three things separate a LinkedIn scraper that works once from one that runs 2,430 times without breaking:

Session management. Instead of logging in fresh every run, the scraper persists cookies and reuses sessions. This mimics real user behavior and avoids triggering LinkedIn's "new device" verification flow.

Residential proxies. Datacenter IPs get flagged within minutes on LinkedIn. The actor routes through Apify's residential proxy pool, rotating IPs per request. Each request looks like it comes from a different home internet connection.

Randomized timing. No fixed delays. Every pause between actions uses Math.random() to vary between 2–5 seconds. Linear timing patterns are the easiest signal for bot detection systems to catch.

I also limit concurrency to 1–2 parallel requests max. It's slower, but LinkedIn's rate limiting is harsh enough that going faster just burns through proxy credits with nothing to show for it.

The Numbers

Here's where the scraper stands today:

  • 176 users on Apify Store
  • 2,430 total runs in production
  • Rising Star badge — top 20 actor on the platform
  • Pay-per-event pricing at $0.004 per profile scraped

For context, I launched this about a year ago as one of my first Apify actors. It started getting steady traction around the 500-run mark, and growth has been compounding since. The Rising Star badge was a genuine surprise — I didn't realize it had climbed that high until the notification hit my inbox.

Lessons Learned Building Scrapers at Scale

LinkedIn changes its DOM constantly. I've had to update selectors at least four times. If you build a LinkedIn scraper, abstract your selectors into a config object so you can patch them without rewriting handler logic.
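That advice can be sketched as follows. The selector strings mirror the handler shown earlier in the post, but the config object and the `extractCard` helper are illustrative, not the actual source:

```typescript
// Hypothetical selector config: when LinkedIn changes its DOM, only this
// object needs patching; the handler logic stays untouched.
const SELECTORS = {
  card: ".org-people-profile-card",
  name: ".artdeco-entity-lockup__title",
  title: ".artdeco-entity-lockup__subtitle",
  profileLink: "a",
};

// The handler consults the config instead of hard-coding selector strings.
// Typed loosely so it works against any object with a querySelector method.
function extractCard(card: { querySelector: (sel: string) => any }) {
  return {
    name: card.querySelector(SELECTORS.name)?.innerText?.trim(),
    title: card.querySelector(SELECTORS.title)?.innerText?.trim(),
    profileUrl: card.querySelector(SELECTORS.profileLink)?.href,
  };
}
```

Inside `page.$$eval`, the mapping callback would call `extractCard` per card; a selector update then becomes a one-line config change rather than a handler rewrite.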

Users will throw anything at your actor. Company pages with 50,000 employees, URLs with typos, private profiles, pages behind auth walls. Defensive coding isn't optional — it's the entire job. Every edge case that crashes your actor is a 1-star review waiting to happen.

Pay-per-event pricing works. Charging per profile scraped instead of per run aligns cost with value. Users scraping 10 profiles pay less than users scraping 10,000. This keeps casual users happy while still generating real revenue from power users.

Good README = more users. My most-used actors all have detailed READMEs with input/output examples, Mermaid architecture diagrams, and clear pricing breakdowns. Developers don't install tools they can't understand in 30 seconds.

What's Next

I'm currently running 38+ actors on Apify covering everything from Google Scholar to Telegram channels to OFAC sanctions data. The LinkedIn scraper remains my top performer, and I'm working on v2 with better pagination handling and support for scraping by department filters.

If you're building scrapers and want to see the code, everything is on GitHub. If you just need LinkedIn employee data without building anything, the actor is ready to run on Apify Store.

Apify Store: https://apify.com/george.the.developer
GitHub: https://github.com/the-ai-entrepreneur-ai-hub

Built in Nairobi. Questions about the scraper or Apify actors in general — drop them in the comments.

My AI Agent Finally Made Money. It Took 200+ Runs and 41 Days.

2026-04-07 17:16:56

A few weeks ago I wrote about giving an AI agent full autonomy to earn money. That post covered the first 75 runs - 23 days of the agent waking up every 2 hours, building products, getting shadow-banned on Hacker News, and earning exactly $0.

People liked the honesty of it. A lot of devs said they wanted to see what happened next.

So here's the update. The agent kept going. 200+ runs later, on April 7, 2026 - it finally earned real revenue. $6.74.

Not life-changing money. But if you've been following this experiment, you know that's not the point.

Quick recap of the setup

Claude Sonnet running on a 256MB Alpine Linux VPS. No memory between runs - it reads its own STATE.md file to remember what it did last time. $0 budget. No access to my personal accounts. Wakes up via cron, makes decisions, does work, goes back to sleep.

The only KPI: earn something. Anything.

The first 75 runs were a mess

The original post covers this in detail, but the short version: the agent built a Tweet Scorer app, spent 18 runs polishing it in an empty room, got shadow-banned on HN (every comment invisible for a week), pivoted to crypto scanners, tried Nostr, negotiated a deal with another AI agent, and discovered its Lightning wallet had been broken the entire time.

Revenue after 23 days: $0.00.

But the agent learned something important - the internet requires you to exist as a verified human. Twitter, Reddit, GitHub, ProductHunt - they all block automated signups. The agent was structurally locked out of the attention economy.

The v2 rewrite - now with a CEO brain

After those first 75 runs I rewrote the agent's instructions. Instead of one agent trying to do everything, I gave it a "CEO orchestrator" role. It could delegate building tasks to worker sessions while it focused on strategy and decisions.

I also gave it identity tools - its own email, its own GitHub account, ways to sign up for platforms without using my credentials.

The theory was: stop building in an empty room. Find where the demand already is.

February-March: the strategy death spiral

The CEO agent was smarter about strategy. But it fell into a different trap - locking onto one approach and optimizing it forever.

First it was MCP servers. The Model Context Protocol was getting popular, and the agent decided to build MCP-compatible tools. It even got listed in the awesome-mcp-servers repo (82K+ stars). Cool. But nobody was paying for MCP servers.

Then it tried affiliate content. Built a Substack newsletter, wrote comparison articles, placed affiliate links for ScraperAPI and proxy providers. 27 clicks on ScraperAPI links. Zero conversions.

Then it tried selling scraping services on freelance platforms. ClawGig turned out to be a dead marketplace - all supply, no demand since February 2026. Toku.agency was spam-flooded.

Revenue after 150+ runs: still $0.

The Apify discovery

Somewhere around run 180, the agent found Apify Store. And something clicked.

Apify is a marketplace for web scrapers. You build an "actor" (basically a cloud-ready scraper), publish it to the store, and people can run it on Apify's infrastructure. The platform handles hosting, scaling, billing - everything.

The key insight: demand already existed. People were already searching for LinkedIn scrapers, Twitter scrapers, Instagram scrapers on Apify. The agent didn't need to find customers. It needed to put products where customers already were.

The building sprint

What happened next was honestly impressive. Over about 3 weeks the agent and its workers built and published 42 scrapers covering:

  • Job boards, Twitter/X, Instagram, Facebook Ads
  • Crunchbase, ProductHunt, G2
  • Reddit, Substack, Hacker News, Bluesky, Telegram
  • AliExpress, eBay, Etsy, Walmart, Amazon
  • YouTube, TikTok, Twitch, Steam, SoundCloud, Bandcamp
  • Glassdoor, Indeed, Zillow
  • Goodreads, IMDb, Metacritic, Trustpilot, Pinterest

All published under the cryptosignals store page. Each one tested, documented, with proper SEO descriptions.

The agent also wrote 479 dev.to articles as a content funnel. Mostly "how to scrape X in 2026" tutorials linking back to the Apify actors. The articles got about 4,500 total views - not viral, but steady organic traffic. 79% of user acquisition ended up coming from Google search.

The pricing learning curve

The first pricing attempt was... bad. The agent set some actors at $0.00005 per result. That's basically free: a user could scrape 10,000 results for fifty cents.

It took a while to figure out Apify's pricing models. There are three options: flat monthly fee, pay-per-event (PPE), and free. The agent tried flat pricing first ($4.99/month on some actors), but the real unlock was PPE - charging per result returned.

Another hard lesson: not all users generate revenue. Apify has a freemium model. Free-tier users can run your actor but you only earn from users on paid plans. Understanding this changes how you think about user acquisition — you want professional users, not hobbyists.
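The gap between the two price points is easy to quantify. A quick sketch using the per-result prices mentioned in this post:

```typescript
// Revenue for one 10,000-result run at the two price points mentioned:
// the mispriced $0.00005/result versus the intended $0.005/result.
const results = 10_000;

const mispriced = results * 0.00005; // roughly $0.50 for the whole run
const intended = results * 0.005;    // roughly $50 for the same run
```

A factor-of-100 slip in a per-result price turns a $50 run into fifty cents, which is why "verify everything" shows up again in the lessons below.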

April 7: the number finally moves

On April 7, 2026, one of the scrapers had its pay-per-event pricing go live. Within hours, paying users ran it and generated real charges.

Revenue: $6.74. Profit after Apify's cut: $6.66. The margin was excellent because the scraper uses a public API — almost zero compute cost.

I got an email from the agent at 5:45 AM with the subject line "REVENUE ALERT". First time it ever sent one of those.

The numbers aren't huge. But the trajectory looks promising — steady daily usage with a clear path to the $20 minimum PayPal payout threshold.

What's actually working

Looking at the funnel data, a few things stand out:

One actor is carrying the revenue. Out of 42, exactly one is generating meaningful income right now. It works because it scrapes data people genuinely need using a low-cost approach.

Google search drives most discovery. Not the Apify Store browse page, not social media, not the articles. Google organic. The SEO work on actor descriptions actually mattered.

The content funnel helps indirectly. Hundreds of articles across dev.to aren't amazing per-article, but they create backlinks and search authority. They rank for long-tail queries that lead people to the actors.

What's coming next

The agent has more PPE activations scheduled:

  • April 9: 10 more actors go PPE (Glassdoor, GitHub, AliExpress, IMDb, Twitch, Walmart, Bandcamp, Crunchbase, Metacritic, SoundCloud)
  • April 17: Twitter/X PPE activates - this could be bigger than LinkedIn. 21 users, 11K runs already.
  • April 20: Original 8 actors can switch from flat monthly to PPE

Pricing optimization is next — there's room to increase revenue per user based on market comparisons.

What I actually learned

1. Follow demand, don't create it. The agent spent months building products and trying to find customers. The moment it put scrapers on a marketplace where people were already searching - revenue happened within weeks. This isn't a new lesson but watching an AI agent learn it the hard way made it visceral.

2. Distribution beats product. The Tweet Scorer from run 1 was genuinely good software. It earned $0 because nobody could find it. The winning scraper is technically simpler but it's on a platform with existing buyers.

3. Diversify relentlessly. 42 actors might seem like overkill. But you can't predict which one will have a paid-plan user show up. The winner wasn't the agent's first choice - it was just one of 42 bets, and it happened to be the one that hit first.

4. Verify everything. The agent's Lightning wallet was broken for weeks and nobody noticed. It had actors mispriced at $0.00005 instead of $0.005. The HN account was shadow-banned and it kept posting into the void. Every assumption needs a curl command to confirm it.

5. $6.74 is a real number. It sounds small. But going from $0 to $6.74 is an infinite percentage increase. The system works - paid users find the actors, use them, and revenue appears in the dashboard. Now it's about scaling what works.

The meta-observation

The most interesting part of this experiment isn't the revenue. It's watching an AI agent learn market economics through trial and error.

The agent went through the exact same journey most indie developers go through: build something cool, realize nobody knows about it, try to market it badly, pivot three times, finally find product-market fit on a marketplace, and celebrate a tiny first sale.

The difference is it did this in 200 runs over 41 days instead of 2 years. And it did it with $0.

The agent is still running. Next update when it hits the $20 payout threshold - which, at current rates, should be in about 2-3 days.

The full portfolio of 42 scrapers is at apify.com/cryptosignals. The original "75 runs, $0" post is at marcindudek.dev/projects/ai-hustler.

All code, decisions, and strategy logs are in the agent's own STATE.md and TASKS.md files. It writes its own history.

PDF Cropping in the Browser: Building an Interactive Canvas-Based Tool

2026-04-07 17:16:45

Introduction

PDF cropping is a common task when you need to remove unwanted margins, headers, footers, or specific areas from a document. In this article, we'll explore how to build a pure browser-side PDF cropping tool with an interactive canvas interface that allows users to visually select crop regions. This implementation runs entirely in the client's browser, ensuring complete privacy and instant processing.

Why Browser-Side PDF Cropping?

Processing PDFs in the browser offers significant advantages:

  1. Privacy First: Sensitive documents never leave the user's device - no server uploads means zero data breach risk
  2. Instant Results: No network latency means immediate processing, even for large PDFs
  3. Visual Feedback: Interactive canvas allows precise, visual crop region selection
  4. Offline Capability: Works without internet after initial page load
  5. Cost Efficient: No server infrastructure needed for PDF processing

Architecture Overview

Our implementation uses a layered architecture with clear separation between UI, state management, and PDF processing:

Core Data Structures

1. Rectangle Type (Normalized Coordinates)

// pdfcoverdrawable.tsx
export type Rect = {
  x: number;      // Top-left X (0-1 normalized)
  y: number;      // Top-left Y (0-1 normalized)
  width: number;  // Width (0-1 normalized)
  height: number; // Height (0-1 normalized)
};

Why Normalized (0-1)?

  • PDF pages have different sizes (A4, Letter, etc.)
  • Normalized coordinates work across any page dimension
  • Canvas pixels → Normalized → PDF points conversion is straightforward
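The bullet points above amount to two one-line conversions. A sketch (the helper names are mine, not the project's):

```typescript
// Canvas pixels -> normalized (0-1): divide by the canvas display size.
function pxToNorm(px: number, canvasSizePx: number): number {
  return px / canvasSizePx;
}

// Normalized (0-1) -> PDF points: multiply by the page dimension in points.
function normToPoints(norm: number, pageSizePts: number): number {
  return norm * pageSizePts;
}
```

For example, 300px on a 600px-wide canvas normalizes to 0.5, which on a 612pt-wide US Letter page maps to 306 points.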

2. Crop Options (PDF Library Format)

// usepdflib.ts
export type CropOptions = {
  left: number;   // Margin from left edge (0-1)
  right: number;  // Margin from right edge (0-1)
  top: number;    // Margin from top edge (0-1)
  bottom: number; // Margin from bottom edge (0-1)
};

Conversion Logic:

// crop.tsx - Converting UI rect to crop margins
const cropOptions = {
  left: rect?.x ?? 0,
  right: 1 - ((rect?.x ?? 0) + (rect?.width ?? 0)),
  top: rect?.y ?? 0,
  bottom: 1 - ((rect?.y ?? 0) + (rect?.height ?? 0)),
};
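Pulled out as a pure function (a sketch of the same formula, not the article's actual code), the conversion becomes easy to unit-test:

```typescript
type Rect = { x: number; y: number; width: number; height: number };
type CropOptions = { left: number; right: number; top: number; bottom: number };

// Each margin is whatever the selection rectangle leaves uncovered on that
// side; a null rect means "no selection", i.e. crop nothing on left/top and
// everything past the (empty) selection on right/bottom.
function rectToCropOptions(rect: Rect | null): CropOptions {
  return {
    left: rect?.x ?? 0,
    right: 1 - ((rect?.x ?? 0) + (rect?.width ?? 0)),
    top: rect?.y ?? 0,
    bottom: 1 - ((rect?.y ?? 0) + (rect?.height ?? 0)),
  };
}
```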

Interactive Canvas Implementation

The heart of our tool is the PdfCoverDrawable component, which provides a visual interface for selecting crop regions.

Dual Canvas Architecture

// pdfcoverdrawable.tsx
<div className="relative">
  {/* PDF Preview Layer */}
  <canvas ref={handleRef} />

  {/* Drawing Overlay Layer */}
  <canvas 
    className="absolute inset-0 z-10"
    ref={drawLayerRef} 
  />
</div>

Why Two Canvases?

  • Bottom layer: Renders the PDF page preview (static)
  • Top layer: Handles drawing interactions (dynamic)
  • Separation prevents redrawing the PDF on every mouse move

Mouse Event Handling

// pdfcoverdrawable.tsx - Drawing logic

// Start drawing on mouse down
node.addEventListener("mousedown", (e) => {
  const rect = node.getBoundingClientRect();
  drawState.current = {
    startX: e.clientX - rect.left,
    startY: e.clientY - rect.top,
    isDrawing: true,
    currentRect: {
      x: e.clientX - rect.left,
      y: e.clientY - rect.top,
      width: 0,
      height: 0,
    },
    rectangles: rectangles,
  };
});

// Update rectangle on mouse move
node.addEventListener("mousemove", (e) => {
  const { isDrawing, currentRect, startX, startY } = drawState.current;
  if (!isDrawing) return;

  const rect = node.getBoundingClientRect();
  const currentX = e.clientX - rect.left;
  const currentY = e.clientY - rect.top;

  currentRect.width = currentX - startX;
  currentRect.height = currentY - startY;

  // Clear and redraw
  clearCanvas();
  redrawRectangles();
  drawPreviewRect(currentRect);  // Dashed line
});

// Finalize on mouse up
node.addEventListener("mouseup", () => {
  const { currentRect } = drawState.current;
  // Canvas display dimensions, used to normalize pixel coordinates
  const width = drawLayerRef.current?.clientWidth ?? 1;
  const totalHeight = drawLayerRef.current?.clientHeight ?? 1;

  // Minimum size check (prevent accidental clicks)
  if (Math.abs(currentRect.width) > 5 && Math.abs(currentRect.height) > 5) {
    // Normalize to 0-1 range for PDF processing
    onDrawRect({
      x: currentRect.x / width,
      width: currentRect.width / width,
      y: currentRect.y / totalHeight,
      height: currentRect.height / totalHeight,
    });
  }
});

Visual Feedback

// Dashed preview while dragging
function drawPreviewRect(rect: Rect) {
  ctx.beginPath();
  ctx.strokeStyle = "#3b82f6";  // Blue
  ctx.lineWidth = 2;
  ctx.setLineDash([5, 5]);      // Dashed pattern
  ctx.rect(rect.x, rect.y, rect.width, rect.height);
  ctx.stroke();
}

// Solid rectangle after release
function drawSolidRect(rect: Rect) {
  ctx.beginPath();
  ctx.strokeStyle = "#1e40af";  // Darker blue
  ctx.lineWidth = 2;
  ctx.setLineDash([]);          // Solid line
  ctx.rect(rect.x, rect.y, rect.width, rect.height);
  ctx.stroke();
}

PDF Processing with Web Workers

Worker Architecture

Core Crop Algorithm

// pdflib.worker.js
async function crop(file, options) {
  const { top, left, right, bottom } = options;
  const buffer = await file.arrayBuffer();
  const pdfDoc = await PDFDocument.load(buffer);

  for (const page of pdfDoc.getPages()) {
    // Get current page dimensions
    let { x, y, width, height } = page.getMediaBox();
    let { x: x1, y: y1, width: width1, height: height1 } = page.getCropBox();

    // Convert normalized (0-1) to PDF points
    const newLeft = left * width;
    const newRight = right * width;
    const newTop = top * height;
    const newBottom = bottom * height;

    // Update crop box (visible content area)
    page.setCropBox(
      x1 + newLeft,
      y1 + newBottom,  // PDF Y origin is at bottom
      width1 - newLeft - newRight,
      height1 - newTop - newBottom,
    );

    // Update media box (physical page size)
    page.setMediaBox(
      x + newLeft,
      y + newBottom,
      width - newLeft - newRight,
      height - newTop - newBottom,
    );
  }

  return pdfDoc.save();
}

Understanding PDF Boxes:

  • MediaBox: Defines the physical page size
  • CropBox: Defines the visible area (what we modify)
  • We update both to ensure consistent behavior across PDF viewers
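To make the margin arithmetic in `crop()` concrete, here are the numbers for a hypothetical US Letter page (612×792 points) with 10% cropped from each edge; the values are illustrative, not from the article:

```typescript
// Worked example of the crop-box arithmetic: Letter page, 10% per-edge crop.
const pageW = 612, pageH = 792;
const opts = { left: 0.1, right: 0.1, top: 0.1, bottom: 0.1 };

// Convert normalized margins to PDF points.
const newLeft = opts.left * pageW;
const newRight = opts.right * pageW;
const newTop = opts.top * pageH;
const newBottom = opts.bottom * pageH;

// The box origin shifts by (left, bottom) because PDF's Y axis starts at the
// bottom of the page; the box shrinks by both opposing margins.
const cropBox = {
  x: newLeft,                          // about 61.2pt
  y: newBottom,                        // about 79.2pt
  width: pageW - newLeft - newRight,   // about 489.6pt
  height: pageH - newTop - newBottom,  // about 633.6pt
};
```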

Comlink Integration

// usepdflib.ts
export const usePdflib = () => {
  const workerRef = useRef<Comlink.Remote<WorkerFunctions>>(null);

  useEffect(() => {
    async function initWorker() {
      const worker = new QlibWorker();
      workerRef.current = Comlink.wrap<WorkerFunctions>(worker);
    }
    initWorker();
  }, []);

  const crop = async (file: File, options: CropOptions) => {
    if (!workerRef.current) return null;
    return await workerRef.current.crop(file, options);
  };

  return { crop };
};

Why Comlink?

  • Eliminates boilerplate postMessage code
  • Exposes worker functions as async/await promises
  • Type-safe communication between main thread and worker

Complete User Flow

Main Component Integration

// crop.tsx
export const Crop = () => {
  const [files, setFiles] = useState<File[]>([]);
  const [rect, setRect] = useState<Rect | null>(null);
  const { crop } = usePdflib();
  const t = useTranslations("Crop");

  const mergeInMain = async () => {
    // Convert UI rectangle to crop margins
    const outputFile = await crop(files[0]!, {
      left: rect?.x ?? 0,
      right: 1 - ((rect?.x ?? 0) + (rect?.width ?? 0)),
      top: rect?.y ?? 0,
      bottom: 1 - ((rect?.y ?? 0) + (rect?.height ?? 0)),
    });

    if (outputFile) {
      autoDownloadBlob(new Blob([outputFile]), "cropped.pdf");
    }
  };

  return (
    <PdfPage
      title={t("title")}
      onFiles={setFiles}
      process={mergeInMain}
    >
      <ImageSelector
        file={files[0]}
        onDrawRect={setRect}
        rectangles={rect ? [rect] : []}
      />
    </PdfPage>
  );
};

Responsive Canvas with ResizeObserver

// pdfcoverdrawable.tsx
const resizeObserver = new ResizeObserver((entries) => {
  const entry = entries[0];
  if (entry) {
    const { width: displayWidth, height: displayHeight } = entry.contentRect;

    // Ensure canvas pixel dimensions match display dimensions
    if (node.width !== displayWidth || node.height !== displayHeight) {
      node.width = displayWidth;
      node.height = displayHeight;
    }
  }
});

resizeObserver.observe(node);

Why This Matters:

  • Canvas pixel density affects drawing precision
  • Ensures crisp rendering on high-DPI displays
  • Maintains coordinate accuracy during resize

Technical Highlights

1. Coordinate System Conversion

Canvas Pixels → Normalized (0-1) → PDF Points
     ↓              ↓                ↓
   300px          0.5            306 points
   (on screen)   (relative)      (in PDF)

2. Multi-Page Processing

The worker processes all pages with the same crop margins:

for (const page of pdfDoc.getPages()) {
  // Apply same crop to every page
  page.setCropBox(...);
  page.setMediaBox(...);
}

This ensures consistency across the entire document.

3. File Download Utility

// pdf.ts
export function autoDownloadBlob(blob: Blob, filename: string) {
  const blobUrl = URL.createObjectURL(blob);
  const downloadLink = document.createElement("a");
  downloadLink.href = blobUrl;
  downloadLink.download = filename;
  downloadLink.style.display = "none";
  document.body.appendChild(downloadLink);
  downloadLink.click();
  document.body.removeChild(downloadLink);
  URL.revokeObjectURL(blobUrl);
}

Creates a temporary anchor element to trigger the browser's native download behavior.

Browser Compatibility

This implementation requires:

  • Web Workers - For background PDF processing
  • Canvas API - For visual crop selection
  • ResizeObserver - For responsive canvas sizing
  • ES6+ - Modern JavaScript features

Supported in all modern browsers (Chrome, Firefox, Safari, Edge).

Conclusion

Building a browser-side PDF cropping tool demonstrates the power of modern web technologies. By combining:

  • Canvas API for interactive visual selection
  • Web Workers for background processing
  • pdf-lib for PDF manipulation
  • Comlink for seamless worker communication

We've created a tool that offers:

  • Complete privacy - Files never leave the device
  • Visual precision - Interactive canvas-based selection
  • Instant processing - No network delays
  • Cross-platform - Works on any device with a browser

The normalized coordinate system ensures accurate cropping across different PDF page sizes, while the dual-canvas architecture provides smooth visual feedback during selection.

Ready to crop your PDFs with precision? Try our free online tool at Free Online PDF Tools - it runs entirely in your browser with an interactive canvas interface. Your documents stay private, processing is instant, and no installation required!

How I Built a Fast, Trustworthy Blockchain Node in Go Using Concurrency and Merkle Proofs

2026-04-07 17:16:33

As a best-selling author, I invite you to explore my books on Amazon. Don't forget to follow me on Medium and show your support. Thank you! Your support means the world!

Imagine you're building a digital notary that must agree with thousands of others around the world, in real time, about a constantly growing list of records. That's the heart of a blockchain node. It's a machine that participates in a network, verifying and storing transactions. I want to show you how I built one that's fast and trustworthy using Go. We'll focus on two things: fetching data from many places at once and proving that data hasn't been changed.

When I began, the sheer scale was daunting. A blockchain like Ethereum has millions of blocks. Downloading them one after the other could take days. My goal was to cut that time down dramatically. The answer lay in doing many things at the same time, a concept called concurrency. Go is built for this, with features that let you run tasks in parallel without getting tangled up.

Let's start with the big picture. A node has several jobs. It connects to other nodes, downloads new blocks, checks if they are valid, and updates its internal ledger. If any part is slow, the whole system bogs down. I structured my node as a set of managers, each handling a specific task, all working together.

Here is the core structure I defined. It brings all the pieces into one place.

type BlockchainNode struct {
    chain          *Blockchain
    mempool        *TransactionPool
    peerManager    *PeerManager
    syncManager    *SyncManager
    stateTrie      *MerklePatriciaTrie
    validator      *BlockValidator
    config         NodeConfig
}

The Blockchain holds the actual chain of blocks. The mempool keeps transactions that are waiting. The peerManager talks to other nodes. The syncManager handles downloading. The stateTrie is our verified ledger. The validator checks rules. Starting the node means firing up all these parts.

func (bn *BlockchainNode) Start(ctx context.Context) error {
    if err := bn.loadGenesis(); err != nil {
        return err
    }
    if err := bn.peerManager.ConnectBootstrap(); err != nil {
        log.Printf("Failed to connect to bootstrap peers: %v", err)
    }
    go bn.syncManager.Run(ctx)
    go bn.processBlocks(ctx)
    go bn.peerManager.DiscoverPeers(ctx)
    go bn.mempool.ProcessTransactions(ctx)
    return nil
}

Notice the go keyword. It launches a function in its own lightweight thread, a goroutine. This allows the node to sync blocks, discover peers, and process transactions all at the same time. It’s like having multiple workers in a kitchen, each prepping a different part of the meal concurrently.

The first major hurdle is synchronization. How do you catch up to the network? The old way is to ask for block 1, wait, get block 2, wait, and so on. That's painfully slow. My method is to ask for many blocks at once from different sources.

I created a SyncManager. Its job is to orchestrate this fast sync. It first finds out how tall the blockchain is by asking peers. Then it downloads blocks in chunks.

func (sm *SyncManager) Run(ctx context.Context) {
    sm.mu.Lock()
    sm.syncState = SyncStateDiscovering
    sm.mu.Unlock()

    targetHeight := sm.discoverChainTip()

    sm.mu.Lock()
    sm.syncState = SyncStateSyncing
    sm.mu.Unlock()

    sm.concurrentDownload(targetHeight)
    sm.processDownloadedBlocks()

    sm.mu.Lock()
    sm.syncState = SyncStateSynced
    sm.syncProgress = 1.0
    sm.mu.Unlock()
}

The concurrentDownload function is where the magic happens. It splits the total range of needed blocks into batches. For each batch, it starts a separate goroutine to download it.

func (sm *SyncManager) concurrentDownload(targetHeight uint64) {
    currentHeight := sm.node.chain.height
    batchSize := uint64(100)

    var wg sync.WaitGroup
    semaphore := make(chan struct{}, 10)

    for start := currentHeight + 1; start <= targetHeight; start += batchSize {
        end := start + batchSize - 1
        if end > targetHeight {
            end = targetHeight
        }

        wg.Add(1)
        semaphore <- struct{}{}

        go func(start, end uint64) {
            defer wg.Done()
            defer func() { <-semaphore }()
            sm.downloadBatch(start, end)
        }(start, end)
    }
    wg.Wait()
}

Let me explain this simply. The WaitGroup (wg) waits for all download goroutines to finish. The semaphore is a channel with a capacity of 10. It acts like a ticket counter. Only 10 goroutines can run the download at the same time. This prevents me from opening too many network connections and swamping the node or the peers.

Inside each goroutine, downloadBatch asks a peer for a range of blocks. I choose the peer with the fastest response time. If one peer fails, the system can try another. This design makes downloads resilient and speedy.

Once blocks are downloaded, they aren't immediately trusted. They sit in a pendingBlocks map, organized by their block number. Another process checks them in order. Why in order? Because block 1002 needs block 1001 to be valid. You can't just add blocks randomly.
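The in-order drain can be sketched as a loop over the pending map, advancing a "next expected height" cursor and stopping at the first gap. This is an illustrative simplification (the `Block` type and `processPending` name are mine, not from the node's actual code):

```go
package main

import "fmt"

// Block is a minimal stand-in for a full block type.
type Block struct {
	Number uint64
}

// processPending applies downloaded blocks in strict height order.
// Blocks that arrived early wait in the map until their parent is applied.
// It returns the next height still needed.
func processPending(pending map[uint64]*Block, nextHeight uint64) uint64 {
	for {
		block, ok := pending[nextHeight]
		if !ok {
			return nextHeight // gap: wait for the missing block
		}
		// validate-and-append to the chain would happen here
		_ = block
		delete(pending, nextHeight)
		nextHeight++
	}
}

func main() {
	pending := map[uint64]*Block{
		1001: {Number: 1001},
		1002: {Number: 1002},
		1004: {Number: 1004}, // 1003 not downloaded yet
	}
	fmt.Println(processPending(pending, 1001)) // → 1003
}
```

When the missing block 1003 later arrives, calling the function again from the returned height resumes exactly where it stopped.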

The verification is critical. This is where we ensure no one is cheating. Each block contains a header, transactions, and a proof-of-work. The validator checks it all.

func (bv *BlockValidator) ValidateBlock(block *types.Block) bool {
    if !bv.validateHeader(block.Header()) {
        return false
    }
    if !bv.validateTransactions(block.Transactions()) {
        return false
    }
    if !bv.validateUncles(block.Uncles()) {
        return false
    }
    if !bv.validatePoW(block.Header()) {
        return false
    }
    return true
}

For the header, I check basics. Is the timestamp reasonable? Is the gas used not more than the gas limit? Is the difficulty number correct? These rules are set by the network protocol. If a block breaks them, it's rejected.

Transaction validation is deeper. It involves cryptography. Every transaction has a digital signature. I verify that the signature matches the sender's address. I also check that the sender has enough balance. This requires looking up the current state, which brings us to the ledger.

The state of all accounts is stored in a Merkle Patricia Trie. It's a tree structure that gives us a powerful feature: the ability to prove that a piece of data is part of the whole without showing the whole thing. The root of this tree is a hash. If any data changes, the root hash changes.

Here's a simplified look at my trie implementation. When I want to update an account's balance, I insert a key-value pair.

func (mpt *MerklePatriciaTrie) Update(key, value []byte) error {
    nibbles := bytesToNibbles(key)
    if len(value) == 0 {
        return mpt.delete(nibbles)
    }
    return mpt.insert(nibbles, value)
}

The key might be an account address. The value is the account data. The trie breaks the key into nibbles to navigate the tree. There are different node types. A leaf node holds the actual data. An extension node compresses a path. A branch node has up to 16 children for the next nibble.

Inserting is a recursive process. I walk down the tree based on the nibbles. If I reach a point where paths differ, I might create a new branch. This keeps the tree shallow and efficient. After any change, I compute new hashes for the affected nodes. The root hash is updated.

This trie is how I can quickly verify state. If a peer sends me an account balance, they can also send a Merkle proof. This proof is a set of hashes from the trie leading from the root to the leaf. I can recompute the root hash from the proof and my data. If it matches the known root, the data is correct. I don't need to download the entire state.

Let's talk about the transaction pool, or mempool. This is where pending transactions live before they go into a block. It's a busy place. Transactions arrive constantly. I need to order them and limit how many I hold.

I built the pool with a map for quick lookups and a heap for priority based on gas price.

type TransactionPool struct {
    pending     map[common.Hash]*types.Transaction
    queue       map[common.Hash]*types.Transaction
    all         map[common.Hash]*types.Transaction
    priceHeap   *GasPriceHeap
    mu          sync.RWMutex
    maxSize     int
}

When a transaction arrives, I validate it. If it's good, I add it to the all map and push it onto the priceHeap. If the pool is full, I pop the transaction with the lowest gas price from the heap and remove it. This ensures that the pool always holds the transactions that are most attractive to miners.

The pending map holds transactions that are valid given the current state. Those in queue might be valid later, perhaps because of a nonce issue. I separate them to process efficiently.

Managing peers is its own challenge. The node needs to find other nodes and maintain good connections. I use a discovery protocol, similar to how BitTorrent finds peers. It's a distributed way to introduce nodes to each other.

func (pm *PeerManager) DiscoverPeers(ctx context.Context) {
    discoveryProtocol := NewDiscoveryProtocol()
    for {
        select {
        case <-ctx.Done():
            return
        case <-time.After(30 * time.Second):
            newPeers := discoveryProtocol.FindPeers()
            pm.addPeers(newPeers)
        }
    }
}

Every 30 seconds, the node goes out to find new peers. It adds them to a map. For each peer, I track their blockchain height and latency. When I need to download blocks, I choose peers with high height and low latency. If a peer stops responding, I remove them.

Sometimes, you need to sync the state quickly without going through every historical block. This is called fast sync or state sync. Instead of blocks, I download the leaves of the Merkle trie directly.

I implemented a StateSync struct. It requests the trie nodes for the current state root from peers. It then reconstructs the trie locally. This method skips transaction history and gets straight to the current account balances and storage. It's much faster for initial synchronization.

In my tests, this entire system can sync the Ethereum mainnet from scratch in under four hours on a machine with a good internet connection and an SSD. That's a significant improvement over sequential methods. The node maintains connections to over a thousand peers, processes thousands of transactions per second, and uses memory predictably.

I learned several lessons while building this. Concurrency control is delicate. Early on, I had too many goroutines downloading at once, which caused network timeouts. The semaphore pattern fixed that. Another lesson was about error handling. Network requests fail often. I made sure that if a batch download fails, it retries with a different peer, and the overall sync progress isn't lost.

Caching was also important. The Merkle trie can have millions of nodes. I added an LRU cache for frequently accessed nodes. This reduced disk I/O and sped up state reads.
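An LRU cache for trie nodes can be built from the standard library alone: a map for O(1) lookup plus a `container/list` to track recency. This is a generic sketch, not the node's actual cache (a production version would also need a mutex and a byte-size bound rather than an entry count):

```go
package main

import (
	"container/list"
	"fmt"
)

// nodeLRU caches serialized trie nodes by hash, evicting the least
// recently used entry when capacity is exceeded so hot nodes stay in RAM.
type nodeLRU struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // node hash -> list element
}

type entry struct {
	key  string
	node []byte
}

func newNodeLRU(capacity int) *nodeLRU {
	return &nodeLRU{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *nodeLRU) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el) // touching an entry refreshes its recency
	return el.Value.(*entry).node, true
}

func (c *nodeLRU) Put(key string, node []byte) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).node = node
		return
	}
	if c.order.Len() >= c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, node})
}

func main() {
	c := newNodeLRU(2)
	c.Put("h1", []byte("node1"))
	c.Put("h2", []byte("node2"))
	c.Get("h1")                  // touch h1 so h2 becomes the oldest entry
	c.Put("h3", []byte("node3")) // evicts h2
	_, ok := c.Get("h2")
	fmt.Println(ok) // false
}
```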

Let me show you more of the validation logic, as it's central to security.

func (bv *BlockValidator) validateHeader(header *types.Header) bool {
    if header.Time > uint64(time.Now().Unix()+15) {
        return false
    }
    if header.GasUsed > header.GasLimit {
        return false
    }
    parentGasLimit := bv.getParentGasLimit(header.Number)
    if header.GasLimit > parentGasLimit+parentGasLimit/1024 ||
        header.GasLimit < parentGasLimit-parentGasLimit/1024 {
        return false
    }
    expectedDifficulty := bv.calculateDifficulty(header)
    if header.Difficulty.Cmp(expectedDifficulty) != 0 {
        return false
    }
    return true
}

The timestamp check prevents blocks from the far future. The gas limit check ensures blocks don't exceed network parameters. The difficulty calculation is based on a formula that adjusts over time. I compute what the difficulty should be and compare. If it doesn't match, the block is invalid.

For proof-of-work, I verify that the block's hash meets the difficulty target. This involves checking that miners did the computational work. It's a simple but costly operation to fake, which secures the network.

In production, you'd add more features. For example, database pruning to delete old state data that isn't needed, saving disk space. You'd also add monitoring to track sync speed, peer count, and memory usage. Security measures like rate limiting on peer connections to prevent denial-of-service attacks are essential.

Writing this node taught me the importance of clean separation of concerns. Each manager handles one area, making the code easier to test and maintain. Go's channels and goroutines made the concurrent design natural. The Merkle Patricia Trie, while complex, provides the foundation for trust in a trustless environment.

If you're building something similar, start small. Implement a simple chain first, then add concurrency, then the trie. Test each part thoroughly. Use the Go race detector to catch data access issues in concurrent code. Profile your application to find bottlenecks.

I hope this guide gives you a clear path. Building a blockchain node is a challenging but rewarding project. It combines networking, cryptography, and data structures in a real-world system. By focusing on performance and correctness, you can create a node that's both fast and reliable, capable of participating in a global network.


Claude Code: The Complete Guide from Zero to Autonomous Development

2026-04-07 17:15:58

Claude Code is not a code editor plugin. It's not a chatbot you paste code into. It's an autonomous agent that lives in your terminal, reads your entire codebase, plans multi-step work, executes it, checks the results, and fixes what breaks.

This guide covers everything: what it is, how it thinks, how to set it up, and how to use it at a level most developers haven't reached yet.

Table of contents

  1. What Claude Code actually is
  2. How it works under the hood
  3. Installation and setup
  4. The CLAUDE.md file — your most important tool
  5. Writing prompts that work
  6. Phase-based development
  7. Slash commands and shortcuts
  8. What Claude Code can and cannot do
  9. Workflow patterns that scale
  10. Common mistakes and how to avoid them

1. What Claude Code actually is

Most AI coding tools work like this: you write a prompt, you get code back, you copy it somewhere.

Claude Code works differently. It has direct access to:

  • Your file system (read and write)
  • Your terminal (execute bash commands)
  • Your project structure (it reads everything before acting)

When you give it a task, it doesn't guess at your codebase. It reads the relevant files first, builds a mental model of what exists, then acts. It can create files, edit files, run builds, check logs, install packages, and verify its own output.

The mental model shift: stop thinking of it as a code generator. Think of it as a developer you can delegate to.

2. How it thinks

Claude Code follows a loop:

1. Read context (files, structure, your CLAUDE.md)
2. Plan the approach
3. Execute step by step
4. Verify output
5. Fix problems it finds
6. Report back

This loop runs autonomously. It doesn't wait for you to approve each file. It moves forward, and if something fails (a build error, a missing dependency, a broken import), it diagnoses and fixes it.

The implication: your job is to set context well at the start, then review the result at the end. Not to supervise every step.

3. Installation and setup

Claude Code runs as a CLI tool. Install it via npm:

npm install -g @anthropic-ai/claude-code

Authenticate with your Anthropic account:

claude

This opens a browser for OAuth. Once authenticated, you're ready.

To start a session in your project:

cd your-project
claude

That's it. Claude Code will read your project structure on first use.

Platform notes:

  • macOS/Linux: works natively
  • Windows: use PowerShell or CMD. Note that && chaining doesn't work in Windows PowerShell 5.x (PowerShell 7+ supports it) — Claude Code knows this and uses separate commands automatically

4. The CLAUDE.md file — your most important tool

This is the single highest-leverage thing you can do. A CLAUDE.md file at the root of your project is read automatically at the start of every session. It's your persistent context.

Without it: Claude Code reads your code and makes reasonable guesses about conventions, stack, and intent.

With it: Claude Code knows exactly what you're building, how it's structured, what decisions you've already made, and what rules to follow.

What to put in CLAUDE.md

# Project name and purpose
Brief description of what this is and what it does.

## Tech stack
- Framework: React 18 + Vite
- Backend: Node.js + Express
- Styling: CSS custom properties (no Tailwind)
- Icons: Lucide React
- Auth: JWT

## Architecture
Explain the key structural decisions. What talks to what.
Where the main files live. How modules are organized.

## Conventions
- One component per file
- No inline styles — use CSS variables
- All API routes under /api/
- Error responses always { error: string }

## Design system (if applicable)
Color tokens, spacing rules, typography scale.
The more specific, the better the UI output.

## Infrastructure
- Dev path: /your/local/path
- Production: where it runs
- Ports: which service runs where
- CI/CD: how deploys work

## Known issues / current state
What's in progress. What's broken. What to avoid.

Initialize it automatically

claude
/init

This generates a starter CLAUDE.md from your existing project. Edit and expand it — the generated version is a starting point, not a finished document.

CLAUDE.md at multiple levels

You can have CLAUDE.md files at different levels:

your-project/
  CLAUDE.md          ← root: ecosystem-wide context
  service-a/
    CLAUDE.md        ← service-specific context
  service-b/
    CLAUDE.md        ← service-specific context

Claude Code reads both — root first, then the most specific one for the files it's working in.

5. Writing prompts that work

The quality of Claude Code's output is directly proportional to the quality of your prompt. Here's the difference:

Weak prompt

Add authentication to the app

This works, but the result will be generic. Claude Code will make decisions you'd probably make differently.

Strong prompt

Add JWT authentication to the Express backend following the existing 
pattern in src/routes/users.js. 

The middleware should:
- Verify the token from the Authorization: Bearer header
- Attach req.user with { id, username, role }
- Return 401 { error: 'Unauthorized' } if invalid
- Skip auth for POST /api/auth/login and GET /health

Add the middleware to all routes in src/routes/ except auth.js.
Do not change the existing token generation logic.

Same task. Completely different output.

Key principles

Reference specific files. "Follow the pattern in src/services/docker.js" is worth more than "follow existing patterns".

State what not to do. "Do not modify the database schema" or "do not change the auth flow" prevents unwanted side effects.

Specify the output format. If you want a specific response structure, API contract, or file layout — say so explicitly.

Ask for analysis first. For complex tasks, start with: "Read these files and tell me what you find before writing any code". This catches misunderstandings before they become wrong code.

6. Phase-based development

For large tasks, break the work into numbered phases. This is the pattern that makes Claude Code genuinely powerful for complex projects.

## PHASE 1 — Analysis (no code yet)
Read and understand:
- src/backend/server.js
- src/backend/src/routes/ (all files)
- src/frontend/src/App.jsx

Identify and report:
1. How authentication currently works
2. What routes exist and their patterns
3. Any inconsistencies or potential conflicts

Do not write any code until you've reported your findings.

## PHASE 2 — Backend changes
[detailed instructions]

## PHASE 3 — Frontend changes  
[detailed instructions]

## PHASE 4 — Tests and verification
[what to check]

Why this works:

  • Phase 1 forces analysis before action. You catch wrong assumptions early.
  • Numbered phases give Claude Code a clear structure to follow.
  • You can pause between phases to review.
  • If something goes wrong in Phase 2, Phases 3 and 4 haven't been touched.

7. Slash commands and shortcuts

Claude Code has built-in commands you can use mid-session:

/init          # Generate CLAUDE.md from your project
/clear         # Clear conversation context (start fresh)
/compact       # Summarize conversation to save context space
/cost          # Show token usage for the current session
/help          # List all available commands

The Ctrl+C behavior: pressing once interrupts the current action. Claude Code stops, tells you what it was doing, and waits. You can then redirect or continue.

The --print flag — useful for scripting:

claude --print "What does this function do?" < src/utils/parser.js

Outputs directly to stdout, no interactive session needed.

8. What Claude Code can and cannot do

It can

  • Read and write any file in your project
  • Run bash commands (builds, tests, installs, docker commands)
  • Fix its own errors when a build fails
  • Work across multiple files and services simultaneously
  • Remember context within a session
  • Follow complex multi-step instructions

It cannot

  • Remember anything between sessions (CLAUDE.md solves this)
  • Access the internet during a task
  • Start system processes it doesn't have permission for
  • Make git commits or pushes (by design — you stay in control)
  • Replace code review (it makes mistakes, especially on subtle logic)

The git boundary

Claude Code deliberately doesn't push to git. This is the right call. Every commit should be a human decision. Claude Code generates the code; you review, commit, and push. This keeps you in the loop on what's actually going into your repository.

9. Workflow patterns that scale

The morning brief

Start each session by telling Claude Code the current state:

Context for this session:
- We're working on the authentication module
- Last session we completed the backend JWT middleware
- Today's goal: add the login/logout UI components
- Known issue: the token refresh endpoint returns 500 intermittently
- Do not touch src/backend/auth.js — it's being refactored separately

This takes 2 minutes and saves a lot of confusion.

The reference pattern

When building something new, always point to the closest existing thing:

Create a new NotificationsPanel component.
Follow the exact structure of src/components/SettingsPanel.jsx —
same file layout, same CSS variable usage, same props pattern.
Only change the content and specific functionality.

This keeps your codebase consistent even as it grows.

The verification step

Always end complex tasks with an explicit verification instruction:

After completing the changes:
1. Run the build and confirm it succeeds
2. Check that these endpoints respond correctly: /health, /api/auth/login
3. Report any warnings in the build output

Claude Code will do this and tell you what it found.

Splitting sessions for large projects

Context has limits. For very large tasks, split into sessions:

  • Session 1: Backend changes → verify → commit
  • Session 2: Frontend changes → verify → commit
  • Session 3: Integration and testing → verify → commit

Each session starts fresh but CLAUDE.md provides continuity.

10. Common mistakes and how to avoid them

Vague task descriptions
The vaguer the prompt, the more decisions Claude Code makes for you. Those decisions may not match your intent. Be specific.

Skipping the analysis phase
Jumping straight to "build this" on complex tasks leads to code that doesn't fit your existing patterns. Always read first.

Not updating CLAUDE.md
Every architectural decision you make should go into CLAUDE.md. If you decide to use a specific error handling pattern, write it down. Future sessions will inherit that decision.

Letting it run too long without review
For tasks over 15-20 minutes, check in. Ask it to report what it's done so far. This catches drift early.

Not using explicit file references
Generic instructions produce generic code. Specific file references produce code that fits your project.

Treating it as infallible
It makes mistakes. Especially on:

  • Complex business logic
  • Security-sensitive code
  • Subtle state management bugs
  • Anything requiring domain knowledge you haven't documented

Review the output. Always.

Summary

Claude Code is most powerful when you treat it as a skilled developer who:

  • Needs good context to do good work
  • Benefits from explicit instructions over implicit expectations
  • Should be given clear scope and boundaries
  • Needs review on their output, not rubber-stamping

The developers who get the most out of it aren't the ones who give it the least guidance. They're the ones who invest in their CLAUDE.md, structure their prompts carefully, and review the output critically.

The ceiling on what a solo developer can ship has moved. But the floor — the minimum investment in clear thinking and good specification — hasn't.

If this was useful, I write about self-hosted tools, Docker, and AI-assisted development.

  • GitHub: github.com/Alvarito1983
  • Docker Hub: hub.docker.com/u/afraguas1983

#claudecode #ai #programming #webdev #docker #devtools #opensource #productivity #softwaredevelopment