The Practical Developer

Simulating IoT Devices Without Physical Hardware 🚀

2026-05-03 15:47:47

One of the recurring challenges in building IoT systems is testing device communication, telemetry handling, MQTT flows, and event-driven architectures without constantly relying on physical hardware. To solve this, I recently started building a lightweight IoT Simulator CLI that lets developers simulate virtual devices directly from the terminal.

The project is designed for developers working on IoT platforms, MQTT brokers, RabbitMQ pipelines, telemetry ingestion systems, automation workflows, and real-time event processing. Instead of manually mocking payloads or depending on real devices during development, the simulator allows virtual devices to behave like actual connected hardware with configurable telemetry, topics, and runtime interactions.

The current version focuses on providing a simple but flexible developer experience with config-driven device simulation and MQTT communication support. The goal is to make local IoT testing faster, easier, and more scalable while keeping the simulator lightweight and developer-friendly.
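
To give a flavor of what a virtual device does under the hood, here's a minimal sketch using the mqtt npm package: a hypothetical device definition publishing fake telemetry on a timer (the simulator's actual config schema and API may differ):

import mqtt from 'mqtt';

// Hypothetical device config; the CLI's real schema may differ.
const device = {
  id: 'sensor-01',
  topic: 'devices/sensor-01/telemetry',
  intervalMs: 2000,
};

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  setInterval(() => {
    // Fake telemetry shaped like a temperature sensor reading
    const payload = JSON.stringify({
      deviceId: device.id,
      temperature: 20 + Math.random() * 5,
      ts: Date.now(),
    });
    client.publish(device.topic, payload);
  }, device.intervalMs);
});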

This is still the first version, but there’s a lot planned ahead — including more advanced payload generators, richer simulation controls, better orchestration capabilities, and expanded protocol support.

📦 npm Package: PACKAGE LINK

Would love feedback from developers working with IoT systems, MQTT infrastructure, or event-driven architectures.

I Built a Multi-Agent AI Pen Tester Because AI Coding Tools Are Shipping Vulnerable Code

2026-05-03 15:35:14

AI coding assistants are everywhere. Developers are shipping code faster than ever using Claude, Copilot, and Cursor.
They're also shipping SQL injection, hardcoded secrets, broken authentication, and XSS - faster than ever.
The problem is obvious once you think about it: AI tools optimize for working code, not secure code. They'll write a login form that functions perfectly and is trivially bypassable with ' OR 1=1--. They'll hardcode an API key because it's the fastest way to make the demo work. They'll skip input validation because you didn't ask for it.
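
Here's the shape of that login bug in a minimal sketch (better-sqlite3 is used purely for illustration; any driver has the same flaw when SQL is built by string interpolation):

import Database from 'better-sqlite3';

const db = new Database('app.db');

// Vulnerable: user input is interpolated straight into the SQL string.
// email = "' OR 1=1--" turns the WHERE clause into a tautology and
// comments out the password check.
function loginVulnerable(email: string, passwordHash: string) {
  return db
    .prepare(`SELECT * FROM users WHERE email = '${email}' AND password = '${passwordHash}'`)
    .get();
}

// Fixed: a parameterized query; the driver treats input strictly as data.
function loginSafe(email: string, passwordHash: string) {
  return db
    .prepare('SELECT * FROM users WHERE email = ? AND password = ?')
    .get(email, passwordHash);
}
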
Most solo developers and small teams will never hire a penetration tester. A basic pen test costs $500–$2,000 and takes weeks to schedule. So the vulnerabilities just ship.
I built VulnSwarm to fix that.

What VulnSwarm Does
VulnSwarm deploys a swarm of specialized AI agents that mirror a real penetration testing team. Instead of one model trying to do everything, each agent has a distinct role:
🔭 Recon Agent — maps the attack surface. Identifies entry points, fingerprints the tech stack, flags the highest-risk areas.
💥 Exploit Agent — takes the recon and determines what's actually exploitable. Rates each finding by severity, exploitability, and impact. Assigns CVSS-like scores.
🗡️ Red Team Agent — thinks like an attacker. Chains vulnerabilities together into realistic attack paths. Finds the worst-case scenario.
🛡️ Blue Team Agent — the defender. Takes everything the red team found and writes specific, code-level fixes. Prioritizes by effort vs. impact.
📄 Report Agent — synthesizes everything into a professional penetration testing report with an overall risk score, severity breakdown, and remediation roadmap.
The agents debate each other. The red team challenges the exploit analysis. The blue team pushes back on severity ratings. The result is more nuanced than any single model pass.

Testing It on OWASP Juice Shop
To test VulnSwarm, I pointed it at OWASP Juice Shop — a deliberately vulnerable web app designed for security testing practice.
I also tested it manually first. In about 30 seconds I:

Logged in as admin using ' OR 1=1-- in the email field
Accessed the admin panel at /administration
Retrieved 21 user email addresses
Found an exposed crypto wallet seed phrase in customer feedback

Then I ran VulnSwarm. Here's what it found automatically:
Risk Score: CRITICAL (90/100)

🔴 File Upload Endpoints — CVSS 9.0
Exploitable to inject malicious code or exfiltrate sensitive data.

🔴 Unvalidated API Endpoints — CVSS 9.0
API endpoints lack input validation and sanitization.

🟠 Missing Content-Security-Policy — CVSS 5.3
🟠 Missing Strict-Transport-Security — CVSS 5.3
🟠 Missing X-XSS-Protection — CVSS 5.3
🟠 Missing Referrer-Policy — CVSS 5.3
🟠 Missing Permissions-Policy — CVSS 5.3
This ran in about 15 minutes on a CPU-only VPS using llama3.2:3b. Larger models produce deeper findings — the SQL injection I found manually would have been caught by qwen2.5:14b or Claude.

How the Multi-Agent Architecture Works
The key insight is that security analysis benefits from multiple perspectives arguing with each other — the same way a real security team works.
A single model asked "find vulnerabilities in this app" will produce a list. It won't challenge its own assumptions. It won't think about how vulnerabilities chain together. It won't prioritize fixes by what a developer can actually implement today.
The agent pipeline forces specialization:
Your Code/App
      │
      ▼
┌──────────┐    ┌───────────┐    ┌──────────┐    ┌─────────┐
│  Recon   │───▶│  Exploit  │───▶│ Red Team │───▶│  Blue   │
│  Agent   │    │   Agent   │    │  Agent   │    │  Team   │
└──────────┘    └───────────┘    └──────────┘    └────┬────┘
                                                      │
                                                      ▼
                                                ┌──────────┐
                                                │  Report  │
                                                │  Agent   │
                                                └──────────┘
Each agent only sees what it needs to. The exploit agent doesn't know about fixes — it just finds problems. The blue team agent doesn't know about attack chains — it just writes solutions. The report agent synthesizes everything into something a developer or CTO can actually act on.
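
VulnSwarm's internals aren't shown here, but the pattern itself is easy to sketch. Assuming a generic llm(system, user) chat-completion helper (hypothetical, pointed at Ollama below), the pipeline is just staged calls where each agent sees only the previous stage's output:

// Minimal chat-completion helper against a local Ollama instance.
// Swap the fetch for Claude / GPT-4o / OpenRouter as needed.
async function llm(system: string, user: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2:3b',
      stream: false,
      messages: [
        { role: 'system', content: system },
        { role: 'user', content: user },
      ],
    }),
  });
  const data = await res.json();
  return data.message.content;
}

async function runSwarm(target: string): Promise<string> {
  const recon = await llm('You map attack surfaces: entry points, tech stack, high-risk areas.', target);
  const exploits = await llm('You rate findings by severity, exploitability, and impact.', recon);
  const attackPaths = await llm('You are a red teamer. Chain findings into realistic attack paths.', exploits);
  const fixes = await llm('You are a blue teamer. Write specific, code-level fixes, prioritized by effort vs. impact.', attackPaths);
  // Only the report stage sees everything at once.
  return llm(
    'Synthesize a professional pen-test report with a risk score and remediation roadmap.',
    [exploits, attackPaths, fixes].join('\n\n'),
  );
}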

Running It Yourself
VulnSwarm supports Claude, GPT-4o, Gemini, OpenRouter, and Ollama. If you want to run it completely free and locally:
git clone https://github.com/aaronsood/VulnSwarm.git
cd VulnSwarm
pip install -r requirements.txt

# Pull a local model
ollama pull llama3.2:3b

# Run it
python -m cli.main
For web app scanning, spin up a test target first:
docker run --rm -p 3000:3000 bkimminich/juice-shop
Then point VulnSwarm at http://localhost:3000.
Web scanning is localhost-only by default — VulnSwarm won't touch anything you don't own.
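
VulnSwarm's actual guard isn't shown here, but the check is simple to sketch; assertLocalTarget below is a hypothetical helper, not VulnSwarm's API:

// Refuse anything that isn't loopback before scanning.
function assertLocalTarget(url: string): void {
  const host = new URL(url).hostname; // '[::1]' keeps its brackets in WHATWG URL
  if (!['localhost', '127.0.0.1', '[::1]'].includes(host)) {
    throw new Error(`Refusing to scan non-local target: ${host}`);
  }
}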

What It Doesn't Do (Yet)
VulnSwarm is early. It's a first pass, not a replacement for a professional security team.
It misses zero-days. It won't find novel attack chains that require deep business logic understanding. Smaller models miss things that larger models catch. It doesn't yet integrate with CI/CD pipelines or GitHub Actions.
The roadmap includes all of that. For now it solves the problem that matters most: the 99% of developers who ship with zero security review and no budget to fix that.

The Bigger Picture
There's something poetic about using AI to find the vulnerabilities that AI introduced. As AI coding tools become the default way software gets written, AI security tooling needs to keep pace.
VulnSwarm is open source, MIT licensed, and early. If you're in security or AI tooling, contributions are very welcome.
GitHub: github.com/aaronsood/VulnSwarm

Built and tested on a Saturday with a CPU-only VPS, a deliberately hackable web app, and too much coffee.

LinkedIn Quietly Migrated From ProseMirror to Quill — and Broke Every Browser Automation Tool That Touched the Composer

2026-05-03 15:34:07

I shipped a fix to my MCP server last week for LinkedIn's ProseMirror composer. It worked. Two days later, every LinkedIn post automation broke.

This is the post-mortem of what changed, how I figured it out, and why "automate the platform" stories almost always end this way.

The crash

The symptom was specific. My MCP server's safari_fill tool — which dutifully filled ProseMirror by walking React Fiber and calling editor.commands.setContent(html) — was now crashing the helper daemon and dismissing the composer dialog the instant it touched the contenteditable.

Same composer URL. Same DOM tree at first glance. Same selectors. Different editor underneath.

The DOM tells the truth

I dropped into the browser console and ran the usual probe:

const el = document.querySelector('[contenteditable="true"]');
el.editor // -> undefined
el.closest('.ProseMirror') // -> null
el.closest('.ql-editor') // -> <div class="ql-editor">

There it was. .ql-editor is the canonical Quill class name. LinkedIn had swapped the post composer from ProseMirror to Quill at some point in early 2026 with no announcement I can find.

Why it was crashing

Quill, like ProseMirror, doesn't let you "just" stuff text into the contenteditable. Both editors hold an internal model — Quill calls it a Delta — and the DOM is downstream of that model.

If you bypass the model and write to the DOM directly, two things happen:

  1. The model and DOM disagree.
  2. The next user-driven event (a keystroke, a save) triggers a re-render that throws because the diff is incoherent.

That's what was killing the composer. My fill was writing to innerText, the Delta state thought the editor was still empty, the React tree tried to reconcile, and the dialog evaporated. The Swift daemon caught the cascading exception and crashed itself for good measure.
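
Condensed, the crashing path boiled down to this (don't do this):

// Naive fill: writes straight to the DOM, leaving Quill's Delta model out of sync.
const field = document.querySelector('[contenteditable="true"]');
field.innerText = 'Hello, LinkedIn';
// Quill's Delta still thinks the editor is empty; the next user-driven
// event triggers a re-render against an incoherent diff, and the dialog dies.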

The fix: drive Quill the way it expects to be driven

Quill exposes a programmatic API. You just need a reference to the instance. The lookup order I landed on:

  1. Walk up to find an ancestor with class .ql-container.
  2. Try .__quill — Quill 2.x attaches the instance there directly.
  3. Fall back to React Fiber: walk up the fiber chain looking for memoizedProps.quill or stateNode.quill (LinkedIn wraps Quill in a React component that holds the instance in props).
  4. If still nothing, fall back to a real CGEvent Cmd+V paste — Quill respects clipboard events with isTrusted: true.
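
Condensed into code, that lookup is roughly this (findQuill is my helper, not a Quill or LinkedIn API; the __quill attachment and fiber walk match what I found, but may shift across Quill and React versions):

function findQuill(el) {
  const container = el.closest('.ql-container');
  if (!container) return null;
  // Steps 1-2: Quill 2.x attaches the instance to its container directly
  if (container.__quill) return container.__quill;
  // Step 3: React Fiber fallback — find the fiber key, then walk up the chain
  const fiberKey = Object.keys(container).find(k => k.startsWith('__reactFiber$'));
  let fiber = fiberKey ? container[fiberKey] : null;
  while (fiber) {
    const quill = fiber.memoizedProps?.quill ?? fiber.stateNode?.quill;
    if (quill) return quill;
    fiber = fiber.return;
  }
  return null; // Step 4: caller falls back to a real Cmd+V paste
}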

Once you have the instance, the actual fill is one line:

quill.setContents([{ insert: text + '\n' }], 'api');

The 'api' source flag is the part that matters. It tells Quill "this came from your own API, update your model and the DOM together." The text commits, the Delta stays consistent, and the React parent doesn't try to re-conciliate against a corrupted model.

What this taught me about platform automation

Two lessons, both old, both worth re-learning:

Editors aren't a stable interface. ProseMirror and Quill have different APIs, different state models, and different rules for "what counts as a real edit." Targeting one of them only works until the platform decides it doesn't anymore. LinkedIn made this swap with zero changelog. The only way I knew was that my code broke.

The DOM is the lowest common denominator. The editor model is the actual one. Every automation tool that synthesizes events on the contenteditable is operating one layer below the truth. Sometimes that works (because the editor reconciles). Sometimes it doesn't (because the editor crashes or silently discards the input). The robust path is always to find the editor instance and call its API.

There's a third lesson, which is more uncomfortable: I couldn't fully verify my fix on LinkedIn, because LinkedIn's modal-opening behavior in headless contexts is independently broken right now. The composer button accepts clicks, the dialog DOM materializes, but it never visually opens. So the Quill detection is in place — and verified on test pages — but the LinkedIn-specific live path is still gated on a separate modal issue I haven't cracked.

This is the texture of platform automation. Two unrelated bugs, same week, same target. Each one looks like the other. You ship a fix for one and the other one masquerades as a regression.

The takeaway

If you're building anything that types into a third-party rich text editor — Slack, LinkedIn, Discord, Medium, Notion — the editor identity is part of your contract with the platform, and the platform doesn't owe you stability there. Detect the editor type at runtime. Have a fallback for the unknown case (real clipboard events, ideally). Log what you found, so when it changes you find out from your own telemetry instead of from a Slack message at 11pm.

And read the contenteditable's class list before you touch it. ProseMirror and Quill have different class signatures and the DOM will tell you what you're dealing with — if you ask.
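
That check fits in a few lines; a minimal detection sketch based on the class signatures above:

function detectEditor(el) {
  if (el.closest('.ProseMirror')) return 'prosemirror';
  if (el.closest('.ql-editor') || el.closest('.ql-container')) return 'quill';
  return 'unknown'; // unknown: fall back to real clipboard events, and log it
}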

The fix shipped in [email protected]. Source on GitHub.

Why Your Reddit Video Downloads Have No Sound (And How to Fix It)

2026-05-03 15:20:44

If you've ever tried to download a video from Reddit, you've probably ended up with a silent MP4 file. No audio. No error. Just a video that should have sound but doesn't.

This isn't a bug in your downloader. It's how Reddit stores videos.

The Problem

Most video platforms (YouTube, Twitter, etc.) serve videos as a single muxed file — video and audio combined in one stream. Easy to download, plays anywhere.

Reddit doesn't do that. When you upload a video to Reddit, their backend splits it into two separate files stored on v.redd.it:

DASH_720.mp4    ← video only, no audio track
DASH_audio.mp4  ← audio only

When you watch on Reddit, the player loads both files and syncs them client-side. When you download, most tools grab only the video file.

Why It Happens

Reddit uses MPEG-DASH (Dynamic Adaptive Streaming over HTTP). DASH is designed for adaptive streaming where the player picks the best video quality and audio quality independently based on bandwidth.

If you visit a Reddit video URL directly:

https://v.redd.it/abc123/DASH_720.mp4

You'll get a perfectly playable video file — that just happens to have no audio track. The audio lives at:

https://v.redd.it/abc123/DASH_audio.mp4

A naive downloader (curl, wget, basic browser save) only grabs the URL it sees. So you get a silent video.

The Fix

You need to:

  1. Download both the video and audio streams
  2. Merge them into a single MP4 with FFmpeg

Here's the minimal FFmpeg command that does it:

ffmpeg -i DASH_720.mp4 -i DASH_audio.mp4 \
  -c:v copy -c:a aac \
  output.mp4

The flags matter:

  • -c:v copy → don't re-encode video (preserves quality, instant)
  • -c:a aac → encode audio as AAC (Reddit's audio is sometimes raw, AAC ensures compatibility)
  • Two -i flags → two inputs; FFmpeg takes the video stream from the first file and the audio stream from the second

If you skip -c:v copy and let FFmpeg re-encode, you'll lose quality and the operation takes 10x longer.

Doing It Programmatically (Python)

If you're building a tool, yt-dlp handles this automatically when configured correctly:

import yt_dlp

ydl_opts = {
    'format': 'bestvideo+bestaudio/best',
    'merge_output_format': 'mp4',
    'postprocessors': [{
        'key': 'FFmpegVideoConvertor',
        'preferedformat': 'mp4',
    }],
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://reddit.com/r/funny/comments/abc123/title/'])

The key is bestvideo+bestaudio — the + syntax tells yt-dlp to download both streams and merge them. Without the +, you get whatever single stream Reddit returns first (usually video-only).

merge_output_format: 'mp4' ensures the final file is a standard MP4 (FFmpeg might default to MKV otherwise). And 'preferedformat' really is spelled that way; it's the actual option key in yt-dlp's FFmpegVideoConvertor postprocessor.

Edge Cases

A few things that tripped me up:

1. Some Reddit videos genuinely have no audio. GIF posts and silent screen recordings have no DASH_audio.mp4 file at all. Handle this gracefully:

ydl_opts = {
    'format': 'bestvideo+bestaudio/best',  # falls back to "best" if audio missing
    ...
}

2. Cross-posted videos use different paths. A video cross-posted from r/A to r/B has the original v.redd.it URL. Don't try to construct URLs from the post path — extract the actual v.redd.it URL from the post metadata.

3. NSFW posts gate the metadata, not the CDN. Reddit only serves NSFW post pages to logged-in users, but the video CDN itself doesn't care. You can fetch the video files directly without auth as long as you have the v.redd.it URL.

Why Most Tools Don't Bother

Implementing this correctly requires:

  • Detecting that you're on Reddit (URL parsing)
  • Extracting the post metadata to find both stream URLs
  • Downloading both files (extra bandwidth)
  • Running FFmpeg to merge (extra CPU)
  • Handling all the edge cases above

A lot of "free Reddit downloaders" skip the merging step because it requires server-side FFmpeg processing or a Wasm FFmpeg in the browser. Both add complexity.

If you want a working version that handles all this, AllClip's Reddit downloader does the merging server-side — paste any Reddit URL and you get an MP4 with audio.

vite-plugin-pack-orchestrator: One Vite Plugin for Compression, Checksums, and Auto Hash-Renaming

2026-05-03 15:18:48

Why Build Another Wheel?

There are already some Vite packing plugins out there — vite-plugin-zip-pack, vite-plugin-compress, etc. They work, but they always feel like they're missing something. Most of them only support ZIP and offer fairly limited functionality.

In real-world projects, the build packaging step is rarely that simple:

  1. Multiple compression formats 🗜️ — ZIP for sharing with colleagues, TAR.GZ for deploying to Linux servers, 7Z for higher compression ratio archiving. Different scenarios demand different formats.
  2. File checksums 🔐 — After packaging, you need MD5/SHA1 checksums to verify version consistency, especially when delivering builds to clients.
  3. Flexible naming ✏️ — Version numbers, timestamps, hash values — the more information in the filename, the better.
  4. CI/CD friendly 🚀 — Every build artifact in a pipeline should be uniquely traceable. Auto hash-renaming after compression saves you the hassle of writing scripts to do it manually.

Existing plugins basically can't satisfy all of these at once, which is why I wrote vite-plugin-pack-orchestrator.

What Makes It Different

| Feature | Most Packing Plugins | This Plugin |
|---|---|---|
| Formats | ZIP only | ZIP / TAR / TAR.GZ / 7Z |
| Checksums | None | MD5 / SHA1 / SHA256 |
| File Naming | Fixed name | [name] [version] [timestamp] [hash] placeholders |
| Hook System | None | onBeforeBuild / onAfterBuild / onError hooks |
| File Filtering | Partial | include + exclude glob patterns |
| 7Z Support | Requires system-installed 7z | Bundled, zero dependencies |
| Output Dir | Fixed location | Custom archiveOutDir |

Installation

npm install vite-plugin-pack-orchestrator -D

Quick Start

The most basic usage — two lines of config and you're done:

// vite.config.ts
import { defineConfig } from 'vite';
import orchestrator from 'vite-plugin-pack-orchestrator';

export default defineConfig({
  plugins: [
    orchestrator({
      pack: {
        outDir: 'dist',          // Directory to pack, defaults to 'dist'
        format: 'zip',           // Format: zip | tar | tar.gz | 7z
        fileName: 'myapp',       // Archive filename
      },
    }),
  ],
  build: { outDir: 'dist' },
});

Run vite build, and you'll get myapp.zip in your project root.

Configuration Reference

pack — Packaging Options

pack: {
  outDir: 'dist',              // Source directory (relative to project root), default 'dist'
  fileName: 'myapp',           // Filename, supports placeholders (see below)
  format: 'zip',               // Format: 'zip' | 'tar' | 'tar.gz' | '7z'
  compressionLevel: 9,         // Compression level 0-9, default 9 (maximum)
  archiveOutDir: './releases', // Archive output directory, defaults to project root
  exclude: ['**/*.map'],       // Files to exclude (glob matching)
  include: ['**/*.js'],        // Files to include (optional, includes all if not set)
}

fileName Placeholders

The filename supports the following placeholders, automatically replaced at build time:

| Placeholder | Description | Example |
|---|---|---|
| [name] | name from package.json | my-awesome-app |
| [version] | version from package.json | 1.2.0 |
| [timestamp] | Current timestamp | 1714012345678 |
| [hash] | Bundle content MD5 hash (full 32 chars) | a1b2c3d4e5f6... |
| [hash:8] | First N chars of the MD5 hash ([hash:N]) | a1b2c3d4 |

// Example: fileName = 'release-[version]-[timestamp]'
// Output: release-1.2.0-1714012345678.zip

// Example: fileName = '[name]-v[version]'
// Output: my-awesome-app-v1.2.0.zip

// Example: fileName = '[name]-[hash]'
// Output: my-awesome-app-a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6.zip

// Example: fileName = '[name]-[hash:8]'
// Output: my-awesome-app-a1b2c3d4.zip

If fileName doesn't include an extension, the plugin automatically appends .zip, .tar.gz, etc. based on the format.

Hooks — Lifecycle Hooks

onBeforeBuild — Before Build

Called before Vite starts bundling. Great for pre-build cleanup:

hooks: {
  onBeforeBuild: async () => {
    // Pre-build processing
  },
}

onBundleGenerated — After Bundle Generation

Called after Vite generates the bundle but before archiving. You can access the build output:

hooks: {
  onBundleGenerated: (bundle) => {
    console.log('Generated files:', Object.keys(bundle));
  },
}

onAfterBuild — After Archive Creation (Core Feature)

This is the most powerful feature of this plugin. After the archive is created, the plugin automatically calculates MD5 / SHA1 / SHA256 checksums and passes them to onAfterBuild. You can use these checksums to rename the archive.

Return a new path (different from the original) and the plugin will automatically rename the file:

hooks: {
  onAfterBuild: (path, format, checksums) => {
    // path      — Full path of the current archive
    // format    — Archive format ('zip' | 'tar' | 'tar.gz' | '7z')
    // checksums — Checksums object: { md5: string, sha1: string, sha256: string }
    return path; // Return original path = no rename
  },
}

Real-world examples:

// Example 1: Insert SHA1 short hash before extension (most common)
// myapp.zip → myapp-3a7b2c1d.zip
onAfterBuild: (path, format, checksums) =>
  path.replace(/(\.(?:zip|tar\.gz|tar|7z))$/, `-${checksums.sha1.slice(0, 8)}$1`);

// Example 2: Replace entire filename with MD5
// myapp.zip → a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6.zip
onAfterBuild: (path, format, checksums) =>
  path.replace(/^.+(?=\.\w+$)/, checksums.md5);

// Example 3: Append format and hash to original filename
// myapp.zip → myapp-zip-a1b2c3d4.zip
onAfterBuild: (path, format, checksums) =>
  path.replace(/(\.\w+)$/, `-${format}-${checksums.sha256.slice(0, 8)}$1`);

// Example 4: Fully custom filename, using format param for extension
// myapp.zip → release-a1b2c3d4e5f6.zip
onAfterBuild: (path, format, checksums) =>
  `release-${checksums.md5.slice(0, 12)}.${format}`;

// Example 5: No rename, just use checksums for something else (e.g. write to file)
onAfterBuild: async (path, format, checksums) => {
  fs.writeFileSync('checksums.json', JSON.stringify(checksums));
  // Not returning or returning original path = no rename
}

onError — On Error

Callback when packaging fails. Great for integrating alert notifications:

hooks: {
  onError: async (error) => {
    console.error('Packaging failed:', error.message);
    // Integrate with Slack / Teams / email alerts here
  },
}

Why Auto Hash-Renaming Matters for CI/CD

In continuous integration / continuous deployment pipelines, every build artifact needs to be uniquely traceable. If your archive is always named dist.zip, how do you tell this build apart from the last one? Which version do you grab when rolling back?

This plugin uses the onAfterBuild hook to get checksums and automatically insert a hash into the filename:

hooks: {
  onAfterBuild: (path, format, checksums) =>
    path.replace(/(\.zip)$/, `-${checksums.sha1.slice(0, 8)}$1`);
}

Build output:

myapp-1.0.2-3a7b2c1d.zip
myapp-1.0.2-7f9e4b2a.zip

The filename itself is the fingerprint 🔑 — you can distinguish different builds at a glance. Deployment scripts can locate versions directly by filename without maintaining a separate version mapping table. Rollback is simple — just find the previous hash filename and deploy it. Combined with [version] and [timestamp] placeholders, traceability is even stronger.
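
The payoff shows up on the deploy side, where artifact lookup becomes trivial. A quick sketch (findArtifact is a hypothetical helper, not part of the plugin):

import { readdirSync } from 'node:fs';
import { join } from 'node:path';

// Resolve a release archive by the short hash embedded in its filename.
function findArtifact(dir: string, shortHash: string): string {
  const match = readdirSync(dir).find((f) => f.includes(`-${shortHash}.`));
  if (!match) throw new Error(`No artifact matching hash ${shortHash} in ${dir}`);
  return join(dir, match);
}

// findArtifact('./releases', '3a7b2c1d') -> 'releases/myapp-1.0.2-3a7b2c1d.zip'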

Complete Example

Putting it all together, here's a production-ready configuration:

// vite.config.ts
import { defineConfig } from 'vite';
import orchestrator from 'vite-plugin-pack-orchestrator';

export default defineConfig({
  plugins: [
    orchestrator({
      pack: {
        outDir: 'dist',                    // Pack the dist directory
        fileName: 'myapp-[version]',       // Filename with version
        format: 'zip',                     // ZIP format
        archiveOutDir: './releases',       // Output to releases directory
        exclude: ['**/*.map'],             // Exclude sourcemaps
      },
      hooks: {
        // Auto-append SHA1 hash after compression
        onAfterBuild: (path, format, checksums) =>
          path.replace(/(\.(?:zip|tar\.gz|tar|7z))$/, `-${checksums.sha1.slice(0, 8)}$1`),
        // Log on error
        onError: (error) => console.error('Packaging failed:', error.message),
      },
    }),
  ],
  build: { outDir: 'dist' },
});

One vite build does it all — no extra packaging scripts needed.

Why You Abandon Every React Project You Start (And How to Fix It)

2026-05-03 15:16:24

If you look at your GitHub repositories right now, how many unfinished React projects do you have? Three? Ten? Fifty?

You start with a massive surge of motivation. You run npx create-react-app or set up a new Vite project. You spend three days perfectly configuring Tailwind CSS, setting up Redux, and carefully architecting your folder structure.

And then... you get bored. You hit a minor roadblock with authentication, or you realize the scope is too big, and you quietly abandon it.

Here is the brutal truth: You don't have a discipline problem. You have a psychological defense mechanism.

The "Paralyzed Visionary" Trap

In the developer world, we often confuse "architecture" with "execution." When you spend 20 hours configuring a React project before writing a single line of business logic, your brain gets a dopamine hit. You feel productive.

But subconsciously, you are doing this to avoid the actual risk of launching. As long as you are "optimizing the React state manager," your app isn't live. If it isn't live, no one can judge it. No one can tell you your idea is bad.

This is a clinical pattern known as the Paralyzed Visionary. You see the perfect 5-year version of the app in your head, and the massive gap between your current empty repo and that vision terrifies you into inaction.

Stop Blaming the Tech Stack

The biggest lie developers tell themselves is: "If I just switch to Next.js or learn a better state management library, I'll finally finish this project."

The tech stack is not the bottleneck. Your ego is the bottleneck.

How to Break the Cycle

To break this pattern, you need to apply artificial, extreme constraints to your workflow.

  1. The 3-Feature Limit: Your MVP is legally not allowed to have more than three features. Write them down. If a feature isn't one of those three, you are not allowed to build it until after launch.
  2. The "Ugly" Rule: Force yourself to deploy a version of the app that visually embarrasses you. If you aren't embarrassed by your first release, you launched too late.
  3. Use a Clinical Prompt: Your brain needs an external boss to prevent over-engineering. I built a custom AI prompt called PsychoPrompt specifically designed to diagnose your builder flaws and force ChatGPT to act as a strict manager that prevents you from going down the React rabbit hole.

Stop hoarding empty repositories. Fix the psychology, and the code will follow.

(If you are serious about curing your launch paralysis, you can take the free diagnostic test at PsychoPrompt and unlock the full Psycho-Builder's Toolkit).