2026-02-14 20:36:44
I built idea-2-repo, a CLI that takes plain-English product ideas and turns them into a structured starter repository — complete with docs, TODOs, architecture outlines, and source scaffolding — by leveraging GitHub Copilot CLI’s natural-language coding agent directly in the terminal.
What I built
idea2repo aims to remove blank-project friction by generating a ready-to-use project skeleton from one simple idea prompt.
Core capabilities
✔ Idea normalization + project classification
✔ Architecture suggestion with Copilot CLI
✔ Scaffold generation (docs, TODOs, source files)
✔ Offline fallback (REASONING_BACKEND=offline)
✔ Dry-run mode for safe previews
1) Environment & Repo Context
I ensured tool versions and repository context were set up before generation.
2) Validation Pipeline
I verified the core validation pipeline end-to-end:
Lint & Build
Unit / Integration Tests
E2E Tests
3) Copilot CLI Usage
I confirmed Copilot CLI availability and used it to shape architecture ideas.
Copilot Prompt Output
Prompt used:
copilot -p "Design a TypeScript CLI scaffold architecture in 5 concise bullets."
4) Real Project Generation
I ran a real project generation (non-dry run):
node dist/bin/idea2repo.js generate "AI-powered expense tracker for freelancers"
Generated artifacts verified:
✔ Project folder exists
✔ Docs and scaffold exist
✔ TODOs and architecture files present

5) Offline Fallback Demonstration
Verified offline mode behavior with:
REASONING_BACKEND=offline node dist/bin/idea2repo.js generate "Simple todo api for students" --dry-run
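For context, the offline mode is conceptually just an environment-driven backend switch. Here is a minimal TypeScript sketch of that pattern (the names and canned output are hypothetical, not the actual idea2repo internals):

// Hypothetical sketch of env-driven backend selection, not the real idea2repo code.
type ReasoningBackend = {
  suggestArchitecture(idea: string): Promise<string[]>;
};

const offlineBackend: ReasoningBackend = {
  // Deterministic, canned output so runs are reproducible without network access.
  async suggestArchitecture(idea) {
    return [`CLI entry point for: ${idea}`, "core generator module", "docs + TODO templates"];
  },
};

const copilotBackend: ReasoningBackend = {
  async suggestArchitecture(idea) {
    // In the real tool this would call out to Copilot CLI; elided in this sketch.
    throw new Error(`no network backend available for: ${idea}`);
  },
};

export function pickBackend(): ReasoningBackend {
  // REASONING_BACKEND=offline forces the deterministic path.
  return process.env.REASONING_BACKEND === "offline" ? offlineBackend : copilotBackend;
}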
6) CLI UX / Help Output
CLI usability demonstration:
node dist/bin/idea2repo.js --help
What I Learned
Building idea2repo taught me a few practical lessons:
Copilot CLI shines at rapid architectural ideation when prompts are clear and scoped.
Offline fallback matters for reproducibility and deterministic runs.
Dry-run plus real-run improves confidence during demos and validation.
Repo
GitHub: https://github.com/GeoAziz/idea-2-repo
2026-02-14 20:36:03
Large Language Models (LLMs) have become an inseparable part of the modern computing landscape. Software development is no exception. We’ve moved past the stage where AI only generates simple boilerplate; today, LLMs are capable of implementing complex logic and architecting entire applications.
Naturally, this raises a pressing question for those of us in the industry: Is there a future for software developers, or are we being phased out?
Some might argue that "no-code" movements have always tried to replace developers, only to fail because a professional is always eventually needed. However, the current shift feels different. This is the most significant paradigm shift I’ve witnessed since I started earning a living through code.
Ignoring LLMs in favor of "pure" manual coding will soon lead to a dead end. But there is a flip side: if you simply rely on LLMs to write code while acting as a mere "code checker," you become easily replaceable.
To stay relevant, developers must choose distinct paths. The middle ground is rapidly disappearing.
The Architect of Experience: Use AI as a force multiplier to solve human problems. Your value lies in how quickly you can integrate APIs and LLMs to build trendy, user-centric services.
The Architect of Systems (The Fundamentalist): Focus on the "under-the-hood" mechanics. As LLMs flood the world with abstraction, we need engineers who understand why a system fails under high concurrency or why a memory management strategy is suboptimal.
I recently came across a blog post that perfectly encapsulates these sentiments. To be honest, that post is what motivated this article:
https://notes.eatonphil.com/2026-01-19-llms-and-your-career.html
The greatest trap of the LLM era is complacency. We must not settle for what the LLM hands us. Think of it this way: the time you've saved by not having to manually search through documentation should be reinvested into verifying and understanding the generated logic. When you combine your hard-earned experience with the efficiency of an LLM, your competitiveness doesn't just increase—it multiplies. You aren't just a consumer of AI; you are its auditor and architect.
The "Black Box" problem is a silent threat. In the data world, we stand on the shoulders of giants like Kubernetes, Spark, Flink, and Airflow. Yet, very few engineers understand these tools beyond their documentation.
We must remember a fundamental truth: No matter how sophisticated the AI or how complex the technology, everything eventually runs on CPU, Memory, and Network. When a critical issue occurs in production, the root cause is almost always found within these three pillars. Engineers who can bridge the gap between high-level AI abstractions and these fundamental hardware constraints will never be out of demand.
This is exactly why I’ve been focusing on "build-your-own-x" projects. The goal is to bridge the gap between "using" a tool and "understanding" its core principles.
Interestingly, you don't need to go back to school to do this. Your LLM is the ultimate mentor. A modern LLM can drastically reduce the time it takes to grasp complex internal architectures. It shouldn't just write your code; it should explain the "why" behind the logic, acting as a teacher that uncovers what you didn't even know you were missing.
While headlines talk about developer layoffs, there is still an immense demand for engineers who possess deep technical intuition and an insatiable curiosity.
The AI era doesn't mean we study less; it means we must study deeper. Even if LLMs handle the bulk of the coding, services built for humans will always require people who understand the soul of the machine. Don't let the convenience of AI stifle your technical curiosity—use that convenience to fuel it.
2026-02-14 20:35:31
If you want a private Docker registry but don't want to depend on a cloud registry or object storage, Refity might be what you need. It's a small, self-hosted registry that uses SFTP as the only backend for your images. Everything—blobs, manifests, metadata—lives on your SFTP server. No S3, no GCS, no vendor lock-in.
I built Refity for myself: I already have SFTP storage (rented from a Hetzner Storage Box) and wanted to reuse it for container images without adding another storage system.
Refity doesn't replace your SFTP; it uses it as the only storage backend.
Standard docker push and docker pull work with your existing tools and CI; clients just run normal docker pull commands. So: a classic "registry in front of your own storage" with a simple UI and no new storage stack.
git clone https://github.com/troke12/refity.git
cd refity
cp .env.example .env
Edit .env with your SFTP details and (for production) a JWT_SECRET:
FTP_HOST=your-sftp.example.com
FTP_PORT=22
FTP_USERNAME=sftpuser
FTP_PASSWORD=sftppass
JWT_SECRET=your-long-random-secret # required in production
docker-compose up -d
Log in to the web UI with the default credentials (admin / admin — change it right away). Push an image:
docker tag nginx localhost:5000/mygroup/nginx:latest
docker push localhost:5000/mygroup/nginx:latest
Pull:
docker pull localhost:5000/mygroup/nginx:latest
Groups and repositories are created from the UI (or you can align with how you organize folders on SFTP). No auto-creation of top-level paths; you keep control over structure and access.
Prebuilt images are published as troke12/refity-backend and troke12/refity-frontend on Docker Hub (tags: latest, v1.0.0). If you try it and have feedback or ideas, open an issue or discussion on GitHub.
2026-02-14 20:25:13
Most of us have written something like this at some point:
const data = JSON.parse(hugeString);
It works.
Until it doesn't.
At some point the file grows. 50 MB. 200 MB. 1 GB. 5 GB.
And suddenly:
This isn't a JavaScript problem.
It's a buffering problem.
Most parsing libraries operate in buffer mode:
That means memory usage scales with file size.
Streaming flips the model:
That architectural difference matters far more than micro-optimizations.
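To make that contrast concrete, here is a rough Node.js/TypeScript sketch (illustrative only, not taken from any particular library), using newline-delimited JSON as the example format:

import { createReadStream } from "node:fs";
import { readFile } from "node:fs/promises";
import { createInterface } from "node:readline";

// Buffer mode: the whole file, then the whole parsed structure, both in memory at once.
async function parseBuffered(path: string): Promise<unknown[]> {
  const text = await readFile(path, "utf8"); // memory ~ file size
  return text.split("\n").filter(Boolean).map((line) => JSON.parse(line)); // plus every parsed object
}

// Streaming mode: memory stays roughly flat regardless of file size.
async function parseStreaming(path: string, onRecord: (record: unknown) => void): Promise<void> {
  const lines = createInterface({ input: createReadStream(path) });
  for await (const line of lines) {
    if (line) onRecord(JSON.parse(line)); // one record at a time; the caller decides what to keep
  }
}

The buffered version's peak memory grows with the file plus the parsed structure; the streaming version is bounded by the chunk size plus whatever the callback chooses to retain. That is the same architectural property the benchmarks below illustrate.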
I've been working on a project called convert-buddy-js, a Rust-based streaming conversion engine compiled to WebAssembly and exposed as a JavaScript library.
It supports:
The core goal was simple:
Keep memory usage flat, even as file size grows.
Not "be the fastest library ever." Just predictable. Stable. Bounded.
Here's an example from benchmarks converting XML → JSON.
| Scenario | Tool | File Size | Memory Usage |
|---|---|---|---|
| xml-large | convert-buddy | 38.41 MB | ~0 MB change |
| xml-large | fast-xml-parser | 38.41 MB | 377 MB |
The difference is architectural.
The streaming engine processes elements incrementally instead of constructing large intermediate structures.
I benchmarked against:
Here's a representative neutral case (1.26 MB CSV):
| Tool | Throughput |
|---|---|
| convert-buddy | 75.96 MB/s |
| csv-parse | 22.13 MB/s |
| PapaParse | 19.57 MB/s |
| fast-csv | 15.65 MB/s |
In favorable large cases (13.52 MB CSV):
| Tool | Throughput |
|---|---|
| convert-buddy | 91.88 MB/s |
| csv-parse | 30.68 MB/s |
| PapaParse | 24.69 MB/s |
| fast-csv | 19.68 MB/s |
In most CSV scenarios tested, the streaming approach resulted in roughly 3x to 4x throughput improvements, with dramatically lower memory overhead.
For tiny NDJSON files, native JSON parsing can be faster.
| Scenario | Tool | Throughput |
|---|---|---|
| NDJSON tiny | Native JSON | 27.10 MB/s |
| NDJSON tiny | convert-buddy | 10.81 MB/s |
That's expected.
When files are extremely small, the overhead of streaming infrastructure can outweigh the benefits. Native JSON.parse is heavily optimized in engines and extremely efficient for small payloads.
The goal here isn't to replace native JSON for everything.
It's to handle realistic and large workloads predictably.
For medium nested NDJSON datasets:
| Tool | Throughput |
|---|---|
| convert-buddy | 221.79 MB/s |
| Native JSON | 136.84 MB/s |
That's where streaming and incremental transformation shine, especially when the workload involves structured transformation rather than just parsing.
Install:
npm install convert-buddy-js
Then:
import { ConvertBuddy } from "convert-buddy-js";

const csv = 'name,age,city\nAlice,30,NYC\nBob,25,LA\nCarol,35,SF';

// Configure only what you need. Here we output NDJSON.
const buddy = new ConvertBuddy({ outputFormat: 'ndjson' });

// Stream conversion: records are emitted in batches.
const controller = buddy.stream(csv, {
  recordBatchSize: 2,
  // onRecords can be async: await inside it if you need to (I/O, UI updates, writes...)
  onRecords: async (ctrl, records, stats, total) => {
    console.log('Batch received:', records);
    // Simulate slow async work (writing, rendering, uploading, etc.)
    await new Promise(r => setTimeout(r, 50));
    // Report progress (ctrl.* is the most reliable live state)
    console.log(
      `Progress: ${ctrl.recordCount} records, ${stats.throughputMbPerSec.toFixed(2)} MB/s`
    );
  },
  onDone: (final) => console.log('Done:', final),
  // Enable profiling stats (throughput, latency, memory estimates, etc.)
  profile: true
});

// Optional: await final stats / completion
const final = await controller.done;
console.log('Final stats:', final);
It works in:
Because the core engine is written in Rust and compiled to WebAssembly.
Not because it's trendy.
Because:
WebAssembly allows that engine to run safely in the browser without
server uploads.
You probably don't need it if:
It makes sense if:
There are many good parsing libraries in the JavaScript ecosystem.
PapaParse is mature. csv-parse is robust. Native JSON.parse is extremely optimized.
convert-buddy-js is simply an option focused on:
If that matches your constraints, it may be useful.
If not, the ecosystem already has excellent tools.
If you're curious, the full benchmarks and scenarios are available in the repository.
convert-buddy-js — npm
brunohanss/convert-buddy
And if you have workloads where streaming would make a difference, I’d be interested in feedback.
You can get more information or try the interactive browser playground here: https://convert-buddy.app/
2026-02-14 20:19:08
Most performance monitoring tools test your site from one location, or run tests sequentially across regions. That means testing from 18 locations can take 20+ minutes.
We needed something faster. Ahoj Metrics tests from 18 global regions simultaneously in about 2 minutes. Here's how.
The core idea is simple: don't keep workers running. Spawn them on demand, run the test, destroy them.
We use Fly.io's Machines API to create ephemeral containers in specific regions. Each container runs a single Lighthouse audit, sends the results back via webhook, and destroys itself.
Here's how a request flows through the system:
The key design decision: one audit = one ReportRequest, regardless of how many regions you test. Test from 1 region or 18 - it's the same user action.
Here's the actual code that creates a machine in a specific region:
class FlyMachinesService
  API_BASE_URL = "https://api.machines.dev/v1"

  def self.create_machine(region:, env:, app_name:)
    url = "#{API_BASE_URL}/apps/#{app_name}/machines"

    body = {
      region: region,
      config: {
        image: ENV.fetch("WORKER_IMAGE", "registry.fly.io/am-worker:latest"),
        size: "performance-8x",
        auto_destroy: true,
        restart: { policy: "no" },
        stop_config: {
          timeout: "30s",
          signal: "SIGTERM"
        },
        env: env,
        services: []
      }
    }

    # NOTE: `headers` (API auth) is defined elsewhere in the class.
    response = HTTParty.post(
      url,
      headers: headers,
      body: body.to_json,
      timeout: 30
    )

    if response.success?
      Response.new(success: true, data: response.parsed_response)
    else
      Response.new(
        success: false,
        error: "API error: #{response.code} - #{response.body}"
      )
    end
  end
end
A few things worth noting:
auto_destroy: true is the magic. The machine cleans itself up after the process exits. No lingering containers, no zombie workers, no cleanup cron jobs.
performance-8x gives us 4 vCPU and 8GB RAM. Lighthouse is resource-hungry - it runs a full Chrome instance. Underpowered machines produce inconsistent scores because Chrome competes for CPU time. We tried smaller sizes and the variance was too high.
restart: { policy: "no" } means if Lighthouse crashes, the machine just dies. We handle the failure on the Rails side by checking for timed-out reports.
services: [] means no public ports. The worker doesn't need to accept incoming traffic. It runs Lighthouse and POSTs results back to our API. That's it.
Each Fly.io machine runs a Docker container that does roughly this:
The callback is a simple webhook. The worker doesn't need to know anything about our database, user accounts, or billing. It just runs a test and reports back.
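As a rough illustration of that worker (not the actual Ahoj Metrics container), a Node.js/TypeScript version using the public Lighthouse and chrome-launcher APIs could look like this; the TARGET_URL and CALLBACK_URL environment variables are placeholders for whatever contract the real worker uses:

import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function main() {
  // Placeholder env contract: the post doesn't show the real variable names.
  const targetUrl = process.env.TARGET_URL!;
  const callbackUrl = process.env.CALLBACK_URL!;

  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  try {
    // Run a single Lighthouse audit against the target URL.
    const result = await lighthouse(targetUrl, {
      port: chrome.port,
      output: "json",
      onlyCategories: ["performance"],
    });

    // POST the result back to the API; the machine exits (and auto-destroys) afterwards.
    await fetch(callbackUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        score: result?.lhr.categories.performance.score,
        lhr: result?.lhr,
      }),
    });
  } finally {
    await chrome.kill();
  }
}

main().catch((err) => {
  // A non-zero exit simply kills the machine; the Rails side times the report out.
  console.error(err);
  process.exit(1);
});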
On the Rails side, each Report record tracks its own status, and the parent ReportRequest checks whether the whole request is complete:
class ReportRequest < ApplicationRecord
  has_many :reports

  def check_completion!
    return unless reports.all?(&:completed?)

    update!(status: "completed")
    update_cached_stats!
    check_monitor_alert if site_monitor.present?
  end
end
When a worker POSTs results, the corresponding Report is updated. After each update, we check if all reports for the request are done. If so, we aggregate the results, calculate averages, and update the dashboard.
Each report is independent. If the Sydney worker fails but the other 17 succeed, you still get 17 results. The failed region shows as an error without blocking everything else.
This is the part that makes ephemeral workers compelling. Compare two approaches:
Persistent workers (18 regions, always-on):
Ephemeral workers (our approach):
At low volume, ephemeral is dramatically cheaper. The crossover point where persistent workers become more cost-effective is well beyond our current scale.
The tradeoff is cold start time. Each machine takes a few seconds to boot. For our use case (users expect a 1-2 minute wait anyway), that's invisible.
We use Solid Queue (Rails 8's built-in job backend) for everything. No Redis, no Sidekiq.
# config/recurring.yml
production:
  monitor_scheduler:
    class: MonitorSchedulerJob
    queue: default
    schedule: every minute
The MonitorSchedulerJob runs every minute, checks which monitors are due for testing, and kicks off the Fly.io machine spawning. Monitor runs are background operations - they don't count toward the user's audit quota.
This keeps the architecture simple. One PostgreSQL database handles the queue (via Solid Queue), the application data, and the cache. No Redis to manage, no separate queue infrastructure to monitor.
Lighthouse needs consistent resources. When we first used shared-cpu machines, scores would vary by 15-20 points between runs of the same URL. Bumping to performance-8x brought variance down to 2-3 points. The extra cost per audit is worth the consistency.
Timeouts need multiple layers. We set timeouts at the HTTP level (30s for API calls), the machine level (stop_config timeout), and the application level (mark reports as failed after 5 minutes). Belt and suspenders.
Region availability isn't guaranteed. Sometimes a Fly.io region is temporarily unavailable. We handle this gracefully - the report for that region shows an error, but the rest of the audit completes normally.
Webhook delivery can fail. If our API is temporarily unreachable when the worker finishes, we lose the result. We're adding a retry mechanism and considering having workers write results to object storage as a fallback.
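As a sketch of the worker-side retry direction mentioned above (hypothetical, not shipped code), a bounded exponential backoff around the callback POST covers most transient failures:

// Hypothetical retry wrapper for the result callback.
async function postWithRetry(url: string, body: unknown, attempts = 5): Promise<void> {
  for (let i = 1; i <= attempts; i++) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      if (res.ok) return; // delivered
      // Non-2xx response: treat as transient and retry below.
    } catch {
      // Network error: also retry below.
    }
    if (i === attempts) break;
    // Exponential backoff between attempts: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, 2 ** (i - 1) * 1000));
  }
  throw new Error(`callback not delivered after ${attempts} attempts`);
}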
After running this in production since January 2026:
You can test this yourself at ahojmetrics.com. Free tier gives you 20 audits/month - enough to see how your site performs from Sydney, Tokyo, Sao Paulo, London, and more.
If you have questions about the architecture, ask in the comments. Happy to go deeper on any part of this.
Built with Rails 8.1, Solid Queue, Fly.io Machines API, and PostgreSQL. Frontend is React + TypeScript on Cloudflare Pages.
2026-02-14 20:18:35
Creating a slides narration video used to be a multi-hour workflow: outline → design slides → write a script → record voiceover → sync timing → render.
With 2Slides + Remotion, you can now do it in one prompt and get a clean MP4 in ~10 minutes—good enough for product explainers, training, and marketing content.
This guide is a practical checklist for shipping higher-quality narration videos consistently.
A fully automated pipeline:
1) Generate slides (pages/PDF) with 2Slides
2) Generate narration audio with 2Slides
3) Download assets (pages + audio zip)
4) Create a Remotion project and sequence pages + audio
5) Render MP4
A narration video is audio-led. If the narration is clear and well-paced, viewers forgive simple slides. If narration is messy, no amount of design saves it.
Best practices
A narration video should feel like “cuts” in video editing.
Even with great TTS, you still want predictable audio levels.
If the workflow can produce captions, do it.
Export an .srt file alongside the video.
You can run this workflow from an agent environment (Claude Code / OpenClaw) by installing two skill packs:
Then use a single prompt like this (replace the topic):
Please help me create a slides narration video for the topic: [YOUR_TOPIC_CONTENT]
- Create slides using 2Slides create pdf slides API; decide the slide pages count by AI.
- Generate voice narration using 2Slides API.
- Download slides pages and voices zip using 2Slides API.
- Create a Remotion project and a slides narration video using pages and voices in sequence.
- Render and output the video.
Real case demo:
https://www.youtube.com/watch?v=_KswiI-Tgdc
Goal: create slides that are readable at video resolution.
Pro tip: ask the agent to choose slide count based on content length.
Goal: match narration structure to slide boundaries.
Goal: make rendering deterministic.
Name assets sequentially: 001.jpg, 002.jpg… and 001.mp3, 002.mp3…
Goal: correct sync, zero glitches.
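To illustrate the sequencing step, here is a minimal Remotion sketch; it assumes per-slide durations in frames have already been derived from the audio files, and that the assets follow the numbering convention above (illustrative only, not the exact project the agent generates):

import React from "react";
import { AbsoluteFill, Audio, Img, Sequence, staticFile } from "remotion";

// Assumed inputs: one image + one narration clip per slide (in the Remotion public/ folder),
// with durations precomputed in frames from each audio clip's length.
const slides = [
  { image: "001.jpg", audio: "001.mp3", durationInFrames: 240 },
  { image: "002.jpg", audio: "002.mp3", durationInFrames: 300 },
];

export const NarrationVideo: React.FC = () => {
  let from = 0;
  return (
    <AbsoluteFill style={{ backgroundColor: "white" }}>
      {slides.map((slide) => {
        const start = from;
        from += slide.durationInFrames;
        return (
          <Sequence key={slide.image} from={start} durationInFrames={slide.durationInFrames}>
            {/* Full-frame slide image plus its matching narration audio */}
            <Img src={staticFile(slide.image)} style={{ width: "100%", height: "100%" }} />
            <Audio src={staticFile(slide.audio)} />
          </Sequence>
        );
      })}
    </AbsoluteFill>
  );
};

The composition's total duration is the sum of the per-slide durations, and the final step is a standard npx remotion render call to produce the MP4.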
Goal: export a share-ready MP4.
Try it with a real topic you care about (a product feature, onboarding flow, or weekly update). If you can turn one prompt into a consistent video pipeline, you can ship content faster than your competitors.