2026-04-18 10:24:36
This is a submission for the DEV Weekend Challenge: Earth Day Edition.
When I saw the "Build for the Planet" prompt, I knew I wanted to create something that could actually help people take climate action. The problem? Most carbon calculators are either too simplistic (3 questions) or buried in corporate sustainability reports.
I wanted to build a tool that sits between those two extremes: detailed enough to be accurate, simple enough to finish in a few minutes.
That's how EcoTrack Pro was born.
👉 Live Demo: ecotrack-pro.vercel.app
👉 GitHub Repo: github.com/setuju/ecotrack
I deliberately chose a zero-build, vanilla JavaScript stack to keep the project lean and deployable anywhere.
The dev server is just Python's `http.server` (yes, really). Why no React/Vue? Because I wanted this to be accessible to anyone who opens index.html — no node_modules required.
I needed 15+ inputs (car, flights, meat, recycling, streaming, etc.) but didn't want to scare users away.
Solution: Grouped inputs into collapsible sections and provided sensible defaults (based on average global behavior). The emission factors come from peer‑reviewed sources (IPCC, EPA, Poore & Nemecek Science 2018).
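To make the math concrete, here's a minimal sketch in Python (the app itself is vanilla JS, and the factor values below are placeholders, not the app's sourced numbers):

```python
# Placeholder factors for illustration only — the app uses values sourced
# from IPCC / EPA / Poore & Nemecek (2018), not these numbers.
EMISSION_FACTORS = {
    "car_km": 0.192,     # kg CO2e per km driven (placeholder)
    "flight_km": 0.255,  # kg CO2e per passenger-km flown (placeholder)
    "beef_kg": 27.0,     # kg CO2e per kg of beef (placeholder)
}

def annual_footprint_kg(activities: dict) -> float:
    """Sum activity amount x per-unit factor over all known activities."""
    return sum(EMISSION_FACTORS[k] * v
               for k, v in activities.items()
               if k in EMISSION_FACTORS)

print(annual_footprint_kg({"car_km": 5000, "flight_km": 2000, "beef_kg": 20}))
```

The core arithmetic is just a weighted sum like this, which is why sensible defaults matter: every one of the 15+ inputs contributes to the total even when the user never touches it.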
I wanted users to save their history across devices without creating an account.
Solution: Generate a persistent device_id stored in localStorage. If the user accepts cookies, the ID and footprint data are sent to Supabase. Row‑Level Security allows inserts and selects based on that ID. Privacy‑first, zero personal data.
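The pattern is simple enough to sketch. This is Python for illustration only — the app does this in the browser with `localStorage`, and the Supabase write only happens after cookie consent:

```python
# Illustration of the persistent anonymous-ID pattern (not the app's code).
# A file stands in for the browser's localStorage here.
import json
import tempfile
import uuid
from pathlib import Path

def get_device_id(store: Path) -> str:
    """Return a persistent anonymous ID, creating it on first call."""
    if store.exists():
        return json.loads(store.read_text())["device_id"]
    device_id = str(uuid.uuid4())  # random — carries no personal data
    store.write_text(json.dumps({"device_id": device_id}))
    return device_id

store = Path(tempfile.mkdtemp()) / "device.json"
print(get_device_id(store) == get_device_id(store))  # True: ID is stable
```

Because the ID is a random UUID rather than anything derived from the user, the Supabase rows stay pseudonymous while Row-Level Security can still scope inserts and selects to that one device.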
When toggling between pie and bar charts, I kept getting undefined labels or broken animations.
Solution: Properly destroy the previous chart instance before creating a new one, and ensure the canvas context is fresh. (This took way too many console.logs.)
I used GitHub Copilot extensively throughout this project, and it genuinely cut my development time in half. Here's exactly where it shined:
| Task | Copilot's Help |
|---|---|
| Emission factor lookup | Wrote the entire `EMISSION_FACTORS` object with realistic values after I typed `// per km gasoline car` |
| Chart configuration | Generated the complete Chart.js options object — I only tweaked colors |
| Tooltip CSS | Created the `.tooltip` and `.tooltiptext` classes from a single comment |
| Share button logic | Wrote the `shareToX()` and `shareToFacebook()` functions with correct URL encoding |
| Recommendation messages | Suggested realistic, specific tips like "Reducing weekly driving by 20 km saves ~200 kg CO₂/year" |
Without Copilot, I would have spent hours looking up API docs and writing boilerplate.
With Copilot, I focused on architecture and user experience.
🏆 Prize Category: Best Use of GitHub Copilot
*Screenshots: the calculator view and the results panel showing total footprint, comparison metrics, interactive chart, and personalized recommendations.*
A special thanks to the DEV Community team and the judges of the Weekend Challenge for organizing this inspiring event. It's a privilege to build alongside such a talented community.
If you found this useful, please drop a ⭐ on GitHub — it means a lot!
Let's build a greener web, one line of code at a time. 🌍
2026-04-18 10:23:51
After helping a few friends learn Java, I kept hitting the same wall: every "practice Java online" site either hides the editor behind a signup or stops at fill-in-the-blank exercises.
So I built Java Practice Lab — a free, no-signup playground that puts practice problems, a compiler, and tutorials in one place.
This post is half a write-up of what I learned, half a resource dump for anyone learning Java in 2026.
Not just "reverse a string" — you'll find real scenarios.
Think Programiz / OnlineGDB — but lighter.
Hand-written. No AI fluff.
Full index: https://java-practice-lab.vercel.app/blog
All progress is stored in localStorage.
No accounts.
No emails.
Nothing to lose.
Here’s a quick, honest comparison from someone who used all of them:
| Platform | Strength | Weakness |
|---|---|---|
| HackerRank | Huge problem set | Heavy UI, signup required, interview-focused |
| CodeChef | Competitive coding | Not great for learning Java basics |
| W3Schools | Beginner-friendly | Mostly fill-in-the-blank |
| CodingBat | Classic problems | Outdated UI, no compiler |
| Programiz / OnlineGDB | Good compilers | No learning structure |
| Java Practice Lab | Practice + compiler + tutorials in one place, no signup | Smaller (for now 👀) |
The goal isn’t to replace them.
It’s to be the tab you always keep open while learning.
It feels heavy…
…until you try switching away and miss IntelliSense instantly.
No auth means no barriers between you and the editor — just pure usage.
Most people overcomplicate it.
Reality:
Write the tutorial people are already searching for → link your tool inside it
Don’t fight Google. Feed it.
No signup.
No install.
Nothing to download.
Here is the Link: https://java-practice-lab.vercel.app
If you have:
Drop a comment.
I read everything — and ship most reasonable ideas within a week.
Happy coding 🍵
2026-04-18 10:23:26
At L'Électron Rare we build FineFab — a local-first, multi-machine AI-native platform for manufacturing and electronics engineering. This week we open-sourced the full fine-tuning pipeline: training toolkit and output model. Here's what it looks like, and why we built it this way.
The frustration that started it
Every embedded engineer I know has the same story with generalist LLMs.
You ask GPT-4 to review an STM32 peripheral configuration and it confidently suggests a timer channel mapping that doesn't exist on that MCU family. You ask Claude to debug a SPICE .AC simulation and it hallucinates .PRINT syntax. You ask Gemini to fix a KiCad footprint and it describes Eagle shortcuts. These aren't edge cases — they're the modal failure of big generalist models in narrow technical domains.
After six months of living this in our consulting work — embedded systems for cultural and performance industries, escape rooms, live shows, industrial prototypes — we decided to do something about it.
Two public releases, one week
16/04 — KIKI-Mac_tunner (training toolkit)
MLX fine-tuning toolkit for Mac Studio, designed to distill Claude Opus reasoning into Mistral Large 123B. Apache 2.0. Runs on Apple Silicon, takes advantage of unified memory for the adapter stage.
17/04 — micro-kiki-v3 (model)
A cognitive LLM stack specialized in embedded systems engineering. Not a flat fine-tune — a routed architecture built on top of Qwen3.5-35B-A3B (MoE, 256 experts, 3B active per token).
Both Apache 2.0. The full pipeline is open, not just the artifact.
Architecture — why routed stacks instead of one big fine-tune
The design intuition is simple. Fine-tuning one monolithic model on a mixed embedded corpus smears the distinctive patterns of each sub-discipline. Training one LoRA stack per domain and picking the relevant stack(s) at inference preserves those patterns.
Domain router — classifier selects top-4 among 35 domain-specific LoRA stacks per request.
Base model — Qwen3.5-35B-A3B (MoE 256 experts, 3B active/token). LoRA rank 16 on q/k/v/o projections, top-2 routing per stack.
Null-space projection between stacks reduces catastrophic forgetting when combining domains.
Negotiator (CAMP + Catfish) arbitrates conflicting stack outputs — typical case: STM32 power-on sequencing vs. EMC suppression guidance, both technically correct but domain-priority-dependent.
Anti-bias layer (KnowBias + RBD) before output.
Aeon memory (Atlas graph + Trace log) for cross-session persistence.
Context 262K tokens, GGUF, runs on llama.cpp / Ollama / LM Studio.
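To make the routing step concrete, here is a minimal sketch of top-k stack selection. All names and scores are illustrative, not taken from the released code:

```python
# Illustrative sketch of the domain router's selection step — the real
# classifier, its features, and the 35 stacks live in the released pipeline.
DOMAINS = ["stm32", "spice", "kicad-pcb", "emc", "python", "rust"]  # 35 in reality

def route(scores: dict, k: int = 4) -> list:
    """Pick the k highest-scoring domain stacks for this request."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A query about STM32 power-on sequencing with EMC implications:
scores = {"stm32": 0.91, "emc": 0.74, "spice": 0.40,
          "kicad-pcb": 0.22, "python": 0.05, "rust": 0.03}
print(route(scores))  # ['stm32', 'emc', 'spice', 'kicad-pcb']
```

The chosen stacks' LoRA adapters are then applied together, which is exactly the situation the negotiator layer exists for: two stacks can both be "right" while disagreeing on priority.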
35 domains covered
Conversation (chat-fr, reasoning), code (Python, TypeScript, C/C++, Rust, shell, SQL), infrastructure (Docker, DevOps, LLM-ops, ML-training), electronics (KiCad DSL, KiCad PCB, SPICE, components, power, EMC, DSP), hardware (embedded, STM32, IoT, PlatformIO), CAD (FreeCAD), web (frontend, backend), plus music-audio, math, security.
35 is pragmatic, not exhaustive. v4 will likely add RF and MEMS.
Dataset — built honestly
clemsail/micro-kiki-v3-dataset — 489K instruction-following examples, Apache 2.0.
50,116 real Claude CLI sessions captured on our 5-node P2P mesh during actual embedded consulting work (GrosMac Apple M5, Tower 28 threads, CILS i7, KXKM-AI RTX 4090, VM bootstrap).
2,529 Codex/Copilot sessions from 4 workstations.
364,045 examples from 19 filtered open-source HF datasets (CodeFeedback, French-Alpaca, Electronics StackExchange, stm32-hal-dataset, JITX open-components-database, etc.).
Opus teacher distillation for chat-fr and reasoning.
32 original curated seed sets.
Two points of honesty about this:
The Claude CLI logs come from our own work, not clients. Everything went through a filter pass before inclusion.
This is not a Meta-scale dataset. The strength is authenticity — examples map to how engineers actually use assistants in real debugging sessions. The weakness is coverage variance: some domains are thinner than others (DSP, RF, EMC).
Infrastructure — 5-node P2P mesh
The 50K+ Claude CLI examples were captured across five heterogeneous machines:
| Node | Hardware | Role |
|---|---|---|
| GrosMac | Apple M5, 16 GB | Dev + P2P bridge, LAN + Tailscale |
| VM | 6.8 GB RAM, 4 CPU | Docker host (29+ containers), P2P bootstrap |
| Tower | 31 GB RAM, 28 threads | Langfuse, LiteLLM, Piper TTS, OpenAI proxy |
| CILS | 16 GB RAM, i7 | Ollama inference, most stable node |
| KXKM-AI | 62 GB RAM, RTX 4090 | GPU inference, Unsloth, Qdrant, fine-tuning |
Ed25519 auth, DHT discovery. The mesh itself is part of the product, not just a side-effect.
What I'd do differently
Routing is manual right now. You pick which LoRA adapter(s) to load based on your task. Dynamic routing (learned classifier or attention-based expert selection) is on the v4 roadmap.
Benchmark suite is internal. I have a held-out eval set and internal scores, but nothing reproducible-in-public. v4 will ship a benchmark suite you can run against the base Qwen3.5 for a reproducible comparison.
Languages: trained on French + English interleaved. Most of our customer base is francophone. If you need English-only quality, YMMV.
The meta-story
L'Électron Rare is building FineFab publicly, component by component. Related repos in the ecosystem:
Kill_LIFE — spec-first agentic methodology (BMAD agents, gates, evidence packs)
mascarade — multi-machine agentic LLM orchestration (P2P mesh, 8 providers)
KiC-AI — AI-powered PCB design assistant for KiCad
prima-cpp — distributed LLM inference, CUDA + ZMQ
Full org: github.com/L-electron-Rare.
What I want from you
Benchmarks against base Qwen3.5 / GPT-4 / Claude on embedded-specific tasks. Community runs matter more than my internal eval.
Edge cases where the router picks the wrong stack — feedback directly improves v4.
Memory/inference regressions on your hardware. Q4_K_M works cleanly on Apple Silicon 32 GB+ and RTX 4090; other configs untested.
Domains we missed. We'll add in v4.
Everything is Apache 2.0. Fork it, benchmark it, break it. That's the point.
Discussion thread open on HF: micro-kiki-v3/discussions/1.
"I would rather be a cyborg than a goddess." — Donna Haraway
2026-04-18 10:21:47
1. List Interface:
The List interface is part of java.util and extends the Collection interface. It represents an ordered sequence of elements — you can access any element by its position (index).
```java
import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<String> fruits = new ArrayList<>();
        fruits.add("Apple");
        fruits.add("Banana");
        fruits.add("Mango");
        System.out.println(fruits.get(0)); // Output: Apple
    }
}
```
The 2 main implementations:

ArrayList:
Backed by a resizable array — elements sit next to each other in memory, so index access with `get(i)` is constant time.
- When you use ArrayList: mostly reading by index and adding to the end.

LinkedList:
Each element is a node holding the value plus a pointer to the next. Nodes are scattered in memory, so the list must walk the chain to find an element. Inserting means just rewiring two pointers — no shifting needed. ArrayList, on the other hand, has to push every element after the insertion point one slot over.
- When you use LinkedList: frequent inserts and removals, especially at the ends or in the middle.
Common List methods:

1. add() — insert elements:

```java
fruits.add("Grapes");    // Adds to end
fruits.add(1, "Cherry"); // Adds at index 1
```

2. get() & size() — access & count:

```java
String first = fruits.get(0); // "Apple"
int total = fruits.size();    // Total items
```

3. remove() — delete elements:

```java
fruits.remove("Banana"); // Remove by value
fruits.remove(0);        // Remove by index
```
2026-04-18 10:05:12
Every time a client wanted dimensional letters — for a sign, an installation, a storefront — I hit the same wall. Either I had to hire a 3D modeler, or spend hours in Blender manually extruding paths, fixing normals, and setting up tolerances for the acrylic slot.

Neither option was good. Hiring someone adds cost and back-and-forth. Blender works, but it's slow and overkill for something that's essentially a parametric operation on a 2D shape.
So I built FacLet3D.
You upload any SVG file — a typeface, a logo, any vector shape — and the app generates printable 3D geometry for the letters: the base and shell as separate STLs, plus a DXF cut file for the acrylic faces.

You control the parametric settings — extrusion depth, slot tolerances, and kerf offset — right in the app.

No CAD knowledge needed. No Blender. No modeler.
The core challenge was parsing arbitrary SVG paths — including compound paths, holes, and nested shapes — and turning them into watertight solids with consistent normals. SVG paths can be messy: overlapping subpaths, mixed winding orders, self-intersections. Getting clean geometry out of arbitrary user input required a lot of edge case handling.
The base and shell are generated as separate STL files so they can be printed independently or assembled. The acrylic DXF is offset inward by a configurable amount to account for laser kerf.
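The kerf logic itself is simple: the beam removes roughly the kerf width of material centered on the cut path, so each edge of the cut shape moves in by about half the kerf. A toy sketch with made-up numbers (the actual offset is user-configurable in the app):

```python
# Toy illustration with made-up numbers — the real offset is a user setting.
# A laser removes ~kerf width centered on the tool path, so offsetting each
# edge inward by kerf/2 compensates for the material burned away.
def offset_rect(width, height, offset):
    """Shrink a rectangle by `offset` on every edge (2*offset per axis)."""
    return (width - 2 * offset, height - 2 * offset)

kerf = 0.15  # mm — a typical CO2 laser kerf, purely illustrative
print(offset_rect(100.0, 40.0, kerf / 2))
```

Real outlines are arbitrary polygons rather than rectangles, so the production version needs a proper polygon offset, but the per-edge arithmetic is the same idea.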
Free tier available at: https://faclet3d.factorgrafico.com
Built solo. Feedback very welcome — especially from anyone who works
with signage or fabrication.