Blog of The Practical Developer

8-Bit Music Theory: How They Made The Great Sea Feel C U R S E D

2026-02-24 06:07:12

Ever wonder how the Wind Waker's "Great Sea" theme went from inviting adventure to genuinely unsettling? This breakdown reveals the musical magic that makes the "Cursed" version so darn creepy!

It's all thanks to clever tricks like spooky tritones, clashing harmonies, and even weaving in familiar ominous melodies, such as Ganondorf's theme, to totally transform that epic sailing tune into something straight out of a nightmare.

Watch on YouTube

Your Own AI Assistant: Welcome to the New World of Work

2026-02-24 06:00:19

Remember when having a personal assistant was something only executives and celebrities could afford? Those days are fading fast. AI is quietly reshaping how ordinary people work, learn, and manage their daily lives — and the shift is happening faster than most of us expected.

The Democratization of Help

For decades, getting real, personalized assistance meant either paying a premium or knowing the right people. Need a research summary? Hire someone. Need help drafting a proposal? Find a consultant. Need someone to think through a problem with you at 11pm? Good luck.

AI assistants are changing that equation entirely. Today, a freelancer in a small town has access to the same quality of thoughtful, on-demand support that a Fortune 500 company commands. A first-generation college student can get the same writing guidance as someone who grew up surrounded by professionals. That's a genuinely exciting development.

What AI Assistants Are Actually Good At

It's worth being honest here — AI isn't magic, and it isn't replacing human connection or expertise anytime soon. But it is remarkably useful for:

  • Brainstorming and getting unstuck when a project stalls
  • Drafting and editing emails, reports, or creative work
  • Explaining complex topics in plain language
  • Answering questions quickly without a 30-minute Google rabbit hole
  • Thinking through decisions by laying out pros and cons

The key is learning to treat AI as a capable collaborator rather than a search engine or a magic answer machine. Ask better questions, push back on responses, and combine its output with your own judgment. That partnership is where the real value lives.

The Workplace Is Already Changing

Surveys consistently show that workers who embrace AI tools are completing tasks faster and reporting higher satisfaction with their output. Companies that once resisted are now actively encouraging adoption. This isn't about replacing people — it's about removing the friction that bogs down good work.

Small business owners are using AI to handle first drafts of marketing copy. Teachers are using it to build lesson plans. Caregivers are using it to research medical questions before doctor appointments. The use cases are wonderfully mundane, and that's exactly the point. Powerful tools become truly valuable when they blend into everyday life.

Finding the Right Tool for You

The AI assistant space is growing quickly, and options range from massive general-purpose platforms to focused, friendly tools built for specific needs. One worth exploring is LOUIE, available at simplylouie.com — a conversational AI assistant designed to be genuinely helpful without the overwhelm. What makes it particularly easy to feel good about using: 50% of profits go directly to animal rescue organizations, so every conversation you have supports animals finding their forever homes.

The Bottom Line

The future of work isn't about humans versus AI — it's about humans with better tools. The productivity gap between those who learn to use these assistants well and those who don't is only going to grow. The good news? Getting started is easier than ever, and the learning curve is gentler than you might think.

Give it a try. Your future self — and maybe a shelter animal or two — will thank you.

Introducing EnvGuard: Catch .env Mistakes Before They Break Your App

2026-02-24 05:50:53

EnvGuard is an open-source .env validator that catches missing keys, type mismatches, stale variables, and potential secret leaks before they break your app or CI pipeline.

If you work with .env files, this is the guardrail that prevents avoidable config bugs.

TL;DR

  • Validate .env against .env.example
  • Validate values with .env.schema types
  • Detect likely hardcoded secrets
  • Find likely unused env variables
  • Run in watch mode for instant feedback while coding
  • Enforce stricter checks in CI

Why Teams Need a .env Validator

Most configuration failures are not hard problems. They are visibility problems.

You pull a branch and the app fails because one variable is missing.
You fix that and hit a runtime bug because a boolean is "yes" instead of true.
You deploy and discover stale env keys nobody remembers adding.

These issues are easy to fix once identified, but expensive when discovered late.

EnvGuard shifts that feedback earlier.

What EnvGuard Checks

Missing required keys

Compares .env against .env.example and reports missing keys.

Extra/stale keys

Warns on env keys that exist in .env but not in .env.example.

Type validation

With .env.schema, validates types like:

  • string
  • int
  • float
  • bool
  • url
  • email
  • json
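
Each of these schema types can be reduced to a predicate over the raw string value. A minimal TypeScript sketch of the idea, illustrative only and not EnvGuard's actual implementation:

```typescript
// Conceptual sketch of type validation: each schema type maps to a
// predicate run against the raw string value from the .env file.
const validators: Record<string, (v: string) => boolean> = {
  string: () => true,
  int: (v) => /^-?\d+$/.test(v),
  float: (v) => v.trim() !== '' && !Number.isNaN(Number(v)),
  bool: (v) => ['true', 'false'].includes(v.toLowerCase()),
  url: (v) => { try { new URL(v); return true } catch { return false } },
  email: (v) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v),
  json: (v) => { try { JSON.parse(v); return true } catch { return false } },
}

function checkType(value: string, type: string): boolean {
  const validate = validators[type]
  return validate ? validate(value) : false
}
```

Note that under this scheme a boolean of "yes" fails validation, which is exactly the class of runtime bug described earlier.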

Secret detection

Flags suspicious high-entropy values and known token patterns.
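
One common heuristic behind this kind of check is Shannon entropy: real tokens tend to be long and high-entropy, while ordinary config values are not. A rough TypeScript sketch of the idea (the thresholds and token patterns here are illustrative assumptions, not EnvGuard's actual rules):

```typescript
// Illustrative entropy-based secret heuristic, not EnvGuard's code.
function shannonEntropy(value: string): number {
  const counts = new Map<string, number>()
  for (const ch of value) counts.set(ch, (counts.get(ch) ?? 0) + 1)
  let entropy = 0
  for (const n of counts.values()) {
    const p = n / value.length
    entropy -= p * Math.log2(p)
  }
  return entropy // bits per character
}

function looksLikeSecret(value: string): boolean {
  // Known token prefixes (e.g. Stripe, GitHub, AWS) are a strong signal
  const knownPatterns = [/^sk_live_/, /^ghp_/, /^AKIA/]
  if (knownPatterns.some((re) => re.test(value))) return true
  // Otherwise: long, high-entropy values are suspicious
  return value.length >= 20 && shannonEntropy(value) > 3.5
}
```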

Unused variable detection

Scans for env keys that appear unused in the codebase.

.env.example vs .env.schema

Use both, but for different contracts:

  • .env.example defines which keys should exist (key contract)
  • .env.schema defines what each key should look like (type contract)

If you only pick one, start with .env.example.
Best coverage comes from using both.
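
The key contract is conceptually a set difference between the keys in .env and the keys declared in .env.example. A minimal TypeScript sketch of the idea (illustrative, not EnvGuard's implementation):

```typescript
// Parse the variable names out of a dotenv-style file's contents
function parseEnvKeys(src: string): Set<string> {
  const keys = new Set<string>()
  for (const line of src.split('\n')) {
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=/)
    if (m) keys.add(m[1])
  }
  return keys
}

// Compare .env against .env.example: report required keys that are
// absent, and keys present locally but never declared in the example.
function diffKeys(env: string, example: string) {
  const actual = parseEnvKeys(env)
  const expected = parseEnvKeys(example)
  return {
    missing: [...expected].filter((k) => !actual.has(k)),
    stale: [...actual].filter((k) => !expected.has(k)),
  }
}
```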

The Most Practical Feature: watch

One-time validation is good.
Continuous validation while coding is better.

envguard watch

Watch mode automatically re-runs validation when env files change.

By default, it watches:

  • .env
  • .env.example

Optionally, include schema watching:

envguard watch --schema .env.schema

This gives immediate feedback after every save and helps prevent late discovery of config breakage.

Quick Start

# 1) Basic key validation
envguard validate

# 2) Add type validation
envguard validate --schema .env.schema

# 3) CI-friendly strict mode
envguard validate --strict

# 4) JSON output for tooling
envguard validate --json

# 5) Continuous checks while coding
envguard watch --schema .env.schema

Suggested Team Workflow

  1. Maintain required keys in .env.example.
  2. Add .env.schema for type validation.
  3. Keep envguard watch running during development.
  4. Run envguard validate --strict in CI.

What EnvGuard Is Not

EnvGuard is not a secret manager.
It does not replace Vault or cloud secret stores.

It is a focused validation layer for env correctness.

Closing

Configuration bugs are boring and expensive.

EnvGuard helps you catch them early, keep local setups stable, and reduce avoidable CI/deploy failures.

Install:

go install github.com/atoyegbe/envguard@latest

Repo: envguard

If you already use another .env checker, what does it catch well and where does it fall short?

I built an open-source alternative to Toast and Square for restaurant management

2026-02-24 05:47:49

The Problem

If you run a small restaurant, your options for online ordering and management are:

  1. SaaS platforms like Toast, Square, or ChowNow — $100-300+/month with vendor lock-in
  2. Old open-source projects — mostly PHP/Laravel, hard to extend, dated UIs
  3. Build it yourself — months of work before you can take a single order

I wanted a fourth option: a modern, self-hosted, open-source platform that a developer could deploy in an afternoon.

Introducing KitchenAsty

KitchenAsty is an MIT-licensed restaurant ordering, reservation, and management system built as a TypeScript monorepo.

What it covers

For customers:

  • Browse the menu, add to cart, and place orders for delivery or pickup
  • Schedule orders for later or order ASAP
  • Pay with Stripe or cash on delivery
  • Track orders in real-time
  • Book table reservations
  • Leave reviews
  • React Native mobile app

For restaurant staff:

  • Manage menus with categories, options, allergens, and images
  • Kitchen display — a live Kanban board showing incoming orders
  • Process orders with one-click status progression
  • Manage reservations with table assignment
  • Create and track coupons
  • Moderate customer reviews
  • Staff management with role-based access

For the owner:

  • Dashboard with revenue trends, order analytics, and top-selling items
  • Multi-language support (6 languages)
  • Full settings panel for payments, email, orders, and more

Tech Stack

  • API: Node.js + Express
  • Admin & Storefront: React 18 + Vite
  • Mobile: React Native + Expo
  • Database: PostgreSQL + Prisma
  • Real-time: Socket.IO
  • Payments: Stripe
  • Styling: Tailwind CSS
  • Testing: Vitest + Playwright (330+ tests)
  • Language: TypeScript (strict mode everywhere)

Architecture Decisions

A few choices I made and why:

Monorepo with npm workspaces — Admin, storefront, server, and shared types all live in one repo. Changes to shared types are immediately visible everywhere. No publishing packages, no version mismatches.

Prisma over raw SQL — Type-safe database queries that catch errors at build time. The schema is self-documenting with 30 models and clear relationships.

Socket.IO for real-time — The kitchen display and order tracking need instant updates. Socket.IO made this straightforward with room-based broadcasting.
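
The room pattern is what keeps this simple: each kitchen display joins a per-restaurant room, and order events are emitted only to that room. A sketch of the shape this takes (the room and event names are illustrative assumptions; the types are structural stand-ins so the snippet is self-contained, but the real Socket.IO Server exposes the same to(room).emit(...) API):

```typescript
// Minimal structural types standing in for the Socket.IO server API
interface Emitter { emit(event: string, ...args: unknown[]): void }
interface RoomServer { to(room: string): Emitter }

// Emit an order event only to one restaurant's kitchen displays
function broadcastNewOrder(io: RoomServer, restaurantId: string, order: unknown): void {
  io.to(`kitchen:${restaurantId}`).emit('order:created', order)
}
```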

Separate admin and storefront apps — Different audiences, different concerns. The admin is a dense data-management tool. The storefront is a consumer-facing ordering experience. Sharing a single React app would have meant too many compromises.

Self-Hosting

The project is designed to be self-hosted with Docker. The docs site has a complete guide covering:

  • Server setup (Ubuntu/Debian)
  • Docker Compose deployment
  • Domain and DNS configuration
  • Reverse proxy with SSL (Nginx or Caddy)
  • Backups and maintenance

For local development, it's docker compose up -d for PostgreSQL, then npm run dev:server / npm run dev:admin / npm run dev:storefront.

By the Numbers

  • 27,000 lines of TypeScript
  • 30 database models
  • 118 API endpoints
  • 330+ tests (unit, integration, E2E)
  • 6 supported languages
  • Full CI/CD with GitHub Actions

Contributing

The project is set up for contributors:

Areas where help is most needed: accessibility, i18n coverage, test coverage, and structured logging.

If you've ever worked on restaurant tech, run a food business, or just want to contribute to a well-documented TypeScript project, I'd love to hear from you.

Engineering a Privacy-First Emotion Analytics Pipeline for Regulated Healthcare Data

2026-02-24 05:43:29

Engineering machine learning systems for healthcare is less about maximising model accuracy and more about navigating architectural constraints. Unstructured staff and patient feedback contains valuable emotional signals, but responsibly processing and operationalising this data requires careful engineering decisions around privacy, explainability, and governance.

This article focuses on the engineering considerations behind building a privacy-first emotion analytics pipeline for regulated environments. Rather than discussing product features or business outcomes, it explores how design choices around data handling, model structure, and decision logic influence whether an AI system can be safely deployed in high-trust settings such as healthcare.

The system discussed here was developed as part of EADSS (Emotionally-Aware Decision Support System), an end-to-end platform designed to convert unstructured organisational feedback into interpretable emotional signals and trend-based risk insights. The emphasis throughout this article is on how the system is built — and why certain engineering trade-offs are unavoidable when privacy and accountability are first-class requirements.

Why privacy must come before modelling

A core engineering constraint for healthcare-related text data is privacy. Feedback often contains names, email addresses, phone numbers, or contextual identifiers that should never be persisted unnecessarily. In EADSS, automatic PII detection and redaction is applied before any text is stored or processed further.

This ordering is deliberate. Redacting data after storage increases governance risk and complicates auditability. By ensuring that only anonymised representations enter the analytics pipeline, downstream components — including model inference, trend analysis, and alerting — operate on data that is safer by default. This approach prioritises data minimisation over raw analytical flexibility.
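
A stripped-down illustration of that ordering in TypeScript (real PII detection also needs NER for names and contextual identifiers; the regex patterns here are illustrative assumptions, not EADSS's implementation):

```typescript
// Redaction-before-persistence sketch: strip common PII patterns
// before the text ever reaches storage. Illustrative only.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, '[EMAIL]'],
  [/\+?\d[\d\s().-]{8,}\d/g, '[PHONE]'],
]

function redact(text: string): string {
  return PII_PATTERNS.reduce((t, [re, token]) => t.replace(re, token), text)
}

// Only the redacted form is ever persisted; the raw text is discarded.
function ingestFeedback(raw: string, store: (safe: string) => void): void {
  store(redact(raw))
}
```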

Designing the emotion analytics pipeline

Traditional sentiment analysis reduces text to a single polarity score (positive, neutral, negative). In real-world feedback, especially in healthcare contexts, emotional states are often overlapping and nuanced. A single message may express frustration, anxiety, and exhaustion simultaneously.

To capture this complexity, the pipeline uses multi-label emotion detection rather than single-label classification. This introduces several engineering challenges:

  • handling overlapping labels efficiently
  • calibrating confidence scores
  • preventing overconfident predictions

Threshold-based label selection and probability calibration are used to ensure that emotion outputs remain interpretable and conservative, particularly when downstream decisions rely on aggregated trends rather than individual documents.
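
Threshold-based label selection can be sketched in a few lines: keep every emotion whose calibrated probability clears a per-label threshold, rather than forcing a single argmax label. Illustrative TypeScript, not EADSS's code:

```typescript
type EmotionScores = Record<string, number>

// Multi-label selection: a label is emitted only if its calibrated
// probability clears that label's threshold (falling back to a default).
function selectLabels(
  scores: EmotionScores,
  thresholds: Record<string, number>,
  defaultThreshold = 0.5,
): string[] {
  return Object.entries(scores)
    .filter(([label, p]) => p >= (thresholds[label] ?? defaultThreshold))
    .map(([label]) => label)
}
```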

Topic and trend analysis over time

Individual feedback items are noisy and context-dependent. Treating them in isolation often leads to false positives or overreaction. Instead, the system aggregates emotional signals across rolling time windows (for example, 7-day and 30-day periods).

Robust statistical measures, such as median-based baselines and deviation scores, help detect meaningful shifts while reducing sensitivity to short-term spikes. From an engineering perspective, this design favours stability and interpretability over responsiveness to single data points.
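
The median-based approach can be sketched as follows (illustrative TypeScript, not EADSS's code): score the latest aggregate against the window's median, scaled by the median absolute deviation (MAD), both of which resist single-point spikes in a way that mean and standard deviation do not.

```typescript
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b)
  const mid = Math.floor(s.length / 2)
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2
}

// Deviation of the latest value from the rolling window's median,
// scaled by the median absolute deviation (MAD).
function deviationScore(window: number[], latest: number): number {
  const m = median(window)
  const mad = median(window.map((x) => Math.abs(x - m)))
  return mad === 0 ? 0 : (latest - m) / mad
}
```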

Rule-plus-ML decision logic

Pure end-to-end machine learning systems can be difficult to justify in regulated environments. Fully opaque decision-making pipelines increase operational and governance risk, particularly when outcomes need to be explained to non-technical stakeholders.

EADSS therefore uses a hybrid rule + ML decision logic. Machine learning models generate probabilistic emotion and topic signals, while deterministic rules frame how these signals contribute to risk indicators. This hybrid approach ensures that decisions are:

  • reproducible
  • auditable
  • easier to reason about during reviews

From an engineering standpoint, this design trades some flexibility for predictability and accountability.
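
The shape of that hybrid is simple: the model produces numeric signals, and a small set of deterministic, reviewable rules decides whether an indicator fires. A toy TypeScript sketch (the thresholds and field names are illustrative assumptions, not EADSS's rules):

```typescript
interface Signals {
  negativeEmotionTrend: number // e.g. a deviation score from trend analysis
  sampleSize: number           // documents in the rolling window
}

// Deterministic rules framing the probabilistic inputs: each condition
// is auditable on its own, independent of the model internals.
function shouldRaiseRiskIndicator(s: Signals): boolean {
  if (s.sampleSize < 20) return false        // too little evidence
  return s.negativeEmotionTrend >= 3         // sustained, large deviation
}
```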

Explainability as an engineering requirement

Explainability is not treated as an afterthought. When an alert is generated, the system surfaces:

  • the dominant emotional drivers
  • associated topics
  • representative anonymised text examples

Model versions and inference metadata are logged alongside outputs, allowing engineers and reviewers to trace which model produced which signal and why. This versioned explainability layer supports audits and post-hoc analysis without requiring access to raw, sensitive data.

Lessons from early builds

Several engineering lessons emerged during development:

Emotional signals are more informative when analysed as trends rather than absolutes.

Enforcing privacy controls before persistence simplifies governance and reduces downstream complexity.

Explainability-first architectures often deliver greater stakeholder trust than marginal accuracy improvements.

These lessons reinforced the importance of designing for constraints rather than optimising for idealised datasets.

Conclusion

In regulated environments such as healthcare, AI systems must be engineered with privacy, auditability, and accountability as primary constraints. This article outlined how a privacy-first emotion analytics pipeline can be designed to balance these requirements while still extracting meaningful insights from unstructured feedback.

The approach described here reflects an engineering mindset focused on decision support rather than automation, recognising that human judgement remains central in high-trust settings.

I Built a Niche AI/ML Job Board in 48 Hours — Stack, Code & Live Revenue Model

2026-02-24 05:37:07

I Built a Niche AI/ML Job Board in 48 Hours — Here's the Exact Stack

I just launched aijobsboard.oblivionlabz.net — a fully autonomous AI/ML remote jobs board that auto-aggregates listings, serves job seekers for free, and charges employers to feature listings.

The Business Model

  • Job seekers: Free. Always.
  • Employers: $99-$299 to feature/pin a listing (via Stripe Checkout)
  • Zero manual work: Jobs auto-populate from APIs every 30 minutes

Proven model: a similar niche board made $21k in its first 4.5 months of 2025.

The Tech Stack

  • Next.js 14 (App Router)
  • TypeScript
  • Tailwind CSS
  • Stripe (Checkout + Webhooks)
  • Vercel (hosting)
  • Cloudflare DNS

How Jobs Auto-Populate

I pull from 3 free sources with zero scraping — all RSS/JSON feeds:

const SOURCES = [
  'https://remoteok.com/api?tag=ai',
  'https://remotive.com/api/remote-jobs?category=software-dev&limit=50',
  'https://jobicy.com/?feed=job_feed&job_category=dev&job_types=full-time'
]

No scraping. No paid APIs. Free to consume.
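
The aggregation step is just fetch-and-normalize into one job shape. A sketch for the Remotive source (the response field names are assumptions based on its public JSON format and may need adjusting; the other two sources need their own normalizers):

```typescript
interface Job {
  title: string
  company: string
  url: string
  source: string
}

// Normalize one source's response into the shared Job shape.
// Field names (jobs, company_name, ...) are assumptions about the feed.
function normalizeRemotive(data: { jobs: any[] }): Job[] {
  return data.jobs.map((j) => ({
    title: j.title,
    company: j.company_name,
    url: j.url,
    source: 'remotive',
  }))
}

async function aggregate(): Promise<Job[]> {
  const res = await fetch(
    'https://remotive.com/api/remote-jobs?category=software-dev&limit=50',
  )
  return normalizeRemotive(await res.json())
}
```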

The Stripe Flow

  1. Employer hits /post-job
  2. Fills out company, title, URL, listing type
  3. Redirects to Stripe Checkout ($99 or $299)
  4. On success, webhook fires and job gets featured

export async function POST(req: Request) {
  // Stripe needs the raw body and signature header to verify the event
  const body = await req.text()
  const sig = req.headers.get('stripe-signature')!
  const event = stripe.webhooks.constructEvent(
    body, sig, process.env.STRIPE_WEBHOOK_SECRET!
  )
  if (event.type === 'checkout.session.completed') {
    // Store featured job, send confirmation
  }
  return new Response('ok')
}

What Makes It Autonomous

  • Jobs refresh via Vercel ISR (revalidate every 30 min)
  • Payments process automatically via Stripe
  • No database needed
  • Zero moderation for organic listings

The SEO Play

Niche job boards rank fast for long-tail terms:

  • "remote machine learning jobs"
  • "AI engineer remote jobs 2026"
  • "LLM engineer remote work"

Each job listing is a static page with proper meta tags and schema markup.
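
The schema markup part typically means embedding schema.org JobPosting JSON-LD in each listing page. A minimal sketch (this is a small subset of the fields; values and the helper's shape are illustrative):

```typescript
// Build schema.org JobPosting structured data for one listing.
function jobPostingJsonLd(job: { title: string; company: string; datePosted: string }): string {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'JobPosting',
    title: job.title,
    datePosted: job.datePosted,
    hiringOrganization: { '@type': 'Organization', name: job.company },
    jobLocationType: 'TELECOMMUTE', // schema.org's marker for remote roles
  })
}
```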

The Numbers

  • Build time: 48 hours
  • Monthly cost: $0 (Vercel free tier + free APIs)
  • Revenue model: $99-$299 per featured listing
  • Autonomy level: 97%

The board is live: aijobsboard.oblivionlabz.net

Built by Oblivion Labz