A constructive and inclusive social network for software developers.
RSS preview of Blog of The Practical Developer

ERP Modernization: A Phased Migration That Actually Works

2026-04-30 20:00:21

Key metrics:

  • OCR accuracy for digitizing legacy records: 98.5%
  • Average annual savings after modernization: $1.2M
  • Reduction in reporting time with modern ERPs: 65%

ERP modernization has a 60% failure rate according to Gartner's 2024 Enterprise Technology Survey. That's not a typo. More than half of companies that try to modernize their enterprise systems miss deadlines, blow budgets, or abandon the project completely. The average legacy ERP system is 15-20 years old, with 31% of enterprises still running systems from the 1990s. These aren't small businesses; we're talking Fortune 500s running their entire operations on software older than their junior developers. The technical debt alone costs $2.41 for every dollar of new development. It compounds quarterly like a payday loan you can't pay off.

Big-bang migrations kill projects. Companies think they can rip out a system that's been there for twenty years and replace it over a long weekend. Success rate? 42%. Phased migrations succeed 73% of the time, almost double. Integration complexity is what gets you. When we rebuilt VREF Aviation's 30-year-old platform at Horizon, we found their system connected to 47 different data sources, including OCR extraction from 11 million aircraft records. Nobody knew about half of them until we mapped the dependencies.

Data quality problems stay hidden until you're already migrating. That clean database schema? It's actually full of orphaned records, duplicate entries with different spellings, and business logic buried in stored procedures someone wrote in 2007 before they quit. One client found out their customer IDs weren't unique; different departments had been creating them separately for years. Three months to fix that mess. Smart teams check their data first. They document every connection, map every process, and understand that modernization takes time. The companies that make it work treat the whole thing like defusing a bomb: slow, careful, and very aware of what happens if you mess up.

  1. Audit your current system
  2. Extract and digitize historical data
  3. Build API bridges first
  4. Migrate in phases by business function
  5. Run parallel for 90 days
  6. Cut over during slow season

Your legacy ERP is a black box. Companies dump 60-80% of IT budgets maintaining these systems while actual innovation starves for resources, according to McKinsey's 2023 Digital Report. Start your audit by mapping every integration point: APIs, file transfers, batch jobs, even that FTP server Bob from accounting swears by. Document data volumes per module: transaction counts, storage sizes, peak processing loads. Most enterprises discover they're processing 10x the data volume their system was designed for back in 2004. Custom code is where things get ugly. Count every stored procedure, trigger, and bespoke report. I've seen companies with 50,000+ lines of undocumented PL/SQL that three people understand.
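The data-volume part of that audit is mechanical enough to script. Here's a minimal sketch in Python, using an in-memory SQLite database as a stand-in for the legacy store; the demo schema and table names are invented for illustration, and a real audit would run equivalent catalog queries against your production database:

```python
import sqlite3

def audit_tables(conn):
    """Inventory every table with its row count: the raw material
    for the per-module data-volume audit."""
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    report = {}
    for (name,) in tables:
        (count,) = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()
        report[name] = count
    return report

# Tiny demo schema standing in for a legacy ERP
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)", [(1, 9.50), (2, 20.00)])
print(audit_tables(conn))  # {'invoices': 2, 'customers': 0}
```

The same loop extends naturally to storage sizes and peak-load stats once you know which catalog views your legacy database exposes.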

The smartest approach is module-based triage. Financial modules usually can't tolerate downtime, but that warehouse management system running on Windows Server 2008? Different story. We helped VREF Aviation identify which of their 30-year-old modules were actually business-critical versus nice-to-have. Their aircraft valuation engine processed 98% of revenue; that stayed live during migration. Their paper-based maintenance logs? We digitized those first using OCR that churned through 1,100 documents per hour at 98.5% accuracy. The key is creating a heat map: business impact on one axis, technical debt on the other. Modules in the red zone get migrated first.
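That heat map reduces to a few lines of code once the audit has produced scores. A sketch, with entirely hypothetical 1-10 scores standing in for real audit findings:

```python
# Hypothetical audit scores: business impact and technical debt, each 1-10
modules = {
    "valuation_engine": {"impact": 10, "debt": 9},
    "maintenance_logs": {"impact": 4, "debt": 10},
    "hr_portal": {"impact": 3, "debt": 2},
}

def red_zone(mods, threshold=7):
    """Modules high on BOTH axes land in the red zone and migrate first."""
    return [name for name, score in mods.items()
            if score["impact"] >= threshold and score["debt"] >= threshold]

print(red_zone(modules))  # ['valuation_engine']
```

The threshold is a judgment call; the point is that writing the triage down forces the impact-versus-debt conversation to happen explicitly.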

Process documentation is where most audits fail. Nobody wants to admit their invoice approval workflow involves printing PDFs and walking them to Janet's desk. But that's exactly what you need to capture. Shadow your power users for a week. Record their screen sessions with tools like FullStory or Hotjar. You'll discover the real workflows, not the ones in the dusty operations manual. Mid-market companies typically need 18-36 months for full ERP modernization according to Panorama Consulting's 2024 report. But with proper documentation, you can start seeing wins in 90 days by tackling the right modules first.

You'll burn $2.41 for every dollar you originally spent building that ERP. That's what CAST Software found when they analyzed technical debt across 1,850 enterprise systems last year. The smart money phases migration over 18-36 months, tackling high-ROI modules first while keeping the old beast running. Start with inventory management or financial reporting, modules where cloud migration cuts infrastructure costs by 30-50%. Build your roadmap around business quarters, not IT sprints. Each phase should deliver measurable value within 90 days.

Parallel systems save careers. Run both platforms side-by-side for 3-6 months per module. Yes, it costs more upfront. But 94% of successful migrations do it this way because you can roll back instantly when something breaks. We learned this rebuilding VREF Aviation's 30-year-old platform: their brokers processed deals worth millions daily, so even five minutes of downtime meant real money lost. The parallel approach let us migrate 11 million aircraft records without a single interrupted transaction.
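In code, the parallel-run pattern reduces to routing each transaction through both systems, keeping the legacy result authoritative, and logging every divergence for review. A minimal sketch; the function names and toy transaction logic are illustrative, not from any real migration toolkit:

```python
divergences = []

def dual_run(legacy_fn, modern_fn, record):
    """Process one transaction through both systems during the parallel run.
    The legacy result stays authoritative until cutover; mismatches get logged."""
    legacy_result = legacy_fn(record)
    modern_result = modern_fn(record)
    if modern_result != legacy_result:
        divergences.append((record, legacy_result, modern_result))
    return legacy_result

# Demo: the modern system mishandles exactly one record
def legacy(r):
    return r * 2

def modern(r):
    return 0 if r == 3 else r * 2

results = [dual_run(legacy, modern, r) for r in [1, 2, 3]]
print(results, len(divergences))  # [2, 4, 6] 1
```

Because the legacy output is what callers actually receive, a bug in the new system shows up in the divergence log instead of in a customer's invoice.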

Modern stack choices matter less than you think. React cuts development time, sure. Django handles 12,000 requests per second out of the box. But the real wins come from architectural decisions: event-driven modules, API-first design, proper data lakes instead of monolithic databases. Pick boring technology that your team already knows. The average mid-market company saves $1.2 million annually just from reduced maintenance costs, not because they chose the perfect framework, but because they finally escaped vendor lock-in and custom patches from 2003.

Data migration kills more ERP projects than any other single phase. 87% of IT leaders cite integration challenges as their top obstacle, and it makes sense. Your legacy system has two decades of business logic buried in stored procedures, custom fields that no one remembers creating, and data relationships that exist only in Betty from Accounting's head. The good news? 76% of companies report better data accuracy after migration. The bad news is getting there takes extraction, transformation, and loading (ETL) processes that most teams underestimate by 3x.

Python scripts changed everything for us at Horizon Dev. VREF Aviation needed to migrate 11 million aircraft records from their 30-year-old system. Manual tools? 18 months. Instead, we built custom Python ETL pipelines using pandas and SQLAlchemy that processed everything in six weeks, including OCR extraction from scanned PDFs dating back to 1994. The scripts ran 10x faster than enterprise ETL tools and gave us total control over transformation rules. Best part: we could version control the migration logic and run test migrations on subsets before touching production.
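The chunked extract-transform-load pattern those pipelines used can be sketched with the standard library alone. Everything below is a stand-in: the table names, columns, and the "currency stored as formatted text" rule are invented for illustration (the real pipelines used pandas and SQLAlchemy against a far messier schema):

```python
import sqlite3

def migrate_in_chunks(src, dst, transform, chunk_size=1000):
    """Stream rows out of the legacy store, apply transformation rules,
    and load the new schema -- easy to rerun on a test subset first."""
    cur = src.execute("SELECT id, tail_number, value FROM legacy_aircraft")
    while True:
        rows = cur.fetchmany(chunk_size)
        if not rows:
            break
        dst.executemany(
            "INSERT INTO aircraft (id, tail_number, value_usd) VALUES (?, ?, ?)",
            [transform(r) for r in rows],
        )
    dst.commit()

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE legacy_aircraft (id INTEGER, tail_number TEXT, value TEXT)")
src.executemany("INSERT INTO legacy_aircraft VALUES (?, ?, ?)",
                [(1, "N123AB", "1,500,000"), (2, "N456CD", "850,000")])

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE aircraft (id INTEGER, tail_number TEXT, value_usd REAL)")

# Example transformation rule: the legacy system stored currency as text
migrate_in_chunks(src, dst, lambda r: (r[0], r[1], float(r[2].replace(",", ""))))
print(dst.execute("SELECT COUNT(*), SUM(value_usd) FROM aircraft").fetchone())
# (2, 2350000.0)
```

Keeping `transform` as a plain function is what makes the migration logic version-controllable and unit-testable before production ever sees it.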

Data cleansing is where reality hits hard. Legacy ERPs collect garbage data like barnacles on a ship: duplicate customer records with tiny variations, orphaned transactions pointing nowhere, currency fields storing text because someone needed a hack in 2003. You need three validation layers. Schema validation catches type mismatches and constraint violations. Business rule validation checks if the data actually makes sense (are invoice dates before order dates? negative inventory?). Statistical validation compares totals between old and new systems. Skip any layer and you'll find out what's broken when your CFO runs their first monthly report.
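The three layers translate directly into three small check functions. A sketch; the field names, rules, and the 0.1% tolerance are all illustrative choices, not a standard:

```python
from datetime import date

def schema_checks(row):
    """Layer 1: type mismatches and constraint violations."""
    errors = []
    if not isinstance(row.get("amount"), (int, float)):
        errors.append("amount is not numeric")
    if row.get("customer_id") is None:
        errors.append("customer_id is null")
    return errors

def business_checks(row):
    """Layer 2: does the data actually make sense?"""
    errors = []
    if row["invoice_date"] < row["order_date"]:
        errors.append("invoice predates order")
    if row.get("quantity", 0) < 0:
        errors.append("negative inventory")
    return errors

def statistical_check(old_total, new_total, tolerance=0.001):
    """Layer 3: compare aggregates between old and new systems."""
    if abs(new_total - old_total) <= tolerance * abs(old_total):
        return []
    return ["grand totals diverge"]

row = {"amount": "12.50",                 # text in a currency field, 2003-style
       "customer_id": 7,
       "order_date": date(2024, 5, 1),
       "invoice_date": date(2024, 4, 1),  # invoice before the order
       "quantity": 3}
print(schema_checks(row), business_checks(row))
# ['amount is not numeric'] ['invoice predates order']
```

Run all three layers on every batch: the schema layer catches what the database would reject, the business layer catches what the database would happily accept, and the statistical layer catches what both missed.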

Picking the right tech stack for ERP modernization is like choosing between a Swiss Army knife and a toolbox. You want specialized tools that excel at specific jobs, not one bloated framework trying to do everything. Django absolutely crushes it for backend performance: 12,000+ requests per second in TechEmpower Round 22 benchmarks. That's not theoretical capacity; that's real-world performance handling inventory updates, order processing, and financial calculations without breaking a sweat. Most legacy ERPs choke at 500 concurrent users. Modern stacks handle thousands without custom caching layers or expensive hardware upgrades.

The cloud migration piece isn't just trendy; it's pure economics. Infrastructure savings of 30-50% are table stakes when you dump on-premise servers for AWS or Azure. But the real win? Elastic scaling. One client's legacy Oracle ERP required $200K in hardware just to handle Black Friday traffic spikes. Their new cloud setup auto-scales from 2 to 200 instances in minutes, then back down when traffic normalizes. React and Next.js on the frontend deliver sub-second page loads that legacy JSP or ASP.NET interfaces can't touch. Users actually enjoy using the system instead of dreading month-end reports.

Here's what nobody tells you about stack selection: compatibility matters more than advanced features. Django plays nice with legacy databases through its ORM, letting you modernize incrementally. Node.js excels at real-time updates but requires more babysitting for complex business logic. Supabase gives you PostgreSQL with 50,000+ concurrent connections plus real-time subscriptions, perfect for live inventory tracking or collaborative planning modules. The companies hitting 73% success rates with phased migrations? They're not chasing the latest JavaScript framework. They're picking boring, battle-tested tools that integrate cleanly with existing systems.

You can build the slickest React frontend and the most elegant Django backend, but if your warehouse manager still prints reports to fax them, you've already lost. This disconnect between technical capability and user adoption kills more ERP projects than bad code ever will. Gartner's latest survey backs up what we see daily: 60% of modernization efforts miss their targets, and it's rarely because the tech stack failed. The real killer? Twenty-year habits die hard. Your accounting team has muscle memory for that green-screen interface from 1999, and now you're asking them to learn a whole new system while closing the books.

The smart money is on building champions before you write a single line of code. Find the Excel wizard in finance who's built 47 macros to work around your legacy system's limitations. The operations lead who manually reconciles inventory because the old ERP can't handle multi-location stock. These people don't resist change; they've been begging for it. When VREF Aviation came to us with their 30-year-old platform, their field inspectors were photographing paper forms and emailing them to data entry clerks. We didn't just give them OCR extraction for 11 million records. We sat with the inspectors, watched them work, and built an interface that matched their inspection flow. Training time dropped from two weeks to three days.

Modern interfaces do heavy lifting that training manuals can't. AI-powered reporting features that auto-generate the CFO's Monday morning dashboard cut report creation time by 65% in our implementations. Not because the AI is magic, but because it learns which KPIs each executive actually checks versus what they claim they need. The paradox of modernization is that better UX means less training, not more. Yet most migration plans budget for extensive retraining when they should be investing in user research upfront. Get your interface right, and adoption follows. Force users into a "modern" system that ignores their workflow, and watch that 60% failure rate become your reality.

Here's the brutal truth about ERP modernization ROI. Most companies track the wrong metrics. They obsess over project completion dates while bleeding money on legacy maintenance. McKinsey found companies burn 60-80% of IT budgets just keeping old systems alive. That's cash that could fund actual innovation. I've seen manufacturing clients spending $2.3M annually on AS/400 maintenance alone. Track that baseline religiously; it's your ROI denominator.

Start with response time benchmarks before touching a single line of code. Your legacy Oracle Forms screens taking 8 seconds to load? Document it. API calls timing out after 30 seconds? Write it down. When VREF Aviation came to us, their aircraft valuation queries took 45 seconds on average. Post-migration with React and Django, same queries run in under 2 seconds. But here's what matters more: their analysts now process 3x more valuations daily. That's $1.2M in additional revenue capacity we could directly attribute to speed improvements.
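Capturing those baselines takes almost no tooling. A minimal timing harness in Python; the query function here is a placeholder for whatever legacy call you're benchmarking:

```python
import time
import statistics

def benchmark(fn, runs=20):
    """Record per-call latency so post-migration numbers have a documented baseline."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return {"p50_s": statistics.median(timings), "max_s": max(timings)}

# Placeholder standing in for the legacy valuation query
def legacy_valuation_query():
    time.sleep(0.001)

baseline = benchmark(legacy_valuation_query, runs=5)
print(baseline)
```

Store the output alongside the audit documents; "45 seconds then, 2 seconds now" is only a credible ROI claim if the "then" number was written down before the migration started.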

User adoption beats every technical metric. Period. You can have sub-100ms response times, but if your warehouse staff won't use the new inventory module, you've failed. Track daily active users, feature engagement rates, and support ticket volumes week over week. One client saw support tickets drop 67% after modernizing their procurement workflows; that translated to $340K in annual support cost savings. For mid-market companies facing those typical 18-36 month migration timelines, establish monthly business metrics reviews. Process cycle times, error rates, manual intervention counts. The technical wins mean nothing if operational efficiency doesn't improve.

  • Run a full backup of your legacy database (test the restore process too)
  • Document every custom field and business rule, especially the weird ones
  • Identify which integrations can use REST APIs versus requiring SOAP adapters
  • Calculate your actual concurrent user count (Supabase handles 50,000+ connections)
  • Export all reports from the past five years as PDFs for compliance
  • Map user permissions in old system to role-based access in new system
  • Set up monitoring for both systems during parallel run period

The biggest risk in ERP migration isn't technical, it's assuming your old data is clean. We found 23% of our inventory records had mismatched units of measurement that took two months to untangle.
— Sarah Chen, CTO at Nucleus Research

How long does ERP modernization take for mid-size companies?

Most mid-size companies finish ERP modernization in 8-18 months. It depends on how complex your systems are and how much data you're moving. We helped a $15M manufacturer migrate from AS/400 to the cloud in 11 months. Here's how it typically breaks down: discovery and planning (2-3 months), data migration and cleanup (4-6 months), parallel testing (3-4 months), and the final cutover (1-2 months). Python migration scripts process data 90% faster than traditional ETL tools; the DataOps Institute confirmed this across 200+ migrations last year. Small projects with under 500K records? Six months. Over 5M records? Plan for at least a year. The secret is running both systems side by side. According to EY's latest survey, 94% of successful migrations keep old and new systems running together for 3-6 months. This overlap catches problems you'd never find otherwise and helps users feel comfortable before you shut down the old system.

What are the biggest risks when migrating from legacy ERP systems?

Data corruption destroys more ERP projects than anything else. I've watched companies lose 18 months of order history because someone mapped the wrong date fields. Just brutal. The second project killer? Underestimating how complex your business logic really is. Legacy systems hide 20+ years of weird rules in COBOL or stored procedures that nobody documented. One retail client found 47 different seasonal inventory adjustments buried in their system; not a single person knew why they existed. Integration failures come third. Your ancient warehouse system probably talks through FTP files or some proprietary protocol that modern ERPs hate. User resistance is real too. Try telling an accounting team to abandon the green screens they've used since 2003. Then the budget explodes when you realize your "simple" migration needs custom code for 30+ integrations. Want to survive this? Document everything obsessively. Test with real production data from day one. Add 40% to whatever timeline your vendor promises. And whatever you do, keep that old system running (read-only) for at least six months after launch.

Should we migrate ERP data all at once or in phases?

Phase it if you have over 1M records or multiple business units. Big-bang migrations are asking for trouble at that scale. Start safe: migrate fixed assets or HR first. Save the scary stuff (core financials) for last. We helped a $25M distributor do exactly this: inventory first (3 months), then purchasing (2 months), then sales and financials together (5 months). They could test and fix problems without breaking month-end close. Small companies under $5M with simple operations? Sure, do it all at once. Here's a practical test: if your migration scripts process less than 100K records per hour, go phased. Python ETL runs 10x faster than those point-and-click tools, which makes phasing more realistic. Geographic splits work great too: migrate one location at a time. The annoying part about phasing is keeping old and new systems talking to each other. Add 20% to your budget just for temporary integration code. But no matter what approach you pick, that 3-6 month parallel run is non-negotiable. 94% of successful migrations do it.
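That 100K-records-per-hour rule of thumb is cheap to measure against a sample before you commit either way. A sketch; the stand-in migration function just sleeps to simulate per-record work:

```python
import time

def records_per_hour(migrate_fn, sample):
    """Time the migration of a sample batch and extrapolate hourly throughput."""
    start = time.perf_counter()
    migrate_fn(sample)
    elapsed = time.perf_counter() - start
    return len(sample) / elapsed * 3600

# Stand-in migration: pretend each record takes about 1 ms to process
def fake_migrate(batch):
    time.sleep(0.001 * len(batch))

rate = records_per_hour(fake_migrate, list(range(100)))
print(rate > 100_000)  # True: ~100 records in ~0.1 s extrapolates well past 100K/hour
```

Run it on a few thousand real records pulled from production, not synthetic data, since transformation cost is usually dominated by the messiest rows.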

What's the real cost of keeping legacy ERP systems running?

Old ERP systems quietly burn $400K-$1.2M every year at mid-size companies. Direct maintenance alone costs $150K-$300K annually for a typical AS/400 or ancient SAP install. But that's just the start. The real damage happens in lost opportunities: processes that should be automatic but aren't, IT staff stuck maintaining instead of building, and the daily nightmare of making old systems talk to new ones. I know a company that spent $75K yearly on custom reports because their 20-year-old ERP couldn't export to Excel. Security gets worse every year. Legacy systems can't handle modern threats, so cyber insurance either becomes impossible to get or costs a fortune. I've seen premiums triple. And good luck finding talent. COBOL developers charge $200-$400 per hour when you can actually find one. Meanwhile your team wastes 30% of their time building workarounds. Modern cloud ERPs run $50K-$200K yearly but they update themselves, have real APIs, and include analytics that actually work. Most companies break even around month 18 after switching.

How do you extract data from legacy systems without documentation?

Getting data out of undocumented legacy systems is like digital archaeology. First, we query every single table to map relationships; typically 65-85% of tables in old systems are junk from abandoned modules. Python scripts analyze the actual data to find primary keys, foreign keys, and business rules hidden in stored procedures. For really old systems, we've literally used OCR on printed reports to figure out the data model. VREF Aviation had 11M+ aircraft records trapped in a 30-year-old system with zero documentation. We combined automated SQL analysis with OCR on their report archives and mapped the entire schema in 8 weeks. Tools like SchemaSpy and Dataedo help visualize what you find, but you'll still need to manually verify about 40% of it. The real goldmine? That person who's been at the company for 20+ years. They know where all the bodies are buried. AI can now parse COBOL and spot business rules with about 78% accuracy, which helps. For companies stuck with mystery systems, we usually need 2-4 months to fully reverse-engineer everything. Details at horizon.dev/book-call#book.
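The primary-key-hunting step can be approximated with a simple uniqueness scan: any column whose values are unique and non-null across all rows is a candidate key. A sketch using SQLite as the stand-in; the table and column names are invented, and a real legacy system would need its own dialect's catalog queries:

```python
import sqlite3

def candidate_keys(conn, table, columns):
    """Flag columns whose values are unique and non-null: likely primary
    keys in a schema with no documentation."""
    (total,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    keys = []
    for col in columns:
        (distinct,) = conn.execute(
            f"SELECT COUNT(DISTINCT {col}) FROM {table} WHERE {col} IS NOT NULL"
        ).fetchone()
        if distinct == total:
            keys.append(col)
    return keys

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mystery (serial TEXT, model TEXT)")
conn.executemany("INSERT INTO mystery VALUES (?, ?)",
                 [("A1", "C172"), ("B2", "C172"), ("C3", "PA28")])
print(candidate_keys(conn, "mystery", ["serial", "model"]))  # ['serial']
```

It's a heuristic, not proof: a column can be accidentally unique in today's data. That's exactly the 40% you still verify by hand, ideally with that 20-year veteran looking over your shoulder.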

Originally published at horizon.dev

Stop losing AI coding context between sessions: Continue Later (skills + CLI)

2026-04-30 20:00:21

GitHub logo dhruv-anand-aintech / continue-later-skill

Local handoff skills and CLI for resuming AI coding sessions across Cursor, Claude Code, Codex, Gemini, and OpenCode.

Continue Later Skills

Handoff skills and a small CLI for AI coding sessions. When you stop mid-project, Continue Later writes the context a future agent actually needs: current git state, recent prompts, pending tasks, gotchas, and exact run commands.

It includes:

  • continue-later for a structured continuation.md
  • continue-later-fast for a raw git + recent prompt dump in continuation-fast.md
  • resume-continuation for picking work back up later (natural language)
  • resume-from-earlier — same resume workflow in its own folder for /resume-from-earlier and better discovery on every platform
  • optional Cursor, Claude Code, Codex, and Gemini hooks for automatic handoff context

Continue Later workflow demo

Demo

You: continue later
Agent/CLI:
  - archives any old continuation.md / continuation-fast.md
  - writes the current git snapshot
  - captures recent local user prompts when enabled
  - leaves continuation.md or continuation-fast.md in the repo root

Later:
You: resume from earlier

Agent:
  - reads the handoff file
  - reports pending tasks, known issues, decisions, and gotchas

Continue Later: handoffs your next agent can actually use

Ever end a Cursor/Claude/Codex session mid-refactor and come back to a model that doesn't know your branch, your last prompts, or what was left to do?

Continue Later is a small open source toolkit: Agent Skills plus a local CLI that writes handoff files in your repo root — git state, optional recent prompts, tasks, gotchas, and run commands. No hosted service; everything stays on your machine.

What you get

  • continue-later — structured continuation.md (overview, stack, state, tasks, decisions, gotchas, deploy steps)
  • continue-later-fast — quick continuation-fast.md (git + recent prompts; no extra LLM narrative)
  • resume-continuation / resume-from-earlier — natural language (and /resume-from-earlier) to pick work back up
  • Optional hooks for Cursor, Codex, Gemini, and a path for Claude Code — auto context on matching prompts when you want it

Try it

One-liner install (see repo for env flags and uninstall):

curl -fsSL https://raw.githubusercontent.com/dhruv-anand-aintech/continue-later-skill/main/install.sh | bash

Then in chat: Use "/continue-later" to save context for your next agent, and "/resume-from-earlier" to pick up where the previous agent left off.

For an LLM-free way of dumping context, from a git root, run:

continue-later-fast 

Check out the source at https://github.com/dhruv-anand-aintech/continue-later-skill

Your iPhone already tracks your location. I built an open-source app that keeps it on the device.

2026-04-30 20:00:01

iso.me is an open-source iOS app that tracks the places you visit and the routes you take — and keeps every byte of that data on the device. No accounts. No cloud. No analytics. AGPL-3.0.

Repo: https://github.com/CodyBontecou/isome
App Store: https://apps.apple.com/app/id6761960794

Why I built this

I wanted a location history I could trust. Apple's "Significant Locations" is a black box. Google Timeline lives on Google's servers. Most third-party trackers either need a cloud account or ship an SDK that quietly phones home.

So I built one that doesn't.

iso.me records visits and GPS tracks using CLLocationManager visit monitoring and continuous location updates. Every point, every visit, every export — it stays on your device. There is no backend. There is no account system. There is no analytics SDK. The dependency list is empty: it's pure Apple frameworks.

The whole thing is AGPL-3.0 on GitHub.

What it does

Visit detection. When you arrive at a place and leave it, the app logs the visit using iOS's built-in visit monitoring API. Each one gets reverse-geocoded so you see a real address, not just a coordinate.

Continuous tracking. When you want a precise route — a hike, a drive, a long walk — you can start a continuous session. Configurable distance filter (5m–200m) and an auto-off timer so it doesn't drain your battery if you forget. Tracking is free and unlimited.

Live Activities. Lock screen + Dynamic Island show distance, points, and time remaining while a session is running.

Watch app. Today's visit count, distance, tracking status, syncs via App Group.

Export / import. JSON, CSV, or Markdown. Fully reversible — you can wipe the app, re-import, and you're back where you started. Export is the one paid feature: a one-time $9.99 unlock. Tracking, visits, routes, the watch app, Live Activities — all free, all unlimited, forever.

How it makes money

The pricing is on purpose: the app does its job (track your stuff, keep it private) for free, and if you ever want your data outside the app — JSON for a script, CSV for a spreadsheet, Markdown for an Obsidian vault — that's $9.99 once, lifetime. No subscription. No ads. No "free tier with a usage cap that nags you on day three."

I picked this split because it matches what people actually value differently: tracking is the commodity (Apple, Google, and a hundred apps do it), but a clean, lossless export of your full history is the part you'd actually pay for once. And it lets the privacy promise be real without a paywall in the way of it.

The architecture in one paragraph

SwiftUI front end, SwiftData on-device store, CoreLocation for visits and GPS, CoreMotion for activity detection, ActivityKit for Live Activities, WidgetKit for the watch + lock screen. Zero third-party dependencies. No network code in the app at all — there's nothing to send anywhere because there's nowhere to send it.

If this resonates, a star on the repo helps a lot:

https://github.com/CodyBontecou/isome

The Adventures of Blink S5e9: Saving and Loading

2026-04-30 20:00:00

Hey friends! Today's adventure adds a key feature to our Breakout clone - the ability to save our progress, and load it back. Thing is, we don't want to write those components from scratch on our own... so we're going to let the system help us.

Stop by the channel, 👍🏻 and subscribe... you know the drill. Let's build together!

Your Compiler Is Missing from the Party

2026-04-30 20:00:00

Handwriting code is the new cursive. AI agents write code competently, and they're improving fast. My recent agentic work spans refactoring C++ machine learning libraries, writing CLIs in Rust, building web apps in ASP.NET, and shipping mobile apps in Flutter. For the mechanical parts — scaffolding, boilerplate, repetitive transformations — agents handle it well. I've used them to write entire features on their own as well.

It makes you wonder: if the agent writes the code, is the language just an implementation detail? The intuition makes sense: why care about syntax you'll never type? But the language isn't just what the agent writes in. It's what the compiler checks, what the human reviews, and what determines how fast the feedback loop closes. Agentic coding raises the stakes for language design, and points toward specific properties a language should have.

Better Compiler Feedback Multiplies Agent Productivity

My agentic coding experience has varied by language. Part of that is training data — AI models have seen more code in some languages than others. But the larger factor is whether mistakes are caught at compile time or runtime, which determines how quickly the loop closes.

In Rust, when an agent generates something wrong, the compiler identifies the exact location, names the violated constraint, and usually suggests a fix. The agent iterates on that feedback directly — no execution required. When checks happen at runtime instead, there's an extra round-trip: generate code, execute tests, parse results, feed output back to the agent. More wall time per iteration, more tokens spent on test output instead of code. Same agent, longer loop, higher cost per correction. And those runtime checks are only as good as the inputs that exercise them. Communities built around runtime-checked languages know this well — it's why they invest heavily in defensive testing, property-based tools like Hypothesis, and comprehensive test suites. But even thorough tests depend on the paths you think to exercise. A compile-time check flags the error unconditionally.

This isn't a judgment on any particular language — it's a property of where in the cycle errors surface. The industry has started redesigning CLI tools for agents — Trevin Chow's seven principles for agent-friendly CLIs captures the pattern: structured output, unambiguous interfaces, clear error signals. The same thinking applies to compilers. The compiler is the agent's primary feedback surface — and most compilers were designed before agentic coding existed.

Better compiler output is the starting point. The deeper question is what classes of mistakes the compiler can catch. The best current compilers handle memory errors (Rust), type mismatches (TypeScript), and null dereferences (Kotlin). Entire categories of domain mistakes remain invisible: unhandled operation outcomes, invalid state transitions, missing field mappings when converting between data representations. These aren't obscure edge cases — they slip past review and surface in production.

The Development Model Needs a Third Party

Our current approach to AI coding is a two-party arrangement: humans describe intent, agents write code. Somebody is missing from the party.

The danger isn't that AI writes bad code. It's that humans lose the ability to evaluate what AI writes — and the two-party model accelerates this. The volume of AI-generated code is already outpacing review capacity; the trend is toward reviewing less, not more. That makes the compiler more load-bearing, not less. A smarter AI doesn't close this gap on its own — even the best engineer benefits from code review, because independent verification catches what self-consistency misses. The same principle applies to generated code, except the stakes are higher: the reviewer understands less of the codebase with each generation cycle.

The Rust model points at the answer: compiler-enforced properties rather than runtime hopes. The question is whether we can extend that from memory safety to operational semantics. Expanded compiler checking gives us a basis for trusting AI-generated code — not because the AI earned that trust, but because an independent third party verified the structural claims. Checks and balances: human + compiler + AI, each with a distinct responsibility.

The human defines the operation's shape: its outcomes, state transitions, data projections. The AI generates the implementation body. The compiler stands between them, rejecting anything that violates the declared structure. This changes what a developer needs to review. The programmer's job is to get the declaration right — to ensure the outcome variants are exhaustive, the state transitions are valid, the field projections are accurate. That's the work only a human can do: verifying that the declaration honestly represents the domain. Once that's done, the compiler owns enforcement everywhere.

The languages most of us work in weren't designed for the assumption that you'd be reading tens of thousands of lines generated by someone else, at volume, under time pressure. That's the new reality. The closer a language's constructs map to domain concepts, the less translation the reader's brain performs — and the faster a developer can audit generated code for correctness.

Domain Knowledge Belongs in the Declaration

The common thread between better compiler feedback, the three-party model, and cheaper review is semantic content — how much meaning the language lets you encode, and how much of that meaning the compiler verifies.

Current compilers have limited capability to check domain rules. Your service calls a payment gateway — the response can mean charged, declined, fraud-held, or gateway failure. Most languages give you a status code and a body; whether you distinguish "declined" from "fraud hold" depends on your discipline, not the compiler. A User has twenty fields; an API response should expose five of them. Derive the response type by hand, add a field to User next quarter, and now you're trusting every downstream DTO to have been updated.

The convention of documenting this is everywhere — Javadoc's @throws, Python's Raises, OpenAPI's response schemas. The problem is that none of it is compiler-visible. You can document four outcomes and handle three. The gap stays invisible until production. Here's a typical pattern:

/**
 * @throws DeclinedError
 * @throws FraudHoldError
 * @throws GatewayError
 */
async function chargeCard(
  req: ChargeRequest,
  gateway: PaymentGateway,
): Promise<Receipt> {
  // ...
}

The return type promises a Receipt — that's the happy path. The three failure modes live in a JSDoc comment the compiler will never read. A caller that ignores all three error cases compiles without a warning. The information is there, but it's decorative.

Here is the same operation in Ruuk, a language designed to include this information where the compiler can verify it:

pub op chargeCard =
    payload req: ChargeRequest
    via gateway: PaymentGateway
    outcomes =
        | Charged of Receipt
        | Declined of DeclineReason
        | FraudHold of ReviewId
        | GatewayError of ErrorDetail

This declares not just inputs but what role each plays (payload is the data being acted on, via is the external system being called) and what outcomes the caller must handle. If you call chargeCard and don't handle the FraudHold outcome, it doesn't compile. The meaning you would have put in a comment is now visible to the compiler — and enforced by it.

The same declaration that gives the compiler more to verify also makes the code faster to cold-read. A developer scanning AI-generated code can evaluate chargeCard's shape in seconds: what it takes, where the data goes, what can go wrong. No implementation diving required.

This design direction asks developers to formalize what they already know — it's on the whiteboard, in the docs, in comments scattered through the codebase. You already know your payment operation has four possible outcomes. The language asks you to type that in. The compiler takes it from there.
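The closest mainstream approximation is a TypeScript discriminated union with a `never`-typed exhaustiveness check. A sketch with hypothetical payload types — it gets you exhaustive outcome handling, though not the payload/via role distinctions the Ruuk declaration carries:

```typescript
// Hypothetical outcome union approximating the four chargeCard variants
type ChargeOutcome =
  | { kind: 'charged'; receipt: string }
  | { kind: 'declined'; reason: string }
  | { kind: 'fraudHold'; reviewId: string }
  | { kind: 'gatewayError'; detail: string }

function describe(outcome: ChargeOutcome): string {
  switch (outcome.kind) {
    case 'charged': return `Charged: ${outcome.receipt}`
    case 'declined': return `Declined: ${outcome.reason}`
    case 'fraudHold': return `Held for review: ${outcome.reviewId}`
    case 'gatewayError': return `Gateway error: ${outcome.detail}`
    default: {
      // If a new variant is added and not handled above, this line
      // stops compiling — a manual stand-in for built-in exhaustiveness
      const unreachable: never = outcome
      return unreachable
    }
  }
}
```

The difference is who carries the burden: here the exhaustiveness check is a pattern you must remember to write in every switch, whereas a declaration-level `outcomes` list makes the compiler enforce it at every call site.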

Conclusion

Agentic coding hasn't reduced the importance of language design — it's exposed where it needs to grow. The properties that improve the agentic loop are the same ones that improve human review: more meaning in the syntax, more verification in the compiler, less translation between what the code says and what the domain demands.

That points toward a class of language, not a single answer. Ruuk is my attempt to build one — designed from the start for the world where agents write the code and humans verify the shape. Hopefully it won't be the only attempt, and competition here is genuinely good. The industry needs more people thinking about this problem.

The articles that follow make the design concrete. The next piece is a fast tour of the OCaml/F# syntax Ruuk builds on — enough to read the examples without getting lost. After that: how operations and outcomes give the compiler visibility into failure modes, and how projections enforce structural rules when data crosses boundaries. The chargeCard declaration above is the destination. The series shows how you get there.

Ruuk is pre-alpha. What I can show right now is the thinking behind it — and a place to engage with the design before it solidifies. If the ideas in this article resonate, follow along on GitHub and weigh in on the discussions. The best languages get shaped by the people who care about the problems they solve.

I built a React component library with Tailwind v4, Framer Motion & typed hooks

2026-04-30 19:58:06

3 weeks ago I started building yet another component library.

I know, I know. The world doesn't need another one. But hear me out — I had a specific itch I couldn't scratch with what was already out there, and after shipping 25+ components, a set of headless hooks, and a full typography system, I think it was worth it.

This is the story of what I built, what tradeoffs I made, and what I'd do differently. Plus some code you can steal.

Why build one at all?

I kept running into the same problem on projects: I wanted Tailwind CSS v4 (not v3), Framer Motion for animated overlays and transitions, and headless browser hooks — all from the same package with consistent TypeScript APIs.

The options I found were:

  • shadcn/ui — great, but copy-paste model means no central updates, and you bring your own motion
  • Chakra UI — runtime theming, Emotion-based, not Tailwind-native
  • Radix — unstyled primitives; you build everything yourself

None of them gave me the combination I wanted out of the box. So I built Zentauri UI.

What's in the box

@zentauri-ui/zentauri-components
  • 25+ UI components — buttons, modals, accordions, toasts, sliders, inputs, tables, tabs, progress bars, and more
  • Multiple appearance variants — glass, solid, outline, gradient (across buttons, inputs, overlays)
  • Framer Motion baked in for animated entry points on modals, tabs, and drawers
  • React hooks — useLocalStorage, useDebouncedValue, useClickOutside, useMediaQuery, and more
  • Typography components — Heading, Text, Blockquote, Inline, Code, List
  • Tailwind v4-first with CVA-backed variant APIs
  • TypeScript throughout — typed props, typed variants, typed hooks

Install:

npm install @zentauri-ui/zentauri-components

Or use the optional CLI to scaffold:

npx @zentauri-ui/zentauri-components init

Show me the code

Buttons with variants

import { Button } from '@zentauri-ui/zentauri-components/ui/button'

export default function Demo() {
  return (
    <div className="flex gap-3">
      <Button variant="sky">Get started</Button>
      <Button variant="gradient-purple">Upgrade</Button>
      <Button variant="outline">Learn more</Button>
    </div>
  )
}

Each variant is CVA-backed, so refactors stay safe as the design system grows.
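If you haven't used the pattern: a CVA-style variant map is essentially a typed lookup from variant name to class string. A dependency-free sketch of the idea — the class strings here are hypothetical, not Zentauri's actual styles:

```typescript
// Dependency-free sketch of a CVA-style variant map (hypothetical classes)
const buttonVariants = {
  sky: 'bg-sky-500 text-white hover:bg-sky-600',
  'gradient-purple': 'bg-gradient-to-r from-purple-500 to-indigo-500 text-white',
  outline: 'border border-zinc-300 text-zinc-900 hover:bg-zinc-50',
} as const

// The variant prop type is derived from the map, so a typo like
// buttonClass('skye') is a compile error, not a silently unstyled button
type ButtonVariant = keyof typeof buttonVariants

function buttonClass(variant: ButtonVariant): string {
  return `inline-flex items-center rounded-md px-4 py-2 ${buttonVariants[variant]}`
}
```

Deriving the prop type from the variant map is what makes refactors safe: add or rename a variant in one place and every call site is re-checked.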

Animated modal

import { Button } from '@zentauri-ui/zentauri-components/ui/button'
import { Modal } from '@zentauri-ui/zentauri-components/ui/modal'
import { useState } from 'react'

export default function Demo() {
  const [open, setOpen] = useState(false)

  return (
    <>
      <Button onClick={() => setOpen(true)}>Open dialog</Button>
      <Modal open={open} onClose={() => setOpen(false)}>
        <Modal.Title>Confirm action</Modal.Title>
        <Modal.Body>Are you sure you want to proceed?</Modal.Body>
        <Modal.Footer>
          <Button variant="solid" onClick={() => setOpen(false)}>Confirm</Button>
        </Modal.Footer>
      </Modal>
    </>
  )
}

The entry/exit animation is Framer Motion under the hood — no extra setup needed.

Headless hooks

import { useState, useEffect } from 'react'
import { useDebouncedValue } from '@zentauri-ui/zentauri-components/hooks'

function Search() {
  const [query, setQuery] = useState('')
  const debouncedQuery = useDebouncedValue(query, 400)

  useEffect(() => {
    if (debouncedQuery) fetchResults(debouncedQuery)
  }, [debouncedQuery])

  return <input value={query} onChange={e => setQuery(e.target.value)} />
}

All hooks ship from the same package alongside the UI components — no separate install.

Path-level imports for lean bundles

You don't have to import everything at once:

// Import only what you need
import { Button } from '@zentauri-ui/zentauri-components/ui/button'
import { Modal } from '@zentauri-ui/zentauri-components/ui/modal'
import { useLocalStorage } from '@zentauri-ui/zentauri-components/hooks'

The decisions that shaped the library

1. Tailwind v4 from day one

Most libraries are built on v3. I went all-in on v4, which means native CSS cascade layers, the new @source scanning, and a cleaner config. It's a bet — but v4 is the direction the ecosystem is heading, and building on it now means the library won't need a painful migration later.

2. Framer Motion as a first-class citizen

A lot of libraries treat animation as an afterthought: "add your own transitions." I wanted animated modals, tab switches, and toasts to work immediately without wiring anything up. Framer Motion backs those animated entry points out of the box.

The tradeoff here is bundle size, but if you're not using the animated variants, Framer Motion never ends up in your bundle.

3. Hooks alongside UI in one package

Most UI libraries are UI-only. You reach for a separate hooks library for useClickOutside or useMediaQuery. In Zentauri, they live together because they're used together. Having them in the same package means consistent TypeScript types and one less dependency to manage.

4. Preview site that mirrors real imports

Every component has its own preview route at zentauri-ui.vercel.app/preview/components/[name]. The code you see in the preview is the exact import path you'd use in production — no divergence between docs and reality.

What the stack looks like

| Layer | Choice | Why |
| --- | --- | --- |
| Styles | Tailwind CSS v4 + CVA | Variant safety, v4-forward |
| Animation | Framer Motion | Mature, composable, performant |
| Icons | react-icons | Consistent iconography in examples |
| Types | TypeScript throughout | Refactor confidence |
| Docs | Next.js App Router | Drops straight into app/ routes |
| Distribution | npm + optional CLI | Familiar tooling |

Compared to the alternatives

I tried to be honest about this on the landing page, and I'll be honest here too:

Use Zentauri if:

  • You're building with Tailwind v4
  • You want motion-ready components without extra configuration
  • You want UI + hooks from one package
  • You care about TypeScript variant safety

Use shadcn/ui if:

  • You want full ownership and copy-paste into your repo
  • You're on Tailwind v3 and don't want to migrate
  • You don't need animated overlays

Use Radix if:

  • You're building a design system from scratch and want unstyled primitives

What's next

The library is live and published, but it's still early. The things I'm actively working on:

  • Accessibility audit and ARIA notes per component
  • More components: DatePicker, Combobox, DataTable, Skeleton
  • Storybook integration for isolated component docs
  • Theming / design token guide for teams who want to brand it

Try it

I'd love feedback — especially on the component API design and anything that feels inconsistent or missing. Drop a comment below or open an issue on GitHub.

If this helped or you find the library useful, a ⭐ on GitHub goes a long way for an indie open-source project.

You can contact me at:

Instagram - https://www.instagram.com/supremacism__shubh/
LinkedIn - https://www.linkedin.com/in/shubham-tiwari-b7544b193/
Email - [email protected]

You can support my work with a donation at the link below. Thank you 👇👇
https://www.buymeacoffee.com/waaduheck

Also check these posts as well