
I got tired of reading AI-generated Markdown in VS Code, so I built a dedicated reader

2026-02-16 02:27:32

If you use AI tools like Claude, ChatGPT, or Copilot regularly, you probably have a growing pile of Markdown files — design docs, API specs, architecture notes, explanations, meeting summaries. I do too.

And reading them has always been a pain.

The problem

VS Code's Markdown preview splits your workspace in half. Browser-based renderers don't watch files. And most Markdown apps are built for writing — they ship with editors, toolbars, file managers, and cloud sync I didn't need.

I didn't want to edit Markdown. I just wanted to read it.

Point at a file. See it beautifully rendered. Have it update live when the file changes. That's it.

Nothing I found did just that, so I built it.

What I built

MEVA is a small native desktop app focused entirely on reading Markdown. Here's what it does:

  • Watches files in real time — when an AI tool streams output to a .md file, you see the rendered result building live
  • Renders LaTeX, Mermaid diagrams, and syntax-highlighted code blocks natively
  • Works fully offline — no accounts, no cloud sync, no tracking
  • Under 15MB — because a Markdown viewer shouldn't be a 200MB Electron app

The tech journey

I started with Electron. It worked, but the bundle was 150MB+ for what is essentially a file viewer. That felt wrong.

I switched to Tauri (Rust + native WebView), which got the app under 15MB while keeping it cross-platform on Mac, Windows, and Linux. Tauri gives you a native app shell without bundling an entire Chromium browser — which is exactly what a lightweight tool like this needs.

For rendering, I use:

  • markdown-it as the core parser
  • KaTeX for LaTeX math rendering
  • Mermaid for diagram rendering
  • Shiki for syntax-highlighted code blocks
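
Wiring these together is mostly glue code. Here's a rough sketch of the markdown-it side (illustrative shape only, not the app's actual source; Shiki's async highlighting is elided):

import MarkdownIt from "markdown-it";

const md = new MarkdownIt({
  html: false,
  linkify: true,
  // markdown-it calls this for every fenced code block. Shiki's
  // codeToHtml is async in recent versions, so a real integration
  // pre-highlights or caches; returning "" falls back to plain <pre><code>.
  highlight(code, lang) {
    return "";
  },
});

// KaTeX and Mermaid handling typically hook in via plugins or custom fence rules
const html = md.render("# Hello\n\nSome **Markdown**.");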

The trickiest part was live file watching. I needed it to feel instant without hammering the filesystem. I ended up using debounced native file system events through Rust's notify crate, piped to the frontend via Tauri's event system. It picks up changes within milliseconds.
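
On the WebView side, the receiving end looks roughly like this (the event name and the rerender function are illustrative, not the app's real ones):

import { listen } from "@tauri-apps/api/event";

// The Rust side emits a debounced event whenever the watched file changes.
// "file-changed" is an illustrative event name.
const unlisten = await listen("file-changed", (event) => {
  rerender(event.payload); // rerender() is a stand-in for the render pipeline
});

// Call unlisten() on teardown to stop receiving events.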

Getting Mermaid to re-render cleanly on live file changes without flickering was another challenge. I had to diff the diagram source and only re-render blocks that actually changed — otherwise you'd see a flash every time the file updated.
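
The fix is simple to state, even if it was fiddly to get right: remember each diagram's last source and only re-render blocks whose source changed. A simplified sketch against Mermaid's v10+ promise-based render API (the slot selector is illustrative):

import mermaid from "mermaid";

mermaid.initialize({ startOnLoad: false });

const lastSource = new Map(); // block index -> last-seen diagram source

async function renderDiagrams(blocks) {
  for (const [i, source] of blocks.entries()) {
    // Skip blocks whose source is unchanged; this is what avoids the flash
    if (lastSource.get(i) === source) continue;
    lastSource.set(i, source);

    const { svg } = await mermaid.render(`diagram-${i}`, source);
    document.querySelectorAll(".mermaid-slot")[i].innerHTML = svg;
  }
}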

How I actually use it

My daily workflow looks like this:

  1. I ask Claude or ChatGPT to generate a design doc, analysis, or explanation
  2. Save the output as a .md file (or the tool writes directly to disk)
  3. MEVA picks it up instantly and renders it

The real magic is with tools like Claude Code that stream output directly to files. I have MEVA open in a separate window and watch the rendered document build in real time as the AI writes. It's a much better reading experience than watching raw Markdown scroll by in a terminal.

It's become my default way to read any Markdown file — AI-generated or otherwise.

What I learned building this

A few takeaways from the process:

Tauri is impressive for small tools. If you're building something that doesn't need a full browser engine, the size difference vs Electron is dramatic (15MB vs 150MB+). The Rust layer is fast and the IPC between Rust and the WebView is clean.

"Do one thing well" still works. Every Markdown app I tried wanted to be an editor. By only solving the reading problem, I could make the UX much simpler — no sidebar, no file tree, no toolbar. Just the rendered document.

File watching is deceptively hard to get right. Between debouncing, platform differences, and avoiding unnecessary re-renders, it took more iteration than I expected to make it feel seamless.

Try it out

MEVA is available for Mac, Windows, and Linux: https://usemeva.com/#download

There's a free version with all core features, and an optional paid version that adds multiple tabs, themes, and a few extras to support continued development.

I'd genuinely love feedback — especially on:

  • What feels unnecessary or bloated?
  • What's missing that you'd want in a Markdown reader?
  • How does it fit (or not fit) into your workflow?

If you work with AI-generated Markdown regularly, give it a try and let me know what you think.

The Codex App: A New Era in Autonomous AI Coding

2026-02-16 02:17:15

AI coding assistants are not new. Autocomplete, inline suggestions, and quick refactors have been standard for years.

The OpenAI Codex app is different.

It does not just suggest code. It executes development work as an autonomous agent operating inside controlled environments. That distinction shifts the conversation from “AI helper” to “AI execution layer.”

This post breaks down what the Codex app actually represents, how it differs from traditional AI coding tools, and what it means for serious development workflows.

What the Codex App Actually Is

The OpenAI Codex app is a dedicated AI-driven coding environment built around autonomous agents. Instead of prompting for isolated snippets, you define structured objectives.

An agent can:

  • Analyze a repository
  • Decompose a high-level requirement into tasks
  • Implement changes across multiple files
  • Run validation and test suites
  • Report progress in structured summaries
  • Adjust behavior based on feedback

The interaction model changes from prompt → response to assign → supervise → review.

That’s a meaningful architectural shift.

From Reactive Coding to Task Delegation

Most AI coding tools are reactive. You type something. The model responds. The context window defines the boundary.

The Codex app introduces continuity. Once assigned a task, the agent maintains context across execution stages. It does not forget the objective after generating one block of code.

Instead of asking:

“Write a function to validate tokens.”

You can assign:

“Implement authentication across the project, add token validation, update middleware, and ensure compatibility with existing sessions.”

The agent plans, executes, and reports.

Developers move from micro-instruction to structured delegation.

Parallel Agent Execution

One of the most interesting capabilities is multi-agent orchestration.

Different agents can handle separate workstreams:

  • Feature implementation
  • Bug triage
  • Test generation
  • Documentation updates
  • Refactoring

Each operates in isolation, reducing risk to the main codebase.

This introduces parallel development capacity without increasing headcount.

The practical impact is cycle-time compression.

Context-Aware Repository Understanding

A core limitation of many AI coding tools is context fragmentation. Every interaction feels isolated.

Codex agents are designed to operate at the repository level rather than the snippet level. They understand project structure, dependencies, naming conventions, and architectural patterns.

This enables higher-level execution such as:

  • Cross-module refactoring
  • System-wide modernization
  • Consistent test expansion
  • Dependency-aware updates

That is not autocomplete. That is structured execution.

Where This Becomes Powerful

The Codex app becomes most valuable in scenarios such as:

Large-Scale Refactoring

Legacy systems can be modernized systematically rather than manually rewriting components one at a time.

Feature Implementation from Spec

High-level feature requirements can be translated into structured development tasks.

CI Support

Agents can monitor test failures, suggest patches, and improve coverage automatically.

Multi-Repository Coordination

Organizations managing microservices can execute aligned updates across repositories in parallel.

This is where autonomous execution changes the economics of development.

Governance Still Matters

Autonomous execution does not eliminate the need for oversight.

If anything, governance becomes more important.

Teams should:

  • Define boundaries for agent authority
  • Require structured review before merging
  • Log and audit agent-generated changes
  • Start with lower-risk repositories
  • Standardize task definitions

Autonomy without discipline introduces risk. Supervised autonomy increases leverage.

Is This the Future of Development?

The Codex app reflects a broader shift in AI tooling.

We are moving from systems that help write code toward systems that execute defined engineering objectives.

That changes the role of developers.

Instead of manually implementing every detail, engineers define architecture, constraints, and quality thresholds while delegating structured work to AI agents.

Execution becomes partially automated.
Oversight remains human.

This is not about replacing developers.
It is about amplifying throughput.

Final Thoughts

The OpenAI Codex app is not just another AI coding assistant.

It represents the transition from suggestion-based tooling to agent-driven software execution.

If implemented with discipline, it can reduce repetitive engineering effort, accelerate feature delivery, and enable parallel workflows that were previously limited by human bandwidth.

We are likely at the beginning of a new phase in software engineering: supervised autonomous development.

The question is not whether this model will evolve.

The question is how teams will structure governance around it.

How to design a Simple URL Shortener (TinyURL)

2026-02-16 02:16:36

TinyURL is often called the “Hello World” of system design because it has minimal requirements but forces us to think about scalability, caching, ID generation, and bottlenecks.

Let’s design it step by step.

Functional Requirements:

  • Convert a Long URL → Short URL
  • Redirect Short URL → Long URL

Non-Functional Requirements:

  • High Availability
  • Low Latency
  • Scalable under heavy traffic

API Endpoints:

  • POST /shorten → Accepts Long URL
  • GET /{shortId} → Redirects to Long URL
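
A minimal sketch of those two endpoints (Express and the field names are illustrative; the in-memory Map stands in for the real store, and ID generation is covered later in this post):

import express from "express";

const app = express();
app.use(express.json());

const db = new Map(); // stand-in for the real key-value store
let nextId = 1;       // toy ID source (see the ID-generation section below)

// POST /shorten → accepts a long URL, returns the short one
app.post("/shorten", (req, res) => {
  const shortId = (nextId++).toString(36); // base36 here; Base62 is sketched later
  db.set(shortId, req.body.longUrl);
  res.json({ shortUrl: `https://tiny.example/${shortId}` });
});

// GET /{shortId} → redirects to the long URL
app.get("/:shortId", (req, res) => {
  const longUrl = db.get(req.params.shortId); // cache-first in a real deployment
  if (!longUrl) return res.status(404).send("Not found");
  res.redirect(301, longUrl);
});

app.listen(3000);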

High-Level Design:

User → Load Balancer → App Servers → Cache → Sharded Database

This is a simple and scalable design.

Since we require low latency, we introduce a cache layer to store frequently accessed short URLs. Most read requests will be served directly from cache, reducing database load.
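
This read path is the classic cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. A sketch (ioredis and the TTL are illustrative; dbLookup is a placeholder):

import Redis from "ioredis";

const cache = new Redis(); // assumes a local Redis instance

// Placeholder for querying the sharded database shown above
async function dbLookup(shortId) {
  return null;
}

async function resolveShortId(shortId) {
  // 1. Most reads are served straight from cache
  const cached = await cache.get(shortId);
  if (cached) return cached;

  // 2. Cache miss: fall back to the database
  const longUrl = await dbLookup(shortId);
  if (!longUrl) return null;

  // 3. Populate the cache with a TTL so hot entries stay warm
  await cache.set(shortId, longUrl, "EX", 3600);
  return longUrl;
}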

To ensure high availability, we avoid single points of failure. App servers are scaled horizontally and placed behind a load balancer, which distributes incoming traffic evenly.

Because our system only needs to store simple mappings:
short_url → long_url
we can use either:

  • A Key-Value database (natural fit for simple mapping)
  • Or an SQL database if additional analytics or constraints are required

This covers the basic design derived from requirements.

But now comes the interesting part.

How Short Should the URL Be?

We want to convert long URLs into short ones. But how short should they be?

Assume:

  • K new URLs are generated every second
  • We store URLs for 10 years

Total URLs required:

K × 60 × 60 × 24 × 365 × 10 ≈ K × 3.15 × 10^8

If our short URL can use:

  • 26 lowercase letters (a–z)
  • 26 uppercase letters (A–Z)
  • 10 digits (0–9)

That gives us 62 possible characters.
To determine the required length, solve:

62^n ≥ K × 60 × 60 × 24 × 365 × 10

where n is the length of the short URL. If n = 7:

62^7 ≈ 3.5 trillion combinations

For K = 1,000 new URLs per second, ten years of storage needs roughly 3.15 × 10^11 IDs, comfortably within 62^7. A 7-character short URL is therefore sufficient for large-scale systems.
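
A numeric ID becomes a short string by encoding it against this 62-character alphabet. A minimal sketch in JavaScript:

const ALPHABET =
  "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

function toBase62(n) {
  if (n === 0) return "0";
  let out = "";
  while (n > 0) {
    out = ALPHABET[n % 62] + out; // take the lowest base-62 digit
    n = Math.floor(n / 62);
  }
  return out;
}

toBase62(125); // "21", because 125 = 2 × 62 + 1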

Bottlenecks

Hot Key Problem (Read Bottleneck)

Suppose the application becomes popular and millions of users request the same short URL simultaneously.

Where would the system collapse first?

The cache.

When many users access the same key, we face a hot key problem. Horizontal scaling alone does not solve this because the same key may map to the same cache node.

Solution:

  • Use cache replicas
  • Introduce a CDN layer
  • Distribute read load across multiple cache nodes

Write Bottleneck (Database)

Now assume we receive a large number of write requests (URL creation). Writes typically go to the primary database node.

Where will the bottleneck occur?

The database.

Since every new short URL requires a write operation, database throughput becomes the limiting factor.

Solution: shard the database.

However, simple modulo-based sharding can cause problems when adding new shards because it requires massive data redistribution. A better approach is consistent hashing, which minimizes data movement when scaling (a toy sketch follows).
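
To make the idea concrete, here is a toy hash ring in JavaScript: each shard gets a position on the ring, a key routes to the first shard clockwise from its hash, and adding a shard only moves the keys between it and its predecessor. (Real implementations add virtual nodes for better balance.)

// Toy consistent-hash ring: no virtual nodes, simple string hash
function hash(s) {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}

class HashRing {
  constructor(shards) {
    // Place each shard on the ring, sorted by its hash position
    this.ring = shards
      .map((name) => ({ name, pos: hash(name) }))
      .sort((a, b) => a.pos - b.pos);
  }

  shardFor(key) {
    const h = hash(key);
    // First shard at or past the key's position, wrapping to the start
    const node = this.ring.find((n) => n.pos >= h) ?? this.ring[0];
    return node.name;
  }
}

const ring = new HashRing(["db-0", "db-1", "db-2"]);
ring.shardFor("abc1234"); // stable routing; adding "db-3" remaps only one arc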

ID Collision Problem

Since app servers are horizontally scaled, two servers might generate the same short URL.

How do we prevent collisions?

Possible approaches:

  • Random Base62 generation + collision check
  • Centralized ID generator
  • Distributed ID service
  • Using a Redis atomic counter (e.g., INCR), sketched below
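
The Redis counter is the easiest of these to sketch. INCR is atomic, so horizontally scaled app servers can never mint the same number, and the result is Base62-encoded into the short ID. (A sketch using ioredis; the key name is illustrative.)

import Redis from "ioredis";

const redis = new Redis(); // assumes a shared Redis instance

async function createShortId() {
  // INCR is atomic: concurrent app servers always get distinct numbers
  const id = await redis.incr("url:next_id"); // key name is illustrative
  return toBase62(id); // toBase62 from the earlier sketch
}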

Final Thought

TinyURL may look simple, but it teaches us:

  • Scalability
  • Caching strategies
  • Sharding techniques
  • Bottleneck analysis
  • ID generation trade-offs

That’s why it’s called the “Hello World” of System Design. Let's meet again with another interesting design.

KAIzen — What Agile Needs for the AI Era

2026-02-16 02:13:41

How a small team at a gaming company went from 32% flow efficiency to 85% — by changing what we gave the AI

Our team was running Scrum by the book. Two-week sprints. Grooming. Planning poker. Retros. By every conventional measure, we were doing Agile correctly.

Then I measured our flow efficiency — the ratio of active work time to total elapsed time — and it was 32%. For every hour on the clock, we were actively working for about 19 minutes. The rest was waiting. Waiting for grooming. Waiting for clarification. Waiting for review. Waiting to align on what the story actually meant.

Industry average for software teams is 15-25%. We were above average. But "above average at wasting time" isn't a metric anyone puts on a slide.

What made it worse was that we'd started using AI coding assistants. The promise was faster delivery. The reality was faster code generation — but the code was often wrong, because the input was vague. A user story that says "As a user, I want to receive rewards so that I feel valued" gives a human enough context to ask smart questions. It gives an AI enough context to hallucinate confidently.

AI didn't just speed up coding. It moved the bottleneck. The bottleneck was no longer "how fast can we write code?" It became "how precisely can we define what we want?" And our entire process was optimized for a world where humans were the bottleneck. That world was gone.

I should be honest: I didn't call what followed "KAIzen" at the time. We didn't have names for any of it. We just started changing how we worked. The vocabulary in this post — Blueprint, Runbook — came later, to make the patterns shareable. The work was real. The naming is an afterthought.

The Inspiration — And Why I Needed Something Different

I wasn't starting from zero. Amazon's AI-DLC (AI-Driven Development Lifecycle) was a major inspiration. AWS had shown that spec-driven, AI-augmented development could work at scale. But when I looked at applying it to my team, the cost was high: the AI-DLC replaces your entire development process. New phases, new roles, new artifacts, new way of working from the ground up.

We didn't have that luxury. We were mid-sprint, mid-quarter, mid-delivery. I needed something that could plug into our existing process — not replace it. Where the AI-DLC asks you to change everything, I wanted to change one thing: the quality of our input to AI. Keep our sprints, keep our board, add a layer on top.

I now call this approach KAIzen, from kaizen (改善) — the Japanese philosophy of continuous improvement. Small changes, led by the people who do the work. KAIzen applies that principle with AI as the lever. Not a new methodology. Not a process overhaul. A layer you add on top of whatever Agile process you already run.

Specification as the Primary Lever

The turning point was small. Instead of writing a user story, I wrote a detailed engineering spec for a feature — inputs, outputs, edge cases, constraints, acceptance criteria. I fed it to our AI assistant and the generated code was review-ready on the first pass.

The previous feature — similar complexity, described as a user story — had taken three rounds of review, two Slack threads, and a sync meeting. Same AI. Same team. The difference was entirely in the input.

The spec is the product now. Not the code. The quality of your specification determines the quality of everything that follows.

I call this a Blueprint — a structured spec precise enough for AI to build against. For complex work, you also need a Runbook — an ordered implementation plan derived from the Blueprint. For a small fix, a lightweight Blueprint is enough.

Here's the part that changes the adoption story: the AI agent drafts the Blueprint. The product owner gives us a feature brief — goals, context, user needs. We feed that brief to our GitHub Copilot custom agent (we call it SpecKit), and it generates a first draft of the Blueprint: inputs, outputs, edge cases, constraints, acceptance criteria.

But the draft isn't the artifact — the reviewed Blueprint is. A developer still spends real time reviewing and refining it, often up to two hours for complex features. That investment is the point. A precise Blueprint is what makes the Runbook coherent and the AI-generated code review-ready. The agent removes the blank-page problem and gets you 70% of the way there. The developer's judgment closes the last 30% — and that's where the quality lives.

Over time, something unexpected happened. Our product owner started using the same agent to write the feature brief itself — structuring it so the downstream Blueprint would be cleaner. The whole chain tightened: better brief → better Blueprint → better Runbook → better AI-generated code → fewer review cycles. The agent didn't just help developers. It pulled the entire team toward precision.

What Dissolved

We didn't decide to stop doing Scrum. We just started writing Blueprints inside our sprints. But things dissolved on their own. Grooming became redundant — the Blueprint already answered every question grooming was designed to surface. Estimation stopped making sense — spec-driven work is inherently scoped. Sprint planning became just prioritization: "which Blueprints next?"

We didn't switch to Kanban. We just stopped needing the ceremonies that were solving problems the Blueprint solved better. What survived: prioritization, standups, retros. Whether you call the outer loop Scrum or Kanban stops mattering. The inner loop — spec-first, AI-augmented — is what drives results.

This is the core difference from the AI-DLC approach: we didn't need anyone's permission to start. No process overhaul. No new roles. No org-wide buy-in. One team, one Blueprint, one sprint. The layer proved itself through results, not a proposal deck.

The Numbers

Three epics, same area, similar complexity. Flow efficiency: 32% → 47% → 85%. Cycle time: 36 days → 36 days → 13 days. The active work time barely changed. What collapsed was the waiting — grooming, clarification, alignment overhead that was invisible inside sprint velocity.

The caveats: the three epics weren't identical in scope, the team was small, and I was coaching directly. I'd rather you hear those limitations from me. Three data points isn't proof. It's a signal worth investigating.

What I Learned

The Blueprint is the new bottleneck — but it's a better bottleneck. With SpecKit drafting the first pass, the blank-page problem is gone. But the review still takes real time, and it should — that's where engineering judgment lives. The developer's job shifts from "write the spec from scratch" to "validate and sharpen the spec," which is a better use of their expertise.

Not everyone wants to write specs. Resistance collapses after one demonstration. Show a developer AI output from a vague story next to AI output from a good Blueprint. After that, most people write the spec — not because of a process argument, but because it makes their afternoon easier.

This is kaizen — continuous improvement, from the ground up. We changed one thing, measured what happened, and kept improving.

But there's a limit to what one team can achieve alone. Our flow efficiency hit 85% within our area. Then we got an initiative spanning Gaming, Rewards, and Sportsbook — and suddenly our speed didn't matter. Blocked by another team's API. Debating event schemas in Slack. Sitting in alignment meetings where six people discussed what two could have decided in a DM.

One team improving means nothing if features get stuck at the boundary. That's Part 2.

Part 2: "KAIzen Across Boundaries" — coming next week.

Want to try it today? Pick your next feature. Feed your product brief to an AI assistant and ask it to generate a spec — inputs, outputs, edge cases, constraints, acceptance criteria. Refine it. Build against it. Measure your flow efficiency before and after. One spec. See what happens.

#ai #agile #softwareengineering #productivity

Adobe Animate Is on Life Support. Where Do Web Animations Go Now?

2026-02-16 02:12:15

On February 2, 2026, Adobe emailed Animate customers telling them the software would be discontinued on March 1. Within 24 hours, after massive backlash — including a 1.9K-upvote thread on r/technology and outrage from professional animators whose careers depend on the tool — Adobe reversed course. Animate is now in "maintenance mode": still available, still getting bug fixes, but no new features. Ever.

If that sounds like a reprieve, read the room. The Animate community isn't celebrating. They're planning their exit. As one user on r/adobeanimate put it: "This is your chance to jump ship with a bit extra time. Animate is GONE, its burial was just postponed."

They're right. Maintenance mode is where software goes to die slowly. And Adobe couldn't even recommend a replacement that covers what Animate does — they pointed users to After Effects' Puppet tool and Adobe Express, neither of which comes close to feature parity. The community response has been brutal and, frankly, fair.

What Adobe Suggests (And Why It Falls Short)

Adobe's official guidance steers Animate users toward two products: After Effects (specifically its Puppet tool for character animation) and Adobe Express for simpler motion graphics.

The problem is straightforward. After Effects is a compositing and motion graphics tool. Its Puppet tool can deform layers, but it was never designed as a frame-by-frame animation environment. There is no timeline with traditional keyframe-by-keyframe drawing. There is no symbol library system. There is no equivalent to Animate's bone tool or shape tweening engine. If your workflow involved drawing frames, rigging characters with bones, and publishing interactive content — After Effects does not do that. It does adjacent things.

Adobe Express is even further from the mark. It is a Canva competitor aimed at social media templates. Suggesting it as an Animate replacement is like telling a carpenter to use a Swiss Army knife because it also has a blade.

The honest read: Adobe does not have an Animate replacement. They have other products. Those products are good at what they do. But what they do is not what Animate did.

Every Alternative, Honestly Reviewed

I spent the last several weeks testing every serious Adobe Animate alternative I could find. Here is what I learned, organized by use case, because the right tool depends entirely on what you were actually using Animate for.

For Professional 2D Animation

Toon Boom Harmony — $27-85/month

If you are searching for a direct Adobe Animate alternative for professional 2D work, this is it. Toon Boom is already the industry standard at major studios — it powers productions at Cartoon Network, Netflix, and Disney Television Animation. The drawing tools are excellent. The rigging system (with deformers and inverse kinematics) is more powerful than what Animate offered. It handles both traditional frame-by-frame and cut-out rigging workflows.

The catch: the learning curve is genuinely steep. Harmony's interface is dense and assumes professional-level knowledge. If you are coming from Animate, expect a solid month of retraining before you are productive. Pricing ranges from $27/month (Essentials) to $85/month (Premium), which is reasonable for studios but adds up for freelancers. If your livelihood depends on 2D animation, this is probably where you end up.

OpenToonz — Free, Open Source

OpenToonz deserves more attention than it gets. It is the open-source version of the Toonz software that Studio Ghibli used for productions like Spirited Away and Princess Mononoke. It supports vector and raster drawing, traditional animation workflows, and has a surprisingly capable effects system.

The UI is not as polished as commercial tools — it looks like software that was open-sourced from a professional studio environment rather than designed for mass adoption. But it is genuinely capable and it costs nothing. For small studios on tight budgets or independent animators who need professional output without monthly fees, OpenToonz is a legitimate option.

Synfig Studio — Free, Open Source

Synfig is simpler and more approachable than OpenToonz. It leans heavily on vector-based tweening and bone rigging rather than frame-by-frame drawing. If you are a hobbyist or indie creator who primarily used Animate's tweening features (rather than hand-drawing every frame), Synfig will feel more familiar. The community is active, documentation is decent, and it runs on Windows, Mac, and Linux.

For Interactive and Complex App Animation

Rive — Free tier, $18/month for export features

Rive is the tool I would recommend most enthusiastically for anyone building interactive animations for apps and websites. It is purpose-built for the problem. The state machine system lets you create animations that respond to user input, data changes, and application logic — not just play from start to finish. GPU-accelerated rendering. Lightweight runtimes for iOS, Android, Flutter, React, and web.

Companies like Spotify, Notion, and Dropbox use Rive in production. The tool itself is excellent.

The friction point: Rive's free tier lets you design and preview animations, but exporting requires the $18/month Teams plan. This paywall has generated real backlash in the community, and it is a fair criticism. You can learn the tool for free but you cannot ship with it for free. If you are evaluating Rive, go in knowing that. For teams building interactive product animations, $18/month is easy to justify. For solo developers experimenting, it stings.

LottieLab / Magic Animator — Free during beta

LottieLab takes a different approach. Their Magic Animator feature generates four animation variations from a Figma design file in a single click. You import your static design, Magic Animator suggests how to animate it, and you refine from there. It exports to Lottie, GIF, and MP4.

The requirement for a Figma file as input means this is specifically for designers already in the Figma ecosystem. If that is you, it is worth trying — the results are surprisingly good for a beta product, and the team has been a genuinely good actor in the animation ecosystem. It will not replace Animate's full feature set, but for getting static designs moving quickly, it solves a real problem.

For the After Effects to Lottie Pipeline

After Effects + Bodymovin — $23/month (Creative Cloud)

The AE-to-Lottie pipeline via the Bodymovin plugin is not going anywhere. It is the most established path for creating Lottie animations and it works. If you already pay for Creative Cloud and know After Effects, this remains a viable workflow.

But the pain points are well-documented and persistent. Exported Lottie files frequently balloon to 10-20MB because of how Bodymovin translates AE's layer structure. Rendering inconsistencies between the AE preview and actual Lottie playback send animators into days-long debugging spirals. The lottie-web GitHub repository has 778 open issues as of this writing. Not all are bugs — many are feature requests and questions — but the number reflects the friction in this pipeline.

After Effects is a $23/month commitment (at minimum, via Creative Cloud). For studios already invested in the Adobe ecosystem, that is fine. For a developer who needs a loading spinner, it is like renting a bulldozer to plant a flower.

For Simple App Micro-Interactions

MotionPrompt — Free (10 generations/day), launching March 2026

Full disclosure: I am building this. I started building it after spending two frustrating weeks trying to add a simple staircase animation to a hearing rehab app I am working on. Rive was overkill, After Effects was expensive, and I just needed a file. MotionPrompt generates Lottie animation files from text descriptions. You describe what you want — "a green checkmark that draws itself and bounces" — and it generates a production-ready .lottie file. No After Effects. No Rive. No design tools at all.

It is best for the animations that developers actually need most often: loading spinners, success checkmarks, notification pulses, onboarding hints, progress indicators, toggle state changes. The kind of micro-interactions that make an app feel alive but do not justify learning a new animation tool or paying for a Creative Cloud subscription.

It is not for complex character animation, cinematic sequences, interactive state machines, or anything that Adobe Animate was genuinely great at. If you need those things, look at Rive or Toon Boom above. MotionPrompt solves a narrower problem: getting simple, polished app animations without the tooling overhead.

Join the waitlist to try it first at motionprompt.dev.

The Bigger Picture

Here is the thing that makes Adobe's decision so frustrating: demand for web animations is not declining. It is accelerating.

Lottie — the open animation format created by Airbnb's engineering team — now gets 4.3 million npm downloads per week. That is an all-time high, up 75% year-over-year. Every major app ships animations. Every design system includes motion guidelines. Animation is not a nice-to-have anymore; it is a baseline expectation for any product that wants to feel modern.

But creating a Lottie file is still unreasonably painful. After Effects is overkill (and expensive) for a loading spinner. Rive paywalls its export feature. LottieFiles — which should be the community hub for the format — suffered a serious npm supply chain attack that shook trust in the ecosystem. And now Adobe is removing yet another creation option from the market.

The supply side of web animation is getting worse while the demand side gets stronger. That gap is going to produce new tools, new workflows, and probably a few good open-source projects. Adobe killing Animate is a loss, but it is also an opening.

Quick Comparison

| Tool              | Price         | Best For                 | Learning Curve | Output           |
|-------------------|---------------|--------------------------|----------------|------------------|
| Toon Boom Harmony | $27-85/mo     | Pro 2D animation         | Steep          | Video, GIF       |
| OpenToonz         | Free          | Budget studios           | Moderate       | Video, GIF       |
| Synfig Studio     | Free          | Indie / hobbyist         | Moderate       | Video, GIF       |
| Rive              | $0-18/mo      | Interactive apps         | Moderate       | .riv, WebGL      |
| LottieLab         | Free (beta)   | Figma users              | Low            | Lottie, GIF, MP4 |
| AE + Bodymovin    | $23/mo        | Motion pros              | Steep          | Lottie JSON      |
| MotionPrompt      | Free (10/day) | Devs, micro-interactions | None           | .lottie, JSON    |

What To Do Right Now

If you're currently using Animate: It's not disappearing tomorrow, but maintenance mode means the clock is ticking. No new features, no investment, eventual sunsetting. Start evaluating alternatives now while you have the luxury of time rather than a deadline.

If you are choosing a tool for interactive animations: Try Rive. I say this with no agenda — for complex interactive work with state machines, data binding, and cross-platform runtimes, it is the best option available right now. The export paywall is frustrating, but the tool earns its price for teams shipping real products.

If you need Figma designs animated: LottieLab's Magic Animator is free during beta and genuinely impressive. Import your Figma file, get four animation variations, refine the one you like. It will not replace a skilled motion designer, but for getting 80% of the way there in five minutes, it is hard to beat.

If you are a developer who just needs basic app animations: motionprompt.dev — I am building this. Text prompt in, Lottie file out. Free tier, launching March 2026. It is not for complex animation work, but if all you need is a loading spinner in your brand colors, it will save you a trip through After Effects.

If you found this useful, I would genuinely appreciate a bookmark or share. And if you are an Animate user figuring out your next move — I am sorry. It was a great tool, and the people who mastered it deserved a better send-off than "try Puppet tool."

React Refs &amp; useRef — The "Secret Backdoor" to the DOM 🚪

2026-02-16 02:09:13

Ever needed to talk directly to a DOM element in React, but felt like React was standing in your way?

That's exactly what useRef is for. Think of it as a secret backdoor that lets you reach into the actual DOM — without breaking any of React's rules.

Let's break it down so simply that you'll never forget it.

State vs. Ref — The Two-Sentence Version

  • State → Changes trigger a re-render. You update it with a setter function.
  • Ref → Changes are silent. You mutate it directly, and React doesn't even blink.

That's the core difference. Refs are like sticky notes you keep for yourself. React doesn't care what you write on them.

Creating a Ref

import { useRef } from "react";

function MyComponent() {
  const inputRef = useRef(null);

  return <input ref={inputRef} />;
}

Three things just happened:

  1. useRef(null) created an object: { current: null }
  2. We passed that object to the ref prop on the <input>
  3. React filled in inputRef.current with the actual DOM node of that input

That's it. inputRef.current is now the real, living, breathing <input> element on the page.

A Real-World Example: Auto-Scroll to New Content

Imagine you have an app where a user clicks a button, waits for data to load, and the new content appears below the fold. The user has no idea anything happened. Bad UX.

Here's how refs fix that:

import { useRef, useEffect, useState } from "react";

function RecipeApp() {
  const [recipe, setRecipe] = useState(null);
  const recipeSectionRef = useRef(null);

  async function fetchRecipe() {
    const response = await getRecipeFromAI(); // pretend API call
    setRecipe(response);
  }

  useEffect(() => {
    if (recipe && recipeSectionRef.current) {
      recipeSectionRef.current.scrollIntoView({ behavior: "smooth" });
    }
  }, [recipe]);

  return (
    <div>
      <button onClick={fetchRecipe}>Get a Recipe</button>

      {recipe && (
        <div ref={recipeSectionRef}>
          <h2>{recipe.title}</h2>
          <p>{recipe.instructions}</p>
        </div>
      )}
    </div>
  );
}

What's happening step by step:

  1. User clicks "Get a Recipe"
  2. The API returns data → state updates → React re-renders
  3. The <div> with our ref now exists in the DOM
  4. useEffect fires, sees the recipe is loaded, and calls scrollIntoView()
  5. The browser smoothly scrolls down to the recipe section

No document.getElementById. No query selectors. Just a clean ref.

"But Why Not Just Use an ID?"

Great question. You could do this:

<div id="recipe-section">...</div>

// somewhere else:
document.getElementById("recipe-section").scrollIntoView();

It works... until it doesn't. Here's the problem:

React is built around reusable components. If you render the same component twice, you get two elements with the same ID on the page. That's invalid HTML and a bug waiting to happen.

Refs avoid this entirely because they're scoped to each component instance. Two instances, two separate refs, zero conflicts.

The Mental Model Cheat Sheet

|                     | State            | Ref                                 |
|---------------------|------------------|-------------------------------------|
| Triggers re-render? | Yes              | No                                  |
| How to update       | Setter function  | Direct mutation                     |
| Common use          | UI data          | DOM access, timers, previous values |
| Shape               | Whatever you set | { current: value }                  |

Three Quick Rules to Remember

Rule 1: Refs are just boxes.
useRef(initialValue) gives you { current: initialValue }. That's the whole data structure. A box with one shelf called current.

Rule 2: Mutate freely.
Unlike state, you can do myRef.current = "whatever" and React won't complain or re-render.

Rule 3: The ref prop is magic — but only on native elements.
When you write <div ref={myRef}>, React automatically fills myRef.current with that DOM node. But if you write <MyComponent ref={myRef}>, you're just passing a regular prop called "ref" (unless you use forwardRef, which is a story for another day).

TL;DR

  • useRef creates a persistent mutable container: { current: value }
  • Changing .current does not cause a re-render
  • Attach it to a DOM element via the ref prop to get direct access to that node
  • Perfect for things like scrolling, focusing inputs, measuring elements, or storing values between renders without triggering updates (sketch below)
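
Here's that last use case in a sketch: keeping an interval id in a ref so it survives re-renders without causing any (the component itself is illustrative):

import { useRef, useState } from "react";

function Stopwatch() {
  const [seconds, setSeconds] = useState(0);
  const intervalRef = useRef(null); // persists across renders; mutating it re-renders nothing

  function start() {
    if (intervalRef.current !== null) return; // already running
    intervalRef.current = setInterval(() => setSeconds((s) => s + 1), 1000);
  }

  function stop() {
    clearInterval(intervalRef.current);
    intervalRef.current = null;
  }

  return (
    <div>
      <p>{seconds}s</p>
      <button onClick={start}>Start</button>
      <button onClick={stop}>Stop</button>
    </div>
  );
}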

Refs are one of those tools that feel weird at first and then become second nature. Once you "get" them, you'll reach for them all the time.