RSS preview of the blog of The Practical Developer

Building the First TypeScript LTI 1.3 Library (So You Don't Have To)

2026-02-04 22:35:26

If you've ever tried to integrate with a modern LMS such as Canvas, Moodle, or Schoology, you know the pain. The LTI 1.3 specification is powerful but technically complex. It involves OAuth 2.0, OIDC, JWTs, platform-specific quirks, and documentation scattered across various locations.

I needed to build an LMS integration for a side project. When I went looking for a modern TypeScript library, I found... nothing production-ready. The options were either not serverless-friendly, incomplete, or abandoned.

So I built one.

What is LTI?

Learning Tools Interoperability (LTI) is the standard that connects external tools to learning management systems (LMS). It's what lets platforms like Kahoot, Turnitin, and thousands of other tools appear seamlessly inside Canvas or Moodle.

The LMS market is worth over $20 billion. Every major platform uses LTI: Canvas, Moodle, Blackboard, Brightspace, and Schoology. If you're building anything for education, chances are you'll eventually need it.

LTI 1.3 is the current version, built on OAuth 2.0 and OpenID Connect. It's more secure than the old OAuth 1.0-based LTI 1.1, but also significantly more complex to implement.
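To give a feel for that complexity, here is a sketch of the first leg of the flow: the OIDC third-party-initiated login redirect a tool must construct. The parameter names come from the LTI 1.3 / OIDC specs; the function name and example values are hypothetical, not any particular library's API.

```typescript
// Build the authorization redirect for LTI 1.3's OIDC login step.
// Parameter names per the spec; everything else here is illustrative.
function buildAuthRedirect(opts: {
  platformAuthUrl: string; // platform's OIDC authorization endpoint
  clientId: string;
  redirectUri: string;     // your tool's launch endpoint
  loginHint: string;       // opaque value passed through from the login request
  state: string;           // CSRF token, verified on launch
  nonce: string;           // replay protection, verified inside the id_token
}): string {
  const url = new URL(opts.platformAuthUrl);
  url.searchParams.set("scope", "openid");
  url.searchParams.set("response_type", "id_token");
  url.searchParams.set("response_mode", "form_post");
  url.searchParams.set("prompt", "none");
  url.searchParams.set("client_id", opts.clientId);
  url.searchParams.set("redirect_uri", opts.redirectUri);
  url.searchParams.set("login_hint", opts.loginHint);
  url.searchParams.set("state", opts.state);
  url.searchParams.set("nonce", opts.nonce);
  return url.toString();
}

const redirect = buildAuthRedirect({
  platformAuthUrl: "https://canvas.example.edu/api/lti/authorize_redirect",
  clientId: "10000000000001",
  redirectUri: "https://tool.example.com/lti/launch",
  loginHint: "opaque-login-hint",
  state: "random-state-token",
  nonce: "random-nonce",
});
```

And this is only step one; the tool still has to verify the signed `id_token` that comes back.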

What I Built

LTI Tool is a TypeScript-native implementation of the full IMS Global LTI 1.3 specification. Here's the core setup:

import { Hono } from 'hono';
import { LTITool } from '@lti-tool/core';
import {
  jwksRouteHandler,
  launchRouteHandler,
  loginRouteHandler,
  secureLTISession,
} from '@lti-tool/hono';
import { MemoryStorage } from '@lti-tool/memory';

const storage = new MemoryStorage();
const ltiTool = new LTITool({
  keyPair,      // RSA keypair - see docs for generation
  stateSecret,  // HMAC secret for state tokens
  storage,
});

const app = new Hono();

// Mount required LTI endpoints
app.get('/lti/jwks', jwksRouteHandler(ltiTool));
app.post('/lti/login', loginRouteHandler(ltiTool));
app.post('/lti/launch', launchRouteHandler(ltiTool));

// Protect your routes
app.use('/app/*', secureLTISession(ltiTool));

And here's a protected route that accesses the LTI session:

app.get('/app/dashboard', (c) => {
  const session = c.get('ltiSession');
  return c.json({
    user: session.user.name,
    course: session.context?.title,
    roles: session.roles,
  });
});

That's it. The library handles OIDC authentication, JWT verification, nonce validation, and all the security requirements.

Full Feature Set

  • ✅ Complete OIDC authentication flow
  • ✅ Cookieless state management (works in iframes, no third-party cookie issues)
  • ✅ Assignment and Grade Services (AGS) - submit grades, manage line items
  • ✅ Names and Role Provisioning Services (NRPS) - access course rosters
  • ✅ Deep Linking - content selection with multiple placement types
  • ✅ Dynamic Registration - zero-config tool setup
  • ✅ TypeScript-first with complete type safety
  • ✅ Serverless-optimized (AWS Lambda, Cloudflare Workers)
  • ✅ Pluggable storage (DynamoDB, Memory, MySQL, bring your own)
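The exact storage contract is defined by the library's docs; as a sketch, pluggable storage for nonces and state usually boils down to a small async key-value interface with TTL, roughly like this (hypothetical shape, not the actual `@lti-tool` interface):

```typescript
// Hypothetical storage adapter shape - check the library docs for the real one.
interface KVStorage {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string, ttlMs?: number): Promise<void>;
  delete(key: string): Promise<void>;
}

class SimpleMemoryStorage implements KVStorage {
  private store = new Map<string, { value: string; expiresAt?: number }>();

  async get(key: string) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== undefined && Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  async set(key: string, value: string, ttlMs?: number) {
    this.store.set(key, {
      value,
      expiresAt: ttlMs === undefined ? undefined : Date.now() + ttlMs,
    });
  }

  async delete(key: string) {
    this.store.delete(key);
  }
}

const storage = new SimpleMemoryStorage();
```

Backing the same interface with DynamoDB or MySQL is what makes the library serverless-friendly: nothing is held in process memory between invocations.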

The Quirks I Discovered

Building against a specification is one thing. Building against real-world platforms is another. Every LMS has quirks, parameters in unexpected places, undocumented behaviors, or subtle differences in how they interpret the spec. The library handles these so you don't have to discover them yourself.

Canvas, for example, announced that starting January 2026, they'll reject API requests without proper User-Agent headers. The library sends @lti-tool/core/1.0.0 on all service requests by default, so you're compliant out of the box.

The Security Details That Matter

LTI 1.3 security isn't optional. The library enforces:

  • JWT signature verification against platform public keys
  • Nonce validation to prevent replay attacks
  • State parameter verification for CSRF protection
  • Token expiration checks

Getting any of these wrong means your integration is insecure. Getting them right means implementing them once, correctly, and never thinking about them again.
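As an illustration of one of these pieces (not the library's actual code), cookieless CSRF protection can be built by HMAC-signing the state token, so it can be verified statelessly without third-party cookies or a server-side session:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative sketch: sign the state payload with an HMAC so any instance
// can verify it later without shared session storage. The secret here is a
// placeholder - use a long random value in practice.
const stateSecret = "replace-with-a-long-random-secret";

function signState(payload: string): string {
  const mac = createHmac("sha256", stateSecret).update(payload).digest("hex");
  return `${payload}.${mac}`;
}

function verifyState(token: string): boolean {
  const dot = token.lastIndexOf(".");
  if (dot < 0) return false;
  const payload = token.slice(0, dot);
  const mac = Buffer.from(token.slice(dot + 1), "hex");
  const expected = createHmac("sha256", stateSecret).update(payload).digest();
  // constant-time comparison prevents timing attacks on the MAC
  return mac.length === expected.length && timingSafeEqual(mac, expected);
}

const token = signState(JSON.stringify({ nonce: "n1", iat: Date.now() }));
```

A tampered payload or MAC fails verification, which is the property the CSRF check relies on.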

Real-World Testing

This isn't a spec-compliant library that's never seen production. I've tested it against live Moodle and Canvas instances.

The test suite has 50+ tests covering the core flows. But real validation came from watching grades actually appear in the LMS gradebook.

Try It

npm install @lti-tool/core @lti-tool/hono @lti-tool/memory

The library is MIT licensed. If you're building LMS integrations in Node.js, this should save you weeks of implementation time.

Links

If you find it useful, a star on GitHub helps others discover it.

If you run into issues or have questions, open an issue.

Building something with LTI Tool? I'd love to hear about it.

Bun 1.2 Deep Dive: Built-in SQLite, S3, and Why It Might Actually Replace Node.js

2026-02-04 22:31:00


Bun has been the JavaScript runtime that promises everything: faster installs, faster execution, native TypeScript support. But for most of us, it's been "cool for side projects, not ready for production."

Bun 1.2 changes that conversation.

Released in January 2025, Bun 1.2 isn't just another incremental update. It ships with built-in SQLite, a native S3 client, Postgres support, and seamless Node.js compatibility that finally passes 96% of the Node.js test suite. No npm packages. No configuration. Just import and use.

In this deep dive, we'll explore what Bun 1.2 actually offers, run real benchmarks, and determine whether it's finally time to consider Bun for your next production project.

What's Actually New in Bun 1.2

Let's cut through the hype and look at what Bun 1.2 delivers:

1. Built-in SQLite Database

SQLite is now a first-class citizen in Bun. No installation required:

import { Database } from "bun:sqlite";

const db = new Database("myapp.db");

// Create tables
db.run(`
  CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE NOT NULL,
    name TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  )
`);

// Insert data
const insert = db.prepare("INSERT INTO users (email, name) VALUES (?, ?)");
insert.run("john@example.com", "John Doe");

// Query with type safety
interface User {
  id: number;
  email: string;
  name: string;
  created_at: string;
}

const users = db.prepare("SELECT * FROM users").all() as User[];
console.log(users);

This isn't just a wrapper around better-sqlite3. It's a native implementation that's significantly faster:

Operation          | better-sqlite3 (Node.js) | Bun SQLite | Difference
INSERT 1M rows     | 4.2s                     | 1.8s       | 2.3x faster
SELECT 100K rows   | 320ms                    | 140ms      | 2.3x faster
Transaction commit | 12ms                     | 5ms        | 2.4x faster

The performance gains come from Bun's integration with JavaScriptCore and avoiding the N-API overhead that Node.js addons face.

2. Native S3 Client

Cloud storage without dependencies. Bun's S3 client works with AWS S3, R2, MinIO, and any S3-compatible service:

import { S3Client } from "bun";

const s3 = new S3Client({
  endpoint: "https://s3.amazonaws.com",
  region: "us-east-1",
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

// Upload a file
const file = Bun.file("./large-video.mp4");
await s3.write("my-bucket/videos/intro.mp4", file);

// Download a file
const downloaded = await s3.file("my-bucket/videos/intro.mp4");
await Bun.write("./downloaded.mp4", downloaded);

// Stream large files
const stream = s3.file("my-bucket/data/huge.csv").stream();
for await (const chunk of stream) {
  // Process chunk without loading entire file into memory
}

The S3 client handles multipart uploads automatically for large files and supports presigned URLs out of the box. Compare this to the AWS SDK:

// AWS SDK v3 - The old way
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fs from "fs";

const client = new S3Client({ region: "us-east-1" });
const fileStream = fs.createReadStream("./large-video.mp4");
await client.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "videos/intro.mp4",
  Body: fileStream,
}));

// Bun - The new way
const s3 = new S3Client({ /* config */ });
await s3.write("my-bucket/videos/intro.mp4", Bun.file("./large-video.mp4"));

The API surface is dramatically smaller while providing the same functionality.

3. Built-in Postgres Support

Postgres joins SQLite as a built-in database option:

import { sql } from "bun";

// Connection from environment variable (DATABASE_URL)
const users = await sql`SELECT * FROM users WHERE active = ${true}`;

// Or explicit connection
import { SQL } from "bun";

const db = new SQL({
  hostname: "localhost",
  port: 5432,
  database: "myapp",
  username: "postgres",
  password: "secret",
});

// Parameterized queries are automatic
const email = "jane@example.com";
const user = await db`SELECT * FROM users WHERE email = ${email}`;

// Transactions
await db.begin(async (tx) => {
  await tx`UPDATE accounts SET balance = balance - 100 WHERE id = 1`;
  await tx`UPDATE accounts SET balance = balance + 100 WHERE id = 2`;
});

The SQL template literal approach prevents SQL injection by design and provides excellent DX.
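To see why the template-literal style is injection-safe by design, here is a standalone sketch of how a SQL tag function separates the literal text from the interpolated values (illustrative only, not Bun's implementation):

```typescript
// A tagged template receives the literal SQL fragments separately from the
// interpolated values, so it can emit $1, $2 placeholders and ship the values
// as parameters - the hostile string never becomes part of the SQL text.
function toQuery(strings: TemplateStringsArray, ...values: unknown[]) {
  let text = strings[0];
  values.forEach((_, i) => {
    text += `$${i + 1}` + strings[i + 1]; // placeholder instead of the value
  });
  return { text, values };
}

const email = "bob@example.com'; DROP TABLE users; --";
const q = toQuery`SELECT * FROM users WHERE email = ${email}`;
// q.text:   SELECT * FROM users WHERE email = $1
// q.values: the hostile string, sent as a parameter, never as SQL
```

Because the driver only ever sees placeholders in the query text, classic string-concatenation injection is structurally impossible.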

4. Node.js Compatibility: 96% and Climbing

The biggest blocker for Bun adoption has been compatibility. Bun 1.2 now passes:

  • 96% of Node.js test suite
  • 100% of node:fs tests
  • 100% of node:path tests
  • 99% of node:crypto tests
  • 98% of node:http tests

This means most npm packages "just work." We tested several popular packages:

Package   | Status     | Notes
Express   | ✅ Works   | Full compatibility
Fastify   | ✅ Works   | Full compatibility
Prisma    | ✅ Works   | Since Bun 1.1
Next.js   | ⚠️ Partial | Dev server works, some edge cases
NestJS    | ✅ Works   | Full compatibility
Socket.io | ✅ Works   | Full compatibility

5. Windows Support (Finally)

Bun now runs natively on Windows without WSL. The installer is a single executable:

powershell -c "irm bun.sh/install.ps1 | iex"

Performance on Windows is comparable to Linux/macOS, which wasn't the case with earlier versions.

Real-World Benchmark: Building an API Server

Let's build the same API with Node.js and Bun to see real performance differences.

The Test: User CRUD API with SQLite

Bun Implementation:

// server.ts (Bun)
import { Database } from "bun:sqlite";

const db = new Database(":memory:");
db.run(`
  CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE,
    name TEXT
  )
`);

// Seed data
const insert = db.prepare("INSERT INTO users (email, name) VALUES (?, ?)");
for (let i = 0; i < 1000; i++) {
  insert.run(`user${i}@test.com`, `User ${i}`);
}

const server = Bun.serve({
  port: 3000,
  async fetch(req) {
    const url = new URL(req.url);

    if (url.pathname === "/users" && req.method === "GET") {
      const users = db.prepare("SELECT * FROM users LIMIT 100").all();
      return Response.json(users);
    }

    if (url.pathname === "/users" && req.method === "POST") {
      const body = await req.json();
      const result = db.prepare(
        "INSERT INTO users (email, name) VALUES (?, ?) RETURNING *"
      ).get(body.email, body.name);
      return Response.json(result, { status: 201 });
    }

    return new Response("Not Found", { status: 404 });
  },
});

console.log(`Server running at http://localhost:${server.port}`);

Node.js Implementation:

// server.mjs (Node.js with better-sqlite3)
import Database from "better-sqlite3";
import { createServer } from "http";

const db = new Database(":memory:");
db.exec(`
  CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE,
    name TEXT
  )
`);

const insert = db.prepare("INSERT INTO users (email, name) VALUES (?, ?)");
for (let i = 0; i < 1000; i++) {
  insert.run(`user${i}@test.com`, `User ${i}`);
}

const server = createServer(async (req, res) => {
  const url = new URL(req.url, `http://${req.headers.host}`);

  if (url.pathname === "/users" && req.method === "GET") {
    const users = db.prepare("SELECT * FROM users LIMIT 100").all();
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(users));
    return;
  }

  if (url.pathname === "/users" && req.method === "POST") {
    let body = "";
    for await (const chunk of req) body += chunk;
    const data = JSON.parse(body);
    const result = db.prepare(
      "INSERT INTO users (email, name) VALUES (?, ?) RETURNING *"
    ).get(data.email, data.name);
    res.writeHead(201, { "Content-Type": "application/json" });
    res.end(JSON.stringify(result));
    return;
  }

  res.writeHead(404);
  res.end("Not Found");
});

server.listen(3000, () => console.log("Server running at http://localhost:3000"));

Benchmark Results (M2 MacBook Pro, 10,000 requests)

GET /users (read 100 rows):

Runtime    | Requests/sec | Avg Latency | P99 Latency
Node.js 22 | 12,450       | 7.8ms       | 15ms
Bun 1.2    | 28,900       | 3.2ms       | 8ms
Difference | 2.3x faster  | 2.4x faster | 1.9x faster

POST /users (insert + return):

Runtime    | Requests/sec | Avg Latency | P99 Latency
Node.js 22 | 8,200        | 11.5ms      | 22ms
Bun 1.2    | 19,400       | 4.8ms       | 12ms
Difference | 2.4x faster  | 2.4x faster | 1.8x faster

The results are consistent: Bun is roughly 2-2.5x faster for database-backed API operations.

Startup Time Comparison

Cold start matters for serverless:

Runtime    | Startup Time | Memory at Start
Node.js 22 | 45ms         | 52MB
Bun 1.2    | 8ms          | 28MB
Difference | 5.6x faster  | 46% less

For serverless functions, this difference is significant.

When Should You Use Bun in Production?

Based on our testing, here's a realistic assessment:

✅ Good Candidates for Bun

  1. New projects with simple dependencies

    • APIs with SQLite/Postgres
    • Background workers
    • CLI tools
    • Microservices
  2. Performance-critical applications

    • High-throughput APIs
    • Real-time applications
    • Edge functions (Cloudflare Workers compatibility)
  3. Serverless functions

    • Cold start time matters
    • Memory costs are a concern
    • AWS Lambda, Cloudflare Workers

⚠️ Proceed with Caution

  1. Large Next.js applications

    • Works for most cases, but edge cases exist
    • Test thoroughly before deploying
  2. Applications with native Node.js addons

    • Some native addons may not work
    • Check compatibility first
  3. Legacy codebases with deep Node.js assumptions

    • Migration effort may not be worth it
    • Test incrementally

❌ Not Recommended (Yet)

  1. Electron applications

    • No Electron support
  2. Applications requiring node:vm

    • Limited VM support
  3. Mission-critical financial systems

    • Wait for more production battle-testing

Migration Guide: Moving from Node.js to Bun

If you decide to try Bun, here's how to migrate:

Step 1: Install Bun

curl -fsSL https://bun.sh/install | bash

Step 2: Check Compatibility

# Run your existing tests with Bun
bun test

# Check for issues
bun run your-script.ts

Step 3: Update package.json Scripts

{
  "scripts": {
    "dev": "bun run --watch src/index.ts",
    "start": "bun run src/index.ts",
    "test": "bun test"
  }
}

Step 4: Remove Unnecessary Dependencies

With Bun's built-ins, you can often remove:

  • better-sqlite3 → Use bun:sqlite
  • @aws-sdk/client-s3 → Use Bun.S3Client
  • pg → Use bun SQL
  • dotenv → Bun loads .env automatically
  • ts-node / tsx → Not needed

Step 5: Update Dockerfile

FROM oven/bun:1.2

WORKDIR /app
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile

COPY . .
EXPOSE 3000
CMD ["bun", "run", "src/index.ts"]

The Elephant in the Room: Should You Trust It?

Bun is developed by Oven, a startup. The natural question: what happens if the company folds?

Arguments for trust:

  • Open source (MIT license)
  • Large community (70k+ GitHub stars)
  • Former WebKit/Safari team members involved
  • Growing enterprise adoption

Arguments for caution:

  • Node.js has 15+ years of battle-testing
  • High key-person dependency (if Jarred Sumner leaves, major impact)
  • Some edge cases still exist

Our recommendation: Start with non-critical new projects. If things go smoothly, gradually expand.

Conclusion: The Tipping Point?

Bun 1.2 represents a significant milestone. The built-in databases, S3 client, and improved Node.js compatibility address the main blockers for adoption:

  1. Performance: 2-3x faster is real, not marketing
  2. DX: Fewer dependencies, simpler APIs
  3. Compatibility: 96% Node.js test suite passing
  4. Features: Built-in tools that would require 5+ npm packages

Is Bun ready to replace Node.js? For new projects: increasingly yes. For existing production systems: migrate carefully and incrementally.

The JavaScript runtime landscape is finally competitive again. Whether you switch to Bun or stay with Node.js, the competition is making both better.

One thing is clear: ignoring Bun is no longer an option. Try it on your next project. You might be surprised.

# Install and run your first Bun project
curl -fsSL https://bun.sh/install | bash
bun init
bun run index.ts

Welcome to the future of JavaScript runtimes.

💡 Note: This article was originally published on the Pockit Blog.

Check out Pockit.tools for 60+ free developer utilities. For faster access, add it to Chrome and use JSON Formatter & Diff Checker directly from your toolbar.

How I bypassed the Great Firewall in 2026: Active Filtering & Protocol Obfuscation

2026-02-04 22:24:18

Bypassing the Great Firewall in 2026: Active Filtering, Protocol Obfuscation, and Self-Hosting Guide

The Great Firewall (GFW) doesn't just block IP addresses anymore—it actively inspects traffic using stateful deep packet inspection (DPI). If you're relying on standard OpenVPN or WireGuard in 2026, you're already blocked.

This guide explains how the GFW works (brief technical overview), why your VPN keeps dying, and how to deploy robust self-hosted solutions like Hysteria2 and V2Ray on optimized CN2 GIA routes.

The Mechanics of Active Filtering

The GFW employs active filtering techniques to sabotage and attack any existing VPN connections. It doesn't sit "in-line" blocking every packet (that would slow down the entire country). Instead, it mirrors traffic via optical splitters at the international gateway.

When it sees a handshake it doesn't like—say, a TLS Client Hello with a suspicious SNI—it weaponizes the TCP protocol against you.

TCP Reset Injection

The GFW injects forged TCP RST (Reset) packets.

  • To You: A packet that looks like it came from the server saying "Stop."
  • To the Server: A packet that looks like it came from you saying "Stop."

Because the GFW is physically closer to you than the server is, its fake packet wins the race. Your connection dies instantly.

DNS Hijacking

For UDP traffic (DNS), the GFW uses DNS Hijacking. It detects queries for banned domains and immediately shoots back a fake DNS response with a garbage IP. Your computer accepts the first answer it gets (the fake one) and ignores the real answer that arrives milliseconds later.

Why Commercial VPNs Fail

Most big-name VPNs are huge targets.

  1. Static Signatures: Their protocols have predictable headers.
  2. Active Probing: Once the GFW suspects a server, it sends its own "probe" to your server. If your server replies like a VPN, the IP gets blacklisted.
  3. Shared IPs: Thousands of users on one IP makes traffic analysis trivial.

So by purchasing a large commercial VPN, you are essentially painting a big red target on your back.

Solution 1: Hysteria2 (The Speed King)

Hysteria2 is built on UDP (QUIC), similar to HTTP/3. It uses a custom congestion control algorithm called Brutal that ignores packet loss, making it incredibly fast even on bad lines: instead of backing off, Brutal "brute forces" packets through congested links at a fixed configured rate. I personally used Hysteria2 on trips to China in June and September 2025, and it proved reliable there.

How to Self-Host Hysteria2

You need a VPS (Virtual Private Server).

Installation (Linux):

# Download the latest release
bash <(curl -fsSL https://get.hy2.sh/)

# Configure config.yaml
listen: :443
tls:
  cert: /path/to/your.crt
  key: /path/to/your.key
auth:
  type: password
  password: "your_secure_password"
masquerade:
  type: proxy
  proxy:
    url: https://bing.com
    rewriteHost: true

Note: You need a valid domain name and some knowledge of traffic forwarding (you may need to tunnel through cloudflared, or rent a VPS from DigitalOcean for it to work out of the box).

Why Self-Host?

  • Privacy: No logs. You own the pipe.
  • IP Reputation: You aren't sharing an IP with 5,000 other people (some of whom are doing shady stuff).
  • Speed: No throttling from a provider.

Solution 2: V2Ray (The Stealth Master)

V2Ray (specifically VLESS + XTLS-Reality) is designed to look exactly like you're browsing a normal website. It "steals" the TLS handshake of a real site (like Microsoft or Apple), so to the GFW, you're just visiting a safe page.

V2Ray's transport protocol comes in two flavours: VMess and VLESS. I generally recommend VLESS because it is a newer, lighter protocol.

How to Self-Host V2Ray (using 3X-UI)

The easiest way is using the 3X-UI panel.

Installation:

bash <(curl -Ls https://raw.githubusercontent.com/mhsanaei/3x-ui/master/install.sh)

  1. Access the panel at http://YOUR_IP:2053.
  2. Go to Inbounds -> Add Inbound.
  3. Select Protocol: VLESS.
  4. Enable XTLS-Reality security.
  5. Set Dest to www.microsoft.com:443 and Server Names to www.microsoft.com.

Client Setup

For Android/iOS, use hApp or v2rayNG.

  1. Import your server string (vless:// or hysteria2://).
  2. Crucial Step: Set "Routing" to "Bypass LAN & Mainland China". This ensures your WeChat/Alipay traffic doesn't go through the VPN (which slows it down).

Another Option: Shadowsocks

Shadowsocks has also been around for a long time; it is essentially SOCKS5 with added encryption and obfuscation. While it still works in China as of this writing, it is becoming increasingly unreliable: the obfuscation has a distinctive entropy fingerprint that the GFW can detect. Using Shadowsocks exposes you to a higher risk of getting your server IP banned, though personally that has never happened to me.

The Critical Factor: Routing (CN2 GIA)

Protocol is only half the battle. If your physical route to the server is congested, no software can fix it.

CN2 GIA (China Telecom Next Gen Carrier Network - Global Internet Access) is the premium lane. It avoids the congested public backbone (163 network).

  • 163 Network: High packet loss (10-20%) during peak hours.
  • CN2 GIA: <1% packet loss, stable latency.

If you self-host, pay extra for a provider that guarantees CN2 GIA routes.

What if I don't want to self-host?

Self-hosting is fun, but hunting for clean CN2 GIA IPs and maintaining servers got annoying fast. So, I built a solution to scratch my own itch.

I developed V-Rail to automate all of this. It deploys fully optimized CN2 GIA nodes with Hysteria2 pre-configured, so you don't have to mess with config files or worry about IP blocks. I've already done the hard work for you—feel free to check it out if you'd rather spend your time coding instead of debugging routing tables.

Alternatively, you can use other VPNs such as AstrillVPN or LetsVPN, but V-Rail, built on VLESS, is the cheapest and most reliable of the three, costing only 2.49 USD/month for 50 GB of unthrottled data. One account can also be used across multiple devices.

Why We Ditched WordPress and Built 100+ Websites with Next.js

2026-02-04 22:21:12

We're a small web agency based in Prague. For years, we built client websites on WordPress – just like everyone else.
Then we switched to Next.js. After delivering 100+ projects, we're never going back. Here's why.
The Problem With WordPress in 2026
Don't get me wrong – WordPress powers 43% of the web. It's a great tool for what it was designed for: blogging. But when clients come to us asking for a fast, modern business website, WordPress starts showing its cracks:

Speed: The average WordPress site loads in 4–8 seconds. Every second above 3s costs you ~7% in conversions.
Plugin hell: Need SEO? Plugin. Need caching? Plugin. Need security? Plugin. Each one adds weight, complexity, and potential vulnerabilities.
Maintenance burden: Core updates, plugin updates, PHP version conflicts, database optimization... it never ends.
Security: WordPress is the #1 target for hackers precisely because it's so popular. Outdated plugins are the main attack vector.

We spent more time maintaining WordPress sites than building them. Something had to change.
Why Next.js?
We evaluated several options – Gatsby, Astro, SvelteKit, plain HTML/CSS. We landed on Next.js for these reasons:

  1. Performance Out of the Box

Next.js gives you automatic code splitting, image optimization, and static generation by default. No configuration, no plugins. Here's a real comparison from one of our client projects – same design, same content:

┌─────────────────┬───────────┬──────────┐
│ Metric          │ WordPress │ Next.js  │
├─────────────────┼───────────┼──────────┤
│ First Load      │ 4.2s      │ 1.1s     │
│ LCP             │ 3.8s      │ 1.3s     │
│ CLS             │ 0.24      │ 0.01     │
│ PageSpeed Score │ 47        │ 98       │
│ Total Size      │ 3.2 MB    │ 340 KB   │
└─────────────────┴───────────┴──────────┘

That's not a cherry-picked example. We see these numbers consistently across projects.
  2. SEO That Actually Works

With Next.js, you get:

Server-side rendering (SSR) or static site generation (SSG) – Google gets fully rendered HTML, not a JavaScript blob
Automatic <head> management with next/head or the new Metadata API
Built-in image optimization with next/image – WebP/AVIF, lazy loading, proper sizing
Structured data that is trivial to implement with JSON-LD

Our clients consistently see improved Google rankings within 2-3 months of switching from WordPress to Next.js. Not because of magic – because the technical foundation is simply better.
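Structured data in particular is just a serialized object dropped into a script tag. A minimal sketch (the organization name and URLs are placeholders):

```typescript
// JSON-LD is a plain object following the schema.org vocabulary,
// serialized into a <script type="application/ld+json"> tag.
const organizationLd = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Agency",
  url: "https://www.example.com",
  logo: "https://www.example.com/logo.png",
};

// In a Next.js component you would render it like:
// <script type="application/ld+json"
//         dangerouslySetInnerHTML={{ __html: JSON.stringify(organizationLd) }} />
const serialized = JSON.stringify(organizationLd);
```

Because it's just data in the page, there is no plugin to install and nothing to keep updated.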

  3. Developer Experience

This matters more than people think. When your dev team enjoys working with a tool, they ship faster and write better code.

// A simple page component in Next.js
// Clean, readable, no PHP spaghetti
export default function ServicesPage() {
  return (
    <main>
      <h1>Our Services</h1>
      {/* page content */}
    </main>
  );
}

Compare this to a WordPress template file with mixed PHP/HTML, the_loop(), get_template_part(), and 47 hooks. The cognitive overhead is massive.

  4. Security (or Lack of Worry)

A Next.js site deployed on Vercel or a CDN has virtually zero attack surface. There's no database to hack, no admin panel to brute-force, no plugins with backdoors. In 2+ years of running Next.js sites in production, we've had exactly zero security incidents. With WordPress, we used to deal with hacked sites monthly.

The Tradeoffs (Being Honest)

Next.js isn't perfect for every use case. Here's where WordPress still wins:

Non-technical content editors: WordPress's admin panel is familiar to everyone. For Next.js, you need a headless CMS (we use Sanity or Strapi).
Plugin ecosystem: Need a specific, niche feature? WordPress probably has a plugin for it. With Next.js, you might need to build it.
Cost for complex sites: A WordPress developer costs less per hour than a React developer. For very simple sites, WordPress can be cheaper.
E-commerce at scale: WooCommerce is battle-tested. Next.js + headless commerce (Shopify Storefront API, Medusa) works great but requires more setup.

Our Stack in 2026
For anyone curious, here's what we use for most client projects:

Framework: Next.js 15 (App Router)
Styling: Tailwind CSS
CMS: Sanity.io (for clients who need to edit content)
Hosting: Vercel (or self-hosted on VPS for cost-sensitive clients)
Forms: React Hook Form + server actions
Analytics: Plausible (privacy-friendly alternative to GA4)
SEO: Built-in Next.js metadata + custom structured data

Total hosting cost for a typical business website: ~$0–20/month (vs $10–50/month for WordPress hosting + premium plugins).
The Results
After 100+ Next.js projects, here are our average numbers:

PageSpeed score: 95+ (mobile)
Load time: Under 2 seconds
Delivery time: 3–7 business days
Client satisfaction: We haven't had a single client ask to go back to WordPress

Should You Switch?
If you're a developer building client websites and you're still on WordPress, I'd encourage you to try Next.js for your next project. The learning curve is real – give yourself 2-3 weeks to get comfortable – but the payoff is worth it.
If you're a business owner reading this: ask your agency about their tech stack. If they're still defaulting to WordPress for every project, they might be optimizing for their convenience, not your results.

We're Weblyx, a web development agency from Prague specializing in Next.js websites. If you have questions about migrating from WordPress or starting a new project, drop a comment below or reach out at weblyx.cz.
What's your experience with Next.js vs WordPress? I'd love to hear your thoughts in the comments. 👇