2026-02-04 22:35:26
If you've ever tried to integrate with a modern LMS such as Canvas, Moodle, or Schoology, you know the pain. The LTI 1.3 specification is powerful but technically complex. It involves OAuth 2.0, OIDC, JWTs, platform-specific quirks, and documentation scattered across various locations.
I needed to build an LMS integration for a side project. When I went looking for a modern TypeScript library, I found... nothing production-ready. The options were either not serverless-friendly, incomplete, or abandoned.
So I built one.
Learning Tools Interoperability (LTI) is the standard that connects external tools to learning management systems (LMS). It's what lets platforms like Kahoot, Turnitin, and thousands of other tools appear seamlessly inside Canvas or Moodle.
The LMS market is worth over $20 billion. Every major platform uses LTI: Canvas, Moodle, Blackboard, Brightspace, and Schoology. If you're building anything for education, chances are you'll eventually need it.
LTI 1.3 is the current version, built on OAuth 2.0 and OpenID Connect. It's more secure than the old OAuth 1.0-based LTI 1.1, but also significantly more complex to implement.
LTI Tool is a TypeScript-native implementation of the full IMS Global LTI 1.3 specification. Here's the core setup:
import { Hono } from 'hono';
import { LTITool } from '@lti-tool/core';
import {
  jwksRouteHandler,
  launchRouteHandler,
  loginRouteHandler,
  secureLTISession,
} from '@lti-tool/hono';
import { MemoryStorage } from '@lti-tool/memory';

const storage = new MemoryStorage();

const ltiTool = new LTITool({
  keyPair, // RSA keypair - see docs for generation
  stateSecret, // HMAC secret for state tokens
  storage,
});

const app = new Hono();

// Mount required LTI endpoints
app.get('/lti/jwks', jwksRouteHandler(ltiTool));
app.post('/lti/login', loginRouteHandler(ltiTool));
app.post('/lti/launch', launchRouteHandler(ltiTool));

// Protect your routes
app.use('/app/*', secureLTISession(ltiTool));
And here's a protected route that accesses the LTI session:
app.get('/app/dashboard', (c) => {
  const session = c.get('ltiSession');
  return c.json({
    user: session.user.name,
    course: session.context?.title,
    roles: session.roles,
  });
});
That's it. The library handles OIDC authentication, JWT verification, nonce validation, and all the security requirements.
Building against a specification is one thing. Building against real-world platforms is another. Every LMS has quirks: parameters in unexpected places, undocumented behaviors, and subtle differences in how it interprets the spec. The library handles these so you don't have to discover them yourself.
Canvas, for example, announced that starting January 2026, they'll reject API requests without proper User-Agent headers. The library sends @lti-tool/core/1.0.0 on all service requests by default, so you're compliant out of the box.
LTI 1.3 security isn't optional. The library enforces the checks the spec requires: OIDC state validation, nonce replay protection, JWT signature verification against the platform's JWKS, and issuer, audience, and expiry checks.
Getting any of these wrong means your integration is insecure. Getting them right means implementing them once, correctly, and never thinking about them again.
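To make the nonce requirement concrete, here is an illustrative sketch (not the library's actual code) of replay protection: each nonce is accepted at most once within a validity window.

```typescript
// Illustrative only: a nonce may be consumed exactly once within its TTL.
class NonceStore {
  private seen = new Map<string, number>(); // nonce -> expiry timestamp (ms)

  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  // Returns true the first time a nonce is presented, false on any replay.
  consume(nonce: string, now: number = Date.now()): boolean {
    // Drop expired entries so the map doesn't grow without bound
    for (const [n, exp] of this.seen) {
      if (exp <= now) this.seen.delete(n);
    }
    if (this.seen.has(nonce)) return false; // replayed launch token
    this.seen.set(nonce, now + this.ttlMs);
    return true;
  }
}

const store = new NonceStore();
console.log(store.consume("abc123")); // first use: true
console.log(store.consume("abc123")); // replay: false
```

A production store would live in shared storage (the `storage` backend above) rather than process memory, so replays are caught across serverless instances too.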
This isn't a spec-compliant library that's never seen production. I've tested it against live Moodle and Canvas instances.
The test suite has 50+ tests covering the core flows. But real validation came from watching grades actually appear in the LMS gradebook.
npm install @lti-tool/core @lti-tool/hono @lti-tool/memory
The library is MIT licensed. If you're building LMS integrations in Node.js, this should save you weeks of implementation time.
If you find it useful, a star on GitHub helps others discover it.
If you run into issues or have questions, open an issue.
Building something with LTI Tool? I'd love to hear about it.
2026-02-04 22:31:16
Bun has been the JavaScript runtime that promises everything: faster installs, faster execution, native TypeScript support. But for most of us, it's been "cool for side projects, not ready for production."
Bun 1.2 changes that conversation.
Released in January 2025, Bun 1.2 isn't just another incremental update. It ships with built-in SQLite, a native S3 client, Postgres support, and seamless Node.js compatibility that finally passes 96% of the Node.js test suite. No npm packages. No configuration. Just import and use.
In this deep dive, we'll explore what Bun 1.2 actually offers, run real benchmarks, and determine whether it's finally time to consider Bun for your next production project.
Let's cut through the hype and look at what Bun 1.2 delivers:
SQLite is now a first-class citizen in Bun. No installation required:
import { Database } from "bun:sqlite";

const db = new Database("myapp.db");

// Create tables
db.run(`
  CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE NOT NULL,
    name TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  )
`);

// Insert data
const insert = db.prepare("INSERT INTO users (email, name) VALUES (?, ?)");
insert.run("[email protected]", "John Doe");

// Query with type safety
interface User {
  id: number;
  email: string;
  name: string;
  created_at: string;
}

const users = db.prepare("SELECT * FROM users").all() as User[];
console.log(users);
This isn't just a wrapper around better-sqlite3. It's a native implementation that's significantly faster:
| Operation | better-sqlite3 (Node.js) | Bun SQLite | Difference |
|---|---|---|---|
| INSERT 1M rows | 4.2s | 1.8s | 2.3x faster |
| SELECT 100K rows | 320ms | 140ms | 2.3x faster |
| Transaction commit | 12ms | 5ms | 2.4x faster |
The performance gains come from Bun's integration with JavaScriptCore and avoiding the N-API overhead that Node.js addons face.
Cloud storage without dependencies. Bun's S3 client works with AWS S3, R2, MinIO, and any S3-compatible service:
import { S3Client } from "bun";

const s3 = new S3Client({
  endpoint: "https://s3.amazonaws.com",
  region: "us-east-1",
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

// Upload a file
const file = Bun.file("./large-video.mp4");
await s3.write("my-bucket/videos/intro.mp4", file);

// Download a file
const downloaded = await s3.file("my-bucket/videos/intro.mp4");
await Bun.write("./downloaded.mp4", downloaded);

// Stream large files
const stream = s3.file("my-bucket/data/huge.csv").stream();
for await (const chunk of stream) {
  // Process chunk without loading entire file into memory
}
The S3 client handles multipart uploads automatically for large files and supports presigned URLs out of the box. Compare this to the AWS SDK:
// AWS SDK v3 - The old way
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fs from "fs";

const client = new S3Client({ region: "us-east-1" });
const fileStream = fs.createReadStream("./large-video.mp4");

await client.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "videos/intro.mp4",
  Body: fileStream,
}));

// Bun - The new way
const s3 = new S3Client({ /* config */ });
await s3.write("my-bucket/videos/intro.mp4", Bun.file("./large-video.mp4"));
The API surface is dramatically smaller while providing the same functionality.
Postgres joins SQLite as a built-in database option:
import { sql } from "bun";

// Connection from environment variable (DATABASE_URL)
const users = await sql`SELECT * FROM users WHERE active = ${true}`;

// Or explicit connection
import { SQL } from "bun";

const db = new SQL({
  hostname: "localhost",
  port: 5432,
  database: "myapp",
  username: "postgres",
  password: "secret",
});

// Parameterized queries are automatic
const email = "[email protected]";
const user = await db`SELECT * FROM users WHERE email = ${email}`;

// Transactions
await db.begin(async (tx) => {
  await tx`UPDATE accounts SET balance = balance - 100 WHERE id = 1`;
  await tx`UPDATE accounts SET balance = balance + 100 WHERE id = 2`;
});
The SQL template literal approach prevents SQL injection by design and provides excellent DX.
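The tagged-template mechanics behind that safety are plain JavaScript. A toy version of the tag (not Bun's implementation) shows why interpolated values can never alter the query text: the literal parts and the values arrive as separate arguments.

```javascript
// Toy sql`` tag: query text comes only from the literal parts of the
// template; interpolated values are collected separately as parameters.
function sql(strings, ...values) {
  const text = strings.reduce(
    (acc, part, i) => acc + (i > 0 ? `$${i}` : "") + part,
    ""
  );
  return { text, values };
}

// Even a hostile input stays an inert parameter, never query text.
const email = "'; DROP TABLE users; --";
const q = sql`SELECT * FROM users WHERE email = ${email}`;
console.log(q.text);   // SELECT * FROM users WHERE email = $1
console.log(q.values); // [ "'; DROP TABLE users; --" ]
```

The real driver sends `text` and `values` to Postgres separately (a parameterized query), so there is no string concatenation step for an attacker to exploit.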
The biggest blocker for Bun adoption has been compatibility. Bun 1.2 now passes:
- node:fs tests
- node:path tests
- node:crypto tests
- node:http tests

This means most npm packages "just work." We tested several popular packages:
| Package | Status | Notes |
|---|---|---|
| Express | ✅ Works | Full compatibility |
| Fastify | ✅ Works | Full compatibility |
| Prisma | ✅ Works | Since Bun 1.1 |
| Next.js | ⚠️ Partial | Dev server works, some edge cases |
| NestJS | ✅ Works | Full compatibility |
| Socket.io | ✅ Works | Full compatibility |
Bun now runs natively on Windows without WSL. The installer is a single executable:
powershell -c "irm bun.sh/install.ps1 | iex"
Performance on Windows is comparable to Linux/macOS, which wasn't the case with earlier versions.
Let's build the same API with Node.js and Bun to see real performance differences.
Bun Implementation:
// server.ts (Bun)
import { Database } from "bun:sqlite";

const db = new Database(":memory:");
db.run(`
  CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE,
    name TEXT
  )
`);

// Seed data
const insert = db.prepare("INSERT INTO users (email, name) VALUES (?, ?)");
for (let i = 0; i < 1000; i++) {
  insert.run(`user${i}@test.com`, `User ${i}`);
}

const server = Bun.serve({
  port: 3000,
  async fetch(req) {
    const url = new URL(req.url);

    if (url.pathname === "/users" && req.method === "GET") {
      const users = db.prepare("SELECT * FROM users LIMIT 100").all();
      return Response.json(users);
    }

    if (url.pathname === "/users" && req.method === "POST") {
      const body = await req.json();
      const result = db.prepare(
        "INSERT INTO users (email, name) VALUES (?, ?) RETURNING *"
      ).get(body.email, body.name);
      return Response.json(result, { status: 201 });
    }

    return new Response("Not Found", { status: 404 });
  },
});

console.log(`Server running at http://localhost:${server.port}`);
Node.js Implementation:
// server.mjs (Node.js with better-sqlite3)
import Database from "better-sqlite3";
import { createServer } from "http";

const db = new Database(":memory:");
db.exec(`
  CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE,
    name TEXT
  )
`);

const insert = db.prepare("INSERT INTO users (email, name) VALUES (?, ?)");
for (let i = 0; i < 1000; i++) {
  insert.run(`user${i}@test.com`, `User ${i}`);
}

const server = createServer(async (req, res) => {
  const url = new URL(req.url, `http://${req.headers.host}`);

  if (url.pathname === "/users" && req.method === "GET") {
    const users = db.prepare("SELECT * FROM users LIMIT 100").all();
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(users));
    return;
  }

  if (url.pathname === "/users" && req.method === "POST") {
    let body = "";
    for await (const chunk of req) body += chunk;
    const data = JSON.parse(body);
    const result = db.prepare(
      "INSERT INTO users (email, name) VALUES (?, ?) RETURNING *"
    ).get(data.email, data.name);
    res.writeHead(201, { "Content-Type": "application/json" });
    res.end(JSON.stringify(result));
    return;
  }

  res.writeHead(404);
  res.end("Not Found");
});

server.listen(3000, () => console.log("Server running at http://localhost:3000"));
GET /users (read 100 rows):
| Runtime | Requests/sec | Avg Latency | P99 Latency |
|---|---|---|---|
| Node.js 22 | 12,450 | 7.8ms | 15ms |
| Bun 1.2 | 28,900 | 3.2ms | 8ms |
| Difference | 2.3x faster | 2.4x faster | 1.9x faster |
POST /users (insert + return):
| Runtime | Requests/sec | Avg Latency | P99 Latency |
|---|---|---|---|
| Node.js 22 | 8,200 | 11.5ms | 22ms |
| Bun 1.2 | 19,400 | 4.8ms | 12ms |
| Difference | 2.4x faster | 2.4x faster | 1.8x faster |
The results are consistent: Bun is roughly 2-2.5x faster for database-backed API operations.
Cold start matters for serverless:
| Runtime | Startup Time | Memory at Start |
|---|---|---|
| Node.js 22 | 45ms | 52MB |
| Bun 1.2 | 8ms | 28MB |
| Difference | 5.6x faster | 46% less |
For serverless functions, this difference is significant.
Based on our testing, here's a realistic assessment:
Good fit today:

- New projects with simple dependencies
- Performance-critical applications
- Serverless functions

Hold off for now:

- Large Next.js applications
- Applications with native Node.js addons
- Legacy codebases with deep Node.js assumptions
- Electron applications
- Applications requiring node:vm
- Mission-critical financial systems
If you decide to try Bun, here's how to migrate:
curl -fsSL https://bun.sh/install | bash
# Run your existing tests with Bun
bun test
# Check for issues
bun run your-script.ts
{
  "scripts": {
    "dev": "bun run --watch src/index.ts",
    "start": "bun run src/index.ts",
    "test": "bun test"
  }
}
With Bun's built-ins, you can often remove:
better-sqlite3 → Use bun:sqlite
@aws-sdk/client-s3 → Use Bun.S3Client
pg → Use Bun's built-in sql
dotenv → Bun loads .env automatically
ts-node / tsx → Not needed

A minimal Dockerfile:

FROM oven/bun:1.2
WORKDIR /app
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
COPY . .
EXPOSE 3000
CMD ["bun", "run", "src/index.ts"]
Bun is developed by Oven, a startup. The natural question: what happens if the company folds?
Arguments for trust: Bun is MIT-licensed open source, so the community could fork and maintain it if Oven folded, and adoption has grown steadily since 1.0.
Arguments for caution: a single venture-funded company still drives nearly all development, and real-world production experience is far thinner than Node.js's.
Our recommendation: Start with non-critical new projects. If things go smoothly, gradually expand.
Bun 1.2 represents a significant milestone. The built-in databases, S3 client, and improved Node.js compatibility address the main blockers for adoption.
Is Bun ready to replace Node.js? For new projects: increasingly yes. For existing production systems: migrate carefully and incrementally.
The JavaScript runtime landscape is finally competitive again. Whether you switch to Bun or stay with Node.js, the competition is making both better.
One thing is clear: ignoring Bun is no longer an option. Try it on your next project. You might be surprised.
# Install and run your first Bun project
curl -fsSL https://bun.sh/install | bash
bun init
bun run index.ts
Welcome to the future of JavaScript runtimes.
💡 Note: This article was originally published on the Pockit Blog.
2026-02-04 22:24:18
The Great Firewall (GFW) doesn't just block IP addresses anymore—it actively inspects traffic using stateful deep packet inspection (DPI). If you're relying on standard OpenVPN or WireGuard in 2026, you're already blocked.
This guide explains how the GFW works (brief technical overview), why your VPN keeps dying, and how to deploy robust self-hosted solutions like Hysteria2 and V2Ray on optimized CN2 GIA routes.
The GFW employs active filtering techniques to sabotage and attack any existing VPN connections. It doesn't sit "in-line" blocking every packet (that would slow down the entire country). Instead, it mirrors traffic via optical splitters at the international gateway.
When it sees a handshake it doesn't like—say, a TLS Client Hello with a suspicious SNI—it weaponizes the TCP protocol against you.
The GFW injects forged TCP RST (Reset) packets.
Because the GFW is physically closer to you than the server is, its fake packet wins the race. Your connection dies instantly.
For UDP traffic (DNS), the GFW uses DNS Hijacking. It detects queries for banned domains and immediately shoots back a fake DNS response with a garbage IP. Your computer accepts the first answer it gets (the fake one) and ignores the real answer that arrives milliseconds later.
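This first-answer-wins behavior is the same race semantics as `Promise.race`. The sketch below is an analogy, not real DNS code: the forged response (injected from nearby) simply arrives before the genuine one.

```javascript
// Analogy only: whichever DNS "answer" resolves first wins; the slower,
// genuine answer is silently discarded by the resolver.
const delayed = (value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function raceDemo() {
  const forged = delayed("198.51.100.1 (forged, injected nearby)", 10);
  const real = delayed("142.250.80.46 (real, from far away)", 50);
  return Promise.race([forged, real]); // the forged answer wins the race
}

raceDemo().then(console.log); // 198.51.100.1 (forged, injected nearby)
```

This is why encrypted DNS (DoH/DoT) matters: if the resolver only accepts authenticated answers over an encrypted channel, the injected packet can no longer win the race.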
Most big-name VPNs are huge targets.
So by purchasing a large commercial VPN, you are essentially painting a big red target on your back.
Hysteria2 is built on UDP (QUIC), similar to HTTP/3. It uses a custom congestion control called Brutal that ignores packet loss, making it incredibly fast even on bad lines. Brutal works by "brute forcing" packets through congested lines, ignoring previously negotiated limits. I personally used Hysteria2 on trips to China in June and September 2025, and it proved reliable.
You need a VPS (Virtual Private Server).
Installation (Linux):
# Download the latest release
bash <(curl -fsSL https://get.hy2.sh/)
# Configure config.yaml
listen: :443
tls:
  cert: /path/to/your.crt
  key: /path/to/your.key

auth:
  type: password
  password: "your_secure_password"

masquerade:
  type: proxy
  proxy:
    url: https://bing.com
    rewriteHost: true
Note: You need a valid domain name and some technical knowledge of forwarding (you may need to forward traffic through cloudflared, or rent a VPS from DigitalOcean, for it to work effortlessly).
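On the client side, a minimal config pointing at a server like the one above might look like this (field names follow the Hysteria2 docs; the domain and password are placeholders):

```yaml
server: your.domain.com:443
auth: your_secure_password

tls:
  sni: your.domain.com

socks5:
  listen: 127.0.0.1:1080
```

The client then exposes a local SOCKS5 proxy on port 1080 that your browser or system proxy settings can use.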
Why Self-Host?
V2Ray (specifically VLESS + XTLS-Reality) is designed to look exactly like you're browsing a normal website. It "steals" the TLS handshake of a real site (like Microsoft or Apple), so to the GFW, you're just visiting a safe page.
V2Ray comes in two flavours: VMess and VLESS. I generally recommend VLESS because it's a lighter and newer protocol.
The easiest way is using the 3X-UI panel.
Installation:
bash <(curl -Ls https://raw.githubusercontent.com/mhsanaei/3x-ui/master/install.sh)
Open the panel at http://YOUR_IP:2053. Create a VLESS inbound with Reality enabled, set Dest to www.microsoft.com:443 and Server Names to www.microsoft.com. For Android/iOS, use hApp or v2rayNG.
Shadowsocks has also been around for a long time: it's essentially a modified version of SOCKS5 with obfuscation, in wide use since 2016. While it still works in China as of writing, it is becoming increasingly unreliable because the obfuscation has a clear entropy fingerprint that the GFW can detect. Using Shadowsocks exposes you to a higher risk of getting your server IP banned, though personally that has never happened to me.
Protocol is only half the battle. If your physical route to the server is congested, no software can fix it.
CN2 GIA (China Telecom Next Gen Carrier Network - Global Internet Access) is the premium lane. It avoids the congested public backbone (163 network).
If you self-host, pay extra for a provider that guarantees CN2 GIA routes.
Self-hosting is fun, but hunting for clean CN2 GIA IPs and maintaining servers got annoying fast. So, I built a solution to scratch my own itch.
I developed V-Rail to automate all of this. It deploys fully optimized CN2 GIA nodes with Hysteria2 pre-configured, so you don't have to mess with config files or worry about IP blocks. I've already done the hard work for you—feel free to check it out if you'd rather spend your time coding instead of debugging routing tables.
Alternatively, you can use other VPNs like Astrill VPN or LetsVPN, but V-Rail, built on VLESS, is the cheapest and most reliable of the three, costing only 2.49 USD/month for 50 GB of unthrottled data. One account can also be used across multiple devices.
2026-02-04 22:21:12
We're a small web agency based in Prague. For years, we built client websites on WordPress – just like everyone else.
Then we switched to Next.js. After delivering 100+ projects, we're never going back. Here's why.
The Problem With WordPress in 2026
Don't get me wrong – WordPress powers 43% of the web. It's a great tool for what it was designed for: blogging. But when clients come to us asking for a fast, modern business website, WordPress starts showing its cracks:
Speed: The average WordPress site loads in 4–8 seconds. Every second above 3s costs you ~7% in conversions.
Plugin hell: Need SEO? Plugin. Need caching? Plugin. Need security? Plugin. Each one adds weight, complexity, and potential vulnerabilities.
Maintenance burden: Core updates, plugin updates, PHP version conflicts, database optimization... it never ends.
Security: WordPress is the #1 target for hackers precisely because it's so popular. Outdated plugins are the main attack vector.
We spent more time maintaining WordPress sites than building them. Something had to change.
Why Next.js?
We evaluated several options – Gatsby, Astro, SvelteKit, plain HTML/CSS. We landed on Next.js for these reasons:
Server-side rendering (SSR) or static site generation (SSG) – Google gets fully rendered HTML, not a JavaScript blob
Automatic code splitting, image optimization, and route prefetching out of the box
Our clients consistently see improved Google rankings within 2-3 months of switching from WordPress to Next.js. Not because of magic – because the technical foundation is simply better.
export default function ServicesPage() {
  return (
    <main>
      <h1>Our Services</h1>
      {/* plain, readable JSX; data fetching happens in server components */}
    </main>
  );
}
Compare this to a WordPress template file with mixed PHP/HTML, the_loop(), get_template_part(), and 47 hooks. The cognitive overhead is massive.
Where WordPress Still Wins
Non-technical content editors: WordPress's admin panel is familiar to everyone. For Next.js, you need a headless CMS (we use Sanity or Strapi).
Plugin ecosystem: Need a specific, niche feature? WordPress probably has a plugin for it. With Next.js, you might need to build it.
Cost for simple sites: A WordPress developer costs less per hour than a React developer, so for very simple sites WordPress can be cheaper.
E-commerce at scale: WooCommerce is battle-tested. Next.js + headless commerce (Shopify Storefront API, Medusa) works great but requires more setup.
Our Stack in 2026
For anyone curious, here's what we use for most client projects:
Framework: Next.js 15 (App Router)
Styling: Tailwind CSS
CMS: Sanity.io (for clients who need to edit content)
Hosting: Vercel (or self-hosted on VPS for cost-sensitive clients)
Forms: React Hook Form + server actions
Analytics: Plausible (privacy-friendly alternative to GA4)
SEO: Built-in Next.js metadata + custom structured data
Total hosting cost for a typical business website: ~$0–20/month (vs $10–50/month for WordPress hosting + premium plugins).
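As an example of that last stack item, metadata and structured data in the App Router are plain objects rather than plugin configuration. The sketch below is illustrative (names and values are made up; in a real project `metadata` is exported from a page or layout file):

```typescript
// Sketch: Next.js App Router metadata is a plain exported object...
const metadata = {
  title: "Services | Example Agency",
  description: "Fast, modern business websites built with Next.js.",
  openGraph: { title: "Services", type: "website" },
};

// ...and custom structured data is just JSON-LD rendered into a <script> tag.
const structuredData = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Example Agency",
  url: "https://example.com",
};

console.log(metadata.title); // Services | Example Agency
```

No SEO plugin, no settings screen: the SEO surface is versioned in git alongside the rest of the code.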
The Results
After 100+ Next.js projects, here are our average numbers:
PageSpeed score: 95+ (mobile)
Load time: Under 2 seconds
Delivery time: 3–7 business days
Client satisfaction: We haven't had a single client ask to go back to WordPress
Should You Switch?
If you're a developer building client websites and you're still on WordPress, I'd encourage you to try Next.js for your next project. The learning curve is real – give yourself 2-3 weeks to get comfortable – but the payoff is worth it.
If you're a business owner reading this: ask your agency about their tech stack. If they're still defaulting to WordPress for every project, they might be optimizing for their convenience, not your results.
We're Weblyx, a web development agency from Prague specializing in Next.js websites. If you have questions about migrating from WordPress or starting a new project, drop a comment below or reach out at weblyx.cz.
What's your experience with Next.js vs WordPress? I'd love to hear your thoughts in the comments. 👇