The Practical Developer
A constructive and inclusive social network for software developers.

Server-Sent Events (SSE)

2026-04-11 03:34:43

I have been rummaging through a lot of backend-related content lately, and half the time, I don't really understand what is being said, but hey, it's an opportunity to learn.

I came across this post on X:
(embedded post from X)

From my little knowledge of backend systems, two of the options, polling and WebSockets, were familiar, but SSE was strange. Now, if I had to answer the question in an interview, my knowledge would already have eliminated the right answer.

Here's what I was thinking:
The client needs to get updates without asking, so a connection needs to stay open. You can see where I'm leaning. But I recognise that a WebSocket is a bidirectional communication channel, and we only need one direction, so wouldn't we be wasting resources...right?

Also, does the user have to see every stage of the delivery? It's not like we'll send a push notification every time the status changes; it's fine if the user misses one or two of the five stages as long as they get a progressive update (it was pending before, now it's out for delivery; it's fine if they missed confirmed). So maybe polling is okay: the client asks every 5-10 seconds, or the server holds the request until there's an update, responds, and the client immediately asks again (long polling).

Also, how does the status change? If the delivery person is the one changing it (manually or through a change in location), isn't that technically bidirectional, like a chat? Maybe it's WebSockets after all.
At this point, I was confused, so I let the comment section tell me the answer. Lo and behold, it's the option I know nothing about!

What is SSE?

It's a persistent one-way HTTP connection where the server can push updates to the client whenever it wants.

Client opens ONE connection → Server keeps it open → Server pushes when ready

Sweet, just what I knew I needed but never knew what it was.

So what exactly does this look like in code?

[HttpGet("order/{orderId}/status")]
public async Task StreamOrderStatus(string orderId, CancellationToken cancellationToken)
{
    // Tell the client, "this is an SSE stream, keep the connection open"
    Response.Headers["Content-Type"] = "text/event-stream";
    Response.Headers["Cache-Control"] = "no-cache";
    Response.Headers["Connection"] = "keep-alive";

    while (!cancellationToken.IsCancellationRequested)
    {
        var status = await _orderService.GetStatusAsync(orderId);

        // SSE format — "data:" is required, double newline ends the event
        await Response.WriteAsync($"data: {status}\n\n");
        await Response.Body.FlushAsync();

        // Poll the DB every 3 seconds for a status change
        try
        {
            await Task.Delay(3000, cancellationToken);
        }
        catch (OperationCanceledException)
        {
            break; // client disconnected; exit the loop cleanly
        }
}

What's actually happening line by line

The headers are the handshake. You're telling the client "don't close this connection, I'll keep sending you things." Without these, the client treats it like a normal HTTP response and closes after the first chunk. Side note: I'd never seen that content type before.

The while loop keeps the connection alive. You're essentially saying "stay open until the client disconnects."

CancellationToken is how you know the client disconnected. When they close the app or navigate away, the token gets cancelled and the loop exits cleanly. Without this you'd have zombie connections running on your server forever.

The data: format is the SSE spec — it's just a text protocol:

data: Preparing\n\n
data: Ready\n\n
data: Out for delivery\n\n
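Out of curiosity, here's a tiny Python sketch of how a client could split that stream back into events (in the browser, EventSource does this for you; this minimal version ignores the spec's `event:`, `id:`, and `retry:` fields):

```python
def parse_sse(stream_text):
    """Split a raw SSE stream into event payloads.

    Events are separated by a blank line; each "data:" line carries the payload.
    Minimal sketch: only "data:" fields are handled.
    """
    events = []
    for chunk in stream_text.split("\n\n"):
        data_lines = [line[len("data:"):].strip()
                      for line in chunk.split("\n") if line.startswith("data:")]
        if data_lines:
            events.append("\n".join(data_lines))
    return events
```

So `parse_sse("data: Preparing\n\ndata: Ready\n\n")` yields `["Preparing", "Ready"]`: the double newline really is the event boundary.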

I can tell this code is problematic: why query the DB every few seconds when you're not sure the status has changed? That's still polling, just not over the network, so we'd probably want to replace it with something event-driven, like a pub/sub, but that's not the point.
With SSE, you're still "checking" for updates, but you do it once on the server instead of having the client repeatedly hit your API, which takes its toll on your infra.
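For the event-driven version, one possible shape, sketched in Python with an in-process queue (hypothetical names, not the post's stack; a real deployment would use something like Redis pub/sub), is to have the status writer publish and the SSE loop block until an update arrives instead of polling the DB:

```python
import queue

# One queue of status updates per order id.
# In-process only; a real system would use Redis pub/sub or a message broker.
_subscribers = {}

def publish_status(order_id, status):
    """Called by whatever component updates the order (hypothetical writer side)."""
    _subscribers.setdefault(order_id, queue.Queue()).put(status)

def sse_events(order_id, timeout=None):
    """Yield SSE-formatted events, blocking until the next update is published."""
    q = _subscribers.setdefault(order_id, queue.Queue())
    while True:
        try:
            status = q.get(timeout=timeout)
        except queue.Empty:
            return  # no update within the timeout; end the stream
        yield f"data: {status}\n\n"
```

The handler goes from "ask the DB every 3 seconds" to "sleep until someone publishes", which is what pub/sub buys you.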

So why is SSE the best fit here?

  • One-way updates → perfect match
  • No need for bidirectional communication (unlike WebSockets)
  • More efficient than polling (no repeated requests)

That was how I learnt about SSEs and I'm passing it forward to y'all. Next time you see ‘real-time updates’ in a system design question, don’t jump straight to WebSockets; SSE might be exactly what you need.
Until my next discovery, peace out! ✌🏾

Stop Writing Documentation From Scratch — Let AI Do It

2026-04-11 03:30:14

The documentation problem
Every software team faces the same tension: users need docs, but writing them is slow, tedious, and always out of date the moment you ship a new feature.

I spent months building DocuPil to solve this. It's an AI-powered documentation generator that takes a URL and produces complete, structured documentation — automatically.

From URL to published docs in 3 steps
Step 1: Crawl

Paste your website URL. DocuPil's crawler maps your site — up to 500 pages — including navigation flows, UI elements, and content structure. It even supports authenticated areas if you provide credentials.

Step 2: Generate

The AI analyzes everything it found and produces organized documentation:

  • Getting Started guides
  • Feature explanations
  • API references (if applicable)
  • Tutorials and how-to sections
  • FAQ

Everything comes out in Markdown, properly structured with headings, code blocks, and cross-references.

Step 3: Edit & Publish

Use the built-in Markdown editor with live preview, formatting toolbar, and keyboard shortcuts. When you're happy, hit publish — your docs go live instantly with:

  • Full-text search
  • Dark mode
  • Syntax highlighting
  • Table of contents
  • Mobile-responsive layout

What makes it different
AI Chat Assistant — Don't just generate and forget. Ask the AI to rewrite a section, add code examples, expand on a topic, or even scrape a specific page from your site for fresh content.

16 Languages — Click a button, get your entire doc set translated. No copy-pasting into Google Translate.

Team Workflows — Invite collaborators with granular roles. Everyone works in the same workspace.

Export Freedom — Download everything as Markdown or PDF. Your docs, your data.

Who is this for?
  • Solo devs who need docs but don't have time to write them
  • Startups shipping fast and needing docs to keep up
  • Agencies creating client deliverables
  • Open source maintainers who want better contributor docs

Pricing that makes sense
Plan       Price      Projects    Pages/Crawl
Starter    $5/mo      3           100
Pro        $19/mo     10          500
Business   $49/mo     Unlimited   Unlimited
Lifetime   €30 once   1           500
Every subscription includes a 7-day free trial. No credit card required to start.

Get started
👉 docupil.com

I'm actively building this and would love to hear what features matter most to you. Drop a comment or reach out — happy to chat.

a Mac app that turns //TODO comments into GitHub issues automatically

2026-04-11 03:30:01

Bar Ticket

Download Now

You write //TODO: comments all day. You file maybe 10% of them as GitHub issues. The rest die in your codebase. You know it. I knew it. So I fixed it.

Bar Ticket is a macOS menu bar app that watches your files and automatically creates a GitHub issue the moment you save a TODO comment. No browser tab. No form. No copy-pasting. No context switching.

Just ⌘ + S and it's done.

How it works

//TODO: fix/login
// Fix login timeout on slow networks
  1. You type that comment in any editor — VS Code, Xcode, Vim, Cursor, Zed, whatever you use
  2. You hit ⌘ S
  3. Bar Ticket detects the new TODO via file system events, resolves your GitHub remote from your git config, and creates the issue on the correct repo
  4. The macOS notch animates to confirm — detecting → creating → confirmed
  5. Your comment gets (#42 github.com/...) appended automatically so you never lose the link

The whole thing takes about two seconds. You never leave your editor.
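I haven't seen Bar Ticket's source, but the detection step could plausibly be a regex over the files reported changed by the file-system events; a hypothetical Python sketch:

```python
import re

# Matches "//TODO: title" with an optional "// description" line right below it.
# Hypothetical pattern, not Bar Ticket's actual implementation.
TODO_PATTERN = re.compile(
    r"//\s*TODO:\s*(?P<title>.+?)\s*\n(?:\s*//\s*(?P<body>.+))?",
    re.IGNORECASE,
)

def extract_todos(source):
    """Return (title, description) pairs for each TODO comment found."""
    return [(m.group("title"), m.group("body") or "")
            for m in TODO_PATTERN.finditer(source)]
```

Each pair would then become the title and body of the created GitHub issue.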

Why I built it

I kept finding TODOs in old code with zero corresponding issues. "Fix this later" is only useful if later has a ticket. The friction of switching to GitHub, creating an issue, copying the title, and coming back was just enough to make me skip it constantly.

The fix had to be zero friction. Not a CLI command. Not a keyboard shortcut I had to remember. Just the thing I was already doing — writing the comment and saving.

Try it

Requires macOS 14.0+. Free download.

Download Bar Ticket →

Would love to hear how it fits (or doesn't fit) your workflow. Drop a comment with your feedback.

Two Excellent Open-Source AI Agents for Your Terminal

2026-04-11 03:26:03

In the fast-moving world of AI-assisted development, the spotlight often shines on the models themselves, but it's the frameworks around them — the ones that integrate, manage, and streamline workflows — that define the user's experience and workflow.

The open-source contributions to CLI agents are largely eclipsed by the proprietary options: Claude Code, Copilot, Codex, Cursor, etc. But there are two standout FOSS terminal-based agents I've used that I feel deserve more attention: Crush from the Charmbracelet team and OpenCode from Anomalyco (formerly SST). Interestingly, both evolved from the same early Go-based project before splitting directions in 2025.

Charmbracelet carried forward the Go tradition with Crush, building both the frontend and backend entirely in Go. Powered by Charm’s elegant libraries like Bubble Tea, Crush delivers a very polished terminal-native experience — no JavaScript, no TypeScript, just clean Go engineering and minimal dependencies.

The Anomalyco team took the opposite path with OpenCode, rewriting the project in TypeScript and embracing a cloud-native, client/server architecture. Drawing on SST’s serverless expertise, OpenCode integrates with over 75 model providers, supports LSP and desktop apps, and scales to millions of monthly users.

Charmbracelet’s world is small but passionate — around 15,000 followers, 90+ contributors, and 23K stars. Crush speaks to developers who value terminal craftsmanship and design precision.

Anomalyco runs at enterprise scale. OpenCode currently shows over 140K stars, 800+ contributors, and 11,000 commits, serving over 6 million developers each month. Its rapid iteration and vast reach come from deep roots in cloud-native tooling.

These frameworks prove that the experience of running large language models is just as vital as the models themselves. While Crush offers simplicity and beauty grounded in Go, OpenCode amplifies reach and flexibility through its modern TypeScript architecture.

Together, they reinforce how open-source diversity powers developer freedom. In a landscape increasingly dominated by proprietary AI platforms, tools like Crush and OpenCode ensure innovation, transparency, and community-driven growth, and they keep FOSS competitive in the race.

So, a gentle nudge . . .

If you’ve ever benefited from open-source software — and we all have — consider giving back. Star a project, fix a bug, write or edit documentation, or just share your experiences with these FOSS tools. Every small act keeps the open web alive and thriving.

Ben Santora - April 2026

Mastering the "Sorted Subsequence of Size 3" Problem: Two Efficient Java Approaches

2026-04-11 03:22:43

Finding a sorted subsequence of a specific size is a classic algorithmic challenge that tests your ability to optimize array traversals. The problem asks us to find three elements in an array such that arr[i] < arr[j] < arr[k] and their indices satisfy i < j < k.

If you are preparing for coding interviews or just sharpening your problem-solving skills, this problem is a fantastic way to transition from brute-force thinking to highly optimized logic.

In this post, we will explore two distinct approaches to solving this problem in Java: an intuitive Precomputation Approach and a highly optimized Single-Pass Greedy Approach.

Approach 1: The Prefix and Suffix Arrays (Precomputation)

The Intuition

To find our three numbers, let's focus on the middle element, arr[j]. For any element to act as a valid middle number in our sequence, two conditions must be met:

  1. There must be a strictly smaller number somewhere to its left.
  2. There must be a strictly larger number somewhere to its right.

Instead of scanning the left and right sides over and over for every single element, we can precalculate the smallest numbers on the left and the largest numbers on the right using auxiliary arrays.

How It Works

  • prefixMin Array: We iterate from left to right, storing the minimum value seen so far at each index.
  • suffixMax Array: We iterate from right to left, storing the maximum value seen so far at each index.
  • Once these are built, we simply loop through the original array one last time. For any element arr[i], we check if its corresponding prefixMin[i] is smaller than arr[i] and its suffixMax[i] is larger than arr[i].

The Code

import java.util.ArrayList;
import java.util.Arrays;

class Solution {
    public ArrayList<Integer> find3Numbers(int[] arr) {
        // code here
        if (arr == null || arr.length < 3) {
            return new ArrayList<>();
        }
        int length = arr.length;
        ArrayList<Integer> result = new ArrayList<>();
        int [] prefixMin = new int [length];
        int [] suffixMax = new int [length];
        Arrays.fill(prefixMin, Integer.MAX_VALUE);

        prefixMin[0] = arr[0];
        for (int index = 1; index < length; index++) {
            prefixMin[index] = Math.min(arr[index], prefixMin[index - 1]);
        }

        suffixMax[length - 1] = arr[length - 1];
        for (int index = length - 2; index >= 0; index--) {
            suffixMax[index] = Math.max(arr[index], suffixMax[index + 1]);
        }

        for (int index = 0; index < length; index++) {
            int middleNum = arr[index];
            int leftMin = prefixMin[index];
            int rightMax = suffixMax[index];

            if (leftMin < middleNum && middleNum < rightMax) {
                result.add(leftMin);
                result.add(middleNum);
                result.add(rightMax);
                break;
            }
        }
        return result;
    }
}

Complexity Analysis

  • Time Complexity: O(N). We traverse the array exactly three times: once to build prefixMin, once for suffixMax, and one final time to find the sequence. Dropping the constant gives us a linear time complexity.
  • Space Complexity: O(N). We allocate two additional arrays, each of size N, to store our precomputed boundaries.

Approach 2: The Space-Optimized Single Pass (Greedy)

While the first approach is highly intuitive, creating two additional arrays consumes extra memory. What if the array is massive? We can solve this in a single pass with constant extra space by keeping a running record of the smallest and second-smallest values.

The Intuition

As we iterate through the array, we want to build our sequence dynamically. We can do this by maintaining variables for the smallest element seen so far (verySmall) and the second smallest element (secondSmall).

The tricky part? If we find a new verySmall number, we can't just carelessly overwrite the old one if we already have a secondSmall paired with it. To solve this, this approach brilliantly introduces smallBeforeSecondSmall. This guarantees that when we finally find our third, largest number, we pair it with the exact minimum that historically came before our second-smallest number.

How It Works

  1. Initialize verySmall, secondSmall, and smallBeforeSecondSmall to infinity.
  2. Traverse the array:
    • If the current number is smaller than verySmall, update verySmall.
    • If it falls strictly between verySmall and secondSmall, update secondSmall and "lock in" the current verySmall into smallBeforeSecondSmall.
    • If we find a number strictly greater than secondSmall, we have found our trio!

The Code


class Solution {
    public ArrayList<Integer> find3Numbers(int[] arr) {
        // code here
        if (arr == null || arr.length < 3) {
            return new ArrayList<>();
        }
        int length = arr.length;
        ArrayList<Integer> result = new ArrayList<>();

        int verySmall = Integer.MAX_VALUE;
        int secondSmall = Integer.MAX_VALUE;
        int smallBeforeSecondSmall = Integer.MAX_VALUE;

        for (int num : arr) {
            if (num < verySmall) {
                verySmall = num;
            }
            else if (num > verySmall && num < secondSmall) {
                secondSmall = num;
                smallBeforeSecondSmall = verySmall;
            }
            else if (smallBeforeSecondSmall < secondSmall && secondSmall < num) {
                result.add(smallBeforeSecondSmall);
                result.add(secondSmall);
                result.add(num);
                break;
            }
        }
        return result;
    }
}
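As a sanity check, here's the same greedy logic transcribed to Python (not an official solution, just a quick way to poke at it):

```python
def find3numbers(arr):
    """Single-pass greedy: return [a, b, c] with a < b < c appearing in order, else []."""
    INF = float("inf")
    very_small = second_small = small_before_second = INF
    for num in arr:
        if num < very_small:
            very_small = num
        elif very_small < num < second_small:
            second_small = num
            small_before_second = very_small  # lock in the min seen before second_small
        elif small_before_second < second_small < num:
            return [small_before_second, second_small, num]
    return []
```

For example, `find3numbers([5, 1, 6, 2, 7])` returns `[1, 2, 7]`: the `1` that historically preceded the `2` is what gets paired, exactly the job of `smallBeforeSecondSmall`.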

Complexity Analysis

  • Time Complexity: O(N). We iterate through the array exactly once, performing basic comparison operations at each step.
  • Space Complexity: O(1). Aside from the output ArrayList, we only use three integer variables, meaning our memory footprint remains constant regardless of the array's size.

I shipped 41 tools on a $5 VPS in 4 days — here is everything I learned

2026-04-11 03:22:23

Four days ago I started a 30-day challenge: ship as many useful tools as possible on a single $5 VPS, using Python + Flask + SQLite + systemd. No Docker, no Kubernetes, no cloud functions.

Today the counter is at 41 tools across 9 categories, all live at foxyyy.com.

What I shipped

Crypto & Trading (the starting point):

  • Cross-exchange funding rate scanner (20 exchanges, ~6,800 perps)
  • Autonomous signal alerts bot
  • Exchange uptime tracker
  • Historical dataset (2.58M rows, CC BY 4.0)

Developer Tools (the largest category):

French Business Tools:

Utilities:

And more. Full changelog: foxyyy.com/changelog

The stack

Every tool runs on one $5 OVH VPS (2 vCPU, 2GB RAM):

  • Python 3.12 + Flask
  • SQLite (WAL mode) for anything that needs persistence
  • systemd for every service (17 active units)
  • nginx as reverse proxy + Let's Encrypt HTTPS
  • No Docker, no Redis, no Postgres

Total memory usage across all 17 services: ~400 MB. CPU mostly idle.

What I learned

  1. Client-side tools are free to ship. Half of these tools (regex, base64, JSON, UUID, pomodoro, etc.) are pure JavaScript. No server process, no port, no systemd unit. Just an HTML file with <script> tags. Once routed, the cost of hosting them is literally zero.

  2. The boring stack scales to dozens of services. systemd + SQLite + Flask is enough for everything I've built. Each service starts in <1 second, uses ~20-40MB, and auto-restarts on failure.

  3. Distribution is harder than building. I can ship a tool in 30 minutes. Getting 10 people to see it takes 10x longer. My Twitter account has 2 followers. My dev.to profile is brand new. The tools are invisible without external traffic.

  4. A pricing page makes the free tools feel more valuable. Once I added a pricing page that shows "30+ tools are free, 4 are paid", the free tools stopped feeling like side projects and started feeling like a product.

Monetization

4 products have Stripe payment links:

  • 30 Boring Patterns — €19 one-time (production recipes for solo devs)
  • Flask SaaS Starter Kit — €29 one-time (boilerplate with auth + billing)
  • Screenshot API Pro — €9/month (higher rate limit)
  • Email Checker Pro — €5/month (bulk API)

Revenue so far: €0. Day 4. The funnel exists, the traffic doesn't yet.

What's next

26 more days. More tools in more domains. The goal is to find which tool has organic traction and double down on it. Every tool is a lottery ticket — the more I ship, the higher the chance one breaks through.

If you want to explore: foxyyy.com

Built by Clément Slowik. All tools open on foxyyy.com. OSS repos on GitHub.