The Practical Developer

A constructive and inclusive social network for software developers.

Why $9/mo SaaS is Dead in 2026

2026-04-09 12:14:42

The Era of Cheap Acquisition is Over

Between 2015 and 2021, a dominant strategy emerged for independent software developers: build a simple, single-feature productivity tool, price it at $5/mo or $9/mo, and promote it cheaply across Reddit, Product Hunt, and low-budget Facebook Ads.

At a $5/mo price point, purchase resistance is nearly nonexistent. Conversion rates were high, and hitting a flashy $20,000 MRR felt inevitable given enough top-of-funnel volume.

Welcome to 2026. The math no longer works. It hasn't worked for years.

The Margin Compression Trap

If you are a solo founder or a small team, pricing is your single biggest lever. Let's look at the brutal reality of the $9/month SaaS app today:

  1. Stripe Fees: Standard card pricing is 2.9% + $0.30, so your $9.00 drops to roughly $8.44 before anything else.
  2. Customer Acquisition Cost (CAC): The cost of driving highly qualified clicks via Google Ads or Meta platforms has skyrocketed. At a 5% baseline conversion rate and $1.50 per click, your CAC is $30.
  3. The Payback Period: At roughly $8.44 of actual revenue per month, you aren't profitable on that customer until Month 4.
  4. Churn: B2C and cheap prosumer software have astronomically high churn (often over 8% monthly), so a large share of your users will cancel before you ever reach payback.

You are losing money on acquiring these users. You are effectively paying your customers to use your software.
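That math is worth running yourself. The sketch below uses the CAC and churn figures from the list above plus Stripe's standard 2.9% + $0.30 card pricing; treat every number as an illustrative assumption, not audited data:

```python
import math

# Illustrative assumptions: Stripe standard card pricing (2.9% + $0.30),
# $30 CAC, 8% monthly churn -- swap in your own numbers.
price = 9.00
net = price * (1 - 0.029) - 0.30        # ~$8.44 kept per month after fees
cac = 30.00                              # cost to acquire one customer
churn = 0.08                             # monthly cancellation rate

payback_months = math.ceil(cac / net)    # months until the customer turns profitable
survival = (1 - churn) ** (payback_months - 1)  # share still subscribed by then

print(f"net ${net:.2f}/mo, payback in month {payback_months}, "
      f"{survival:.0%} of customers last that long")
```

Roughly one customer in five cancels before the payback month ever arrives, and that acquisition spend is never recovered.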

B2B is the Only Viable Independent Path

When founders ask us for strategy advice, we tell them to break their B2C habits. Stop building habit trackers. Stop building $5 AI tools that write tweet drafts.

You must transition to B2B architecture that solves a critical business bottleneck. Businesses do not blink at paying $49/mo, $99/mo, or even $499/mo for software that directly saves them manual labor or drives new revenue.

When your Monthly Recurring Revenue is jumping in increments of $99, the unit economics of a $40 Customer Acquisition Cost become profoundly profitable on Day 1.

Do the Math Yourself

Before writing a single line of backend logic, you must prove the unit economics of your business model. You don't need a Wall Street firm; you just need disciplined basic math.

We built the Break-Even Analysis template precisely so founders can visually map their Fixed Costs, Variable Costs, and Price Points to determine exactly how many users and what pricing tiers are required to achieve profitability.
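The core of that analysis fits in a few lines. A minimal break-even sketch (all dollar figures below are placeholders, not numbers from the template):

```python
import math

# Placeholder figures -- plug in your own monthly costs and pricing.
fixed_costs = 500.00      # monthly overhead: hosting, tooling, email, etc.
price = 49.00             # monthly subscription price under test
variable_cost = 4.00      # per-customer cost: payment fees, support, compute

contribution = price - variable_cost               # margin each customer adds
break_even_users = math.ceil(fixed_costs / contribution)
print(break_even_users)   # customers needed to cover fixed costs
```

Run it once per candidate price tier; the comparison between a $9 tier and a $49 tier usually ends the debate on its own.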

Don't launch a $9/mo tool. Use our Financial Models to stress-test your pricing strategy before you build the product, and charge what it actually takes to survive.

Why monitoring your search engine matters: Manticore ➡ Prometheus ➡ Grafana

2026-04-09 12:13:56

One of our users reached out recently with a familiar problem: search had suddenly become noticeably slower, even though nothing looked obviously broken.

The service was up, no errors in the logs, CPU usage looked normal — yet users were starting to complain that results felt sluggish.

This is how search problems usually show up in production. Not with a dramatic outage, but as a slow, creeping degradation. A little more traffic here, some extra indexing there, and before you know it, performance has slipped.

By the time users notice, the real issue has often been building for hours. Without good visibility you’re left guessing: Is the system overloaded? Is one table eating up resources? Or is something else quietly going wrong?

That’s why monitoring matters. It turns the vague “search feels slow” complaint into something you can actually diagnose and fix.

Introducing the Manticore Grafana dashboard

This is exactly what our new Manticore Grafana dashboard is built for.

Instead of raw metrics, it gives you a clean, practical view of what really matters when running search in production. At a glance you can see:

  • Is the node healthy?
  • How heavy is the current load?
  • Are queries slowing down?
  • Which tables are using the most resources?

It’s designed to help you move quickly from a user symptom to the actual root cause.

How the stack works

The setup is straightforward: Manticore → Prometheus → Grafana.

Manticore exposes rich internal metrics, Prometheus collects and stores them as time-series data, and Grafana visualizes everything with our pre-built dashboard — including 21 production-ready alerts.

You can launch the entire stack with a single Docker command:

docker run -e MANTICORE_TARGETS=localhost:9308 -p 3000:3000 manticoresearch/dashboard

(Just change the MANTICORE_TARGETS environment variable if your Manticore instance is running somewhere else.)

If you prefer to set things up manually, start from a minimal Prometheus scrape config:

scrape_configs:
  - job_name: "manticore"
    static_configs:
      - targets: ["localhost:9308"]

Exploring the dashboard

The dashboard is laid out so you can follow a natural troubleshooting flow.

1. Health summary (start here)

Open the dashboard and look at the top row first. It gives you an instant picture of the node’s overall health.

Key panels to watch:

  • Health / Up — Is Prometheus even able to scrape metrics?
  • Health / Crash indicator — Any recent crashes?
  • Workers Utilization % + Load / Queue pressure — These two together are gold. High utilization plus rising queue pressure is one of the clearest early signs the node is approaching saturation.

The System Score panel also gives you a quick overall health rating at a glance.

2. Query load and latency

Next, check what kind of workload the system is handling.

  • QPS Total shows overall traffic levels.
  • Search Latency (p95/p99) is one of the most important panels — averages can hide problems, but percentiles show what your users are really experiencing.
  • Slowest Thread helps spot expensive or stuck queries.
  • Work Queue Length and Worker Saturation together tell you whether the node is keeping up or starting to fall behind.
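Why percentiles rather than averages? A synthetic example makes it concrete (the numbers below are made up, not Manticore output): a handful of slow queries barely move the mean, but p95 and p99 expose them immediately.

```python
import statistics

# Synthetic latencies: 94 fast queries plus a handful of pathological ones.
latencies_ms = sorted([20] * 94 + [600, 700, 800, 900, 1000, 1100])

mean = statistics.mean(latencies_ms)
p95 = latencies_ms[int(0.95 * len(latencies_ms))]   # simple index-based percentile
p99 = latencies_ms[int(0.99 * len(latencies_ms))]

print(f"mean={mean}ms  p95={p95}ms  p99={p99}ms")
```

The mean stays around 70 ms while p99 sits above a second; users hitting that tail are the ones complaining.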

3. Memory and resources

This section is one of the most useful because memory pressure is a very common (and often hidden) cause of slowdowns in search engines. Instead of showing one vague number, the dashboard breaks it down so you can see exactly where the growth is happening.

  • Searchd RSS and Buddy RSS show the total resident memory — how much physical RAM the main search daemon (searchd) and the Buddy helper process are actually using right now.
  • The Anon RSS panels go one level deeper. “Anonymous” memory is the private, dynamic RAM allocated by Manticore itself (think heap, query caches, loaded data structures, temporary buffers — everything not backed by a file on disk). Unlike file-mapped memory (which the OS can page out or reclaim), anon memory is what usually puts real pressure on your system.

Why show both RSS and Anon RSS? Total RSS gives you the big picture, but Anon RSS tells you the story behind it. If total RSS is climbing but Anon RSS is stable, the growth might be harmless (e.g. more cached files). If Anon RSS is also rising fast, that’s usually a sign that Manticore’s own data structures or query activity are consuming more and more memory — exactly the kind of thing that leads to slower queries or even swapping.
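You can inspect the same breakdown for any process on a Linux box; kernels since 4.5 expose RssAnon and RssFile per process in /proc (see the proc(5) man page). A quick sketch:

```python
def rss_breakdown(pid="self"):
    """Read total, anonymous, and file-backed resident memory (kB) from /proc."""
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    total = int(fields["VmRSS"].split()[0])          # everything resident in RAM
    anon = int(fields["RssAnon"].split()[0])         # private heap, buffers, caches
    file_backed = int(fields["RssFile"].split()[0])  # mmapped files the OS can reclaim
    return total, anon, file_backed

total, anon, file_backed = rss_breakdown()
print(f"RSS={total} kB  anon={anon} kB  file-backed={file_backed} kB")
```

Point it at a searchd PID to see whether growth is coming from anonymous allocations (real pressure) or from reclaimable file mappings (usually harmless).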

At the bottom you’ll also see several quick counters:

  • Resources / FDs (searchd) — current number of open file descriptors used by the search daemon. Manticore opens a lot of files for indexes (especially large real-time tables with many disk chunks). If this number gets too high you can hit the OS limit and start seeing “Too many open files” errors. You can raise the soft limit with the max_open_files setting (see the Manticore docs on server settings).
  • Active workers, table counts, and non-served tables — all quick signals that something might need attention.

4. Table-level insights

Now zoom in on the data itself.

  • Document counts per table
  • Top 10 tables by RAM and disk usage
  • Tables / Health panel — this one is particularly valuable because it combines docs, RAM, disk, and state flags (locked/optimizing) in a single view.

5. Cluster state and history

For distributed setups you get node status and sync state. The history section is excellent for answering the most important question during any incident: what changed right before things slowed down?

Conclusion

Remember the user who reached out because search had suddenly become noticeably slower?

Once he enabled this dashboard, the problem became obvious almost immediately: workers were getting busier, queues were growing, and memory pressure was building — all before any obvious errors or crashes appeared. With clear visibility into what was actually happening inside the engine, he quickly pinpointed the root cause, made the right adjustments, and got performance back to the fast, reliable level his users expected.

The real value of monitoring isn’t just seeing pretty graphs. It’s catching those creeping issues early — before they cost you money or customers.

This dashboard removes that blind spot. It gives you the visibility you need to keep your search fast and reliable.

Why We Built an AI Market Research Tool to Pivot Our Own Company

2026-04-09 12:13:44

The Feature Factory Trap

About six months ago, Acrutus was stuck. Like many technical founders, we had fallen deep into the "Feature Factory" trap. We were building complex AI features into an existing application, assuming the sheer force of our technical architecture would attract paying users.

It didn't.

We were solving technical challenges that were fun to code, but they delivered no real market value to business operations.

We needed a pivot, but we didn't want to rely on gut instinct or scanning random HackerNews threads to find it. We are engineers; we needed an objective, mathematical approach to finding real market pain.

The FWPN Database

"Find What People Need." That was the objective.

We began quietly building an internal, highly specialized market research platform. We hooked up Apify scrapers to deep-crawl specific subreddits (r/sweatystartup, r/SaaS, r/Entrepreneur). We built ingestion pipelines mapping to an AI engine that didn't just summarize posts, but aggressively scored them based on intent to pay, urgency, and founder pain.

Within a few weeks, we had amassed an internal database of 1,841 startup ideas. This wasn't a list of "AI prompt generator" ideas; this was raw data scraped from business owners complaining about broken invoicing software, unscalable ad tracking, and workflow bottlenecks.
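To make the scoring idea concrete, here is a toy weighted-signal scorer in the spirit the post describes. The signal names, weights, and example posts are entirely hypothetical, not Acrutus's actual model:

```python
# Hypothetical weights -- illustrative only, not the FWPN scoring model.
WEIGHTS = {"intent_to_pay": 0.5, "urgency": 0.3, "founder_pain": 0.2}

def score(signals: dict) -> float:
    """Combine 0-1 signal strengths into a single 0-100 pain score."""
    return 100 * sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

posts = [
    {"title": "Our invoicing tool keeps double-billing clients",
     "signals": {"intent_to_pay": 0.9, "urgency": 0.8, "founder_pain": 0.9}},
    {"title": "What's a cool side project idea?",
     "signals": {"intent_to_pay": 0.1, "urgency": 0.1, "founder_pain": 0.2}},
]
ranked = sorted(posts, key=lambda p: score(p["signals"]), reverse=True)
print([round(score(p["signals"])) for p in ranked])  # [87, 12]
```

Even this crude version separates "business owner losing money today" from "idle curiosity," which is the whole point of scoring rather than summarizing.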

What the Data Told Us

A strange, fascinating trend began emerging across the top quartile of our scored data pipeline. People were desperately trying to build internal tools and basic workflow applications, but were failing at the starting line.

They weren't failing because their business logic was flawed. They were failing because setting up a high-quality frontend infrastructure, writing the boilerplate authentication UIs, and structuring clean data dashboards was taking them 3 weeks instead of 3 days.

The market didn't need another generic App UI library with 7,000 components requiring a Webpack PhD to install. It required beautifully designed, pre-fabricated, robust starting points.

The Birth of Acrutus Templates

We stopped building our previous AI wrapper. Instead, we realized that our core competency—building ultra-premium, conversion-optimized interfaces—was exactly what the market was requesting in the FWPN database.

We decoupled the frontend aesthetics from the backend framework wars, and the Acrutus Template platform was formed. We wanted to build the perfect admin panels, the sharpest SaaS landing pages, and the cleanest financial dashboards, delivering them in pure, unadulterated HTML/CSS so any founding engineer could deploy it instantly.

We built an AI platform to launch AI startups, and ironically, the data told us to build picks and shovels for the gold rush instead.

If you're stuck in a technical rut, step back. Find what people need. And if you need to build what they need quickly? Our templates are waiting.

Why Pure HTML/CSS Templates Still Rule in 2026

2026-04-09 12:12:49

The Boiling Frog of Frontend Complexity

If you've bought a "SaaS Boilerplate" or "UI Kit" recently, you know the exact script. You clone the repository, enthusiastically run npm install, and watch as 1.4 gigabytes of dependencies flow into your node_modules directory.

Thirty seconds later, you boot up the dev server and are immediately greeted by 47 terminal warnings regarding peer dependency conflicts, a deprecated hook, and a mysterious hydration boundary mismatch. You spend the next three days fighting middleware routing and a rogues' gallery of state management bugs.

You didn't want to become a DevOps engineer. You didn't want to master the idiosyncratic rendering lifecycle of React Server Components. You just wanted a nice-looking dashboard table for your user data.

Welcome to modern web development, where the barrier to entry for shipping a simple landing page has never been higher.

The Case for Bare Metal HTML & CSS

This creeping complexity is exactly why we built the Acrutus template catalog entirely around pure HTML and CSS. By surgically stripping away the framework logic, we remove 90% of the friction holding back developers from actually shipping their product.

When you buy an Acrutus template, you aren't fighting a tech stack. Instead:

  1. True Portability: You get semantic, accessible HTML that can be dropped verbatim into any backend templating engine. Building a Python/Django monolith? Dropping it into Laravel Blade? Prototyping in Go with html/template? It just works.
  2. Zero-Dependency Styling: You get a single, meticulously crafted styles.css file with zero dependencies. No PostCSS configurations to debug. No fighting the Tailwind CLI. No build step required just to change a button from blue to green.
  3. Eternal Shelf Life: JavaScript frameworks churn every 18 months. An Acrutus template written today will render perfectly in a browser 15 years from now.

Returning to the Fundamentals

Our users—primarily backend engineers, indie hackers, and data scientists—consistently report launching their MVPs up to 3x faster using our templates. They aren't spending cycles fighting UI tooling. They define their data models, wrap the results in our markup, and it instantly looks like a million-dollar enterprise product.

For instance, looking at our SaaS Analytics Dashboard, the entire aesthetic is driven by a lightweight CSS variables system. Want to change the accent brand color or the dark-mode background threshold? You alter three CSS variables at the root level, and the entire application seamlessly updates. Try doing that across a 50-component React tree with hardcoded utility classes.
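As a sketch of that pattern (the variable names here are illustrative, not the actual template's):

```css
:root {
  --accent: #2563eb;      /* brand accent color */
  --bg-dark: #0f172a;     /* dark-mode background */
  --radius: 8px;          /* shared corner rounding */
}

.btn-primary {
  background: var(--accent);
  border-radius: var(--radius);
}
```

Change `--accent` once at the root and every component that references it updates, with no build step in between.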

The Verdict

The future of web development isn't always more complexity, deeper abstractions, and heavier client-side bundles. Sometimes the most powerful tool a developer can have is exceedingly good CSS.

Stop fighting your UI. Download the markup, plug in your backend, and go launch your product.

Open Source: Control Claude Code / Codex CLI Entirely from Your Phone with Feishu (Lark) — Approve, Choose, and Send Commands on the Go

2026-04-09 12:10:40

Ever had this happen? Claude Code has been running for a while and finally pops up a permission prompt — but you've already walked away to grab coffee.

Or Codex finishes an entire task, and you only realize it when you come back — ten minutes too late.

Today I'm open-sourcing a little tool I use daily — Agent Notifier. It routes all interactions from Claude Code and Codex CLI to Feishu, so you can handle everything from your phone without being chained to the terminal.

GitHub: https://github.com/KaminDeng/agent_notifier

1. The Problem It Solves

When you use Claude Code or Codex CLI for coding, the biggest pain point isn't that the AI isn't smart enough — it's the interaction gap:

  • The AI needs your approval to run a command → you're away from your desk → task stalls
  • The AI offers three approaches and asks you to pick one → you don't see it → task stalls
  • The task finishes → you have no idea → you've been waiting for nothing for 30 minutes

At its core, these CLI tools have no mobile interaction layer. You have to sit in front of the terminal, watching the output in real time.

Agent Notifier's approach is simple: turn Feishu into your remote terminal controller.

| Pain Point | Solution |
| --- | --- |
| Stuck waiting for permission prompts at the terminal | Feishu pushes interactive cards in real time — one tap on your phone |
| Want mobile access, but there's no official app | Feishu is a full multi-platform app — iOS / Android / Mac / Windows / Web |
| Self-hosted push notifications need a server, domain, and SSL | Feishu's long-connection (WebSocket) mode requires no public IP — direct local connection |
| Enterprise app approval is a bureaucratic nightmare | Feishu's custom enterprise apps get instant approval; personal accounts work too — completely free |
| Running tasks across multiple terminals, notifications get tangled | Multi-terminal parallel routing — each terminal gets its own delivery channel, no cross-talk |

2. What It Looks Like

Here are actual screenshots from the Feishu mobile app:

Permission Confirmation & Option Selection

When Claude Code needs you to approve a command, or an AskUserQuestion prompt pops up with choices, Feishu sends you an interactive card:

Left: Permission confirmation — Allow / Deny / Allow for Session / Allow Always + text input
Center: Permission options — Tool option buttons (e.g. ExitPlanMode) + text input
Right: Option selection — AskUserQuestion with dynamic choices + Other + free-form input
Footer: project name · terminal ID (fifo:/tmp/claude-inject-ptsN) · session duration · timestamp

Tap a button or type a response, and the action flows directly back to your local terminal — as if you were typing on your keyboard.

Live Execution Summary

Every time Claude invokes a tool, Feishu pushes a real-time execution summary card. Cards for the same task update in-place instead of flooding your chat:

Live execution summary — tool call table · in-place patch updates for the same task
Footer: project name · timestamp

Task Completion Notification

When a task finishes, you get a completion card with a change summary, token usage, and duration stats. There's a text input at the bottom so you can continue the conversation right away:

Left: Change summary + text input to continue chatting
Right: Test results table + text input to continue chatting
Footer: project name · session duration · timestamp · token usage breakdown (input / output / cache read / cache write)

3. Supported Card Types

| Scenario | Card Color | Description |
| --- | --- | --- |
| Permission confirmation | Orange | Allow / Allow for Session / Deny + text input |
| AskUserQuestion (single choice) | Orange | Dynamic option buttons + Other + text input |
| AskUserQuestion (multi-part) | Orange | Q1 → Q2 → Q3 sent sequentially |
| Task complete | Green | Summary, duration, tokens + text input |
| Abnormal exit | Red | Error details + text input |
| Live execution summary | Blue | In-place patch updates for the same task |

4. Why Feishu Instead of WeChat / Telegram / Slack?

This is the question I get asked the most, so let me address it:

  1. Feishu custom apps are free — you can create one with a personal account, and enterprise approval is instant
  2. Feishu supports long connections (WebSocket) — no public IP or domain required. Whether you're at home, at the office, or on a VPN, it just works
  3. Feishu's interactive card system is incredibly powerful — buttons, text inputs, tables, multi-column layouts — far beyond what a basic message notification can do
  4. Feishu syncs natively across all devices — phone, desktop, tablet, and web all receive and can interact with the same card
  5. No need to build a separate app — the Feishu client itself becomes your remote control

That said, the architecture uses a channel abstraction layer (src/channels/), so adding support for other platforms is straightforward.

5. Get Started in 5 Minutes

Prerequisites

  • Node.js >= 18
  • Python 3 (for the PTY terminal relay)
  • A Feishu account

Installation

# 1. Clone the repo
git clone https://github.com/KaminDeng/agent_notifier.git
cd agent_notifier

# 2. Edit .env with your Feishu app credentials (auto-created on first run)
#    FEISHU_APP_ID=your_app_id
#    FEISHU_APP_SECRET=your_app_secret

# 3. One-command install (handles all configuration automatically)
bash install.sh

# 4. Reload your shell
source ~/.zshrc  # or source ~/.bashrc

# 5. Use Claude Code as usual
claude

install.sh automatically takes care of:

  • Installing npm dependencies
  • Writing Claude Code hooks to ~/.claude/settings.json
  • Injecting claude / codex shell wrapper functions
  • Starting the Feishu listener and registering it for auto-start (launchd on macOS, systemd on Linux)

Running install.sh again is safe — it always cleans up before reinstalling.

Feishu App Setup (3 Minutes)

  1. Log in to the Feishu Open Platform and create a custom enterprise app
  2. Copy the App ID / App Secret into your .env file
  3. Enable bot capabilities
  4. Set the event subscription to Long Connection (no public IP needed)
  5. Add the event: card.action.trigger
  6. Request permissions: im:message, im:message:send_as_bot, im:chat:readonly
  7. Publish the app and add the bot to your target group

That's it. No domain, no SSL certificates, no dedicated server.

6. Cross-Platform Support

| Platform | Service Management | Auto-Start |
| --- | --- | --- |
| macOS | launchd | RunAtLoad + KeepAlive |
| Linux (systemd) | systemd user service | systemctl --user enable |
| Linux (SSH / no systemd) | nohup + crontab | @reboot crontab fallback |

Uninstalling is a single command:

bash uninstall.sh

This cleans up all configuration, stops services, and removes hooks and shell injections.

7. How It Works (Brief Overview)

There are two main pipelines:

Claude Code Pipeline:

Claude Hooks fire an event → hook-handler.js parses it → generates Feishu interactive card
                                                              ↓
User taps/types in Feishu → feishu-listener.js receives callback → injects input into local terminal

Codex CLI Pipeline:

pty-relay.py creates a PTY terminal proxy → captures Codex output → generates Feishu card
                                                              ↓
User taps/types in Feishu → feishu-listener.js receives callback → injects input via FIFO into terminal

Terminal injection supports multiple methods: tmux, PTY FIFO, pty master direct write, and TIOCSTI — the best method is auto-detected.
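The FIFO route is the easiest of those to picture: the listener writes the user's reply into a named pipe, and the terminal-side relay reads it as if it were keystrokes. A standalone sketch of the mechanism (not the project's actual code):

```python
import os
import tempfile

# A named pipe stands in for the terminal's input channel: one process
# plays the Feishu listener (writer), the other plays the terminal relay
# (reader) that feeds the CLI.
fifo = os.path.join(tempfile.mkdtemp(), "inject-fifo")
os.mkfifo(fifo)

pid = os.fork()
if pid == 0:                       # child: the "Feishu listener"
    with open(fifo, "w") as w:     # open blocks until the reader connects
        w.write("y\n")             # e.g. approving a permission prompt
    os._exit(0)

with open(fifo) as r:              # parent: the "terminal relay"
    reply = r.read()               # reads until the writer closes its end
os.waitpid(pid, 0)
print(repr(reply))
```

The real tool layers session routing and card callbacks on top, but the injection step itself is this simple on the happy path.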

8. Project Structure

agent_notifier/
├── install.sh / uninstall.sh          # Install / uninstall
├── hook-handler.js / live-handler.js  # Claude Hooks entry points (thin shims)
├── bin/
│   └── pty-relay.py                   # PTY terminal relay
├── src/
│   ├── apps/                          # App entry points (claude-hook, claude-live, feishu-listener, etc.)
│   ├── adapters/                      # Claude / Codex adapters
│   ├── channels/                      # Feishu channel (card rendering, client, interaction handling)
│   ├── core/                          # Low-level primitives (session store, terminal injector)
│   └── lib/                           # App-level services (env config, session state, terminal inject)
├── tests/                             # 81 test cases
└── scripts/                           # Debug / test scripts

9. Who Is This For?

If any of the following apply to you:

  • You use Claude Code or Codex CLI for daily coding
  • You frequently step away from your desk and don't want tasks stalling on permission prompts
  • You want to interact with your AI coding assistant from your phone
  • You run multiple terminals simultaneously and need a unified notification hub

Then this tool was built for you.

10. Wrapping Up

The project is fully open source under the MIT license. Stars, forks, and issues are all welcome.

GitHub: https://github.com/KaminDeng/agent_notifier

If you find this tool useful, a star on the repo is the best way to show your support. Feel free to open an issue if you have questions or feedback.

Introduction to Monetized AI

2026-04-09 12:09:31

What is Monetized AI?

Monetized AI refers to the use of artificial intelligence to generate revenue. This can be achieved through various means, such as creating and selling AI-powered products or services, or using AI to optimize business operations and increase efficiency.

Applications of Monetized AI

Monetized AI has numerous applications across industries, including healthcare, finance, and marketing. For instance, AI-powered chatbots can be used to provide customer support, while AI-driven analytics can help businesses make data-driven decisions.

Conclusion

In conclusion, monetized AI has the potential to revolutionize the way businesses operate and generate revenue. As AI technology continues to evolve, we can expect to see even more innovative applications of monetized AI in the future.