
You Think Being Replaced by AI is Scary? I Think Developer Irrelevance is the Real Horror

2026-01-29 16:15:05

Everyone's Watching the Wrong Movie

The discourse around "AI replacing developers" has become exhausting. We're all arguing about the wrong thing.

One camp is screaming: "AI will take all our jobs! Learn to prompt or perish!"

The other camp is laughing: "AI can't even handle production-grade distributed systems! We're safe!"

Meanwhile, I'm over here watching something completely different happen. And it's way scarier than either narrative.

The market doesn't need as many of us anymore.

Not because AI got "good enough" to replace us. But because the gap we were filling is closing from a completely different direction.

Let Me Tell You About Bob

Bob runs a small development shop. Makes decent money building apps for local businesses. Nothing fancy - inventory systems, booking platforms, internal dashboards. Bread and butter work.

Karen owns a small restaurant chain. She has accounting problems. Her invoices are scattered across folders, formats are inconsistent, reconciliation is a nightmare.

The Old World:

  1. Karen realizes: "I need software for this"
  2. Karen emails Bob: "Can you build me an invoice reconciliation tool?"
  3. Bob quotes $3,000, two-week timeline
  4. Karen either pays or continues suffering manually
  5. Bob feeds his family

The New World:

  1. Karen has the same problem
  2. Karen opens ChatGPT: "I need to compare these Excel files and show me which rows are different"
  3. ChatGPT spits out a Python script (see the sketch below)
  4. Karen runs it on her laptop
  5. Problem solved
  6. Bob never gets the email
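
To see why Bob never gets the email, here is the kind of throwaway script Karen ends up with. This is a minimal sketch assuming two exports with a shared "Invoice ID" column; the file names and the column are made up:

import pandas as pd

# Load both exports (hypothetical file names).
q3 = pd.read_excel("invoices_q3.xlsx")
q4 = pd.read_excel("invoices_q4.xlsx")

# Outer-merge on the shared key, then keep rows that appear in only one file.
diff = q3.merge(q4, on="Invoice ID", how="outer", indicator=True)
print(diff[diff["_merge"] != "both"])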

"But wait," the engineers cry, "what happens when it breaks? Surely she'll need Bob to maintain it!"

You think like a programmer. Go touch some grass in the Accounting department.

When Karen's Excel pivot table breaks—when a formula throws a #REF! error or a VLOOKUP stops matching—does she call Bob the Excel Consultant? Does she open a Jira ticket? Does she enter a 3-week maintenance cycle?

No. She mutters "ugh this stupid thing," hits Ctrl-A, Delete, and spends 12 minutes making a new one from scratch. She treats it like a sticky note, not a cathedral.

Cost of regeneration: $0.

Cost of her time: ignored.

Call to Bob: never happening.

The script broke? She'll just prompt a new one. NewInvoiceBot_v7.py replaces NewInvoiceBot_v6.py the same way she made a new Excel sheet for Q4 instead of fixing the Q3 one. No maintenance culture exists because no asset value was assigned.

You were maintaining code because you viewed it as infrastructure. Karen views it as consumable.

The Localhost:3000 Revelation

You know what the big dogs love to mock? Vibe coders sharing screenshots of their localhost:3000 projects.

"Lmao you didn't even deploy it? That's not real software!"

But here's the thing they're missing: for most people, localhost IS the deployment.

The localhost:3000 screenshot isn't amateur hour. It's the Excel-fication of code. Disposable. Ephemeral. Infinitely regeneratable.

We laughed BUT we missed the bigger precedent: Excel already proved that business users prefer disposable regeneration over maintenance.

Karen doesn't need "software as a service." She needs "software as a consumable." Like a paper towel. Use it, crumple it, grab another.

The Infrastructure We Built is Post-Tragedy Necessity

Here's where it gets existential.

Why do we have:

  • Authentication systems?
  • Multi-tenancy?
  • Cloud infrastructure?
  • PCI-DSS compliance?
  • Load balancers?
  • API gateways?
  • Maintenance retainers?

Not because they're inherently necessary to solve problems.

They're necessary because we built an entire industry around SELLING software as assets to people who couldn't make it themselves. We convinced businesses that software is capital expenditure—something to maintain, depreciate, and protect.

But go look at Karen's desktop. How many "broken" Excel files are sitting there? Zero. She deleted them. How many "deprecated" Python scripts? She doesn't keep deprecated things because there's no cost to regeneration.

How Many of You Actually Work at Netflix Scale?

Every AI-replacement thread devolves into:

  • "AI can't handle distributed systems at scale!"
  • "What about microservices architecture?"
  • "Complex algorithmic optimization!"

Cool. How many developers actually work on those problems?

Maybe 5%?

The other 95% are:

  • Building CRUD apps (Excel with extra steps)
  • Making forms talk to databases (Excel with a web UI)
  • Integrating third-party APIs (Excel VLOOKUP to external data)
  • Building internal dashboards (Excel charts, but slower)
  • WordPress sites (Excel for websites)
  • E-commerce platforms (Excel with checkout)
  • Simple mobile apps (Excel on your phone)

This is EXACTLY the work AI is getting scary good at. And it's exactly the work that can be treated as consumable rather than maintained.

The Disposability Gap is Everything

When Karen can solve 70% of her own problems with disposable, regeneratable tools, she doesn't accumulate technical debt. She throws away the debt with the tool. Bob's business just shrank by 70%, but he's still waiting for the maintenance call that'll never come because there's nothing to maintain—only things to regenerate.

Will people still build software? Yes.

Will "Software Engineer" be a large, distinct profession in 10 years? Maybe not. Or rather, it'll be split between:

  • Complexity Surgeons (the 5% doing distributed systems, security, stateful multi-user nightmares)
  • Literate Business Users (the 95% who "write code" the way they "write Excel formulas"—disposably, daily, without calling it "engineering")

What I AM Saying

Getting replaced is scary but clean. It's dramatic. You see it coming.

Irrelevance is insidious.

It's Bob watching his client calls drop from 20/month to 15 to 10 to 5, each time thinking "just a slow quarter," not realizing that Karen now treats code like she treats spreadsheet cells—disposable, regeneratable, not worth a phone call.

It's looking at your Kubernetes certification and realizing the market is moving toward infinite sticky notes that need no orchestration because they're used once and deleted.

It's defending against the headshot while bleeding out from deprofessionalization.

Look outside your cubicle. Look at Accounting. Look at Marketing. They don't maintain their Excel messes—they remake them. And they're about to treat your beautiful REST APIs and React components the exact same way.

PS: Yes, I wrote this with AI assistance. Because I'm not a Lotus 1-2-3 consultant pretending Excel doesn't exist. I'm adapting.

PPS: If this made you angry, hit that "Sign Out" button. But first, go ask Karen in Accounting how many times she "maintained" an Excel formula versus just making a new sheet. I'll wait.

PPPS: If this resonated, you should probably start building your ark. The flood isn't coming—it's already here. We're just standing in ankle-deep water arguing about whether Kubernetes can save us while Karen is throwing away Python scripts like used Kleenex.

Why Real-Time Communications and Web Applications Need Different Boundaries — A Comparison of SBC and WAF

2026-01-29 16:00:34

If you’ve worked with VoIP, SIP, or real-time communications, you’ve probably encountered a Session Border Controller (SBC).

If you build or operate web applications and APIs, you’re almost certainly familiar with Web Application Firewalls (WAFs).

At first glance, these two technologies seem to live in completely different worlds. One protects voice and signaling traffic, the other defends HTTP-based applications. But if you look closely at their design philosophy, SBCs and WAFs actually share a surprising amount of DNA.

This article compares SBCs and WAFs from a design perspective, explains how they solve different layers of the same security problem, and shows how combining them leads to a more resilient architecture. Finally, we’ll look at how a modern, self-hosted WAF like SafeLine fits into this picture.

Why Compare SBC and WAF at All?

Both SBCs and WAFs exist for the same fundamental reason:

Exposing a service directly to the internet is dangerous.

Whether it’s SIP signaling or a REST API, once traffic crosses an organizational boundary, you lose trust in:

  • The source
  • The intent
  • The correctness of the protocol usage

The difference is where and how each technology draws the line.

What an SBC Is Designed to Do

A Session Border Controller sits at the edge of a VoIP or real-time communication network, typically between:

  • An internal SIP infrastructure, and
  • External carriers, service providers, or the public internet

Its core responsibilities include:

  • Protocol enforcement

    Validating SIP and RTP behavior against RFCs and operator policies.

  • Session control and state tracking

    Understanding call setup, teardown, and media negotiation as stateful flows.

  • Topology hiding

    Preventing internal IPs, extensions, and infrastructure details from leaking.

  • Security and abuse prevention

    Detecting malformed SIP messages, call floods, toll fraud, and replay attacks.

The key point is this:

An SBC doesn’t just forward packets — it understands sessions.

What a WAF Is Designed to Do

A Web Application Firewall protects HTTP-based applications and APIs by sitting between:

  • Clients (browsers, mobile apps, bots), and
  • Web servers or backend services

Its responsibilities typically include:

  • Request inspection at the application layer

    Parsing URLs, headers, cookies, request bodies, and parameters.

  • Attack detection

    Identifying SQL injection, XSS, command injection, path traversal, and more.

  • Behavioral analysis

    Detecting abnormal request rates, automation patterns, and replay behavior.

  • Policy enforcement

    Blocking, challenging, rate-limiting, or logging suspicious requests.

Like an SBC, a modern WAF is not a simple filter.

It attempts to understand intent behind requests, not just syntax.

Design Philosophy: SBC vs WAF

When you strip away protocols and use cases, the design mindset is strikingly similar.

1. Protocol Awareness vs Payload Awareness

  • SBCs deeply understand SIP signaling, SDP negotiation, and RTP flows.
  • WAFs deeply understand HTTP semantics, API structures, and application context.

Both are built on the idea that generic firewalls are not enough once you reach higher-layer protocols.

2. Stateful vs Stateless Thinking

Traditional firewalls often make decisions per packet or per request.

  • SBCs are explicitly stateful. A SIP message only makes sense in the context of a call session.
  • Modern WAFs increasingly behave the same way, correlating requests across time:
    • Login attempts
    • Token reuse
    • Request sequences

Security decisions improve dramatically once state is introduced.
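
To make that concrete, here is a toy sketch (not any particular product's logic) of a stateful check a WAF-like component might apply: a single failed login looks harmless per request, but a burst of failures from one client only becomes visible once you keep state across requests. The window and threshold values are arbitrary assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # assumed sliding window
MAX_FAILURES = 5     # assumed threshold before blocking

# Per-client state: timestamps of recent failed logins.
failed_logins = defaultdict(deque)

def record_failed_login(client_ip):
    failed_logins[client_ip].append(time.time())

def should_block(client_ip):
    q = failed_logins[client_ip]
    now = time.time()
    # Evict failures that fell out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    # A stateless filter sees one request at a time; this decision needs history.
    return len(q) >= MAX_FAILURES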

3. Trust Boundaries and Normalization

Both technologies sit at a trust boundary.

Their first job is normalization:

  • Is this request well-formed?
  • Does it conform to expected behavior?
  • Is it safe to forward internally?

Only after normalization does forwarding happen.
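
As a small illustration (a sketch, not any product's actual check), a boundary component might refuse to forward a request whose percent-decoded path tries to walk outside the web root:

from urllib.parse import unquote

def is_well_formed_path(raw_path):
    # Decode percent-encoding first, so "%2e%2e" cannot hide a traversal.
    decoded = unquote(raw_path)
    # Require an absolute path and reject any ".." segment.
    return decoded.startswith("/") and ".." not in decoded.split("/")

print(is_well_formed_path("/api/users"))          # True: safe to forward
print(is_well_formed_path("/%2e%2e/etc/passwd"))  # False: rejected at the boundary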

4. Balancing Security and Availability

Neither SBCs nor WAFs can afford to be overly aggressive.

  • Block too much → you break calls or applications.
  • Block too little → attackers get through.

This balance is not theoretical. It’s an operational reality that shapes how both systems are designed and tuned.

How SBCs and WAFs Complement Each Other

In modern systems, real-time communication platforms are no longer isolated.

A typical architecture might look like this:


Clients (Web / Mobile)
|
[ WAF ]
|
Web APIs / Auth
|
RTC Services
|
[ SBC ]
|
SIP / Media Providers

In this setup:

  • The WAF protects:

    • Authentication endpoints
    • REST APIs
    • Web portals
    • Automation and scraping surfaces
  • The SBC protects:

    • SIP signaling
    • Call sessions
    • Media negotiation
    • Carrier-facing interfaces

They operate at different layers, but share the same goal:
reduce attack surface before traffic reaches critical systems.

Where a WAF Still Matters, Even with an SBC

An SBC is excellent at what it does — but it does not:

  • Understand web authentication flows
  • Detect SQL injection in backend APIs
  • Stop credential stuffing against a login endpoint
  • Control abusive bots scraping your web interface

That’s where a WAF remains essential.

As more communication platforms expose:

  • Web dashboards
  • REST APIs
  • Webhooks
  • Admin panels

the web layer becomes a primary attack vector — even if SIP itself is well-protected.

SafeLine WAF: Applying These Principles to the Web

Understanding SBC design makes it easier to appreciate what modern WAFs aim to achieve.

SafeLine WAF follows many of the same principles that made SBCs effective:

Self-Hosted by Design

Just as many operators insist on running SBCs in their own network, SafeLine supports full self-hosted deployment:

  • Traffic stays in your environment
  • Logs are under your control
  • No forced data export to third parties

This matters for compliance, privacy, and operational transparency.

Multi-Layer Detection, Not Just Rules

Instead of relying purely on static signatures, SafeLine combines:

  • Attack pattern recognition
  • Behavioral analysis
  • Request context understanding

This mirrors how SBCs evolved beyond simple SIP filtering into session-aware controllers.

Designed for Real Production Traffic

SafeLine focuses on real-world usage:

  • APIs with complex payloads
  • Automation-heavy environments
  • Bot traffic that mimics human behavior

The goal is not to pass rule tests, but to protect live systems without breaking them.

Observability and Explainability

One common frustration with security tools is the “black box” effect.

SafeLine emphasizes:

  • Clear interception reasons
  • Inspectable request details
  • Tunable policies

This aligns with how experienced teams operate SBCs: visibility first, enforcement second.

Final Thoughts

SBCs and WAFs are built for different protocols, but they share a common security philosophy:

Understand the protocol, track state, normalize behavior, and enforce policy at the boundary.

In modern architectures, it’s not a question of SBC or WAF — it’s where each one fits.

When combined correctly:

  • SBCs protect real-time communication layers
  • WAFs protect web and API layers

And tools like SafeLine WAF bring the maturity and discipline of session-aware security into the web world — where attackers increasingly focus their efforts.

If you’re designing or reviewing your security architecture, looking at these technologies through a shared design lens can make your decisions clearer — and your systems more resilient.

I Turned My Resume into a Spotify Playlist

2026-01-29 15:44:11

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

About Me

Hi, I'm Fatiya Labibah, an Informatics Engineering student who believes that portfolios shouldn't be boring static pages. They should be an experience.

I love music and I love code. So, I asked myself: "What if recruiters could 'play' my projects like they play their favorite songs?"

The result is a fully interactive, full-stack Spotify-Themed Portfolio that blurs the line between a music app and a professional resume.

Portfolio

Live Demo on Google Cloud Run:
https://my-spotify-portfolio-891617309900.asia-southeast1.run.app

(Recommended: Click the Play Music button to hear my favorite song!)

How I Built It: The AI-First Workflow

1. Brainstorming with Gemini 3 Pro

I used Gemini 3 Pro as my Lead Architect. I fed it the concept: "Create a Spotify clone, but for a developer resume."
Gemini helped me map out the complex data schemas for MongoDB—defining how Projects, Certificates, and Experience should relate to each other, ensuring the data structure was scalable from day one.

2. "Antigravity" Development

I adopted an AI-First Development Environment (Antigravity approach). Instead of writing boilerplate code manually, I utilized AI to generate the intricate UI components, such as the responsive Sidebar logic and the Player Bar controls.

Challenge: Making the sidebar remember its state (collapsed/expanded) across refreshes while being responsive on mobile.

AI Solution: Gemini suggested a custom hook using localStorage and resize event listeners that perfectly mimicked the native app feel.

3. Deployment with Gemini CLI & Cloud Run

The most daunting part was deployment. How do you deploy a MERN stack (Frontend + Backend) efficiently?
I used AI assistance to craft a Multi-Stage Dockerfile. It builds the React frontend, optimizes the Node.js backend, and bundles them into a single lightweight container.

This container was then pushed to Google Artifact Registry and deployed to Google Cloud Run using gcloud commands generated with AI assistance, ensuring a serverless, scalable, and cost-effective deployment.

The Tech Stack

  • Frontend: React (Vite), Tailwind CSS, Framer Motion (for those buttery smooth transitions).
  • Backend: Node.js, Express.js.
  • Database: MongoDB Atlas (Mongoose).
  • Media Engine: Cloudinary (Auto-optimizing images to prevent database bloat).
  • Infrastructure: Docker & Google Cloud Run.

What I'm Most Proud Of

1. The "Player Bar" Navigation

  • Instead of standard pagination, I built a Player Bar.
  • Play/Pause: Actually toggles the project demo mode.
  • Next/Prev: Skips through my project list.
  • Share: Generates a Deep Link (e.g., /?id=campgear) so you can share a specific "track" (project) with others.

2. A Real, Hidden Admin CMS

  • This isn't just a hardcoded JSON file. I built a fully functional Admin Dashboard protected by authentication.
  • I can Create, Read, Update, and Delete (CRUD) projects, skills, and certificates directly from the UI.
  • Image Upload System: I built a custom uploader that converts images to Base64, sends them to the backend, uploads to Cloudinary, and saves the optimized URL to MongoDB—all in one click.

3. "Pixel-Perfect" Responsiveness

  • Replicating Spotify's complex grid layout on mobile was tough.
  • Desktop: Collapsible sidebar and expandable "Now Playing" details.
  • Mobile: The sidebar transforms into a bottom sheet, and the grid adapts to touch interactions.

Final Thoughts

This portfolio represents my new year and new direction:

  • More intentional design
  • Stronger engineering fundamentals
  • Real cloud deployments
  • Smarter use of AI

If there’s one thing I want judges to take away, it’s this:

I don’t just build projects.
I build systems — and I ship them.

Thank you for this challenge 🚀
It pushed me beyond “just frontend” and into real-world cloud engineering.

Tags

#devchallenge #googleaichallenge #portfolio #gemini #cloudrun

Brave Search MCP Server Token Optimization

2026-01-29 15:41:17

The Brave Search API lets you search the web, videos, news, and more. Brave also ships an official MCP server that wraps this API, so you can plug it into your favorite LLM if you have npx available on your machine.

Brave Search is one of the most popular MCP servers on HasMCP. The video demonstrates how to create an MCP server from scratch with HasMCP, without installing npx or Python locally, by mapping the Brave API into a 24/7, token-optimized MCP server through the UI. You will learn how to debug when things go wrong, fix issues in real time, and see the changes take effect in the MCP server without any restarts. It also walks through how to cut the token usage of an API response by up to 95% per call. All token usage estimates were measured with the tiktoken library, and payload sizes were summed in bytes with and without token optimization.

High Level Look into Brave Web Search API

The Brave Web Search API is a GET request to https://api.search.brave.com/res/v1/web/search with two headers: Accept set to application/json, and X-Subscription-Token, which is your API token. It accepts several query string parameters; the full list is in its documentation.

Sample request

curl -s --compressed "https://api.search.brave.com/res/v1/web/search?q=hasmcp" \
  -H "Accept: application/json" \
  -H "X-Subscription-Token: <YOUR_API_KEY>"

Sample response

{
    "type": "search",
    "web": {
        "type": "search",
        "results": [
            {
                "title": "<>",
                "url": "<>",
                "is_source_local": false,
                "is_source_both": false,
                "description": "<>",
                "profile": { ... },
                ...
            },
            ...
        ]
    }
}
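
For reference, here is the same call from Python. This is a sketch using the requests library; only the q parameter is sent, and the API key is a placeholder:

import requests

API_KEY = "<YOUR_API_KEY>"  # placeholder: your Brave Search API token

resp = requests.get(
    "https://api.search.brave.com/res/v1/web/search",
    params={"q": "hasmcp"},
    headers={"Accept": "application/json", "X-Subscription-Token": API_KEY},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json()["web"]["results"]:
    print(item["title"], item["url"])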

The API response includes a lot of detail, and if not filtered it returns multiple result types (web, videos, etc.). The LLM will still try to surface only the necessary information, but doing so burns a lot of tokens and can eventually cause issues like context bloat.
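
The savings figures above were measured with tiktoken; here is a minimal sketch of that kind of measurement. The payloads below are stand-ins, not real API output:

import json
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def token_count(payload):
    # Estimate tokens for the payload as the LLM would receive it: serialized JSON.
    return len(enc.encode(json.dumps(payload)))

raw = {"web": {"results": [{"title": "Example", "url": "https://example.com",
                            "description": "...", "meta_url": {}, "thumbnail": {},
                            "age": "", "is_source_local": False}]},
       "query": {}, "mixed": {}}
filtered = [{"title": "Example", "url": "https://example.com", "description": "..."}]

print("raw:", token_count(raw), "tokens")
print("filtered:", token_count(filtered), "tokens")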

Creating MCP Server from Scratch

HasMCP is a GUI-based MCP (Model Context Protocol) Server Framework. It acts as a bridge, converting standard REST API definitions into a 24/7 online MCP Server with Streamable HTTP (SSE). This eliminates the need for local npx or Python installations, allowing you to deploy MCP servers that interface purely with external APIs.

Provider details
First, we need to define the "Provider." In HasMCP, a provider represents the external service you are connecting to.

Action:

  • Name: Enter a distinct name for the provider (e.g., brave-search).

  • Base URL: Enter the root URL for the API. For Brave Search, this is https://api.search.brave.com/res/v1.

  • Description: (Optional) Add a note for yourself about what this API does.

HasMCP - API Provider Details

Tool details

Now we define the "Tool." This is the specific function the LLM (like Claude or Cursor) will see and call. We need to map a specific API endpoint to a tool definition.

Action:

  • Method: GET.

  • Endpoint Path: /web/search.

  • Tool name: webSearch (use camelCase to help LLMs parse it)

  • Parameters: Define the inputs the LLM must provide.

  • Key: q

  • Type: string

  • Description: The search query to find information about.

HasMCP - Tool Details

Token optimization

APIs often return massive JSON objects full of metadata that LLMs don't need. This wastes tokens and bloats the context. We use a Response Interceptor (using JMESPath) to filter the output.

Action:

  • Select Tool: Choose webSearch.

  • Interceptor Expression: specific JMESPath query to extract only the essentials.

  • Expression:

web.results[].{title: title, url: url, description: description}

Raw API Response

{
  "web": {
    "results": [
      { "title": "Example", "url": "...", "meta_url": "...", "thumbnail": "...", "age": "..." }
    ]
  },
  "query": { ... },
  "mixed": { ... }
}

Optimized response

[
  {
    "title": "Example",
    "url": "[https://example.com](https://example.com)",
    "description": "This is the description."
  }
]
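
If you want to sanity-check the expression locally before pasting it into the interceptor, the jmespath package in Python evaluates the same syntax. A sketch, with sample data mirroring the raw response above:

import jmespath

raw = {
    "web": {"results": [
        {"title": "Example", "url": "https://example.com",
         "description": "This is the description.",
         "meta_url": "...", "thumbnail": "...", "age": "..."},
    ]},
    "query": {},
    "mixed": {},
}

expr = "web.results[].{title: title, url: url, description: description}"
print(jmespath.search(expr, raw))
# -> [{'title': 'Example', 'url': 'https://example.com', 'description': 'This is the description.'}]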

HasMCP - Response Interceptor Window

API Key
We must securely store the credentials required by the third-party API. HasMCP injects these into the request headers automatically.

Action:

  • Header Name: Look up the API docs. Brave requires X-Subscription-Token.

  • Value: Paste your actual API Key from the Brave Developer Portal.

Brave Search API Key for MCP Server

Generate MCP Server Token

To expose your new server to an LLM client, you need a secure entry point. HasMCP generates a unique URL protected by a token.

HasMCP - Generate Auth Token for MCP Server

Action:

  • Expiry: Set the token duration (e.g., "Never" or "30 days").

  • Generate: Click Generate Token.

  • Copy Connection URL for your favorite LLM: This URL is what you will paste into your Claude Desktop config or Cursor settings.

MCP Telemetry/Analytics

Once connected, you can monitor the health and efficiency of your server. This is crucial for verifying that your token optimization (configured earlier) is actually working.

HasMCP - MCP telemetry, token optimization stats

Debug

If the LLM says "I encountered an error," use the Debug logs to see exactly what happened between HasMCP and the Brave API.

Action:

  • Open the Debug (Realtime Logs/Telemetry) tab.

  • Trigger a request from your LLM.

  • Inspect the Request (did the LLM send the right query?) and the Response (did the API key fail?).

HasMCP - Realtime MCP server debug logs

How Migros Online protects its assets with Cloudflare - a DDoS Story

2026-01-29 15:35:48

Nowadays, attackers are more and more eager to take your website down, harvest your data, and exploit every little vulnerability you might have. Protecting against malicious activity has become a major concern for every company.

When thinking about what to put in place to improve your security, you have mainly two situations:

  • You're big enough and have the knowledge in-house to manage the whole security stack
  • You don't have the expertise or you don't want to spend a huge human effort on setting up the security

In the first case, you need to have one or multiple teams dedicated to the security to build a safe and secure infrastructure and to keep it up to date with the latest vulnerabilities found. Knowing that you should ask yourself "when" and not "if" you're going to be attacked, this team should also know how to react when something goes south at each security level.

For small companies, or companies that don't want to invest in a highly skilled security team, you will probably look for market solutions and providers that can handle these concerns for you, or at least ease the management of many security aspects. Nevertheless, you will still need dedicated security people to manage the selected solutions as well as other security aspects, such as user awareness training to prevent phishing.

Beware that choosing a third party comes with its downsides: you become, at a certain level, dependent on their infrastructure, their partners, and their problems (availability, security). Thus, you can encounter issues over which you have no control.

Migros Online and Cloudflare

At Migros Online, we decided a few years back to work with Cloudflare to have a unique entrypoint for our infrastructure (on-premises back then, in the cloud today).

Using such a tool brought us many security and performance aspects for our website and our mobile applications:

  • Content Delivery Network (CDN): edge caching allows us to serve assets without hitting the backend on every request
  • Web Application Firewall (WAF): we are able to protect our public endpoints with simple rules in a few clicks (or a few lines of Terraform ;-))
  • Basic rule sets managed directly by Cloudflare, allowing us to fix deeper issues with serenity (as an example, the Log4Shell vulnerability was automatically handled by Cloudflare, giving us the time to patch our backend systems without pressure)
  • Bot protection: automatic categorization of the traffic and possibility to easily act on requests based on the rating done by the platform
  • Distributed Denial of Service (DDoS) protection: automatic discovery of DDoS attacks and direct mitigation
  • Zero Trust mechanisms: we are able to expose private endpoints, but secure them behind the Zero Trust product, bound with our authentication provider
  • Cloudflare Warp: a tunneling solution to access internal resources that we don't want to expose publicly, even behind Zero Trust

Thanks to Cloudflare, we were able to consolidate our public exposure, simplify its management and get confidence that we are in good hands when problems arise.

Cloudflare Immerse 2025

As an example of Cloudflare's usage for Migros Online, I went on stage (for the first time!) during the Cloudflare Immerse 2025 event in Zurich to present how Cloudflare helped us in mitigating DDoS attacks we faced in the past.

The recording is available below and outlines the Migros Online context, what issues we faced and how Cloudflare was a key element in solving the problem.

Backstage SaaS &amp; Open Source Alternatives

2026-01-29 15:26:39

Spotify's Backstage created the Internal Developer Portal category. It showed that a central hub for services, documentation, and tooling could actually improve how developers work. But here's what a lot of engineering leaders figure out the hard way: Backstage isn't right for every team.

If you're reading this, you've probably realized that adopting Backstage means committing serious engineering resources. You need a dedicated team, TypeScript expertise, and months of setup time. For a lot of teams, the operational overhead just isn't worth it.

But the problem Backstage solves doesn't go away. As your organization grows past about 150 people, you hit what's called the Dunbar Number Effect. Anthropologist Robin Dunbar found that humans can only maintain stable social networks of around 150 people. Go beyond that, and tribal knowledge disappears. Nobody knows who owns what. The informal Slack channels that worked when you were 30 people turn into complete chaos.

You need a system to fix this chaos, a "system of record" for your engineering organization. But you don't need the complexity of self-hosted Backstage to get there.

This guide covers the best Backstage alternatives in 2026. I'll break down the three paths you can take: Build, Buy, or Hybrid, and help you figure out which one makes sense without drowning your platform team in maintenance work.

The Three Paths to an IDP: Build, Buy, or Hybrid

Before we look at specific tools, you need to understand the three approaches you can take:

Build (Self-Hosted Backstage): You take the open-source Backstage project and dedicate engineers to build, customize, and maintain your own portal. Ultimate flexibility, but significant headcount and operational costs.

Buy (Proprietary IDPs): You purchase a SaaS solution from a vendor like Cortex or Port. Quick to set up and feature-rich, but you're locked into their proprietary data model.

Hybrid (Managed Backstage): You use a service like Roadie that handles the hosting and maintenance of Backstage for you. You get the open-source ecosystem without the operational burden.

Backstage Alternatives: At a Glance

Here's a quick comparison of your options:

Tool | Core Technology | Hosting Model | Key Strength | Ideal For
Roadie | Backstage | SaaS (Managed) | Backstage ecosystem without the overhead | Teams who want Backstage's power but need a managed solution
Cortex | Proprietary | SaaS | Engineering metrics and scorecards | Organizations focused on measuring service quality
Port | Proprietary | SaaS | Developer-friendly API and flexibility | Teams building custom workflows
Atlassian Compass | Proprietary | SaaS | Deep Atlassian integration | Companies invested in the Atlassian stack
OpsLevel | Proprietary | SaaS | Service maturity and reliability checks | SRE teams enforcing production standards
Self-Hosted Backstage | Backstage (OSS) | Self-Hosted | Ultimate customization | Large orgs with dedicated platform teams (5+ engineers)

The Hybrid Approach: Managed Backstage

Roadie

Roadie isn't an alternative to Backstage; it's a different way to adopt it. The core idea is that you shouldn't have to choose between the power of an open-source community and the convenience of a SaaS product.

I've talked to a lot of teams who committed to self-hosting Backstage, only to realize it requires 3-12 engineers and 6-12 months to get production-ready. That's a significant investment. Roadie solves this by providing a secure, scalable, and fully managed Backstage experience out of the box.

Best for: Teams who've decided on Backstage but want to accelerate their timeline and reduce operational burden.

Key Features:

  • Get a production-ready Backstage instance running in minutes. Roadie handles upgrades, security, and maintenance.
  • Enterprise features like Role-Based Access Control, enterprise-grade search, and scorecards come built-in.
  • Install any open-source Backstage plugin without rebuilding your instance.

Considerations: Roadie uses Backstage as its foundation, so you get the same data model and core experience. If you want a completely different, highly opinionated UI, a proprietary vendor might fit better.

The "Buy" Approach: Proprietary IDPs

Cortex

Cortex has established itself as a leader in the IDP space. They focus heavily on service quality, reliability, and engineering metrics. Their Scorecards feature is particularly strong for defining standards and tracking adoption.

Best for: Organizations focused on establishing and measuring engineering standards.

Key Features: A central inventory for microservices and APIs, scorecards to track service health, and a scaffolder for creating new services from templates.

Considerations: Cortex is proprietary. You're locked into their data model, and migrating away later could be painful.

Port

Port is built around flexibility. Their developer-friendly API lets you ingest any data and build custom workflows. They position themselves as a platform for building a developer portal, not a rigid out-of-the-box solution.

Best for: Platform teams with strong dev skills who want to build highly custom experiences.

Key Features: A flexible "blueprint" model to define any asset, a self-service hub for custom actions, and scorecards for tracking quality.

Considerations: Port's flexibility is powerful but comes with a steeper learning curve. Expect more initial setup compared to more opinionated platforms.

Atlassian Compass

Atlassian Compass is Atlassian's entry into developer experience. Its main advantage is seamless integration with Jira, Confluence, and Bitbucket.

Best for: Companies already standardized on Atlassian tools.

Key Features: A component catalog for tracking ownership, health scorecards, and deep native integration with other Atlassian products.

Considerations: If you're not an Atlassian-centric organization, Compass may feel less compelling compared to other options.

OpsLevel

OpsLevel is a mature player focused on service ownership and reliability. SRE and platform teams like it because it helps them answer, "Is our software ready for production?"

Best for: SRE-driven organizations enforcing service maturity standards.

Key Features: A complete service catalog, an extensive library of automated maturity checks, and integrations with on-call tools like PagerDuty.

Considerations: OpsLevel's focus is more on reliability and standards than on developer self-service, which is stronger in other platforms.

The "Build" Approach: Self-Hosted Backstage

Choosing to self-host Backstage is a significant commitment. You should treat it like building an internal product, not just deploying a tool.

Best for: Large organizations with a well-funded platform team (5+ engineers) that has a clear mandate to build and maintain a customized developer portal.

Key Features: Complete control over the code and data model. You can customize it to your exact specifications.

Considerations: This path has a high cost of ownership. You need to account for the fully-loaded salaries of a dedicated engineering team, 6-12 months of initial build time, and ongoing operational burden for maintenance and upgrades.

How to Choose the Right Path

Your choice depends on your organization's priorities, resources, and philosophy. Here are the questions you should ask:

How important is the open-source ecosystem to us?

If you want to avoid vendor lock-in and tap into community innovation, choose between self-hosting Backstage or using a managed service like Roadie. If you prefer an all-in-one vendor experience, a proprietary option like Cortex or Port makes more sense.

What's the size and skill set of our platform team?

If you have 5-10 engineers with TypeScript experience and a mandate to build a custom portal, self-hosted Backstage is viable. If your platform team is smaller or focused on other priorities, Roadie or a proprietary vendor is more efficient.

What's our most critical problem right now?

If you need scorecards and don't mind vendor lock-in, tools like Cortex or OpsLevel offer polished solutions.

If you want to build custom workflows from scratch and are comfortable in a closed-source ecosystem, Port gives you a flexible API.

If your organization lives entirely in the Atlassian suite, Compass is a natural extension.

If you want enterprise features combined with the freedom of the open-source ecosystem, Roadie gives you both.

Final Thoughts

An Internal Developer Portal is a long-term investment in your developer experience. The choice between Build, Buy, and Hybrid depends entirely on your team's size, skills, and priorities.

I've seen teams succeed with all three approaches. The key is being honest about what you can realistically maintain and what problems you're actually trying to solve.

I'm curious what path you're evaluating. Are you leaning toward self-hosting Backstage? Considering a proprietary vendor? Looking at the hybrid approach with something like Roadie? What's the biggest factor driving your decision: team size, budget, or something else? Drop a comment and share what's working (or not working) in your evaluation process.