2026-01-29 16:15:05
The discourse around "AI replacing developers" has become exhausting. We're all arguing about the wrong thing.
One camp is screaming: "AI will take all our jobs! Learn to prompt or perish!"
The other camp is laughing: "AI can't even handle production-grade distributed systems! We're safe!"
Meanwhile, I'm over here watching something completely different happen. And it's way scarier than either narrative.
The market doesn't need as many of us anymore.
Not because AI got "good enough" to replace us. But because the gap we were filling is closing from a completely different direction.
Bob runs a small development shop. Makes decent money building apps for local businesses. Nothing fancy - inventory systems, booking platforms, internal dashboards. Bread and butter work.
Karen owns a small restaurant chain. She has accounting problems. Her invoices are scattered across folders, formats are inconsistent, reconciliation is a nightmare.
The Old World:
The New World:
"But wait," the engineers cry, "what happens when it breaks? Surely she'll need Bob to maintain it!"
You think like a programmer. Go touch some grass in the Accounting department.
When Karen's Excel pivot table breaks—when a formula throws a #REF! error or a VLOOKUP stops working—does she call Bob the Excel Consultant? Does she open a Jira ticket? Does she enter a 3-week maintenance cycle?
No. She mutters "ugh this stupid thing," hits Ctrl-A, Delete, and spends 12 minutes making a new one from scratch. She treats it like a sticky note, not a cathedral.
Cost of regeneration: $0.
Cost of her time: ignored.
Call to Bob: never happening.
The script broke? She'll just prompt a new one. NewInvoiceBot_v7.py replaces NewInvoiceBot_v6.py the same way she made a new Excel sheet for Q4 instead of fixing the Q3 one. No maintenance culture exists because no asset value was assigned.
You were maintaining code because you viewed it as infrastructure. Karen views it as consumable.
You know what the big dogs love to mock? Vibe coders sharing screenshots of their localhost:3000 projects.
"Lmao you didn't even deploy it? That's not real software!"
But here's the thing they're missing: for most people, localhost IS the deployment.
The localhost:3000 screenshot isn't amateur hour. It's the Excel-fication of code. Disposable. Ephemeral. Infinitely regeneratable.
We laughed BUT we missed the bigger precedent: Excel already proved that business users prefer disposable regeneration over maintenance.
Karen doesn't need "software as a service." She needs "software as a consumable." Like a paper towel. Use it, crumple it, grab another.
Here's where it gets existential.
Why do we have:
Not because they're inherently necessary to solve problems.
They're necessary because we built an entire industry around SELLING software as assets to people who couldn't make it themselves. We convinced businesses that software is capital expenditure—something to maintain, depreciate, and protect.
But go look at Karen's desktop. How many "broken" Excel files are sitting there? Zero. She deleted them. How many "deprecated" Python scripts? She doesn't keep deprecated things because there's no cost to regeneration.
Every AI-replacement thread devolves into:
Cool. How many developers actually work on those problems?
Maybe 5%?
The other 95% are:
This is EXACTLY the work AI is getting scary good at. And it's exactly the work that can be treated as consumable rather than maintained.
When Karen can solve 70% of her own problems with disposable, regeneratable tools, she doesn't accumulate technical debt. She throws away the debt with the tool. Bob's business just shrank by 70%, but he's still waiting for the maintenance call that'll never come because there's nothing to maintain—only things to regenerate.
Will "Software Engineer" be a large, distinct profession in 10 years? Maybe not. Or rather, it'll be split between:
Getting replaced is scary but clean. It's dramatic. You see it coming.
Irrelevance is insidious.
It's Bob watching his client calls drop from 20/month to 15 to 10 to 5, each time thinking "just a slow quarter," not realizing that Karen now treats code like she treats spreadsheet cells—disposable, regeneratable, not worth a phone call.
It's looking at your Kubernetes certification and realizing the market is moving toward infinite sticky notes that need no orchestration because they're used once and deleted.
It's defending against the headshot while bleeding out from deprofessionalization.
Look outside your cubicle. Look at Accounting. Look at Marketing. They don't maintain their Excel messes—they remake them. And they're about to treat your beautiful REST APIs and React components the exact same way.
PS: Yes, I wrote this with AI assistance. Because I'm not a Lotus 1-2-3 consultant pretending Excel doesn't exist. I'm adapting.
PPS: If this made you angry, hit that "Sign Out" button. But first, go ask Karen in Accounting how many times she "maintained" an Excel formula versus just making a new sheet. I'll wait.
PPPS: If this resonated, you should probably start building your ark. The flood isn't coming—it's already here. We're just standing in ankle-deep water arguing about whether Kubernetes can save us while Karen is throwing away Python scripts like used Kleenex.
2026-01-29 16:00:34
If you’ve worked with VoIP, SIP, or real-time communications, you’ve probably encountered a Session Border Controller (SBC).
If you build or operate web applications and APIs, you’re almost certainly familiar with Web Application Firewalls (WAFs).
At first glance, these two technologies seem to live in completely different worlds. One protects voice and signaling traffic, the other defends HTTP-based applications. But if you look closely at their design philosophy, SBCs and WAFs actually share a surprising amount of DNA.
This article compares SBCs and WAFs from a design perspective, explains how they solve different layers of the same security problem, and shows how combining them leads to a more resilient architecture. Finally, we’ll look at how a modern, self-hosted WAF like SafeLine fits into this picture.
Both SBCs and WAFs exist for the same fundamental reason:
Exposing a service directly to the internet is dangerous.
Whether it’s SIP signaling or a REST API, once traffic crosses an organizational boundary, you lose trust in:
The difference is where and how each technology draws the line.
A Session Border Controller sits at the edge of a VoIP or real-time communication network, typically between:
Its core responsibilities include:
Protocol enforcement
Validating SIP and RTP behavior against RFCs and operator policies.
Session control and state tracking
Understanding call setup, teardown, and media negotiation as stateful flows.
Topology hiding
Preventing internal IPs, extensions, and infrastructure details from leaking.
Security and abuse prevention
Detecting malformed SIP messages, call floods, toll fraud, and replay attacks.
The key point is this:
An SBC doesn’t just forward packets — it understands sessions.
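As a toy illustration of that protocol-enforcement mindset, here is a hypothetical Python sketch (not taken from any real SBC, and far simpler than the actual RFC 3261 grammar) of rejecting a malformed SIP request line before anything is forwarded:

```python
import re

# Toy request-line check. RFC 3261 defines the real SIP grammar;
# this only illustrates "validate before forwarding".
REQUEST_LINE = re.compile(
    r"^(INVITE|ACK|BYE|CANCEL|REGISTER|OPTIONS)"  # method
    r" sip:\S+"                                   # Request-URI
    r" SIP/2\.0$"                                 # version
)

def is_wellformed_request_line(line: str) -> bool:
    """Return True only if the line matches the toy grammar above."""
    return REQUEST_LINE.match(line) is not None

print(is_wellformed_request_line("INVITE sip:alice@example.com SIP/2.0"))  # True
print(is_wellformed_request_line("INVITE sip:alice@example.com"))          # False
```

A real SBC goes much further, of course: it parses the full message, tracks the dialog it belongs to, and applies operator policy on top of syntax.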
A Web Application Firewall protects HTTP-based applications and APIs by sitting between:
Its responsibilities typically include:
Request inspection at the application layer
Parsing URLs, headers, cookies, request bodies, and parameters.
Attack detection
Identifying SQL injection, XSS, command injection, path traversal, and more.
Behavioral analysis
Detecting abnormal request rates, automation patterns, and replay behavior.
Policy enforcement
Blocking, challenging, rate-limiting, or logging suspicious requests.
Like an SBC, a modern WAF is not a simple filter.
It attempts to understand intent behind requests, not just syntax.
When you strip away protocols and use cases, the design mindset is strikingly similar.
Both are built on the idea that generic firewalls are not enough once you reach higher-layer protocols.
Traditional firewalls often make decisions per packet or per request.
Security decisions improve dramatically once state is introduced.
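A minimal sketch of why state matters, in Python (illustrative only, not drawn from any product): a per-request filter judges each request in isolation, while a sliding-window tracker can flag a flood that no single request reveals.

```python
import time
from collections import defaultdict, deque

class SlidingWindowTracker:
    """Stateful check: remembers recent request timestamps per client.

    A purely per-request filter sees each request in isolation and can
    never notice a flood; keeping a little state across requests can.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.history = defaultdict(deque)  # client -> recent timestamps

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[client]
        # Evict timestamps that fell out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: block, challenge, or log
        q.append(now)
        return True

tracker = SlidingWindowTracker(limit=5, window_seconds=1.0)
# Six requests within one second: the first five pass, the sixth is flagged.
results = [tracker.allow("10.0.0.1", now=0.1 * i) for i in range(6)]
print(results)
```

The same idea underlies both a WAF's rate-based rules and an SBC's call-flood detection; only the unit of state (HTTP request vs. SIP session) differs.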
Both technologies sit at a trust boundary.
Their first job is normalization:
Only after normalization does forwarding happen.
Neither SBCs nor WAFs can afford to be overly aggressive.
This balance is not theoretical. It’s an operational reality that shapes how both systems are designed and tuned.
In modern systems, real-time communication platforms are no longer isolated.
A typical architecture might look like this:
Clients (Web / Mobile)
|
[ WAF ]
|
Web APIs / Auth
|
RTC Services
|
[ SBC ]
|
SIP / Media Providers
In this setup:
The WAF protects:
The SBC protects:
They operate at different layers, but share the same goal:
reduce attack surface before traffic reaches critical systems.
An SBC is excellent at what it does — but it does not:
That’s where a WAF remains essential.
As more communication platforms expose:
The web layer becomes a primary attack vector — even if SIP itself is well-protected.
Understanding SBC design makes it easier to appreciate what modern WAFs aim to achieve.
SafeLine WAF follows many of the same principles that made SBCs effective:
Just as many operators insist on running SBCs in their own network, SafeLine supports full self-hosted deployment:
This matters for compliance, privacy, and operational transparency.
Instead of relying purely on static signatures, SafeLine combines:
This mirrors how SBCs evolved beyond simple SIP filtering into session-aware controllers.
SafeLine focuses on real-world usage:
The goal is not to pass rule tests, but to protect live systems without breaking them.
One common frustration with security tools is the “black box” effect.
SafeLine emphasizes:
This aligns with how experienced teams operate SBCs: visibility first, enforcement second.
SBCs and WAFs are built for different protocols, but they share a common security philosophy:
Understand the protocol, track state, normalize behavior, and enforce policy at the boundary.
In modern architectures, it’s not a question of SBC or WAF — it’s where each one fits.
When combined correctly:
And tools like SafeLine WAF bring the maturity and discipline of session-aware security into the web world — where attackers increasingly focus their efforts.
If you’re designing or reviewing your security architecture, looking at these technologies through a shared design lens can make your decisions clearer — and your systems more resilient.
2026-01-29 15:44:11
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
Hi, I'm Fatiya Labibah, an Informatics Engineering student who believes that portfolios shouldn't be boring static pages. They should be an experience.
I love music and I love code. So, I asked myself: "What if recruiters could 'play' my projects like they play their favorite songs?"
The result is a fully interactive, full-stack Spotify-Themed Portfolio that blurs the line between a music app and a professional resume.
Live Demo on Google Cloud Run:
https://my-spotify-portfolio-891617309900.asia-southeast1.run.app
(Recommended: Click the Play Music button to hear my favorite song!)
1. Brainstorming with Gemini 3 Pro
I used Gemini 3 Pro as my Lead Architect. I fed it the concept: "Create a Spotify clone, but for a developer resume."
Gemini helped me map out the complex data schemas for MongoDB—defining how Projects, Certificates, and Experience should relate to each other, ensuring the data structure was scalable from day one.
2. "Antigravity" Development
I adopted an AI-First Development Environment (Antigravity approach). Instead of writing boilerplate code manually, I utilized AI to generate the intricate UI components, such as the responsive Sidebar logic and the Player Bar controls.
Challenge: Making the sidebar remember its state (collapsed/expanded) across refreshes while being responsive on mobile.
AI Solution: Gemini suggested a custom hook using localStorage and resize event listeners that perfectly mimicked the native app feel.
3. Deployment with Gemini CLI & Cloud Run
The most daunting part was deployment. How do you deploy a MERN stack (Frontend + Backend) efficiently?
I used AI assistance to craft a Multi-Stage Dockerfile. It builds the React frontend, optimizes the Node.js backend, and bundles them into a single lightweight container.
This container was then pushed to Google Artifact Registry and deployed to Google Cloud Run using gcloud commands generated with AI assistance, ensuring a serverless, scalable, and cost-effective deployment.
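For illustration, a multi-stage Dockerfile for a MERN app might look like the sketch below. The directory names (client/, server/), the build output path, and the index.js entrypoint are assumptions, not the actual files from this project:

```dockerfile
# Hypothetical multi-stage build for a MERN app
# (client/, server/, and index.js are placeholder names)

# Stage 1: build the React frontend
FROM node:20-alpine AS frontend
WORKDIR /app/client
COPY client/package*.json ./
RUN npm ci
COPY client/ ./
RUN npm run build

# Stage 2: install backend production deps and bundle
FROM node:20-alpine
WORKDIR /app
COPY server/package*.json ./
RUN npm ci --omit=dev
COPY server/ ./
# Serve the built frontend as static files from the Node backend
COPY --from=frontend /app/client/build ./public
ENV PORT=8080
EXPOSE 8080
CMD ["node", "index.js"]
```

Cloud Run injects the PORT environment variable at runtime, so the backend should listen on that rather than a hard-coded port.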
1. The "Player Bar" Navigation
2. A Real, Hidden Admin CMS
3. "Pixel-Perfect" Responsiveness
This portfolio represents my new year and new direction:
If there’s one thing I want judges to take away, it’s this:
I don’t just build projects.
I build systems — and I ship them.
Thank you for this challenge 🚀
It pushed me beyond “just frontend” and into real-world cloud engineering.
#devchallenge #googleaichallenge #portfolio #gemini #cloudrun
2026-01-29 15:41:17
Brave Search API allows searching the web, videos, news, and more. Brave also has an official MCP Server that wraps its API so you can plug it into your favorite LLM, provided you have access to npx on your computer.
Brave Search is one of the most popular MCP Servers on HasMCP. The video demonstrates a genuine way of creating an MCP Server from scratch using HasMCP, without installing npx or Python on your computer, by mapping the Brave API into a 24/7, token-optimized MCP Server through the UI. You will explore how to debug when things go wrong, how to fix them in real time, and see the changes take effect in the MCP Server immediately, without any restarts. You will also see how to cut the token usage of an API response by up to 95% per call. Token usage was estimated with the tiktoken library, and payload sizes were summed in bytes with and without token optimization.
Brave Web Search API is a GET request with two headers: Accept set to application/json, and X-Subscription-Token, which is your API token. It accepts several query-string parameters; you can find the full list in its documentation.
Sample request
curl -s --compressed "https://api.search.brave.com/res/v1/web/search?q=hasmcp" \
-H "Accept: application/json" \
-H "X-Subscription-Token: <YOUR_API_KEY>"
Sample response
{
"type": "search",
"web": {
"type": "search",
"results": [
{
"title": "<>",
"url": "<>",
"is_source_local": false,
"is_source_both": false,
"description": "<>",
"profile": {
...
}
},
...
]
}
}
The API response includes a lot of detail and, if not filtered, it returns multiple response types, including web, videos, and so on. The LLM will still try to surface only the necessary information, but doing so burns a lot of tokens and can eventually cause issues like context bloat.
HasMCP is a GUI-based MCP (Model Context Protocol) Server Framework. It acts as a bridge, converting standard REST API definitions into a 24/7 online MCP Server with Streamable HTTP (SSE). This eliminates the need for local npx or Python installations, allowing you to deploy MCP servers that interface purely with external APIs.
Provider details
First, we need to define the "Provider." In HasMCP, a provider represents the external service you are connecting to.
Action:
Name: Enter a distinct name for the provider (e.g., brave-search).
Base URL: Enter the root URL for the API. For Brave Search, this is https://api.search.brave.com/res/v1.
Description: (Optional) Add a note for yourself about what this API does.
Tool details
Now we define the "Tool." This is the specific function the LLM (like Claude or Cursor) will see and call. We need to map a specific API endpoint to a tool definition.
Action:
Method: GET.
Endpoint Path: /web/search.
Tool name: webSearch (use camelCase to help LLMs parse it)
Parameters: Define the inputs the LLM must provide.
Key: q
Type: string
Description: The search query to find information about.
Token optimization
APIs often return massive JSON objects full of metadata that LLMs don't need. This wastes tokens and bloats the context. We use a Response Interceptor (using JMESPath) to filter the output.
Action:
Select Tool: Choose webSearch.
Interceptor Expression: specific JMESPath query to extract only the essentials.
Expression:
web.results[].{title: title, url: url, description: description}
Raw API Response
{
"web": {
"results": [
{ "title": "Example", "url": "...", "meta_url": "...", "thumbnail": "...", "age": "..." }
]
},
"query": { ... },
"mixed": { ... }
}
Optimized response
[
{
"title": "Example",
"url": "https://example.com",
"description": "This is the description."
}
]
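To sanity-check what the interceptor does, here is a pure-Python equivalent of that JMESPath projection (the sample response values are made up; the real jmespath pip package evaluates the same expression), along with a byte-size comparison of the raw versus optimized payloads:

```python
import json

# Trimmed stand-in for Brave's raw response (shape as in the samples
# above; the field values here are made up for illustration).
raw = {
    "type": "search",
    "web": {
        "type": "search",
        "results": [
            {
                "title": "Example",
                "url": "https://example.com",
                "description": "This is the description.",
                "is_source_local": False,
                "profile": {"name": "example", "long_name": "example.com"},
                "meta_url": {"scheme": "https", "netloc": "example.com"},
            }
        ],
    },
    "query": {"original": "hasmcp"},
}

def intercept(response):
    """Pure-Python equivalent of the JMESPath projection
    web.results[].{title: title, url: url, description: description}
    """
    return [
        {"title": r.get("title"), "url": r.get("url"),
         "description": r.get("description")}
        for r in response.get("web", {}).get("results", [])
    ]

optimized = intercept(raw)
raw_bytes = len(json.dumps(raw).encode())
opt_bytes = len(json.dumps(optimized).encode())
print(f"raw={raw_bytes}B optimized={opt_bytes}B "
      f"saved={1 - opt_bytes / raw_bytes:.0%}")
```

On a real Brave response, which carries far more metadata per result than this trimmed sample, the savings are much larger, which is where the "up to 95%" figure comes from.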
API Key
We must securely store the credentials required by the 3rd party API. HasMCP injects these into the request headers automatically.
Action:
Header Name: Look up the API docs. Brave requires X-Subscription-Token.
Value: Paste your actual API Key from the Brave Developer Portal.
Generate MCP Server Token
To expose your new server to an LLM client, you need a secure entry point. HasMCP generates a unique URL protected by a token.
Action:
Expiry: Set the token duration (e.g., "Never" or "30 days").
Generate: Click Generate Token.
Copy Connection URL for your favorite LLM: This URL is what you will paste into your Claude Desktop config or Cursor settings.
MCP Telemetry/Analytics
Once connected, you can monitor the health and efficiency of your server. This is crucial for verifying that your Token Optimization (Step 3) is actually working.
Debug
If the LLM says "I encountered an error," you use the Debug logs to see exactly what happened between HasMCP and the Brave API.
Action:
Open the Debug (Realtime Logs/Telemetry) tab.
Trigger a request from your LLM.
Inspect the Request (did the LLM send the right query?) and the Response (did the API key fail?).
2026-01-29 15:35:48
Nowadays, attackers are increasingly eager to take your website down, harvest your data, and exploit every little vulnerability you might have. Protecting against malicious activity has become a major concern for every company.
When thinking about what to put in place to improve your security, you are mainly in one of two situations:
In the first case, you need one or more teams dedicated to security, to build a safe and secure infrastructure and keep it up to date against the latest vulnerabilities. Knowing that you should ask yourself "when" and not "if" you're going to be attacked, this team should also know how to react when something goes south, at each security level.
For small companies, or companies that don't want to invest in a highly skilled security team, you will probably look for market solutions and providers able to handle these concerns for you, or at least to ease the management of many security aspects. Nevertheless, it won't prevent you from needing dedicated security people to manage the selected solutions, as well as other security aspects, such as raising staff awareness to prevent phishing.
Beware that choosing a third party comes with downsides: you become, to a certain extent, dependent on their infrastructure, their partners, and their problems (availability, security). Thus, you can encounter issues over which you have no control.
At Migros Online, we decided a few years back to work with Cloudflare to have a unique entrypoint for our infrastructure (on-premises back then, in the cloud today).
Using such a tool brought us many security and performance aspects for our website and our mobile applications:
Thanks to Cloudflare, we were able to consolidate our public exposure, simplify its management and get confidence that we are in good hands when problems arise.
As an example of Cloudflare's usage for Migros Online, I went on stage (for the first time!) during the Cloudflare Immerse 2025 event in Zurich to present how Cloudflare helped us in mitigating DDoS attacks we faced in the past.
The recording is available below and outlines the Migros Online context, what issues we faced and how Cloudflare was a key element in solving the problem.
2026-01-29 15:26:39
Spotify's Backstage created the Internal Developer Portal category. It showed that a central hub for services, documentation, and tooling could actually improve how developers work. But here's what a lot of engineering leaders figure out the hard way: Backstage isn't right for every team.
If you're reading this, you've probably realized that adopting Backstage means committing serious engineering resources. You need a dedicated team, TypeScript expertise, and months of setup time. For a lot of teams, the operational overhead just isn't worth it.
But the problem Backstage solves doesn't go away. As your organization grows past about 150 people, you hit what's called the Dunbar Number Effect. Anthropologist Robin Dunbar found that humans can only maintain stable social networks of around 150 people. Go beyond that, and tribal knowledge disappears. Nobody knows who owns what. The informal Slack channels that worked when you were 30 people turn into complete chaos.
You need a system to fix this chaos, a "system of record" for your engineering organization. But you don't need the complexity of self-hosted Backstage to get there.
This guide covers the best Backstage alternatives in 2026. I'll break down the three paths you can take (Build, Buy, or Hybrid) and help you figure out which one makes sense without drowning your platform team in maintenance work.
Before we look at specific tools, you need to understand the three approaches you can take:
Build (Self-Hosted Backstage): You take the open-source Backstage project and dedicate engineers to build, customize, and maintain your own portal. Ultimate flexibility, but significant headcount and operational costs.
Buy (Proprietary IDPs): You purchase a SaaS solution from a vendor like Cortex or Port. Quick to set up and feature-rich, but you're locked into their proprietary data model.
Hybrid (Managed Backstage): You use a service like Roadie that handles the hosting and maintenance of Backstage for you. You get the open-source ecosystem without the operational burden.
Here's a quick comparison of your options:
| Tool | Core Technology | Hosting Model | Key Strength | Ideal For |
|---|---|---|---|---|
| Roadie | Backstage | SaaS (Managed) | Backstage ecosystem without the overhead | Teams who want Backstage's power but need a managed solution |
| Cortex | Proprietary | SaaS | Engineering metrics and scorecards | Organizations focused on measuring service quality |
| Port | Proprietary | SaaS | Developer-friendly API and flexibility | Teams building custom workflows |
| Atlassian Compass | Proprietary | SaaS | Deep Atlassian integration | Companies invested in the Atlassian stack |
| OpsLevel | Proprietary | SaaS | Service maturity and reliability checks | SRE teams enforcing production standards |
| Self-Hosted Backstage | Backstage (OSS) | Self-Hosted | Ultimate customization | Large orgs with dedicated platform teams (5+ engineers) |
Roadie isn't an alternative to Backstage; it's a different way to adopt it. The core idea is that you shouldn't have to choose between the power of an open-source community and the convenience of a SaaS product.
I've talked to a lot of teams who committed to self-hosting Backstage, only to realize it requires 3-12 engineers and 6-12 months to get production-ready. That's a significant investment. Roadie solves this by providing a secure, scalable, and fully managed Backstage experience out of the box.
Best for: Teams who've decided on Backstage but want to accelerate their timeline and reduce operational burden.
Key Features:
Considerations: Roadie uses Backstage as its foundation, so you get the same data model and core experience. If you want a completely different, highly opinionated UI, a proprietary vendor might fit better.
Cortex has established itself as a leader in the IDP space. They focus heavily on service quality, reliability, and engineering metrics. Their Scorecards feature is particularly strong for defining standards and tracking adoption.
Best for: Organizations focused on establishing and measuring engineering standards.
Key Features: A central inventory for microservices and APIs, scorecards to track service health, and a scaffolder for creating new services from templates.
Considerations: Cortex is proprietary. You're locked into their data model, and migrating away later could be painful.
Port is built around flexibility. Their developer-friendly API lets you ingest any data and build custom workflows. They position themselves as a platform for building a developer portal, not a rigid out-of-the-box solution.
Best for: Platform teams with strong dev skills who want to build highly custom experiences.
Key Features: A flexible "blueprint" model to define any asset, a self-service hub for custom actions, and scorecards for tracking quality.
Considerations: Port's flexibility is powerful but comes with a steeper learning curve. Expect more initial setup compared to more opinionated platforms.
Atlassian Compass is Atlassian's entry into developer experience. Its main advantage is seamless integration with Jira, Confluence, and Bitbucket.
Best for: Companies already standardized on Atlassian tools.
Key Features: A component catalog for tracking ownership, health scorecards, and deep native integration with other Atlassian products.
Considerations: If you're not an Atlassian-centric organization, Compass may feel less compelling compared to other options.
OpsLevel is a mature player focused on service ownership and reliability. SRE and platform teams like it because it helps answer, "Is our software ready for production?"
Best for: SRE-driven organizations enforcing service maturity standards.
Key Features: A complete service catalog, an extensive library of automated maturity checks, and integrations with on-call tools like PagerDuty.
Considerations: OpsLevel's focus is more on reliability and standards than on developer self-service, which is stronger in other platforms.
Choosing to self-host Backstage is a significant commitment. You should treat it like building an internal product, not just deploying a tool.
Best for: Large organizations with a well-funded platform team (5+ engineers) that has a clear mandate to build and maintain a customized developer portal.
Key Features: Complete control over the code and data model. You can customize it to your exact specifications.
Considerations: This path has a high cost of ownership. You need to account for the fully-loaded salaries of a dedicated engineering team, 6-12 months of initial build time, and ongoing operational burden for maintenance and upgrades.
Your choice depends on your organization's priorities, resources, and philosophy. Here are the questions you should ask:
If you want to avoid vendor lock-in and tap into community innovation, choose between self-hosting Backstage or using a managed service like Roadie. If you prefer an all-in-one vendor experience, a proprietary option like Cortex or Port makes more sense.
If you have 5-10 engineers with TypeScript experience and a mandate to build a custom portal, self-hosted Backstage is viable. If your platform team is smaller or focused on other priorities, Roadie or a proprietary vendor is more efficient.
If you need scorecards and don't mind vendor lock-in, tools like Cortex or OpsLevel offer polished solutions.
If you want to build custom workflows from scratch and are comfortable in a closed-source ecosystem, Port gives you a flexible API.
If your organization lives entirely in the Atlassian suite, Compass is a natural extension.
If you want enterprise features combined with the freedom of the open-source ecosystem, Roadie gives you both.
An Internal Developer Portal is a long-term investment in your developer experience. The choice between Build, Buy, and Hybrid depends entirely on your team's size, skills, and priorities.
I've seen teams succeed with all three approaches. The key is being honest about what you can realistically maintain and what problems you're actually trying to solve.
I'm curious what path you're evaluating. Are you leaning toward self-hosting Backstage? Considering a proprietary vendor? Looking at the hybrid approach with something like Roadie? What's the biggest factor driving your decision: team size, budget, or something else? Drop a comment and share what's working (or not working) in your evaluation process.