The Practical Developer

A constructive and inclusive social network for software developers.

The Systemd Bug That Nobody Wants to Own

2026-02-28 03:18:00

TL;DR: There’s a namespace bug affecting Ubuntu 20.04, 22.04, and 24.04 servers that causes random service failures. It’s been reported since 2021 across systemd, Ubuntu, Fedora, and Red Hat trackers. Most reports are either expired or labeled “not-our-bug.” Only a reboot fixes it.

If you’re running Ubuntu servers and have ever seen this in your logs:

Failed to set up mount namespacing: /run/systemd/unit-root/dev: Invalid argument
Failed at step NAMESPACE spawning: Invalid argument
Main process exited, code=exited, status=226/NAMESPACE

Congratulations. You’ve encountered one of the most frustrating bugs in the Linux ecosystem — one that’s been bouncing between the kernel and systemd teams for years with no resolution.

What Happens

Random systemd services — including critical ones like systemd-resolved, systemd-timesyncd, systemd-journald, and your own custom services — suddenly refuse to start. The error mentions “mount namespacing” and “Invalid argument.”

Restarting the service doesn’t help. systemctl daemon-reload doesn’t help. The only reliable fix is a full system reboot.

If you’re running containerized workloads (LXC, LXD, Proxmox), it gets worse: the bug can affect the entire host node, and container reboots won’t fix it — you need to reboot the hypervisor itself.
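If you want to sweep journals across a fleet for this failure signature, a small matcher helps. Here is a hedged shell sketch; the helper name `is_namespace_failure` is mine, not from any tool, and it simply matches the error lines quoted above:

```shell
# is_namespace_failure: report (via exit status) whether a log line
# matches the NAMESPACE failure signature described in this article.
is_namespace_failure() {
  case "$1" in
    *"status=226/NAMESPACE"*|*"Failed to set up mount namespacing"*) return 0 ;;
    *) return 1 ;;
  esac
}

# On a live system you might feed it journal output, e.g.:
#   journalctl -b -u systemd-resolved | while IFS= read -r line; do
#     is_namespace_failure "$line" && echo "hit: $line"
#   done
```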

The Blame Game

I’ve tracked this bug across multiple issue trackers:

  • systemd/systemd #24798 — Ubuntu 20.04, September 2022
  • systemd/systemd #19926 — Labeled not-our-bug, June 2021
  • Ubuntu Launchpad #1990659 — Expired due to inactivity
  • Fedora CoreOS #1296 — Affects PXE/diskless boot
  • Red Hat Bugzilla #2111863 — Migrated to Jira, status unknown
  • dbus-broker #297 — CentOS Stream 9

The pattern is always the same: user reports the bug, maintainers ask for debug logs, user either provides them or doesn’t respond fast enough, bug expires or gets closed with “not-our-bug.”

The systemd team says it’s a kernel issue. The kernel team… well, I haven’t found anyone from the kernel team actively investigating this.

Root Causes (As Best We Can Tell)

The bug appears to involve:

  1. Race conditions in mount namespace setup — systemd tries to remount /sys and /dev while other unmount operations are happening
  2. Mount propagation issues — systemd changes the default from MS_PRIVATE to MS_SHARED, causing unexpected interactions
  3. Resource exhaustion — sometimes related to inotify limits (fs.inotify.max_user_instances)
  4. Container/virtualization edge cases — more prevalent in LXC/LXD environments

But nobody has done a definitive root cause analysis. The bug is intermittent, hard to reproduce on demand, and affects systems that have been running fine for weeks or months.

The Irony

Remember when /etc/init.d/ scripts “just worked”? When starting a service meant running a shell script that executed a binary?

Systemd brought us dependency management, socket activation, cgroups integration, and dozens of security features like PrivateDevices=, ProtectSystem=, and PrivateTmp=. These are genuinely useful features.

But they also introduced complexity. The namespace isolation that causes this bug exists because systemd creates a private mount namespace for services with security hardening enabled. It’s a feature. Until it breaks.
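For context, the private mount namespace is opted into per unit. A minimal, purely illustrative unit file (the binary path is a placeholder) with the hardening directives named above might look like:

    [Unit]
    Description=Example hardened service (illustrative only)

    [Service]
    # /usr/local/bin/my-daemon is a placeholder, not a real binary
    ExecStart=/usr/local/bin/my-daemon
    # Each directive below causes systemd to build a private mount
    # namespace for this service -- the machinery this bug lives in.
    PrivateDevices=yes
    PrivateTmp=yes
    ProtectSystem=strict
    ProtectHome=yes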

The old init system didn’t have this bug because it didn’t have namespaces. Services ran in the global namespace. Less secure? Yes. But also fewer moving parts to fail.

Workarounds

If you’re affected, here are your options:

1. Disable namespace isolation for affected services:

sudo systemctl edit your-service.service

Then add this to the override file (note: this trades away the hardening those directives provide):

[Service]
PrivateDevices=no
ProtectHome=no
ProtectSystem=no

2. Clear corrupted systemd state:

sudo rm -rf /run/systemd/unit-root/
sudo systemctl daemon-reload

3. Increase inotify limits:

echo "fs.inotify.max_user_instances=512" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

4. Monitor and auto-restart:

0 */3 * * * systemctl list-units --failed | grep -q NAMESPACE && reboot

Yes, that last one is a scheduled reboot. That’s where we are.

What Should Happen

Someone — Canonical, Red Hat, or the systemd team — needs to:

  1. Create a reliable reproduction case
  2. Add instrumentation to capture the exact kernel/systemd state when the failure occurs
  3. Do a proper root cause analysis
  4. Fix it in either the kernel, systemd, or both

Until then, we’re all just rebooting servers and hoping.

Have you encountered this bug? What’s your workaround?

I’d love to hear from anyone who has done deeper investigation or found a permanent fix.

JSONB and GIN Index: Optimizing Queries in PostgreSQL

2026-02-28 03:11:44

PostgreSQL is one of the DBMSs I have used most throughout my career. It is robust, reliable, and ships a range of advanced features that make it a popular choice among developers. One of those features is support for the JSON and JSONB data types, which let you store and query semi-structured data efficiently. In this post, we will explore how to use the JSONB type together with a GIN index to optimize queries in PostgreSQL.

What is JSONB?

JSONB is a binary representation of PostgreSQL's JSON data type. It stores data in a format optimized for querying, which makes it faster to process than plain JSON. JSONB also supports indexing, meaning you can create indexes to speed up queries against specific fields inside the JSONB document.

Creating a GIN index for JSONB

A GIN (Generalized Inverted Index) index is a data structure that indexes the values inside a JSONB column. It is especially useful for queries that filter on specific attributes within the JSONB, as in the example below:

-- 1. Create the table
CREATE TABLE cards (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    attributes JSONB
);

-- 2. Insert sample data
INSERT INTO cards (name, attributes)
SELECT 
    'Card ' || generate_series,
    jsonb_build_object(
        'type', 'Creature',
        -- A subtype with only a few variants, so each value is well sampled
        'subtype', (ARRAY['Dragon', 'Goblin', 'Human', 'Elf'])[floor(random() * 4 + 1)],
        'stats', jsonb_build_object(
            'power', floor(random() * 10),
            'toughness', floor(random() * 10)
        ),
        -- Rare ability: only 'Phasing' will be cheap to filter for
        'abilities', CASE 
            WHEN random() < 0.001 THEN jsonb_build_array('Phasing')
            ELSE jsonb_build_array('Flying')
        END
    )
FROM generate_series(1, 1000000);

SELECT * FROM cards LIMIT 3;
-- id |  name  | attributes
-- ---+--------+-----------------------------------------------------------------------------------------------------------
--  1 | Card 1 | {"type": "Creature", "stats": {"power": 8, "toughness": 1}, "subtype": "Goblin", "abilities": ["Flying"]}
--  2 | Card 2 | {"type": "Creature", "stats": {"power": 0, "toughness": 3}, "subtype": "Elf", "abilities": ["Flying"]}
--  3 | Card 3 | {"type": "Creature", "stats": {"power": 7, "toughness": 4}, "subtype": "Human", "abilities": ["Flying"]}

-- Refresh statistics for the query planner
ANALYZE cards;

Now let's analyze query performance without the index, and then with the GIN index.

-- Consulta sem índice
EXPLAIN ANALYZE
SELECT * FROM cards
WHERE attributes @> '{"type": "Creature", "subtype": "Dragon"}';
"Seq Scan on cards  (cost=0.00..36891.00 rows=179403 width=163) (actual time=0.037..944.492 rows=250022 loops=1)"
"  Filter: (attributes @> '{""type"": ""Creature"", ""subtype"": ""Dragon""}'::jsonb)"
"  Rows Removed by Filter: 749978"
"Planning Time: 0.303 ms"
"Execution Time: 1441.434 ms"

Note the sequential scan over the table, which can be very slow on large tables. Look at Rows Removed by Filter: PostgreSQL had to read every row in the table to find the matching ones.

Let's create the GIN index and run the query again.

-- Criando o índice Gin
CREATE INDEX idx_cards_attributes ON cards USING gin (attributes);
-- Consulta com índice
EXPLAIN ANALYZE
SELECT * FROM cards
WHERE attributes @> '{"type": "Creature", "subtype": "Dragon"}';
"Bitmap Heap Scan on cards  (cost=2044.35..28677.89 rows=179403 width=163) (actual time=96.083..777.979 rows=250022 loops=1)"
"  Recheck Cond: (attributes @> '{""type"": ""Creature"", ""subtype"": ""Dragon""}'::jsonb)"
"  Heap Blocks: exact=24391"
"  ->  Bitmap Index Scan on idx_cards_attributes  (cost=0.00..1999.50 rows=179403 width=0) (actual time=92.507..92.509 rows=250022 loops=1)"
"        Index Cond: (attributes @> '{""type"": ""Creature"", ""subtype"": ""Dragon""}'::jsonb)"
"Planning Time: 0.388 ms"
"Execution Time: 1197.012 ms"

Now we get a Bitmap Heap Scan, which is more efficient than the Seq Scan: there is no Rows Removed by Filter line, because the GIN index already narrowed the scan to the relevant rows. Here the execution time drops from about 1441 ms to about 1197 ms. The gain is modest because this predicate matches roughly a quarter of the table (250,022 of 1,000,000 rows); the more selective the filter, the more dramatic the index's advantage becomes.
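The table setup deliberately seeded a rare 'Phasing' ability in roughly 0.1% of rows, which makes for a much more favorable demonstration of the index. A sketch (timings will vary by machine, so none are quoted here):

-- Highly selective containment query: only ~0.1% of rows match,
-- so the bitmap index scan touches far fewer heap pages.
EXPLAIN ANALYZE
SELECT * FROM cards
WHERE attributes @> '{"abilities": ["Phasing"]}';

As a side note, if you only ever query with the @> containment operator, PostgreSQL's jsonb_path_ops operator class (CREATE INDEX ... USING gin (attributes jsonb_path_ops)) usually yields a smaller and faster index than the default jsonb_ops, at the cost of not supporting key-existence operators like ?.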

Conclusion

Using the JSONB type together with a GIN index can significantly improve the performance of queries against JSONB columns in PostgreSQL. With a GIN index in place, queries that filter on specific attributes inside the JSONB can run much faster, especially on tables holding a large volume of data. If you work with semi-structured data, consider using JSONB with GIN indexes to optimize your queries and improve your database's performance.

Price Action: How to Understand Breakout Tests (Part 1)

2026-02-28 03:10:26


Every fluctuation in the market is a test of a key price level.

After a breakout occurs, why does the market almost always return to test that breakout? This is because any breakout represents price temporarily departing from the original equilibrium zone, and the market naturally wants to verify whether the breakout is genuine and valid. In other words, every breakout is essentially a "hypothesis test," and the market confirms through subsequent retest behavior whether the breakout can hold.

In Price Action logic, there is a very important inference:
  • Test success = trend resumption: a failed reversal is just a pullback.
  • Test failure = new breakout: a reversal is a failed test, and every reversal is also a new breakout.
That is to say, if the test succeeds, the market resumes the original trend, and the failed reversal is essentially just a pullback; if the test fails, then it is itself a new breakout — a reversal is actually a failed test, and every reversal is accompanied by a new breakout.

Taking a declining channel as an example, the channel usually has a clear trend line. When price forcefully reverses and breaks through the declining trend line in one move, this breakout location is the critical "trend line breakout signal." It is equivalent to a key offensive by the bulls. This level often becomes a new bull support zone, and the market almost always pulls back to test it. There are two ways to test: either pulling back to the breakout level itself, or retesting the other side of the extended trend line. Price typically finds support after touching the trend line or breakout point, then bounces again.

This trend line retest phenomenon is extremely common and is one of the foundational characteristics of Price Action. Although support and resistance levels appear horizontal on charts, in reality they are often sloped, because price action has fractal characteristics — small structures are always nested within larger structures. Even when market momentum is extremely strong, price usually retests the key breakout level at least once to validate the trend's effectiveness.

To judge whether a breakout test is successful, the key is to observe bar behavior. If price directly breaks below support during the test and confirms with a close, the test has failed; if it only tentatively touches the support level before bouncing back, the test has succeeded and the trend continues. When large amounts of overlapping bars or deep pullbacks appear, it means the market is conducting a more thorough test, at which point the probability of the gap being filled is high and the pullback depth is often significant.

Many traders fall into a misconception when the market makes new highs — believing that since there is no resistance above, price can "soar to the sky." This thinking is incorrect. In fact, every new high is inevitably accompanied by another market test. That is the market testing the bulls' resolve, seeing whether they can continue buying and continue pushing price higher.

Even if you cannot find a clear reason for the test on the chart, you must understand: every breakout is actually the market self-validating. Price will continuously probe key levels, repeatedly confirming the balance of power between bulls and bears. Only breakouts that pass the test can become starting points for trend continuation; those breakouts that fail the test often evolve into the beginning of a reversal.

CVE-2026-27449: Unauthenticated Data Exposure via Broken Access Control in Umbraco Engage

2026-02-28 03:10:11

Unauthenticated Data Exposure via Broken Access Control in Umbraco Engage

Vulnerability ID: CVE-2026-27449
CVSS Score: 7.5
Published: 2026-02-27

A high-severity access control failure has been identified in Umbraco Engage (formerly uMarketingSuite), specifically affecting the Forms component. The vulnerability arises from missing authentication and authorization checks on sensitive API endpoints, allowing unauthenticated remote attackers to access proprietary marketing data and form submissions. By exploiting this flaw, attackers can bypass intended security boundaries and enumerate records via Insecure Direct Object References (IDOR), leading to significant leakage of business intelligence and potentially personally identifiable information (PII).

TL;DR

CVE-2026-27449 permits unauthenticated attackers to query internal Umbraco Engage API endpoints. By manipulating ID parameters, attackers can scrape sensitive form and analytics data. Immediate patching to versions 16.2.1 or 17.1.1 is required.

⚠️ Exploit Status: POC

Technical Details

  • CVE ID: CVE-2026-27449
  • CVSS v3.1: 7.5 (High)
  • CWE IDs: CWE-284, CWE-306, CWE-639
  • Attack Vector: Network
  • Privileges Required: None
  • Impact: Confidentiality (High)

Affected Systems

  • Umbraco Engage (uMarketingSuite)
  • Umbraco.Engage.Forms
  • Umbraco.Engage.Forms: < 16.2.1 (Fixed in: 16.2.1)
  • Umbraco.Engage.Forms: >= 17.0.0, < 17.1.1 (Fixed in: 17.1.1)

Mitigation Strategies

  • Update Umbraco Engage packages to fixed versions immediately.
  • Implement Web Application Firewall (WAF) rules to restrict access to '/umbraco/' API paths.
  • Restrict network access to backoffice APIs via VPN or IP allowlisting.

Remediation Steps:

  1. Identify the current version of Umbraco.Engage.Forms or uMarketingSuite running in the environment.
  2. If running version 16.x, update the NuGet package to version 16.2.1.
  3. If running version 17.x, update the NuGet package to version 17.1.1.
  4. Rebuild and redeploy the application to the production environment.
  5. Verify the fix by attempting to access the Engage API endpoints without an active session; the server should now return a 401 Unauthorized or 302 Redirect to login.
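Step 5 can be spot-checked from the command line. The endpoint path below is a placeholder (the advisory only tells us the affected APIs live under '/umbraco/'), and `check_status` is a helper name of my own invention:

```shell
# check_status: classify the HTTP status returned to an anonymous request.
# 401/302 means access control is now enforced; 200 means still exposed.
check_status() {
  case "$1" in
    401|302) echo "protected" ;;
    200)     echo "STILL EXPOSED" ;;
    *)       echo "inconclusive ($1)" ;;
  esac
}

# On a live deployment (HOST and the endpoint path are placeholders):
#   code=$(curl -s -o /dev/null -w '%{http_code}' \
#     "https://HOST/umbraco/<engage-endpoint>")
#   check_status "$code"
```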

References

Read the full report for CVE-2026-27449 on our website for more details including interactive diagrams and full exploit analysis.

I’m Building for the Real MVPs of Our Economy, SMEs

2026-02-28 02:59:51

This is a submission for the DEV Weekend Challenge: Community

The Community

I built this for Small and Medium Scale Enterprises (SMEs), the silent engines of our economy.

They create jobs.
They power innovation.
They give many of us our first clients, first paychecks, and first real-world experience as developers.

And yet… most of them struggle to survive.

The biggest challenges they face:

💰 Limited access to funding

🎯 Difficulty finding consistent customers

🏗 Infrastructure and operational constraints

If SMEs thrive, the entire ecosystem thrives, including us as developers.

So instead of just building for other developers…
I decided to build for them.

What I Built

I created a web-based platform that helps SMEs:

🔎 Discover funding opportunities

🤝 Connect with potential customers

🌱 Position their businesses for sustainable growth

This idea didn't come from a random brainstorm; it was born out of real conversations in a startup founders' community I belong to. Many of its members are actively advising me on the features they need.

This isn’t theory.
It’s being shaped by the people it’s meant to serve.

Demo

An MVP is already live:
👉 https://good-name-ng.web.app

I’d love your feedback, especially from founders, developers, and ecosystem builders.

Code

You can explore the code here:
👉 https://github.com/nwanna-joseph

How I Built It

The MVP is powered by:

🔥 Firebase (authentication, backend services)

🖼 Vue (frontend framework)

☁️ AWS (infrastructure support)

The goal was to move fast, validate quickly, and iterate based on real user feedback.