The Practical Developer
A constructive and inclusive social network for software developers.

Self-Hosted Nextcloud on an Ubuntu Server 24.04 VM

2026-01-14 07:10:03

While learning Linux server administration, I wanted to set up something more realistic than just installing packages or running test containers.

I wanted something that actually reflects what you'd do on a real server: running an actual service, managing containers, handling storage, and dealing with security-related setup.

So I deployed Nextcloud using Docker and Docker Compose on an Ubuntu 24.04 server VM.

What the setup includes
Services:

  • Nextcloud running in Docker
  • MariaDB for the database
  • Docker Compose to manage the stack

docker compose ps
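A stack like this can be sketched in a minimal docker-compose.yml (an illustrative sketch with placeholder passwords and volume names, not necessarily my exact file):

```yaml
services:
  db:
    image: mariadb:11
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme-root   # placeholder
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme             # placeholder
    volumes:
      - db_data:/var/lib/mysql             # named volume: data survives restarts

  app:
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db                       # service name doubles as hostname
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - nextcloud_data:/var/www/html       # Nextcloud files and config persist here
    depends_on:
      - db

volumes:
  db_data:
  nextcloud_data:
```
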

Persistence:

  • Database data stored using Docker volumes
  • Nextcloud data directory mounted so files persist across restarts

docker volume ls

Access & networking:

  • HTTPS enabled using a self-signed certificate
  • Service accessible locally and over the network

Why I built it
I wanted hands-on experience with things you actually run into when working with Linux servers.

  • How multiple containers work together
  • How data is stored safely outside containers
  • How to work with docker compose

Challenges & fixes
Database persistence

  • Made sure MariaDB data survives container restarts by using volumes and testing stop/start scenarios.

HTTPS setup

  • HTTPS works using a self-signed certificate, but browsers still show a warning. Good reminder that self-signed certificates are fine for testing, but not ideal for production.
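For reference, a self-signed certificate like the one used here can be generated with openssl (the file names and CN are placeholders):

```shell
# Generate a self-signed certificate and key valid for one year.
# CN=localhost is a placeholder; use your server's hostname or IP.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout nextcloud.key -out nextcloud.crt \
  -days 365 -subj "/CN=localhost"

# Inspect the result: browsers warn because no trusted CA signed it.
openssl x509 -in nextcloud.crt -noout -subject -issuer
```
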

Redis configuration

  • Tried adding Redis for Nextcloud, but stopped when the configuration became unclear. Decided to skip it for now instead of adding something I didn’t fully understand.

(Screenshot: Nextcloud setup page)

What I learned

  • How to structure a Docker Compose file for a multi-service setup
  • How Nextcloud depends on external services like databases
  • How Docker volumes work and why they’re important
  • The difference between “HTTPS enabled” and “trusted HTTPS”
  • When it's better to pause and understand something instead of forcing it to work

What's next?
Possible improvements:

  • Setting up proper HTTPS with Let's Encrypt
  • Adding Redis once I understand how it fits into the setup
  • Improving security and configuration cleanup
  • Adding basic monitoring or backups

(Screenshot: Nextcloud front page)

This project was mainly about learning by doing and understanding how a real service is deployed and maintained on a Linux server.

Why your Design System is dying (and how we started saving it)

2026-01-14 07:09:58

Every frontend developer has been there: you receive a new prototype in Figma, open the project's component library, and realize that... nothing fits. The colors are slightly different, the spacing doesn't match, and that button that was supposed to be "standard" needs 15 extra lines of CSS to look like the design.

Over the past year, I lived exactly that scenario. What started as a robust React library became, over time, a productivity bottleneck and a museum of obsolete technical decisions.

In this first post, I want to share how we identified that our Design System (DS) was failing and why we decided to rebuild it as a multiplatform ecosystem.

The diagnosis: the "drift" between design and code

The first warning sign was inconsistency. The design team was evolving fast, testing new UX approaches and patterns. Meanwhile, our React library was stagnating.

This disconnect creates a dangerous phenomenon: Shadow CSS. To deliver their tasks on time, developers stop using the system's components and start creating "local versions" or overriding styles globally.

The result? An ever-growing bundle, unmaintainable code, and an interface that looks like a patchwork quilt.

When the technology becomes the problem

In our case, we had a technical complication: the library was built entirely on styled-components, on versions that were becoming deprecated.

Although CSS-in-JS had its heyday, we realized that the way it was implemented across the whole project caused:

  1. Maintenance difficulty: changing basic behavior required navigating complex trees of styled components.
  2. A barrier to entry: new stacks (such as mobile) couldn't reuse anything we had already built for the web.

We realized that updating library versions wasn't enough. We needed a Design System System (yes, a system to manage the system).

The turning point: from React to multiplatform

The big decision was to stop building only for the web and start building for the brand.

Our company doesn't run on React alone. We have native Android, iOS, Flutter, and the web. If we stayed focused on a single React library, the inconsistency problem would repeat itself on every other front.

The monorepo strategy

We decided to migrate to a monorepo structure. The idea was to centralize the Design System's intelligence in one place, but distribute it to every platform:

  • Design tokens: the single source of truth.
  • Core logic: component business rules that could be shared.
  • Platform-specific implementations: dedicated packages for React, Flutter, and native mobile.

This changed the game. The Design System stopped being "that repository of React buttons" and became the engineering organization's universal language.
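As an illustration of what tokens as a single source of truth can look like, here is a hypothetical, platform-agnostic token file (names and values are invented for this sketch; real setups often generate these with tools like Style Dictionary):

```typescript
// Hypothetical design tokens: one platform-agnostic source of truth.
export const tokens = {
  color: {
    brand: { primary: "#0A66FF", secondary: "#00C389" },
    text: { default: "#1A1A1A", inverse: "#FFFFFF" },
  },
  spacing: { xs: 4, sm: 8, md: 16, lg: 24 },
  radius: { sm: 4, md: 8, pill: 9999 },
} as const;

// Each platform consumes the same values: React maps them to CSS variables,
// while Flutter/Android/iOS generate their own constants from this file.
export type Tokens = typeof tokens;
```
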

Why should you care?

If you're a senior software developer, you know our job isn't just writing code, but ensuring the maintainability and scalability of the solution. Keeping the connection between design and code alive brings clear benefits:

  • Reduces time-to-market: fewer discussions about pixels, more focus on features.
  • Eliminates technical debt: brand updates are propagated automatically through tokens.
  • Unifies the experience: users feel they're using the same product, whether on the site or in the app.

What about you?

Have you ever felt that your Design System is hindering more than helping?

Refactor the Terraform Script — Restoring Balance

2026-01-14 07:07:17

Previously, when using Terraform with LocalStack, I avoided the tflocal wrapper so that my configuration would be exactly the same as when running against a real environment. However, I noticed that this approach adds a lot of unnecessary boilerplate and extra maintenance.

As an example, here is my previous provider configuration:

provider "aws" {
  profile = "localstack"
  region  = "us-east-1"

  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    s3             = "http://s3.localhost.localstack.cloud:4566"
  }

  default_tags {
    tags = {
      Environment = "tutorial"
      Project     = "terraform-configure-providers"
    }
  }
}

We need to specify the LocalStack endpoint and a few flags like skip_credentials_validation, none of which are needed in a real environment.

Tflocal

LocalStack provides tflocal, a wrapper for running Terraform against LocalStack. You can read more about it in the LocalStack documentation.

Once everything is set up, you can clean up your .tf files. The only difference is in how you run Terraform.

To init
tflocal init

To apply
tflocal apply
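With tflocal in place, the provider block from earlier can shrink to roughly this (a sketch; tflocal injects the LocalStack endpoints and dummy-credential settings at run time):

```hcl
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "tutorial"
      Project     = "terraform-configure-providers"
    }
  }
}
```
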

Closing

Luckily, I realized this early and moved away from the dark side. With tflocal, things became much simpler. The Terraform files stay clean and close to real AWS usage, without extra LocalStack settings inside the code.

LocalStack details are handled when running Terraform, not when writing it. This keeps the setup easy to understand, easier to maintain, and closer to how things should work in the real environment.

SQL Aggregations Finally Made Sense: GROUP BY, HAVING, MIN, MAX, AVG

2026-01-14 07:02:48

Today was one of those SQL days where things look simple… until you actually write the query.

I focused on GROUP BY, HAVING, and the basic aggregate functions: MIN, MAX, and AVG. These are things I’ve heard about for a long time, but I realised I didn’t truly understand them until I tried using them with real questions.

This post is me documenting what finally made sense, and what confused me at first.

The Mental Shift: From Rows to Groups

Up until now, most of my SQL queries felt like this:

“Show me rows that match X condition”

But GROUP BY changes the game. Instead of thinking about rows, you start thinking about groups of rows. That was the first uncomfortable part. So let me show you what I did.

What I Practiced Today

I worked with queries that asked questions like:

  • How many users per country?
  • What’s the average salary per department?
  • Which department has the highest average score?

All of these questions required grouping, not a simple "get rows that fulfil this condition".

GROUP BY (What Finally Clicked)

At first, I kept getting errors like:

column must appear in the GROUP BY clause or be used in an aggregate function

That error message annoyed me… until I slowed down and tried to understand what it was actually telling me.

What helped me:

If a column is not inside an aggregate function, it must be in the GROUP BY.

Example:

SELECT country, COUNT(*) 
FROM users
GROUP BY country;

This works because:

  • country → used for grouping
  • COUNT(*) → summarizes each group

But this does NOT work:

SELECT country, email
FROM users
GROUP BY country;

Because now SQL is like:

“Which email do you want me to pick for each country??”

That was my first real “okay… fair enough” moment.

MIN, MAX, and AVG (Straightforward but Powerful)

Once grouping made sense, these felt more natural.

MIN

SELECT department, MIN(salary)
FROM employees
GROUP BY department;

→ lowest salary per department

MAX

SELECT department, MAX(salary)
FROM employees
GROUP BY department;

→ highest salary per department

AVG

SELECT department, AVG(salary)
FROM employees
GROUP BY department;

→ average salary per department

Nothing fancy, but seeing real numbers come out per group made SQL feel more real-world.

HAVING (The Part I Confused with WHERE)

This is where I stumbled a bit.

At first, I kept writing queries like this:

SELECT department, AVG(salary)
FROM employees
WHERE AVG(salary) > 50000
GROUP BY department;

And SQL basically said nope.

The Difference That Finally Stuck:

  • WHERE filters rows before grouping

  • HAVING filters groups after grouping

So the correct version was supposed to be:

SELECT department, AVG(salary)
FROM employees
GROUP BY department
HAVING AVG(salary) > 50000;

Once I saw HAVING as “WHERE for grouped data”, it stopped feeling magical.

Order of Execution

I then learnt the order of execution of SQL queries and understanding the order made everything clearer:

  • FROM
  • WHERE (filter rows)
  • GROUP BY (create groups)
  • HAVING (filter groups)
  • SELECT
  • ORDER BY

This explains why HAVING exists: WHERE simply runs too early.
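The whole pipeline can be put together in a single query (a sketch against the same employees table; the hire_date column is hypothetical):

```sql
SELECT department, AVG(salary) AS avg_salary
FROM employees
WHERE hire_date >= '2020-01-01'   -- WHERE: filters rows before grouping
GROUP BY department               -- GROUP BY: forms the groups
HAVING AVG(salary) > 50000        -- HAVING: filters groups after grouping
ORDER BY avg_salary DESC;         -- ORDER BY: runs last
```
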

What Still Feels Weird

I’m still adjusting to:

  • Thinking in groups instead of rows
  • Knowing when I actually need GROUP BY
  • Reading grouped queries without mentally getting lost

But compared to yesterday? This is progress.

Why I’m Sharing This

GROUP BY and HAVING are usually taught quickly, like:

“Here’s the syntax, move on.”
But as a beginner, this is a mental shift, not just syntax.

If you’re also learning SQL and GROUP BY felt confusing at first, same here.

What’s Next

I’m learning SQL slowly, properly, and honestly, and I’m documenting the process so other beginners don’t feel like they’re the only ones struggling.

If you’re on the same path, you’re not behind. You’re just learning it the right way.

ReactJS Hook Pattern ~Latest Ref Pattern~

2026-01-14 07:01:32

The latest ref pattern solves the problem of accessing the latest value inside useEffect without adding it to the dependency array. First, you create a ref to store the function; then a useEffect with no dependency array (which runs after every render) keeps the ref up to date, while a second useEffect with an empty dependency array sets up the listener only once.

import { useEffect, useRef } from "react";

function ClickHandler({ onClick }) {
  // Store the latest onClick in a ref.
  const onClickRef = useRef(onClick);

  // No dependency array: this runs after every render,
  // so the ref always holds the latest onClick.
  useEffect(() => {
    onClickRef.current = onClick;
  });

  // Empty dependency array: the listener is attached exactly once,
  // but it always calls the latest onClick through the ref.
  useEffect(() => {
    const handleClick = () => {
      onClickRef.current();
    };

    document.addEventListener("click", handleClick);

    return () => {
      document.removeEventListener("click", handleClick);
    };
  }, []);

  return (
    <div>
      <h1>Latest Ref Pattern</h1>
    </div>
  );
}

function App() {
  return <ClickHandler onClick={() => console.log("Clicked")} />;
}

export default App;

Why most aerodynamic tools fail beyond stall – and what flight simulators need instead

2026-01-14 06:59:39

Most aerodynamic tools stop working after ~20° angle of attack. But for flight simulators and game development, we need to understand what happens during spins, tailslides, and aerobatics.

I originally began developing SimiFoil while working on my flight simulator project SimiFlight. What started as an internal tool quickly evolved into a standalone, lightweight aerodynamic curve generator for flight simulators, RC aircraft, and game engines.

👉 Try the free Web Demo: SimiFoil

SimiFoil generates plausible, continuous aerodynamic coefficients across the full 360° angle-of-attack range, including post-stall and reverse-flow regimes. The goal is not CFD-level accuracy, but stable, simulation-friendly data suitable for real-time applications and Blade Element Theory (BET).
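For context, the simplest widely used full-range model is the classic thin flat-plate approximation (this is not SimiFoil's actual method, just a common baseline that tools blend with attached-flow data near 0°):

```python
import math

def flat_plate_coefficients(alpha_deg: float) -> tuple[float, float]:
    """Classic flat-plate approximation, defined over the full 360° range.

    Cl = 2*sin(a)*cos(a), Cd = 2*sin(a)^2. Only a baseline: real tools
    replace it with airfoil-specific data in the attached-flow region.
    """
    a = math.radians(alpha_deg)
    cl = 2.0 * math.sin(a) * math.cos(a)
    cd = 2.0 * math.sin(a) ** 2
    return cl, cd

# In this model the plate's lift peaks at 45° (Cl = 1.0),
# and drag peaks at 90° (Cd = 2.0) where the flow is fully separated.
cl45, cd45 = flat_plate_coefficients(45.0)
```
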

While SimiFoil handles classical and ideal airfoil shapes well, it is also designed to handle non-standard and unconventional profiles, such as:

  • Flat plates with thickness or rounded edges

  • Airfoils with blunt or thick trailing edges

  • Highly modified or non-classical geometries

While the results are not perfect for such profiles, the generated curves remain physically plausible and numerically stable, which is critical for simulation and control systems.

To validate the model, I compared the generated curves against experimental wind tunnel data, including the NACA Technical Note 3361 (1955, Critzos et al.), which investigates the NACA 0012 airfoil over the full 0°–180° (and even 360°) angle-of-attack range. The agreement is remarkably close, especially considering the real-time constraints and analytical nature of the model.

The Web Demo runs directly in the browser and allows you to:

  • Analyze lift, drag, and moment coefficients

  • Explore polar plots (Cl vs. Cd)

  • Design airfoils and experiment in a visual wind tunnel

File export is intentionally disabled in the web demo.

A standalone desktop version with CSV, LUT, and XFoil/XFLR5 export is already implemented and is currently being prepared for release.

Development is ongoing. I plan to continue refining the aerodynamic model and expanding the feature set based on feedback and real-world use cases.

Real-time visualization and curve generation inside SimiFoil

The SimiFoil Dashboard: Analyzing a NACA 0012 profile. The left panel handles configuration, the center shows 360° Lift (Cl), Drag (Cd), and Moment (Cm) curves, and the right panel visualizes real-time forces in a virtual wind tunnel.

Example of aerodynamic curves generated with SimiFoil and exported to XFoil format, shown here inside XFLR5.

The resulting data allows for seamless integration into existing analysis tools and simulation workflows.

If you work in flight simulation: How do you currently handle post-stall and reverse-flow aerodynamics? I’d love to hear your thoughts.