RSS preview of the blog of The Practical Developer

Too Many Decisions, Too Little Strategy

2025-12-15 08:17:13

In many projects, especially under deadline pressure, we make conscious choices to take on technical debt. That is part of reality. Clean Code and Clean Architecture have a cost, and there isn't always the time or the context to pay that price at the right moment.

The problem starts when those decisions stop being strategic and become merely reactive.

Lately, while reviewing projects and PRs, I've noticed a recurring pattern: logic that grows on top of long chains of if/else or switch/case. Most of the time this doesn't come from carelessness, but from small accumulated decisions made to solve the immediate problem.
It may not always be possible to apply Clean Code or Clean Architecture in the ideal way.

Even so, understanding the data structures and designing simple logic to support them is the minimum needed to keep the code from deteriorating quickly.

And here an important point comes in: not every improvement requires a large refactor or a complete architectural restructuring. Often it is possible to simplify the logic, improve the strategy, and choose better data structures, reducing complexity without introducing significant risk to the project.

This isn't about chasing perfect code, but about preventing simple decisions made today from turning into expensive maintenance tomorrow. A well-defined map, a clearer separation of responsibilities, or more declarative logic are already enough to make the code more predictable and less fragile.

That reflection is what motivated me to write a sequence of articles focused more on logical foundations and strategy. The idea is not to eliminate decisions, but to make better decisions, even when the context doesn't allow for the ideal scenario.

Because, in the end, technical debt is not only what we leave undone. It is, very often, how we choose to solve the problem.

  1. Trading cyclomatic complexity for O(1) with Object Maps
  2. Refactoring Ifs Does Not Mean Eliminating Decisions

Another E2E Solution delivered. This time with CI/CD, AWS EventBridge and ECS Fargate

2025-12-15 08:07:37

To wrap up the year, I built my latest E2E project.

It is a side project, but it will also help us at work. We have a service that uploads documents from a third-party system. This integration requires authentication, but the system enforces a monthly password rotation. When the password expires, uploads and downloads start failing, which quickly turns into an operational issue.

To remove the need for manual updates and the risk of someone simply forgetting, I built an automation to handle this end to end.

The solution is a Python worker using Selenium with headless Chromium, executed on a schedule and backed by a full CI/CD pipeline. On every push to the main branch, GitHub Actions assumes an AWS IAM Role via OIDC (no access keys involved), builds the Docker image, and pushes it to Amazon ECR. The workflow then registers a new ECS Task Definition revision, updating only the container image.

This is the architecture design of the solution:

CI/CD:

Execution is handled by Amazon EventBridge, which triggers the task every 29 days:

ECS Cluster:

The task runs on ECS Fargate in a public subnet, with a public IP and outbound traffic allowed. When triggered, Fargate starts the container, runs automation.py, launches Selenium with Chromium and Chromedriver, logs into the system, performs the password rotation, and exits. On success, the task finishes automatically with exit code 0. If an exception occurs, logs are sent to CloudWatch and the error is reported to a Slack alerts channel.
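For readers who want to reproduce the scheduling side, here is a rough infrastructure sketch in AWS CDK (TypeScript). It is only an assumption about how the pieces could be wired together, not the project's actual code: resource names and the image URI are placeholders, and property names may vary slightly between CDK versions.

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

export class RotationWorkerStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Public subnet only: outbound internet access without a NAT Gateway
    const vpc = new ec2.Vpc(this, 'WorkerVpc', {
      maxAzs: 1,
      natGateways: 0,
      subnetConfiguration: [{ name: 'public', subnetType: ec2.SubnetType.PUBLIC }],
    });
    const cluster = new ecs.Cluster(this, 'WorkerCluster', { vpc });

    // Task definition pointing at the image the pipeline pushes to ECR
    const taskDef = new ecs.FargateTaskDefinition(this, 'RotationTask', {
      cpu: 1024,
      memoryLimitMiB: 2048,
    });
    taskDef.addContainer('worker', {
      image: ecs.ContainerImage.fromRegistry('<account-id>.dkr.ecr.<region>.amazonaws.com/rotation-worker:latest'),
      logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'rotation' }),
    });

    // EventBridge rule: start the Fargate task every 29 days
    new events.Rule(this, 'Every29Days', {
      schedule: events.Schedule.rate(cdk.Duration.days(29)),
      targets: [
        new targets.EcsTask({
          cluster,
          taskDefinition: taskDef,
          subnetSelection: { subnetType: ec2.SubnetType.PUBLIC },
          assignPublicIp: true,
        }),
      ],
    });
  }
}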

Architecture decisions:
I chose to run the task in a public subnet for simplicity and cost reasons. Since the worker only needs outbound internet access and does not expose any inbound ports, there’s no additional risk as long as the security group has no inbound rules. This also avoids the cost and complexity of running a NAT Gateway, which would be required with private subnets.

Using ECS Fargate instead of Lambda was also a deliberate decision. Running Selenium with Chromium on Lambda usually requires custom layers and fine-tuning, and it’s easy to hit limits around memory, package size, or execution time. With Fargate, the entire environment is packaged in the Docker image, with predictable runtime behavior and flexible CPU and memory allocation, which makes this kind of workload much easier to operate.

In the end, this is a simple batch worker. It runs on a schedule, does one job, and exits. For headless browser automation, this approach turned out to be more straightforward and reliable.

Refactoring Ifs Does Not Mean Eliminating Decisions

2025-12-15 07:57:08

I've already written about Object Maps, a powerful technique for replacing switch or if/else chains. With it, we trade cyclomatic complexity for direct constant-time (O(1)) access, making it ideal for static key → value mappings (a quick sketch follows the reference below).

  1. Trading cyclomatic complexity for O(1) with Object Maps
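For anyone who hasn't read it, here is a minimal sketch of the idea; the status labels are a hypothetical example, not taken from that article:

// A static key → value map replaces an if/else or switch chain
const statusLabels = {
  PENDING: 'Awaiting payment',
  PAID: 'Payment confirmed',
  CANCELED: 'Order canceled',
};

// One constant-time lookup instead of a chain of comparisons
const label = statusLabels['PAID']; // 'Payment confirmed'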

However, real-world software development is rarely that predictable. Day to day we deal with dynamic business rules, value ranges, combined validations, and context-dependent decisions: situations where a simple mapping is no longer enough.

This is where the inevitable question comes up:
if I can't eliminate the if, am I doomed to write dirty code?

The answer is no. Refactoring conditionals is not a sport where the winner is whoever has the fewest lines of code (if it were, we could just run a minifier :D), but whoever has the clearest code. Today's proposal goes beyond syntax. The goal is to stop writing defensive conditionals and start writing intent.

Let's look at how to apply Early Return, Encapsulation, and even the controversial Switch in a coherent way.

The Enemy: Defensive Programming

I had a professor in college who, while teaching programming logic, presented a rule that went beyond what is usually found in the literature. An isolated if is perfectly acceptable. Two, perhaps accompanied by an else, still only call for attention. But from the third onward it is a clear warning sign: the probability that the logic has been poorly modeled grows significantly.

In essence, the kind of if that pollutes code the most doesn't come from the complexity of the domain, but from distrust. It's the code that, before executing what really matters, needs to validate, check, and reconfirm a series of defensive conditions. The result is well known: Arrow Code, where the main logic gets lost amid growing levels of indentation and chained decisions.

// ❌ Defensive, nested code
function processarPedido(pedido) {
  if (pedido) {
    if (pedido.ativo) {
      if (pedido.itens.length > 0) {
        if (pedido.saldo >= pedido.total) {
           // Finally, the real logic...
           console.log("Processando...");
        }
      }
    }
  }
}

The problem here isn't the existence of validation, but the cognitive load. You have to read five lines of "noise" before you find the business logic.

The Cleanup: Early Return (Guard Clauses)

The first strategy for getting the house in order is the Early Return (also known as Guard Clauses).

The rule is simple: handle the exceptions first. If a condition prevents the code from running, return immediately. This eliminates the need for else and removes levels of indentation.

// ✅ Guard Clauses: cleaning up the flow
function processarPedido(pedido) {
  if (!pedido || !pedido.ativo) return;
  if (pedido.itens.length === 0) return;
  if (pedido.saldo < pedido.total) return;

  // O "Happy Path" fica livre e na raiz da função
  console.log("Processando...");
}

Muuuch better, right? But we're still reading implementation. We're reading saldo < total when we should be reading a business rule.

Writing Intent (Encapsulation)

Here is the real trick for mature code. If your if checks multiple variables or specific rules, don't expose that arithmetic in the main function. Give it a name.

Instead of writing code that inspects pieces of data, write code that asks whether a rule has been satisfied.

// ❌ Reading implementation
if (user.age >= 18 && user.hasLicense && !user.suspended) {
  rentCar();
}

// ✅ Reading intent (predicates)
const podeAlugarCarro = (user) => 
  user.age >= 18 && user.hasLicense && !user.suspended;

if (podeAlugarCarro(user)) {
  rentCar();
}

The if is still there. The processor will run the same comparison. But for whoever reads it (future you), the complexity has been abstracted away. You stop reading defensive code and start reading business intent.

In Defense of the Switch: Routing vs. Logic

Often the complexity isn't boolean (yes/no) but categorical (Type A, Type B, Type C). In those cases, the Object Map from the previous article is great, but what if we need complex logic for each type?

É aqui que o switch (ou cadeias de if) costuma virar um monstro de código espaguete, violando o princípio DRY (Don't Repeat Yourself).

The secret to using switch without guilt is understanding its purpose: it should be a Router, not a Processor.

Use the switch only as a Factory. It decides which strategy to use, while the execution of the logic stays isolated somewhere else (classes or functions).

The wrong way (coupled logic)

function calcularFrete(tipo, peso) {
  switch (tipo) {
    case 'SEDEX':
      // Heavy logic mixed in with the decision
      const taxa = obterTaxa();
      return (peso * taxa) + 10; 
    case 'PAC':
      // More logic mixed in...
      if (peso > 30) throw new Error('Peso limite');
      return peso * 5;
  }
}

The right way (Factory + Strategy)

Here, the switch is used only to pick the specialist.

// Business rules stay isolated (Strategy Pattern)
const FreteSedex = { calcular: (p) => (p * 10) + 10 };
const FretePAC   = { calcular: (p) => p * 5 };

// The switch is used ONLY to create/route (Factory)
const obterEstrategia = (tipo) => {
  switch (tipo) {
    case 'SEDEX': return FreteSedex;
    case 'PAC':   return FretePAC;
    default:      throw new Error('Tipo inválido');
  }
}

// Clean usage
const estrategia = obterEstrategia('SEDEX');
estrategia.calcular(10);

If and Switch Are Not the Villains

Eliminating ifs should not be a goal in itself, but a side effect of good design.

The problem arises when conditionals start compensating for the absence of structure, semantic names, and clear boundaries in the code. In that scenario, the if becomes a defense. The switch becomes a workaround. And the business logic gets lost along the way.

When decisions are well distributed (Object Maps for static mappings, Guard Clauses for validations, explicit business rules, and switches acting as routers), the code stops being reactive and becomes declarative.

You don't eliminate decisions.

You eliminate noise.

And that is the kind of code that remains readable even after the context of the problem is no longer fresh in your memory.

Full-Stack Development: The AI Evolution

2025-12-15 07:54:57


Are You Building on an Obsolete Roadmap?

Are you building a full-stack career on a roadmap that's already obsolete? The tech landscape doesn't wait for anyone, and the traditional definition of a 'full-stack developer' is rapidly disintegrating, giving way to something far more powerful, yet profoundly misunderstood.

The Paradox of Present-Day Mastery

For years, the full-stack path was clear: master a frontend framework (React, Vue), a backend language (Node, Python, Go), a database (PostgreSQL, MongoDB), and maybe dabble in cloud deployment. This was the blueprint for independent creation, the ultimate leverage for turning an idea into a product. But while many are still perfecting their API integrations or debating JavaScript frameworks, a seismic shift has occurred. AI isn't just a fancy tool to enhance your workflow; it's becoming an intrinsic layer of the stack itself.

Think about it. We're moving from a world where developers build logic to one where they command intelligence. Generative AI isn't just spitting out boilerplate code; it's crafting entire UI components, optimizing backend algorithms, and even orchestrating deployment pipelines. Your 'full-stack' expertise, without understanding how to integrate, prompt, and leverage these new intelligences, is like being a master carpenter in an age of automated construction. You might be excellent at your craft, but you're missing the future.

"The future of full-stack isn't just about building applications; it's about commanding intelligence within them."

The THINK ADDICT System: Building for the AI-Native Future

So, how do you adapt? You don't abandon the fundamentals; you augment them. This isn't about replacing your hard-earned skills but expanding your mental models and toolset to incorporate the greatest leverage multiplier we've seen in decades.

Here's the updated THINK ADDICT roadmap for the AI-Native Full-Stack Developer:

1. Solidify the Core Foundations (The 'Why' remains):

  • Frontend Mastery: Deep dive into a modern framework (React, Vue, Svelte). Understand component architecture, state management, and performance. But now, explore how generative AI can build these components faster, and how AI-driven tools can optimize user experience.
  • Backend Powerhouse: Choose a robust language (Node.js, Python, Go, Rust). Focus on API design, microservices, and scalability. Crucially, learn how to expose and consume AI services as part of your backend architecture.
  • Data Acumen: SQL and NoSQL databases are still critical. Add to this understanding data pipelines for ML models, vector databases, and how to prepare data for AI consumption.
  • Cloud & DevOps: Deploying to AWS, GCP, or Azure is non-negotiable. Now, integrate AI-driven monitoring, automated deployment scripts that leverage AI, and serverless functions optimized for AI inference.

2. Master the AI Integration Layer (The New 'How'):

  • AI Fundamentals: You don't need to be an ML scientist, but you should understand the basics of machine learning, neural networks, and especially Large Language Models (LLMs). Know their capabilities, limitations, and ethical considerations.
  • Prompt Engineering: This is the new API. Learn to craft effective prompts for code generation, debugging, testing, and even UI/UX ideation. It's about communicating effectively with intelligence.
  • API Integration: Become proficient at integrating powerful AI APIs (OpenAI, Gemini, Hugging Face). Learn how to fine-tune models for specific use cases and build AI-powered features into your applications.
  • Vector Databases & Embeddings: Crucial for building RAG (Retrieval Augmented Generation) systems, enabling your applications to interact with vast amounts of proprietary data intelligently (see the sketch below this list).

"Your ability to prompt, integrate, and orchestrate AI defines your leverage in the next decade."

This isn't about blindly following trends. It's about recognizing reality. The full-stack developer who thrives will be the one who sees AI not as a threat, but as an indispensable co-pilot, an amplifier of their own capabilities. Start small. Integrate an LLM into a personal project. Experiment. Build. The world is moving, and the only way to stay relevant is to keep evolving with it. Your skill stack isn't static; it's a living, breathing entity demanding constant upgrades.

"Don't just build with AI; build for an AI-driven future."

🚀 Upgrade Your Mindset

👉 JOIN THE SYSTEM

Visual by Think Addict System.

What NestJS Actually Is — A Simple, No-Fluff Explanation

2025-12-15 07:47:10

Alright, let's get down to basics.
NestJS is basically a TypeScript-first framework built on top of Node.js and Express. That's it. No magic. No hype. Just structure on top of tools we already know.

To understand why NestJS exists, you need to understand what came before it.

Node.js → JavaScript runtime

Runs JS outside the browser. Great for fast backend development.
But JS itself? Very quirky. No types. Easy to move fast, also easy to break everything accidentally.

Express → A simple server

Express made backend development stupidly easy. Tiny learning curve. Perfect for small projects, prototypes, hackathons.

But then…

When apps got bigger, everything got messy

As real-world apps became feature-heavy, codebases turned into spaghetti bowls:

  • No type guarantees
  • No enforced structure
  • Every dev invents their own folder layout
  • Business logic ends up mixed with routing
  • Regression bugs multiply
  • "Just add this new feature" becomes "hope nothing explodes"

Even adding TypeScript to Node didn’t fix the deeper problem.
TS gives you types, sure — but it doesn't give you architecture.

Node + TS still leaves you with:

  • No enforced boundaries
  • Too much flexibility
  • Teams writing code in completely different styles
  • Dependency chaos
  • No opinionated structure for large-scale apps

And that’s exactly where NestJS comes in.

NestJS: Node + Express, but grown-up

NestJS sits on top of Express (or Fastify), but adds real structure, real boundaries, and a consistent way to build apps — especially when multiple developers are involved.

The most important idea Nest brings is opinionated architecture.

Not optional.
Not “choose your own adventure.”
Actual structure.

Controllers + Services = Clean Separation

Nest enforces the Controller → Service pattern.

This quietly implements the Single Responsibility Principle in the background:

  • Controllers handle incoming requests
  • Services handle business logic
  • No mixing
  • No "let me put everything in one file" nonsense

And Nest breaks everything into modules.
Every controller, every service, every feature — all separated, all clean, all connected through one root module.

This alone already makes large codebases way easier to reason about.
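To make that concrete, here is a minimal sketch of the Controller → Service → Module split. The CatsController / CatsService names are invented for illustration; they are not from any real project:

import { Controller, Get, Injectable, Module } from '@nestjs/common';

@Injectable()
export class CatsService {
  // Business logic lives here, away from HTTP concerns
  findAll(): string[] {
    return ['Tom', 'Felix'];
  }
}

@Controller('cats')
export class CatsController {
  // Nest's DI container injects the service through the constructor
  constructor(private readonly catsService: CatsService) {}

  @Get()
  findAll(): string[] {
    return this.catsService.findAll();
  }
}

@Module({
  controllers: [CatsController],
  providers: [CatsService],
})
export class CatsModule {}

The controller only translates HTTP into method calls, the service owns the business logic, and the module wires the two together.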

Dependency Injection (DI) Done Right

Node is notorious for relying heavily on random NPM packages for everything.
Great for flexibility, also a giant security and maintenance headache.

Nest gives you:

  • Built-in dependency injection
  • Cleaner integrations
  • Fewer third-party landmines
  • More secure and predictable architecture

This means features plug in cleanly instead of ending up like the tangle of wires behind your TV.

Extra Nest Perks

Nest also brings in a lot of real-world development conveniences:

  • DTOs (Data Transfer Objects)
  • Pipes for validation
  • Providers
  • Guards
  • First-class testing support
  • CLI tools for scaffolding

Basically, everything you wish Express had out of the box.
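As a quick illustration of the DTOs + pipes items above, here is a hedged sketch assuming the common class-validator setup used with Nest's built-in ValidationPipe (the CreateUserDto shape is hypothetical):

import { Body, Controller, Post, UsePipes, ValidationPipe } from '@nestjs/common';
import { IsEmail, IsString, MinLength } from 'class-validator';

export class CreateUserDto {
  @IsString()
  @MinLength(3)
  name: string;

  @IsEmail()
  email: string;
}

@Controller('users')
export class UsersController {
  @Post()
  @UsePipes(new ValidationPipe({ whitelist: true }))
  create(@Body() dto: CreateUserDto): string {
    // Invalid payloads are rejected with a 400 before this code ever runs
    return `Created ${dto.name}`;
  }
}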

Why I’m Writing This Series

I’m publishing a series of simple NestJS guides to help people actually understand:

  • how NestJS works
  • how the architecture fits together
  • how TypeScript + Node + Nest can feel natural instead of overwhelming

It’s not going to be full of buzzwords or fake enterprise speak.
Just clean explanations, real fundamentals, and the bigger picture of how this ecosystem fits together.

If you're trying to understand this NestJS / TS / JS domain from the ground up, this series will make the whole thing click.

Want more no-fluff tech guides?

I publish clean, practical cloud and backend notes here:

https://ramcodesacadmey.gumroad.com

Check it out if you want simple explanations that actually make sense.

Why Mathematics Is Essential in Machine Learning

2025-12-15 07:43:23

Why Mathematics Is Essential in Machine Learning

(and why ignoring it always ends up causing problems)

Introduction — The Black Box Myth

Machine Learning is often presented as an essentially algorithmic discipline:
you load data, choose a model, train it, and “it works.”

This view is partly true, but fundamentally incomplete.

Behind every Machine Learning algorithm lie precise mathematical structures:

  • notions of distance
  • properties of continuity
  • assumptions of convexity
  • convergence guarantees
  • theoretical limits that no model can circumvent

👉 Modern Machine Learning is not an alternative to mathematics:
it is a direct application of it.

This article sets the general framework for the series: understanding why mathematical analysis is indispensable for understanding, designing, and mastering Machine Learning algorithms.

1. Machine Learning Is Primarily an Optimization Problem

At a fundamental level, almost all ML algorithms solve the same problem:

Minimize a loss function.

Formally, we search for parameters θ such that:

θ* = arg min_θ L(θ)

where L(θ) measures the model’s error on the data.

Behind this simple expression immediately arise essential mathematical questions:

  • What does it mean to minimize?
  • Does a minimum exist?
  • Is it unique?
  • Can it be reached numerically?
  • At what speed?

These questions are not algorithmic — they are mathematical.
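To make the framing concrete, here is a tiny sketch with made-up data: a one-parameter model y ≈ θ·x with a squared-error loss L(θ), a case simple enough that the minimizer has a closed form:

const xs = [1, 2, 3, 4];
const ys = [2.1, 3.9, 6.2, 7.8];

// L(θ): mean squared error of the model y = θ·x on the data
const loss = (theta: number): number =>
  xs.reduce((sum, x, i) => sum + (theta * x - ys[i]) ** 2, 0) / xs.length;

// For this convex quadratic loss, θ* = Σ xᵢyᵢ / Σ xᵢ² minimizes L
const thetaStar =
  xs.reduce((s, x, i) => s + x * ys[i], 0) / xs.reduce((s, x) => s + x * x, 0);

console.log(thetaStar, loss(thetaStar)); // ≈ 1.99 and ≈ 0.02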

2. Distance, Norms, and Geometry: Measuring Error Is Not Neutral

Before optimizing anything, a fundamental question must be answered:

How do we measure error?

This question leads directly to the notions of distance and norm.

Classic examples:

  • MAE (Mean Absolute Error) ↔ L¹ norm
  • MSE (Mean Squared Error) ↔ L² norm
  • Maximum error ↔ L∞ norm

These choices are not incidental:

  • they change the geometry of the problem
  • they affect robustness to outliers
  • they influence numerical stability
  • they impact gradient descent behavior

👉 Without understanding the geometry induced by a norm, one does not truly understand what the algorithm is optimizing.
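A small sketch with made-up residuals makes the correspondence explicit and shows how a single outlier dominates the L² error far more than the L¹ error:

const residuals = [0.5, -1.0, 0.2, 4.0]; // prediction minus target, with one outlier

const mae = residuals.reduce((s, r) => s + Math.abs(r), 0) / residuals.length; // L¹ view: 1.425
const mse = residuals.reduce((s, r) => s + r * r, 0) / residuals.length;       // L² view: 4.32
const maxError = Math.max(...residuals.map((r) => Math.abs(r)));               // L∞ view: 4.0

console.log(mae, mse, maxError);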

3. Convergence: When Can We Say an Algorithm Works?

A Machine Learning algorithm is often iterative:

θ₀ → θ₁ → θ₂ → …

This raises a crucial question:

Does this sequence converge? And if so, to what?

The answer depends on concepts from analysis:

  • sequences and limits
  • Cauchy sequences
  • completeness
  • continuity

Without these notions, it is impossible to answer very practical questions such as:

  • why training diverges
  • why it oscillates
  • why it is slow
  • why two implementations produce different results

4. Continuity, Lipschitz Conditions, and Stability

A Machine Learning model must be stable:

  • a small change in the data
  • a small change in the parameters
  • should not cause predictions to explode

This is precisely what is formalized by:

  • uniform continuity
  • Lipschitz functions

A function f is Lipschitz if:

|f(x) − f(y)| ≤ L |x − y|

This inequality lies at the core of:

  • model stability
  • learning rate selection
  • convergence guarantees for gradient descent

👉 The Lipschitz constant is not a theoretical detail:
it directly controls the speed and stability of learning.
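A toy example illustrates this. For L(θ) = θ², the gradient 2θ is Lipschitz with constant L = 2, and gradient descent θ ← θ − η·∇L(θ) is stable only while the step size η stays below 2 / L:

const grad = (theta: number): number => 2 * theta; // ∇L for L(θ) = θ²

function run(eta: number, steps = 20): number {
  let theta = 5;
  for (let i = 0; i < steps; i++) {
    theta -= eta * grad(theta);
  }
  return theta;
}

console.log(run(0.1)); // ≈ 0.06: converges smoothly toward the minimum at 0
console.log(run(0.9)); // ≈ 0.06, after oscillating in sign on every step (η still below 2/L)
console.log(run(1.1)); // ≈ 192: diverges, because the step size exceeds 2/L = 1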

5. Convexity: Why Some Problems Are Easy… and Others Are Not

Convexity is arguably the most important mathematical property in optimization.

A convex function has:

  • a unique global minimum
  • no traps in the form of local minima

This is why:

  • linear regression
  • support vector machines
  • certain regularization problems

benefit from strong theoretical guarantees.

By contrast:

  • deep neural networks are non-convex
  • yet still work thanks to particular structures and effective heuristics

👉 Understanding convexity makes it possible to know when guarantees exist — and when they do not.
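A minimal sketch (toy functions, purely illustrative) shows the difference: on a convex quadratic every starting point reaches the same minimum, while on a non-convex double-well function gradient descent lands in different minima depending on where it starts:

// Generic gradient descent on a one-dimensional function
const descend = (grad: (t: number) => number, start: number, eta = 0.05, steps = 500): number => {
  let theta = start;
  for (let i = 0; i < steps; i++) theta -= eta * grad(theta);
  return theta;
};

const gradConvex = (t: number) => 2 * t;                        // f(θ) = θ², unique global minimum at 0
const gradNonConvex = (t: number) => 4 * t * (t * t - 1) + 0.3; // f(θ) = (θ² − 1)² + 0.3θ, two minima

console.log(descend(gradConvex, 5), descend(gradConvex, -5)); // both ≈ 0
console.log(descend(gradNonConvex, 0.5));  // ≈ 0.96: stuck in the local minimum
console.log(descend(gradNonConvex, -0.5)); // ≈ -1.04: reaches the global minimum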

6. Theory vs Practice: What Mathematics Guarantees (and What It Does Not)

A crucial point to understand from the outset:

Mathematics guarantees properties, not miraculous performance.

It can tell us:

  • whether a solution exists
  • whether it is unique
  • whether an algorithm converges
  • how fast it converges

It cannot guarantee:

  • good data
  • good generalization
  • an unbiased model

But without it, we proceed blindly.

Conclusion — Understand Before You Optimize

Modern Machine Learning rests on three fundamental mathematical pillars:

  1. Geometry (norms, distances)
  2. Analysis (continuity, convergence, Lipschitz conditions)
  3. Optimization (convexity, gradient descent)

Ignoring these foundations amounts to:

  • applying recipes without understanding their limits
  • misdiagnosing failures
  • overcomplicating simple problems

👉 Understanding the mathematical analysis of Machine Learning is not theory for theory’s sake:
it is about gaining control, robustness, and intuition.

Reginald Victor aka Lezeta