RSS preview of Blog of The Practical Developer

A constructive and inclusive social network for software developers.

Handling Concurrency in Java - Pessimistic Locking

2025-11-17 09:09:18

Assuming that concurrent access will happen in a multithreaded application, in this post I'll describe the pessimistic lock in a simple, intuitive way. It is a mechanism for controlling access to shared resources, i.e., for when simultaneous threads access the same record. 🚀

It's also important to understand the optimistic locking strategy and analyze which one fits your problem best.

Concurrency

Well, imagine a multithreaded application: the same resource can be accessed simultaneously by different threads.

Threads are parallel flows of execution running inside your program, and each one may be "competing" for the same piece of data.

Bringing it to real life, it's as if you, the reader, were trying to book seat E10 for the Fast & Furious showing on 11/16 at 7:00 PM while Bento, at the very same instant, is making the same booking, for the same seat, at the same showing. In other words, you are both "competing" for the same database record. That's a classic example of concurrency.

To solve this we have several alternatives, and one of them is the use of locks.

🔒 What are Pessimistic and Optimistic Locks?

In a nutshell, an optimistic lock assumes conflicts are rare, while a pessimistic lock assumes conflicts are common.

Okay, but what does that mean? It means each one uses a different strategy.

The optimistic lock only checks for conflicts when the record is updated. So, suppose I'm updating a record at version 2: if, at commit time, the lock sees that version 3 already exists, my copy is no longer the latest one and the update stops.

For example, if you started editing at version 2 but another transaction has already produced version 3, your update will be rejected (very abstract, I know; writing about locks isn't easy, but in the next article we'll dig deeper into the optimistic lock).
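Just to make the idea concrete, here is a minimal sketch of what the optimistic side usually looks like in JPA (assuming a hypothetical Pedido entity; the next article will go into detail):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Pedido {

    @Id
    private Long id;

    private String status;

    // JPA compares and increments this column at commit time.
    // If another transaction has already bumped it, the update fails with an
    // OptimisticLockException instead of silently overwriting the newer data.
    @Version
    private Long version;
}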

Pessimistic Lock

Here we assume that conflicts are common, which is why the pessimistic lock "locks" the record.

We can agree that two people trying to make the same reservation at the movie theater is common, right? Well... then let's add a pessimistic lock to that system.

This means that from now on, whenever a piece of data (seat E10 for the Fast & Furious showing on 11/16 at 7:00 PM) is being disputed (by you and Bento), we lock access to that database row until whoever got there first finishes the operation they are trying to perform.

The image below shows, at a high level, how the lock works.
Assuming Bento started the reservation first, you will only be able to modify (reserve) that seat once Bento finishes or gives up on his process.

(Image: lock-pessimista-flow)

And in Java, using JPA, it would look like this:

@Transactional
public void processarPedido(Long pedidoId) {
    // 1. The transaction starts here. The connection pool lends us a connection.

    // 2. We ask for the PESSIMISTIC_WRITE lock
    Pedido pedido = entityManager.find(
        Pedido.class,
        pedidoId,
        LockModeType.PESSIMISTIC_WRITE
    );

    // 3. JPA translates this into SQL:
    // "SELECT * FROM pedidos WHERE id = ? FOR UPDATE"
    // The database NOW locks this row.

    // 4. NO other transaction can write to (or read with FOR UPDATE)
    // this row. They will wait in line.

    // ... our business logic runs here ...
    pedido.setStatus("PROCESSADO");

    // 5. The transaction COMMITs.
    // The lock is FINALLY released. The connection goes back to the pool.
}

It's important to note that the scarcest resource in your application isn't CPU or memory; it's the database connection pool.
The pessimistic lock holds that connection for the whole lifetime of the transaction. If your "business logic" is slow (e.g., it calls an external API), your application will grind to a halt.
That's why it's important to work on your application's architecture and not let the transaction sit there waiting for external responses.
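As a minimal sketch of that idea (PedidoFacade, PedidoService and PagamentoClient are hypothetical names, not part of the original example), keep the slow external call outside the transaction and only then run the short, locked method shown above:

import org.springframework.stereotype.Service;

@Service
public class PedidoFacade {

    private final PedidoService pedidoService;      // owns the @Transactional processarPedido(...) above
    private final PagamentoClient pagamentoClient;  // hypothetical client for an external API

    public PedidoFacade(PedidoService pedidoService, PagamentoClient pagamentoClient) {
        this.pedidoService = pedidoService;
        this.pagamentoClient = pagamentoClient;
    }

    public void confirmarPedido(Long pedidoId) {
        // 1. Slow external call first: no lock is held and no pooled connection is borrowed yet.
        pagamentoClient.autorizarPagamento(pedidoId);

        // 2. Only now do we open the short, locked transaction from the previous snippet.
        pedidoService.processarPedido(pedidoId);
    }
}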

JPA Extras (LockModeType)

LockModeType is not just one thing. The JPA specification gives us options, and the wrong choice has consequences.

PESSIMISTIC_WRITE (The Exclusive Lock)

  • SQL: SELECT ... FOR UPDATE (in most dialects).

  • What it does: Prevents other transactions from running SELECT ... FOR UPDATE AND prevents them from running UPDATE or DELETE. It is a fully exclusive lock.

  • When to use it: This is the default choice. You are going to read the row and you are definitely going to write to it.

PESSIMISTIC_READ (The Shared Lock... sometimes)

  • SQL: SELECT ... FOR SHARE (e.g., PostgreSQL/MySQL 8+) or ... LOCK IN SHARE MODE (older MySQL).

  • What it does: Prevents other transactions from running UPDATE or DELETE, but lets other transactions also read with PESSIMISTIC_READ.

  • When to use it: A rarer scenario. You want to guarantee the data doesn't change while you read it, but you know others may be reading it at the same time with no intention of writing.

PESSIMISTIC_FORCE_INCREMENT (The Hybrid)

  • What it does: Acquires a pessimistic lock (FOR UPDATE) and, on top of that, forces an increment of the @Version column (the same one used by optimistic locking), even if you don't change any other field.

  • When to use it: Useful when you need to invalidate caches or "signal" to optimistic readers that something changed, while still guaranteeing it pessimistically (see the sketch below).
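For reference, a minimal sketch of requesting these modes, reusing the entityManager and the Pedido entity from the earlier example:

// Shared lock: other transactions may also read with PESSIMISTIC_READ,
// but none of them can UPDATE or DELETE the row until we commit.
Pedido paraConferencia = entityManager.find(
    Pedido.class, pedidoId, LockModeType.PESSIMISTIC_READ);

// Exclusive lock plus a forced bump of the @Version column,
// even if no other field is changed.
Pedido paraSinalizar = entityManager.find(
    Pedido.class, pedidoId, LockModeType.PESSIMISTIC_FORCE_INCREMENT);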

Extra Tips:

  1. Consider configuring a timeout on your lock
Map<String, Object> properties = new HashMap<>();
// Sets the timeout in milliseconds.
// "javax.persistence.lock.timeout" = 0 (do not wait, fail immediately)
// "javax.persistence.lock.timeout" = 5000 (wait 5 seconds and throw LockTimeoutException)
properties.put("javax.persistence.lock.timeout", 5000);

Pedido pedido = entityManager.find(Pedido.class, id, LockModeType.PESSIMISTIC_WRITE, properties);
  2. If you opt for pessimistic locking, your transaction must be surgical. It should be extremely fast, perform no external I/O, and have a plan for handling deadlocks (lock ordering; see the sketch below) and timeouts.
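A minimal sketch of the lock-ordering idea (again with the hypothetical Pedido entity): when a single transaction needs to lock several rows, always acquire the locks in the same order, for example by ascending id, so two transactions can never end up waiting on each other in opposite directions.

@Transactional
public void processarPedidos(List<Long> pedidoIds) {
    // Locking in a consistent order prevents the classic
    // "A waits for B while B waits for A" deadlock.
    List<Long> ordenados = pedidoIds.stream().sorted().toList();

    for (Long id : ordenados) {
        Pedido pedido = entityManager.find(Pedido.class, id, LockModeType.PESSIMISTIC_WRITE);
        pedido.setStatus("PROCESSADO");
    }
    // All locks are released together when the transaction commits.
}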

Useful links 🔗

Orchestrating Complex Processes in Node.js with @jescrich/nestjs-workflow

2025-11-17 09:05:15

Modern backend systems are no longer simple request/response pipelines.
They orchestrate payments, onboarding, document validation, long-running tasks, integrations with external vendors, and multi-step processes that must never end in partial failure.

And yet… most Node.js applications still try to manage this complexity with:

giant service classes

boolean flags in the database

magic strings like "pending" | "processing" | "done"

ad-hoc Saga implementations

hand-rolled state machines

That’s why I built nestjs-workflow — a lightweight, declarative workflow engine for NestJS that helps you structure multi-step business processes with clarity, resiliency, and observability.

🚀 Why nestjs-workflow?

Because every real system eventually needs workflows.

When you’re building microservices, event-driven systems, or anything that depends on external APIs, you need:

State transitions (from “received” → “validated” → “processed” → “completed”)

Retries & compensation when external calls fail

Idempotency

Persistence of state

Visibility into where the process is stuck

In enterprise systems (fintech, ecommerce, LOS/OMS integrations, KYC flows, etc.), this becomes even more important.

nestjs-workflow gives you all of that without turning your project into a distributed-systems PhD.

🧩 A Declarative Workflow, Not a Mess

Here’s what a workflow looks like with the library:

@Workflow()
export class OnboardingWorkflow extends WorkflowBase {
  definition = {
    id: 'onboarding',
    initial: 'received',
    states: {
      received: {
        on: {
          VALIDATE: 'validating',
        },
      },
      validating: {
        invoke: async (ctx) => this.validateUser(ctx),
        on: {
          SUCCESS: 'completed',
          FAILURE: 'failed',
        },
      },
      failed: {},
      completed: {},
    },
  };
}

Clean. Explicit. Testable.
Your workflow logic is the source of truth, not scattered across your services.

🔌 Plugged Into NestJS the Right Way

nestjs-workflow integrates seamlessly:

✔️ Providers & Dependency Injection

Inject services, repositories, and external clients directly into your workflow.

✔️ Persistence Layer

Use memory, Redis, SQL, or your own implementation.

✔️ Event Emitters

React to workflow transitions, notify other services, or publish Kafka messages.

✔️ Hooks for Observability

Perfect for platforms like Datadog, New Relic, or WHAWIT.

⚙️ Real-World Example: A Payment Flow
await this.workflowService.send('payment', orderId, 'AUTHORIZE');

Behind the scenes:

State moves from created → authorizing

The workflow calls an external PSP

If it fails, retries happen

If it still fails, compensation logic runs

Workflow transitions to failed

Your system stays consistent

This is the power of structured orchestration.

🏢 Designed From Real Enterprise Problems

I originally built this library after hitting the same issues repeatedly while building:

event-driven ecommerce platforms

financial onboarding pipelines

data ingestion engines

vendor adapter layers

long-lived loan origination flows (LOS)

background processors for Kafka/Flink

Node.js needed something opinionated but flexible — a workflow engine that Team Leads and Architects could adopt without a massive learning curve.

nestjs-workflow is intentionally simple, predictable, and battle-tested in production environments.

📦 Explore the Examples

If you want to see real, runnable use cases, check the examples repo:

👉 https://github.com/jescrich/nestjs-workflow


You’ll find:

Saga examples

E2E workflows

External service orchestration

Error-handling patterns

Kafka + workflow patterns

Complex state machines with branching logic

🎯 Final Thoughts

If your NestJS application has:

business processes with multiple steps

integrations that may fail or require retries

state transitions

workflows that need transparency

or just too much chaos in service layers…

…then you’ll benefit from nestjs-workflow.

It gives teams a clean, maintainable way to orchestrate complexity, brings structure to long-running processes, and avoids the hidden traps of “just manually coding it.”

If you build something with it, tag me — I’m always curious to see how others push workflow engines to new places.

Quantum-Inspired Encoding: A Leap in Offline Reinforcement Learning

2025-11-17 09:02:08


Imagine training a robot to navigate a complex environment, but only getting 100 chances to try. Or teaching an AI model to make critical decisions based on tiny, fragmented datasets. The challenge? Traditional reinforcement learning (RL) struggles with limited data.

We've been exploring a novel approach: transforming the raw data into a more insightful representation before feeding it to the RL algorithm. Think of it like compressing a large image file without losing the important details. The key is a quantum-inspired encoding that reshapes the data, making patterns clearer and decisions easier to learn, even with sparse information.

This encoding method, inspired by quantum computing principles but fully functional on classical machines, maps states into a new space where geometric properties are optimized for reinforcement learning. By training on these encoded states, and decoding the resulting rewards, we've seen dramatic improvements in offline RL performance.

Benefits:

  • Significant Performance Boost: Achieved over 100% improvement in reward attainment compared to training directly on raw data, even with severely limited datasets.
  • Data Efficiency: Unlock effective RL training, even when sample size is drastically reduced.
  • Improved Generalization: The altered data landscape allows models to better generalize from limited experiences to unseen scenarios.
  • Enhanced Stability: Transforms the geometric structure of the learning space, leading to more consistent and stable learning.
  • Simplicity: Can be easily integrated with existing RL frameworks like Soft Actor-Critic (SAC) and Implicit Q-Learning (IQL).

One key implementation challenge lies in selecting the optimal encoding parameters. This requires careful tuning and may benefit from automated hyperparameter optimization techniques. A good analogy is sculpting a clay model – the initial shape greatly influences the final form. This method can be used for optimizing energy consumption in buildings and other autonomous resource management systems, where real world tests are costly.

This quantum-inspired encoding unlocks a new paradigm for offline RL, paving the way for more robust and efficient AI systems. The ability to learn effectively from limited data opens doors to applications in robotics, autonomous systems, and decision-making scenarios where real-world interactions are expensive or dangerous. Future research might explore adaptive encoding methods, where the transformation adjusts dynamically during the learning process, further enhancing data efficiency and model performance.

Related Keywords: Offline RL, Batch Reinforcement Learning, Quantum Metric, Quantum Embedding, Metric Learning, Representation Learning, AI for Robotics, Autonomous Systems, Data Efficiency, Sample Efficiency, Simulated Environments, Real-World Applications, Quantum Algorithms, Kernel Methods, Distance Metric Learning, AI Safety, Explainable AI, Quantum Reinforcement Learning, Off-Policy Learning, Supervised Learning

Maximize SDK Integration: Monetize Your AI Conversations

2025-11-17 08:58:52

Traditional Ads Don't Work in AI Conversations. Here's What Does.

As developers, we know that the AI landscape is booming. With countless applications emerging, the challenge lies in monetization without sacrificing user experience. Enter Monetzly, a game-changing platform that positions itself as the Google Ads for AI conversations, paving the way for sustainable AI innovation.

Imagine you’ve created an innovative AI app. Users love it, but how do you monetize without hitting them with a subscription or paywall? Monetzly is the first dual-earning platform specifically tailored for AI applications, allowing you to monetize your app while also earning revenue by hosting relevant ads. This is a win-win for developers, advertisers, and users alike.

The Innovation: A Conversation-Native Advertising Marketplace

Traditional advertising methods often disrupt user engagement, especially in conversation-driven environments. Monetzly tackles this head-on with its unique advertiser marketplace that allows advertisers to reach AI app users in a more native and seamless way.

How It Works:

  1. Contextual Matching: Our AI-powered platform intelligently matches ads to conversations, ensuring that the ads are contextually relevant to what users are discussing. This increases engagement and reduces ad fatigue.

  2. Developer-First Approach: As a developer, you can integrate Monetzly's SDK in just five minutes. This simplicity means you can focus on what you do best—building great applications—while we handle the monetization aspect.

  3. Dual Revenue Streams: Not only do you earn from app usage, but you also generate additional income by displaying relevant ads. This dual-earning model is a game-changer, making it easier to sustain your innovation without resorting to disruptive monetization tactics.

Why Developers Should Act Now

  • Monetization Made Easy: Say goodbye to complicated subscription models. With Monetzly, you can monetize your app effortlessly while keeping user experience intact.
  • Engaged User Base: Advertisers gain access to a highly engaged audience—your users. This means they’re more likely to see ads that are relevant to them, leading to higher conversion rates.
  • Revenue Sharing Model: Our revenue-sharing framework ensures that all parties benefit. As you host relevant ads, you get a percentage of the ad revenue, creating a sustainable income stream.

Join the Conversation

The future of AI application monetization is here, and it’s built on the premise of collaboration between developers and advertisers. Ready to take your AI app to the next level?

Explore how Monetzly can transform your monetization strategy and help you foster sustainable innovation. Check out more at Monetzly.

Let’s create a thriving ecosystem where developers can innovate freely, advertisers can engage effectively, and users can enjoy a seamless experience.

#ai #webdev #startup #monetization #developer

Feel free to share your thoughts or experiences with monetizing AI applications in the comments below. Your insights could be invaluable to our community!

Domain Events: Turning Changes into Opportunities

2025-11-17 08:53:30

Introduction

What if your code could announce when something important happens, instead of you having to manually wire up every interested system? What if adding new behavior didn't require modifying existing code?

Welcome to the world of Domain Events!

The Problem: Cascading Coupling

Imagine you need to implement: "When a customer is approved, send an email and notify sales."

Naive Approach (coupled)

@Service
public class CustomerService {
    @Autowired private EmailService emailService;
    @Autowired private SalesNotificationService salesService;

    public void approveCustomer(UUID customerId) {
        Customer customer = repository.findById(customerId);
        customer.setStatus(APPROVED);
        repository.save(customer);

        emailService.sendApprovalEmail(customer);
        salesService.notifySalesTeam(customer);
    }
}

Problems:

  1. Direct coupling: the service knows EmailService and SalesNotificationService
  2. SRP violation: approving the customer + sending the email + notifying sales = 3 responsibilities
  3. Hard to test: you need to mock all the services
  4. Rigidity: adding a new behavior = modifying existing code
  5. Cascading failure: if the email fails, the approval fails too

Now imagine adding:

  • Sending an SMS
  • Updating analytics
  • Notifying an external system
  • Generating an event in the data warehouse
  • Creating a task in the CRM

Your approveCustomer() turns into a 50-line monster with 10 dependencies!

Domain Events: The Elegant Solution

Approach with Domain Events

public class Customer extends AggregateRoot {
    public void approve() {
        if (this.status != CustomerStatus.PENDING) {
            throw new IllegalArgumentException("Customer status is not pending");
        }

        this.status = CustomerStatus.APPROVED;
        this.updatedAt = Instant.now();

        // 🎯 Just announce what happened!
        this.recordDomainEvent(new CustomerApproved(this.id()));
    }
}

Advantages:

  • Zero coupling: the domain doesn't know who will react
  • SRP preserved: it only changes state and announces it
  • Easy to test: nothing needs to be mocked
  • Open/Closed: add listeners without modifying the domain
  • Failure isolation: a listener can fail without the aggregate failing

Anatomy of a Domain Event

Base Interface

package com.github.thrsouza.sauron.domain;

import java.time.Instant;
import java.util.UUID;

public interface DomainEvent {
    UUID eventId();            // Unique identifier of the event
    String eventType();        // Type/name of the event
    Instant eventOccurredAt(); // When it happened
}

Design decisions:

  • eventId: traceability and idempotency (see the sketch below)
  • eventType: routing (the Kafka topic name)
  • eventOccurredAt: auditing and temporal ordering
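For example, a minimal sketch of how eventId can back idempotent consumption (ProcessedEventRepository is a hypothetical store of already-handled event ids; CustomerCreated is the record defined in the next snippet):

@Component
public class IdempotentCustomerCreatedHandler {

    private final ProcessedEventRepository processedEvents; // hypothetical store of handled event ids

    public IdempotentCustomerCreatedHandler(ProcessedEventRepository processedEvents) {
        this.processedEvents = processedEvents;
    }

    public void handle(CustomerCreated event) {
        // A redelivered message carries the same eventId, so duplicates can be skipped safely.
        if (processedEvents.alreadyProcessed(event.eventId())) {
            return;
        }

        // ... do the actual work (evaluate the customer, send an email, etc.) ...

        processedEvents.markProcessed(event.eventId());
    }
}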

Implementation with Records

package com.github.thrsouza.sauron.domain.customer.events;

import java.time.Instant;
import java.util.UUID;

import com.github.thrsouza.sauron.domain.DomainEvent;

public record CustomerCreated(
    UUID eventId,
    UUID customerId,
    Instant eventOccurredAt
) implements DomainEvent {

    // Convenience constructor
    public CustomerCreated(UUID customerId) {
        this(UUID.randomUUID(), customerId, Instant.now());
    }

    @Override
    public String eventType() {
        return "sauron.customer-created";
    }
}

Why records?

  • Immutable by design: events never change
  • Value semantics: equals() and hashCode() come automatically
  • Serializable: JSON for free with Jackson
  • Readable: concise, clear syntax

A Family of Events

// Creation event
public record CustomerCreated(UUID eventId, UUID customerId, Instant eventOccurredAt) 
    implements DomainEvent {

    public CustomerCreated(UUID customerId) {
        this(UUID.randomUUID(), customerId, Instant.now());
    }

    @Override
    public String eventType() {
        return "sauron.customer-created";
    }
}

// Approval event
public record CustomerApproved(UUID eventId, UUID customerId, Instant eventOccurredAt) 
    implements DomainEvent {

    public CustomerApproved(UUID customerId) {
        this(UUID.randomUUID(), customerId, Instant.now());
    }

    @Override
    public String eventType() {
        return "sauron.customer-approved";
    }
}

// Rejection event
public record CustomerRejected(UUID eventId, UUID customerId, Instant eventOccurredAt) 
    implements DomainEvent {

    public CustomerRejected(UUID customerId) {
        this(UUID.randomUUID(), customerId, Instant.now());
    }

    @Override
    public String eventType() {
        return "sauron.customer-rejected";
    }
}

A consistent pattern:

  • Same convenience constructor
  • Same eventType format
  • Minimal payloads (IDs only)

Generating Events in the Aggregate

AggregateRoot Base Class

package com.github.thrsouza.sauron.domain;

import java.util.ArrayList;
import java.util.List;

public abstract class AggregateRoot {
    private final transient List<DomainEvent> domainEvents = new ArrayList<>();

    protected void recordDomainEvent(DomainEvent domainEvent) {
        this.domainEvents.add(domainEvent);
    }

    public List<DomainEvent> pullDomainEvents() {
        List<DomainEvent> copyOfDomainEvents = List.copyOf(this.domainEvents);
        this.domainEvents.clear();
        return copyOfDomainEvents;
    }
}

Design patterns applied:

  1. Transient: events are not persisted with the aggregate
  2. Protected: only the aggregate can record events
  3. Pull pattern: whoever persists the aggregate pulls the events
  4. Defensive copying: List.copyOf() returns an immutable list
  5. Clear after pull: avoids duplicate publication (see the quick test below)
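A quick sketch of a test for items 4 and 5, using the Customer factory shown in the next section (JUnit 5 assertions assumed; the sample data is made up):

@Test
void pullDomainEventsReturnsEventsOnlyOnce() {
    Customer customer = Customer.create("12345", "John", "john@example.com"); // sample data

    // The first pull returns the CustomerCreated event recorded by the factory...
    assertEquals(1, customer.pullDomainEvents().size());

    // ...and clears the internal list, so a second pull comes back empty.
    assertTrue(customer.pullDomainEvents().isEmpty());
}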

Recording Events

public class Customer extends AggregateRoot {

    public static Customer create(String document, String name, String email) {
        UUID id = UUID.randomUUID();
        Instant now = Instant.now();

        Customer customer = new Customer(id, document, name, email, 
                                         CustomerStatus.PENDING, now, now);

        // 🎯 Record the creation event
        customer.recordDomainEvent(new CustomerCreated(customer.id()));

        return customer;
    }

    public void approve() {
        validateCanApprove();

        this.status = CustomerStatus.APPROVED;
        this.updatedAt = Instant.now();

        // 🎯 Record the approval event
        this.recordDomainEvent(new CustomerApproved(this.id()));
    }

    public void reject() {
        validateCanReject();

        this.status = CustomerStatus.REJECTED;
        this.updatedAt = Instant.now();

        // 🎯 Record the rejection event
        this.recordDomainEvent(new CustomerRejected(this.id()));
    }
}

Notice:

  • Events are recorded after the state change
  • Events describe what happened (past tense)
  • No business logic inside the events

Publishing Events

In the Use Case

public class CreateCustomerUseCase {
    private final CustomerRepository customerRepository;
    private final DomainEventPublisher domainEventPublisher;

    public Output handle(Input input) {
        // Create the aggregate
        Customer customer = Customer.create(input.document(), input.name(), input.email());

        // Persist it
        customerRepository.save(customer);

        // 🎯 Pull and publish the events
        domainEventPublisher.publishAll(customer.pullDomainEvents());

        return new Output(customer.id());
    }
}

Padrão "Pull and Publish":

  1. Aggregate gera eventos durante operações
  2. Use case recupera eventos do aggregate
  3. Use case delega publicação ao publisher
  4. Publisher envia para infraestrutura (Kafka, RabbitMQ, etc.)

DomainEventPublisher (Interface)

package com.github.thrsouza.sauron.domain;

import java.util.Collection;

public interface DomainEventPublisher {
    void publishAll(Collection<DomainEvent> events);
}

Interface in the domain, implementation in the infrastructure!

Kafka Implementation

@Component
public class DomainEventPublisherAdapter implements DomainEventPublisher {
    private final KafkaTemplate<String, Object> kafkaTemplate;

    @Override
    public void publishAll(Collection<DomainEvent> events) {
        events.forEach(this::publish);
    }

    private void publish(DomainEvent event) {
        String topic = event.eventType(); // The topic name comes from the event itself!

        kafkaTemplate.send(topic, event)
            .whenComplete((result, exception) -> {
                if (exception != null) {
                    log.error("❌ Failed to publish {} to topic {}", 
                             event.getClass().getSimpleName(), topic, exception);
                } else {
                    log.info("📤 Published {} to topic {} (partition: {}, offset: {})",
                            event.getClass().getSimpleName(),
                            topic,
                            result.getRecordMetadata().partition(),
                            result.getRecordMetadata().offset());
                }
            });
    }
}

Observability:

  • Structured logs for debugging
  • Partition and offset information
  • Centralized error handling
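For completeness, a minimal sketch of how the KafkaTemplate<String, Object> used above could be wired to serialize events as JSON (assuming spring-kafka with its JsonSerializer and a broker on localhost:9092; adjust to your setup):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}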

Consuming Events

Event Listeners

@Component
public class CustomerEventListener {
    private final EvaluateCustomerUseCase evaluateCustomerUseCase;

    @KafkaListener(topics = "sauron.customer-created", 
                   groupId = "${spring.kafka.consumer.group-id}")
    public void handleCustomerCreated(@Payload CustomerCreated event) {
        log.info("📥 Received CustomerCreated event - CustomerId: {}", 
                 event.customerId());

        try {
            evaluateCustomerUseCase.handle(new Input(event.customerId()));
            log.info("✅ Successfully processed CustomerCreated - CustomerId: {}", 
                     event.customerId());
        } catch (Exception e) {
            log.error("❌ Error processing CustomerCreated - CustomerId: {}", 
                      event.customerId(), e);
            throw e; // Rethrow so the message is retried
        }
    }

    @KafkaListener(topics = "sauron.customer-approved", ...)
    public void handleCustomerApproved(@Payload CustomerApproved event) {
        log.info("📥 Received CustomerApproved - CustomerId: {}", 
                 event.customerId());
        // Here: send email, notify sales, etc.
    }

    @KafkaListener(topics = "sauron.customer-rejected", ...)
    public void handleCustomerRejected(@Payload CustomerRejected event) {
        log.info("📥 Received CustomerRejected - CustomerId: {}", 
                 event.customerId());
        // Here: rejection email, analytics, etc.
    }
}

Real Benefits

1. Auditing for Free

Every event is logged and persisted in Kafka:

# List a customer's events
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic sauron.customer-created --from-beginning | grep "customerId: 123"

kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic sauron.customer-approved --from-beginning | grep "customerId: 123"

A complete history of what happened, when, and why!

2. Distributed Traceability

1. [14:23:45.123] CustomerCreated published - eventId: abc-123
2. [14:23:45.150] CustomerCreated received - eventId: abc-123
3. [14:23:50.200] CustomerApproved published - eventId: def-456
4. [14:23:50.220] CustomerApproved received - eventId: def-456

Event correlation across distributed systems!

3. Extensibility Without Modification

Before (without events):

// To add analytics, you modify the existing code
public void approveCustomer(UUID customerId) {
    // ... existing code
    emailService.send(...);
    salesService.notify(...);
    analyticsService.track(...); // ← new line
}

After (with events):

// Existing code stays untouched!
@KafkaListener(topics = "sauron.customer-approved")
public void handleCustomerApproved(CustomerApproved event) {
    analyticsService.track(event); // ← new, separate listener
}

The Open/Closed Principle in action!

4. Improved Testability

@Test
void shouldRecordCustomerApprovedEvent() {
    // Given
    Customer customer = Customer.create("12345", "John", "[email protected]");

    // When
    customer.approve();

    // Then
    List<DomainEvent> events = customer.pullDomainEvents();
    assertEquals(2, events.size()); // CustomerCreated + CustomerApproved
    assertTrue(events.get(1) instanceof CustomerApproved);
}

Testing events is trivial. No mocks, no Spring!

Patterns and Best Practices

1. Event Naming (Past Tense)

// ✅ CORRECT: verbs in the past tense
CustomerCreated
OrderShipped
PaymentProcessed
InvoiceGenerated

// ❌ ERRADO: Verbos no imperativo/presente
CreateCustomer
ShipOrder
ProcessPayment
GenerateInvoice

Events describe facts that have already happened.

2. Minimal Payload

// ✅ RECOMMENDED: references only (IDs)
public record CustomerApproved(UUID customerId) {}

// ⚠️ NOT RECOMMENDED: full data (schema coupling)
public record CustomerApproved(
    UUID customerId,
    String name,
    String email,
    String document,
    Address address,
    CreditScore creditScore
) {}

Why?

  • Reduces coupling between producers and consumers
  • Consumers fetch the data in whatever version/shape they need (sketched below)
  • Smaller messages = better performance
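A minimal sketch of what that looks like on the consumer side (customerRepository and emailService are hypothetical collaborators of the listener):

@KafkaListener(topics = "sauron.customer-approved")
public void handleCustomerApproved(@Payload CustomerApproved event) {
    // The event only carries the id; load the current state in whatever shape this consumer needs.
    Customer customer = customerRepository.findById(event.customerId());
    emailService.sendApprovalEmail(customer);
}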

3. Events Are Immutable

// ✅ RECOMMENDED: a record (immutable)
public record CustomerCreated(UUID customerId) {}

// ⚠️ NOT RECOMMENDED: a mutable class
public class CustomerCreated {
    private UUID customerId; // Can be changed!

    public void setCustomerId(UUID id) {
        this.customerId = id;
    }
}

Events are historical facts. They don't change!

4. Well-Defined Events

// ✅ RECOMMENDED: specific events
CustomerApproved
CustomerRejected

// ⚠️ NOT RECOMMENDED: a generic event
CustomerStatusChanged(UUID customerId, CustomerStatus newStatus)

Specific events are more semantic and self-documenting.

When to Use Domain Events

✅ Use them when:

  • Multiple systems need to react to changes
  • You want a complete history/audit trail
  • You need decoupling between modules
  • You want extensibility without modifying existing code
  • Eventual consistency is acceptable

❌ Avoid them when:

  • Strong consistency is mandatory (ACID)
  • A synchronous response is required
  • The system is very simple (it would be overkill)
  • The team has no experience with async processing

Conclusion

Instead of thinking "what do I need to do now?", think "what just happened?". Your aggregates become more focused, your code more testable, and your system more flexible.

Resources

Terraform Basics Week 3: Managing Variables with tfvars Files

2025-11-17 08:44:59

Table of Contents

1. Recap of Week 2

2. What is a tfvars file, how do I create one, and why should I use it?

3. Handling Sensitive Values Safely

4. Variable Precedence in Terraform

5. Using .tfvars files instead of environment variables

6. Deploy to Azure – Testing the configuration using tfvars values

7. Wrap-Up

(Image: architecture diagram)

GitHub Link for this week's files

1. Recap of Week 2

Last week we introduced variables in Terraform and saw how they make our configuration more reusable. Instead of hard-coding values directly in resource blocks, we created a variables.tf file and referenced values using var.variable_name.
We also explored how to assign values via defaults and environment variables (for example TF_VAR_location).

By the end of Week 2 our project structure was cleaner and more scalable, but managing a growing number of values purely through environment variables can become cumbersome. That's where .tfvars files come in, ready to simplify things.

In case you missed last week's post, you can read the full Week 2 post here.

2. What is a tfvars file, how do I create one, and why should I use it?

A tfvars file is where you define the values for your variables, separate from your Terraform code. Instead of setting them through environment variables or typing them out every time you run a command, you can store everything in one place and let Terraform pick it up automatically.

Terraform looks for two files by default:

terraform.tfvars and any file ending with .auto.tfvars (more on this in upcoming weeks). If either of those exists in your working directory, Terraform automatically loads it when you run plan or apply.

So let's go back to our Week 2 setup and explain this with our example:

In order to utilize tfvars file instead of Environment Variables, you would have a variables.tf file, such as the one we had last week:

(Image: variables.tf file)

Then, you would create a file called terraform.tfvars under your project folder, like so :

(Image: terraform.tfvars file)

As shown in the example, this is how you define values for the variables you created in your variables.tf file using the terraform.tfvars file.
Any value you set in terraform.tfvars overrides the variable’s default value if one is defined. The default value is only used when no other value is provided through a tfvars file, the CLI, or environment variables.

So why use it this way? Why not just set the values directly in the variable’s default attribute or continue using environment variables?

Because tfvars files offer much better flexibility and management at scale. Imagine having to create environment variables for a hundred different variables. Now imagine doing that across multiple Terraform environments where those values differ each time. Would you manually update or remove environment variables whenever you switch between environments?

The same challenge applies if you rely on default values. tfvars files solve this by letting you keep multiple versions for different purposes or environments. You can see all variable values for a given project in one place and manage them easily without touching your Terraform code.

3. Handling Sensitive Values Safely

When using tfvars files, you’ll often define values like usernames, passwords, or API keys. These are considered sensitive values, and Terraform provides a way to protect them from being displayed in your CLI output.

To do that, mark the variable as sensitive in your variables.tf file:

(Image: admin_username variable with sensitive = true)

This prevents Terraform from printing the value in your plan or apply output. You’ll still see that a change is being made, but the actual value will be hidden.

It’s worth noting that Terraform still stores these values in the state file, but we'll go over how to secure those in upcoming weeks.

Marking sensitive variables like this helps keep your terminal output, logs, and collaboration history clean while still allowing Terraform to use the values during deployment.

4. Variable Precedence in Terraform

When Terraform runs, it can receive variable values from several places. If a variable is defined in more than one location, Terraform follows a specific order to decide which value to use. Understanding this order helps you predict how your configuration will behave and avoid confusion when values overlap.

From lowest to highest precedence:

  1. Default values set inside the variable block

  2. Environment variables that start with TF_VAR_

  3. Values from terraform.tfvars, then any *.auto.tfvars files (in alphabetical order)

  4. Values passed with the -var or -var-file flag

If a variable is defined in multiple places, Terraform always uses the one that appears later in this list.

For example, if your terraform.tfvars file sets the location to East US:

(Image: terraform.tfvars with location = "East US")

but you also have a TF_VAR_location environment variable set to West Europe:

(Image: TF_VAR_location environment variable)

Terraform will use East US, because values from terraform.tfvars take precedence over environment variables.

Knowing this order is especially helpful when troubleshooting unexpected values or testing changes without modifying your main configuration.

5. Using terraform.tfvars files instead of environment variables

We actually covered most of what needs to be done for this section in the previous sections. To start using terraform.tfvars for assigning values to your variables, create a terraform.tfvars file as described in section 2 and define the values you would like to use for each variable.

We also saw in the variable precedence section that values from terraform.tfvars override TF_VAR_ environment variables, so leftover environment variables won't silently win over your tfvars values. Even so, if you still have environment variables defined from the previous weeks, it's worth removing them so there's no confusion about where each value comes from.

I had these defined from previous weeks, which I now deleted:

(Image: environment variables removed)

6. Deploy to Azure – Testing the configuration using tfvars values

  1. As always, we start with terraform init:

(Image: terraform init output)

  2. Next we run the terraform plan command. I'd like to highlight a few things in the plan output:

(Image: terraform plan output)

You can see that because we used the sensitive = true attribute for the admin_password variable, its value is not shown in the plan output. You can also see the resource_group_name value is now rg-prod-002, which is what I had in my terraform.tfvars file; it overrides the default value defined for that variable in the variables.tf file. Perfect, just as we expected.

  3. Now that everything looks good, we go ahead and issue the terraform apply command.

(Image: terraform apply output)

After the apply command ran for 1 minute and 30 seconds, I could see in my Azure tenant that the VM was created with the values we defined in terraform.tfvars:

(Image: VM in the Azure portal with the values from terraform.tfvars)

  4. Finally, don't forget to issue the terraform destroy command so you don't generate unnecessary costs.

(Image: terraform destroy output)

7. Wrap-Up

This week we replaced environment variables with a terraform.tfvars file and saw how much simpler it is to manage variable values this way. We learned how Terraform automatically reads that file, how its values override defaults, and how to handle sensitive variables safely.

With this setup, our configuration is cleaner, easier to maintain, and ready to scale. We no longer need to rely on environment variables every time we run a plan or apply.

Next week, we’ll build on this by improving the security of our deployment. We’ll add a Network Security Group to control access to the VM and use dynamic blocks to make our configuration more flexible.

I hope this was helpful to you and hope to see you next week for more Terraform fun!