The Practical Developer

A constructive and inclusive social network for software developers.
RSS preview of the blog of The Practical Developer

AI Tweet Scheduler

2026-04-24 03:39:32

What it does

Craft engaging tweets with AI, effortlessly schedule, and boost your Twitter presence.

Try it

Built this to solve a real problem. Feedback welcome!

Fintech on Go: what the language solves in a crypto backend (Part 1)

2026-04-24 03:39:05

A two-part case study on building an ERC-20 rewards service in Go, covering goroutines, replicas, and the consistency problems that surface before a transaction is signed. Part 1 tackles nonce sequencing and idempotency.

TL;DR

A crypto payments backend holds state in two systems that disagree often: Postgres and the Ethereum chain. Consistency breaks at three points in an ERC-20 (Ethereum's fungible token standard) transfer. Before the broadcast, the nonce must be ordered correctly. After the broadcast, retries must be idempotent. Between the database commit and the broker that dispatches to the chain, the Transactional Outbox closes the atomicity gap. All three break because the backend is concurrent. All three resolve with the same move: coordinate outside the Go process, let Postgres be the arbiter, and make the outcome explicit in the return type. Every failure mode has a name and a type. Go's explicit error values turn financial failure cases into auditable domain objects, not hidden exceptions. Part 2 covers signing, event loops, and replay protection.

Why I wrote this

Go is the dominant language in the backend layer of crypto fintech. Kraken, Coinbase, Zerohash, Circle, Fireblocks, and go-ethereum itself run on Go. The Rust wave belongs to validators and consensus nodes, not the application layer that moves money and reconciles state.

I worked on Mercado Envios while Mercado Pago was building Mercado Coin and MELI Dolar. Recurring leadership meetings on those products pulled me into studying this niche. The running example mints ERC-20 tokens across goroutines and replicas.

Why Go for this layer

Tech leads ask this before they pick a stack, so here is the case in plain terms.

go-ethereum is Go, not a binding. The reference Ethereum implementation is written in Go. ethclient.Dial, types.SignTx, and crypto/ecdsa are not wrappers around a C library or JNI bindings to a Java implementation. They are the source code you can read, audit, and step through in a debugger in the same language you ship. When behavior at the protocol level is ambiguous, reading go-ethereum/core/types is the answer. Node and Java teams open a translation layer; Go teams open the same repository.

Interfaces over go-ethereum enable testing without a live node. The service defines a four-method EthereumClient interface at the use case layer:

type EthereumClient interface {
    SendTransaction(ctx context.Context, tx *types.Transaction) error
    SuggestGasPrice(ctx context.Context) (*big.Int, error)
    TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error)
    PendingNonceAt(ctx context.Context, addr common.Address) (uint64, error)
}

A test replaces this with a struct returning canned receipts. No test container, no Anvil instance, no docker-compose up. Go's implicit interface satisfaction means the concrete ethclient.Client satisfies this interface without a single annotation. The interface lives in the use case, not next to its implementation, which is the dependency direction that keeps business logic framework-free.

Context cancellation reaches the last provider. A request walks from HTTP handler to use case to repository to the signer to one or more RPC (Remote Procedure Call) providers. Every function accepts a context.Context, and a client disconnect or a 3-second deadline at the top propagates to the last eth_getTransactionByHash call without glue code. Java teams emulate this with CompletableFuture and thread pools, Node teams reach for AsyncLocalStorage or abort controllers. In Go the cancellation is the function signature.

Errors are domain values, not exceptions. The mint use case returns a MintResult struct with a Retryable bool field. A caller retrying an in-flight broadcast reads that field and decides. Nothing propagates silently up a call stack. No catch (TransientException e) that hides a second case. In a payment system, every failure path has a financial consequence, and Go forces you to name it at the boundary where it occurs.

The concurrency primitives (goroutines, channels, errgroup) are in the language, not bolted on. 1,000 goroutines cost roughly 2MB of stack. A thread pool at the same count costs off-heap buffers and tuning. Part 2 shows the event loop that orchestrates signing and confirmation across all of this without losing work across restarts.

What makes these backends different

Two systems of record that disagree often. Postgres gives ACID and strong consistency inside its own transaction boundary. The chain gives probabilistic finality. A pending transaction can be reordered, dropped, or reorged before the receipt arrives. A confirmed receipt is still subject to short-window reorgs that senior teams handle by waiting N block confirmations. In CAP terms, Postgres is CP within the cluster and the chain is AP with eventual, probabilistic convergence. The job of the backend is not to move tokens. The job is to keep these two registers consistent under partial failure while many goroutines touch them at the same time.

Three consistency problems encode that job: ordering before the broadcast (nonce), outcome resolution after the broadcast (idempotency), and atomicity across the database and the broker that feeds the chain. This article covers all three. Part 2 picks up where atomic dispatch hands off to the live chain, covering signing, event loops, and replay protection.

Problem 1: nonce sequencing, before the broadcast

Every transaction from an Ethereum-style wallet carries a sequential nonce, the per-wallet counter that orders transactions from a single address. The service signs from a single hot wallet. Many goroutines issue mints in parallel, and the service runs in replicas behind a load balancer. Two goroutines that sign with the same nonce lose one of the broadcasts. A skipped nonce stalls every subsequent transaction from that wallet until an on-call engineer sends a no-op self-transfer at the missing nonce. Serialization is mandatory.

Why a mutex alone is not enough

sync.Mutex protects state inside one process. The nonce is shared across replicas and must survive a restart, so coordination has to live outside Go's memory. Redis provides cross-process mutual exclusion with a TTL that handles crashed holders. On its own, Redis cannot detect the case where the lock expired mid-transaction and another replica moved ahead. The stale holder would still come back and commit over the new state.

A fencing token, following Martin Kleppmann's argument against distributed locks used alone, closes that gap. I wrote a standalone piece on the theory at distributed locks and fencing tokens. This section takes the theory as given and walks how it composes on top of SELECT FOR UPDATE.

The flow

[Diagram: Replica 1 and Replica 2 acquire a Redis lock with a fencing token; nonce updates commit in Postgres only when the fence exceeds the stored value.]

Redis is the outer gate. Postgres is the arbiter. The fence travels from Redis into the transaction, so a stale holder whose TTL lapsed mid-BEGIN is rejected at commit time.

The composition, in code

func (r Repository) incrementInTx(ctx context.Context, wallet string, fence int64) (int64, error) {
    var current int64
    err := r.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
        m := new(walletNonceModel)
        if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
            Where("wallet_address = ?", wallet).
            First(m).Error; err != nil {
            return err
        }
        if fence <= m.FenceToken {
            return ErrStaleLockToken
        }
        current = m.CurrentNonce
        m.CurrentNonce++
        m.FenceToken = fence
        return tx.Save(m).Error
    })
    return current, err
}

Tradeoffs

Mechanism                            Cross-replica safe   Survives TTL expiry   Cost               Fit
sync.Mutex                           No                   N/A                   Free               In-process only, fails on restart
Postgres advisory lock               Single master only   N/A                   Free               No cross-region
Redis SET NX PX alone                Yes                  No                    1 RTT              Stale writer wins
Redis + fence + SELECT FOR UPDATE    Yes                  Yes                   1 RTT + row lock   Chosen: correctness critical

Failure modes

  1. TTL expires mid-transaction. A second replica acquires the lock, increments the fence, and commits. When the stale holder finally reaches its commit, the fence comparison returns ErrStaleLockToken and the caller retries with a fresh fence.
  2. Process crashes after releasing Redis but before the database commit. The uncommitted transaction is rolled back, and the next broadcast re-reads the pre-commit nonce. No data is lost.
  3. Replicas partition from each other. Postgres is still the final arbiter. A reconciler calls SyncFromChain(ctx, wallet, nonce) which upserts the nonce on ON CONFLICT and heals drift.

Bridge

A correct nonce guarantees ordering before the broadcast. It does not guarantee the broadcast happens at most once. If the process dies between the RPC that sends the transaction and the database write that records the hash, a retry has no way to know the transaction already went out.

Problem 2: idempotent broadcast, after the broadcast

A broadcast is a side effect with no undo. Once the signed transaction leaves the process, the mempool owns it. The backend now faces a classic dual-write problem: the local database needs to record the hash, the chain already holds the pending transaction, and the process can die in between. With goroutines retrying failed or stuck mints in parallel, two retries of the same logical transfer must not produce two broadcasts. A double broadcast credits the user twice and the treasury loses the difference. Retrying blindly double-spends. Treating every unknown state as failure loses funds. Idempotency answers this at the API surface, and a UNIQUE constraint on idempotency_key is the last-line backstop that refuses duplicates even when the application retries aggressively.

The state machine

[Diagram: idempotency state machine with Unknown, Broadcasting, Finalized, Failed, PendingStuck, and RetryableCaller states; transitions are driven by receipt confirmation, reverts, and the thirty-second timeout.]

Four states, one per outcome the caller needs to distinguish. The state is stored by idempotency_key in the database, so the same check works across replicas and across restarts. Each state has its own explicit return:

if tx.Status == domain.TransactionStatusPending && time.Since(tx.CreatedAt) < 30*time.Second {
    return tx, &MintResult{Success: false, Retryable: true}, nil
}

Explicit results, not exceptions

type MintResult struct {
    Success         bool
    TransactionHash string
    BlockNumber     int64
    Status          string
    ErrorCode       string
    ErrorMessage    string
    Retryable       bool
}

Retryable is the field to internalize. It is not a wrapped exception, not a magic error code, but a boolean on a result struct that maps to a gRPC response, an HTTP body, and a client that decides whether to retry. Go's insistence on explicit returns turns the four-state machine into visible, testable API surface.

This is the errors-as-values pattern applied to a financial domain. Java and Kotlin throw TransientException and trust the caller's catch hierarchy. Node rejects with an error object that might or might not carry a retry signal. In Go, the contract is part of the type: Retryable bool is documented in the struct, enforced by the compiler, and visible in code review. Every failure case in a money system has a financial consequence. Naming them in the type system is not optional.

Failure modes

  1. The process crashes between the SendTransaction RPC and the database insert. On restart, the worker queries by idempotency key, finds nothing, and starts a new broadcast. If the first RPC went through, the chain now holds a pending transaction whose hash the service does not know. A reconciler that watches the mempool by sender address heals that state.
  2. A transaction sits in Pending for more than thirty seconds. The state machine transitions it to PendingStuck and returns Retryable: true. The caller backs off instead of holding an open request.

The database is the arbiter. Not the chain, not in-process memory, not the caller.

Retry orchestration without a framework

The state machine answers what to do when the caller asks again. It does not by itself drive the retries. A retry loop wraps every mint call with a deadline, exponential backoff with jitter, and a context that cancels every nested RPC when the budget runs out. The entire orchestration lives in the standard library plus one golang.org/x/sync helper:

func (u UseCase) MintWithRetries(ctx context.Context, req Request) (MintResult, error) {
    ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
    defer cancel()

    backoff := time.Second
    for {
        res, err := u.mintOnce(ctx, req)
        if err == nil && !res.Retryable {
            return res, nil
        }
        if err != nil && !isRetryable(err) {
            return MintResult{}, err
        }
        jitter := time.Duration(rand.Int63n(int64(backoff / 2)))
        select {
        case <-ctx.Done():
            return MintResult{}, ctx.Err()
        case <-time.After(backoff + jitter):
        }
        backoff = min(backoff*2, 8*time.Second)
    }
}

Three things worth pointing at. mintOnce carries the same idempotency_key through every attempt, so the second call rejoins the state machine from wherever the first one left off. The select { case <-ctx.Done() } block is the whole framework. A deadline from the HTTP handler cancels the timer, the timer fires and drives the next attempt, and nothing leaks once the client disconnects. And isRetryable is a small predicate on domain errors, not an exception class hierarchy.

Node projects installing p-retry to get this surface, Java projects composing Resilience4j with CompletableFuture.orTimeout, and Kotlin projects stitching coroutine withTimeout plus a flow operator all end up at the same shape. In Go the shape is the stdlib.

Problem 3: atomicity across the database and the broker

The idempotency pattern prevents a double-broadcast when the caller retries. It does not prevent a lost event when the process dies between two writes. The use case needs to save a domain row in Postgres and emit an event that the chain dispatcher will consume. Two systems, one logical write, and the process can die between them. Chris Richardson's Transactional Outbox pattern is the proportional answer: write the domain row and the outbound event in the same Postgres transaction, then let a separate loop publish the event asynchronously. Postgres is the source of truth. The broker is always downstream of commit, never ahead of it.

The call site is one transaction boundary:

func (u UseCase) Credit(ctx context.Context, input CreditInput) error {
    return u.tx.WithTransaction(ctx, func(ctx context.Context) error {
        if err := u.cashback.Save(ctx, input.ToCashback()); err != nil {
            return fmt.Errorf("save cashback: %w", err)
        }
        event := outbox.NewMintRequested(input.ID, input.Wallet, input.Amount)
        return u.outbox.CreateWithTx(ctx, event)
    })
}

WithTransaction opens one Postgres transaction and threads it through the context. Save and CreateWithTx both land in that same transaction, and commit or rollback is a single decision. The outbox_events row carries the event downstream; a relay loop picks it up out-of-band.

The transaction propagation itself is short:

func (m Manager) WithTransaction(ctx context.Context, fn func(context.Context) error) error {
    return m.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
        return fn(WithTx(ctx, tx))
    })
}

The transaction rides on context.Context, not as a threaded *gorm.DB argument. Use cases stay GORM-unaware. Repositories read the current transaction with DB(ctx) and get either the pending transaction or the pool. One code path covers both, and tests wire the same repositories against a real database without mock objects or flag switches. The pattern follows Robert Laszczak's Database transactions in Go with layered architecture on threedots.tech. It is the clearest write-up of why propagating the transaction through the context pays for itself once an application has more than three repositories.

The contract abstraction extends to the ERC-20 token itself. The use case defines a TokenContract interface alongside EthereumClient:

type TokenContract interface {
    Mint(opts *bind.TransactOpts, to common.Address, amount *big.Int) (*types.Transaction, error)
    BalanceOf(opts *bind.CallOpts, account common.Address) (*big.Int, error)
}

The concrete implementation is a generated ABI binding from abigen. Tests swap it for a struct that returns a deterministic transaction hash. The use case never sees the binding, never touches the ABI, and never knows whether the test runs against Anvil or a mock.

The published flow looks like this:

[Sequence diagram: the handler writes the domain row and the outbox event in one Postgres transaction; the outbox relay ticks, publishes to NATS, then marks the row published; the mint consumer applies ON CONFLICT DO NOTHING on the idempotency key.]

The window Outbox closes is clearer when drawn without it:

[Sequence diagram: the handler commits the domain row in Postgres, then crashes before publishing to NATS; the event is lost, the user expects tokens, but the mint was never dispatched, leaving a manual reconciliation.]

That failure mode is not theoretical. Any dual write across a database and a broker without a coordinating pattern leaves the window open. The options for closing it all have tradeoffs:

Pattern                                        Atomic DB + broker   Latency            Infra               Fit
Single DB transaction only                     Yes (DB only)        Lowest             None                Breaks once the broker is involved
Transactional Outbox                           Yes                  +1 relay tick      Postgres + poller   DB is source of truth, chosen here
Two-phase commit (XA, eXtended Architecture)   Yes                  High, blocking     XA-capable broker   Rarely available, high operational cost
Saga with compensations                        Eventual             Variable           Per-step logic      Complements Outbox for multi-service flows
Change Data Capture                            Yes                  +replication lag   Debezium + Kafka    Heavy infrastructure for a small team

Outbox trades one relay tick of latency for full atomicity with existing infrastructure. Two-phase commit needs an XA-capable broker that rarely exists outside legacy enterprise stacks. Saga complements Outbox when a downstream step needs compensation; the service uses MarkFailed as the compensating action when retries exhaust. CDC (Change Data Capture) is the bigger-team answer for shops that already run Debezium. For a team with Postgres and any broker, Outbox is the proportional choice.

Two concrete failure modes sit on either side of this choice. Without Outbox, the database commits, the publish call fails, the user is under-credited, and somebody reconciles by hand. With Outbox, the relay can crash after publishing but before marking the row published, which causes a duplicate publish on restart. The consumer absorbs it with a UNIQUE (idempotency_key) constraint, which ties straight back to the idempotency pattern in Problem 2.
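That consumer-side backstop can be sketched in memory, with a set keyed by idempotency key playing the role of the UNIQUE constraint (hypothetical names; in the real service the dedupe happens in Postgres via ON CONFLICT DO NOTHING):

```go
package main

import "fmt"

// mintConsumer deduplicates by idempotency key the way the real
// consumer relies on UNIQUE (idempotency_key): the second delivery
// of the same event is a no-op.
type mintConsumer struct {
	seen    map[string]bool
	applied int
}

func (c *mintConsumer) handle(idempotencyKey string) {
	if c.seen[idempotencyKey] { // duplicate publish after a relay restart
		return
	}
	c.seen[idempotencyKey] = true
	c.applied++ // dispatch the mint exactly once
}

func main() {
	c := &mintConsumer{seen: map[string]bool{}}
	c.handle("evt-1")
	c.handle("evt-1") // relay crashed after publish, republished on restart
	fmt.Println(c.applied) // 1
}
```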

Hook to part 2

The three consistency problems are now covered. Nonce sequencing handles ordering before the broadcast. Idempotent state machines handle outcome resolution after the broadcast. Transactional Outbox closes the gap between the database commit and the chain dispatcher. Part 2 picks up where the database hands off to the live service. It covers signing a transaction with the standard library and no C bindings, for { select } event loops that do not lose work across restarts, and replay protection at the consumer end when the chain delivers the same log twice.

References

This is part 1 of a two-part series on building crypto payments backends in Go. Part 2 covers signing with the standard library and event loops that do not lose work across restarts.

Building an Authentication System with PHP and MySQL (Step by Step)

2026-04-24 03:35:28

Introduction:
In this tutorial, I will explain how I built a simple authentication system using PHP, MySQL, HTML, and CSS. The project lets users register, log in, and access a protected dashboard.


Technologies used:
• PHP
• MySQL
• HTML
• CSS
• phpMyAdmin

Step 1: Creating the database:
We start by creating the database and the users table:


Step 2: Registration (inscri.php):
The registration form lets the user create an account by entering:
• Last name
• First name
• Email
• Password

Processing:
• Field validation
• Sanitizing input with htmlspecialchars()
• Hashing the password with sha1()
• Checking whether the email already exists
• Inserting the record into the database

Step 3: Login (login.php):
The login system:
Checks the email and password
Uses PDO to connect to the database
Compares the submitted data against the users table

If the credentials are correct:

This creates a user session.

Step 4: Dashboard (site.php):
After logging in, the user is redirected to a dashboard containing:
A welcome message, placeholder statistics, and a logout button.

Step 5: Logout (logout.php):
The logout is very simple:

This destroys the session and logs the user out.

Step 6: User interface:
The design was built with CSS:
• A modern interface
• Gradients and animations
• Responsive forms

What I learned:
• Connecting PHP to MySQL
• Session management
• Basic data sanitization
• Structuring a complete web project

Conclusion:
This project helped me understand how a complete authentication system works with PHP and MySQL, from registration through logout.

🔗 Source code on GitHub:
https://github.com/ayabahri390-bit/authentification-system-php.git

Prevent Duplicate API Requests in Angular with Idempotency and HTTP Interceptors

2026-04-24 03:34:07

Modern web applications frequently interact with APIs that perform critical operations such as payments, order creation, or data updates. One common problem developers encounter is duplicate requests caused by:

  • users double-clicking buttons
  • unstable internet connections
  • automatic retries
  • browser refreshes during a request

Without safeguards, these duplicates can lead to serious issues like multiple charges, duplicated orders, or inconsistent data.

To solve this, many modern APIs rely on a concept called idempotency.

What is Idempotency in API Requests?

In simple terms, idempotency means that performing the same operation multiple times produces the same result as performing it once.

In API design, this means that sending the same request multiple times should not create multiple side effects.

This is usually implemented using a unique request identifier called an Idempotency Key.

Example request:

POST /orders
Idempotency-Key: 12345

If the same request is sent again with the same key, the server recognizes it and returns the previously generated response instead of executing the operation again.

Real-World Example: Online Payments

Imagine a user purchasing a product online.

They click “Pay Now”, but the network is slow and the page appears unresponsive.

The user clicks the button again.

Without idempotency, the system might process two payments.

User clicks Pay twice
↓
POST /payments
POST /payments
↓
Two charges are created

With idempotency implemented:

POST /payments
Idempotency-Key: 9ab3...

POST /payments
Idempotency-Key: 9ab3...

The server detects that the request has already been processed and simply returns the same payment result, preventing duplicate charges.

This pattern is widely used in payment platforms like Stripe.

Why Use Idempotency?

Implementing idempotency provides several important advantages.

Prevents Duplicate Operations

Users often double-click buttons or retry operations when something seems slow.

Idempotency ensures the backend processes the action only once.

Enables Safe Retries

Network failures happen frequently in real-world systems.

Idempotency allows clients to retry requests without worrying about unintended side effects.

Improves System Reliability

In distributed systems, retries and failures are normal.

Idempotency ensures that repeated requests do not break system consistency.

Better User Experience

Users should not suffer consequences due to technical issues like timeouts or slow networks.

Idempotency protects operations like:

  • payments
  • orders
  • form submissions
  • account actions

Implementing Idempotency in Angular Using an HTTP Interceptor

On the frontend, we can support idempotency by automatically attaching an Idempotency-Key header to requests.

Angular provides a perfect mechanism for this: HTTP Interceptors.

The interceptor can:

  1. Generate an idempotency key
  2. Store it in sessionStorage
  3. Attach it to outgoing requests
  4. Remove it after a successful response

Step 1: Create the Idempotency Interceptor

import { Injectable } from '@angular/core';
import {
  HttpEvent,
  HttpHandler,
  HttpInterceptor,
  HttpRequest,
  HttpResponse
} from '@angular/common/http';
import { Observable, tap } from 'rxjs';

@Injectable()
export class IdempotencyInterceptor implements HttpInterceptor {
  private readonly HEADER = 'Idempotency-Key';
  private readonly STORAGE_PREFIX = 'idem_';
  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    if (!['POST', 'PUT', 'PATCH'].includes(req.method)) {
      return next.handle(req);
    }
    const fingerprint = this.createFingerprint(req);
    const storageKey = this.STORAGE_PREFIX + fingerprint;
    let idempotencyKey = sessionStorage.getItem(storageKey);
    if (!idempotencyKey) {
      idempotencyKey = this.generateKey();
      sessionStorage.setItem(storageKey, idempotencyKey);
    }
    const clonedRequest = req.clone({
      setHeaders: {
        [this.HEADER]: idempotencyKey
      }
    });
    return next.handle(clonedRequest).pipe(
      tap({
        next: (event) => {
          if (event instanceof HttpResponse && event.ok) {
            sessionStorage.removeItem(storageKey);
          }
        }
      })
    );
  }
  private createFingerprint(req: HttpRequest<any>): string {
    const body = req.body ? JSON.stringify(req.body) : '';
    return btoa(`${req.method}|${req.urlWithParams}|${body}`);
  }
  private generateKey(): string {
    return crypto.randomUUID();
  }
}

Step 2: Register the Interceptor

import { HTTP_INTERCEPTORS } from '@angular/common/http';
import { IdempotencyInterceptor } from './interceptors/idempotency.interceptor';

providers: [
  {
    provide: HTTP_INTERCEPTORS,
    useClass: IdempotencyInterceptor,
    multi: true
  }
]

Step 3: What Happens During a Request

When Angular sends a request like this:

this.http.post('/api/orders', orderData)

The interceptor automatically adds:

Idempotency-Key: 3f7a8c41-93a0-4b2f-b2a6-8c9f2d1e7c2b

The key is stored in sessionStorage so that retries reuse the same identifier.

After a successful response:

HTTP 200 OK

The stored key is removed.

Final Thoughts

Idempotency is a simple but powerful technique that dramatically improves the reliability of API-driven systems.

It helps prevent:

  • duplicate payments
  • repeated orders
  • inconsistent data

By implementing idempotency using an Angular HTTP interceptor, you can add this protection transparently across your application.

In systems where operations are critical — like payments or orders — this pattern can make the difference between a reliable system and a costly bug.

If you enjoyed this article, feel free to follow me here or connect with me on LinkedIn to stay updated on my real-world web development experiences.

I’d love to hear your thoughts and keep the conversation going!

Claude AI: What It Is, How It Works, and Why It Stands Out

2026-04-24 03:23:50

Claude AI: The Definitive, Complete Guide

In recent years, language models have invaded the daily lives of devs, researchers, and companies. And among all the players in this market, Claude AI has been standing out in a way that goes beyond the hype.

It is not just another chatbot. It is one of the most sophisticated AIs available today, with a focus on safety, reasoning, and long context.

What is Claude AI

Claude AI is an artificial intelligence assistant developed by Anthropic, founded by Dario and Daniela Amodei, former OpenAI researchers. The company was created with a clear goal: to build AI that is powerful and fundamentally safe.

In practice, Claude AI is an LLM (Large Language Model) able to understand and generate text naturally and in context. You talk to it the way you exchange messages with a person: asking complex questions, requesting analyses, creating content, reviewing code.

What makes Claude AI different

Constitutional AI

Anthropic developed its own training approach in which the model is guided by an explicit set of ethical principles. Claude was trained not only to be helpful but to avoid harmful, misleading, or inconsistent answers, and that shows in practice.

A huge context window

The context window defines how much text the model processes at once. Claude AI stands out with one of the largest windows available on the market, reaching hundreds of thousands of tokens in the most advanced versions.

In practice: you can paste a long contract, an extensive article, or a complete code file and ask for a detailed analysis, without the model "forgetting" what was said before.

A family of models for different uses

Claude Opus: maximum reasoning capability, for complex tasks

Claude Sonnet: a balance of performance and speed, the most used day to day

Claude Haiku: fast answers with lower resource consumption

How to use Claude AI

The simplest way in is claude.ai: create a free account and start using it. The free tier covers most everyday cases.

For devs who want to integrate Claude AI into their own applications, Anthropic offers a well-documented API, the same one used by companies that embed the model's capabilities in their products.

Why it is worth paying attention

Claude AI has drawn attention especially in tasks that demand long reasoning, document analysis, and consistency across extended conversations. For devs, it is a powerful tool for code review, documentation, explaining technical concepts, and writing technical content.

If you have not tried it yet, or are comparing it with other options, it is worth a detailed look.

Want the full guide with every version, comparisons, real use cases, and how to integrate via the API?

👉 Full article at: mododev.com.br/claude-ia

Understanding Transformers Part 12: Building the Decoder Layers

2026-04-24 03:23:30

In the previous article, we just began with the concept of decoders in a transformer.

Now we will start adding the positional encoding.

Adding Positional Encoding in the Decoder

Now, for the decoder, let’s add positional encoding.

Just like before, we use the same sine and cosine curves to get positional values based on the embedding positions.

These are the same curves that were used earlier when encoding the input.

Applying Positional Values

Since the <EOS> token is in the first position and has two embedding values, we take the corresponding positional values from the curves.

  • For the first embedding, the value is 0
  • For the second embedding, the value is 1

Now, we add these positional values to the embedding:

As a result, we get 2.70 and -0.34, which represent the <EOS> token after adding positional encoding.
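The 0 and 1 above fall straight out of the standard sinusoidal formula at position 0, since sin(0) = 0 and cos(0) = 1. A small sketch (written in Go for consistency with the other code in this feed; the function name is mine):

```go
package main

import (
	"fmt"
	"math"
)

// positionalEncoding returns the sinusoidal positional values for one
// token position across dModel embedding dimensions: sine on even
// dimensions, cosine on odd dimensions, following the transformer paper.
func positionalEncoding(pos, dModel int) []float64 {
	pe := make([]float64, dModel)
	for i := 0; i < dModel; i++ {
		// Each sine/cosine pair shares a frequency determined by its index.
		angle := float64(pos) / math.Pow(10000, float64(2*(i/2))/float64(dModel))
		if i%2 == 0 {
			pe[i] = math.Sin(angle)
		} else {
			pe[i] = math.Cos(angle)
		}
	}
	return pe
}

func main() {
	// Position 0 with two embedding values, as for the <EOS> token above.
	fmt.Println(positionalEncoding(0, 2)) // [0 1]
}
```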

Adding Self-Attention

Next, we add the self-attention layer so the decoder can keep track of relationships between output words.

The self-attention values for the <EOS> token are -2.8 and -2.3.

Note that the weights used in the decoder’s self-attention (for queries, keys, and values) are different from those used in the encoder.

Adding Residual Connections

Now, we add residual connections, just like we did in the encoder.

What’s Next?

So far, we have seen how self-attention helps the transformer understand relationships within the output sentence.

However, for tasks like translation, the model also needs to understand relationships between the input sentence and the output sentence.

We will explore this in the next article.

Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.

Just run:

ipm install repo-name

… and you’re done! 🚀


🔗 Explore Installerpedia here