The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

Swift Testing #4: Running Tests Serially

2025-12-05 06:15:43

Running tests concurrently in XCTest is opt-in; in fact, XCTest runs tests serially by default. In Swift Testing, by contrast, execution is parallel by default.

Even though it is a sign of poor design, sometimes a test harness needs to run serially. Most likely this is because the code under test keeps some static state that is shared between tests.

Consider the following class, State:

class State {
  private static var numbers = [1, 2, 3]
  func add(number: Int) {
    State.numbers.append(number)
  }
  func add(numbers: [Int]) {
    State.numbers.append(contentsOf: numbers)
  }
  func removeLastNumber() -> Int? {
    State.numbers.popLast()
  }
  var count: Int {
    State.numbers.count
  }
}

The following tests will most likely fail:

struct StaticTests {
    let state = State()
    @Test
    func test1() {
        state.add(numbers: [4, 5])
        #expect(state.count == 5)
        #expect(state.removeLastNumber() == 5)
    }
    @Test
    func test2() {
        state.add(numbers: [6, 7])
        #expect(state.count == 6)
        #expect(state.removeLastNumber() == 7)
    }
}

To make them pass, you must explicitly configure the test suite, specifying that it should run serially by means of the .serialized trait.

For example, the following tests run sequentially:

@Suite(.serialized) // Explicitly define the suite
struct StaticTests {
    let state = State()
    @Test
    func test1() {
        state.add(numbers: [4, 5])
        #expect(state.count == 5)
        #expect(state.removeLastNumber() == 5)
    }
    @Test
    func test2() {
        state.add(numbers: [6, 7])
        #expect(state.count == 6)
        #expect(state.removeLastNumber() == 7)
    }
}

References

  • Video "Mastering Swift Testing: Run Serialized Tests with One Line of Code!" (Swift and Tips), here.
  • Playlist "Swift Testing" (Swift and Tips), here.
  • Swift Testing documentation, here.
  • Documentation "Running tests serially or in parallel", here.

Learning Istio the Hard Way: A Real Service Mesh Lab with Canary, mTLS, and Tracing

2025-12-05 06:12:16

Table of Contents

  • Why I Built a Service Mesh Lab Instead of Just Reading Docs
  • The 3RVision Platform: Real App, Real Traffic
  • Setting the Stage: Kind + Terraform + Namespaces
  • How Istio Fits In: Sidecars, Gateways, and the Data Plane
  • Traffic Management 101: VirtualServices, DestinationRules, and Subsets
  • Implementing Canary Releases with Headers and Weights
  • Enforcing Zero-Trust with STRICT mTLS
  • Load Balancing, Connection Pools, and Circuit Breaking
  • Observability: Metrics, Dashboards, and Traces
  • Walking Through a Single Request
  • Key Takeaways

Why I Built a Service Mesh Lab Instead of Just Reading Docs

This project started as a personal lab to really understand what a service mesh does beyond the buzzwords. Instead of using sample apps, the goal was to take a real 3-tier app (Next.js frontend, Go backend, Flask ML service) and see how Istio changes the way traffic, security, and observability work in practice.

The idea was simple: if this setup feels like something that could ship to production one day, then the learning will stick and this repo becomes a living reference for future work.

The 3RVision Platform: Real App, Real Traffic

3RVision is split into three logical services, each running in its own Kubernetes namespace:

  • frontend → Next.js UI
  • backend → Go API server
  • ml → Flask ML inference service

The frontend talks to the backend, and the backend calls the ML service for model inference, exactly the kind of hop-by-hop traffic that benefits from a service mesh.

Each service has two deployment variants:

  • stable version (production)
  • canary version (testing new features)

This is where Istio’s traffic management features come into play.

Setting the Stage: Kind + Terraform + Namespaces

To avoid dealing with cloud accounts, Terraform provisions a local Kind cluster with:

  • 1 control plane node
  • 2 worker nodes
  • Port mappings for HTTP (80) and HTTPS (443)

Cluster Setup Workflow

# Provision the cluster
terraform init
terraform apply

# Create namespaces
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace ml

# Enable Istio sidecar injection
kubectl label namespace frontend istio-injection=enabled
kubectl label namespace backend istio-injection=enabled
kubectl label namespace ml istio-injection=enabled

[Image: istio-injection namespace labels]

This gives you a clean separation:

  • Terraform → cluster lifecycle
  • Kubernetes → application resources
  • Istio → traffic shaping and security

How Istio Fits In: Sidecars, Gateways, and the Data Plane

Istio works by injecting an Envoy sidecar proxy next to each application container. All inbound and outbound traffic flows through this sidecar, which means you can add routing, retries, mTLS, and telemetry without changing application code.

[Image: Envoy sidecar container alongside the app container]

Architecture Overview

                            ┌─────────────────────┐
                            │    User/Client      │
                            └──────────┬──────────┘
                                       │
                                       ▼
┌──────────────────────────────────────────────────────────────────────────┐
│                        KIND KUBERNETES CLUSTER                            │
│                        (Terraform Provisioned)                            │
│  ┌────────────────┐  ┌────────────────┐  ┌────────────────┐              │
│  │ Control Plane  │  │   Worker #1    │  │   Worker #2    │              │
│  └────────────────┘  └────────────────┘  └────────────────┘              │
├──────────────────────────────────────────────────────────────────────────┤
│                          ISTIO SERVICE MESH                               │
│                                                                           │
│    Gateway ──────► VirtualService ──────► DestinationRule                │
│   (Ingress)          (Routing)           (mTLS + Load Balancing)         │
├──────────────────────────────────────────────────────────────────────────┤
│                           MICROSERVICES                                   │
│                                                                           │
│   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐               │
│   │   FRONTEND   │    │   BACKEND    │    │   ML MODEL   │               │
│   │   (Next.js)  │───►│     (Go)     │───►│   (Flask)    │               │
│   │  Port: 3000  │    │  Port: 8080  │    │  Port: 5001  │               │
│   │              │    │              │    │              │               │
│   │ stable/canary│    │ stable/canary│    │ stable/canary│               │
│   └──────────────┘    └──────────────┘    └──────────────┘               │
├──────────────────────────────────────────────────────────────────────────┤
│                        OBSERVABILITY STACK                                │
│                                                                           │
│   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐               │
│   │  Prometheus  │    │    Jaeger    │    │   Grafana    │               │
│   │   (Metrics)  │    │  (Tracing)   │    │ (Dashboards) │               │
│   │  Port: 9090  │    │ Port: 16686  │    │  Port: 3000  │               │
│   └──────────────┘    └──────────────┘    └──────────────┘               │
└──────────────────────────────────────────────────────────────────────────┘

At the edge, an Istio Ingress Gateway receives external requests, applies routing rules defined by VirtualServices, and forwards traffic deeper into the mesh.

Traffic Management 101: VirtualServices, DestinationRules, and Subsets

The main Istio building blocks used in this project are:

Resource          Purpose
Gateway           Exposes services to external traffic on specific ports
VirtualService    Defines how requests are routed (by header, weight, path)
DestinationRule   Defines policies for traffic (subsets, load balancing, connection pools)
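The table above mentions a Gateway, but the post never shows one. Here is a minimal sketch of what it could look like; the name frontend-gateway, the host frontend.local, and the default Istio ingress gateway selector are assumptions consistent with the VirtualService that follows, not the lab's actual manifest.

```yaml
# Hypothetical Gateway for the frontend-gateway referenced in this post.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: frontend-gateway
  namespace: frontend
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "frontend.local"
```

A VirtualService attaches to this Gateway by listing its name in the `gateways` field, as shown in the next example.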

Example: Frontend VirtualService

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend-vs
  namespace: frontend
spec:
  hosts:
    - "frontend.local"
  gateways:
    - frontend-gateway
  http:
    - match:
        - headers:
            x-canary:
              exact: "true"
      route:
        - destination:
            host: frontend-service
            subset: canary
          weight: 100
    - route:
        - destination:
            host: frontend-service
            subset: stable
          weight: 90
        - destination:
            host: frontend-service
            subset: canary
          weight: 10

Each service (frontend, backend, ml) has:

  • A VirtualService that decides which version handles the request
  • A DestinationRule that defines two subsets based on the version label: stable and canary

Implementing Canary Releases with Headers and Weights

The canary strategy is intentionally simple but powerful.

Traffic Routing Logic

  1. If request has header x-canary: true → 100% to canary version.
  2. If header is missing → Split by weight (for example, 90% stable, 10% canary).

This pattern makes it easy to:

  • Send only internal testers to canary by setting the header.
  • Gradually increase canary weight without touching deployment specs.
  • Roll back instantly by adjusting VirtualService weights.

Testing Canary Deployment

# Send request to canary version
curl -H "x-canary: true" http://frontend.local

# Send request with default routing (weight-based)
curl http://frontend.local

[Image: traffic split between stable and canary]

Because the same pattern is applied to all three services (frontend, backend, ML), a single user journey can be fully on canary or fully on stable.

Enforcing Zero-Trust with STRICT mTLS

To move toward a zero-trust model, each namespace has a PeerAuthentication resource that sets mTLS mode to STRICT.

What This Means

Services only accept encrypted traffic from other sidecars in the mesh; plain HTTP between pods is rejected.

[Image: mTLS verification]

Benefits of Istio mTLS

  1. Encryption → Nobody can sniff requests or responses in transit.
  2. Mutual authentication → Prevents unknown workloads from accessing services.
  3. Automated cert management → No manual cert rotation or key generation.

Example: PeerAuthentication

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: backend
spec:
  mtls:
    mode: STRICT

Istio documentation shows similar namespace-level policies to enforce strict mTLS for all workloads in a namespace.

Load Balancing, Connection Pools, and Circuit Breaking

  • Load balancing: Round-robin across all healthy pods in a subset.
  • Connection pools: Limits on TCP connections and HTTP pending requests.
  • Outlier detection: After N consecutive errors, a pod is temporarily ejected from the pool.

Example: DestinationRule with Circuit Breaking

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend-dr
  namespace: backend
spec:
  host: backend-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutiveErrors: 3
      interval: 10s
      baseEjectionTime: 30s
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary

This means if a canary version starts throwing errors, Istio automatically reduces its impact by isolating bad instances, without waiting for a rollback.

Observability: Metrics, Dashboards, and Traces

The observability stack is built around Istio’s built-in telemetry.

Components

Tool         Purpose
Prometheus   Scrapes metrics from Envoy sidecars (request counts, errors, latency)
Grafana      Visualizes mesh metrics (success rate, p99 latency per route)
Jaeger       Distributed tracing with high sampling for end-to-end visibility

Deployment

# Deploy observability stack
kubectl apply -f k8s/observability/

# Access Grafana
kubectl port-forward -n istio-system svc/grafana 3000:3000

# Access Jaeger
kubectl port-forward -n istio-system svc/jaeger-query 16686:16686

# Access Prometheus
kubectl port-forward -n istio-system svc/prometheus 9090:9090

Grafana Dashboard

[Image: Grafana dashboard]

Request Rate Per Service

[Image: request rate per service]

Distributed Tracing

[Image: distributed tracing in Jaeger]

Istio provides ready-made metrics and dashboards so you can quickly monitor mesh traffic, latency, and error rates.

Walking Through a Single Request

When a user hits the frontend through the Istio Ingress Gateway, the flow looks like this.

Request Flow

1. User → Istio Ingress Gateway
   ↓ (VirtualService matches host/path)

2. Gateway → Frontend pod (stable or canary based on x-canary header/weight)
   ↓ (Frontend calls backend via Kubernetes DNS)

3. Frontend Envoy → Backend pod (VirtualService applies routing again)
   ↓ (Backend calls ML service)

4. Backend Envoy → ML pod (same routing logic)
   ↓ (ML inference completes)

5. Response flows back through the chain

What Happens at Each Hop

  • mTLS encryption between all services.
  • Metrics emission to Prometheus.
  • Trace spans sent to Jaeger.
  • Circuit breaking and outlier detection enforced by DestinationRules.

Seeing this full path in Jaeger, with timing for each hop, is one of the most useful parts of the setup.

Key Takeaways

Building this lab taught me:

  • A service mesh adds zero-downtime deployments and fine-grained traffic control without code changes.
  • mTLS enforcement is straightforward with Istio and significantly improves security posture.
  • Observability becomes a first-class concern with minimal instrumentation effort.
  • Understanding Istio’s primitives (Gateway, VirtualService, DestinationRule, PeerAuthentication) unlocks powerful traffic patterns.


If this Istio lab setup helped you, consider ⭐ starring the repo or opening an issue/PR with improvements or ideas. Every bit of feedback, bug report, or contribution helps make this a better reference for anyone learning service mesh in the real world.

Swift Testing #2: Grouping Tests with @Suite

2025-12-05 06:10:35

A "Suite" is a group of tests.

When you create a struct containing tests marked with @Test, it automatically shows up with an "S" in the test report, indicating that it is a "Suite". In other words, a struct that groups tests is implicitly a "Suite".

You can also create a "Suite" explicitly with the @Suite attribute; it will then appear in the test report under whatever name you use to describe it.

@Suite("GIVEN some common preconditions")
struct CommonpreconditionsTests {
  @Test
  func specificAction() throws {
  }
}

Suites can be nested to better describe tests that share features, scenarios, and common preconditions.

Here it helps to use BDD's Gherkin notation, which defines several levels and steps for an automated test:

  1. Feature: A high-level description of the functionality under test. It always heads the file, so it would be the first suite level in our Swift Testing tests.
  2. Scenario: A specific use case of the functionality under test. It would be the second suite level, inside the "Feature".
  3. Given: Describes the initial state or preconditions of the system before the scenario.
  4. When: Describes the action or event that triggers the scenario.
  5. Then: Describes the expected result or postconditions of the system after the scenario.
  6. And: A follow-on step.

@Suite("FEATURE: Calculator")
struct CalculatorTests {
  @Suite("SCENARIO: Add two numbers")
  struct AddingTwoNumbers {
    @Test("GIVEN: I have entered 50 in the calculator AND: I have entered 70 in the calculator WHEN: I press add THEN: the result should be 120 on the screen")
    func regularCase() {
      let x = 50
      let y = 70
      let result = x + y
      let expected = 120
      #expect(result == expected)
    }
    @Test("GIVEN: I have entered <First> in the calculator AND: I have entered <Second> into the calculator WHEN: I press add THEN: the result should be <Result> on the Screen")
    func generalization() {
      let x = 60
      let y = 70
      let result = x + y
      let expected = 130
      #expect(result == expected)
    }
  }
}

In the test report it would look like this:

- FEATURE: Calculator
- - SCENARIO: Add two numbers
- - - GIVEN: I have entered 50 in the calculator AND: I have entered 70 in the calculator WHEN: I press add THEN: the result should be 120 on the screen
- - - GIVEN: I have entered <First> in the calculator AND: I have entered <Second> into the calculator WHEN: I press add THEN: the result should be <Result> on the Screen

If some tests share preconditions (i.e. "GIVEN"), you could also group them under another suite. For example:

@Suite("FEATURE: Calculator")
struct CalculatorTests {
  @Suite("SCENARIO: Subtracting two numbers")
  struct SubtractingTwoNumbers {
    @Suite("GIVEN common pre-conditions")
    struct CommonPreconditions {
      @Test("WHEN: I press subtract once THEN: the result should be...")
      func case1() {
        // ...
        #expect(result == expected)
      }
      @Test("WHEN: I press subtract twice THEN: the result should be...")
      func case2() {
        // ...
        #expect(result == expected)
      }
    }
  }
}

The report would then look like this:

- FEATURE: Calculator
- - SCENARIO: Subtracting two numbers
- - - GIVEN: common pre-conditions
- - - - WHEN: I press subtract once THEN: the result should be...
- - - - WHEN: I press subtract twice THEN: the result should be...

You could also group tests by the action to perform (i.e. WHEN). At the end of the day, how you define suites depends on the specific case you are building at any given moment.

Observations

XCTAssert must live inside an XCTestCase to work, so you cannot migrate to Swift Testing simply by moving code that uses XCTAssert.
You can group tests with @Suite and nest suites to improve the structure of the test report. Where possible, use Gherkin (Feature/Scenario) to describe tests better.

References

  • Video "Mastering Swift Testing: Creating Subgroups of Tests with @suite" (Swift and Tips), here.
  • Playlist "Swift Testing" (Swift and Tips), here.
  • Swift Testing documentation, here.

Scalable AI Application Development: Combining Python ML Frameworks with TypeScript-Powered Web Systems

2025-12-05 06:08:07

Introduction

In today’s rapidly evolving development landscape, engineers increasingly combine powerful backend AI frameworks with modern TypeScript-based frontends. This article explores how Python, PyTorch, Transformers, vLLM, and SGLang form a cutting-edge AI stack, while FastAPI, Zustand, and Redux enable fast, reactive web applications. Together, these tools allow you to build scalable, production-grade AI applications end-to-end.

  1. Python + PyTorch: The Core of AI Development
     Python remains the dominant language in machine learning, thanks in large part to PyTorch. PyTorch offers an intuitive, eager-execution framework for building and training neural networks. Its flexibility makes it ideal for research, prototyping, and production environments.

  2. Transformers: The Architecture That Changed Everything
     Transformers revolutionized natural language processing (NLP), powering state-of-the-art models for text generation, classification, retrieval, and more. With libraries such as Hugging Face Transformers, developers can easily access pre-trained models and fine-tune custom solutions.

  3. vLLM & SGLang: High-Performance LLM Serving
     vLLM is a high-performance inference engine optimized for serving large language models efficiently and affordably. Its paged-attention architecture drastically improves throughput and reduces memory overhead. SGLang complements this by providing a lightweight, modular framework for creating fast LLM-powered applications. It focuses on speed, extensibility, and ease of integration with modern AI pipelines.

  4. FastAPI: A Lightning-Fast Python Backend
     FastAPI is the go-to framework for creating high-performance APIs in Python. With type hints, automatic documentation via OpenAPI, and incredible speed (thanks to Starlette and Pydantic), it pairs perfectly with AI workloads. FastAPI makes deploying AI models, from simple inference endpoints to full microservices, clean and efficient.

  5. TypeScript + Modern Frontend State Management
     On the client side, TypeScript ensures maintainability and type safety across large codebases. Zustand offers a minimalistic, unopinionated state management solution suitable for modern React applications. It's especially effective for small-to-medium apps that require simplicity and performance. Redux remains a robust choice for complex state management, where predictable state transitions and debugging tools are essential.

Building a Full Stack AI Application

A modern AI system might look like this:

  1. Python + PyTorch/Transformers to develop and fine-tune your LLM or model.
  2. vLLM or SGLang for serving the model efficiently in production.
  3. FastAPI to expose API endpoints to frontends or external services.
  4. TypeScript + React for building a responsive user interface.
  5. Zustand or Redux for state management across the client application.

This architecture results in a fast, scalable, modern AI application pipeline.

Conclusion

By combining Python’s powerful AI ecosystem with TypeScript’s modern frontend capabilities, developers can craft robust, scalable, and production-ready AI applications. Tools like PyTorch, Transformers, vLLM, SGLang, FastAPI, Zustand, and Redux each fill a unique role—together forming a high-performance tech stack fit for the next generation of intelligent systems.

ReGenNexus Just Learned Some New Tricks (v0.2.6)

2025-12-05 06:05:18

Remember that universal adapter protocol I posted about a few months back? The one that was supposed to kill API glue code forever?

Well, it grew up.

[Image: ReGenNexus architecture diagram]

ReGenNexus v0.2.6 just dropped, and honestly, it's barely recognizable from the first release. Here's what changed:

Your LLM Can Now Use It Directly

Added MCP server support. That means Claude, GPT, or whatever LLM you're running can plug into ReGenNexus as a tool. Your AI agent can now talk to your robot, which talks to your IoT sensors, which talk to your cloud. One protocol.

Robots Joined the Party

ROS 2 integration is live. If you're running anything on Robot Operating System, it can now speak UAP natively. No more translator scripts.

Mesh Networking

Entities can now discover and route to each other dynamically. Drop a new device on the network, it finds its friends automatically.

Security Got Serious

ECDH-384 encryption with certificate-based auth. Your toaster can now have a cryptographically verified identity. (Whether it should is a different question.)

Plug and Play Config

Spent way too much time making setup boring. It mostly just works now.

One Command Install

pip install regennexus

That's it. You're done. Go build something.

Try Before You Clone

We added Google Colab demos so you can poke around without installing anything:

GitHub: https://github.com/ReGenNow/ReGenNexus
PyPI: https://pypi.org/project/regennexus/

Still MIT licensed. Still looking for contributors. Still hoping someone builds something weird with it.

What would you connect first?