2025-12-05 06:15:43
Running tests concurrently in XCTest is optional; in fact, by default they run serially. In Swift Testing, on the other hand, execution is parallel by default.
Although it is not ideal, sometimes the test harness has to run serially. Most likely this is because the code under test has some static state shared across the tests.
Consider the following class, State:
class State {
    private static var numbers = [1, 2, 3]

    func add(number: Int) {
        State.numbers.append(number)
    }

    func add(numbers: [Int]) {
        State.numbers.append(contentsOf: numbers)
    }

    func removeLastNumber() -> Int? {
        State.numbers.popLast()
    }

    var count: Int {
        State.numbers.count
    }
}
The following tests will most likely fail:
import Testing

struct StaticTests {
    let state = State()

    @Test
    func test1() {
        state.add(numbers: [4, 5])
        #expect(state.count == 5)
        #expect(state.removeLastNumber() == 5)
    }

    @Test
    func test2() {
        state.add(numbers: [6, 7])
        #expect(state.count == 6)
        #expect(state.removeLastNumber() == 7)
    }
}
To make them pass, you need to configure the test Suite explicitly, specifying that it must run serially by means of the .serialized trait.
For example, the following tests run sequentially:
import Testing

@Suite(.serialized) // Explicitly define the Suite
struct StaticTests {
    let state = State()

    @Test
    func test1() {
        state.add(numbers: [4, 5])
        #expect(state.count == 5)
        #expect(state.removeLastNumber() == 5)
    }

    @Test
    func test2() {
        state.add(numbers: [6, 7])
        #expect(state.count == 6)
        #expect(state.removeLastNumber() == 7)
    }
}
2025-12-05 06:12:16
This project started as a personal lab to really understand what a service mesh does beyond the buzzwords. Instead of using sample apps, the goal was to take a real 3-tier app (Next.js frontend, Go backend, Flask ML service) and see how Istio changes the way traffic, security, and observability work in practice.
The idea was simple: if this setup feels like something that could ship to production one day, then the learning will stick and this repo becomes a living reference for future work.
3RVision is split into three logical services (frontend, backend, and ml), each running in its own Kubernetes namespace.
The frontend talks to the backend, and the backend calls the ML service for model inference, exactly the kind of hop-by-hop traffic that benefits from a service mesh.
Each service has two deployment variants:
- stable version (production)
- canary version (testing new features)
This is where Istio’s traffic management features come into play.
To avoid dealing with cloud accounts, Terraform provisions a local Kind cluster with one control-plane node and two workers:
# Provision the cluster
terraform init
terraform apply
# Create namespaces
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace ml
# Enable Istio sidecar injection
kubectl label namespace frontend istio-injection=enabled
kubectl label namespace backend istio-injection=enabled
kubectl label namespace ml istio-injection=enabled
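To double-check that injection is actually enabled before deploying anything, listing the namespace labels is a quick sanity check (an illustrative command, not part of the repo's setup scripts):
# Show the istio-injection label on each application namespace
kubectl get namespace frontend backend ml -L istio-injection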
This gives you a clean separation: each service lives in its own namespace with sidecar injection enabled.
Istio works by injecting an Envoy sidecar proxy next to each application container. All inbound and outbound traffic flows through this sidecar, which means you can add routing, retries, mTLS, and telemetry without changing application code.
┌─────────────────────┐
│ User/Client │
└──────────┬──────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────┐
│ KIND KUBERNETES CLUSTER │
│ (Terraform Provisioned) │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ Control Plane │ │ Worker #1 │ │ Worker #2 │ │
│ └────────────────┘ └────────────────┘ └────────────────┘ │
├──────────────────────────────────────────────────────────────────────────┤
│ ISTIO SERVICE MESH │
│ │
│ Gateway ──────► VirtualService ──────► DestinationRule │
│ (Ingress) (Routing) (mTLS + Load Balancing) │
├──────────────────────────────────────────────────────────────────────────┤
│ MICROSERVICES │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ FRONTEND │ │ BACKEND │ │ ML MODEL │ │
│ │ (Next.js) │───►│ (Go) │───►│ (Flask) │ │
│ │ Port: 3000 │ │ Port: 8080 │ │ Port: 5001 │ │
│ │ │ │ │ │ │ │
│ │ stable/canary│ │ stable/canary│ │ stable/canary│ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
├──────────────────────────────────────────────────────────────────────────┤
│ OBSERVABILITY STACK │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Prometheus │ │ Jaeger │ │ Grafana │ │
│ │ (Metrics) │ │ (Tracing) │ │ (Dashboards) │ │
│ │ Port: 9090 │ │ Port: 16686 │ │ Port: 3000 │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
At the edge, an Istio Ingress Gateway receives external requests, applies routing rules defined by VirtualServices, and forwards traffic deeper into the mesh.
The main Istio building blocks used in this project are:
| Resource | Purpose |
|---|---|
| Gateway | Exposes services to external traffic on specific ports |
| VirtualService | Defines how requests are routed (by header, weight, path) |
| DestinationRule | Defines policies for traffic (subsets, load balancing, connection pools) |
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend-vs
  namespace: frontend
spec:
  hosts:
  - "frontend.local"
  gateways:
  - frontend-gateway
  http:
  - match:
    - headers:
        x-canary:
          exact: "true"
    route:
    - destination:
        host: frontend-service
        subset: canary
      weight: 100
  - route:
    - destination:
        host: frontend-service
        subset: stable
      weight: 90
    - destination:
        host: frontend-service
        subset: canary
      weight: 10
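For completeness, the frontend-gateway referenced above has to exist as a Gateway resource bound to the Istio ingress gateway. A minimal version could look like this (the port and host here are assumptions for the sketch, not copied from the repo):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: frontend-gateway
  namespace: frontend
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "frontend.local"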
Each service (frontend, backend, ml) carries a version label with two values: stable and canary.
The canary strategy is intentionally simple but powerful:
- x-canary: true header → 100% of traffic goes to the canary version.
- no header → a weighted split: 90% stable, 10% canary.
This pattern makes it easy to exercise both paths from the command line:
# Send request to canary version
curl -H "x-canary: true" http://frontend.local
# Send request with default routing (weight-based)
curl http://frontend.local
Because the same pattern is applied to all three services (frontend, backend, ML), a single user journey can be fully on canary or fully on stable.
To move toward a zero-trust model, each namespace has a PeerAuthentication resource that sets mTLS mode to STRICT.
Services only accept encrypted traffic from other sidecars in the mesh; plain HTTP between pods is rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: backend
spec:
  mtls:
    mode: STRICT
The Istio documentation shows similar namespace-level policies that enforce strict mTLS for all workloads in a namespace.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend-dr
  namespace: backend
spec:
  host: backend-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutiveErrors: 3
      interval: 10s
      baseEjectionTime: 30s
  subsets:
  - name: stable
    labels:
      version: stable
  - name: canary
    labels:
      version: canary
This means if a canary version starts throwing errors, Istio automatically reduces its impact by isolating bad instances, without waiting for a rollback.
The observability stack is built around Istio’s built-in telemetry.
| Tool | Purpose |
|---|---|
| Prometheus | Scrapes metrics from Envoy sidecars (request counts, errors, latency) |
| Grafana | Visualizes mesh metrics (success rate, p99 latency per route) |
| Jaeger | Distributed tracing with high sampling for end-to-end visibility |
# Deploy observability stack
kubectl apply -f k8s/observability/
# Access Grafana
kubectl port-forward -n istio-system svc/grafana 3000:3000
# Access Jaeger
kubectl port-forward -n istio-system svc/jaeger-query 16686:16686
# Access Prometheus
kubectl port-forward -n istio-system svc/prometheus 9090:9090
(Screenshots: Grafana dashboard, request rate per service, and distributed tracing in Jaeger.)
Istio provides ready-made metrics and dashboards so you can quickly monitor mesh traffic, latency, and error rates.
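As a concrete illustration (my own query, not one of the repo's dashboards), a per-service request-rate panel can be built from Istio's standard istio_requests_total metric:
# Requests per second over the last 5 minutes, grouped by destination service,
# as reported by the receiving (destination-side) Envoy sidecar
sum(rate(istio_requests_total{reporter="destination"}[5m])) by (destination_service)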
When a user hits the frontend through the Istio Ingress Gateway, the flow looks like this.
1. User → Istio Ingress Gateway
↓ (VirtualService matches host/path)
2. Gateway → Frontend pod (stable or canary based on x-canary header/weight)
↓ (Frontend calls backend via Kubernetes DNS)
3. Frontend Envoy → Backend pod (VirtualService applies routing again)
↓ (Backend calls ML service)
4. Backend Envoy → ML pod (same routing logic)
↓ (ML inference completes)
5. Response flows back through the chain
Seeing this full path in Jaeger, with timing for each hop, is one of the most useful parts of the setup.
Building this lab taught me:
For deeper reading, check out:
If this Istio lab setup helped you, consider ⭐ starring the repo or opening an issue/PR with improvements or ideas. Every bit of feedback, bug report, or contribution helps make this a better reference for anyone learning service mesh in the real world.
2025-12-05 06:10:35
A "Suite" is a group of tests.
When you create a struct that contains tests marked with @Test, it automatically gets marked with an "S" in the test report, indicating that it is a "Suite". In other words, a struct that groups tests is implicitly a "Suite".
You can create a "Suite" explicitly with the @Suite attribute, and it will then appear in the test report under the name you use to describe it.
@Suite("GIVEN some common preconditions")
struct CommonpreconditionsTests {
    @Test
    func specificAction() throws {
    }
}
"Suites" can be nested to better describe tests that share features, scenarios, and common preconditions.
Here it helps to use BDD's Gherkin notation, which defines several levels and steps for describing an automated test:
@Suite("FEATURE: Calculator")
struct CalculatorTests {
    @Suite("SCENARIO: Add two numbers")
    struct AddingTwoNumbers {
        @Test("GIVEN: I have entered 50 in the calculator AND: I have entered 70 in the calculator WHEN: I press add THEN: the result should be 120 on the screen")
        func regularCase() {
            let x = 50
            let y = 70
            let result = x + y
            let expected = 120
            #expect(result == expected)
        }

        @Test("GIVEN: I have entered <First> in the calculator AND: I have entered <Second> into the calculator WHEN: I press add THEN: the result should be <Result> on the Screen")
        func generalization() {
            let x = 60
            let y = 70
            let result = x + y
            let expected = 130
            #expect(result == expected)
        }
    }
}
In the test report it would look like this:
- FEATURE: Calculator
- - SCENARIO: Add two numbers
- - - GIVEN: I have entered 50 in the calculator AND: I have entered 70 in the calculator WHEN: I press add THEN: the result should be 120 on the screen
- - - GIVEN: I have entered <First> in the calculator AND: I have entered <Second> into the calculator WHEN: I press add THEN: the result should be <Result> on the Screen
If some tests share preconditions (i.e. the "GIVEN"), you could also consider grouping them in another Suite. For example:
@Suite("FEATURE: Calculator")
struct CalculatorTests {
    @Suite("SCENARIO: Subtracting two numbers")
    struct SubtractingTwoNumbers {
        @Suite("GIVEN common pre-conditions")
        struct CommonPreconditions {
            @Test("WHEN: I press subtract once THEN: the result should be...")
            func case1() {
                // ...
                #expect(result == expected)
            }

            @Test("WHEN: I press subtract twice THEN: the result should be...")
            func case2() {
                // ...
                #expect(result == expected)
            }
        }
    }
}
The report would then look like this:
- FEATURE: Calculator
- - SCENARIO: Subtracting two numbers
- - - GIVEN common pre-conditions
- - - - WHEN: I press subtract once THEN: the result should be...
- - - - WHEN: I press subtract twice THEN: the result should be...
You could also group tests by the action performed (i.e. the WHEN). At the end of the day, how you define Suites depends on the specific case you are building at any given moment.
XCTAssert needs to live inside an XCTestCase to work; you cannot migrate to Swift Testing simply by moving over the code that uses XCTAssert.
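A minimal sketch of the difference (my own illustrative example, not from the original post):
// XCTest: the assertion must live inside an XCTestCase subclass.
import XCTest

final class CalculatorXCTests: XCTestCase {
    func testAddition() {
        XCTAssertEqual(50 + 70, 120)
    }
}

// Swift Testing: a free function (or a struct member) marked @Test is enough.
import Testing

@Test
func addition() {
    #expect(50 + 70 == 120)
}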
You can group tests with @Suite and nest them to improve the structure of the test report. Whenever possible, use Gherkin (Feature/Scenario) to describe the tests better.
2025-12-05 06:08:07
In today’s rapidly evolving development landscape, engineers increasingly combine powerful backend AI frameworks with modern TypeScript-based frontends. This article explores how Python, PyTorch, Transformers, vLLM, and SGLang form a cutting-edge AI stack, while FastAPI, Zustand, and Redux enable fast, reactive web applications. Together, these tools allow you to build scalable, production-grade AI applications end-to-end.
A modern AI system might look like this: a PyTorch/Transformers model served by an optimized inference engine such as vLLM or SGLang, exposed through a FastAPI service, and consumed by a TypeScript frontend whose client state lives in Zustand or Redux.
This architecture results in a fast, scalable, modern AI application pipeline.
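To make that concrete, here is a minimal sketch of the API layer (my own example, with a placeholder model name and route): a FastAPI endpoint that runs a Transformers text-generation pipeline and returns JSON that the TypeScript frontend can keep in Zustand or Redux.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Placeholder model; a production setup would typically delegate to vLLM or SGLang.
generator = pipeline("text-generation", model="gpt2")

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 50

@app.post("/generate")
def generate(prompt: Prompt):
    # Run inference and return the completion for the frontend to render.
    result = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": result[0]["generated_text"]}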
By combining Python’s powerful AI ecosystem with TypeScript’s modern frontend capabilities, developers can craft robust, scalable, and production-ready AI applications. Tools like PyTorch, Transformers, vLLM, SGLang, FastAPI, Zustand, and Redux each fill a unique role—together forming a high-performance tech stack fit for the next generation of intelligent systems.
2025-12-05 06:05:18
Remember that universal adapter protocol I posted about a few months back? The one that was supposed to kill API glue code forever?
Well, it grew up.
ReGenNexus v0.2.6 just dropped, and honestly, it's barely recognizable from the first release. Here's what changed:
Added MCP server support. That means Claude, GPT, or whatever LLM you're running can plug into ReGenNexus as a tool. Your AI agent can now talk to your robot, which talks to your IoT sensors, which talk to your cloud. One protocol.
ROS 2 integration is live. If you're running anything on Robot Operating System, it can now speak UAP natively. No more translator scripts.
Entities can now discover and route to each other dynamically. Drop a new device on the network, it finds its friends automatically.
ECDH-384 encryption with certificate-based auth. Your toaster can now have a cryptographically verified identity. (Whether it should is a different question.)
Spent way too much time making setup boring. It mostly just works now.
pip install regennexus
That's it. You're done. Go build something.
We added Google Colab demos so you can poke around without installing anything:
GitHub: https://github.com/ReGenNow/ReGenNexus
PyPI: https://pypi.org/project/regennexus/
Still MIT licensed. Still looking for contributors. Still hoping someone builds something weird with it.
What would you connect first?