The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

How to build Modern Web Apps: Laravel 12 + InertiaJS + ReactJS

2025-12-13 15:49:00

In the modern world, people expect fast, innovative solutions. When it comes to web application development, some apps are built as SPAs (Single-Page Applications) while others are MPAs (Multi-Page Applications). SPAs are typically built with client-side frameworks like React, Vue, and Svelte, while MPAs are rendered server-side with languages like PHP and Java.

What if you need to combine these approaches to build an app quickly? This might seem overwhelming. However, with the new features of the Laravel 12 framework and InertiaJS, it's now very easy to integrate React, Vue, and most other client-side frameworks.

This tutorial will teach you how to set up a Laravel app with InertiaJS and ReactJS for SPA or MPA web applications.

Installation Prerequisites

  • PHP 8.3 or above
  • Laravel installer (or Composer for Laravel installation)
  • Latest NodeJS version

Step 1: Create a Laravel Project

laravel new dineshstack.blog

During the installation, you'll be prompted with several options. Here's what to choose:

Choose React for the Frontend
Select the React option when prompted to set up the Laravel, Inertia, and React stack.

Select Authentication System
Choose Laravel's built-in authentication for the default React starter kit.

Pick Testing Framework
Select either PHPUnit or Pest as your testing framework. After this selection, Laravel will install all the required dependencies for your application.

Install Frontend Dependencies
When asked whether to install npm dependencies, choose yes. This will install React and the other frontend packages.

Step 2: Verify Installation and Run the Application
If you've completed all steps correctly, you now have a fully installed application. Navigate to your project directory:

cd dineshstack.blog/

Start the development server:

php artisan serve

Your application will be available at:
http://127.0.0.1:8000

🎉 And You're Done!
You now have a fully functional Laravel application with InertiaJS and ReactJS integration. This setup provides you with:

A powerful Laravel backend

React frontend with InertiaJS for seamless integration

Built-in authentication system

Modern development tooling

You can start building amazing web applications with this powerful stack. The combination gives you the best of both worlds: Laravel's robust backend capabilities with React's dynamic frontend experience.

Next Steps:
Explore the generated authentication pages

Check out the React components in the resources/js/Pages directory

Review the Laravel routes that connect to your React components

Start building your custom features!

Happy coding! 🚀

Enjoyed this tutorial? Check out more Laravel content on Dinesh Stack and follow me for more web development tips!

Angular 20 to 21 Upgrade: The Practical Survival Guide

2025-12-13 15:48:14

A clear and concise guide to upgrading from Angular 20 to 21, covering the essentials: the full removal of Karma, the new default Zoneless mode, automatic HttpClient, and how to fix your breaking builds.

If you thought Angular 20 was a big shift, welcome to Angular 21.

While version 20 was about stabilizing Signals, version 21 is about removing the old guard. The "Angular Way" has fundamentally changed: zone.js is optional, Karma is dead, and RxJS is slowly retreating to the edges.

This isn't just an update; it's a new ecosystem. Here is what is going to break and how to fix it.

🚨 The "Stop Everything" Breaking Changes

Before you run ng update, be aware that your build will likely fail if you rely on these legacy patterns.

1. The Karma Extinction Event (Vitest is Default)

The most immediate shock for many teams will be ng test. Angular 21 has officially swapped Karma for Vitest as the default test runner.

What breaks: If you have a custom karma.conf.js or rely on specific Karma plugins/reporters, your test suite is now legacy code.

The Fix:

  • New Projects: You get Vitest out of the box. It's faster, cleaner, and uses Vite.
  • Existing Projects: You aren't forced to switch immediately, but the writing is on the wall. The CLI will nag you.
  • Migration: Run the schematic ng generate @angular/core:karma-to-vitest to attempt an auto-migration. It's remarkably good at converting standard configs, but custom Webpack hacks in your test setup will need manual rewriting for Vite.

2. HttpClient is "Just There"

Remember adding provideHttpClient() to your app.config.ts or importing HttpClientModule?

The Change: HttpClient is now injected by default in the root injector.

What breaks:

  • If you have tests that mock HttpClient by expecting it not to be there, they might fail.
  • If you rely on HttpClientModule for complex interceptor ordering in a mixed NgModule/Standalone app, you might see subtle behavior changes.

The Fix: Remove explicit provideHttpClient() calls unless you are passing configuration options (like withInterceptors or withFetch). It cleans up your config, but check your interceptor execution order.

3. zone.js is Gone (For New Apps)

New apps generated with ng new will exclude zone.js by default.

What breaks: Nothing for existing apps (yet). Your polyfills.ts will keep importing Zone.

The Warning: If you copy-paste code from a new v21 tutorial into your existing v20 app, it might assume Zoneless behavior (using ChangeDetectorRef less often, relying on Signals). If you mix the two paradigms without understanding them, you'll get "changed after checked" errors or views that don't update.

✨ The New Toys: Features You'll Actually Use

Once you fix the build, v21 offers some incredible DX improvements.

1. Signal Forms (Experimental but Stable)

This is the feature we've been waiting for. No more valueChanges.pipe(...) spaghetti.

import { form, field } from '@angular/forms/signals';
import { Validators } from '@angular/forms';

// Define a reactive form model
const loginForm = form({
  email: field('', [Validators.required, Validators.email]),
  password: field('', [Validators.required])
});

// Access values directly as signals!
console.log(loginForm.value().email); 

Why use it: It's type-safe by default and doesn't require RxJS mastery.

Status: Experimental. Use it for new features, but maybe don't rewrite your checkout flow just yet.

2. Angular Aria (Developer Preview)

A new library of headless primitives for accessibility.

Instead of fighting with aria-expanded and role="button", you use directives that handle the a11y logic while you handle the CSS.

<!-- Handles keyboard nav, focus, and ARIA roles automatically -->
<div ariaMenu>
  <button ariaMenuItem>Option 1</button>
  <button ariaMenuItem>Option 2</button>
</div>

3. Regex in Templates

Small but mighty. You can finally use regex literals in templates, perfect for @if logic without creating a helper function.

@if (email() | match: /@company\.com$/) {
  <span class="badge">Employee</span>
}

🛠️ The Upgrade Checklist

Ready to jump? Follow this order to minimize pain.

  1. Backup: Commit everything. Seriously.
  2. Update the Global CLI:
    Updating Angular generally involves two parts: the global CLI and the local project dependencies. Ensure your global CLI is up to date first (you might need sudo or Administrator privileges).

    # Optional: Uninstall the old global version first to avoid conflicts
    npm uninstall -g @angular/cli
    
    # Verify the npm cache
    npm cache verify
    
    # Install the latest global CLI version
    npm install -g @angular/cli@latest
    
  3. Update Local Project:
    Now update your local project dependencies:

    ng update @angular/cli@21 @angular/core@21
    
  4. Run the Diagnostics:
    Angular 21 includes smarter diagnostics. Pay attention to warnings about ngClass (soft deprecated in favor of [class.my-class]) and standalone migration opportunities.

  5. Check Your Tests:
    Run ng test. If it explodes, decide:

    • Path A: Keep Karma (add @angular/build:karma manually if removed).
    • Path B: Migrate to Vitest (Recommended).
  6. Optional: Go Zoneless:
    If you're feeling brave, run the experimental migration:

    ng generate @angular/core:zoneless-migration
    

This is "Agentic" territory. See our MCP Guide for how to let AI handle this complex refactor.

Summary

Angular 21 is the "clean slate" release. It sheds the weight of the last decade (Zone, Karma, Modules) to compete with modern frameworks like Svelte and Solid.

The upgrade might be bumpy due to the testing changes, but the destination—a faster, simpler, signal-driven framework—is absolutely worth it.

Angular 21 & MCP: The End of "Manual" Migrations?

2025-12-13 15:48:04

Angular 21 introduces the Model Context Protocol (MCP) server. Learn how to connect your AI editor directly to the Angular CLI to automate upgrades, refactoring, and architectural shifts like Zoneless.

We used to treat AI like a smart stranger. We would copy-paste error messages or file contents into ChatGPT, hoping it understood our project's architecture. It was helpful, but it was blind.

With Angular 21, that stranger has moved into your house.

The release of the Angular CLI MCP Server (ng mcp) marks a fundamental shift in how we maintain applications. It isn't just a new command; it's a protocol that allows AI agents (like Cursor, Windsurf, or VS Code Copilot) to "interview" your project, understand your specific constraints, and run migrations that actually compile.

Here is why the "manual migration" era might be ending and how to survive the new Agentic Workflow.

What is MCP? (The "USB Port" for AI)

The Model Context Protocol (MCP) is an open standard that lets AI models connect to local tools and data.

Think of it this way:

Before MCP: You paste angular.json into the chat so the AI knows your file structure.

After MCP: The AI simply asks the Angular CLI, "Hey, list all the projects in this workspace," and the CLI responds with the exact structure, build targets, and library dependencies.

In Angular 21, the CLI is an MCP server. It exposes "tools" that your AI editor can call directly.

Setting It Up: 5 Minutes to "Agentic" Angular

The setup is surprisingly trivial because the Angular team baked it directly into the CLI.

1. Initialize the Server

In your Angular 21 workspace terminal:

ng mcp

This command doesn't start a daemon; it generates the configuration snippets you need for your specific IDE.

2. Connect Your Editor (e.g., Cursor)

If you are using Cursor (which you probably should be if you're interested in MCP), create or edit .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "angular-cli": {
      "command": "npx",
      "args": ["-y", "@angular/cli", "mcp"]
    }
  }
}

[!NOTE]
The -y flag is crucial to prevent the "Press y to install" prompt from hanging the background process.

The "Killer App": Context-Aware Migrations

Why go through this trouble? Because Context-Aware Migrations blow standard schematics out of the water.

Traditional ng update scripts are rigid. They follow a strict "If A, then B" logic. If your architecture is weird (and let's be honest, every enterprise architecture is weird), the script breaks or produces code you have to rewrite.

The MCP Server exposes tools that change this dynamic:

Tool 1: get_best_practices

The AI can fetch the current Angular team recommendations. It won't hallucinate that you should use SharedModule in 2025 because it "read it on a blog from 2021." It asks the CLI for the ground truth.

Tool 2: onpush_zoneless_migration

This is the big one for v21. Instead of blindly changing ChangeDetectionStrategy.Default to OnPush, the AI uses this tool to analyze your dependency graph.

The Workflow:

You: "Hey, I want to migrate user-profile.ts to Zoneless. Check if it's safe."

AI (Internal Thought): I need to check the component style. I'll call list_projects to find the root, then read the file.

AI (Internal Thought): I see an Observable subscription in ngOnInit. I'll check get_best_practices for handling async in Zoneless.

AI (Action): "I detected an unmanaged subscription. In Zoneless, this won't trigger a render. I recommend converting this user$ observable to a Signal using toSignal() before we switch the strategy."

It doesn't just apply a fix; it negotiates the refactor with you based on the framework's internal logic.

Tool 3: search_documentation

The AI doesn't need to guess API signatures. It queries the local offline documentation index provided by the MCP server.

Real-World Scenario: The "Legacy" Cleanup

Let's say you have an Angular 17-style component using HttpClientModule (deprecated approach) and RxJS for simple state.

Prompt to Cursor (with MCP active):

"Refactor dashboard.component.ts to align with Angular 21 best practices. Use the get_best_practices tool to verify your plan first."

What happens:

  1. The AI calls get_best_practices and learns that standalone: true, inject(), and Signals are the standard.
  2. It calls modernize (an experimental tool in v21) to run the standard schematics.
  3. It manually cleans up the leftovers—converting constructor(private http: HttpClient) to private http = inject(HttpClient).
  4. It converts your BehaviorSubject state to signal.

The result is code that looks like it was written in v21, not just patched to run in v21.

The Future: Continuous Maintenance

This release signals a change in how Google views the CLI. It's no longer just a build tool; it's an interface for agents.

In the near future, we likely won't "stop development" to upgrade. We will have a background agent running via MCP that opens PRs:

"I noticed you used a ControlValueAccessor here. I've created a PR to refactor this to the new Signal Forms input API."

"Angular v22 just dropped. I've updated your angular.json and verified your tests via vitest."

Summary: Don't Upgrade Alone

If you are moving to Angular 21, do not just run ng update and fight the compile errors manually.

  1. Install the MCP server.
  2. Let the AI map your project.
  3. Ask it to plan the migration for you.

The code is still your responsibility, but the grunt work? That belongs to the machine now.


TPU: Why Google Doesn’t Wait in Line for NVIDIA GPUs (2/2)

2025-12-13 15:39:35

Continued from: https://dev.to/jiminlee/tpu-why-google-doesnt-wait-in-line-for-nvidia-gpus-12-2a2n

3. "Close Enough" is Good Enough (bfloat16)

Traditional scientific computing uses FP64 (double precision) or FP32 (single precision). These formats are incredibly accurate.

But Deep Learning isn't rocket trajectory physics. It doesn't matter if the probability of an image being a cat is 99.123456% or 99.12%.

Google leveraged this to create bfloat16 (Brain Floating Point).

  • It uses 16 bits (like FP16).

  • But it keeps the wide dynamic range of FP32.

FP16 can crash training because it can't handle very tiny or very huge numbers (range: ~6e-5 to 6e4).

bfloat16 sacrifices precision (how many decimal places it can resolve) to keep FP32's dynamic range (~1e-38 to ~3.4e38).

In AI, being able to represent a tiny number (0.00000001) is more important than knowing exactly what the 10th decimal digit is. This format was so successful that NVIDIA adopted it for their A100 and H100 GPUs.
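The two ranges quoted above fall directly out of the bit layouts. Here's a minimal pure-Python sketch (no ML libraries needed) — the exponent/mantissa widths are the standard definitions of the formats, not anything specific to TPUs:

```python
# FP16:     1 sign bit, 5 exponent bits, 10 mantissa bits
# bfloat16: 1 sign bit, 8 exponent bits, 7 mantissa bits (same exponent as FP32)

def max_normal(exp_bits: int, mant_bits: int) -> float:
    """Largest finite normal value for an IEEE-style float format."""
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 2) - bias       # all-ones exponent means inf/NaN
    return (2 - 2 ** -mant_bits) * 2.0 ** max_exp

def min_normal(exp_bits: int, mant_bits: int) -> float:
    """Smallest positive normal value (mantissa width doesn't affect it)."""
    bias = 2 ** (exp_bits - 1) - 1
    return 2.0 ** (1 - bias)

fp16_max = max_normal(5, 10)   # 65504.0 -- the "~6e4" ceiling that crashes training
bf16_max = max_normal(8, 7)    # ~3.39e38 -- same order of magnitude as FP32

print(f"FP16     range: {min_normal(5, 10):.1e} .. {fp16_max:.1e}")
print(f"bfloat16 range: {min_normal(8, 7):.1e} .. {bf16_max:.1e}")
```

With only 7 mantissa bits, bfloat16 resolves far fewer significant digits than FP16 — exactly the trade the article describes: range over precision.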

4. TPU Pod: One Chip Is Not Enough

While a single TPU chip is effective at matrix multiplication, it is nowhere near powerful enough to run today's massive Deep Learning models. To solve this, Google decided to bundle multiple TPUs together. They call this super-cluster a TPU Pod.

The hierarchy works like this: You bundle TPU Chips to make a TPU Board, stack Boards to make a TPU Rack, and line up Racks to form a TPU Pod. When you tie 4,096 TPU chips together into a single Pod, you can trick the software into thinking it’s working with one single, massively powerful chip that is 4,096 times faster.

4.1 Connecting the Chips: Inter-Chip Interconnect (ICI)

Usually, when computers talk to each other, they use Ethernet cables—the same standard used for the internet. However, for AI training, chips need to exchange data constantly and instantly. Ethernet is simply too slow.

TPU Pods use a dedicated connection method called ICI (Inter-Chip Interconnect). This allows data to bypass the CPU entirely and zip between TPU chips at incredible speeds.

Google connects these TPUs in a 3D Torus topology—essentially a 3D donut shape.

From: https://www.nextplatform.com/2018/05/10/tearing-apart-googles-tpu-3-0-ai-coprocessor/

Thanks to this "donut" structure, the chip at the very far right edge is directly connected to the chip at the very far left edge. Data can travel to the most distant chip in the cluster in just a few hops.
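The "few hops" claim is easy to verify with a toy calculation. Along each axis, a torus's wraparound link cuts the worst-case distance roughly in half compared to a plain mesh (the 16-chip axis size below is hypothetical, chosen only for illustration):

```python
# Hop count between two chips along one axis: mesh vs. torus.

def mesh_hops(a: int, b: int, size: int) -> int:
    # A mesh has no wraparound: you must walk the whole distance.
    return abs(a - b)

def torus_hops(a: int, b: int, size: int) -> int:
    # The wraparound edge offers a shortcut around the "donut".
    d = abs(a - b)
    return min(d, size - d)

size = 16  # chips along one axis (illustrative)
print(mesh_hops(0, 15, size))   # 15 hops edge-to-edge on a mesh
print(torus_hops(0, 15, size))  # 1 hop via the wraparound link

# In a 3D torus the worst case is the sum over three axes:
worst_mesh = 3 * (size - 1)    # 45 hops
worst_torus = 3 * (size // 2)  # 24 hops
print(worst_mesh, worst_torus)
```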

4.2 Using Light Instead of Electricity: Optical Circuit Switch (OCS)

With the TPU v4 Pod, Google introduced a truly ingenious piece of technology: the OCS (Optical Circuit Switch).

In traditional systems, data transmission involves a conversion chain: "Light signal -> Convert to Electricity -> Calculation/Switching -> Convert back to Light."

But Google engineers thought: Why bother converting it to electricity? Can’t we just send the light directly? Their answer was mirrors. Google decided to bounce the light signals carrying data off mirrors to send them where they needed to go.

Inside the Pod, they installed MEMS (Micro-Electro-Mechanical Systems) mirrors. These are microscopic machines that can physically tilt and move using electrical signals. Google uses MEMS to adjust these tiny mirrors, reflecting the data-carrying light beams in the exact direction they need to go.

This approach offers two massive advantages:

  • Speed: Because there is no "Light -> Electricity -> Light" conversion process, data flies at the speed of light with almost zero latency.

  • Resiliency: Let's say 50 out of the 4,096 TPUs in a Pod fail. In a traditional setup, you might have to physically rewire the rack to bypass the broken chips. With OCS, you simply change the angle of the mirrors. The light bypasses the broken chips and finds a new path instantly.

4.3 An Aquarium-Like Cooling System

Being able to bundle thousands of TPU chips is great, but it brings an unavoidable problem: Heat. These chips generate a level of heat that traditional air conditioning fans simply cannot handle.

Google solves this by running pipes full of coolant directly on top of the chips. This is called Direct-to-Chip Liquid Cooling.

While NVIDIA has recently made headlines for adopting liquid cooling with the H100, Google has been doing this for years. They have effectively been turning their data centers into massive aquariums to keep these beasts cool.

5. The Software: JAX and XLA

Hardware is a paperweight without software.

TensorFlow used to be the king, but PyTorch stole the crown for ease of use. Google’s counter-punch is JAX. It feels like NumPy (easy Python) but runs on accelerators.

The magic bridge between Python and the TPU is XLA (Accelerated Linear Algebra).

| Feature | JAX (Frontend) | XLA (Backend) |
| --- | --- | --- |
| Role | User interface | The compiler engine |
| What it does | Auto-differentiation (`grad`), vectorization (`vmap`) | Graph optimization, memory management |
| Input | Python code | Intermediate Representation (HLO) |
| Output | Computation graph | Binary code for TPU/GPU |
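
To make the split concrete, here is a toy NumPy mimic of JAX's two headline transforms. The real `jax.grad` and `jax.vmap` trace your function and hand a computation graph to XLA; the finite-difference and loop versions below only illustrate the API shape:

```python
import numpy as np

def grad(f, eps=1e-6):
    """Return df/dx via central finite differences (scalar in, scalar out).
    JAX computes this exactly via autodiff; this is a numeric stand-in."""
    return lambda x: (f(x + eps) - f(x - eps)) / (2 * eps)

def vmap(f):
    """Map f over the leading axis of its input, like jax.vmap
    (JAX does this without a Python loop, inside the compiled kernel)."""
    return lambda xs: np.array([f(x) for x in xs])

f = lambda x: x ** 2 + 3 * x          # f(x) = x^2 + 3x
df = grad(f)                          # df/dx = 2x + 3
print(round(df(2.0), 3))              # ~7.0

batched = vmap(f)
print(batched(np.array([0.0, 1.0, 2.0])))  # [0., 4., 10.]
```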

Why is it fast? Kernel Fusion.

Without XLA, calculating `a * b + c` takes multiple round trips to memory: read `a` and `b`, write the intermediate `a * b` back out, then read it again along with `c` to produce the final sum.

XLA sees the whole equation and fuses it into a single operation kernel. It keeps the data in the registers and does the multiply-and-add in one shot, perfectly utilizing that Systolic Array we talked about earlier.
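
A sketch of the difference in plain NumPy. The "fused" loop is slow Python, of course; the point is that it touches memory once per element, which is what an XLA-generated fused kernel does in hardware:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([5.0, 6.0, 7.0, 8.0], dtype=np.float32)
c = np.array([0.5, 0.5, 0.5, 0.5], dtype=np.float32)

# Unfused: two separate kernels with a full intermediate array in between.
tmp = a * b               # kernel 1: read a, b -> write tmp to memory
unfused = tmp + c         # kernel 2: read tmp, c -> write result

# Fused (what XLA emits): a single kernel that, per element, does the
# multiply and the add back-to-back while the value is still in a register.
fused = np.empty_like(a)
for i in range(len(a)):
    fused[i] = a[i] * b[i] + c[i]   # one read of a, b, c; one write

print(np.allclose(unfused, fused))  # True
```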

6. The 7th Generation TPU: Ironwood

In 2025, Google unveiled Ironwood, its 7th generation TPU. The design philosophy behind this chip is clear: capture both LLM Inference efficiency (running the models cheaply) and Large-scale Training (building the models quickly) at the same time.

Let’s break down the key features of Ironwood.

1) Overwhelming Compute Power and Native FP8

TPU v7 is the first TPU to support FP8 (8-bit floating point) operations natively. It delivers a staggering 4,614 TFLOPS of compute power (in FP8). To put that in perspective, that is approximately 10 times the performance of the TPU v5p and more than 4 times that of the previous v6e (Trillium).
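
You can get a feel for what FP8 throws away with a toy rounding function. This ignores exponent range, subnormals, and special values; it only models how many significant bits each format keeps:

```python
import math

def quantize(x, sig_bits):
    """Round x to sig_bits significant binary digits, mimicking what a
    low-precision format keeps (exponent range ignored)."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** sig_bits
    return (round(m * scale) / scale) * (2 ** e)

pi = 3.14159265
print(quantize(pi, 24))  # FP32 keeps ~24 significant bits: 3.14159265...
print(quantize(pi, 8))   # bfloat16 keeps 8: 3.140625
print(quantize(pi, 4))   # FP8 (e4m3) keeps 4: 3.25
```

Rounding pi to 3.25 sounds brutal, but as discussed earlier, neural networks tolerate this kind of noise remarkably well, and every dropped bit is silicon and bandwidth saved.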

2) Massive Memory and Bandwidth (HBM3E)

Each chip is equipped with 192GB of HBM3E memory and provides a memory bandwidth of 7.37 TB/s. Why does this matter? Large Language Models (LLMs) are typically "Memory-bandwidth bound." This means the chip spends more time waiting for data to arrive from memory than it does actually calculating. Having ultra-fast memory is not just a "nice-to-have"—it is essential for performance.
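
A back-of-the-envelope calculation shows why. Suppose a hypothetical 140B-parameter model stored in FP8 (1 byte per parameter, so it fits in the 192GB of HBM): generating one token requires streaming every weight through the chip at least once, so memory bandwidth alone sets a floor on latency. The model size is invented for illustration, and real serving uses batching and caching to do much better:

```python
# Back-of-the-envelope: why LLM inference is memory-bandwidth bound.
hbm_bandwidth = 7.37e12   # bytes/s (Ironwood's 7.37 TB/s)
model_bytes = 140e9       # hypothetical 140B params in FP8 (1 byte each)

# Floor on per-token latency set by memory traffic, not FLOPS:
seconds_per_token = model_bytes / hbm_bandwidth
tokens_per_second = 1 / seconds_per_token
print(f"{seconds_per_token * 1e3:.1f} ms/token -> {tokens_per_second:.0f} tokens/s")
```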

3) Scalability and Interconnect (ICI)

The "Pod" and "ICI" technologies we discussed earlier have been supercharged. You can now scale a single Pod up to 9,216 chips. Furthermore, the ICI bandwidth has been boosted to 1.2 TB/s bi-directional. This allows thousands of chips to talk to each other even faster than before.

4) Energy Efficiency

Building on the liquid cooling systems mentioned earlier, Ironwood improves power efficiency by roughly 2x compared to the TPU v6. It continues to use the aquarium-style direct-to-chip cooling as a standard.

From: https://blog.google/products/google-cloud/ironwood-tpu-age-of-inference/

7. If TPUs Are So Great, Why Is Everyone Still Obsessed with GPUs?

At this point, you might have a nagging question: "If the TPU is so efficient and specialized, why is the whole world continually scrambling to get their hands on NVIDIA GPUs?"

The answer lies in the fact that AI development isn't decided by chip performance alone.

1) CUDA

NVIDIA has been building its software ecosystem, CUDA, since 2006. That is a massive head start. Today, over 90% of the world's AI researchers write their code based on CUDA.

Even PyTorch, the darling framework of the AI community, is practically optimized to run on CUDA by default. Does PyTorch code run on TPUs? Yes, it does. But compared to the seamless experience on GPUs, it is often less mature and less efficient.

To get 100% out of a TPU, you really need to use tools designed for it, like JAX. But asking busy developers—who are already running fast to keep up with AI trends—to learn a new framework is a massive barrier to entry.

2) Hardware You Can Hold vs. Hardware in the Clouds

It is true that buying a GPU these days is difficult. But buying a TPU is literally impossible.

You cannot go to a store or a vendor and buy a TPU to put in your server room. Realistically, the only way to use a TPU is to rent it through Google Cloud Platform (GCP).

If your company is already built on AWS or Azure, or if you have built your own on-premise infrastructure, TPUs are effectively "pie in the sky"—something nice to look at but impossible to eat. To use TPUs, you have to migrate your data and workflow to Google Cloud. That fear of vendor lock-in—being tied exclusively to Google's infrastructure—is a major hurdle preventing widespread adoption.

8. Cheat Sheet: GPU vs. TPU

| Feature | GPU (NVIDIA) | TPU (Google) |
| --- | --- | --- |
| Philosophy | Generalist: good at graphics, crypto, AI, gaming | Specialist: only does matrix math (deep learning) |
| Core architecture | SIMT: thousands of small cores working in parallel | Systolic array: a massive pipeline where data flows |
| Memory | High access: cores go to memory frequently | Low access: data is reused inside the chip |
| Precision | Flexible (FP32, FP64 for science) | Optimized (bfloat16, FP8 for AI) |
| Ecosystem | CUDA: the universal language of AI | JAX / XLA: optimized for Google Cloud and massive scale |

Wrapping Up

We’ve taken a deep look at the TPU. We learned that Google isn't just making a "faster" chip, but a chip that is architecturally specialized for the nature of AI.

  • Systolic Arrays to eliminate memory bottlenecks.

  • Precision Trade-offs (bfloat16) for efficiency.

  • Optical Switches (OCS) & TPU Pods for massive scaling.

  • JAX and XLA to control it all perfectly.

If the NVIDIA GPU is a "Swiss Army Knife" prized for its versatility, the Google TPU is a "Scalpel" honed for efficiency in AI.

Right now, NVIDIA’s CUDA ecosystem looks like an impregnable fortress. But huge tech companies like Amazon, Microsoft, Tesla, and Meta are starting to walk the path Google paved. They are building their own chips not just because NVIDIA GPUs are expensive and hard to find, but to avoid becoming too dependent on a single vendor.

In this wave of change, how should we prepare?

When others are complaining, "I can't do anything because there are no GPUs," wouldn't it be cool to be the person who says, "Oh? I can just use JAX and run this on a TPU"?