The Practical Developer

A constructive and inclusive social network for software developers.

Built a Wizard App to Create Code Review Guidelines for Your Company or Team

2026-01-19 10:47:34

When speaking about CodeRabbit, I often emphasize the importance of code review guidelines. Code review guidelines represent a shared understanding of how code reviews should be conducted within a team or organization.

Without such guidelines, the following issues tend to arise:

  • It is unclear what perspectives to use when reviewing
  • Review perspectives vary from person to person
  • Review criteria change depending on mood or circumstances

To prevent this kind of inconsistency, code review guidelines are essential.

A helpful resource when creating code review guidelines is Google's Code Review Developer Guide (google-eng-practices-ja), the official explanation of Google's code review process and policies. Note that the original repository has been archived, so it may not reflect the latest practices.

I usually recommend reading this guide and adapting it to your own organization, but creating guidelines from scratch is time-consuming. To address this, I built a web application that allows you to create your own code review guidelines in a wizard-based format, using Google’s guidelines as a foundation.

About Code Review Guideline Creator

The technologies used in Code Review Guideline Creator are as follows:

  • Cloudflare Workers
  • React Router
  • Tailwind CSS

The source code is open source and released under the MIT License.

goofmint/review-guideline-creator

Types of Guidelines You Can Create

With Code Review Guideline Creator, you can create the following two types of guidelines:

  • For reviewers (those who review code)
  • For reviewees (those whose code is reviewed)

The guidelines clarify what reviewers should focus on during reviews and what reviewees should pay attention to before submitting a pull request.

About the Wizard

The process is simple: you answer a series of questions. Default values are provided wherever possible.

The questions are defined in Markdown files, which are loaded and reflected in the wizard.

By editing these files, you can customize the wizard itself. Note that there are around 60 questions, so the process is relatively extensive.

Data Storage

No data is stored on the server. Intermediate progress is saved in SessionStorage and used to restore state on page reload. There is no authentication mechanism.
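As an illustration of that pattern, here is a minimal TypeScript sketch; the storage key and state shape are assumptions for the example, not the app's actual code:

// Hypothetical state shape and storage key; the real app may differ.
type WizardState = { step: number; answers: Record<string, string> };

const KEY = "wizard-progress";

function saveProgress(state: WizardState): void {
  // sessionStorage survives reloads in the same tab but is cleared when the tab closes.
  sessionStorage.setItem(KEY, JSON.stringify(state));
}

function restoreProgress(): WizardState | null {
  const raw = sessionStorage.getItem(KEY);
  return raw ? (JSON.parse(raw) as WizardState) : null;
}

Calling restoreProgress() on page load is enough to put the wizard back on the question where you left off.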

Supported Languages

The default language is English, and Japanese is also supported. Additional languages can be added by providing new wizard files.

AI Code Review

Google’s code review guidelines do not cover AI-assisted code reviews, so I added dedicated sections for this. You can define how AI code review tools (such as CodeRabbit) should be handled for both reviewers and reviewees.

About the Generated Output

The completed guidelines can be downloaded as Markdown or PDF files. However, since the output directly reflects the input answers, it may feel somewhat hard to read as-is.

To address this, the app also outputs a prompt for LLMs. By copying this prompt and pasting it into tools like ChatGPT or Claude, you can generate a cleaner and more readable coding guideline.

Summary

Having code review guidelines helps reduce stress for those new to code reviews and helps pull request authors prepare mentally before submitting their work.

To improve team productivity and psychological safety, I encourage you to give it a try.

Code Review Guideline Creator

TGL2209-SM High-Performance Microwave Power Amplifier: Overview and Applications

2026-01-19 10:34:20

In RF and microwave system design, the power amplifier (PA) plays a critical role in determining output power, signal integrity, and overall system reliability. TGL2209-SM is a microwave power amplifier designed for high-frequency applications, offering stable gain, solid linearity, and a compact surface-mount package. It is widely used in wireless communication, test and measurement, and industrial RF systems.

This article provides a structured overview of TGL2209-SM, with clear technical explanations, application scenarios, and explicit code-style examples commonly used in engineering documentation.

Product Overview

TGL2209-SM is a surface-mount microwave power amplifier optimized for high-frequency signal amplification. It is suitable for both continuous wave (CW) and modulated signal operation, making it flexible for modern RF system designs.

Key features include:

  • Wideband and stable gain performance
  • Balanced output power and linearity
  • Compact SMD package for automated PCB assembly
  • Good thermal stability for continuous operation

Key Technical Characteristics

From a system design perspective, the advantages of TGL2209-SM can be summarized as follows:

  • High Gain: reduces the drive requirement from upstream stages
  • Thermal Stability: supports long-duration RF transmission
  • Production Consistency: ideal for scalable and multi-channel designs

In most RF architectures, TGL2209-SM is positioned as a driver amplifier or final-stage power amplifier.

Typical RF Signal Chain (Code Example)

In technical documents and design notes, RF engineers often describe system architecture using code-style formatting for clarity:

RF Signal Chain:

    [Baseband Processor]
            |
            v
    [Upconverter]
            |
            v
    [Driver Amplifier]
            |
            v
    [TGL2209-SM Power Amplifier]
            |
            v
    [Bandpass Filter]
            |
            v
    [Antenna]

This layout clearly shows where TGL2209-SM fits within the overall RF transmission path.
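To make the dB arithmetic along that chain concrete, here is a small gain-budget sketch. Every number in it is an illustrative placeholder, not a TGL2209-SM datasheet value; consult the datasheet for real gain and P1dB figures.

// Cascaded gain budget along the chain above. All stage gains, the drive
// level, and the P1dB figure are hypothetical placeholders.
const stages = [
  { name: "Upconverter", gainDb: -6 },
  { name: "Driver Amplifier", gainDb: 15 },
  { name: "TGL2209-SM PA", gainDb: 20 },
  { name: "Bandpass Filter", gainDb: -1.5 },
];

const driveDbm = -10;        // level entering the upconverter (assumed)
const assumedPaP1dbDbm = 30; // placeholder compression point for the PA

let levelDbm = driveDbm;
for (const stage of stages) {
  levelDbm += stage.gainDb; // gains in dB add along the chain
  console.log(`${stage.name}: ${levelDbm.toFixed(1)} dBm`);
  if (stage.name === "TGL2209-SM PA" && levelDbm > assumedPaP1dbDbm) {
    console.warn("PA output exceeds the assumed P1dB - expect compression.");
  }
}

Running the numbers like this at design time catches an overdriven stage before it shows up on the bench.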

Biasing and Power Design Example

Proper biasing is essential to achieve optimal performance. Below is a code-style biasing reference commonly found in RF design documentation:

Bias Configuration:

    Vdd = recommended operating voltage
    Idq = set according to required linearity
    RF Choke = used to isolate RF from the DC supply
    Decoupling Capacitors = placed close to the Vdd pins

Engineers can adjust the quiescent current (Idq) depending on whether efficiency or linearity is the primary design goal.

PCB Layout Guidelines (Code Style)

Good PCB layout is critical for microwave performance. Typical layout recommendations are often summarized as follows:

PCB Layout Rules:

    - Use short and wide RF traces
    - Maintain a solid ground plane beneath the amplifier
    - Minimize via transitions on RF paths
    - Place thermal vias under the device pad

Following these rules helps maintain stability and reduce unwanted oscillations.

Application Scenarios

TGL2209-SM is well suited for a variety of RF and microwave applications:

  • Wireless Communications: microwave point-to-point links, private RF networks
  • Test and Measurement: RF signal generators, microwave front-end modules
  • Industrial and Research: RF power modules, laboratory microwave systems

Selection Considerations

When evaluating TGL2209-SM for a design, engineers should consider:

Selection Checklist:

    - Operating frequency range compatibility
    - Required output power level
    - Linearity requirements (EVM / ACPR)
    - Thermal and PCB design capability

Matching these factors ensures reliable system performance.

Conclusion

TGL2209-SM is a reliable and efficient microwave power amplifier that balances performance, stability, and ease of integration. With proper biasing, layout, and system design, it can serve as a robust solution for high-frequency RF applications.

For engineers and system designers seeking a proven microwave PA with straightforward integration, TGL2209-SM remains a strong candidate.

The Rise of Private AI Models: What It Means for Developers

2026-01-19 10:28:43

For a long time, AI followed a familiar arc.

Centralised models.
Shared infrastructure.
Public APIs.
One-size-fits-most intelligence.

That phase is ending.

Quietly, but decisively, the industry is moving toward private AI models, and this shift will change how developers build, deploy, and think about software.

Not in theory.
In practice.

Why Centralized AI Is Hitting Its Limits

Public AI models are powerful, but they come with structural constraints.

They struggle with:

  • deep domain specificity
  • proprietary data
  • strict compliance requirements
  • predictable behavior under risk
  • long-term context ownership

For many real-world systems, “good on average” is not good enough.

Enter private AI models.

What “Private AI” Actually Means (And What It Doesn’t)

Private AI does not necessarily mean:

  • training massive models from scratch
  • owning data centers
  • replacing foundation models

In most cases, it means:

  • fine-tuned or adapted models
  • controlled deployment environments
  • domain-specific intelligence
  • isolated data boundaries
  • predictable, governed behavior

Private AI is less about raw power and more about control.

Why This Shift Is Accelerating Now

Several forces are converging:

  • Data sensitivity is increasing: enterprises can't send everything to shared models.
  • Regulatory pressure is real: auditability, explainability, and data residency matter.
  • AI is moving into decision-critical workflows: "close enough" outputs are no longer acceptable.
  • Cost predictability matters: private deployments offer more stable economics at scale.

Together, these forces make centralized AI insufficient for serious use cases.

What This Means for Developers

This shift fundamentally changes the developer’s role.

Developers are no longer just:

  • calling APIs
  • tuning prompts
  • handling responses

They are increasingly responsible for:

  • defining intelligence boundaries
  • managing model lifecycle
  • designing evaluation pipelines
  • enforcing constraints
  • integrating domain logic

In short: developers become stewards of intelligence, not just consumers of it.

From Prompting to Model Ownership

With private AI, prompts stop being the primary control mechanism.

Instead, control moves to:

  • training data selection
  • fine-tuning strategy
  • retrieval design
  • policy layers
  • eval-driven iteration

This is a shift from interaction-level control to system-level control.

Developers who understand this transition early will have a significant advantage.

Why Private AI Raises the Bar for Engineering Discipline

Private AI systems don’t benefit from “black box forgiveness.”

When something goes wrong, teams must answer:

  • why the model behaved this way
  • what data influenced the outcome
  • how behavior changed over time
  • who approved the update

That requires:

  • versioning
  • observability
  • evaluation harnesses
  • reproducibility

In other words, real engineering rigor.
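To make that less abstract, here is a tiny sketch of what "evaluation harness plus versioning" can look like. All names and the pass criterion are hypothetical, a hedged illustration rather than a reference implementation:

// Pin the model version, run fixed cases, keep a record you can diff across releases.
type EvalCase = { input: string; mustContain: string };
type EvalRecord = { modelVersion: string; timestamp: string; passed: number; failed: number };

async function runEvals(
  modelVersion: string,
  infer: (input: string) => Promise<string>, // your model call goes here
  cases: EvalCase[],
): Promise<EvalRecord> {
  let passed = 0;
  for (const c of cases) {
    const output = await infer(c.input);
    if (output.includes(c.mustContain)) passed++;
  }
  return {
    modelVersion,
    timestamp: new Date().toISOString(),
    passed,
    failed: cases.length - passed,
  };
}

Persisting each EvalRecord alongside the model version is what turns "the model changed" from a feeling into an auditable fact.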

The Trade-Off Most Developers Will Face

Private AI is not a free upgrade.

It introduces trade-offs:

  • more responsibility
  • more operational overhead
  • more design decisions
  • more accountability

But it also unlocks:

  • deeper customization
  • stronger trust
  • better alignment with real workflows
  • defensible differentiation

This is a classic leverage trade-off.

More ownership. More upside.

Why This Shift Favors Systems Thinkers

Private AI rewards developers who think in systems, not shortcuts.

The most valuable skills will be:

  • architecture design
  • data curation
  • evaluation strategy
  • failure-mode thinking
  • long-term maintenance planning

Developers who rely only on tools will struggle. Developers who design intelligence environments will thrive.

Where This Is Headed

In the coming years, we’ll see:

  • hybrid architectures (public + private AI)
  • domain-specific models becoming standard
  • AI behavior treated as a versioned artifact
  • intelligence managed like infrastructure

AI won’t disappear behind APIs. It will become part of the system you own.

The Real Takeaway

The rise of private AI models signals a maturation of the industry.

AI is moving from:

  • experimentation → operation
  • novelty → responsibility
  • access → ownership

For developers, this is not a threat.

It’s an opportunity to move up the stack.

Because the future won’t belong to those who simply use AI.

It will belong to those who can design, control, and operate intelligence responsibly. And private AI is where that future starts.

The silent layout bug in AI-generated slides

2026-01-19 10:13:31

I often generate slides by summarizing documents or PDFs.

The workflow itself is convenient, but I kept running into the same issue:
parts of the generated slides were silently cropped.

What made this tricky was that the slides usually looked fine during editing and review.
The overflow only became obvious after export — or worse, during the actual presentation.

After missing this a few times, I realized the problem wasn’t how I generated the slides,
but how hard it was to notice when something was already broken.

Why layout issues are easy to miss

Slide layouts depend on many factors:
screen size, font rendering, code block wrapping, and export targets.

Even with careful review, it’s surprisingly easy to miss small layout failures.
If everything mostly looks fine, our attention moves on.

This problem becomes worse when slides are generated automatically.
There’s often no strong “this looks wrong” moment, and broken output can slip through silently.

The real problem: detectability, not fixing

At some point, I realized the hard part wasn’t fixing layouts.

It was noticing failures early enough to matter.

Before fixing anything, you need a reliable signal that something is wrong.
Without that signal, both humans and automated systems tend to miss problems.

A small experiment

To explore this idea, I built a small CLI tool that tries to detect layout overflows
in Slidev presentations.

It’s intentionally heuristic-based and far from perfect.
The goal isn’t to guarantee correctness, but to make layout failures
machine-detectable early in the workflow — for example, in CI or automated pipelines.

If you’re curious, the repository is here:
https://github.com/mizuirorivi/slidev-overflow-checker
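For context, one common way to make overflow machine-detectable (a generic heuristic sketch, not necessarily how slidev-overflow-checker works) is to load each rendered slide in a headless browser and flag elements whose content box is larger than their visible box:

// Generic overflow heuristic using Puppeteer. The URL assumes Slidev's
// default dev-server port (3030); adjust for your setup.
import puppeteer from "puppeteer";

async function findOverflows(url: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const offenders = await page.evaluate(() =>
    Array.from(document.querySelectorAll("*"))
      .filter(
        (el) =>
          el.scrollHeight > el.clientHeight + 1 ||
          el.scrollWidth > el.clientWidth + 1,
      )
      .map((el) => el.tagName.toLowerCase()),
  );
  await browser.close();
  return offenders;
}

findOverflows("http://localhost:3030/1").then((hits) => {
  if (hits.length > 0) {
    console.error("Possible overflow in:", hits);
    process.exit(1); // a nonzero exit code is enough to fail CI
  }
});

It will produce false positives (any intentionally scrollable element trips it), which is exactly why such checks stay heuristic.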

Takeaway

This project reminded me that many real-world problems aren’t about generating better output,
but about making failures visible.

Especially in AI-assisted workflows, the absence of clear failure signals
can be more limiting than generation quality itself.

I’m curious how others handle layout or visual validation
in generated documents or presentations.

GitHub Actions + Security Scanning: How to Integrate Trivy and Checkov into Your Pipeline

2026-01-19 10:06:04

🚨 The Nightmare Every DevOps Engineer Dreads

It was an ordinary Tuesday when the alert came in: "CRITICAL vulnerability detected in production". Our application, which served thousands of users, had a zero-day vulnerability that someone had managed to exploit.

The culprit? A Docker image we had deployed the week before.

The worst part: the vulnerability already existed when we published the image! It had passed through our CI/CD pipeline without anyone noticing.

In that moment I understood something crucial: if your pipeline doesn't validate security, you are deploying blind.

🛡️ The Solution: Integrating Security Scanning into GitHub Actions

After that experience, I set out to build a pipeline that would never again let vulnerabilities reach production. That is how the project I'm sharing with you today was born.

🔍 The Problem We're Solving

# This is what our workflow looked like BEFORE:
- name: Build Docker Image
  run: docker build -t mi-app .

- name: Push to Registry
  run: docker push mi-app:latest

# No security validation at all!

The result: vulnerabilities, exposed secrets, insecure configurations... all of it reaching production.

🚀 The Architecture of Our Secure Pipeline

Our new pipeline has 4 critical phases:

1. 🐳 Build with Smart Tags

# Unique tags per commit + environment
IMAGE_TAG_SHA: "sha-${GITHUB_SHA::7}"
IMAGE_TAG_ENV: "${{ inputs.target_env }}-latest"

2. 🔍 Vulnerability Scanning with Trivy

- name: Security Scan with Trivy
  run: |
    # Mount the Docker socket so Trivy can see the locally built image
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      aquasec/trivy:latest \
      image --severity CRITICAL,HIGH \
      mi-imagen:${TAG}

Trivy helps us find:

  • Vulnerabilities in system packages
  • Dependencies with known CVEs
  • Accidentally exposed secrets
  • Insecure configurations

3. 🏗️ Dockerfile Validation with Checkov

- name: IaC Security with Checkov
  run: |
    # Mount the workspace so Checkov can actually read the Dockerfile
    docker run --rm \
      -v $(pwd):/src \
      bridgecrew/checkov \
      --file /src/Dockerfile \
      --framework dockerfile

Checkov verifies that our Dockerfile follows best practices:

  • Running as root? ❌
  • Outdated packages? ❌
  • Hardcoded secrets? ❌

4. 🚫 Automatic Blocking on Failure

Here's the magic: if there are CRITICAL or HIGH vulnerabilities, the pipeline STOPS.

# The post doesn't show it, but $VULNERABILIDADES can be derived from Trivy's JSON output:
VULNERABILIDADES=$(docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image --format json --severity CRITICAL,HIGH mi-imagen:${TAG} \
  | jq '[.Results[]?.Vulnerabilities[]?] | length')

if [ "$VULNERABILIDADES" -gt 0 ]; then
  echo "❌ WORKFLOW BLOCKED"
  echo "There are $VULNERABILIDADES security issues"
  exit 1
fi

(Trivy's built-in --exit-code 1 flag is a simpler alternative if you don't need the count.)

📊 A Real-Time Security Dashboard

Best of all, everything shows up automatically in the GitHub Actions summary (GitHub renders whatever a step appends to the file at $GITHUB_STEP_SUMMARY as Markdown in the run summary):

🐳 Scan Report - Trivy
══════════════════════════════

📊 Overall Summary
──────────────────
| Type             | Severity    | Count |
|------------------|-------------|-------|
| Vulnerabilities  | 🔴 CRITICAL | 2     |
| Vulnerabilities  | 🟠 HIGH     | 9     |
| Total            |             | 11    |

🏗️ Checkov Security Scan
══════════════════════════

✅ All checks passed (22 passed, 0 failed)

🎯 The Result: Automated Confidence

Before:

  • "Let's hope there are no vulnerabilities"
  • Manual audits every 3 months
  • Recurring security incidents

After:

  • ✅ Every commit validated automatically
  • ✅ Every image scanned before publishing
  • ✅ Every deployment shipped with a security report
  • ✅ Zero vulnerabilities in production since rollout


💡 Lessons Learned

1. Security ≠ Slowness

Many people assume security scanning will slow the pipeline down. False! Our scans add only 2-3 minutes.

2. Fail Fast Beats Fail in Production

We would rather have the pipeline fail (and notify the developer) than have production fail (and affect users).

3. Reports Are Your Best Ally

A clear, automatic report helps teams understand and fix problems instead of merely patching around them.

🚀 Get Started in 3 Simple Steps

Step 1: Clone and explore

git clone https://github.com/francotel/docker-image-security-scan
cd docker-image-security-scan

Step 2: Examine the workflow

Take a look at .github/workflows/publish-nginx-image.yml - everything is ready to use!

Step 3: Adapt it to your case

Change the image names, registries, and policies to match your stack.

📈 Immediate Impact

  • Detection: from quarterly to every commit
  • Coverage: from spot checks to 100% of images
  • Confidence: from "I hope so" to "I know it's validated"

🤝 How to Contribute

The project is active at github.com/francotel/docker-image-security-scan

Ideas for improvement?

  • Slack/Teams notifications
  • A historical dashboard
  • Automatic scanning of base images
  • Custom Policy as Code

🎯 Do It Today!

Key advantages:

  • ✅ Free (open source)
  • ✅ Simple (a single YAML file)
  • ✅ Effective (it blocks real problems)

Immediate action:

  1. ⭐ Star the repo
  2. 🐑 Fork it and adapt it
  3. 💬 Comment in the issues

Ready for automated security? 👉 github.com/francotel/docker-image-security-scan

#DevSecOps #GitHubActions #DockerSecurity

Don't miss out! Follow me on LinkedIn to keep up with all the updates and future articles:

LinkedIn

☕ Buy me a coffee

If this content has been useful and you'd like to support me so I can keep creating more, consider buying me a coffee. Your support makes a difference! 🥰

BuyMeACoffee

Thanks for reading, and see you next time! 👋