RSS preview of the blog of The Practical Developer

CrossUI — A Reusable UI Component Library for Vue, React, Nuxt, and Next.js

2025-10-17 07:12:01

Hey developers! 👋

I’m excited to share CrossUI, a UI component library I built to help front-end developers save time and avoid repetitive work.

CrossUI supports:

Vue & React (full compatibility)

Nuxt & Next.js for SSR/SSG projects

Multiple CSS frameworks: bootstrap-vue-next, Material UI, Element Plus, and react-bootstrap

Why CrossUI?

Reusable components out-of-the-box

Easy to integrate with any project

Fully customizable via props and slots

Flexible design for any CSS framework

Installation:

Using npm

npm i cross-cli-new

Using yarn

yarn add cross-cli-new

Usage example:
// React
import { CustomAccordion } from 'cross-ui';

<CustomAccordion
  title="Hello World"
  description="This is a demo component"
/>

// Vue
<script setup>
import { CustomAccordion } from 'cross-ui';
</script>

<template>
  <CustomAccordion title="Hello World" description="This is a demo component" />
</template>

🔗 How Developers Are Driving the Next Wave of Crypto Adoption in 2025

2025-10-17 07:10:36

The crypto world is no longer just about trading; it's about building. In 2025, developers are at the center of the crypto revolution, creating applications, APIs, and platforms that make digital finance more accessible than ever.

From integrating blockchain-based identity systems to enabling cross-border transactions, devs are helping turn crypto into a practical tool for everyday use. A clear example of this shift is how services like MoonPay give developers ready-to-use solutions for integrating cryptocurrency payments securely, without requiring deep blockchain expertise.

As the ecosystem matures, success in crypto development will depend less on chasing trends and more on designing secure, scalable, user-centered solutions. The developers who can bridge blockchain complexity with seamless experiences will be the ones defining the financial applications of the future.

How Agent Observability Ensures AI Agent Reliability

2025-10-17 07:01:09

Agent observability makes AI agents reliable by tracing behavior, evaluating quality, and alerting on issues in real time.

TL;DR

Agent observability combines distributed tracing, automated evaluations, and real-time monitoring to improve AI agent reliability across voice agents, RAG systems, and copilots. By instrumenting every span and tool call, running rule-based and LLM evaluations, and closing the loop with data curation, teams reduce hallucinations, catch regressions, and ship with confidence. Maxim AI’s unified stack integrates experimentation, simulation, evaluation, and production observability to accelerate delivery while maintaining quality. Link your agent telemetry to quality metrics, then operationalize alerts and dashboards to sustain trustworthy AI in production.

Introduction

Reliable AI agents require visibility across prompts, tools, models, and context flows. Observability provides the missing layer: capturing rich traces, evaluating outcomes, and surfacing anomalies before users are impacted. This article explains how agent observability drives AI reliability, how to instrument agents for LLM tracing and agent debugging, and why an end-to-end platform like Maxim AI aligns engineering and product teams to improve AI quality.

What is Agent Observability?

Agent observability is the systematic collection and analysis of agent telemetry—requests, spans, tool calls, and context—to understand behavior and quality. In practice, that means:
• Capturing session, trace, and span data for LLM tracing, model tracing, and agent monitoring.
• Recording inputs/outputs, tool results, model/router decisions, and latency/cost.
• Linking traces to evaluations (AI evals, chatbot evals, RAG evals, voice evals) and human review.

Maxim AI’s observability suite supports real-time production logs, distributed tracing, alerting, and automated evaluations so teams can track, debug, and resolve live quality issues quickly. See the product overview for agent observability features: Maxim Agent Observability.

Why Observability Ensures Reliability

Reliability is the consistent fulfillment of user intents under variable conditions. Observability improves reliability through:
• Transparency: Tracing reveals prompt versions, router decisions, and tool outcomes, enabling prompt management and prompt versioning across deployments. Explore advanced prompt workflows in Experimentation (https://www.getmaxim.ai/products/experimentation).
• Measurability: Automated evaluations quantify success, grounding decisions in metrics like task completion, faithfulness, and toxicity. Learn more under Agent Simulation & Evaluation.
• Control Loops: Alerts and dashboards detect regressions, route incidents, and guide fixes; curated datasets from production logs drive re-tests and fine-tuning using the Data Engine.

For multi-provider resilience, an AI gateway improves uptime via failover and routing. Maxim’s Bifrost provides unified access, automatic fallbacks, and load balancing through an OpenAI-compatible API—critical to sustaining agent reliability at scale.
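The failover idea can be sketched generically in a few lines of JavaScript. This is illustrative only (the provider call signatures are hypothetical placeholders, not Bifrost's actual API):

```javascript
// Try each provider in order; return the first success, rethrow the last failure.
async function withFailover(providerCalls, request) {
  let lastError;
  for (const call of providerCalls) {
    try {
      return await call(request);
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw lastError;
}

// Usage: the primary is rate limited, so the fallback answers.
const primary = async () => { throw new Error('rate limited'); };
const fallback = async (req) => `echo: ${req}`;
withFailover([primary, fallback], 'hello').then(console.log); // → echo: hello
```

A real gateway adds load balancing, budgets, and health checks on top of this loop, but the ordering-plus-fallthrough core is the same.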

Instrumentation: Tracing Agents the Right Way

Effective agent tracing requires consistent instrumentation across modalities and tools:
• Span Design: Capture prompts, model parameters, tool schemas, retrieval contexts, and intermediate reasoning steps for agent debugging.
• Metadata: Log versioned prompts, evaluator configs, router candidates (LLM router / model router), cache hits, and latency/cost breakdowns.
• Context Linkage: Associate spans with datasets, evaluation runs, and human review for longitudinal analysis.

Maxim enables distributed tracing with repositories per application, real-time logging, and automated quality checks in production. Teams can connect experiments, simulations, and observability to analyze outputs across prompts, models, and parameters.
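Stripped of any SDK, the span model above reduces to recording name, input/output, latency, and metadata per model or tool call. A hand-rolled sketch (to show what gets captured, not any vendor's instrumentation):

```javascript
// One trace holds many spans; each span wraps a single model or tool call.
const trace = { traceId: 'demo-trace', spans: [] };

async function withSpan(name, metadata, fn) {
  const start = Date.now();
  try {
    const output = await fn();
    trace.spans.push({ name, metadata, output, latencyMs: Date.now() - start, status: 'ok' });
    return output;
  } catch (err) {
    trace.spans.push({ name, metadata, error: String(err), latencyMs: Date.now() - start, status: 'error' });
    throw err;
  }
}

// Usage: wrap a (stubbed) tool call, attaching the versioned prompt as metadata.
(async () => {
  await withSpan('lookup-weather', { promptVersion: 'v3', tool: 'weather-api' },
    async () => ({ tempC: 21 }));
  console.log(trace.spans[0].status); // → ok
})();
```

In production you would ship `trace` to a backend and link each span to evaluation runs, which is exactly the longitudinal analysis the bullets above describe.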

Evaluations: Automating Quality at Scale

LLM observability depends on credible evals that reflect task outcomes:
• Deterministic Rules: Regex/policy checks for PII, profanity, formatting, and schema adherence.
• Statistical Metrics: Latency, cost, retrieval precision/recall, and grounding scores for RAG evaluation and RAG observability.
• LLM-as-a-Judge: Structured rubrics to assess helpfulness, correctness, and instruction following for copilot evals and agent evaluation.
• Human-in-the-Loop: Targeted reviews for nuanced cases and last-mile sign-off.
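A deterministic rule from the first bullet can be as small as a regex plus a schema check over agent output. A minimal sketch (the pattern and the `answer` field are illustrative, not any product's evaluator config):

```javascript
// Flag responses that leak an email address or miss a required JSON field.
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/;

function evaluateResponse(text) {
  const failures = [];
  if (EMAIL_RE.test(text)) failures.push('pii:email');
  try {
    const parsed = JSON.parse(text);
    if (typeof parsed.answer !== 'string') failures.push('schema:answer-missing');
  } catch {
    failures.push('format:not-json');
  }
  return { pass: failures.length === 0, failures };
}

console.log(evaluateResponse('{"answer":"The invoice was sent."}'));
// → { pass: true, failures: [] }
console.log(evaluateResponse('Contact bob@example.com'));
// → { pass: false, failures: [ 'pii:email', 'format:not-json' ] }
```

Rules like this run on every trace cheaply, leaving LLM-as-a-judge and human review for the nuanced cases.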

Maxim’s evaluator store and custom evaluators let teams quantify improvements across large test suites, compare versions, and visualize runs.

RAG and Voice: Special Observability Considerations

RAG systems and voice agents demand modality-aware observability:
• RAG Tracing: Track retrieval queries, top-k results, provenance, and grounding. Measure citation faithfulness, context coverage, and hallucination detection to reduce erroneous answers. Connect experiments to production with Experimentation and validate in Agent Observability.
• Voice Observability: Log ASR hypotheses, timestamps, interruptions, barge-in events, and TTS latencies. Run voice evaluation on transcription accuracy and dialog success, then alert on quality drift.

When running across multiple providers, Bifrost’s semantic caching, streaming support, and governance features improve throughput and cost control without sacrificing quality.

Operationalizing Reliability: Alerts, Dashboards, and Gateways

Reliability requires ongoing operations:
• Real-Time Alerts: Thresholds on evaluation scores, failure rates, grounding metrics, and model/tool errors.
• Custom Dashboards: Slice metrics by persona, scenario, prompt version, router policy, or provider to spot regressions fast.
• Gateway Controls: Rate limits, access policies, cost budgets, and multi-key load balancing to prevent outages and runaway spend.
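The alerting bullet above boils down to comparing live metrics against configured floors. A generic sketch (metric names and thresholds here are made up for illustration):

```javascript
// Return one alert message per metric that falls below its configured threshold.
function checkAlerts(metrics, thresholds) {
  const alerts = [];
  for (const [name, value] of Object.entries(metrics)) {
    const floor = thresholds[name];
    if (floor !== undefined && value < floor) {
      alerts.push(`${name} dropped to ${value} (threshold ${floor})`);
    }
  }
  return alerts;
}

console.log(checkAlerts(
  { grounding: 0.72, taskCompletion: 0.95 },
  { grounding: 0.8, taskCompletion: 0.9 }
));
// → [ 'grounding dropped to 0.72 (threshold 0.8)' ]
```

Wire the returned alerts into a pager or Slack webhook and you have the control loop: metric dips, alert fires, the offending traces get triaged.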

With a drop-in OpenAI-compatible API, teams can integrate observability and governance with minimal code changes.

Conclusion

Agent observability is foundational to AI reliability. By tracing every decision, evaluating outcomes continuously, and operationalizing alerts and governance, teams achieve trustworthy AI across voice agents, RAG pipelines, and copilots. Maxim AI unifies experimentation, simulation, evaluation, and observability—plus an AI gateway—to help engineering and product teams ship reliable agents faster, with measurable quality improvements. Explore end-to-end capabilities: Agent Observability, Agent Simulation & Evaluation, and Experimentation.

Request a Maxim demo or Sign up.

Start Safe: Terragrunt Import for Multi-Account AWS

2025-10-17 07:00:00

Terragrunt Import lets you bring brownfield infrastructure under Terraform control across multi-repo and multi-account setups. Done right, you’ll avoid state drift, unstable addresses, and risky access patterns.

The goal is a reproducible, auditable workflow with clean plans and minimal permissions. Use a consistent remote state, pin tooling versions, and validate every step in CI.

🔎 At a Glance: Terragrunt Import Best Practices

  • ✅ Standardize remote state and lock it
  • 📌 Pin Terraform, providers, and Terragrunt versions
  • 🧾 Document intent with Terraform import blocks
  • 🤖 Automate plans and halt on drift or diffs
  • 🔐 Use least-privilege, short-lived credentials

Mini-story: An engineer imported dozens of resources on a laptop with a newer provider than CI. The next pipeline showed a wall of “changes” — all caused by version drift. Pinning would have caught this earlier.

If you’re managing multiple accounts and environments, keeping configuration clean and consistent can be a real challenge.

👉 Check out Terragrunt Less Verbose for tips to reduce boilerplate and simplify Terragrunt structure across large repos.

🧱 Do: Prepare State, Providers & Repo for Safe Terragrunt Import

  • Use a remote backend with locking + encryption (S3 + DynamoDB lock, GCS, or Azure Blob). Inherit backend config via a root terragrunt.hcl to avoid divergent state and concurrent writes.
  • Pin versions for Terraform, providers, and Terragrunt. Run terraform init -upgrade only in controlled windows.
  • Validate in CI with terraform validate and terraform plan -detailed-exitcode gates.
  • Preflight with snapshots: enable bucket/container versioning and take a state backup before each import; start with a read-only discovery run.
# example CI guard
terraform fmt -check
terraform validate
terraform plan -detailed-exitcode   # exit 2 on diff; fail the job
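The preflight snapshot from the list above can be scripted. A minimal sketch for an S3 backend (the bucket and key are placeholders for your own state location):

```shell
# Back up the remote state file before an import; timestamped so runs never overwrite each other.
STATE_URI="s3://my-tf-state/prod/storage/terraform.tfstate"   # placeholder bucket/key
BACKUP="terraform.tfstate.backup.$(date +%Y%m%d%H%M%S)"

echo "Backing up ${STATE_URI} to ./${BACKUP}"
if command -v aws >/dev/null 2>&1; then
  aws s3 cp "${STATE_URI}" "./${BACKUP}"
else
  echo "aws CLI not found; skipping copy" >&2
fi
```

Combined with bucket versioning, this gives you two independent ways to roll back a bad import.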

🧭 Do: Use Import Blocks + Terragrunt Hooks for Clear, Stable Addresses

Prefer Terraform ≥ 1.5 import blocks to document import intent in code and keep resource addresses stable across runs. Combine with Terragrunt hooks to (a) generate import IDs, (b) run a plan immediately after import, and (c) fail on any unexpected diff.

Start with a skeleton HCL: declare essential arguments only; add temporary lifecycle.ignore_changes for noisy attributes until parity is verified.

Caveat: import blocks require Terraform 1.5+.

Canonical snippet (HCL):

# modules/storage/main.tf
resource "aws_s3_bucket" "logs" {
  bucket = var.bucket_name

  lifecycle {
    ignore_changes = [tags] # temporary while achieving parity
  }
}

# modules/storage/import.tf (Terraform ≥ 1.5)
import {
  to = aws_s3_bucket.logs
  id = "my-company-logs"
}

Live config with Terragrunt:

# live/prod/storage/terragrunt.hcl
terraform {
  source = "../../../modules/storage"

  # Optional hook: run a plan right after import and fail on drift.
  # Note: after_hook blocks must live inside the terraform block.
  after_hook "after_import_plan" {
    commands = ["import"]
    execute  = ["bash", "-lc", "terraform plan -detailed-exitcode || exit 1"]
  }
}

inputs = {
  bucket_name = "my-company-logs"
}

Keep module paths stable across environments so resource addresses never change.

🚫 Don’t: Refactor Modules Mid-Import or Apply Without a Clean Plan

  • Don’t refactor module names, move modules, or rename resources during an import — it changes addresses and breaks state mapping.
  • Never apply after an import unless the plan is clean (no unintended creates/destroys). Enforce with -detailed-exitcode in CI.
  • If you discover an address mismatch, fix it with:
terraform state mv 'aws_s3_bucket.logs' 'aws_s3_bucket.logs_new'

Caveat: state mv ops should be reviewed in PRs and run from the same pinned toolchain as your plans.

🔐 Do: Enforce Least-Privilege & Short-Lived Access

  • Use assume-role with external IDs/MFA and short sessions, scoped to import-only APIs for the target services.
  • Separate read-only discovery from write operations; rotate credentials; store secrets securely in CI.
  • Keep audit trails: confirm who imported what and when using provider/cloud logs (e.g., CloudTrail). Keep local CI run metadata as a cross-check (provider logs can lag).
# Example: short-lived AWS session (assume-role, 15-minute TTL)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/terraform-importer \
  --role-session-name terragrunt-import \
  --duration-seconds 900
# Export the returned Credentials as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
# and AWS_SESSION_TOKEN before running Terragrunt.

🗂️ Example: Importing a Storage Bucket (Pattern Applies Broadly)

  1. Create a minimal resource block and keep the terragrunt.hcl path stable.
  2. Add a Terraform import block with the bucket’s canonical ID.
  3. Run terraform init, then nudge Terragrunt to plan:
terragrunt run-all plan -detailed-exitcode
  4. The plan should show no changes except legitimate drift.
  5. If noise appears (e.g., tags or server-generated fields), add temporary ignore_changes, reconcile code to reality, then remove ignores once parity is achieved.
  6. Commit the import block + configuration together so future plans remain clean.

If you hit unexpected diffs or failed imports, read The Complete Terraform Errors Guide to decode plan output, debug root causes, and avoid destructive applies.

🧰 Bring It Together with Guardrails

A disciplined Terragrunt Import flow yields reproducible, auditable results with clean plans and least-privilege access. Codify intent with import blocks, keep addresses stable, and block applies on drift.

Looking for acceleration? ControlMonkey can help with discovery, safe sequencing, and policy guardrails across multi-account environments.

👉 Request a demo to see it in action.

💬 Discussion

What’s your Terragrunt import playbook for multi-account AWS?

Which drift signals or CI gates have saved you from bad applies? Share your setup in the comments!

Day 1252 : Flush

2025-10-17 06:57:08

liner notes:

  • Professional : Had a bunch of meetings. In the first one, my manager recognized that I was sick and told me to take the rest of the day off. Of course I didn't listen because I wanted to attend the meetings. I was doing a presentation about my recent trip. That went well. Had another meeting afterwards. When that was over, I basically went to eat and take a nap. haha.

  • Personal : Didn't get too much done. I was feeling the cold starting to hit. I was able to take apart one of the items I ordered. It was basically what I thought. I can definitely incorporate it into the product idea I have. I responded to some folks that are looking to do some business in the future. I meant to do some CAD work, but I left my mouse behind. haha I did put together the social media posts for the projects I'm picking up on Bandcamp.

[Image: a serene sunrise over a misty green meadow, sunbeams piercing the fog and trees, with a rugged mountain range in the distance — a valley in Maple Ridge, British Columbia, Canada.]

Going to purchase the Bandcamp projects and start putting together tracks for the radio show. I ordered some more items to put together a prototype for a future product. I need to not forget my mouse so I can finish up the prototype so I can print it tomorrow and make any adjustments. I also need to clear some space in the shed for the new shelves that came in. Looking to build that up this weekend. I'm out. Going to eat dinner and drink a lot of fluids to flush this cold out of my system. (I was lazy with the title and music selection. I'm sick and want to lay down. haha)

Have a great night!

peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com

🎮 Introducing Pixalo — A Lightweight, Developer-Friendly 2D Game Engine for JavaScript

2025-10-17 06:44:21

Building 2D games in JavaScript should be simple — not a constant struggle with complex APIs and endless setup.
That’s exactly why I built Pixalo.

💡 Why I Created Pixalo

I wanted a way to make 2D game development as easy as writing HTML, CSS, or even jQuery — where you can create, animate, and control elements effortlessly without writing dozens of lines of code for a single sprite.

While many JavaScript game engines exist, they often require heavy configurations or WebGL setup.
Pixalo skips the complexity while staying fast, lightweight, and flexible — using optimized rendering techniques and even OffscreenCanvas Workers to maintain excellent FPS, even with many objects on screen.

🚀 What Pixalo Can Do

Pixalo comes packed with everything you need to build smooth, feature-rich 2D games:

  • 🏃 Advanced animation support with keyframes
  • 🎨 Layered background management
  • 📐 Grid system with customizable properties
  • 🔄 Sprite sheet and asset management
  • 💥 Collision detection
  • ⚙️ Physics support powered by the legendary Box2D
  • 🎵 Spatial audio controls
  • 🗺️ Tile map system for level design
  • 🎆 Particle emitter system
  • 📸 Screenshot support
  • 📹 Camera system (zoom, rotation, cinematic presets)
  • 🎚️ Quality control & scaling options
  • 🐞 Built-in debugging tools

It’s not just a game engine — you can even use it as a Canvas Editor, drawing and animating anything you want right on the canvas.

🧩 Getting Started

You can use Pixalo either via ESM import or CDN — no build tools required.

import Pixalo from 'https://cdn.jsdelivr.net/gh/pixalo/pixalo@master/dist/pixalo.esm.js';

const px = new Pixalo('#canvas', { 
    width: innerWidth,
    height: innerHeight
});
px.start();

🟦 Drawing Shapes

Creating shapes in Pixalo is ridiculously simple.

Draw a rectangle:

px.append('myRectangle', {
    width: 100,
    height: 100,
    fill: 'blue',
    borderRadius: 6
});

Draw a circle:

px.append('myCircle', {
    shape: 'circle',
    width: 100,
    height: 100,
    fill: 'red'
});

Handle click events:

px.find('myRectangle').on('click', e => {
    px.append('text', {
         x: px.baseWidth / 2,
         y: px.baseHeight / 2,
         text: 'You clicked on the rectangle',
         font: '16px Arial'
    });
});

🔺 Polygons Made Easy

Even complex shapes like polygons are straightforward to draw:

const size = 100;
px.append('polygon', {
    width: size,
    height: size,
    x: (px.baseWidth  - size) / 2,
    y: (px.baseHeight - size) / 2,
    fill: '#268984',
    shape: 'polygon',
    points: [
        {x: -size / 2, y: -size / 2},
        {x: size / 2, y: -size / 2},
        {x: size / 2, y: 0},
        {x: 0, y: size / 2},
        {x: -size / 2, y: 0}
    ]
});

🕹️ Example: Player Movement

Here’s a simple player movement example using keyboard controls:

import Pixalo from 'https://cdn.jsdelivr.net/gh/pixalo/pixalo@master/dist/pixalo.esm.js';

const game = new Pixalo('#canvas', {
    width : window.innerWidth,
    height: window.innerHeight
});

await game.loadAsset('image', 'player', 'https://raw.githubusercontent.com/pixalo/pixalo/refs/heads/main/examples/assets/character.png');

game.start();

const player = game.append('player', {
    width: 100,
    height: 100,
    x: 10,
    y: 10,
    image: 'player'
});

game.on('update', deltaTime => {
    const speed = 150;
    const step = speed * (deltaTime / 1000);
    let dx = 0, dy = 0;
    const leftKey = game.isKeyPressed('left', 'a');

    if (leftKey) dx -= step;
    if (game.isKeyPressed('right', 'd')) dx += step;
    if (game.isKeyPressed('up', 'w')) dy -= step;
    if (game.isKeyPressed('down', 's')) dy += step;

    player.style('flipX', leftKey).move(dx, dy);
});

🌐 Try Pixalo Yourself

Pixalo is still evolving — WebGL support is coming soon — but it’s already stable, fast, and fun to use.
Whether you’re prototyping or building a full indie game, Pixalo aims to make your workflow smooth and intuitive.

👉 Website: https://pixalo.xyz
👉 GitHub: https://github.com/pixalo/pixalo

💬 Final Thoughts

Pixalo was built with one simple goal:

To make 2D game development feel natural, expressive, and enjoyable — not tedious.

If you love making games in JavaScript and want a fresh, developer-friendly engine, give Pixalo a try and share your feedback. Every contribution and idea helps shape its future.