
NoamVC v0.3 — We deleted 3,500 lines and the app got better

2026-02-18 04:34:00

🎙️ What is NoamVC?

NoamVC is a peer-to-peer encrypted voice chat desktop app — think Discord, but your voice never touches a server. Audio travels directly between participants via WebRTC with end-to-end encryption.

Built with Tauri 2 (Rust) + React 19 + TypeScript. Available for macOS (ARM + Intel) and Windows.

TL;DR of the security model:

  • P2P audio via RTCPeerConnection — zero server-side processing
  • E2E encryption with Insertable Streams + PBKDF2 (256-bit key, 100K SHA-256 iterations; key derivation sketched below)
  • HMAC-SHA256 signed signaling — secret embedded in Rust binary, never touches JS
  • Secure storage with IOTA Stronghold — nothing in localStorage
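
To make the key-derivation bullet concrete, here is a minimal Web Crypto sketch using the parameters above (PBKDF2 over SHA-256, 100,000 iterations, 256-bit AES key). It is illustrative only, not NoamVC's actual code:

// Illustrative sketch of the PBKDF2 parameters described above (not NoamVC's actual code).
async function deriveRoomKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
  // Import the raw passphrase as PBKDF2 key material.
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]);
  // Derive a 256-bit AES-GCM key with 100,000 SHA-256 iterations.
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]);
}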

First time here? Check out the previous posts in the series for the full architecture breakdown.

🆕 What's new in v0.2.1 → v0.3.3

1. 🚪 Room Admission System (v0.3.0)

The biggest feature in this cycle. Previously, anyone with the room code could join silently. Now the room creator acts as host and must approve every join request.

New user → knock → Host sees dialog → Admit / Deny

How it works:

  • The first user to enter a room automatically becomes the host.
  • Subsequent participants enter a "waiting" state until explicitly admitted.
  • The host sees a dialog with the requester's name and avatar, plus admit/deny buttons.
  • Denied users get a clear message and return to the lobby.
  • Fully managed at the signaling level — zero changes to the WebRTC connection flow.

This is a significant UX and security improvement. In the previous model, knowing the room code was equivalent to being inside. Now there's a human gatekeeper.
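
For intuition, a host-side handler for this flow might look roughly like the sketch below. The event names and payload shapes are made up for illustration; they are not NoamVC's actual signaling protocol:

import { io } from "socket.io-client";

// Hypothetical message shape; the real protocol may differ.
interface JoinRequest {
  roomId: string;
  peerId: string;
  displayName: string;
}

const socket = io("wss://signaling.example.com"); // placeholder URL

// Host side: every join request needs an explicit admit/deny decision
// before any WebRTC negotiation starts for that peer.
socket.on("join:request", (req: JoinRequest) => {
  const admitted = window.confirm(`Admit ${req.displayName}?`); // stand-in for the real dialog
  socket.emit("join:decision", { roomId: req.roomId, peerId: req.peerId, admitted });
});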

2. 🐛 In-App Bug Reports → Firestore (v0.2.1)

Users can report bugs without leaving the app:

type BugCategory = "crash" | "audio" | "connection" | "ui" | "other";
  • Form with title, description, and category selector.
  • Real-time validation (title ≥ 3 chars, description ≥ 10 chars).
  • Sent to Firestore via REST API — no heavy Firebase SDK bundled.
  • Lazy-loaded: Firebase code doesn't load until the user opens the dialog.

The interesting part security-wise: the Firebase API key is embedded in the Rust binary at compile time. It never appears in the JavaScript bundle:

const FIREBASE_API_KEY: &str = env!("FIREBASE_API_KEY");

#[tauri::command]
fn get_firebase_api_key() -> String {
    FIREBASE_API_KEY.to_string()
}

The JS side calls this Tauri command on demand, so the key only exists in the Rust binary and only flows to the frontend when needed.
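
On the frontend, fetching the key on demand is essentially a one-liner with Tauri's invoke. A minimal sketch (the actual wrapper in NoamVC may differ):

import { invoke } from "@tauri-apps/api/core";

// Ask the Rust side for the key only when the bug-report dialog opens.
async function getFirebaseApiKey(): Promise<string> {
  return invoke<string>("get_firebase_api_key");
}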

3. 🗑️ Embedded Server Removal — The -3,546 LOC Delete (v0.3.1)

Previous versions of NoamVC bundled an embedded signaling server in the Tauri binary (Axum + Socketioxide) for local/LAN development. We nuked it entirely.

What was removed:

  • Axum, Socketioxide, tower, and all related Rust dependencies from the desktop app.
  • -3,546 lines of code.

What replaced it:

  • The signaling server is now a standalone Rust service (Axum + Socketioxide) deployed on Railway. Same tech, separate repo, clean boundary.

Why?

  1. The embedded server added ~30s of extra compile time.
  2. It confused developers about which server was active.
  3. Nobody used it for LAN in practice.
  4. Removing it shrinks the binary and reduces the attack surface.

The best code is the code you don't have. - Someone wise

4. 🔧 Duplicate Peers Fix (v0.3.1)

A subtle race condition: under certain reconnection scenarios, the same peer could appear twice in the participant list. Root cause was the de-duplication logic in the WebRTC hook not accounting for rapid disconnect/reconnect cycles. Fixed with proper peer identity tracking.
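
The post doesn't show the fix, but the general shape of peer-identity tracking is a map keyed by a stable peer ID, roughly like this illustrative sketch:

// Illustrative sketch: keying peers by a stable ID means a fast
// disconnect/reconnect replaces the entry instead of duplicating it.
interface Peer {
  id: string;
  connection: RTCPeerConnection;
}

const peers = new Map<string, Peer>();

function upsertPeer(id: string, connection: RTCPeerConnection): void {
  const existing = peers.get(id);
  if (existing) {
    existing.connection.close(); // drop the stale connection from the previous session
  }
  peers.set(id, { id, connection });
}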

5. 🔒 CSP Update for Firestore (v0.3.2)

After adding bug reports, Tauri's Content Security Policy was silently blocking requests to Firestore. The fix:

"connect-src": "... https://firestore.googleapis.com"

Small change, but a reminder: Tauri's CSP is strict by default (which is good), and every new external endpoint requires explicit allowlisting.

6. 🏗️ The CI/CD Bug That Took 3 Releases to Fix (v0.3.2 → v0.3.3)

This one was frustrating. The build.rs script loaded env vars from .env for local builds, but in GitHub Actions the variables were in the OS environment and weren't being forwarded to rustc.

The error:

error: environment variable `FIREBASE_API_KEY` not defined at compile time
 --> src\desktop.rs:11:32

The fix — explicitly forward OS env vars to the Rust compiler in build.rs:

// Forward the OS env vars that `env!()` needs to rustc (FIREBASE_API_KEY here,
// plus any other compile-time secrets you use).
let keys = ["FIREBASE_API_KEY"];
for key in keys {
    if let Ok(val) = std::env::var(key) {
        println!("cargo:rustc-env={}={}", key, val);
    }
}

Plus the workflow secret:

env:
  FIREBASE_API_KEY: ${{ secrets.FIREBASE_API_KEY }}

Key takeaway for Tauri/Rust devs:
env!() reads variables that Cargo knows about, not the shell environment directly. If your CI sets env vars in the usual way, you need cargo:rustc-env in build.rs to bridge that gap. This isn't documented prominently and bit us hard.

📊 Numbers for this cycle

| Metric | Value |
|---|---|
| Commits | 8 |
| Files changed | 51 |
| Lines added | ~1,222 |
| Lines deleted | ~4,768 |
| Net | -3,546 lines ✂️ |

We deleted 3.9x more code than we wrote. The best kind of release.

🛠️ Stack

| Layer | Technology |
|---|---|
| Frontend | React 19, TypeScript 5.9, Vite 7 |
| Styles | Tailwind CSS 4, shadcn/ui |
| Desktop | Tauri 2 (Rust backend) |
| Signaling | Rust (Axum + Socketioxide) → Railway |
| Encryption | Insertable Streams + PBKDF2, HMAC-SHA256 |
| Storage | IOTA Stronghold |
| CI/CD | GitHub Actions → auto draft release |
| Bug Reports | Firestore REST API |

🔜 What's next

  • Push-to-talk — alternative mode for noisy environments
  • Per-peer connection quality indicators — RTT, packet loss, jitter visualized per participant
  • BLE data transmission — exploring Bluetooth Low Energy as an alternative signaling channel for ultra-local, serverless room setup between nearby devices
  • Linux support — AppImage / .deb packages

🔗 Links

NoamVC is under active development. If you're working with WebRTC, Tauri, or applied cryptography — or just want a voice chat that doesn't route your audio through someone else's server — give it a look. Feedback and contributions welcome.

DAY6 - Event-driven

2026-02-18 04:29:03

Overview

Today, I’ll build a simple event-driven pipeline using Amazon EventBridge, AWS Step Functions, AWS Lambda, Amazon SQS, Amazon SNS, and Amazon S3.
EventBridge triggers a Step Functions state machine, which invokes a producer Lambda. The producer stores the result in S3 and sends a message to SQS. SQS then triggers a worker Lambda, which publishes a notification to SNS. Finally, SNS delivers the message to my email.

※Using SQS to decouple components is a common exam pattern and a real-world best practice: separating components per task accounts for Lambda's execution time limit and simplifies troubleshooting when errors occur.

Hands-on

1. Create an S3 bucket

Block Public Access : ON (default)
Default encryption : SSE-S3 (default)

2. Create an SNS topic

Type : Standard

Check the SNS Topic ARN!

Create a subscription to receive email notifications.
Open the SNS topic created in the previous step → Subscriptions
Protocol : Email
Endpoint : your email address

Check your email and confirm subscription!

3. Create an SQS queue

Type : Standard
Visibility timeout : 30s

Check the SQS queue URL!

4. Create a Producer Lambda (saves to S3 + sends to SQS), invoked by Step Functions

1. Create an IAM role for the Lambda

Create an IAM role and attach the following inline policy to allow the Lambda to access the S3 bucket and SQS.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutToS3",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<s3-bucket-name>/*"
    },
    {
      "Sid": "SendToSQS",
      "Effect": "Allow",
      "Action": ["sqs:SendMessage"],
      "Resource": "arn:aws:sqs:<region>:<account-id>:<queue-name>"
    }
  ]
}

2. Create a Lambda function

Runtime : Python 3.12
Role : the IAM role created in the previous step

Set the following environment variables in the Configuration → Environment variables section.


BUCKET = <your S3 bucket name>
QUEUE_URL = <URL of SQS>

Set the following code and deploy.

import json, os, time, uuid
import boto3

# Initialization
s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Reading Environment Variables
BUCKET = os.environ["BUCKET"]
QUEUE_URL = os.environ["QUEUE_URL"]

# Receive the event passed by the Step Functions
def lambda_handler(event, context):
    job_id = str(uuid.uuid4())
    ts = int(time.time())

    # Create the object sent to the S3 bucket
    result = {
        "jobId": job_id,
        "timestamp": ts,
        "input": event
    }

    # Decide the S3 storage destination
    key = f"day6/results/{job_id}.json"

    # Put Object to the S3 bucket
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(result).encode("utf-8"),
        ContentType="application/json"
    )

    # Create the message sent to the SQS
    msg = {
        "jobId": job_id,
        "bucket": BUCKET,
        "key": key
    }

    # Send message to the SQS
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(msg)
    )

    # Return the information to the Step Functions
    return {
        "jobId": job_id,
        "s3": f"s3://{BUCKET}/{key}",
        "queued": True
    }

5. Create a Worker Lambda (publishes to SNS), triggered by SQS

1. Create an IAM role for the Lambda

Create an IAM role with the AWSLambdaBasicExecutionRole managed policy, and attach the following inline policy to allow the Lambda to publish to SNS and receive messages from SQS.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublishToSNS",
      "Effect": "Allow",
      "Action": ["sns:Publish"],
      "Resource": "arn:aws:sns:<region>:<account-id>:hs-day6-topic"
    },
    {
      "Sid": "SQSReceiveForWorker",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:ChangeMessageVisibility"
      ],
      "Resource": "<QUEUE_ARN>"
    }
  ]
}

2. Create Lambda

Runtime : Python 3.12
Role : the IAM role created in the previous step

Set the following environment variables.

TOPIC_ARN = <your SNS Topic ARN>

Set the following code.

import json, os
import boto3

# Initialization
sns = boto3.client("sns")
TOPIC_ARN = os.environ["TOPIC_ARN"]

# Receive the event from SQS
def lambda_handler(event, context):

    # Loop over each SQS record
    for r in event.get("Records", []):
        body = r["body"]
        msg = json.loads(body)

        # Create the notification message
        text = (
            f"Day6 completed.\n"
            f"jobId: {msg.get('jobId')}\n"
            f"S3: s3://{msg.get('bucket')}/{msg.get('key')}\n"
        )

        # Publish to the SNS topic
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Day6 Notification",
            Message=text
        )

    return {"ok": True}

3. Set the SQS as the trigger

Worker Lambda → Add trigger → SQS created in the previous step
Batch size : 1

6. Create Step Functions

Step Functions → State machines → Create state machine
Type : Standard

{
  "Comment": "Day6: Invoke producer lambda",
  "StartAt": "InvokeProducer",
  "States": {
    "InvokeProducer": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "hs-day6-producer",
        "Payload.$": "$"
      },
      "Retry": [
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 2,
          "MaxAttempts": 2,
          "BackoffRate": 2.0
        }
      ],
      "End": true
    }
  }
}

7. Create an EventBridge rule

Rule type : EventBridge Scheduled rule (5 minutes)
Target : Step Functions state machine
Input : set the following example

{"source":"eventbridge","note":"day6 test"}

8. Functionality Verification

You've created all resources!
Enable the EventBridge rule and wait a few minutes...

➀ EventBridge triggers the Step Functions state machine every 5 minutes.
※You can confirm the Step Functions execution in the "Executions" tab.

➁ Step Functions invokes the Producer Lambda.
※You can confirm the Producer Lambda was invoked in the "Monitor" tab.

➂ The Producer Lambda saves the data to S3.
※The JSON file is created in the S3 bucket!

➃ The Producer Lambda sends a message to SQS.
※You can confirm that SQS received the message from the Lambda in the "Monitoring" tab.

➄ SQS triggers the Worker Lambda, which publishes the result to SNS.
※You can confirm the Worker Lambda was invoked in the "Monitor" tab.

➅ SNS sends an email notification to the user.
※You should receive the notification email from SNS!

※At first, my Worker Lambda failed because I had put the SNS subscription's ARN (not the SNS topic's ARN!) in the IAM policy attached to the Lambda... If you don't receive the email, review the CloudWatch Logs in the order the steps run.

9. Tidying up

  • Delete the EventBridge Rule
  • Delete the Step Functions state machine
  • Delete Lambda (Producer/Worker)
  • Delete SQS queue
  • Delete SNS Topic + Subscription
  • Delete S3 bucket

For the exam

Key exam points related to today's services.

EventBridge

  • Routes events to multiple targets (Step Functions/Lambda/SQS/SNS...). ※EventBridge only routes events; when you need to guarantee order or reprocess messages, use SQS queuing.

Step Functions

  • Orchestrates the processing.
  • Use the correct mode: need history/observability or long-running processes → Standard; high event volume or ultra-low latency → Express.
  • When processing fails, you can choose Retry or Catch (to run other processing, such as a notification).

Lambda

  • Executes code without managing servers.
  • For heavy or time-consuming processing, use ECS/Batch instead, since Lambda has an execution time limit.

SQS

  • A message queuing service. Processing can be made asynchronous and loosely coupled.
  • SQS itself does not execute any processing; consumers such as Lambda poll the queue and process the messages.

MLflow: first steps in MLOps

2026-02-18 04:25:27

Achieving an excellent metric with a Machine Learning model is not an easy task; now imagine not being able to reproduce the results because you can't remember which hyperparameters were used.

The idea is to move past "I trained a model and it worked" and into the world of MLOps, where I can track experiments, compare runs, store artifacts, and repeat the process with confidence.

What is MLOps?

MLOps is the practice of applying DevOps principles to machine learning (ML) projects. It involves managing the lifecycle of ML projects, from traceability during training development all the way to deployment and monitoring.

In traditional software development, code version control (such as git) is enough. In ML, the result is the combination of Code + Data + Hyperparameters.

A dedicated tool can make model auditing and collaboration easier before the experimentation process turns chaotic. The absence of a structured history prevents reproducibility and keeps any AI project from scaling.

What is MLflow

MLflow is a tool that helps apply MLOps to organize the lifecycle of ML projects. Even in a simple project it already helps a lot, because it creates a history of what was done in each training run.

In practice, MLflow helps me answer questions like:

  • Which set of parameters produced the best metric?
  • When did I run the training and what was the result?
  • Where is the trained model from that run stored?

MLflow as your Tracking layer

MLflow exists to standardize this management. For now, we will focus on the three fundamental types of records:

  • Parameters: configuration inputs (e.g. learning_rate, n_estimators)
  • Metrics: performance results (e.g. accuracy, log_loss)
  • Artifacts: the binary output (the trained model as .pkl or .onnx)

MLflow turns ad-hoc experimentation into an auditable, standardized system of record.

Hands On: Project setup

As usual, I will demonstrate how to implement a simple MLflow configuration:

  • Backend Store: SQLite (an mlflow.db file) to store metadata (experiments, runs, params, metrics).
  • Artifact Store: file store (the mlruns/ folder) to store artifacts (such as the saved model).

Server setup

# Installation
pip install mlflow

# Start the server
mlflow server \
    --backend-store-uri sqlite:///mlflow.db \
    --default-artifact-root ./mlruns \
    --host 0.0.0.0 \
    --port 5000

Parameters:

  • mlflow server: starts the MLflow service
  • backend-store-uri: I chose SQLite because it is lightweight, ideal for a first iteration with the tool
  • default-artifact-root: path where experiment artifacts will be stored
  • host and port: address where the server will be reachable, including for the UI

MLflow properties:

When using SQLite + a file store, you will typically see two main components in the MLflow directory:

  • mlflow.db: SQLite database with metadata. It stores experiments, runs, parameters, and metrics.
  • mlruns/: folder with the artifacts and run contents. It stores models, plots, etc.

Integrating the training script

The Python mlflow library lets us hook into the tool easily, with little impact on the code; the integration is quite minimal. Just set the initial parameters and use mlflow.start_run():

import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Customer_Churn_Experiment")

with mlflow.start_run():
    # Train the model...
    mlflow.log_param("max_depth", 10)
    mlflow.log_metric("accuracy", 0.85)
    mlflow.sklearn.log_model(model, "random_forest_model")

In this setup, the MLflow infrastructure is decoupled from the training code, which is why the instance URI needs to be configured via set_tracking_uri.

First iteration

The first time I run my training script:

  1. The Customer_Churn_Experiment experiment is created automatically

  2. A run is recorded

  3. Params, metrics, and artifacts are sent to the tracking server

Tracked params and metrics

In my case, I logged:

  • Params: n_estimators, max_depth
  • Metrics: accuracy, recall

y_pred = model.predict(X_test)  
acc = accuracy_score(y_test, y_pred)  
rec = recall_score(y_test, y_pred)  

mlflow.log_param("n_estimators", n_estimators)  
mlflow.log_param("max_depth", max_depth)  
mlflow.log_metric("accuracy", acc)  
mlflow.log_metric("recall", rec)

This lets me compare different runs without relying on print statements in the terminal.

Second iteration

The real value of MLOps shows up in the comparative analysis of the models the scientist tried, as well as the path taken along the way.

At this stage, I ran a second iteration, changing the values of the max_depth and n_estimators hyperparameters. Although I expected an improvement, inspecting the MLflow UI showed that the accuracy metric dropped.

This is where I saw the real power of the tool:

  • Traceability: you have empirical evidence that the hyperparameter change caused overfitting

  • Technical comparison: through the MLflow UI, you can compare the two runs side by side

  • Reproducibility pillar: if you need to go back to the previous version, the artifact is registered and ready for deployment (I'll go deeper into this in the future)

If you're still surprised by what you've seen, know that this tool really is a game changer. It transforms how we manage traceability across the model lifecycle and simplifies comparing different versions, and it is essential from the very first tests to projects built from scratch.

Checking metadata in SQLite

A nice part of this kind of local setup is that I can inspect the SQLite database from the command line.

  • Experiments registered in SQLite:

  • Logged parameters and metrics:

Conclusion and next steps

Mastering the management of model state is a fundamental requirement in MLOps engineering. While MLflow solves the tracking layer, the lifecycle extends to the Model Registry and Model Serving. Soon, we will explore how these stages ensure a model's maturity in production.

Mastering experiment tracking is the foundational groundwork for implementing Continuous Training (CT) in the future.

Did you enjoy this technical approach?

This project is available on my GitHub: mlops-telco-churn-predict.

And you: do you already use MLflow in your day-to-day work? In your view, what is the biggest challenge in guaranteeing full model reproducibility?

Let's discuss in the comments!

Kollabe vs EasyRetro: two free retro tools, very different bets 🏆

2026-02-18 04:21:08

Both Kollabe and EasyRetro have free plans. Both run sprint retrospectives. That's roughly where the similarities end.

I've been facilitating retros for six years across different team sizes and industries. These two tools have opposite philosophies about what a retro tool should be, and the choice between them says a lot about what your team actually needs.

I maintain detailed reviews of both tools (and about a dozen others) at RetroTools.io.

The pitch

EasyRetro has been around since 2015. Originally called FunRetro, it does one thing: retro boards. No planning poker. No standups. No health checks. Just boards with sticky notes, voting, and action items. It's been quietly doing this while other tools rebrand every year and bolt on features nobody asked for.

Kollabe launched in 2022 and went the opposite direction. Retros, planning poker, and async standups in one platform. AI grouping, AI summaries, inline polls, a drawing tool. It's trying to be the single tool your team opens for every ceremony.

Two different bets. EasyRetro bets you want one thing done well. Kollabe bets you're tired of paying for three separate tools.

Free plan: the part that actually matters


If you're reading a comparison post, there's a decent chance you're evaluating free tiers. So let's start there.

| | Kollabe Free | EasyRetro Free |
|---|---|---|
| Boards/meetings | Limited per month | 1 board on dashboard at a time |
| Participants | Up to 10 per room | Unlimited per board |
| History | 7 days | None (must delete to create new) |
| Templates | All 1,000+ | All 200+ |
| AI grouping | Yes | No |
| AI summaries | Yes | No |
| Anonymous mode | Yes | Yes |
| Voting | Yes | Yes |
| Planning poker | Yes (10 issues/session) | No |
| Standups | Yes | No |
| Exports | No | No |
| Integrations | No | No |
| Surveys | No | 1 per month |

Neither free plan is generous. They're not supposed to be. But the constraints are different in ways that matter.

EasyRetro lets unlimited people join a board. That's great for large workshops or cross-team retros. But you get one board. One. And archived boards count against that limit — you have to delete your previous retro before creating a new one. For a team running biweekly sprints, that means your retro data disappears every two weeks. There's no history to look back on.

Kollabe caps you at 10 participants but gives you AI grouping and summaries on the free plan. That's unusual. Most tools gate AI behind their mid-tier paid plan. The 7-day history window is tight, but at least you have some window to reference past discussions. And you get planning poker and standups included, which is three tools for the price of zero.

Short version: EasyRetro's free plan is better for occasional, large-group retros. Kollabe's free plan is better for small teams running recurring ceremonies.

Retro board experience

This is where you'll spend 90% of your time, so it matters more than any feature table.

EasyRetro


EasyRetro's board is clean. Deliberately simple. You create columns, people add cards, you vote, you discuss. The presentation mode (revealing columns one at a time) is genuinely useful for controlling the flow of a retro without a formal guided workflow. Password-protected boards mean you can share a link without worrying about random people wandering in.

The anonymity model is worth calling out. Participants don't need accounts. They don't even need to sign up. Only the board creator needs an EasyRetro account. For ad-hoc retros or workshops where you're bringing in people from outside your org, this removes a real friction point that most tools ignore.

Card merging works well. Drag one card onto another, they combine. You can unmerge later if you grouped wrong. But this is manual. With 15 cards, fine. With 50 cards from a team of 12, you're spending ten minutes dragging and dropping before the actual discussion starts.

Kollabe


Kollabe's AI grouping is the headline feature and it earns it. It uses semantic similarity, not keyword matching. "Our deployments are slow" and "CI/CD pipeline needs attention" end up in the same group without you touching anything. After running retros with both tools back to back, the time saved on grouping alone is 5-10 minutes per session. Over a year of biweekly sprints, that's roughly three hours of meeting time you get back. Not transformative, but not nothing.

The guided facilitation flow walks your team through phases: brainstorm, group, vote, discuss, action items. Good for newer facilitators or teams that tend to go off the rails. EasyRetro doesn't have this. Presentation mode gives you some control, but it's not the same as a structured workflow.

Kollabe also has a drawing tool, inline polls on retro items, and GIF support. Whether those matter depends on your team's culture. Some teams communicate better with sketches. Most don't need them.

Templates

EasyRetro has 200+ templates across multiple languages with an AI template generator. Kollabe claims 1,000+ with its own AI generator. Both numbers are high enough that the count doesn't really matter. What matters is whether the templates are good.

Both cover the basics: Start/Stop/Continue, Mad/Sad/Glad, 4Ls, Sailboat, Starfish. EasyRetro's templates are straightforward. Pick one, it creates the columns. Kollabe's templates include themed backgrounds (30+ options), which is a nice touch if your team responds to visual variety. After a year of the same white board, even a small visual change can reset the "retro fatigue" clock.

Neither tool's template library is a differentiator. Pick either one and you'll find what you need in under a minute.

Beyond retros

This is where the comparison gets lopsided.

EasyRetro is retros. That's it. If you need planning poker, you're opening a second tool. Standups? Third tool. Health checks? Find a template or use a spreadsheet.

Kollabe covers three ceremonies:

  • Planning poker with Fibonacci, T-shirt, and custom decks. Import tickets from Jira (JQL), GitHub, Azure DevOps, or Linear. Estimates sync back automatically.
  • Async standups with persistent daily rooms, customizable questions, and AI-generated daily/weekly summaries.
  • Retros with everything discussed above.

If your team runs all three, that's one login instead of three. One subscription. One place where data lives. If your team only runs retros, Kollabe's extra features are irrelevant and EasyRetro's simplicity is an advantage.

Integrations

| Integration | Kollabe | EasyRetro |
|---|---|---|
| Jira | Import + Export | Export only |
| GitHub | Import + Export | No |
| Azure DevOps | Import + Export | No |
| Linear | Import + Export | No |
| Confluence | Export | Export |
| Trello | No | Export |
| Slack | No | Notifications only |
| MS Teams | No | Board embedding |

EasyRetro's Jira integration is mature. Bulk export to Jira landed in September 2024, and the action item export has been solid since 2022. If your workflow is "run retro, push action items to Jira, track them there," EasyRetro handles that well.

Kollabe goes deeper on the import side. Pull tickets from Jira via JQL, import GitHub issues, bring in Azure DevOps work items with WIQL queries. Estimates sync back to your project management tool automatically. But no Slack integration on any plan. That's a real gap.

EasyRetro has basic Slack notifications and Teams embedding. Nothing deep, you won't be creating cards from Slack, but it exists. Neither tool has Zapier or webhook support, though Kollabe launched a public API in February 2026 that might partially fill that gap.

Pricing (when you outgrow free)

This is where your finance team gets involved.

| | Kollabe Premium | EasyRetro Team |
|---|---|---|
| Price | $29/month flat | $38/month flat |
| What you get | All ceremonies, unlimited participants, unlimited history, all AI features, all integrations, exports, analytics | 5 boards/month, 1 team, unlimited participants, integrations, exports, analytics |

Both use flat pricing, not per user. But the value at each tier is different.

Kollabe's $29/month gets you everything. Retros, poker, standups, unlimited participants, unlimited history, the works. One price, one team.

EasyRetro's $38/month gets you retro boards. Five per month. Just retros. If you need more boards, the Business tier is $60/month for 15 boards, or $90/month for 30.

There's a catch on Kollabe's side though. Each "Space" is one team. If you have three teams, that's $87/month. EasyRetro's Business plan at $60/month gives you 3 teams with 15 boards. For multi-team organizations, run the math on your specific structure before deciding.

For a single team, Kollabe is cheaper and gives you more. For three or more teams, it depends on how many boards you create per month.

The stuff nobody mentions

A few things that don't show up in feature comparison tables but affect your daily experience.

Kollabe participants don't need accounts. Send a link, they're in. EasyRetro requires participants to create a free account. For your regular team, that's a one-time friction. For workshops or cross-org retros with external stakeholders, Kollabe wins on access speed.

Both are web-only with no native apps. EasyRetro added mobile card dragging in April 2024. Both work on mobile browsers but are clearly built for desktop. Don't expect anyone to run a retro from their phone.

EasyRetro stores data in US Central (Firebase/GCP). Kollabe stores data in Australia (DigitalOcean). Neither offers data residency options. If you have data sovereignty requirements, this might matter.

Neither has SOC 2 certification. EasyRetro relies on GCP's certifications. Kollabe runs monthly penetration testing through Intruder. Both encrypt data in transit and at rest. Neither has audit logs. If your procurement team needs SOC 2 paperwork, look at TeamRetro instead.

Who should pick what

Pick EasyRetro if:

  • Your team only needs ultra simple retro boards, nothing else
  • You run large retros or workshops with external participants who shouldn't need to create accounts
  • You're a Jira shop and the action item export matters
  • You prefer simplicity over features (fewer buttons, fewer decisions)

Pick Kollabe if:

  • Your team runs retros and planning poker (or standups, or both)
  • You have 10 or fewer people and want the most capable free plan
  • AI grouping would actually save you time (it does if your retros regularly produce 30+ cards)
  • You want flat pricing that doesn't scale with headcount

Pick neither if:

  • You need SOC 2 compliance (TeamRetro)
  • You want open source and self-hosting (Parabol)
  • You're already paying for Miro and want to consolidate (Miro)

Both tools have real gaps. EasyRetro hasn't shipped AI grouping after two years of competitors having it. Kollabe still doesn't have Slack integration, which is table stakes for most teams. Pick the gaps you can live with.

Head-to-head comparisons for these and twelve other retro tools are at RetroTools.io. No affiliate links, no paid placements.

What's your team using for retros right now? Curious whether anyone's running Kollabe and EasyRetro side by side for different ceremonies.

AWS CDK Community Update: Jan/Feb 2026!

2026-02-18 04:20:33

What's New

Hello! So excited to get the regular blogpost updates on AWS CDK going for 2026! This update covers CDK updates from December 2025 through February 2026. The community delivered 150+ pull requests spanning EKS, Bedrock, ECS, and 15+ other AWS services. Whether you're new to CDK or a seasoned user, you'll find new capabilities that make infrastructure as code more powerful and easier to maintain.

Community at a Glance

  • 150+ pull requests merged from external contributors
  • 25+ first-time contributors joined the project
  • 17 top contributors driving major features
  • Active development across EKS, Bedrock, ECS, and 15+ other services

Upgrade Now

npm install -g aws-cdk@latest

TL;DR

Top features to try this week:
CDK Mixins (Preview): Add capabilities to constructs without rebuilding them link
Kiro Power for AWS IaC: AI-powered CDK and CloudFormation development assistance link
ECS Deployments: Built-in Linear/Canary strategies for safer rollouts link
CloudWatch Logs: Deletion protection prevents accidental data loss link
EKS Hybrid Nodes: Integrate on-premises infrastructure with EKS clusters link
Bedrock AgentCore: API Gateway integration and episodic memory for AI agents link
Also new: Glue typed partition projection, Synthetics Playwright 5.0, RDS enhanced metrics

Major Features

CDK Mixins (Preview) - Add Features to Any Construct

Mixins let you compose functionality onto existing constructs. Think of them as reusable "add-ons" that can be applied to any compatible construct. This means you no longer need to wait for a property to be supported in L2 constructs. For example, using an otherwise unsupported AnalyticsConfigurationProperty on the S3 L2 construct becomes simple with the CfnBucketPropsMixin.

const l2bucket = new s3.Bucket(this, 'L2 Bucket')
  .with(new CfnBucketPropsMixin({
    analyticsConfigurations: [{
      id: 'my-analytics',
      storageClassAnalysis: {
        dataExport: {
          destination: {....},
        },
      },
    }],
  }));

Try it: npm install @aws-cdk/mixins-preview - Read the docs
Note: Preview feature; the API may change. Currently TypeScript/JavaScript only.
Read blogpost about CDK Mixins

Amazon EKS Enhancements

The EKS module received significant love this quarter with multiple community-driven improvements:
EKS Hybrid Nodes Support: You can now seamlessly integrate on-premises and edge infrastructure with your EKS clusters. This opens up hybrid cloud architectures with consistent Kubernetes management.
Native OIDC Provider: We've introduced OidcProviderNative using L1 constructs, providing a more efficient alternative to the custom resource-based OpenIdConnectProvider. This improves deployment speed and reduces complexity.
New Access Entry Types: Support for EC2, HYBRID_LINUX, and HYPERPOD_LINUX access entry types gives you more granular control over cluster access patterns.

declare const cluster: eks.Cluster;
declare const nodeRole: iam.Role;

// Grant access with EC2 type for Auto Mode node role
cluster.grantAccess('nodeAccess', nodeRole.roleArn, [
  eks.AccessPolicy.fromAccessPolicyName('AmazonEKSAutoNodePolicy', {
    accessScopeType: eks.AccessScopeType.CLUSTER,
  }),
], { accessEntryType: eks.AccessEntryType.EC2 });

Read more in the EKS module documentation

Bedrock AgentCore Expansion

Thanks to Dinesh Sajwan and Yuki Matsuda, Bedrock AgentCore received powerful new capabilities:
API Gateway Target Support: Integrate your AgentCore gateways directly with API Gateway, enabling more flexible routing and integration patterns.
Gateway Interceptors: Add custom logic to intercept and transform requests/responses flowing through your AgentCore gateways.
Episodic Memory Strategy: Implement sophisticated memory patterns for your AI agents, allowing them to maintain context across interactions.
Read more in the Bedrock AgentCore module documentation

ECS Deployment Strategies

Yuki Matsuda contributed built-in support for Linear and Canary deployment strategies in ECS. These strategies reduce risk when rolling out changes by gradually shifting traffic to new versions.

declare const cluster: ecs.Cluster;
declare const taskDefinition: ecs.TaskDefinition;
declare const blueTargetGroup: elbv2.ApplicationTargetGroup;
declare const greenTargetGroup: elbv2.ApplicationTargetGroup;
declare const prodListenerRule: elbv2.ApplicationListenerRule;

const service = new ecs.FargateService(this, 'Service', {
  cluster,
  taskDefinition,
  deploymentStrategy: ecs.DeploymentStrategy.LINEAR,
  linearConfiguration: {
    stepPercent: 10.0,
    stepBakeTime: Duration.minutes(5),
  },
});

const target = service.loadBalancerTarget({
  containerName: 'web',
  containerPort: 80,
  alternateTarget: new ecs.AlternateTarget('AlternateTarget', {
    alternateTargetGroup: greenTargetGroup,
    productionListener: ecs.ListenerRuleConfiguration.applicationListenerRule(prodListenerRule),
  }),
});

target.attachToApplicationTargetGroup(blueTargetGroup);

Spot Instance Support: David Glaser added capacityOptionType to ManagedInstancesCapacityProvider, enabling cost-effective Spot instance usage for your ECS workloads.
Read more in the ECS module documentation

AWS Glue Typed Partition Projection

Kazuho Cryer-Shinozuka introduced typed partition projection for Glue tables, bringing type safety to your data catalog definitions:

declare const myDatabase: glue.Database;
new glue.S3Table(this, 'MyTable', {
  database: myDatabase,
  columns: [{
    name: 'data',
    type: glue.Schema.STRING,
  }],
  partitionKeys: [{
    name: 'date',
    type: glue.Schema.STRING,
  }],
  dataFormat: glue.DataFormat.JSON,
  partitionProjection: {
    date: glue.PartitionProjectionConfiguration.date({
      min: '2020-01-01',
      max: '2023-12-31',
      format: 'yyyy-MM-dd',
      interval: 1,  // optional, defaults to 1
      intervalUnit: glue.DateIntervalUnit.DAYS,  // optional: YEARS, MONTHS, WEEKS, DAYS, HOURS, MINUTES, SECONDS
    }),
  },
});

Read more in the Glue module documentation

RDS Enhancements

Enhanced Metrics: Jasdeep Singh Bhalla added Read/Write IOPS metrics to DatabaseInstance and Volume Read/Write IOPS metrics to DatabaseCluster, giving you better visibility into database performance.
RDS Proxy Auth Scheme: Yuki Matsuda added support for default authentication schemes in RDS Proxy, simplifying proxy configuration.
Read more in the RDS module documentation

CloudWatch Logs Deletion Protection

Robert Hanuschke contributed deletion protection configuration for CloudWatch Log Groups. This prevents accidental data loss by blocking log group deletion until protection is explicitly disabled.

new logs.LogGroup(this, 'LogGroup', {
  deletionProtectionEnabled: true,
});

Read more in the CloudWatch Logs module documentation

EC2 VPC Flow Logs to Firehose

Tietew added support for using a Firehose IDeliveryStreamRef as a flow log destination, enabling real-time log streaming and transformation.

API Gateway v2 EventBridge Integration

Jasdeep Singh Bhalla added PutEvents support for EventBridge integration in API Gateway v2, making event-driven architectures even easier to build.

Kiro Power for AWS Infrastructure as Code

The challenge: CDK and CloudFormation development requires constantly referencing documentation, understanding best practices, validating templates, and troubleshooting deployments.

The solution: Kiro's AWS Infrastructure as Code Power provides AI-powered assistance for CDK and CloudFormation development. It dynamically loads specialized tools and context only when needed, avoiding MCP context overload.

What it does:

  • Latest Documentation Access: Search CDK docs, API references, and CloudFormation specs
  • Code Generation: Generate well-architected infrastructure following AWS best practices
  • Template Validation: Check CloudFormation templates for errors and compliance issues
  • Deployment Troubleshooting: Analyze CloudTrail logs and diagnose deployment failures
  • Best Practices: Apply CDK-NAG rules and security configurations automatically

Try it: Install the power in Kiro and activate it when working on infrastructure code. The power uses the AWS IaC MCP Server under the hood, providing the same capabilities with better context management.

Learn more about Kiro Powers

More Features

Additional improvements this quarter:
Hotswap Support for Bedrock AgentCore
Kenta Goto added hotswap support for AWS::BedrockAgentCore::Runtime, enabling faster development iterations.
SageMaker Health Check Timeout (Сергей) - More control over endpoint health checks
IoT Actions HTTP Batching (Kazuho Cryer-Shinozuka) - Improved efficiency for high-volume IoT workloads
Step Functions JSONata Support (Y.Nakamura) - Dynamic control over parallel execution

Community Highlights

Top External Contributors

We want to give a special shoutout to our most active external contributors this quarter:
Yuki Matsuda (mazyu36) - ECS deployment strategies, RDS Proxy auth, Bedrock AgentCore improvements, and EventScheduler fixes
Jasdeep Singh Bhalla - RDS metrics, ECS log driver options, and API Gateway v2 EventBridge integration
Kazuho Cryer-Shinozuka - Glue typed partition projection, IoT Actions batching, and Redshift improvements
Kenta Goto (go-to-k) - Mixins preview improvements and documentation fixes, hotswap on AgentCore
Robert Hanuschke - CloudWatch Logs deletion protection
Tietew - EC2 Flow Logs Firehose destination support
yatakemi - Synthetics Playwright runtimes

Get Involved

The AWS CDK is powered by the AWS developer community. Here's how to contribute:

  1. Join cdk.dev on Slack and introduce yourself
  2. Try the CDK Workshop Video Series or Hands-on session
  3. Pick a good first issue - we'll help you through it

Ongoing

Answer questions: Help others on Stack Overflow
Share your work: Publish constructs to Construct Hub
Report bugs: Open an issue with reproduction steps
Stay updated: Star the aws-cdk repo for release notifications
Join our quarterly community meetings (videos available here)

Community Showcase

The CDK community continues to build, share, and teach. Here's how developers are using CDK in production and contributing back:

Learning & Best Practices

Manage IAM Policies Manually - Kenta Goto: Prevent CDK from automatically updating IAM role policies when you need full control
AI-Powered CDK Development with Kiro - Nao San: Generate CDK code with latest best practices using AI assistance
Build a Serverless Website from Scratch - Dixit Rathod: 4-part tutorial: S3 hosting, API Gateway, DynamoDB, full integration

Monitoring & Operations

One Alarm for Many Resources - Johannes Konings: Use CloudWatch Metrics Insights to monitor 100 resources with one alarm instead of 100
Tag Log Buckets for Security Scanners - Johannes Konings: Make cdk-nag work with third-party security tools through tagging
Faster AgentCore Deployments - Kenta Goto: Skip CloudFormation for faster AI agent development cycles

Infrastructure Patterns

CDK Terrain Announcement - Open Constructs Foundation has announced a community-driven continuation of the Cloud Development Kit for Terraform (CDKTF), under the new name CDK Terrain.
Workflow Automation on ECS - Vu Dao: Deploy N8N workflow platform with Fargate, RDS, and ElastiCache
A/B Testing at the Edge - Kihara Takuya: Switch between S3 origins using CloudFront Functions for experimentation
Remote Access with Client VPN - Andrew Dunn: Set up secure VPN access to private AWS resources
Deploy TanStack Start Serverless - Johannes Konings: Full-stack React framework on Lambda with API Gateway streaming
Customize CDK Bootstrap - Johannes Konings: Add encryption and access logging to CDK staging bucket

Developer Tools

Self-Hosted GitHub Actions Runners - Amir Szekely (CloudSnorkel): Ephemeral runners on EC2, Fargate, Lambda, CodeBuild, or ECS
Parallel Lambda Bundling - Marko (ServerlessLife): Bundle all Lambda functions at once instead of sequentially
Cost Analysis in Pull Requests - towardsthecloud: See AWS cost impact before merging infrastructure changes
Clean Up CDK Assets - Kenta Goto: Remove unused assets and Docker images from cdk.out directory
Visualize CDK Applications - Analyze and understand CDK app structure

AI & Advanced Use Cases

Build AI Agents with AgentCore - Martin Mueller: Local development to cloud deployment for Bedrock AI agents
Custom Cognito Email Flows - Lee Gilmore: Customize authentication emails with Lambda triggers

Video Content

CDK Development Workflows - Practical development techniques and patterns
Advanced CDK Patterns - Deep dive into CDK architecture

Thank You!

A huge thank you to everyone who contributed this quarter. Whether you submitted code, reported bugs, answered questions, or shared your knowledge, you're making infrastructure as code better for everyone.
Special recognition to our top external contributors who drove major features and fixes. Your work impacts thousands of CDK users worldwide.
We're excited to see what the community builds next. Happy coding!

I automated 44 things in one day. Half were useless. How do you decide what to automate?

2026-02-18 04:16:29

Yesterday I went on an automation spree. 44 scripts. Health checks, deployment verifiers, traffic dashboards, blog publishers, tag managers — you name it, I scripted it.

Today I looked at my usage stats. Half of those scripts have been run exactly once. The day I made them.

What I Actually Use Daily

Out of 44 scripts, here's what survived:

  • health-check.sh — pings all my sites, flags 404s instantly
  • auto-heal.sh — detects dead GitHub Pages → auto rebuilds
  • cycle-status.sh — one-command overview of everything
  • verify-deployment.sh — checks if a deploy actually worked (HTTP + content + git)

That's... 4. Out of 44. A 9% survival rate.

The Trap I Fell Into

I automated things that felt productive but weren't actually bottlenecks:

  • A "smart draft generator" for blog posts (I still rewrite everything manually)
  • A "cron bus" for passing data between scheduled tasks (I just read files)
  • A "relay queue dashboard" (the queue had 3 items in it)

The pattern: I automated the fun parts, not the painful parts.

My New Rule

Before automating anything, I ask:

"Have I done this manually at least 3 times AND hated it each time?"

If yes → automate. If no → just do it manually and move on.

What's Your Heuristic?

How do you decide what's worth automating? I've seen the XKCD chart, but real life is messier than that.

Some specific questions:

  1. Do you automate for current pain or anticipated future pain?
  2. How do you handle automation that works but nobody uses?
  3. At what team size does "just write a script" stop being the answer?

Would love to hear your war stories. Drop them below 👇

Building in public at maxmini0214.github.io — 18+ free dev tools if you want to check them out.