2026-02-18 04:34:00
NoamVC is a peer-to-peer encrypted voice chat desktop app — think Discord, but your voice never touches a server. Audio travels directly between participants via WebRTC with end-to-end encryption.
Built with Tauri 2 (Rust) + React 19 + TypeScript. Available for macOS (ARM + Intel) and Windows.
TL;DR of the security model:
RTCPeerConnection — zero server-side processing
First time here? Check out the previous posts in the series for the full architecture breakdown.
The biggest feature in this cycle. Previously, anyone with the room code could join silently. Now the room creator acts as host and must approve every join request.
How it works:
New user → knock → Host sees dialog → Admit / Deny
This is a significant UX and security improvement. In the previous model, knowing the room code was equivalent to being inside. Now there's a human gatekeeper.
Users can report bugs without leaving the app:
type BugCategory = "crash" | "audio" | "connection" | "ui" | "other";
The interesting part security-wise: the Firebase API key is embedded in the Rust binary at compile time. It never appears in the JavaScript bundle:
const FIREBASE_API_KEY: &str = env!("FIREBASE_API_KEY");
#[tauri::command]
fn get_firebase_api_key() -> String {
    FIREBASE_API_KEY.to_string()
}
The JS side calls this Tauri command on demand, so the key only exists in the Rust binary and only flows to the frontend when needed.
Previous versions of NoamVC bundled an embedded signaling server in the Tauri binary (Axum + Socketioxide) for local/LAN development. We nuked it entirely.
What was removed: the embedded Axum + Socketioxide server and everything in the binary that supported it.
What replaced it: the hosted signaling server on Railway, used for every environment including local development.
Why? Less code to maintain, and no divergence between the dev and production signaling paths.
The best code is the code you don't have. - Someone wise
A subtle race condition: under certain reconnection scenarios, the same peer could appear twice in the participant list. Root cause was the de-duplication logic in the WebRTC hook not accounting for rapid disconnect/reconnect cycles. Fixed with proper peer identity tracking.
After adding bug reports, Tauri's Content Security Policy was silently blocking requests to Firestore. The fix:
"connect-src": "... https://firestore.googleapis.com"
Small change, but a reminder: Tauri's CSP is strict by default (which is good), and every new external endpoint requires explicit allowlisting.
This one was frustrating. The build.rs script loaded env vars from .env for local builds, but in GitHub Actions the variables were in the OS environment and weren't being forwarded to rustc.
The error:
error: environment variable `FIREBASE_API_KEY` not defined at compile time
--> src\desktop.rs:11:32
The fix — explicitly forward OS env vars to the Rust compiler in build.rs:
fn main() {
    // Forward selected OS env vars to rustc so env!() can read them
    let keys = ["FIREBASE_API_KEY"];
    for key in keys {
        println!("cargo:rerun-if-env-changed={}", key);
        if let Ok(val) = std::env::var(key) {
            println!("cargo:rustc-env={}={}", key, val);
        }
    }
}
Plus the workflow secret:
env:
FIREBASE_API_KEY: ${{ secrets.FIREBASE_API_KEY }}
Key takeaway for Tauri/Rust devs: env!() reads variables that Cargo knows about, not the shell environment directly. If your CI sets env vars in the usual way, you need cargo:rustc-env in build.rs to bridge that gap. This isn't documented prominently and bit us hard.
| Metric | Value |
|---|---|
| Commits | 8 |
| Files changed | 51 |
| Lines added | ~1,222 |
| Lines deleted | ~4,768 |
| Net | -3,546 lines ✂️ |
We deleted 3.9x more code than we wrote. The best kind of release.
| Layer | Technology |
|---|---|
| Frontend | React 19, TypeScript 5.9, Vite 7 |
| Styles | Tailwind CSS 4, shadcn/ui |
| Desktop | Tauri 2 (Rust backend) |
| Signaling | Rust (Axum + Socketioxide) → Railway |
| Encryption | Insertable Streams + PBKDF2, HMAC-SHA256 |
| Storage | IOTA Stronghold |
| CI/CD | GitHub Actions → auto draft release |
| Bug Reports | Firestore REST API |
NoamVC is under active development. If you're working with WebRTC, Tauri, or applied cryptography — or just want a voice chat that doesn't route your audio through someone else's server — give it a look. Feedback and contributions welcome.
2026-02-18 04:29:03
Today, I’ll build a simple event-driven pipeline using Amazon EventBridge, AWS Step Functions, AWS Lambda, Amazon SQS, Amazon SNS, and Amazon S3.
EventBridge triggers a Step Functions state machine, which invokes a producer Lambda. The producer stores the result in S3 and sends a message to SQS. SQS then triggers a worker Lambda, which publishes a notification to SNS. Finally, SNS delivers the message to my email.
※ Using SQS to decouple components is a common exam pattern and a real-world best practice: separating each task into its own component accounts for Lambda's execution time limits and simplifies troubleshooting when errors occur.
Block Public Access : ON (default)
Default encryption : SSE-S3 (default)
Type : Standard
Check the SNS Topic ARN!
Create a subscription to receive Email notifications.
the SNS topic created in the previous step → Subscriptions
Protocol : Email
Endpoint : your email address
Check your email and confirm subscription!
Type : Standard
Visibility timeout : 30s
Check the SQS queue URL!
Create an IAM role and attach the following inline policy to allow the Lambda function to access the S3 bucket and SQS.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PutToS3",
"Effect": "Allow",
"Action": ["s3:PutObject"],
"Resource": "arn:aws:s3:::<s3-bucket-name>/*"
},
{
"Sid": "SendToSQS",
"Effect": "Allow",
"Action": ["sqs:SendMessage"],
"Resource": "arn:aws:sqs:<region>:<account-id>:<queue-name>"
}
]
}
Runtime : Python 3.12
Role : the IAM role created in the previous step
Set the following environment variables in the Configuration → Environment variables section.
BUCKET = <your S3 bucket name>
QUEUE_URL = <URL of SQS>
Set the following code and deploy.
import json, os, time, uuid
import boto3

# Initialization
s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Read environment variables
BUCKET = os.environ["BUCKET"]
QUEUE_URL = os.environ["QUEUE_URL"]

# Receive the event passed by Step Functions
def lambda_handler(event, context):
    job_id = str(uuid.uuid4())
    ts = int(time.time())

    # Create the object sent to the S3 bucket
    result = {
        "jobId": job_id,
        "timestamp": ts,
        "input": event
    }

    # Decide the S3 storage destination
    key = f"day6/results/{job_id}.json"

    # Put the object to the S3 bucket
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(result).encode("utf-8"),
        ContentType="application/json"
    )

    # Create the message sent to SQS
    msg = {
        "jobId": job_id,
        "bucket": BUCKET,
        "key": key
    }

    # Send the message to SQS
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(msg)
    )

    # Return the information to Step Functions
    return {
        "jobId": job_id,
        "s3": f"s3://{BUCKET}/{key}",
        "queued": True
    }
Also attach the AWSLambdaBasicExecutionRole managed policy so the function can write CloudWatch Logs.
Create an IAM role and attach the following policy to allow the Lambda function to publish to the SNS topic and receive messages from the SQS queue.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublishToSNS",
"Effect": "Allow",
"Action": ["sns:Publish"],
"Resource": "arn:aws:sns:<region>:<account-id>:hs-day6-topic"
},
{
"Sid": "SQSReceiveForWorker",
"Effect": "Allow",
"Action": [
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes",
"sqs:ChangeMessageVisibility"
],
"Resource": "<QUEUE_ARN>"
}
]
}
Runtime : Python 3.12
Role : the IAM role created in the previous step
Set the following environment variables.
TOPIC_ARN = <your SNS Topic ARN>
Set the following code.
import json, os
import boto3

# Initialization
sns = boto3.client("sns")
TOPIC_ARN = os.environ["TOPIC_ARN"]

# Receive the event from SQS
def lambda_handler(event, context):
    # Loop over the records
    for r in event.get("Records", []):
        body = r["body"]
        msg = json.loads(body)

        # Create the notification message
        text = (
            f"Day6 completed.\n"
            f"jobId: {msg.get('jobId')}\n"
            f"S3: s3://{msg.get('bucket')}/{msg.get('key')}\n"
        )

        # Publish to SNS
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Day6 Notification",
            Message=text
        )
Worker Lambda → Add trigger → SQS created in the previous step
Batch size : 1
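For reference, the same trigger can also be scripted with boto3 instead of the console; a minimal sketch, assuming a worker function named hs-day6-worker (the name and ARN below are placeholders):

import boto3

lambda_client = boto3.client("lambda")

# Equivalent of the console "Add trigger" step: connect the SQS queue
# to the worker Lambda with a batch size of 1.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:<region>:<account-id>:<queue-name>",  # placeholder
    FunctionName="hs-day6-worker",  # hypothetical function name
    BatchSize=1,
)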
Step Functions → State machines → Create state machine
Type : Standard
{
"Comment": "Day6: Invoke producer lambda",
"StartAt": "InvokeProducer",
"States": {
"InvokeProducer": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"Parameters": {
"FunctionName": "hs-day6-producer",
"Payload.$": "$"
},
"Retry": [
{
"ErrorEquals": ["States.ALL"],
"IntervalSeconds": 2,
"MaxAttempts": 2,
"BackoffRate": 2.0
}
],
"End": true
}
}
}
Rule type : EventBridge Scheduled rule (5 minutes)
Target : Step Functions state machine
Input : set the following example
{"source":"eventbridge","note":"day6 test"}
You've created all resources!
Enable the EventBridge rule and wait a few minutes...
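If you don't want to wait for the schedule, you can also start one execution by hand; a minimal boto3 sketch (the state machine ARN is a placeholder, copy yours from the console):

import json
import boto3

sfn = boto3.client("stepfunctions")

# Start a single execution with the same shape of input the EventBridge rule sends
sfn.start_execution(
    stateMachineArn="arn:aws:states:<region>:<account-id>:stateMachine:<name>",  # placeholder
    input=json.dumps({"source": "manual", "note": "day6 test"}),
)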
➀ EventBridge triggers the Step Functions state machine every 5 minutes.
※ You can confirm the Step Functions execution on the "Executions" tab.
➁ Step Functions triggers the producer Lambda.
※ You can confirm the producer Lambda was invoked on the "Monitor" tab.
➂ The producer Lambda sends the data to S3.
※ The JSON file is created in S3!
➃ The producer Lambda sends a message to SQS.
※ You can confirm SQS received the message from the Lambda on the "Monitoring" tab.
➄ SQS triggers the worker Lambda, which sends the results to SNS.
※ You can confirm the worker Lambda was invoked on the "Monitor" tab.
➅ SNS sends an email notification to the user.
※ You should receive the notification email from SNS!
※ At first, my worker Lambda errored out because I had set the SNS subscription's ARN (not the SNS topic's ARN!) in the IAM policy attached to the Lambda... If you don't receive the email, review the CloudWatch Logs in the order the components run.
Key exam points related to today's services.
2026-02-18 04:25:27
Achieving an excellent metric with a Machine Learning model is not an easy task. Now imagine not being able to reproduce the results because you don't remember which hyperparameters were used.
The idea is to move beyond "I trained a model and it worked" and into the world of MLOps, where I can track experiments, compare runs, store artifacts, and repeat the process with confidence.
MLOps is the practice of applying DevOps principles to machine learning (ML) projects. It involves managing the lifecycle of ML projects, from traceability in training and development through deployment and monitoring.
In traditional software development, version control for code (such as git) is enough. In ML, the outcome is the combination of Code + Data + Hyperparameters.
A dedicated tool can simplify model auditing and collaboration before the experimentation process turns chaotic. Without a structured history, no AI project can be reproduced or scaled.
MLflow is a tool that helps apply MLOps practices to organize the lifecycle of ML projects. Even in a simple project it already helps a lot, because it builds a history of what was done in each training run.
In practice, MLflow helps me answer questions like: which hyperparameters each run used, which run had the best metric, and where that model's artifact is stored.
MLflow emerged to standardize this management. For now, we will focus on the three fundamental types of records:
Parameters (e.g. learning_rate, n_estimators)
Metrics (e.g. accuracy, log_loss)
Artifacts (e.g. the model saved as .pkl or .onnx)
MLflow turns ad-hoc experimentation into an auditable, standardized system of record.
As usual, I will demonstrate a simple MLflow setup:
A SQLite backend (mlflow.db) to store metadata (experiments, runs, params, metrics).
A local file store (mlruns/) to store artifacts (such as the saved model).
# Installation
pip install mlflow
# Start the server
mlflow server \
--backend-store-uri sqlite:///mlflow.db \
--default-artifact-root ./mlruns \
--host 0.0.0.0 \
--port 5000
Parameters:
mlflow server: starts the MLflow service
backend-store-uri: I chose SQLite because it is lightweight, ideal for a first iteration with the tool
default-artifact-root: path where the experiment artifacts are stored
host and port: address where the server is reachable, including for the UI
When using SQLite + a file store, two main components typically appear in the MLflow directory:
mlflow.db: SQLite database with the metadata. Stores experiments, runs, parameters, and metrics.
mlruns/: folder with the artifacts and run contents. Stores models, plots, etc.
Python's mlflow library makes it easy to connect to the tool with minimal impact on the code; the integration is quite lightweight. Just set the initial parameters and use mlflow.start_run():
import mlflow
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Customer_Churn_Experiment")
with mlflow.start_run():
    # Model training...
    mlflow.log_param("max_depth", 10)
    mlflow.log_metric("accuracy", 0.85)
    mlflow.sklearn.log_model(model, "random_forest_model")
In this scenario, the MLflow infrastructure is decoupled from the training code, so the instance URI must be configured via set_tracking_uri.
The first time I run my training script:
The Customer_Churn_Experiment experiment is created automatically
Params, metrics, and artifacts are sent to the tracking server
In my case, I logged:
n_estimators, max_depth
accuracy, recall
y_pred = model.predict(X_test)
acc = accuracy_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
mlflow.log_param("n_estimators", n_estimators)
mlflow.log_param("max_depth", max_depth)
mlflow.log_metric("accuracy", acc)
mlflow.log_metric("recall", rec)
This lets me compare different runs without relying on print statements in the terminal.
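The comparison can even be scripted; a minimal sketch using mlflow.search_runs against the experiment above (the column names follow MLflow's params./metrics. DataFrame convention):

import mlflow

mlflow.set_tracking_uri("http://localhost:5000")

# One row per run, with params.* and metrics.* columns in a pandas DataFrame
runs = mlflow.search_runs(
    experiment_names=["Customer_Churn_Experiment"],
    order_by=["metrics.accuracy DESC"],
)
print(runs[["run_id", "params.max_depth", "metrics.accuracy", "metrics.recall"]])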
The real value of MLOps shows up in the comparative analysis of the models the scientist tried, as well as the path taken to get there.
At this stage, I ran a second iteration, changing the values of the max_depth and n_estimators hyperparameters. Although I expected an improvement, inspecting the MLflow UI showed that the accuracy metric had dropped.
Here I could see the real power of the tool:
Traceability: you have empirical evidence that the hyperparameter change caused overfitting
Technical comparison: through the MLflow UI, you can compare the two runs side by side
Reproducibility pillar: if you need to return to the previous version, the artifact is registered and ready for deployment (I will dig deeper into this in the future; a sketch follows below)
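To illustrate that last point, restoring a previous run's model is a single call; a sketch, assuming the run ID is taken from the UI or from search_runs:

import mlflow

mlflow.set_tracking_uri("http://localhost:5000")

# "<run_id>" is a placeholder; the artifact path matches what log_model used
model = mlflow.sklearn.load_model("runs:/<run_id>/random_forest_model")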
If you are still surprised by what you have seen, know that this tool really is a game changer. It transforms how we manage traceability across the model lifecycle and simplifies comparing different versions, making it essential from the first experiments all the way to projects built from scratch.
A nice part of this kind of local setup is that I can inspect the SQLite database from the command line.
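A minimal sketch with Python's built-in sqlite3 module; the table names (runs, metrics) follow MLflow's backend schema in recent versions:

import sqlite3

con = sqlite3.connect("mlflow.db")

# Join runs to their logged metrics -- the same data the UI renders
for row in con.execute(
    "SELECT r.run_uuid, m.key, m.value "
    "FROM runs r JOIN metrics m ON m.run_uuid = r.run_uuid "
    "LIMIT 10"
):
    print(row)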
Mastering model state management is a fundamental requirement in MLOps engineering. While MLflow solves the tracking layer, the lifecycle extends to the Model Registry and Model Serving. Soon we will explore how those stages take a model to production maturity.
Mastering experiment tracking is the foundation for implementing Continuous Training (CT) later on.
This project is available on my GitHub: mlops-telco-churn-predict.
And you: do you already use MLflow in your day-to-day work? In your view, what is the biggest challenge in guaranteeing full reproducibility of models?
Let's discuss in the comments!
2026-02-18 04:21:08
Both Kollabe and EasyRetro have free plans. Both run sprint retrospectives. That's roughly where the similarities end.
I've been facilitating retros for six years across different team sizes and industries. These two tools have opposite philosophies about what a retro tool should be, and the choice between them says a lot about what your team actually needs.
I maintain detailed reviews of both tools (and about a dozen others) at RetroTools.io.
EasyRetro has been around since 2015. Originally called FunRetro, it does one thing: retro boards. No planning poker. No standups. No health checks. Just boards with sticky notes, voting, and action items. It's been quietly doing this while other tools rebrand every year and bolt on features nobody asked for.
Kollabe launched in 2022 and went the opposite direction. Retros, planning poker, and async standups in one platform. AI grouping, AI summaries, inline polls, a drawing tool. It's trying to be the single tool your team opens for every ceremony.
Two different bets. EasyRetro bets you want one thing done well. Kollabe bets you're tired of paying for three separate tools.
If you're reading a comparison post, there's a decent chance you're evaluating free tiers. So let's start there.
| | Kollabe Free | EasyRetro Free |
|---|---|---|
| Boards/meetings | Limited per month | 1 board on dashboard at a time |
| Participants | Up to 10 per room | Unlimited per board |
| History | 7 days | None (must delete to create new) |
| Templates | All 1,000+ | All 200+ |
| AI grouping | Yes | No |
| AI summaries | Yes | No |
| Anonymous mode | Yes | Yes |
| Voting | Yes | Yes |
| Planning poker | Yes (10 issues/session) | No |
| Standups | Yes | No |
| Exports | No | No |
| Integrations | No | No |
| Surveys | No | 1 per month |
Neither free plan is generous. They're not supposed to be. But the constraints are different in ways that matter.
EasyRetro lets unlimited people join a board. That's great for large workshops or cross-team retros. But you get one board. One. And archived boards count against that limit — you have to delete your previous retro before creating a new one. For a team running biweekly sprints, that means your retro data disappears every two weeks. There's no history to look back on.
Kollabe caps you at 10 participants but gives you AI grouping and summaries on the free plan. That's unusual. Most tools gate AI behind their mid-tier paid plan. The 7-day history window is tight, but at least you have some window to reference past discussions. And you get planning poker and standups included, which is three tools for the price of zero.
Short version: EasyRetro's free plan is better for occasional, large-group retros. Kollabe's free plan is better for small teams running recurring ceremonies.
This is where you'll spend 90% of your time, so it matters more than any feature table.
EasyRetro's board is clean. Deliberately simple. You create columns, people add cards, you vote, you discuss. The presentation mode (revealing columns one at a time) is genuinely useful for controlling the flow of a retro without a formal guided workflow. Password-protected boards mean you can share a link without worrying about random people wandering in.
The anonymity model is worth calling out. Participants don't need accounts. They don't even need to sign up. Only the board creator needs an EasyRetro account. For ad-hoc retros or workshops where you're bringing in people from outside your org, this removes a real friction point that most tools ignore.
Card merging works well. Drag one card onto another, they combine. You can unmerge later if you grouped wrong. But this is manual. With 15 cards, fine. With 50 cards from a team of 12, you're spending ten minutes dragging and dropping before the actual discussion starts.
Kollabe's AI grouping is the headline feature and it earns it. It uses semantic similarity, not keyword matching. "Our deployments are slow" and "CI/CD pipeline needs attention" end up in the same group without you touching anything. After running retros with both tools back to back, the time saved on grouping alone is 5-10 minutes per session. Over a year of biweekly sprints, that's roughly three hours of meeting time you get back. Not transformative, but not nothing.
The guided facilitation flow walks your team through phases: brainstorm, group, vote, discuss, action items. Good for newer facilitators or teams that tend to go off the rails. EasyRetro doesn't have this. Presentation mode gives you some control, but it's not the same as a structured workflow.
Kollabe also has a drawing tool, inline polls on retro items, and GIF support. Whether those matter depends on your team's culture. Some teams communicate better with sketches. Most don't need them.
EasyRetro has 200+ templates across multiple languages with an AI template generator. Kollabe claims 1,000+ with its own AI generator. Both numbers are high enough that the count doesn't really matter. What matters is whether the templates are good.
Both cover the basics: Start/Stop/Continue, Mad/Sad/Glad, 4Ls, Sailboat, Starfish. EasyRetro's templates are straightforward. Pick one, it creates the columns. Kollabe's templates include themed backgrounds (30+ options), which is a nice touch if your team responds to visual variety. After a year of the same white board, even a small visual change can reset the "retro fatigue" clock.
Neither tool's template library is a differentiator. Pick either one and you'll find what you need in under a minute.
This is where the comparison gets lopsided.
EasyRetro is retros. That's it. If you need planning poker, you're opening a second tool. Standups? Third tool. Health checks? Find a template or use a spreadsheet.
Kollabe covers three ceremonies: retros, planning poker, and async standups.
If your team runs all three, that's one login instead of three. One subscription. One place where data lives. If your team only runs retros, Kollabe's extra features are irrelevant and EasyRetro's simplicity is an advantage.
| Integration | Kollabe | EasyRetro |
|---|---|---|
| Jira | Import + Export | Export only |
| GitHub | Import + Export | No |
| Azure DevOps | Import + Export | No |
| Linear | Import + Export | No |
| Confluence | Export | Export |
| Trello | No | Export |
| Slack | No | Notifications only |
| MS Teams | No | Board embedding |
EasyRetro's Jira integration is mature. Bulk export to Jira landed in September 2024, and the action item export has been solid since 2022. If your workflow is "run retro, push action items to Jira, track them there," EasyRetro handles that well.
Kollabe goes deeper on the import side. Pull tickets from Jira via JQL, import GitHub issues, bring in Azure DevOps work items with WIQL queries. Estimates sync back to your project management tool automatically. But no Slack integration on any plan. That's a real gap.
EasyRetro has basic Slack notifications and Teams embedding. Nothing deep, you won't be creating cards from Slack, but it exists. Neither tool has Zapier or webhook support, though Kollabe launched a public API in February 2026 that might partially fill that gap.
This is where your finance team gets involved.
| | Kollabe Premium | EasyRetro Team |
|---|---|---|
| Price | $29/month flat | $38/month flat |
| What you get | All ceremonies, unlimited participants, unlimited history, all AI features, all integrations, exports, analytics | 5 boards/month, 1 team, unlimited participants, integrations, exports, analytics |
Both use flat pricing, not per user. But the value at each tier is different.
Kollabe's $29/month gets you everything. Retros, poker, standups, unlimited participants, unlimited history, the works. One price, one team.
EasyRetro's $38/month gets you retro boards. Five per month. Just retros. If you need more boards, the Business tier is $60/month for 15 boards, or $90/month for 30.
There's a catch on Kollabe's side though. Each "Space" is one team. If you have three teams, that's $87/month. EasyRetro's Business plan at $60/month gives you 3 teams with 15 boards. For multi-team organizations, run the math on your specific structure before deciding.
For a single team, Kollabe is cheaper and gives you more. For three or more teams, it depends on how many boards you create per month.
A few things that don't show up in feature comparison tables but affect your daily experience.
Kollabe participants don't need accounts. Send a link, they're in. EasyRetro requires participants to create a free account. For your regular team, that's a one-time friction. For workshops or cross-org retros with external stakeholders, Kollabe wins on access speed.
Both are web-only with no native apps. EasyRetro added mobile card dragging in April 2024. Both work on mobile browsers but are clearly built for desktop. Don't expect anyone to run a retro from their phone.
EasyRetro stores data in US Central (Firebase/GCP). Kollabe stores data in Australia (DigitalOcean). Neither offers data residency options. If you have data sovereignty requirements, this might matter.
Neither has SOC 2 certification. EasyRetro relies on GCP's certifications. Kollabe runs monthly penetration testing through Intruder. Both encrypt data in transit and at rest. Neither has audit logs. If your procurement team needs SOC 2 paperwork, look at TeamRetro instead.
Pick EasyRetro if: you only run retros and want them simple, you need unlimited participants on a free board, or your workflow is "run retro, push action items to Jira."
Pick Kollabe if: your team runs retros, planning poker, and standups, you want AI grouping and summaries (even on the free plan), or you want one flat price for a single team.
Pick neither if: you need SOC 2 paperwork, audit logs, or data residency options; look at TeamRetro instead.
Both tools have real gaps. EasyRetro hasn't shipped AI grouping after two years of competitors having it. Kollabe still doesn't have Slack integration, which is table stakes for most teams. Pick the gaps you can live with.
Head-to-head comparisons for these and twelve other retro tools are at RetroTools.io. No affiliate links, no paid placements.
What's your team using for retros right now? Curious whether anyone's running Kollabe and EasyRetro side by side for different ceremonies.
2026-02-18 04:20:33
Hello! So excited to get the regular blogpost updates on AWS CDK going for 2026! This update covers CDK updates from December 2025 through February 2026. The community delivered 150+ pull requests spanning EKS, Bedrock, ECS, and 15+ other AWS services. Whether you're new to CDK or a seasoned user, you'll find new capabilities that make infrastructure as code more powerful and easier to maintain.
npm install -g aws-cdk@latest
Top features to try this week:
CDK Mixins (Preview): Add capabilities to constructs without rebuilding them link
Kiro Power for AWS IaC: AI-powered CDK and CloudFormation development assistance link
ECS Deployments: Built-in Linear/Canary strategies for safer rollouts link
CloudWatch Logs: Deletion protection prevents accidental data loss link
EKS Hybrid Nodes: Integrate on-premises infrastructure with EKS clusters link
Bedrock AgentCore: API Gateway integration and episodic memory for AI agents link
Also new: Glue typed partition projection, Synthetics Playwright 5.0, RDS enhanced metrics
Mixins let you compose functionality onto existing constructs. Think of them as reusable "add-ons" that can be applied to any compatible construct. This means you no longer need to wait for a property to be supported in L2 constructs. For example, using an otherwise unsupported AnalyticsConfigurationProperty on the S3 L2 construct becomes simple with CfnBucketPropsMixin.
const l2bucket = new s3.Bucket(this, 'L2 Bucket')
.with(new CfnBucketPropsMixin({
analyticsConfigurations: [{
id: 'my-analytics',
storageClassAnalysis: {
dataExport: {
destination: {....},
}
}
}]
}))
Try it: npm install @aws-cdk/mixins-preview - Read the docs
Note: Preview feature API may change. Currently TypeScript/JavaScript only.
Read blogpost about CDK Mixins
The EKS module received significant love this quarter with multiple community-driven improvements:
EKS Hybrid Nodes Support You can now seamlessly integrate on-premises and edge infrastructure with your EKS clusters. This opens up hybrid cloud architectures with consistent Kubernetes management.
Native OIDC Provider We've introduced OidcProviderNative using L1 constructs, providing a more efficient alternative to the custom resource-based OpenIdConnectProvider. This improves deployment speed and reduces complexity.
New Access Entry Types Support for EC2, HYBRID_LINUX, and HYPERPOD_LINUX access entry types gives you more granular control over cluster access patterns.
declare const cluster: eks.Cluster;
declare const nodeRole: iam.Role;
// Grant access with EC2 type for Auto Mode node role
cluster.grantAccess('nodeAccess', nodeRole.roleArn, [
eks.AccessPolicy.fromAccessPolicyName('AmazonEKSAutoNodePolicy', {
accessScopeType: eks.AccessScopeType.CLUSTER,
}),
], { accessEntryType: eks.AccessEntryType.EC2 });
Read more in the EKS module documentation
Thanks to Dinesh Sajwan and Yuki Matsuda, Bedrock AgentCore received powerful new capabilities:
API Gateway Target Support Integrate your AgentCore gateways directly with API Gateway, enabling more flexible routing and integration patterns.
Gateway Interceptors Add custom logic to intercept and transform requests/responses flowing through your AgentCore gateways.
Episodic Memory Strategy Implement sophisticated memory patterns for your AI agents, allowing them to maintain context across interactions.
Read more in the Bedrock AgentCore module documentation
Yuki Matsuda contributed built-in support for Linear and Canary deployment strategies in ECS. These strategies reduce risk when rolling out changes by gradually shifting traffic to new versions.
declare const cluster: ecs.Cluster;
declare const taskDefinition: ecs.TaskDefinition;
declare const blueTargetGroup: elbv2.ApplicationTargetGroup;
declare const greenTargetGroup: elbv2.ApplicationTargetGroup;
declare const prodListenerRule: elbv2.ApplicationListenerRule;
const service = new ecs.FargateService(this, 'Service', {
cluster,
taskDefinition,
deploymentStrategy: ecs.DeploymentStrategy.LINEAR,
linearConfiguration: {
stepPercent: 10.0,
stepBakeTime: Duration.minutes(5),
},
});
const target = service.loadBalancerTarget({
containerName: 'web',
containerPort: 80,
alternateTarget: new ecs.AlternateTarget('AlternateTarget', {
alternateTargetGroup: greenTargetGroup,
productionListener: ecs.ListenerRuleConfiguration.applicationListenerRule(prodListenerRule),
}),
});
target.attachToApplicationTargetGroup(blueTargetGroup);
Spot Instance Support David Glaser added capacityOptionType to ManagedInstancesCapacityProvider, enabling cost-effective Spot instance usage for your ECS workloads.
Read more in the ECS module documentation
Kazuho Cryer-Shinozuka introduced typed partition projection for Glue tables, bringing type safety to your data catalog definitions:
declare const myDatabase: glue.Database;
new glue.S3Table(this, 'MyTable', {
database: myDatabase,
columns: [{
name: 'data',
type: glue.Schema.STRING,
}],
partitionKeys: [{
name: 'date',
type: glue.Schema.STRING,
}],
dataFormat: glue.DataFormat.JSON,
partitionProjection: {
date: glue.PartitionProjectionConfiguration.date({
min: '2020-01-01',
max: '2023-12-31',
format: 'yyyy-MM-dd',
interval: 1, // optional, defaults to 1
intervalUnit: glue.DateIntervalUnit.DAYS, // optional: YEARS, MONTHS, WEEKS, DAYS, HOURS, MINUTES, SECONDS
}),
},
});
Read more in the Glue module documentation
Enhanced Metrics Jasdeep Singh Bhalla added Read/Write IOPS metrics to DatabaseInstance and Volume Read/Write IOPS metrics to DatabaseCluster, giving you better visibility into database performance.
RDS Proxy Auth Scheme Yuki Matsuda added support for default authentication schemes in RDS Proxy, simplifying proxy configuration.
Read more in the RDS module documentation
Robert Hanuschke contributed deletion protection configuration for CloudWatch Log Groups. This prevents accidental data loss by blocking log group deletion until protection is explicitly disabled.
new logs.LogGroup(this, 'LogGroup', {
deletionProtectionEnabled: true,
});
Read more in the CloudWatch Logs module documentation
Tietew added support for using Firehose IDeliveryStreamRef as a flow log destination, enabling real-time log streaming and transformation.
Jasdeep Singh Bhalla added PutEvents support for EventBridge integration in API Gateway v2, making event-driven architectures even easier to build.
The challenge: CDK and CloudFormation development requires constantly referencing documentation, understanding best practices, validating templates, and troubleshooting deployments.
The solution: Kiro's AWS Infrastructure as Code Power provides AI-powered assistance for CDK and CloudFormation development. It dynamically loads specialized tools and context only when needed, avoiding MCP context overload.
What it does:
Try it: Install the power in Kiro and activate it when working on infrastructure code. The power uses the AWS IaC MCP Server under the hood, providing the same capabilities with better context management.
Additional improvements this quarter:
Hotswap Support for Bedrock AgentCore
Kenta Goto added hotswap support for AWS::BedrockAgentCore::Runtime, enabling faster development iterations.
EC2 VPC Flow Logs to Firehose (Tietew) - Real-time log streaming and transformation
API Gateway v2 EventBridge Integration (Jasdeep Singh Bhalla) - PutEvents support for event-driven architectures
SageMaker Health Check Timeout (Сергей) - More control over endpoint health checks
IoT Actions HTTP Batching (Kazuho Cryer-Shinozuka) - Improved efficiency for high-volume IoT workloads
Step Functions JSONata Support (Y.Nakamura) - Dynamic control over parallel execution
We want to give a special shoutout to our most active external contributors this quarter:
Yuki Matsuda (mazyu36) - ECS deployment strategies, RDS Proxy auth, Bedrock AgentCore improvements, and EventScheduler fixes
Jasdeep Singh Bhalla - RDS metrics, ECS log driver options, and API Gateway v2 EventBridge integration
Kazuho Cryer-Shinozuka - Glue typed partition projection, IoT Actions batching, and Redshift improvements
Kenta Goto (go-to-k) - Mixins preview improvements and documentation fixes, hotswap on AgentCore
Robert Hanuschke - CloudWatch Logs deletion protection
Tietew - EC2 Flow Logs Firehose destination support
yatakemi - Synthetics Playwright runtimes
The AWS CDK is powered by the AWS developer community. Here's how to contribute:
Answer questions: Help others on Stack Overflow
Share your work: Publish constructs to Construct Hub
Report bugs: Open an issue with reproduction steps
Stay updated: Star the aws-cdk repo for release notifications. Join our quarterly community meetings (videos available here)
The CDK community continues to build, share, and teach. Here's how developers are using CDK in production and contributing back:
Manage IAM Policies Manually - Kenta Goto: Prevent CDK from automatically updating IAM role policies when you need full control
AI-Powered CDK Development with Kiro - Nao San: Generate CDK code with latest best practices using AI assistance
Build a Serverless Website from Scratch - Dixit Rathod: 4-part tutorial: S3 hosting, API Gateway, DynamoDB, full integration
One Alarm for Many Resources - Johannes Konings: Use CloudWatch Metrics Insights to monitor 100 resources with one alarm instead of 100
Tag Log Buckets for Security Scanners - Johannes Konings: Make cdknag work with third-party security tools through tagging
Faster AgentCore Deployments - Kenta Goto: Skip CloudFormation for faster AI agent development cycles
CDK Terrain Announcement - Open Constructs Foundation has announced a community-driven continuation of the Cloud Development Kit for Terraform (CDKTF), under the new name CDK Terrain.
Workflow Automation on ECS - Vu Dao: Deploy N8N workflow platform with Fargate, RDS, and ElastiCache
A/B Testing at the Edge - Kihara Takuya: Switch between S3 origins using CloudFront Functions for experimentation
Remote Access with Client VPN - Andrew Dunn: Set up secure VPN access to private AWS resources
Deploy TanStack Start Serverless - Johannes Konings: Full-stack React framework on Lambda with API Gateway streaming
Customize CDK Bootstrap - Johannes Konings: Add encryption and access logging to CDK staging bucket
Self-Hosted GitHub Actions Runners - Amir Szekely (CloudSnorkel): Ephemeral runners on EC2, Fargate, Lambda, CodeBuild, or ECS
Parallel Lambda Bundling - Marko (ServerlessLife): Bundle all Lambda functions at once instead of sequentially
Cost Analysis in Pull Requests - towardsthecloud: See AWS cost impact before merging infrastructure changes
Clean Up CDK Assets - Kenta Goto: Remove unused assets and Docker images from cdk.out directory
Visualize CDK Applications - Analyze and understand CDK app structure
Build AI Agents with AgentCore - Martin Mueller: Local development to cloud deployment for Bedrock AI agents
Custom Cognito Email Flows - Lee Gilmore: Customize authentication emails with Lambda triggers
CDK Development Workflows - Practical development techniques and patterns
Advanced CDK Patterns - Deep dive into CDK architecture
A huge thank you to everyone who contributed this quarter. Whether you submitted code, reported bugs, answered questions, or shared your knowledge, you're making infrastructure as code better for everyone.
Special recognition to our top external contributors who drove major features and fixes. Your work impacts thousands of CDK users worldwide.
We're excited to see what the community builds next. Happy coding!
2026-02-18 04:16:29
Yesterday I went on an automation spree. 44 scripts. Health checks, deployment verifiers, traffic dashboards, blog publishers, tag managers — you name it, I scripted it.
Today I looked at my usage stats. Half of those scripts have been run exactly once. The day I made them.
Out of 44 scripts, here's what survived:
health-check.sh — pings all my sites, flags 404s instantly
auto-heal.sh — detects dead GitHub Pages → auto rebuilds
cycle-status.sh — one-command overview of everything
verify-deployment.sh — checks if a deploy actually worked (HTTP + content + git)
That's... 4. Out of 44. A 9% survival rate.
I automated things that felt productive but weren't actually bottlenecks:
The pattern: I automated the fun parts, not the painful parts.
Before automating anything, I ask:
"Have I done this manually at least 3 times AND hated it each time?"
If yes → automate. If no → just do it manually and move on.
How do you decide what's worth automating? I've seen the XKCD chart, but real life is messier than that.
Some specific questions:
Would love to hear your war stories. Drop them below 👇
Building in public at maxmini0214.github.io — 18+ free dev tools if you want to check them out.