2026-02-23 10:30:00
Adding Google Analytics (GA4) to a standard HTML website is straightforward: paste the tracking snippet into your <head> and you're done. Every time a user clicks a link, the browser fetches a new HTML page, and GA registers a page view.
But if you are building a Single Page Application (SPA) with React, Vite, and React Router, this out-of-the-box behavior breaks down.
In a React SPA, clicking a link doesn't trigger a page reload. React simply unmounts the old component and mounts the new one while manipulating the browser's URL history. Because the page never actually "reloads," Google Analytics never registers the new URL, and your analytics will show users seemingly stuck on the homepage forever.
Here is exactly how I solved this for my portfolio site.
First, avoid hardcoding your tracking ID in your source code; if your repo is public, anyone can scrape it. Add your Measurement ID (found in your GA dashboard, in the format G-XXXXXXXXXX) to your .env file:
VITE_GA_TRACKING_ID=G-**********
You still need the base Google Analytics tracking code in your index.html.
However, we need to make one critical modification: we must tell GA4 not to automatically track the initial page view. If we don't disable it, we will end up double-counting the first page load when our React Router listener kicks in later.
In index.html:
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=%VITE_GA_TRACKING_ID%"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }
gtag('js', new Date());
// IMPORTANT: Disable the default page_view tracking here!
gtag('config', '%VITE_GA_TRACKING_ID%', { send_page_view: false });
</script>
Note on Vite: I used %VITE_GA_TRACKING_ID% to inject the environment variable directly into the HTML at build time.
Now we need a way to listen to URL changes inside React and ping Google Analytics manually. We can create a lightweight, invisible component to handle this using the useLocation hook from react-router-dom.
Create components/Analytics.tsx:
import { useEffect } from 'react';
import { useLocation } from 'react-router-dom';
// Extend window object for TypeScript so it doesn't complain about window.gtag
declare global {
interface Window {
gtag: (...args: any[]) => void;
}
}
export const Analytics = () => {
const location = useLocation();
useEffect(() => {
// Read the tracking ID from environment variables
const GA_MEASUREMENT_ID = import.meta.env.VITE_GA_TRACKING_ID;
// Ensure the ID exists and the gtag function was loaded successfully
if (GA_MEASUREMENT_ID && typeof window.gtag === 'function') {
// Ping Google Analytics manually
window.gtag('config', GA_MEASUREMENT_ID, {
page_path: location.pathname + location.search,
});
}
}, [location]); // Re-run this effect every time the URL changes
return null; // This component is invisible
};
Finally, we need to mount our new <Analytics /> component.
The most important rule here is that Analytics must be placed inside <BrowserRouter> but outside of the <Routes> block. If it's rendered outside the Router, the useLocation hook will throw because there is no router context available.
In index.tsx:
import { Analytics } from './components/Analytics';
// ... other imports
const App = () => {
return (
<HelmetProvider>
<BrowserRouter>
{/* Place the listener here! */}
<Analytics />
<ScrollToTop />
<Suspense fallback={<LoadingFallback />}>
<Routes>
<Route path="/" element={<HomePage />} />
<Route path="/about" element={<AboutPage />} />
{/* ... other routes */}
</Routes>
</Suspense>
</BrowserRouter>
</HelmetProvider>
);
};
And that's it!
Now, when a user lands on vicentereyes.org, the Google Analytics script loads.
Then, React boots up, mounts the router, hits the <Analytics /> component, and fires a page view event for /.
When the user clicks "Projects", React Router handles the transition, the URL updates to /projects, the useEffect inside <Analytics /> fires again, and a clean page view event for /projects is pushed to Google. Perfect SPA tracking, no heavy external libraries required.
2026-02-23 10:22:06
I first came across Roblox when I was about 7 years old. I'm 16 now and turn 17 this year. I started developing games on the platform about 3 years ago and, since then, have been learning about game creation, software engineering, and how the development ecosystem works in general.
Nothing here should be taken as absolute truth. These are reflections based on my experience and on how I see the field today.
Many people look at popular games and conclude that making money in the ecosystem is simple. The logic tends to be superficial: if a game is basic, follows a trend, and has a high concurrent player count, then it is probably generating a lot of revenue.
The problem with that interpretation is that it is based only on the visible exceptions. Practically every field has outliers, and treating them as the rule is usually a mistake.
A recent example was the huge wave of games based on the viral Italians Brainrots meme. The platform was quickly flooded with nearly identical variations exploiting the same trend.
At first glance, the presence of several similar games with high CCU might suggest the model is easily replicable. But that view ignores the main bottleneck of independent development: distribution.
Building the game is usually the easiest part of the process. The real challenge is getting the product in front of enough users to sustain itself within the platform's algorithm. That involves retention metrics, competition, market positioning, timing, and, often, investment.
The platform's front page tends to show only the winners. What stays invisible are the hundreds or thousands of projects that tried to replicate similar formulas and quietly disappeared.
Another relevant factor is trend dependence. Projects that aren't aligned with something already popular are less likely to grow organically. Outside of trends, growth requires building your own reach, which usually means ongoing promotion and positioning.
There is a common claim that the platform pays developers 70% of revenue, based on the idea that the platform keeps 30%.
In practice, once you account for the monetary conversion through the exchange system, the real revenue a developer receives tends to be significantly lower, frequently below ~25% of the value the product generates.
This happens because the 70% is a share of virtual currency that still has to go through another conversion before it becomes real money.
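To make the gap concrete, here is a rough back-of-the-envelope calculation. The rates used are commonly cited approximations, not official figures: players buy Robux at roughly $0.0125 each, while the developer exchange pays out around $0.0035 per Robux.

```python
# Rough effective revenue share for a developer on the platform.
# Assumed illustrative rates -- not official figures:
CONSUMER_PRICE_USD_PER_ROBUX = 0.0125   # what a player roughly pays for 1 Robux
DEVEX_RATE_USD_PER_ROBUX = 0.0035       # what the exchange pays per 1 Robux cashed out
MARKETPLACE_SHARE = 0.70                # developer's nominal share of a sale

def effective_share(item_price_robux: int) -> float:
    """Fraction of the player's real-money spend that reaches the developer."""
    player_spend_usd = item_price_robux * CONSUMER_PRICE_USD_PER_ROBUX
    dev_robux = item_price_robux * MARKETPLACE_SHARE
    dev_payout_usd = dev_robux * DEVEX_RATE_USD_PER_ROBUX
    return dev_payout_usd / player_spend_usd

print(f"{effective_share(100):.1%}")
```

Under these assumed rates the developer ends up with roughly a fifth of what the player actually spent, well below the nominal 70%.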
None of this ignores the fact that the platform provides infrastructure: servers, distribution, moderation, and a payment system. That model has operational value and should be part of the analysis, but it doesn't change the fact that the effective earnings differ from the initial impression.
I don't intend to go deep into the debate over the exchange system here, since there are already extensive discussions on the topic:
“Issues with the Developer Exchange: Testimonies from the Roblox Dev Community”
The choice of learning platform is not the most decisive factor in building a programming career. The central question is what kind of professional you want to become over time.
When development is overly concentrated inside the platform's ecosystem, there is a risk of optimizing only for local problems, without going deeper into the general principles of software engineering.
The problem isn't the tool itself. Any environment can create this limitation when the goal of learning is just to make the project work, without understanding the fundamentals behind the solution.
This scenario is common when learning is limited to ready-made tools, frameworks, or abstractions that hide the structural complexity of the system.
It is fairly common to find developers who can build working applications but struggle to organize larger projects, work in teams, or maintain code over the long term.
A practical example is the inconsistent adoption of version control. For small projects, skipping it may have no immediate impact, but it becomes a bottleneck as the system grows or requires collaboration.
Even when there is interest in improving the workflow by studying new methodologies, another problem appears: applying those ideas in a collective environment. Processes don't change based on the technical quality of an idea alone, but also on team culture and the organization's willingness to change.
In the end, the real value of a professional trajectory lies not in mastering a single ecosystem, but in developing technical reasoning that transfers across different areas of computing.
Another relevant point is stability and real opportunities for professional entry. Working inside the Roblox ecosystem, especially on independent or commissioned projects, can impose a practical limit on income predictability.
Payments in this environment tend to be relatively low compared to other markets, and even international opportunities can still require significant effort to reach what would be considered comfortable compensation. On top of that, many working arrangements depend on commissions or one-off projects, which naturally reduces financial stability.
Someone will inevitably bring up the importance of networking as a growth strategy. And yes, building professional connections matters in any field. The problem isn't networking itself, but depending on it as the only way to find opportunities.
Relying too heavily on personal relationships to find work or income can make a career more fragile and less predictable. Good results are possible down that path, but it doesn't guarantee a stable career structure.
For me, it makes more sense to pursue areas with broader access to the job market and more diverse hiring models, even if that demands study and technical adaptation.
The decision to migrate isn't based only on what can be built inside the platform, but on the level of technical autonomy and professional growth I consider most relevant.
Development inside the platform tends to encourage production oriented toward rapid execution of ideas, which can be useful early on but also creates dependence on ecosystem-specific solutions.
The biggest point of divergence isn't the ability to make games, but the practical limitation of professional scope. Over-specializing in a closed environment can make the transition to broader areas of software engineering harder.
There is a fundamental difference between mastering a tool and mastering the principles that let you build systems independent of it. When learning is restricted to the internals of a single platform, a developer can become efficient at solving local problems but less adaptable to external scenarios.
Web development, for me, represents a space with more architectural freedom and closer proximity to concepts that cut across different areas of computing.
It's not about considering game development inferior, but about recognizing that some ecosystems offer less structural openness for building products outside the platform's main flow.
Another factor is the prospect of growth beyond the role of employee within a closed niche. If a professional trajectory isn't limited to technical execution alone, it makes more sense to pursue areas that allow creating products, scaling ideas, and exploring opportunities with more control over your own work.
In the end, the decision isn't about which field is better, but about which professional direction is more coherent with the kind of technical and intellectual growth you want to build long term.
It may sound a bit contradictory, or even like clickbait haha, but I have no intention of abandoning Roblox development.
The idea isn't to treat the platform as a breaking point, but as a secondary space for experimentation. Existing projects will be kept as side projects, evolving only when there is time and interest, with no productivity pressure or expectation of professional return.
The main focus shifts from the platform to another career direction, but that doesn't mean erasing everything that has been built. It's a reprioritization, not a rejection of the ecosystem.
2026-02-23 10:19:20
Bringing Async MCP to Google Cloud Run — Introducing cloudrun-mcp
When you design distributed AI or agentic workloads on Google Cloud's Cloud Run, you often juggle three recurring problems: authenticating service-to-service calls without managing keys, streaming model context in real time, and keeping I/O asynchronous under load.
cloudrun-mcp solves all three in one lightweight Python SDK.
MCP (Model Context Protocol) is an emerging open standard for exchanging context between AI models, tools, and environments.
Think of it as “WebSockets for AI knowledge.”
Instead of hardcoding API calls, your model connects to an MCP server and streams structured events such as context creation, model responses, and completion signals.
For developers deploying AI agents on Cloud Run, GKE, or hybrid workloads, an async client is essential for scalability.
cloudrun-mcp is an async MCP (Model Context Protocol) client for Cloud Run. Built by Raghava Chellu (February 2026), it brings async-first streaming and automatic Cloud Run authentication to your production workloads.
Under the hood, the SDK requests an identity token from the instance metadata endpoint:
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=<your-audience>
pip install cloudrun-mcp
import asyncio
from cloudrun_mcp import MCPClient
async def main():
    client = MCPClient(base_url="https://your-mcp-server.run.app")
    async for event in client.events():
        print(event)

asyncio.run(main())
{"event":"context.create","status":"ok"}
{"event":"model.response","content":"42"}
{"event":"model.done"}
That’s it — you’ve connected an async agent running on Cloud Run to an MCP backend and are receiving real-time context updates.
AI workloads are evolving from simple request-response APIs to long-running reasoning graphs.
Synchronous I/O becomes a bottleneck.
cloudrun-mcp leverages Python's asyncio to keep the event loop responsive while handling many long-lived streams concurrently.
It’s especially powerful for Agentic AI, where orchestrators consume continuous model context (tool outputs, planning updates, memory events).
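As a generic illustration of why async matters here (this is plain asyncio, not the SDK itself; `fake_stream` simulates an SSE-style source), two event streams can be consumed concurrently on a single event loop, where blocking I/O would force them to run one after the other:

```python
import asyncio

async def fake_stream(name: str, n: int):
    # Simulate an SSE-style stream yielding events with I/O waits between them.
    for i in range(n):
        await asyncio.sleep(0.01)  # stand-in for network latency
        yield f"{name}:event-{i}"

async def consume(name: str, collected: list):
    # Append each event as it arrives from this stream.
    async for event in fake_stream(name, 3):
        collected.append(event)

async def main() -> list:
    collected: list = []
    # Both streams are consumed concurrently on one event loop.
    await asyncio.gather(
        consume("agent-a", collected),
        consume("agent-b", collected),
    )
    return collected

events = asyncio.run(main())
print(len(events))  # 6 events, interleaved from both streams
```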
The SDK automatically fetches a Cloud Run identity token and attaches it to every request:
Authorization: Bearer <token>
No OAuth flows.
No key.json files.
Perfect for production micro-agents.
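For reference, fetching an identity token from the metadata server can be sketched as below. This is my own illustration of the standard Cloud Run metadata flow, not the SDK's internal code, and `build_token_request` / `fetch_identity_token` are hypothetical helpers; note the required `Metadata-Flavor: Google` header:

```python
import urllib.request

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity"
)

def build_token_request(audience: str) -> urllib.request.Request:
    """Build the metadata-server request for a Cloud Run identity token."""
    # Audience left unencoded here for readability.
    url = f"{METADATA_URL}?audience={audience}"
    # The metadata server rejects requests without this header.
    return urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})

def fetch_identity_token(audience: str) -> str:
    # Only works inside Cloud Run / GCE, where the metadata server is reachable.
    with urllib.request.urlopen(build_token_request(audience), timeout=5) as resp:
        return resp.read().decode()

req = build_token_request("https://your-mcp-server.run.app")
```

The returned token is then sent as `Authorization: Bearer <token>` on each call.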
async for event in client.events(buffer=32):
    await handle_event(event)
[MCP Clients] <--SSE--> [cloudrun-mcp SDK] <--Auth--> [Cloud Run Service]
                               |
                               ↳ [Agent Processors / Vector DB / PubSub Pipelines]
cloudrun-mcp acts as the async bridge between Cloud identity and AI reasoning streams.
🔹 Event-Driven AI Agents
Agents listening to MCP streams and triggering workflows automatically.
🔹 LLM Orchestration Pipelines
Streaming intermediate reasoning steps to dashboards.
🔹 IoT Telemetry Ingestion
Continuous SSE device streams pushed to Pub/Sub.
🔹 Hybrid Edge Inference
Bridge local MCP hubs with Cloud Run decision services.
The SDK follows three principles:
2026-02-23 09:57:07
Scaling Climate Tech: An Introduction to Helpothon Sustainability for Developers
The role of the software developer has evolved significantly over the last decade. While efficiency and scalability remain core pillars of our work, a new priority has emerged at the forefront of global engineering: sustainability. As organizations face increasing pressure to meet carbon neutrality goals and comply with international environmental standards, the demand for robust, data-driven environmental solutions is at an all-time high.
This shift has given rise to a new category of technology often referred to as Climate Tech or Green Ops. For developers looking to build applications that contribute to a healthier planet, Helpothon Sustainability offers a comprehensive ecosystem designed to simplify the complexities of environmental data management.
What is Helpothon Sustainability?
Helpothon Sustainability is a suite of environmental and sustainability-focused technology solutions designed to bridge the gap between raw climate data and actionable insights. Whether you are building internal tools for a multinational corporation or a public-facing platform for environmental advocacy, the infrastructure provided by Helpothon enables developers to integrate sophisticated sustainability metrics into their existing workflows.
You can explore the full range of their offerings at https://helpothon.com.
The Core Pillars of Helpothon Sustainability
To build effective climate-positive software, developers need more than just a database. They need specialized tools that can handle the unique nuances of environmental science, such as longitudinal atmospheric data, carbon sequestration metrics, and supply chain transparency. Helpothon Sustainability focuses on four critical areas:
Real-time data is the foundation of any sustainability initiative. Helpothon provides frameworks for environmental monitoring that allow developers to connect with IoT sensors and remote sensing satellites. This functionality is essential for tracking air quality, water health, and energy consumption across distributed networks. By providing standardized schemas for this data, Helpothon reduces the overhead of data normalization, allowing engineers to focus on building the logic that responds to these environmental triggers.
Raw data is rarely useful without context. Helpothon Sustainability incorporates advanced analytics designed specifically for ecological impact. These tools allow teams to calculate carbon footprints, assess biodiversity risks, and model future climate scenarios. For developers, this means access to pre-built algorithms and processing pipelines that can ingest high-velocity data and output clear, digestible metrics.
Building a climate data platform from scratch is an immense undertaking involving massive datasets and complex geospatial requirements. Helpothon serves as a foundational platform that aggregates global climate data, providing a centralized repository for developers. This ensures that the information powering your application is accurate, updated, and verified by reputable scientific sources. By utilizing https://helpothon.com as a central hub, teams can ensure their data integrity remains uncompromised.
Transparency is the final piece of the puzzle. With the rise of ESG (Environmental, Social, and Governance) reporting requirements, companies must provide verifiable proof of their sustainability efforts. Helpothon provides impact reporting tools that automate the creation of audit-ready documents. Developers can integrate these tools into their CI/CD pipelines or administrative dashboards to generate real-time reports on an organization's environmental progress.
Why Developers Should Care About Climate Tech
The integration of sustainability into the tech stack is not just a trend; it is a fundamental shift in how we build for the future. Developers who specialize in these technologies are finding themselves at the intersection of high-growth software engineering and meaningful global impact.
Leveraging a platform like Helpothon allows developers to build on verified climate data, automate ESG reporting, and integrate environmental metrics without assembling the data pipeline themselves.
Building for a Sustainable Future
The challenges facing our planet are significant, but the tools available to developers have never been more powerful. By focusing on environmental monitoring, sustainability analytics, and robust climate data platforms, Helpothon Sustainability provides the technical foundation necessary to build the next generation of green applications.
If you are a developer, architect, or technical leader interested in how software can drive environmental change, now is the time to explore these tools. Integrating sustainability into your product roadmap is no longer just an ethical choice; it is a competitive advantage in a world that increasingly values transparency and ecological responsibility.
Get started by visiting the Helpothon Sustainability website to see how these tools can fit into your next project.
Visit https://helpothon.com to learn more about how to integrate environmental monitoring and sustainability analytics into your development workflow.
2026-02-23 09:53:21
In Part 1 of this series, we explored how Microsoft Agent Framework unifies Semantic Kernel and AutoGen into a cohesive SDK. We built simple agents, added tools, and managed conversations.
But real-world AI applications often require more than a single agent responding to queries. You need multiple agents collaborating, explicit ordering of steps, human approval gates, and durability for long-running processes.
This is where Workflows come in.
Before diving in, let's clarify when workflows make sense:
| Scenario | Recommendation |
|---|---|
| Simple Q&A, chat interfaces | Single agent |
| Content generation with review cycles | Workflow |
| Data processing pipelines | Workflow |
| Tasks requiring human approval | Workflow |
| Complex research with multiple perspectives | Workflow |
| Long-running processes (hours/days) | Workflow with checkpointing |
The rule of thumb: if your task has explicit steps that should happen in a defined order, or if multiple agents need to collaborate, use a workflow.
A workflow in Agent Framework consists of steps (discrete units of work), connections that define the execution order, and a shared WorkflowContext that carries state between steps.
Let's build a simple content creation workflow:
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
// Create agents
var researcher = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Researcher",
Instructions = """
You are a research specialist. Given a topic, you:
1. Identify key aspects to cover
2. Find relevant facts and statistics
3. Note any controversies or debates
4. Summarize your findings in a structured format
"""
});
var writer = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Writer",
Instructions = """
You are a content writer. Given research notes, you:
1. Create an engaging narrative
2. Use clear, accessible language
3. Include relevant examples
4. Structure with headers and bullet points
"""
});
var editor = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Editor",
Instructions = """
You are an editor. Review content for:
1. Factual accuracy
2. Grammar and style
3. Clarity and flow
4. Engagement
Provide specific, actionable feedback.
"""
});
// Build the workflow
var workflow = new WorkflowBuilder("content-pipeline")
.AddStep("research", async ctx =>
{
var topic = ctx.GetInput<string>("topic");
var result = await researcher.InvokeAsync(
$"Research this topic thoroughly: {topic}");
ctx.Set("research_notes", result.Content);
})
.AddStep("write", async ctx =>
{
var notes = ctx.Get<string>("research_notes");
var result = await writer.InvokeAsync(
$"Write an article based on these research notes:\n\n{notes}");
ctx.Set("draft", result.Content);
})
.AddStep("edit", async ctx =>
{
var draft = ctx.Get<string>("draft");
var result = await editor.InvokeAsync(
$"Review and improve this article:\n\n{draft}");
ctx.Set("final_content", result.Content);
})
.Connect("research", "write")
.Connect("write", "edit")
.Build();
// Run the workflow
var context = new WorkflowContext();
context.SetInput("topic", "The impact of AI on software development in 2026");
await workflow.RunAsync(context);
Console.WriteLine(context.Get<string>("final_content"));
The WorkflowContext is the shared state container that flows through your workflow:
// Setting values
context.Set("key", value); // Any serializable type
context.SetInput("inputKey", value); // Specifically for inputs
// Getting values
var value = context.Get<T>("key");
var input = context.GetInput<T>("inputKey");
// Check existence
if (context.TryGet<T>("key", out var result)) { ... }
// Metadata
context.Metadata["executionId"] = Guid.NewGuid();
context.Metadata["startedAt"] = DateTime.UtcNow;
Real workflows aren't always linear. Let's add quality checks and revision loops:
var workflow = new WorkflowBuilder("content-with-review")
.AddStep("research", async ctx => { /* ... */ })
.AddStep("write", async ctx => { /* ... */ })
.AddStep("review", async ctx =>
{
var draft = ctx.Get<string>("draft");
var result = await editor.InvokeAsync(
$"""Review this article and respond with a JSON object:
{{
"quality": "approved" | "needs_revision",
"feedback": "your detailed feedback",
"score": 1-10
}}
Article:
{draft}""");
var review = JsonSerializer.Deserialize<ReviewResult>(result.Content);
ctx.Set("review", review);
ctx.Set("quality", review.Quality);
})
.AddConditionalStep("quality_gate", ctx =>
{
var quality = ctx.Get<string>("quality");
return quality == "approved" ? "publish" : "revise";
})
.AddStep("revise", async ctx =>
{
var draft = ctx.Get<string>("draft");
var review = ctx.Get<ReviewResult>("review");
var revisionCount = ctx.GetOrDefault("revision_count", 0);
if (revisionCount >= 3)
{
// Force approve after 3 attempts
ctx.Set("quality", "approved");
return;
}
var result = await writer.InvokeAsync(
$"""Revise this article based on the feedback:
Current draft:
{draft}
Feedback:
{review.Feedback}
Make specific improvements addressing each point.""");
ctx.Set("draft", result.Content);
ctx.Set("revision_count", revisionCount + 1);
})
.AddStep("publish", async ctx =>
{
var content = ctx.Get<string>("draft");
// Publish logic here
ctx.Set("published", true);
ctx.Set("published_at", DateTime.UtcNow);
})
// Connections
.Connect("research", "write")
.Connect("write", "review")
.Connect("review", "quality_gate")
.Connect("quality_gate", "publish", when: "publish")
.Connect("quality_gate", "revise", when: "revise")
.Connect("revise", "review") // Loop back for re-review
.Build();
This creates a revision loop:
research → write → review → quality_gate
                              ↓        ↓
                          publish    revise
                                       ↓
                                    review (loop)
Some steps can run concurrently. Agent Framework makes this explicit:
var workflow = new WorkflowBuilder("parallel-research")
.AddStep("init", ctx =>
{
ctx.Set("topic", ctx.GetInput<string>("topic"));
return Task.CompletedTask;
})
// These three run in parallel
.AddParallelSteps("gather",
("technical", async ctx =>
{
var topic = ctx.Get<string>("topic");
var result = await technicalResearcher.InvokeAsync(
$"Research technical aspects of: {topic}");
ctx.Set("technical_notes", result.Content);
}),
("market", async ctx =>
{
var topic = ctx.Get<string>("topic");
var result = await marketResearcher.InvokeAsync(
$"Research market trends for: {topic}");
ctx.Set("market_notes", result.Content);
}),
("competition", async ctx =>
{
var topic = ctx.Get<string>("topic");
var result = await competitionAnalyst.InvokeAsync(
$"Analyze competitors in: {topic}");
ctx.Set("competition_notes", result.Content);
})
)
// This waits for all parallel steps to complete
.AddStep("synthesize", async ctx =>
{
var technical = ctx.Get<string>("technical_notes");
var market = ctx.Get<string>("market_notes");
var competition = ctx.Get<string>("competition_notes");
var result = await synthesizer.InvokeAsync(
$"""Create a comprehensive report combining these perspectives:
Technical Analysis:
{technical}
Market Research:
{market}
Competitive Analysis:
{competition}""");
ctx.Set("report", result.Content);
})
.Connect("init", "gather")
.Connect("gather", "synthesize")
.Build();
// Wait for all (default)
.AddParallelSteps("all-required",
ParallelCompletion.All,
steps...);
// First one wins
.AddParallelSteps("race",
ParallelCompletion.First,
steps...);
// Majority must complete
.AddParallelSteps("majority",
ParallelCompletion.Majority,
steps...);
// At least N must complete
.AddParallelSteps("quorum",
ParallelCompletion.AtLeast(2),
steps...);
Critical workflows often need human oversight:
var workflow = new WorkflowBuilder("human-approval")
.AddStep("generate", async ctx => { /* ... */ })
.AddHumanStep("approval", new HumanStepOptions
{
Prompt = ctx => $"Please review this content:\n\n{ctx.Get<string>("draft")}",
Timeout = TimeSpan.FromHours(24),
OnTimeout = HumanStepTimeoutBehavior.Reject,
AllowedResponses = new[] { "approve", "reject", "revise" },
// Optional: notify via webhook, email, etc.
NotificationHandler = async (stepId, ctx) =>
{
await emailService.SendAsync(
to: "[email protected]",
subject: "Content awaiting approval",
body: ctx.Get<string>("draft"));
}
})
.AddConditionalStep("route", ctx =>
{
return ctx.Get<HumanResponse>("approval").Decision;
})
.Connect("generate", "approval")
.Connect("approval", "route")
.Connect("route", "publish", when: "approve")
.Connect("route", "archive", when: "reject")
.Connect("route", "revise", when: "revise")
.Build();
When a workflow is waiting for human input:
// Get pending human steps
var pending = await workflowRunner.GetPendingHumanStepsAsync();
foreach (var step in pending)
{
Console.WriteLine($"Workflow: {step.WorkflowId}");
Console.WriteLine($"Step: {step.StepId}");
Console.WriteLine($"Prompt: {step.Prompt}");
Console.WriteLine($"Waiting since: {step.CreatedAt}");
}
// Submit a response
await workflowRunner.SubmitHumanResponseAsync(
workflowInstanceId: "abc123",
stepId: "approval",
response: new HumanResponse
{
Decision = "approve",
Comment = "Looks good! Minor typo on line 3, but acceptable.",
RespondedBy = "[email protected]",
RespondedAt = DateTime.UtcNow
});
Long-running workflows need to survive failures. Checkpointing saves the workflow state after each step:
// Configure checkpoint storage
var checkpointStore = new AzureBlobCheckpointStore(
connectionString: config["Storage:ConnectionString"],
containerName: "workflow-checkpoints");
var runner = new WorkflowRunner(workflow)
{
CheckpointStore = checkpointStore,
CheckpointFrequency = CheckpointFrequency.AfterEachStep,
OnError = WorkflowErrorBehavior.PauseAndCheckpoint
};
// Start a workflow
var instanceId = await runner.StartAsync(context);
Console.WriteLine($"Started workflow: {instanceId}");
// The workflow runs... then your server crashes...
// Later, after restart:
// Resume any incomplete workflows
var incomplete = await runner.GetIncompleteWorkflowsAsync();
foreach (var workflow in incomplete)
{
Console.WriteLine($"Resuming {workflow.InstanceId} from step {workflow.LastCompletedStep}");
await runner.ResumeAsync(workflow.InstanceId);
}
// Azure Blob Storage
var store = new AzureBlobCheckpointStore(connectionString, container);
// Azure Table Storage (good for many small workflows)
var store = new AzureTableCheckpointStore(connectionString, tableName);
// SQL Server
var store = new SqlCheckpointStore(connectionString);
// File system (development only)
var store = new FileCheckpointStore("./checkpoints");
// In-memory (testing only)
var store = new InMemoryCheckpointStore();
Agent Framework provides several built-in patterns for multi-agent collaboration:
Agents take turns in a fixed order:
var chat = new RoundRobinGroupChat(new[]
{
analyst,
critic,
synthesizer
});
var result = await chat.RunAsync(
"Analyze the pros and cons of microservices vs monoliths",
maxRounds: 3);
An AI selector chooses the next speaker:
var selector = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Selector",
Instructions = """
You are a conversation moderator. Based on the conversation so far,
decide which agent should speak next. Choose from:
- Researcher: for finding facts
- Analyst: for interpreting data
- Writer: for creating content
- Critic: for reviewing work
Respond with just the agent name.
"""
});
var chat = new SelectorGroupChat(
selector: selector,
agents: new[] { researcher, analyst, writer, critic },
terminationCondition: conversation =>
conversation.Messages.Last().Content.Contains("TASK COMPLETE"));
await chat.RunAsync("Write a market analysis report for electric vehicles");
All agents respond to each message:
var broadcast = new BroadcastGroupChat(new[]
{
optimist,
pessimist,
realist
});
// Each agent will provide their perspective
var responses = await broadcast.CollectResponsesAsync(
"Should we invest in quantum computing startups?");
foreach (var response in responses)
{
Console.WriteLine($"{response.Agent.Name}: {response.Content}");
}
Nested group chats for complex organization:
// Research team
var researchTeam = new RoundRobinGroupChat(new[]
{
seniorResearcher,
juniorResearcher,
dataAnalyst
});
// Writing team
var writingTeam = new RoundRobinGroupChat(new[]
{
contentWriter,
copyEditor,
factChecker
});
// Executive summary
var executiveChat = new SelectorGroupChat(
selector: projectManager,
agents: new IAgent[]
{
researchTeam.AsAgent("ResearchTeam"),
writingTeam.AsAgent("WritingTeam"),
stakeholderLiaison
});
await executiveChat.RunAsync("Create quarterly market report");
Magentic One is a research-proven pattern from Microsoft Research for complex, open-ended tasks. It features an orchestrator agent that breaks work into subtasks, maintains a task ledger, and delegates to specialist agents:
var magenticOne = new MagenticOneTeam(new MagenticOneOptions
{
Orchestrator = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Orchestrator",
Instructions = """
You are the orchestrator for a team of AI agents. Your job is to:
1. Break down complex tasks into subtasks
2. Assign subtasks to the most appropriate agent
3. Monitor progress and adjust plans as needed
4. Synthesize results into a coherent output
Available agents:
- WebSurfer: Can browse the web and extract information
- Coder: Can write and execute code
- FileSurfer: Can read and analyze files
- ComputerTerminal: Can execute shell commands
"""
}),
Agents = new[]
{
CreateWebSurferAgent(chatClient),
CreateCoderAgent(chatClient),
CreateFileSurferAgent(chatClient),
CreateTerminalAgent(chatClient)
},
MaxIterations = 10,
TaskLedger = new AzureBlobTaskLedger(blobClient)
});
var result = await magenticOne.ExecuteAsync(
"Research the latest developments in quantum error correction, " +
"find the top 5 research papers from 2025, and create a summary " +
"comparing their approaches.");
var workflow = new WorkflowBuilder("resilient")
.AddStep("risky_operation", async ctx =>
{
// This might fail
await externalApi.CallAsync();
})
.WithRetry("risky_operation", new RetryOptions
{
MaxAttempts = 3,
Delay = TimeSpan.FromSeconds(1),
BackoffMultiplier = 2.0,
RetryOn = ex => ex is HttpRequestException or TimeoutException
})
.WithFallback("risky_operation", async (ctx, ex) =>
{
// If all retries fail, use cached data
ctx.Set("result", await cache.GetLastKnownGoodAsync());
ctx.Set("used_fallback", true);
})
.Build();
var runner = new WorkflowRunner(workflow)
{
OnStepError = async (stepId, context, exception) =>
{
logger.LogError(exception, "Step {StepId} failed", stepId);
await alertService.SendAsync(
$"Workflow step failed: {stepId}",
exception.Message);
},
OnWorkflowError = async (context, exception) =>
{
// Save partial results before failing
await savePartialResults(context);
throw exception; // Re-throw to mark workflow as failed (a bare throw; is only valid inside a catch block)
}
};
Workflows integrate with OpenTelemetry:
var runner = new WorkflowRunner(workflow)
{
ActivitySource = new ActivitySource("Workflows.ContentPipeline")
};
// Each step creates a span
// Trace hierarchy:
// workflow:content-pipeline
// ├── step:research
// │ └── agent:Researcher.invoke
// ├── step:write
// │ └── agent:Writer.invoke
// └── step:edit
// └── agent:Editor.invoke
workflow.OnStepCompleted += (sender, args) =>
{
stepDurationHistogram.Record(
args.Duration.TotalMilliseconds,
new KeyValuePair<string, object?>("step", args.StepId),
new KeyValuePair<string, object?>("workflow", args.WorkflowId));
if (args.Context.TryGet<int>("tokens_used", out var tokens))
{
tokenCounter.Add(tokens,
new KeyValuePair<string, object?>("step", args.StepId));
}
};
// ❌ Too much in one step
.AddStep("do_everything", async ctx =>
{
// Research, write, edit, publish... 500 lines
})
// ✅ Single responsibility
.AddStep("research", async ctx => { /* just research */ })
.AddStep("write", async ctx => { /* just writing */ })
.AddStep("edit", async ctx => { /* just editing */ })
public record ContentWorkflowState
{
public string Topic { get; init; } = "";
public string? ResearchNotes { get; set; }
public string? Draft { get; set; }
public ReviewResult? Review { get; set; }
public int RevisionCount { get; set; }
}
// Extension for type safety
public static class ContextExtensions
{
public static ContentWorkflowState GetState(this WorkflowContext ctx)
=> ctx.Get<ContentWorkflowState>("state");
public static void SetState(this WorkflowContext ctx, ContentWorkflowState state)
=> ctx.Set("state", state);
}
.AddStep("publish", async ctx =>
{
var articleId = ctx.Get<string>("article_id");
// Check if already published (in case of retry)
if (await cms.ExistsAsync(articleId))
{
ctx.Set("publish_result", "already_exists");
return;
}
await cms.PublishAsync(articleId, ctx.Get<string>("content"));
ctx.Set("publish_result", "published");
})
// Always use checkpointing for production
var runner = new WorkflowRunner(workflow)
{
CheckpointStore = new AzureTableCheckpointStore(...),
CheckpointFrequency = CheckpointFrequency.AfterEachStep,
// Set reasonable timeouts
StepTimeout = TimeSpan.FromMinutes(5),
WorkflowTimeout = TimeSpan.FromHours(24),
// Handle orphaned workflows
OrphanedWorkflowTimeout = TimeSpan.FromHours(48)
};
In Part 3, we'll explore the Model Context Protocol (MCP) — the universal tool standard that lets your agents use tools built in any language, and exposes your C# tools to agents everywhere.
2026-02-23 09:40:51
TL;DR: I needed a code → review → test pipeline with autonomous AI agents, where the orchestration is deterministic (no LLM deciding the flow). After two months exploring Copilot agent sessions, building my own wrapper (Protoagent), evaluating Ralph Orchestrator, and diving deep into OpenClaw's internals, I found that Lobster (OpenClaw's workflow engine) was the right foundation — except it lacked loops. So I contributed sub-workflow steps with loop support to Lobster, enabling fully deterministic multi-agent pipelines where LLMs do creative work and YAML workflows handle the plumbing. GitHub Copilot coding agent wrote 100% of the implementation.
This didn't start last weekend. It started two months ago when GitHub shipped the Copilot coding agent — the ability to assign a GitHub issue to @copilot and have it work autonomously in a GitHub Actions environment, pushing commits to a draft PR. The Agent Sessions view in VS Code gave you a mission control for all your agents, local or cloud.
That planted the seed: if a cloud agent can work on one issue autonomously, what if you could chain multiple specialized agents into a pipeline? Programmer → reviewer → tester, all running in the background, all pushing to PRs.
The first thing I built was Protoagent — a multi-channel AI agent wrapper in TypeScript/Bun that bridges Claude SDK and GitHub Copilot CLI to Telegram and REST API. The idea was to control AI agents from my phone, using my own subscriptions, with no vendor lock-in. It supported multi-provider switching, voice messages via Whisper, session management, crash recovery, and a REST API for Siri/Apple Watch integration.
Protoagent solved the "talk to an agent from anywhere" problem, but not the orchestration problem. It was still one agent, one session, one task at a time. I needed the pipeline.
Around the same time, I found Ralph Orchestrator — an elegant pattern for autonomous agent loops with hard context resets. And then OpenClaw — which turned out to be a much more complete version of what I was trying to build with Protoagent: multi-channel, multi-agent, with a full tool ecosystem, skills marketplace, and a Gateway architecture.
OpenClaw made Protoagent redundant. But none of these tools solved the specific problem I was after.
I wanted autonomous AI agents working as a dev team: a programmer, a reviewer, and a tester, running in parallel across multiple projects. The pipeline: code → review (max 3 iterations) → test → done. No human in the loop unless something breaks.
The requirements were clear: deterministic orchestration (no LLM deciding the flow), parallel pipelines across multiple projects, and no human in the loop unless something breaks.
I spent a full day exploring options. This is the journey.
Ralph Orchestrator implements the "Ralph Wiggum technique" — an elegant pattern where you trade throughput for correctness by doing hard context resets between iterations. The agent has no memory except a session file (goal, plan, status, log), and each iteration starts fresh with only that file as context.
Ralph is solid, and it does support multiple parallel loops with Telegram routing (reply-to, @loop-id prefix). But for my use case it fell short. Ralph has a built-in interaction event (human.interact for blocking questions), but it's unclear how to define custom events — say, code_complete or review_rejected — that would trigger transitions between different loops. The orchestration between agents (programmer finishes → reviewer starts) would require inventing the event emission and routing mechanism myself.

Ralph solved the "how to make one agent iterate reliably with hard context resets" problem beautifully. The session file pattern (goal, plan, status, log) is elegant. But I needed inter-agent coordination with event-driven transitions, not better intra-agent loops.
OpenClaw is the open-source AI agent platform (150K+ GitHub stars) that connects to messaging platforms and runs locally with full tool access. It already had multi-agent support, so the obvious question was: can I use OpenClaw's built-in sessions_spawn to create my pipeline?
Short answer: no. Here's why.
sessions_spawn creates child agents within a parent session. The parent is an LLM that decides when to spawn children. This means:

- Child session keys look like agent:<agentId>:subagent:<uuid>. I can't address them by project name.
- maxSpawnDepth defaults to 1, max 2. An orchestrator pattern needs depth 2, and sub-agents at depth 2 can't spawn further children.
- maxConcurrent: 8 globally. With 4 projects × 3 roles, I'd hit the limit immediately.

The sub-agent model is designed for "main agent delegates subtask to helper" scenarios, not for peer-to-peer agent coordination with deterministic state machines.
At this point I started sketching a custom architecture:
[Telegram] → [OpenClaw Gateway] ← WebSocket ← [External Orchestrator]
│ │
[Agent Workspaces] State Machine
- programmer/ Redis Streams
- reviewer/ Worker Pool
- tester/
The idea: use OpenClaw purely as I/O (messaging + agent execution), and build an external event bus with Redis Streams or NATS for routing, a state machine engine per project, and a worker spawner with pool control.
It would work. It would also be a massive amount of infrastructure for what should be a simple pipeline. I was reinventing half of what OpenClaw already does.
Three OpenClaw features changed everything when I actually found them:
agentToAgent — Native Peer Messaging
Buried in the multi-agent docs:
{
"tools": {
"agentToAgent": {
"enabled": true,
"allow": ["programmer", "reviewer", "tester"]
}
}
}
When enabled, agents can send messages directly to other agents. Not sub-agents, not spawned children — peer agents with their own workspaces and identities.
sessions_send — Addressable Sessions
sessions_send(sessionKey, message, timeoutSeconds?)
An agent can send a message to any session key. Fire-and-forget with timeoutSeconds: 0, or synchronous (wait for the response). Combined with OpenClaw's session key convention (agent:<agentId>:<key>), this means:
agent:programmer:project-a
agent:reviewer:project-a
agent:tester:project-b
The session key is the address. Agent + project as coordinates.
Webhooks — External Triggers with Session Routing

curl -X POST http://127.0.0.1:18789/hooks/agent \
-H 'Authorization: Bearer SECRET' \
-d '{
"message": "Implement JWT auth",
"agentId": "programmer",
"sessionKey": "hook:project-a:programmer",
"deliver": false
}'
External triggers that route to specific agents and sessions. The deliver: false flag keeps everything internal — no Telegram notification until you explicitly want one.
With these primitives, I could have each agent carry a "pipeline skill" that tells it to use sessions_send to pass the baton:
# Pipeline Skill
When you finish coding, call sessions_send to notify the reviewer.
When you finish reviewing, call sessions_send to notify the tester or programmer.
Read the session history to know which iteration you're on.
This works, but the state machine lives inside the LLM's head. It's reading the skill, interpreting rules, and deciding what to do. If the LLM misinterprets the iteration count or forgets to call sessions_send, the pipeline breaks silently.
I wanted deterministic orchestration. The LLM does creative work (writing code, reviewing code, running tests). A machine does the routing.
OpenClaw supports custom hooks — TypeScript handlers that fire on events like message_sent, tool_result_persist, etc. My idea:

1. Each agent ends its final message with a marker like [event:code_complete] {"project": "project-a"}
2. A hook parses the marker and looks up subscriptions.json to find the next agent
3. The hook calls POST /hooks/agent to trigger the next step
// Sketch of the hook: parse the event marker, then fan out to subscribers.
const handler: HookHandler = async (event) => {
const match = event.context.lastMessage.match(/\[event:(\w+)\]\s*(\{.*\})/s);
if (!match) return;
const [, eventType, payload] = match;
const data = JSON.parse(payload); // e.g. { "project": "project-a" }
const targets = subscriptions[eventType] ?? []; // loaded from subscriptions.json
for (const target of targets) {
await fetch("http://127.0.0.1:18789/hooks/agent", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
message: event.context.lastMessage,
agentId: target.agentId,
sessionKey: `hook:${data.project}:${target.role}`,
deliver: false
})
});
}
};
This was closer — deterministic routing, testable without LLMs, extensible via JSON config. But it required writing a custom plugin, maintaining subscription mappings, and handling iteration counting in the hook.
Then I found the real solution.
Lobster is OpenClaw's built-in workflow engine: a typed, local-first pipeline runtime where steps are shell commands or OpenClaw tool invocations, outputs pipe between steps via stdin, and sensitive steps can require human approval.
The analogy: Lobster is to OpenClaw what GitHub Actions is to GitHub — a declarative pipeline spec that runs within the platform.
A Lobster workflow file looks like this:
name: email-triage
steps:
- id: collect
command: inbox list --json
- id: categorize
command: inbox categorize --json
stdin: $collect.stdout
- id: apply
command: inbox apply --json
stdin: $categorize.stdout
approval: required
Lobster can call any OpenClaw tool via openclaw.invoke, including agent-send (to message other agents) and llm-task (for structured LLM calls with JSON schema validation).
My pipeline needs to loop the code→review cycle up to 3 times. Lobster's step model was linear — no native loop construct.
So I built it.
I opened PR #20 on the Lobster repo, introducing sub-lobster steps — the ability to embed a .lobster file as a step, with optional loop support.
New fields on WorkflowStep:

| Field | Description |
|---|---|
| lobster | Path to a .lobster file to run as a sub-workflow |
| args | Key/value map passed to the sub-workflow |
| loop.maxIterations | Maximum number of iterations |
| loop.condition | Shell command evaluated after each iteration. Exit 0 = continue, non-zero = stop |
The loop condition receives LOBSTER_LOOP_STDOUT, LOBSTER_LOOP_JSON, and LOBSTER_LOOP_ITERATION as environment variables, so you can inspect the sub-workflow's output to decide whether to continue.
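To make that concrete, here's a standalone sketch of such a condition check. It fakes the variables Lobster would export, and uses grep instead of jq to stay dependency-free — a real condition would typically use jq as in the workflow shown later.

```shell
#!/bin/sh
# Illustration only: fake the variables Lobster exports to a loop condition.
LOBSTER_LOOP_JSON='{"approved": false, "feedback": "needs tests"}'
LOBSTER_LOOP_ITERATION=1

# Exit 0 = run another iteration, non-zero = stop the loop.
if echo "$LOBSTER_LOOP_JSON" | grep -q '"approved": true'; then
  echo "approved on iteration $LOBSTER_LOOP_ITERATION"
  exit 1  # review approved: stop looping
else
  echo "not approved yet, iterating"
  exit 0  # keep looping
fi
```

Lobster treats the condition's exit status exactly like a shell if-statement: zero means "run the sub-workflow again", anything else ends the loop.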
Main workflow (dev-pipeline.lobster):
name: dev-pipeline
args:
project: { default: "project-a" }
task: { default: "implement feature" }
steps:
- id: code-review-loop
lobster: ./code-review.lobster
args:
project: ${project}
task: ${task}
loop:
maxIterations: 3
condition: '! echo "$LOBSTER_LOOP_JSON" | jq -e ".approved" > /dev/null'
- id: test
command: >
openclaw.invoke --tool agent-send --args-json '{
"agentId": "tester",
"message": "Test the approved code: $code-review-loop.stdout",
"sessionKey": "pipeline:${project}:tester"
}'
condition: $code-review-loop.json.approved == true
- id: notify
command: >
openclaw.invoke --tool message --action send --args-json '{
"provider": "telegram",
"to": "${chat_id}",
"text": "✅ ${project}: pipeline complete"
}'
condition: $test.exitCode == 0
Sub-workflow (code-review.lobster):
name: code-review
args:
project: {}
task: {}
steps:
- id: code
command: >
openclaw.invoke --tool agent-send --args-json '{
"agentId": "programmer",
"message": "${task}. Iteration $LOBSTER_LOOP_ITERATION.",
"sessionKey": "pipeline:${project}:programmer"
}'
- id: review
command: >
openclaw.invoke --tool agent-send --args-json '{
"agentId": "reviewer",
"message": "Review this: $code.stdout",
"sessionKey": "pipeline:${project}:reviewer"
}'
stdin: $code.stdout
- id: parse
command: >
openclaw.invoke --tool llm-task --action json --args-json '{
"prompt": "Did the review approve? Return approved (bool) and feedback (string).",
"input": $review.json,
"schema": {
"type": "object",
"properties": {
"approved": {"type": "boolean"},
"feedback": {"type": "string"}
},
"required": ["approved", "feedback"]
}
}'
stdin: $review.stdout
Here's what happens when someone sends "project-a: implement JWT" on Telegram:
1. The message hits the webhook, which starts dev-pipeline.lobster for project-a
2. Lobster runs code-review.lobster as a sub-workflow: the programmer codes, the reviewer reviews
3. llm-task parses the review into structured JSON: {approved: false, feedback: "..."}
4. The loop condition checks $LOBSTER_LOOP_JSON.approved — if false and iteration < 3, go to step 2
5. Once approved, the test step runs, then notify reports back to Telegram

All deterministic. All inside OpenClaw. Zero external infrastructure.
Telegram
│
▼
OpenClaw Gateway (:18789)
│
├── Agents (isolated workspaces, tools, identity, models)
│ ├── programmer/
│ ├── reviewer/
│ └── tester/
│
├── Lobster (workflow engine)
│ ├── dev-pipeline.lobster (main: loop → test → notify)
│ └── code-review.lobster (sub: code → review → parse)
│
├── llm-task plugin (structured JSON from LLM, schema-validated)
│
└── Webhooks (/hooks/agent)
└── Trigger pipelines per project with isolated session keys
Each agent is a full OpenClaw agent:
- Its own isolated workspace and identity files (AGENTS.md, SOUL.md)
- Its own tool policy (the programmer gets exec, write; reviewer gets read only; tester gets exec + test runners)

The LLMs do what LLMs are good at: writing code, analyzing code, running tests. Lobster does what code is good at: sequencing, counting, routing, retrying.
1. Don't orchestrate with LLMs. Every time I tried to put flow control in a prompt ("when you're done, send to the reviewer"), I introduced a failure mode. LLMs are unreliable routers. Use them for creative work, use code for plumbing.
2. Read the docs twice. I almost built an entire external event bus before discovering that OpenClaw already had agentToAgent, sessions_send, and webhooks with session routing. The primitives were there — I just hadn't found them yet.
3. Contribute the missing piece instead of working around it. Lobster didn't have loops. Instead of building a wrapper script or a plugin hook to simulate loops, I added loop support to Lobster itself. The sub-lobster PR is 129 lines of implementation + 186 lines of tests. It took less time than any of the workarounds would have.
4. Session keys are your data model. The pattern pipeline:<project>:<role> gives you project isolation, role separation, and addressability in one string. No database needed — the session key is the address.
5. Typed pipelines beat prompt engineering for coordination. A YAML file with condition, loop, and stdin piping is infinitely more reliable than telling an LLM "if the review is negative, go back to step 2, but only up to 3 times."
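The session-key lesson can be illustrated with a toy sketch (plain shell, nothing OpenClaw-specific): a key of the form pipeline:<project>:<role> splits cleanly into its three coordinates.

```shell
#!/bin/sh
# Split a session key of the form pipeline:<project>:<role>.
key="pipeline:project-a:reviewer"

# Let the shell's field splitting do the work: IFS=: splits on colons.
IFS=: read -r prefix project role <<EOF
$key
EOF

echo "prefix=$prefix project=$project role=$role"
# → prefix=pipeline project=project-a role=reviewer
```

One string carries the pipeline namespace, the project isolation, and the role routing — which is exactly why no separate database is needed.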
If you're building multi-agent systems, consider whether your orchestration layer needs to be an LLM at all. Sometimes the best agent architecture is one where the agents don't know they're being orchestrated.
This article describes work that spanned about two months and involved several different tools and approaches.
Claude helped me think through the architecture options — bouncing ideas, evaluating trade-offs between approaches, and structuring the decision tree. It was a thinking partner for the design phase.
The exploration of OpenClaw's internals was largely manual. Claude wasn't able to fully parse OpenClaw's documentation and source code to surface the key primitives I needed (agentToAgent, sessions_send, Lobster workflows, plugin hooks). I found those by reading the docs myself, tracing through the codebase, and connecting dots that weren't obvious from search results alone. If you're building on a fast-moving open-source project, there's no substitute for reading the source.
GitHub Copilot coding agent wrote 100% of the Lobster fork code. I assigned the task, described what I wanted (sub-workflow steps with loop support), and Copilot worked autonomously in its cloud environment. My only involvement was code review on the PR. The irony isn't lost on me: an autonomous coding agent built the loop primitive that enables autonomous coding agent pipelines.