2026-01-16 21:00:00
Building AI applications often feels like writing "glue code"—endless if/else statements and loops to manage how data flows between your Prompt, LLM, and Output Parser.
LangChain Expression Language (LCEL) solves this by giving us a declarative, composable way to build chains. It’s like Unix pipes (|) for AI.
In this post, I'll walk you through a Python demo I built using LangChain, Ollama, and the Gemma model that showcases three advanced capabilities: Routing, Parallel Execution, and Streaming Middleware.
You have one chatbot, but you want it to behave differently based on what the user asks. If they ask for code, you want a "Senior Engineer" persona. If they ask about data, you want a "Data Scientist".
RunnableBranch
Instead of writing imperative if statements, we build a Router Chain.
We use RunnableBranch to direct the flow to the correct sub-chain.
# A chain that outputs "code", "data", or "general"
classifier_chain = classifier_prompt | llm | parser
# Route based on the output of classifier_chain
routing_chain = RunnableBranch(
    (lambda x: x["intent"] == "code", code_chain),
    (lambda x: x["intent"] == "data", data_chain),
    general_chain,
)
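If it helps to see the dispatch logic without LangChain installed, RunnableBranch is conceptually a first-match router: conditions are checked in order, and the first one that passes wins. The sub-chains below are hypothetical stand-ins for the real prompt/LLM chains, just to make the sketch runnable:

```python
# Conceptual model of RunnableBranch: first matching condition wins,
# otherwise the default runnable handles the input.
def route(x, branches, default):
    for condition, runnable in branches:
        if condition(x):
            return runnable(x)
    return default(x)

# Hypothetical stand-ins for the sub-chains in the post.
code_chain = lambda x: f"[code persona] {x['query']}"
data_chain = lambda x: f"[data persona] {x['query']}"
general_chain = lambda x: f"[general persona] {x['query']}"

branches = [
    (lambda x: x["intent"] == "code", code_chain),
    (lambda x: x["intent"] == "data", data_chain),
]

print(route({"intent": "code", "query": "binary search"}, branches, general_chain))
# → [code persona] binary search
```

The real RunnableBranch does the same thing, except each branch is a full LCEL runnable rather than a plain function.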
When you run: python main.py routing --query "Write a binary search in Python"
Output:
[Router] Detected 'code'
def binary_search(arr, target):
# ... concise, professional code output ...
The system automatically detected the intent and switched to the coding expert persona!
You need to answer a question using info from multiple distinct documents (e.g., your internal wiki, API docs, and general notes). Querying them one by one is slow.
RunnableParallel
RunnableParallel runs multiple runnables at the same time. We use it to fan-out our query to three different retrievers simultaneously.
parallel_retrievers = RunnableParallel({
    "lc_docs": retriever_langchain,
    "ollama_docs": retriever_ollama,
    "misc_docs": retriever_misc,
})
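Under the hood, this fan-out is morally equivalent to gathering three coroutines concurrently. Here is a dependency-free sketch with dummy retrievers (the retriever names are illustrative, not real LangChain objects):

```python
import asyncio

# Dummy async "retrievers" standing in for real vector-store lookups.
async def retriever_langchain(query):
    return f"lc_docs hit for {query!r}"

async def retriever_ollama(query):
    return f"ollama_docs hit for {query!r}"

async def retriever_misc(query):
    return f"misc_docs hit for {query!r}"

async def parallel_retrieve(query):
    # Fan out: all three retrievers run concurrently, like RunnableParallel.
    keys = ["lc_docs", "ollama_docs", "misc_docs"]
    results = await asyncio.gather(
        retriever_langchain(query),
        retriever_ollama(query),
        retriever_misc(query),
    )
    return dict(zip(keys, results))

print(asyncio.run(parallel_retrieve("What is LCEL?")))
```

RunnableParallel gives you this concurrency for free, plus the same dict-shaped output you can pipe into the next step.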
When you run: python main.py parallel_rag --query "What is LCEL?"
Output:
The "Merger" step received results from all three databases instantly, combined them, and the LLM answered using the full context.
You are streaming the LLM's response letter-by-letter to the user, but you need to catch sensitive information (like PII) before it hits the screen.
We can wrap the standard .astream() iterator with our own Python async generator. This acts as a "middleware" layer that can buffer, sanitize, or log the tokens in real-time.
async def middleware_stream(iterable):
    buffer = ""
    async for chunk in iterable:
        buffer += chunk
        # If the buffer contains a potential email, redact it
        if "@" in buffer:
            yield "[REDACTED_EMAIL]"
        else:
            yield buffer
        buffer = ""
(Note: The actual implementation uses smarter buffering to handle split tokens)
When you run: python main.py stream_middleware --query "My email is [email protected]"
Output:
Even though the LLM generated the real email, our middleware caught it on the fly and replaced it before the user saw it.
This demo proves that LCEL isn't just syntactic sugar—it's a powerful framework for building complex, production-ready flows. We achieved:
...all using standard, composable components running locally with Ollama!
2026-01-16 20:55:12
APIs rarely “hard fail” in a clean way.
What ApiWatch is
ApiWatch helps you monitor mission-critical API endpoints with confidence: real-time monitoring and instant alerts when something breaks.
You can try it here:
https://apiwatch.eu
What it does today
Current features are intentionally focused on the “simple but not too simple” sweet spot:
ApiWatch is trying to stay developer-first: quick setup, clear signals, and enough detail to debug without drowning you in noise.
What’s next (and where feedback helps)
ApiWatch is still early, and feedback from teams who run real APIs is the most valuable input right now.
If you use tools like UptimeRobot / Better Stack / Kuma, it’d help a lot to hear:
Which alert channels matter most (Email, Slack, webhooks, SMS)?
What’s your biggest source of false positives today?
What would make you switch? (Terraform/API-based setup, better grouping/tagging, etc.)
Try it + roast it
Link: https://apiwatch.eu
If you comment with your use case (public API, internal microservices, cron endpoints, etc.), it’s easier to prioritize the next features.
2026-01-16 20:53:49
The cloud is not built for predictability.
It is built for change.
Traffic spikes without warning. Costs drift silently. Instances fail at 3 a.m. Configurations change hundreds of times a day. In this reality, static infrastructure thinking breaks down fast.
That’s why modern cloud infrastructure is event-driven by design.
Event-driven architecture (EDA) is not a pattern you “adopt later.”
It is the operating system of the cloud itself.
At its core, event-driven architecture is simple:
Something changes → the system reacts automatically.
An event is any meaningful state change:
Instead of waiting for a human or a synchronous request, systems listen for these changes and respond in real time.
This creates:
Cloud platforms like Amazon Web Services, Microsoft Azure, and Google Cloud are fundamentally built around this model, using services such as AWS Lambda and Google Cloud Pub/Sub.
Traditional infrastructure scales based on assumptions.
Event-driven infrastructure scales based on reality.
A sudden traffic surge (flash sale, feature launch, marketing spike) overwhelms fixed capacity.
Result:
Traffic increase is treated as an event, not a surprise.
That single signal automatically triggers:
All of this happens in seconds, without human involvement.
For developers, this means fewer firefights.
For FinOps, it means capacity exists only when it’s needed - no idle waste.
Failures are inevitable. Downtime is not.
In an event-driven cloud, failures don’t trigger panic - they trigger workflows.
Example:
No tickets. No waiting. No heroics.
This is self-healing infrastructure, and it’s only possible when systems react to events instead of relying on manual processes.
In large cloud environments, configuration drift is guaranteed.
Manual enforcement does not scale.
Event-driven governance flips the model:
Instead of periodic audits and retroactive fixes, compliance becomes continuous and automatic.
This is especially critical for:
This is where event-driven cloud truly compounds value.
Think of events as the glue that connects your entire platform.
A single event can fan out into multiple automated actions:
Each step emits new events, chaining actions without tight coupling.
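That fan-out-and-chain behavior fits in a few lines of plain Python. The event names and handlers below are purely illustrative, not tied to any particular cloud service:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub/sub: handlers subscribe to event types."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Fan out: every subscribed handler reacts; a handler may publish
        # follow-up events, chaining actions without tight coupling.
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()
log = []

bus.subscribe("traffic.spike", lambda e: log.append(f"scale out to {e['replicas']} replicas"))
bus.subscribe("traffic.spike", lambda e: bus.publish("cost.alert", {"source": "autoscaler"}))
bus.subscribe("cost.alert", lambda e: log.append(f"notify FinOps from {e['source']}"))

bus.publish("traffic.spike", {"replicas": 10})
print(log)
# → ['scale out to 10 replicas', 'notify FinOps from autoscaler']
```

Managed services like AWS EventBridge or Google Cloud Pub/Sub play the role of the bus at cloud scale; the publisher never needs to know who is listening.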
The result?
Engineers focus on building products.
FinOps teams focus on optimizing signals, not chasing bills.
Here’s the uncomfortable truth:
Cloud costs don’t spike randomly. They spike because something happened.
All of these are events.
Event-driven infrastructure allows FinOps teams to:
Without events, FinOps is reactive.
With events, FinOps becomes real-time cost control.
Modern cloud infrastructure is not about managing servers.
It’s about responding intelligently to change.
Event-driven architecture enables that shift by making every change observable, actionable, and automated.
From:
Event-driven design is no longer optional.
If your cloud cannot react automatically to what’s happening right now, you’re already behind.
The future of cloud infrastructure isn’t static.
It listens. It reacts. It optimizes.
And it’s event-driven.
2026-01-16 20:52:05
Starting my journey in Data Science, Analysis, and AI at LUXDevHQ felt like learning a new language while trying to build a house. One of the most important tools I’ve discovered along the way is Version Control.
In this guide, I’ll walk you through:
Git is a Version Control System (VCS). Think of it as a save-point system for your code.
To let GitHub know who is uploading code, configure your global Git settings:
git config --global user.name "Your Name"
git config --global user.email [email protected]
Using SSH is the professional standard. It’s more secure and saves you from typing your password every time you push code.
Open Git Bash and enter (replace with your GitHub email):
ssh-keygen -t ed25519 -C [email protected]
• File Location: Press Enter to use the default location.
• Passphrase: As a beginner, I left this empty for convenience.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
cat ~/.ssh/id_ed25519.pub
Go to GitHub: Settings → SSH and GPG keys → New SSH Key.
Give it a name (e.g., "My Learning Laptop") and paste your key into the "Key" box.
Run:
ssh -T [email protected]
Success Check! If you see "Hi [YourUsername]! You've successfully authenticated", you are ready!
Learning to navigate via Git Bash makes you much faster than using a mouse! Use these commands to create your first repository:
• Check Location: pwd (Print Working Directory).
• Go to Desktop: cd Desktop
• Create Folder: mkdir my-first-repo
• Enter Folder: cd my-first-repo
Before sending code to GitHub, Git needs to "track" it locally. Run these inside your project folder:
git init
git add .
git commit -m "First commit"
Pushing sends your local save points to the cloud.
On the GitHub setup page, click SSH and copy the URL. Then run these commands one by one:
git remote add origin [email protected]:your-username/repo-name.git
git branch -M main
git push -u origin main
If you work on a different computer, use Pull to download the latest updates from the cloud:
git pull origin main
• Official Git Documentation
• GitHub Skills: Interactive Courses
• Visualizing Git Commands (Game)
2026-01-16 20:43:44
Originally published at https://allcoderthings.com/en/article/csharp-decision-structures-if-else-switch
In programming, you often need to perform different actions depending on conditions. In C#, such cases are handled with decision structures. The most common ones are if, else if, else, and switch.
The if statement executes a block when the condition is true.
int number = 10;
if (number > 5)
{
    Console.WriteLine("Number is greater than 5.");
}
Use else to run an alternative block when the condition is false.
Console.Write("Enter your grade: ");
int grade = int.Parse(Console.ReadLine()); // convert from string to int
if (grade >= 50)
{
    Console.WriteLine("You passed.");
}
else
{
    Console.WriteLine("You failed.");
}
Check multiple conditions in sequence using else if.
int grade = 75;
if (grade >= 90)
{
    Console.WriteLine("Grade: A");
}
else if (grade >= 70)
{
    Console.WriteLine("Grade: B");
}
else if (grade >= 50)
{
    Console.WriteLine("Grade: C");
}
else
{
    Console.WriteLine("Failed");
}
Use switch to branch execution based on fixed values. It is more readable than many else if blocks.
Console.Write("Enter day number (1-7): ");
int day = int.Parse(Console.ReadLine());
switch (day)
{
    case 1: Console.WriteLine("Monday"); break;
    case 2: Console.WriteLine("Tuesday"); break;
    case 3: Console.WriteLine("Wednesday"); break;
    case 4: Console.WriteLine("Thursday"); break;
    case 5: Console.WriteLine("Friday"); break;
    case 6: Console.WriteLine("Saturday"); break;
    case 7: Console.WriteLine("Sunday"); break;
    default: Console.WriteLine("Invalid day!"); break;
}
Decision structures often rely on user input.
Console.Write("Enter a number: ");
int number = int.Parse(Console.ReadLine());
if (number % 2 == 0)
{
    Console.WriteLine("The number is even.");
}
else
{
    Console.WriteLine("The number is odd.");
}
• if: Runs when the condition is true.
• else: Runs when the condition is false.
• else if: Checks multiple conditions sequentially.
• switch: Branches on fixed values.
In this example, a simple console-based menu system is created. The user makes a choice, which is validated with the if / else if / else structure, and the switch statement executes the selected mathematical operation. This demonstrates how conditional and selection structures can be combined.
using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("=== Menu ===");
        Console.WriteLine("1 - Addition");
        Console.WriteLine("2 - Subtraction");
        Console.WriteLine("3 - Multiplication");
        Console.WriteLine("4 - Division");
        Console.WriteLine("0 - Exit");
        Console.Write("Enter your choice: ");
        int choice = int.Parse(Console.ReadLine()); // convert from string to int

        if (choice == 0)
        {
            Console.WriteLine("Exiting program...");
        }
        else if (choice >= 1 && choice <= 4)
        {
            Console.Write("Enter the first number: ");
            double num1 = double.Parse(Console.ReadLine());
            Console.Write("Enter the second number: ");
            double num2 = double.Parse(Console.ReadLine());

            switch (choice)
            {
                case 1:
                    Console.WriteLine($"Result: {num1 + num2}");
                    break;
                case 2:
                    Console.WriteLine($"Result: {num1 - num2}");
                    break;
                case 3:
                    Console.WriteLine($"Result: {num1 * num2}");
                    break;
                case 4:
                    if (num2 != 0)
                        Console.WriteLine($"Result: {num1 / num2}");
                    else
                        Console.WriteLine("Error: Division by zero is not allowed!");
                    break;
            }
        }
        else
        {
            Console.WriteLine("Invalid choice.");
        }
    }
}
2026-01-16 20:42:45
In today’s technical blogging world, clear, accurate, and up-to-date diagrams are essential for explaining complex systems—whether it’s architecture, workflows, data flows, or infrastructure layouts like the Puppet setups we’ve been exploring in this series.
For years, tools like Microsoft Visio, Lucidchart, draw.io (now diagrams.net), or even PlantUML have been go-to solutions. However, more and more authors (especially in the DevOps, cloud, and open-source communities) are turning to Mermaid — and for good reason.
In this post, we’ll explore:
Mermaid is a JavaScript-based diagramming and charting tool that lets you create diagrams entirely from text using a simple, Markdown-friendly syntax.
graph TD
    A[Start] --> B{Decision?}
    B -->|Yes| C[Do Something]
    B -->|No| D[Do Something Else]
    C --> E[End]
    D --> E
That tiny block of text becomes a clean, professional-looking flowchart when rendered:
Mermaid supports many diagram types out of the box:
Here are the key advantages of Mermaid compared to traditional tools like Visio, Lucidchart, or even draw.io:
| Feature | Mermaid | Visio / Lucidchart / draw.io |
|---|---|---|
| Version control friendly | 100% text → perfect for Git | Binary files or proprietary formats |
| Diffs & reviews | Easy to see changes in PRs | Almost impossible without special viewers |
| Editing speed | Extremely fast (just type) | Requires opening a GUI tool |
| Cost | Completely free & open source | Visio = paid; Lucidchart = subscription |
| Dependencies | Just a JS library or CLI | Needs installed software or account |
| Integration with Markdown | Native (GitHub, GitLab, Obsidian, etc.) | Requires exporting images manually |
| Maintainability | Change one line → diagram updates | Must manually reposition elements |
| Collaboration | Works in any text editor + Git | Requires shared accounts or export/import cycles |
| Reproducibility | Same text → same diagram every time | Risk of “font substitution” or layout shifts |
In short:
Mermaid turns diagrams into code — which means they become first-class citizens in your documentation repository, just like your README, tests, or configuration files.
While GitHub, GitLab, Obsidian, Notion (recently), and many static site generators now render Mermaid natively, many popular blogging platforms still do not:
So what happens when you paste a beautiful Mermaid code block into a blog post on one of these platforms?
→ It just shows up as plain text — completely useless to readers.
This is exactly the problem I wanted to solve for my Puppet blog series.
In my case this is done by the Python logic of the pipeline at runtime.
6. Fallback behavior
If rendering fails for any reason → the original Mermaid code block remains unchanged (and an error is logged).
| Use Case | Recommended Tool | Why? |
|---|---|---|
| Infrastructure / architecture docs | Mermaid | Lives in Git, easy to update, perfect for CI/CD pipelines |
| Quick flowcharts in READMEs | Mermaid | Native GitHub rendering |
| Complex interactive dashboards | Lucidchart / draw.io | Better for drag-and-drop and heavy collaboration |
| Official company org charts / presentations | Visio / PowerPoint | Polished look, integration with Microsoft 365 ecosystem |
| One-off pretty diagrams for blog posts | draw.io (export PNG) | Fast to create, great styling options |
| Long-lived, frequently updated docs | Mermaid | Minimizes maintenance pain |
Mermaid isn’t trying to replace heavy-duty GUI tools like Visio or Lucidchart — it’s solving a different problem:
“How can I keep my diagrams in sync with my code, version-controlled, and easy to maintain forever?”
By rendering Mermaid diagrams to PNGs during the publishing process, we get the best of both worlds:
This new pipeline is now live for all coming series like the Puppet series — so all future architecture diagrams (like the Puppet HA setup with Foreman, compile masters, and PuppetDB) will render beautifully, no matter where you read the blog.
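The pipeline itself isn't listed in this post, but the core idea — find Mermaid fences in the Markdown, render each to a PNG, and swap in an image link — can be sketched in a few lines. The file names, output directory, and the mermaid-cli (mmdc) invocation below are assumptions, and the fence strings are built programmatically only to keep this listing well-formed:

```python
import re
import shutil
import subprocess
from pathlib import Path

FENCE = "`" * 3  # triple backticks, built programmatically for this listing
MERMAID_BLOCK = re.compile(FENCE + r"mermaid\n(.*?)\n" + FENCE, re.DOTALL)

def render_mermaid(markdown: str, out_dir: Path = Path("diagrams")) -> str:
    """Replace each Mermaid fence with an image link to a rendered PNG.

    If mmdc (mermaid-cli) is unavailable or fails, the original code
    block is left unchanged -- the fallback behavior described above.
    """
    out_dir.mkdir(exist_ok=True)

    def replace(match, counter=[0]):
        counter[0] += 1
        src = out_dir / f"diagram-{counter[0]}.mmd"
        png = out_dir / f"diagram-{counter[0]}.png"
        src.write_text(match.group(1))
        mmdc = shutil.which("mmdc")
        if mmdc is None:
            return match.group(0)  # fallback: keep the code block
        result = subprocess.run([mmdc, "-i", str(src), "-o", str(png)])
        if result.returncode != 0:
            return match.group(0)  # fallback on render failure
        return f"![diagram]({png})"

    return MERMAID_BLOCK.sub(replace, markdown)
```

Because the transformation runs at publish time, the source of truth in Git stays pure Mermaid text, while platforms that can't render Mermaid natively still receive ordinary PNG images.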
Have you started using Mermaid in your documentation?
Or are you still exporting screenshots from draw.io?
Let me know in the comments — I’d love to hear your workflow!
Happy diagramming! 🚀
Did you find this post helpful? You can support me.