
Mastering LangChain Expression Language (LCEL): Branching, Parallelism, and Streaming

2026-01-16 21:00:00

Building AI applications often feels like writing "glue code"—endless if/else statements and loops to manage how data flows between your Prompt, LLM, and Output Parser.

LangChain Expression Language (LCEL) solves this by giving us a declarative, composable way to build chains. It’s like the Unix pipe (|) for AI.

In this post, I'll walk you through a Python demo I built using LangChain, Ollama, and the Gemma model that showcases three advanced capabilities: Routing, Parallel Execution, and Streaming Middleware.
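To ground the demo, here is a minimal sketch of a basic LCEL pipeline. It assumes the langchain-ollama package and a local Ollama server; the prompt text and model tag are illustrative, not the demo's exact code:

import asyncio  # used later for the streaming example

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
llm = ChatOllama(model="gemma")  # whichever Gemma tag you pulled locally
parser = StrOutputParser()

# Each component pipes its output into the next, Unix-style
chain = prompt | llm | parser
print(chain.invoke({"question": "What is LCEL?"}))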

1. Intelligent Routing (Branching)

The Problem

You have one chatbot, but you want it to behave differently based on what the user asks. If they ask for code, you want a "Senior Engineer" persona. If they ask about data, you want a "Data Scientist".

The LCEL Solution: RunnableBranch

Instead of writing imperative if statements, we build a Router Chain.

  1. Classify Intent: We ask the LLM to categorize the input (e.g., "code", "data", "general").
  2. Branch: We use RunnableBranch to direct the flow to the correct sub-chain.

The Code

from langchain_core.runnables import RunnableBranch

# A chain that outputs "code", "data", or "general"
classifier_chain = classifier_prompt | llm | parser

# Route on the classifier's output; the final argument is the default
# branch used when no condition matches
routing_chain = RunnableBranch(
    (lambda x: x["intent"] == "code", code_chain),
    (lambda x: x["intent"] == "data", data_chain),
    general_chain,
)
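One subtlety: the branch conditions read x["intent"], so the classifier's verdict has to be attached to the input before routing. A minimal way to wire that up (the key names here are assumptions following the snippet above; the dict is auto-coerced into a parallel step):

full_chain = {
    "intent": classifier_chain,
    "query": lambda x: x["query"],
} | routing_chain

print(full_chain.invoke({"query": "Write a binary search in Python"}))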

The Result

When you run: python main.py routing --query "Write a binary search in Python"

Output:

[Router] Detected 'code'

def binary_search(arr, target):
    # ... concise, professional code output ...

The system automatically detected the intent and switched to the coding expert persona!


2. Parallel Fan-Out (Multi-Source RAG)

The Problem

You need to answer a question using info from multiple distinct documents (e.g., your internal wiki, API docs, and general notes). Querying them one by one is slow.

The LCEL Solution: RunnableParallel

RunnableParallel runs multiple runnables at the same time. We use it to fan-out our query to three different retrievers simultaneously.

The Code

from langchain_core.runnables import RunnableParallel

# Fan the same query out to all three retrievers concurrently
parallel_retrievers = RunnableParallel({
    "lc_docs": retriever_langchain,
    "ollama_docs": retriever_ollama,
    "misc_docs": retriever_misc,
})
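The fan-out returns a dict keyed exactly as above. The "merger" step can then be a plain function lifted into the chain; here is a sketch of that classic pattern (the merge logic and answer_prompt are assumptions, not the demo's exact code):

from langchain_core.runnables import RunnableLambda, RunnablePassthrough

def merge_docs(results: dict) -> str:
    # Flatten the three retrievers' result lists into one context block
    docs = results["lc_docs"] + results["ollama_docs"] + results["misc_docs"]
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {
        "context": parallel_retrievers | RunnableLambda(merge_docs),
        "question": RunnablePassthrough(),
    }
    | answer_prompt  # assumed prompt with {context} and {question} slots
    | llm
    | parser
)

print(rag_chain.invoke("What is LCEL?"))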

The Result

When you run: python main.py parallel_rag --query "What is LCEL?"


The "Merger" step received results from all three databases instantly, combined them, and the LLM answered using the full context.

3. Streaming Middleware (Real-Time Transforms)

The Problem

You are streaming the LLM's response letter-by-letter to the user, but you need to catch sensitive information (like PII) before it hits the screen.

The LCEL Solution: Generator Middleware

We can wrap the standard .astream() iterator with our own Python async generator. This acts as a "middleware" layer that can buffer, sanitize, or log the tokens in real-time.

The Code

async def middleware_stream(iterable):
    buffer = ""
    async for chunk in iterable:
        buffer += chunk
        # If the buffer contains a potential email, redact it
        if "@" in buffer:
            yield "[REDACTED_EMAIL]"
            buffer = ""
        else:
            # Flush what we have, then clear it so text isn't repeated
            yield buffer
            buffer = ""

(Note: The actual implementation uses smarter buffering to handle split tokens)
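Plugging the middleware in is just a matter of wrapping the chain's token stream; a sketch, assuming the chain from the first example:

import asyncio

async def main(query: str):
    stream = chain.astream({"question": query})  # yields string chunks
    async for safe_chunk in middleware_stream(stream):
        print(safe_chunk, end="", flush=True)

asyncio.run(main("My email is john@example.com"))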

The Result

When you run: python main.py stream_middleware --query "My email is john@example.com"


Even though the LLM generated the real email, our middleware caught it on the fly and replaced it before the user saw it.

This demo proves that LCEL isn't just syntactic sugar—it's a powerful framework for building complex, production-ready flows. We achieved:

  1. Dynamic Logic (Routing)
  2. Performance (Parallelism)
  3. Safety (Middleware)

...all using standard, composable components running locally with Ollama!

GitHub: https://github.com/harishkotra/langchain-lcel-demo

I built ApiWatch — a free, developer-friendly API uptime monitor (HTTP + keyword checks)

2026-01-16 20:55:12

APIs rarely “hard fail” in a clean way.

What ApiWatch is
ApiWatch helps you monitor mission-critical API endpoints with confidence, with real-time monitoring and instant alerts when something breaks.

You can try it here:
https://apiwatch.eu

What it does today
Current features are intentionally focused on the “simple but not too simple” sweet spot:

  • Scheduled checks for HTTP endpoints (status codes, timeouts).
  • Optional keyword matching to catch cases where you get 200 OK but the response body is actually wrong (see the sketch below).
  • SSL, port, DNS, and WHOIS monitoring (useful for catching expiring certs and domain issues early).
  • Notifications when an endpoint goes down or comes back online (with retries).
  • CI/CD integration: disable a monitor from your pipeline and re-enable it after a successful release.
  • A shareable status page and widgets for communicating incidents.
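To make the keyword-matching point concrete, here is the failure mode it guards against, as a minimal stand-alone sketch (the URL and expected keyword are illustrative; this is not ApiWatch's implementation):

import urllib.request

URL = "https://api.example.com/health"
EXPECTED = '"status": "ok"'

with urllib.request.urlopen(URL) as resp:
    body = resp.read().decode()
    # A bare status-code check would call this healthy; the keyword
    # check also verifies the body says what it should
    healthy = resp.status == 200 and EXPECTED in body

print("UP" if healthy else "DEGRADED")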

ApiWatch is trying to stay developer-first: quick setup, clear signals, and enough detail to debug without drowning you in noise.

What’s next (and where feedback helps)
ApiWatch is still early, and feedback from teams who run real APIs is the most valuable input right now.

If you use tools like UptimeRobot / Better Stack / Kuma, it’d help a lot to hear:

  • Which alert channels matter most (email, Slack, webhooks, SMS)?
  • What’s your biggest source of false positives today?
  • What would make you switch (Terraform/API-based setup, better grouping/tagging, etc.)?

Try it + roast it
Link: https://apiwatch.eu

If you comment with your use case (public API, internal microservices, cron endpoints, etc.), it’s easier to prioritize the next features.

Why Is Cloud Infrastructure Event-Driven?

2026-01-16 20:53:49

Why Is Cloud Infrastructure Event-Driven?

The cloud is not built for predictability.

It is built for change.

Traffic spikes without warning. Costs drift silently. Instances fail at 3 a.m. Configurations change hundreds of times a day. In this reality, static infrastructure thinking breaks down fast.

That’s why modern cloud infrastructure is event-driven by design.

Event-driven architecture (EDA) is not a pattern you “adopt later.”

It is the operating system of the cloud itself.


What Event-Driven Architecture Really Means (Beyond the Textbook)

At its core, event-driven architecture is simple:

Something changes → the system reacts automatically.

An event is any meaningful state change:

  • CPU crosses a threshold
  • Traffic suddenly spikes
  • A VM becomes unhealthy
  • A deployment is pushed
  • A cost anomaly appears
  • A security rule is modified

Instead of waiting for a human or a synchronous request, systems listen for these changes and respond in real time.

This creates:

  • Loosely coupled systems
  • Faster reactions
  • Higher resilience
  • Far less human intervention

Cloud platforms like Amazon Web Services, Microsoft Azure, and Google Cloud are fundamentally built around this model, using services such as AWS Lambda and Google Cloud Pub/Sub.
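As a concrete illustration, the "listen and react" loop is often just a small function bound to an event source. A minimal Lambda-style sketch (the event shape and remediation action are illustrative, not any provider's exact schema):

def scale_out():
    # Hypothetical remediation: request extra capacity
    print("scaling out...")

def handler(event, context):
    # A monitoring alarm fired: the state change arrives as an event
    if event.get("detail", {}).get("state") == "ALARM":
        scale_out()
    return {"handled": True}

# Local simulation of an incoming event
handler({"detail": {"state": "ALARM"}}, context=None)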

Reactive Scalability: Scale Because Something Happened

Traditional infrastructure scales based on assumptions.

Event-driven infrastructure scales based on reality.


The old problem

A sudden traffic surge (flash sale, feature launch, marketing spike) overwhelms fixed capacity.

Result:

  • Slow response times
  • Errors
  • Pager alerts
  • Revenue loss

The event-driven reality

Traffic increase is treated as an event, not a surprise.

That single signal automatically triggers:

  • New containers or instances spinning up
  • Load balancers redistributing traffic
  • Read replicas scaling out
  • Caches warming proactively

All of this happens in seconds, without human involvement.

For developers, this means fewer firefights.

For FinOps, it means capacity exists only when it’s needed - no idle waste.

Automated Remediation: Failures Are Just Another Event

Failures are inevitable. Downtime is not.

In an event-driven cloud, failures don’t trigger panic - they trigger workflows.

Example:

  1. A node becomes unresponsive
  2. Monitoring emits a failure event
  3. The instance is removed from rotation
  4. A replacement is provisioned
  5. Traffic is rerouted
  6. The incident is logged and alerted

No tickets. No waiting. No heroics.
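Expressed as code, that workflow is just an event handler chaining the steps above. A sketch with stub functions standing in for real cloud SDK calls:

def remove_from_rotation(node): print(f"drained {node}")
def provision_replacement(): return "node-new"
def reroute_traffic(node): print(f"rerouted to {node}")
def log_incident(event): print(f"logged {event}")

def on_node_unresponsive(event):
    remove_from_rotation(event["node_id"])  # pull it out of the pool
    replacement = provision_replacement()   # spin up a substitute
    reroute_traffic(replacement)            # shift traffic over
    log_incident(event)                     # record and alert

on_node_unresponsive({"node_id": "node-42"})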


This is self-healing infrastructure, and it’s only possible when systems react to events instead of relying on manual processes.

Configuration, Governance, and Compliance - Enforced by Events

In large cloud environments, configuration drift is guaranteed.

Manual enforcement does not scale.


Event-driven governance flips the model:

  • Every infrastructure change becomes an event
  • Every event triggers automated policy checks
  • Violations generate corrective actions or alerts instantly
  • Drift is detected and corrected in near real time

Instead of periodic audits and retroactive fixes, compliance becomes continuous and automatic.

This is especially critical for:

  • Regulated environments
  • Multi-account, multi-cloud setups
  • High-velocity engineering teams

Automation: Turning Signals Into Outcomes

This is where event-driven cloud truly compounds value.

Think of events as the glue that connects your entire platform.


A single event can fan out into multiple automated actions:

  • Storage upload → processing function
  • Processing completion → database update
  • Database update → notification
  • Notification → downstream workflow

Each step emits new events, chaining actions without tight coupling.
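That chaining can be mimicked in a few lines with a toy in-process event bus (a stand-in for SNS/EventBridge/Pub-Sub-style services; topic names are illustrative):

from collections import defaultdict

subscribers = defaultdict(list)

def on(topic):
    # Decorator: subscribe a handler to a topic
    def register(fn):
        subscribers[topic].append(fn)
        return fn
    return register

def emit(topic, payload):
    # Deliver the event to every subscriber of the topic
    for fn in subscribers[topic]:
        fn(payload)

@on("storage.upload")
def process(payload):
    emit("processing.done", payload)

@on("processing.done")
def update_db(payload):
    emit("db.updated", payload)

@on("db.updated")
def notify(payload):
    print(f"notify: {payload}")

emit("storage.upload", {"file": "report.csv"})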

The result?

  • Fewer scripts
  • Fewer cron jobs
  • Fewer manual runbooks
  • More reliable systems

Engineers focus on building products.

FinOps teams focus on optimizing signals, not chasing bills.

Why This Matters Even More for FinOps

Here’s the uncomfortable truth:

Cloud costs don’t spike randomly. They spike because something happened.

  • A workload scaled unexpectedly
  • A schedule was removed
  • A deployment looped
  • A service went idle but stayed on

All of these are events.

Event-driven infrastructure allows FinOps teams to:

  • Detect cost-impacting events instantly
  • React before bills explode
  • Automate shutdowns, scale-downs, and optimizations
  • Tie cost directly to system behavior

Without events, FinOps is reactive.

With events, FinOps becomes real-time cost control.

The Cloud Doesn’t Wait - Neither Should Your Infrastructure

Modern cloud infrastructure is not about managing servers.

It’s about responding intelligently to change.

Event-driven architecture enables that shift by making every change observable, actionable, and automated.

From:

  • Intelligent scaling
  • Self-healing systems
  • Continuous compliance
  • Real-time cost optimization

Event-driven design is no longer optional.

If your cloud cannot react automatically to what’s happening right now, you’re already behind.

The future of cloud infrastructure isn’t static.

It listens. It reacts. It optimizes.

And it’s event-driven.

👉 Try ZopNight by ZopDev today

👉 Book a demo


A Beginner’s Guide to Git and GitHub: From Installation to Your First Push

2026-01-16 20:52:05

Starting my journey in Data Science, Analysis, and AI at LUXDevHQ felt like learning a new language while trying to build a house. One of the most important tools I’ve discovered along the way is Version Control.

In this guide, I’ll walk you through:

  • Setting up Git Bash
  • Connecting Git to GitHub
  • Mastering essential push and pull commands

1. What Is Git and Why Does It Matter?

Git is a Version Control System (VCS). Think of it as a save-point system for your code.

Why is Git important?

  • ⏪ Time Travel – If you break your code, you can roll back to a version that worked.
  • 🤝 Collaboration – Multiple people can work on the same project without overwriting each other’s work.
  • 🧪 Experimentation – You can create branches to try new features without affecting the main project.

2. Setting Up Your Environment

Step A: Install Git Bash

  1. Go to the official Git website (git-scm.com) and download Git for your OS (I used Windows).
  2. Run the installer. 💡 Pro tip: you can keep the default settings for most options.
  3. After installation, search for Git Bash in your applications. It looks like a terminal window.

Step B: Configure Your Identity

To let GitHub know who is uploading code, configure your global Git settings:

git config --global user.name "Your Name"
git config --global user.email "you@example.com"

3. Secure Your Connection: Setting Up SSH Keys

Using SSH is the professional standard. It’s more secure and saves you from typing your password every time you push code.

Step 1: Generate Your SSH Key

Open Git Bash and enter (replace with your GitHub email):

ssh-keygen -t ed25519 -C "you@example.com"

• File Location: Press Enter to use the default location.
• Passphrase: As a beginner, I left this empty for convenience.

Step 2: Add Key to the SSH Agent

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

Step 3: Add the Public Key to GitHub

  1. Copy the key:
cat ~/.ssh/id_ed25519.pub
  2. Go to GitHub: Settings → SSH and GPG keys → New SSH Key.
  3. Give it a name (e.g., "My Learning Laptop") and paste your key into the "Key" box.

Step 4: Test the Connection

Run:

ssh -T git@github.com

Success Check! If you see "Hi [YourUsername]! You've successfully authenticated", you are ready!

4. Navigating and Creating Your Project

Learning to navigate via Git Bash makes you much faster than using a mouse! Use these commands to create your first repository:
• Check Location: pwd (Print Working Directory).
• Go to Desktop: cd Desktop
• Create Folder: mkdir my-first-repo
• Enter Folder: cd my-first-repo

5. Tracking Changes (The Core Workflow)

Before sending code to GitHub, Git needs to "track" it locally. Run these inside your project folder:

  1. Initialize Git: git init (Starts tracking the folder).
  2. Check Status: git status (See what files Git notices).
  3. Add Files: git add . (Stages all changes to be saved).
  4. Commit: git commit -m "My first commit" (Creates the "save point").

6. Pushing Code to GitHub

Pushing sends your local save points to the cloud.

Step A: Create the Repository on GitHub.com

  1. Log into GitHub, click the + icon → New repository.
  2. Name it (e.g., my-first-project) and keep it Public.
  3. Important: Leave "Add a README" unchecked to avoid conflicts.
  4. Click Create repository.

Step B: Connect and Push

On the GitHub setup page, click SSH and copy the URL. Then run these commands one by one:

git remote add origin git@github.com:your-username/repo-name.git
git push -u origin main

7. Pulling Code from GitHub

If you work on a different computer, use Pull to download the latest updates from the cloud:

git pull origin main

📚 Resources to Keep Learning

Official Git Documentation
GitHub Skills: Interactive Courses
Visualizing Git Commands (Game)

Conclusion: Congratulations! You've just set up a professional dev workflow. Git can be tricky at first, but keep practicing and it will become second nature. If you ran into any issues, drop a comment below and let's help each other out!

C# Conditional Statements (if, else, switch)

2026-01-16 20:43:44

Originally published at https://allcoderthings.com/en/article/csharp-decision-structures-if-else-switch

In programming, you often need to perform different actions depending on conditions. In C#, such cases are handled with decision structures. The most common ones are if, else if, else, and switch.

if Statement

The if statement executes a block when the condition is true.

int number = 10;

if (number > 5)
{
    Console.WriteLine("Number is greater than 5.");
}

if - else Statement

Use else to run an alternative block when the condition is false.

Console.Write("Enter your grade: ");
int grade = int.Parse(Console.ReadLine()); // convert from string to int

if (grade >= 50)
{
    Console.WriteLine("You passed.");
}
else
{
    Console.WriteLine("You failed.");
}

if - else if - else

Check multiple conditions in sequence using else if.

int grade = 75;

if (grade >= 90)
{
    Console.WriteLine("Grade: A");
}
else if (grade >= 70)
{
    Console.WriteLine("Grade: B");
}
else if (grade >= 50)
{
    Console.WriteLine("Grade: C");
}
else
{
    Console.WriteLine("Failed");
}

switch Statement

Use switch to branch execution based on fixed values. It is more readable than many else if blocks.

Console.Write("Enter day number (1-7): ");
int day = int.Parse(Console.ReadLine());

switch (day)
{
    case 1: Console.WriteLine("Monday"); break;
    case 2: Console.WriteLine("Tuesday"); break;
    case 3: Console.WriteLine("Wednesday"); break;
    case 4: Console.WriteLine("Thursday"); break;
    case 5: Console.WriteLine("Friday"); break;
    case 6: Console.WriteLine("Saturday"); break;
    case 7: Console.WriteLine("Sunday"); break;
    default: Console.WriteLine("Invalid day!"); break;
}

User Input with Decision Structures

Decision structures often rely on user input.

Console.Write("Enter a number: ");
int number = int.Parse(Console.ReadLine());

if (number % 2 == 0)
{
    Console.WriteLine("The number is even.");
}
else
{
    Console.WriteLine("The number is odd.");
}

Summary

  • if: Runs when condition is true.
  • else: Runs when condition is false.
  • else if: Checks multiple conditions sequentially.
  • switch: Branches on fixed values.

Menu Example: Using if, else if, else and switch

In this example, a simple console-based menu system is created. The user makes a choice, which is validated with the if / else if / else structure, and the switch statement executes the selected mathematical operation. This demonstrates how conditional and selection structures can be combined.

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("=== Menu ===");
        Console.WriteLine("1 - Addition");
        Console.WriteLine("2 - Subtraction");
        Console.WriteLine("3 - Multiplication");
        Console.WriteLine("4 - Division");
        Console.WriteLine("0 - Exit");
        Console.Write("Enter your choice: ");

        int choice = int.Parse(Console.ReadLine()); // convert from string to int

        if (choice == 0)
        {
            Console.WriteLine("Exiting program...");
        }
        else if (choice >= 1 && choice <= 4)
        {
            Console.Write("Enter the first number: ");
            double num1 = double.Parse(Console.ReadLine());

            Console.Write("Enter the second number: ");
            double num2 = double.Parse(Console.ReadLine());

            switch (choice)
            {
                case 1:
                    Console.WriteLine($"Result: {num1 + num2}");
                    break;
                case 2:
                    Console.WriteLine($"Result: {num1 - num2}");
                    break;
                case 3:
                    Console.WriteLine($"Result: {num1 * num2}");
                    break;
                case 4:
                    if (num2 != 0)
                        Console.WriteLine($"Result: {num1 / num2}");
                    else
                        Console.WriteLine("Error: Division by zero is not allowed!");
                    break;
            }
        }
        else
        {
            Console.WriteLine("Invalid choice.");
        }
    }
}

Publishing Pipeline - inline Mermaid code

2026-01-16 20:42:45

Enhancing Blog Posts with Mermaid Diagrams: Why They’re a Game-Changer (and How to Make Them Work Everywhere)

In today’s technical blogging world, clear, accurate, and up-to-date diagrams are essential for explaining complex systems—whether it’s architecture, workflows, data flows, or infrastructure layouts like the Puppet setups we’ve been exploring in this series.
For years, tools like Microsoft Visio, Lucidchart, draw.io (now diagrams.net), or even PlantUML have been go-to solutions. However, more and more authors (especially in the DevOps, cloud, and open-source communities) are turning to Mermaid — and for good reason.
In this post, we’ll explore:

  • What Mermaid is and why it’s gaining so much traction
  • How it compares to traditional diagramming tools like Visio and draw.io
  • The biggest challenge when publishing Mermaid diagrams on blogs
  • The elegant solution: rendering Mermaid to PNG during the publishing pipeline (exactly what I’ve now implemented!)

What Is Mermaid?

Mermaid is a JavaScript-based diagramming and charting tool that lets you create diagrams entirely from text using a simple, Markdown-friendly syntax.

Example: A simple flowchart

graph TD
A[Start] --> B{Decision?}
B -->|Yes| C[Do Something]
B -->|No| D[Do Something Else]
C --> E[End]
D --> E

That tiny block of text becomes a clean, professional-looking flowchart when rendered.

Mermaid supports many diagram types out of the box:

  • Flowcharts (graph)
  • Sequence diagrams
  • Class diagrams
  • State diagrams
  • Entity-Relationship diagrams
  • Gantt charts
  • Pie charts
  • Requirement diagrams
  • Git graphs
  • …and more!

Why Mermaid Is Winning Hearts in 2025/2026

Here are the key advantages of Mermaid compared to traditional tools like Visio, Lucidchart, or even draw.io:

| Feature | Mermaid | Visio / Lucidchart / draw.io |
|---|---|---|
| Version control friendly | 100% text → perfect for Git | Binary files or proprietary formats |
| Diffs & reviews | Easy to see changes in PRs | Almost impossible without special viewers |
| Editing speed | Extremely fast (just type) | Requires opening a GUI tool |
| Cost | Completely free & open source | Visio = paid, Lucidchart = subscription |
| Dependencies | Just a JS library or CLI | Needs installed software or account |
| Integration with Markdown | Native (GitHub, GitLab, Obsidian, etc.) | Requires exporting images manually |
| Maintainability | Change one line → diagram updates | Must manually reposition elements |
| Collaboration | Works in any text editor + Git | Requires shared accounts or export/import cycles |
| Reproducibility | Same text → same diagram every time | Risk of “font substitution” or layout shifts |

In short:
Mermaid turns diagrams into code — which means they become first-class citizens in your documentation repository, just like your README, tests, or configuration files.

The One Big Challenge: Platform Support

While GitHub, GitLab, Obsidian, Notion (recently), and many static site generators now render Mermaid natively, many popular blogging platforms still do not:

  • WordPress (even with plugins) often has inconsistent or outdated Mermaid support
  • Medium does not support Mermaid at all
  • Dev.to supports it only partially
  • Hashnode has good support but can be finicky with themes
  • Company wikis, Confluence instances, or older platforms → usually no native rendering

So what happens when you paste a beautiful Mermaid code block into a blog post on one of these platforms?
→ It just shows up as plain text — completely useless to readers.

The Solution: Render Mermaid to PNG in Your Publishing Pipeline

This is exactly the problem I wanted to solve for my Puppet blog series.

How it works

  1. Detect Mermaid code blocks: the pipeline looks for standard fenced code blocks starting with ```mermaid.
  2. Generate a unique hash based on the exact content of the diagram, so identical diagrams are rendered only once (great for performance and storage).
  3. Render the Mermaid code to a high-quality PNG using a headless browser plus the Mermaid CLI (or a Node.js renderer). In my case, Jenkins runs that as part of the pipeline.
  4. Upload the PNG to the blog’s media library (e.g., WordPress media uploads) or an S3 bucket.
  5. Replace the original code block with proper Markdown image syntax, e.g. ![Puppet Infrastructure with Foreman and Compile Masters](...). In my case this is done by the Python logic of the pipeline at runtime.
  6. Fallback behavior: if rendering fails for any reason, the original Mermaid code block remains unchanged (and an error is logged).
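A condensed sketch of that detect-hash-render-replace flow (the mmdc invocation is the public Mermaid CLI; upload_to_media_library and the paths are hypothetical stand-ins, not my exact pipeline code):

import hashlib
import pathlib
import re
import subprocess

MERMAID_BLOCK = re.compile(r"```mermaid\n(.*?)\n```", re.DOTALL)

def upload_to_media_library(png: pathlib.Path) -> str:
    # Hypothetical upload step; returns the public URL of the image
    return f"https://blog.example.com/media/{png.name}"

def render_block(match: re.Match) -> str:
    source = match.group(1)
    digest = hashlib.sha256(source.encode()).hexdigest()[:12]
    png = pathlib.Path(f"diagrams/{digest}.png")
    if not png.exists():  # deduplication: identical diagrams render once
        png.parent.mkdir(exist_ok=True)
        mmd = png.with_suffix(".mmd")
        mmd.write_text(source)
        subprocess.run(["mmdc", "-i", str(mmd), "-o", str(png)], check=True)
    return f"![Diagram {digest}]({upload_to_media_library(png)})"

def process_post(markdown: str) -> str:
    try:
        return MERMAID_BLOCK.sub(render_block, markdown)
    except Exception:
        return markdown  # fallback: keep the original code blocks intact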

Benefits of this approach

  • Works on every platform — even those with zero Mermaid support, since a centrally placed image is loaded
  • No broken diagrams — readers always see a nice image
  • Still version-controlled — the source Mermaid code lives in your repo
  • Deduplication — same diagram across multiple posts reuses the same image file
  • High resolution — PNGs look crisp on retina displays
  • SEO-friendly — images have proper alt text (you can customize this further)

When to Use Mermaid vs. Traditional Tools

| Use Case | Recommended Tool | Why? |
|---|---|---|
| Infrastructure / architecture docs | Mermaid | Lives in Git, easy to update, perfect for CI/CD pipelines |
| Quick flowcharts in READMEs | Mermaid | Native GitHub rendering |
| Complex interactive dashboards | Lucidchart / draw.io | Better for drag-and-drop and heavy collaboration |
| Official company org charts / presentations | Visio / PowerPoint | Polished look, integration with the Microsoft 365 ecosystem |
| One-off pretty diagrams for blog posts | draw.io (export PNG) | Fast to create, great styling options |
| Long-lived, frequently updated docs | Mermaid | Minimizes maintenance pain |

Final Thoughts

Mermaid isn’t trying to replace heavy-duty GUI tools like Visio or Lucidchart — it’s solving a different problem:

“How can I keep my diagrams in sync with my code, version-controlled, and easy to maintain forever?”

By rendering Mermaid diagrams to PNGs during the publishing process, we get the best of both worlds:

  • The maintainability and reproducibility of text-based diagrams
  • The universal compatibility of plain old images

This new pipeline is now live for all coming series like the Puppet series — so all future architecture diagrams (like the Puppet HA setup with Foreman, compile masters, and PuppetDB) will render beautifully, no matter where you read the blog.

Have you started using Mermaid in your documentation?
Or are you still exporting screenshots from draw.io?
Let me know in the comments — I’d love to hear your workflow!

Happy diagramming! 🚀

Did you find this post helpful? You can support me.

Hetzner Referral

Confdroid Feedback Portal
