The Practical Developer
A constructive and inclusive social network for software developers.

Creating a Vertical Area Chart with JavaScript: 80+ Years of U.S. Presidential Approval Data

2026-04-16 03:12:50

Traditionally, charts that visualize data over time are horizontal. But sometimes a vertical layout is a better fit. In this tutorial, you will learn how to create an interactive vertical area chart using JavaScript step by step.

The practical example uses monthly approval and disapproval ratings of American presidents from 1941 to 2025, according to Gallup polls. The final chart shows over 80 years of public support and opposition across U.S. administrations as two mirrored area series running top to bottom.

The result will look like this:

Preview of the JavaScript vertical area chart built in this tutorial visualizing approval and disapproval ratings of American presidents from 1941 to 2025

What Is a Vertical Area Chart?

A vertical area chart is a type of data visualization that rotates a standard area chart 90 degrees. The time or category axis runs vertically, and the value axis runs horizontally. The filled area between the series line and the baseline still communicates magnitude and change — the orientation just shifts so that time flows top to bottom instead of left to right.

This layout works well in a few specific situations: when the timeline is long and a horizontal chart would compress the data too much, when category labels are long and hard to fit on a horizontal axis, or when the chart is embedded in a vertically scrolling page and a wide horizontal layout would interrupt the reading flow. It is also a natural choice when the main story is the balance between two opposing series.

How to Build a JavaScript Vertical Area Chart

Building an interactive vertical area chart with JavaScript involves four steps: creating the HTML page, loading the library, preparing the data, and writing the visualization code.

1. Create an HTML Page

Start with a minimal HTML file containing a <div> element that will hold the chart. The #container div is set to fill the full browser window here, but you can replace the percentage values with fixed pixel dimensions if the chart should occupy only part of a page.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>JavaScript Vertical Area Chart</title>
  <style>
    /* make the page and container fill the full browser window */
    html, body, #container {
      width: 100%;
      height: 100%;
      margin: 0;
      padding: 0;
    }
  </style>
</head>
<body>
  <!-- the chart will render inside this div -->
  <div id="container"></div>
</body>
</html>

Now that the HTML page is in place, let's add the charting library.

2. Include the JavaScript Files

In this tutorial, we will be using the AnyChart JavaScript charting library. The vertical area chart type is available in its anychart-base.min.js module. Load it from the AnyChart CDN by adding a <script> tag in the <head> section, then add an empty <script> block in the <body> where the chart code will go.

<head>
  ...
  <!-- load the AnyChart base module, which includes vertical area charts -->
  <script src="https://cdn.anychart.com/releases/8.14.1/js/anychart-base.min.js"></script>
</head>
<body>
  <div id="container"></div>
  <!-- chart code goes here -->
  <script>
  </script>
</body>

With the library loaded, the next step is to prepare the data.

3. Prepare the Data

The chart uses U.S. presidential job approval data from the American Presidency Project at UC Santa Barbara, based on Gallup polling results covering 16 presidents from Franklin D. Roosevelt to Donald Trump's second term. Monthly averages of approval and disapproval percentages were computed across all available polls for each month.

The dataset contains 910 monthly data points. Each row holds a month label, the average approval percentage, and the average disapproval percentage stored as a negative number. Storing disapproval as negative creates the mirrored effect around the zero baseline — the technique that gives this chart its distinctive shape. Here is a sample of the data:

Month       Approval (%)   Disapproval (stored as negative %)
Jul 1941         67                      -24
Jan 1953         68                       -8
Jan 1961         72                       -6
Oct 2001         88                       -9
Jul 1974         24                      -63
Jan 2009         56                      -31
Jan 2021         46                      -50
Dec 2025         36                      -59

Each row is a three-element array holding the month label string, the approval value, and the disapproval value. In the code, it looks like this:

// each entry: [month label, approval %, disapproval as negative %]
// the full dataset has 910 rows; the complete version is in the Playground link below
const rawData = [
  ["Jul 1941", 67, -24],
  ["Aug 1941", 67, -24],
  ["Sep 1941", 70, -24],
  // ... 907 more monthly entries ...
  ["Dec 2025", 36, -59]
];

4. Write the JS Code for the Chart

All the JavaScript goes inside an anychart.onDocumentReady() wrapper — a function that AnyChart calls as soon as the page has fully loaded. This guarantees that the chart container <div> exists in the DOM before the chart tries to render into it.

anychart.onDocumentReady(function () {

  // all chart code goes here

});

Add the Data

The rawData array from Step 3 is the first thing to place inside anychart.onDocumentReady().

The time axis will use a date/time scale, which requires JavaScript Date objects rather than strings like "Jul 1941". Each row therefore needs to be converted before being passed into the chart.

MONTH_IDX is a small lookup object that maps three-letter month abbreviations to their zero-based index — January = 0, December = 11 — matching the Date constructor.

// map month abbreviations to their zero-based JS Date index
const MONTH_IDX = {
  Jan:0, Feb:1, Mar:2, Apr:3, May:4,  Jun:5,
  Jul:6, Aug:7, Sep:8, Oct:9, Nov:10, Dec:11
};

// convert each "MMM YYYY" string to a Date object, keep the approval and disapproval values
const data = rawData.map(function(row) {
  const parts = row[0].split(" ");
  return [new Date(parseInt(parts[1]), MONTH_IDX[parts[0]], 1), row[1], row[2]];
});

Create a Data Set and Map the Two Series

AnyChart uses a data set as a single source that can feed multiple series at once. We load all 910 rows into one with anychart.data.set(), then create two mappings from it. approvalMap reads column 1 as the series value; disapprovalMap reads column 2. Both share column 0 — the Date object — as their x position.

// load the data into an AnyChart data set
const ds = anychart.data.set(data);

// map approval (column 1) and disapproval (column 2) as separate series
// both share column 0 (the Date object) as their x position
const approvalMap    = ds.mapAs({x: 0, value: 1});
const disapprovalMap = ds.mapAs({x: 0, value: 2});

Create the Chart

One call creates the chart. anychart.verticalArea() returns a vertical area chart instance — a standard area chart rotated 90 degrees, with the time axis running vertically and the value axis running horizontally. Everything else — series, scales, visual settings — attaches to this object.

// create the vertical area chart
const chart = anychart.verticalArea();

Add the Two Area Series

Each series gets its own chart.area() call, bound to one of the data mappings created above. In addition, connectMissingPoints(true) keeps the line continuous across months where no poll was conducted — without it, the chart would show gaps in the data.

// approval series: positive values extend to the right of the zero line
const approvalSeries = chart.area(approvalMap);
approvalSeries.name("Approval");
approvalSeries.connectMissingPoints(true); // bridge months with no poll data

// disapproval series: negative values extend to the left of the zero line
const disapprovalSeries = chart.area(disapprovalMap);
disapprovalSeries.name("Disapproval");
disapprovalSeries.connectMissingPoints(true);

Configure the Scales

The chart has two scales to configure: the x-scale for the vertical time axis and the y-scale for the horizontal value axis.

The default x-scale places ticks at data-point positions, which is not useful here. We replace it with a date/time scale that puts one tick at the start of every calendar year. inverted(true) flips the direction: 1941 at the top, 2025 at the bottom — the natural reading order for a historical timeline.

// replace the default scale with a datetime scale for proper yearly tick marks
const xScale = anychart.scales.dateTime();
xScale.ticks().interval("year", 1); // one tick at the start of every calendar year
xScale.inverted(true);              // 1941 at top, 2025 at bottom
chart.xScale(xScale);

For the y-scale, we fix the range at −100 to 100. This keeps both sides symmetrical and prevents clipping even the most extreme values in the dataset.

// set the horizontal value axis to run symmetrically from -100 to 100
chart.yScale().minimum(-100);
chart.yScale().maximum(100);

Final Steps

A few more calls finish the setup before rendering.

First, chart.title() sets a descriptive title above the chart.

// add a descriptive title above the chart
chart.title("U.S. Presidential Approval Ratings (1941–2025)");

Second, chart.legend(true) enables the legend so the viewer knows which color is approval and which is disapproval.

// show the series legend
chart.legend(true);

Finally, chart.container() names the <div> the chart should render into, and chart.draw() triggers the render; nothing appears on the page until this call runs.

// point the chart at the container div and render it
chart.container("container");
chart.draw();

Full Code and Result

Here is the complete, runnable HTML code with all the pieces assembled. The data array is abbreviated below — the full 910-row dataset is available in the Playground link below.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>JavaScript Vertical Area Chart</title>
  <style>
    html, body, #container {
      width: 100%;
      height: 100%;
      margin: 0;
      padding: 0;
    }
  </style>
  <script src="https://cdn.anychart.com/releases/8.14.1/js/anychart-base.min.js"></script>
</head>
<body>
  <div id="container"></div>
  <script>
    anychart.onDocumentReady(function () {
      // monthly approval/disapproval data, 1941–2025
      // source: American Presidency Project, UC Santa Barbara
      const rawData = [
        ["Jul 1941", 67, -24],
        // ... full dataset in the Playground link below
        ["Dec 2025", 36, -59]
      ];
      // convert "MMM YYYY" labels to Date objects for the datetime scale
      const MONTH_IDX = {
        Jan:0, Feb:1, Mar:2, Apr:3, May:4,  Jun:5,
        Jul:6, Aug:7, Sep:8, Oct:9, Nov:10, Dec:11
      };
      const data = rawData.map(function(row) {
        const parts = row[0].split(" ");
        return [new Date(parseInt(parts[1]), MONTH_IDX[parts[0]], 1), row[1], row[2]];
      });
      // load into a data set and create two series mappings
      const ds = anychart.data.set(data);
      const approvalMap    = ds.mapAs({x: 0, value: 1});
      const disapprovalMap = ds.mapAs({x: 0, value: 2});
      // create a vertical area chart
      const chart = anychart.verticalArea();
      // approval and disapproval area series
      const approvalSeries = chart.area(approvalMap);
      approvalSeries.name("Approval");
      approvalSeries.connectMissingPoints(true);
      const disapprovalSeries = chart.area(disapprovalMap);
      disapprovalSeries.name("Disapproval");
      disapprovalSeries.connectMissingPoints(true);
      // datetime x-scale: yearly ticks, oldest at top
      const xScale = anychart.scales.dateTime();
      xScale.ticks().interval("year", 1);
      xScale.inverted(true);
      chart.xScale(xScale);
      // y-scale: symmetrical range around zero
      chart.yScale().minimum(-100);
      chart.yScale().maximum(100);
      // title, legend, and render
      chart.title("U.S. Presidential Approval Ratings (1941–2025)");
      chart.legend(true);
      chart.container("container");
      chart.draw();
    });
  </script>
</body>
</html>

That's it! A basic JavaScript vertical area chart is ready, showing U.S. presidential approval and disapproval ratings since 1941 according to Gallup. Take a look at it below or open it on AnyChart Playground.

Basic JavaScript Vertical Area Chart Visualizing U.S. Presidential Approval and Disapproval Ratings Since 1941

How to Customize a JavaScript Vertical Area Chart

Now let's make some changes to the chart's design and behavior. The five customizations below improve readability and add contextual information to the vertical area chart built in the previous part of the tutorial.

A. Smooth the Curves with Spline Area

The plain area() series connects data points with straight line segments, producing a jagged silhouette. Switching to splineArea() fits a smooth curve through the same points. Over 910 monthly values, the smoothed version reveals the broad trends without visual noise from minor month-to-month fluctuations.

Replace chart.area() with chart.splineArea() for both series:

// splineArea draws smooth interpolated curves instead of angular segments
const approvalSeries    = chart.splineArea(approvalMap);
const disapprovalSeries = chart.splineArea(disapprovalMap);

B. Set Series Colors

Green and red carry an intuitive meaning for approval and disapproval data. Use the fill() method to set the area color and opacity, and stroke() to set the outline color and thickness.

// green fill and outline for the approval series
approvalSeries.normal().fill("#27ae60", 0.5);
approvalSeries.normal().stroke("#27ae60", 1.5);

// red fill and outline for the disapproval series
disapprovalSeries.normal().fill("#e74c3c", 0.5);
disapprovalSeries.normal().stroke("#e74c3c", 1.5);

C. Format the Value Axis

Disapproval values are stored as negative numbers, so the horizontal axis would normally label the left side "-75", "-50", and so on. Displaying both sides as absolute values, such as "75%" and "50%", makes the chart symmetrical and easier to read. The Math.abs() function in the label format() callback handles the conversion. While we're at it, grid lines every 25 percentage points and an explanatory axis title add further clarity.

// display absolute values with % sign on both sides of the axis
chart.yAxis().labels().format(function() {
  return Math.abs(this.value) + "%";
});

// label the axis to indicate which direction means approval and which means disapproval
chart.yAxis().title("← Disapproval  |  Approval →");

// add vertical grid lines at ±25, ±50, ±75, and 0
chart.yScale().ticks().interval(25);
chart.yGrid(true);
chart.yGrid().stroke({color: "#dddddd", thickness: 0.5});

D. Highlight the Zero Baseline

The zero line — where approval equals disapproval — is the most important reference point in the chart. A lineMarker at value 0 draws a prominent vertical line across the plot, making the boundary between net-positive and net-negative approval immediately visible.

// draw a distinct line at zero — the boundary between net approval and net disapproval
const zeroLine = chart.lineMarker(0);
zeroLine.value(0);
zeroLine.stroke({color: "#444444", thickness: 2});

E. Add a Contextual Tooltip

By default, the tooltip shows the raw timestamp and series value. We can make it more informative by displaying the president's name and party in the title, the calendar month in the body, and the correct percentage label for each series. This takes a lookup table, a helper function to search it, and then the tooltip configuration itself.

Build the President Lookup Table and Function

The lookup table is an array of objects, one per president, each holding the name, party abbreviation, and the start and end dates of their term. JavaScript Date objects use zero-based month numbers, so January = 0, April = 3, August = 7, and so on.

// each entry: president name, party, and term start and end as Date objects
const presidents = [
  {name: "Franklin D. Roosevelt", party: "D", from: new Date(1941,6,1), to: new Date(1945,3,1)},
  {name: "Harry S. Truman",       party: "D", from: new Date(1945,3,1), to: new Date(1953,0,1)},
  {name: "Dwight D. Eisenhower",  party: "R", from: new Date(1953,0,1), to: new Date(1961,0,1)},
  {name: "John F. Kennedy",       party: "D", from: new Date(1961,0,1), to: new Date(1963,10,1)},
  {name: "Lyndon B. Johnson",     party: "D", from: new Date(1963,10,1),to: new Date(1969,0,1)},
  {name: "Richard Nixon",         party: "R", from: new Date(1969,0,1), to: new Date(1974,7,1)},
  {name: "Gerald Ford",           party: "R", from: new Date(1974,7,1), to: new Date(1977,0,1)},
  {name: "Jimmy Carter",          party: "D", from: new Date(1977,0,1), to: new Date(1981,0,1)},
  {name: "Ronald Reagan",         party: "R", from: new Date(1981,0,1), to: new Date(1989,0,1)},
  {name: "George H.W. Bush",      party: "R", from: new Date(1989,0,1), to: new Date(1993,0,1)},
  {name: "Bill Clinton",          party: "D", from: new Date(1993,0,1), to: new Date(2001,0,1)},
  {name: "George W. Bush",        party: "R", from: new Date(2001,0,1), to: new Date(2009,0,1)},
  {name: "Barack Obama",          party: "D", from: new Date(2009,0,1), to: new Date(2017,0,1)},
  {name: "Donald Trump",          party: "R", from: new Date(2017,0,1), to: new Date(2021,0,1)},
  {name: "Joe Biden",             party: "D", from: new Date(2021,0,1), to: new Date(2025,0,1)},
  {name: "Donald Trump",          party: "R", from: new Date(2025,0,1), to: new Date(2029,0,1)}
];

Finding the president in office on any given date requires a function that searches the lookup table by date range. getPresident() takes a timestamp, walks through the table, and returns the matching president object. If no entry covers the date, it returns null.

// find the president in office on a given date (passed as a timestamp)
function getPresident(ts) {
  const d = new Date(ts);
  for (const p of presidents) {
    if (d >= p.from && d < p.to) return p;
  }
  return null;
}

Configure the Tooltip

Two settings merge the series data and enable HTML formatting. displayMode("union") displays both series in a single tooltip, so the viewer sees approval and disapproval for the same month side by side. useHtml(true) enables HTML markup in the tooltip body, which lets us use <br/> to put values on separate lines.

// merge both series into one tooltip and allow HTML inside it
chart.tooltip().displayMode("union");
chart.tooltip().useHtml(true);

The tooltip title should name the president in office at the hovered date. titleFormat is a callback that runs once per hovered point and returns the tooltip title string. Inside it, this.x holds the timestamp of the hovered month, which is passed directly to getPresident() to retrieve the president's name and party.

// tooltip title: look up the president in office on the hovered date
chart.tooltip().titleFormat(function() {
  const p = getPresident(this.x);
  return p ? (p.name + " (" + p.party + ")") : "";
});

The tooltip body should show the calendar month and the series value. format runs once per series and returns that series' line in the tooltip body. A short month-name array rebuilds the date label from the timestamp. For the disapproval series, Math.abs() converts the stored negative value back to a readable positive percentage.

// month name array for formatting the date label in the tooltip body
const MON = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"];

// tooltip body: show the month, then the value for each series
chart.tooltip().format(function() {
  const d = new Date(this.x);
  const label = MON[d.getMonth()] + " " + d.getFullYear();
  if (this.seriesName === "Approval") {
    return label + "<br/>Approval: " + this.value + "%";
  }
  return "Disapproval: " + Math.abs(this.value) + "%";
});

Final Result

Below is the complete interactive vertical area chart with all customizations applied — smoothed curves, custom colors, formatted axis, zero baseline, and a contextual tooltip showing the president in office for any hovered month. Feel free to explore the full code and play with it further on AnyChart Playground.

Final (Customized) JavaScript Vertical Area Chart Visualizing U.S. Presidential Approval and Disapproval Ratings Since 1941

Conclusion

This tutorial covered building an interactive JavaScript vertical area chart that maps more than eight decades of public opinion data in a single view. The vertical orientation, mirrored series, and contextual tooltip make it easy to compare how approval and disapproval moved together across U.S. administrations.

Browse the gallery for more vertical chart examples. To build a horizontal area chart, see the area chart tutorial. Beyond that, explore other JavaScript charting tutorials, the supported chart types, the documentation, and the API reference.

Questions? Ask in the comments or contact the Support Team.

I Built an LLM Gateway That Learns Which Model to Use — Here's How the Routing Works

2026-04-16 03:09:08

How it works:

  1. A request arrives at an OpenAI-compatible endpoint.
  2. A classifier detects the task type and complexity.
  3. The adaptive router picks the highest-scoring model for that (task, complexity) cell.
  4. Quality feedback (user ratings + an LLM judge) continuously improves routing.

Change 2 lines in your code. That's it.
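The scoring step in that loop can be sketched in a few lines of Python. The table layout, model names, scores, and learning rate below are purely illustrative, not Provara's actual internals:

```python
# Hypothetical sketch of score-based routing: each (task, complexity)
# cell maps candidate models to a running quality score.
scores = {
    ("code", "high"): {"model-a": 0.62, "model-b": 0.88},
    ("chat", "low"):  {"model-a": 0.91, "model-b": 0.90},
}

def pick_model(task: str, complexity: str) -> str:
    """Return the highest-scoring model for the given routing cell."""
    cell = scores[(task, complexity)]
    return max(cell, key=cell.get)

def record_feedback(task: str, complexity: str, model: str,
                    rating: float, lr: float = 0.1) -> None:
    """Nudge a model's score toward new feedback (user rating or LLM judge)."""
    cell = scores[(task, complexity)]
    cell[model] += lr * (rating - cell[model])
```

As feedback accumulates, the cell scores shift and pick_model starts returning a different model for the same kind of request, which is the "gets smarter over time" behavior described below.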

But it's more than a router. Full platform:

  • Request logs with replay + diff view
  • Time-series analytics (cost, latency p50/p95/p99)
  • A/B testing between models
  • Guardrails (PII redaction)
  • Prompt template versioning
  • Spend/latency alerting

Self-hosted with Docker. Your data never leaves your infrastructure.

Supports OpenAI, Anthropic, Google, Mistral, xAI, Ollama, and any OpenAI-compatible provider.

BYOK — bring your own API keys.

The routing gets smarter over time. An LLM-as-judge automatically scores responses, building quality data per model per task type.

After enough feedback, the router stops sending complex prompts to the cheapest model and starts picking the best one.

No manual configuration needed.

GitHub: https://github.com/syndicalt/provara
Live demo: https://www.provara.xyz

I Built My First AI Agent in an Afternoon. Here's the Playbook I Wish I Had on Day One.

2026-04-16 03:06:56

A step-by-step guide to going from a blank folder to a working agent, without the hype.

"AI agent" has become one of the most overloaded words in tech. Depending on who you ask, it's either a ChatGPT wrapper, a multi-step workflow, or a fully autonomous system that replaces an employee.

Here's the definition I actually use when I sit down to build one: an AI agent is an LLM in a loop that can use tools, read from and write to memory, and make decisions about what to do next.

That definition is small enough to build in an afternoon, and powerful enough to automate real work. This is the guide I wish someone had handed me before I started.

Step 1: Define the Agent's Job (Narrowly)

The single biggest predictor of whether your agent will work is how clearly you can describe its job. Vague goals produce vague agents.

The rule of thumb: if you can't describe the success criteria in one sentence, the scope is too broad.

Too broad: "An agent that helps me manage my business."

Good: "An agent that reads incoming invoice emails, extracts vendor, amount, and due date, then appends a row to a Google Sheet."

Before writing any code, write down three things: the input the agent receives, the output it produces, and the boundary, meaning what it is explicitly not allowed to do (send email, spend money, delete files). That boundary is what keeps an autonomous loop safe.

Step 2: Pick the Model and the Loop

Every agent has two core pieces: a model that makes decisions, and a loop that feeds those decisions back into the next step.

For most projects in 2026, the default model choice is Claude Sonnet 4.6. It's fast, cheap enough to run in a loop, and strong at tool use. Reach for Opus when the task requires deep reasoning over long context.

The loop itself is deceptively simple:

while not done:
    response = model.run(messages, tools=available_tools)
    if response.has_tool_call:
        result = execute_tool(response.tool_call)
        messages.append(result)
    else:
        done = True
        return response.text

That's it. Everything else (memory, planning, multi-agent orchestration) is a variation on this pattern. If you're using Claude Code as your runtime, the loop is already implemented for you. You just supply the tools and the instructions.

Step 3: Give the Agent Tools (This Is Where It Gets Real)

An LLM without tools can only produce text. An LLM with tools can read files, call APIs, query databases, and trigger side effects in the world. Tools are what turn a chatbot into an agent.

There are roughly three ways to give your agent tools, in increasing order of power:

1. Built-in tools. Filesystem, bash, web fetch. Claude Code ships with these out of the box. If your agent's job involves reading code, running tests, or pulling data from URLs, you already have everything you need.

2. Custom functions. Write a function, describe its inputs and outputs in JSON schema, and hand it to the model. Good for private APIs, internal databases, and anything specific to your project.

3. MCP servers. The Model Context Protocol is the standard for exposing tools to AI agents. Instead of re-implementing Gmail, Slack, or Postgres integrations for every project, you run (or build) an MCP server once and plug it into any agent. This is the path that scales.
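As a sketch of option 2, here is what a custom function description might look like in the JSON-schema style most LLM APIs expect. The tool name and fields are hypothetical, invented for the invoice example:

```python
# Hypothetical tool definition: a JSON-schema description the model can
# read to decide when and how to call the function.
get_invoice_total = {
    "name": "get_invoice_total",
    "description": "Look up the total amount for an invoice by its ID.",
    "input_schema": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier, e.g. INV-1042",
            },
        },
        "required": ["invoice_id"],
    },
}
```

The model never executes this dict; it only reads the schema, emits a matching tool call, and your loop runs the real function.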

Step 4: Write the System Prompt (aka CLAUDE.md)

The system prompt is the agent's job description. It tells the model who it is, what tools it has, what the success criteria are, and critically, what it should refuse to do.

A good system prompt has four sections:

  • Role: "You are an agent that..."
  • Inputs and outputs: what you'll receive and what you should produce
  • How to use your tools: when to reach for each one, in what order
  • Guardrails: explicit limits, things to escalate, never-do actions

In Claude Code, this prompt lives in a file called CLAUDE.md at the root of your project. The agent reads it on every invocation. If you find yourself re-explaining the same thing across runs, that's a signal the information belongs in CLAUDE.md instead.
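Following those four sections, a minimal CLAUDE.md skeleton for the invoice agent from Step 1 might look like this (all contents are illustrative):

```markdown
# Role
You are an agent that reads incoming invoice emails and logs them to a spreadsheet.

# Inputs and outputs
Input: the raw text of one email. Output: vendor, amount, and due date as one sheet row.

# How to use your tools
1. Use the email-reading tool to get the message body.
2. Append a row only after all three fields are extracted.

# Guardrails
Never send email. If a field is ambiguous, stop and ask instead of guessing.
```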

Step 5: Add Memory (When the Agent Needs to Remember)

A single-shot agent is stateless. Each run starts fresh. That's fine for "parse this invoice" but useless for "keep track of which customers have been contacted."

You give agents memory the same way you give yourself memory: by writing things down.

Two patterns cover most use cases:

Scratchpad memory. A single Markdown file the agent reads at the start of each run and appends to at the end. Simple, inspectable, easy to debug. Good for personal agents and small teams.

Structured memory. A database or vector store the agent queries through tools. Necessary when memory grows past what fits comfortably in context, or when multiple agents share state.

Start with scratchpad memory. Upgrade only when you hit a concrete limit. Most agents never do.
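A scratchpad can be as small as this sketch; the file name and note format are placeholders:

```python
# Scratchpad memory: read the whole file at the start of a run,
# append a one-line note at the end of it.
from pathlib import Path

MEMORY = Path("memory.md")

def read_memory() -> str:
    """Return everything the agent has written down so far."""
    return MEMORY.read_text() if MEMORY.exists() else ""

def append_memory(note: str) -> None:
    """Record one fact as a Markdown bullet for future runs."""
    with MEMORY.open("a") as f:
        f.write(f"- {note}\n")
```

Because it is a plain Markdown file, you can open it in any editor to see exactly what the agent "remembers" and delete anything stale.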

Step 6: Run Agents in Parallel (When One Isn't Enough)

Once a single agent is working, the natural next step is to run several at once. Parallel agents are how you scale from "automate one task" to "process a queue of tasks," or "explore several solutions simultaneously and pick the best."

The common patterns:

  • Fan-out / fan-in: split a big task into independent subtasks, run them in parallel, merge the results. Great for research, code review, and batch processing.
  • Specialist agents: a planner delegates to specialists (tester, writer, reviewer), each with its own system prompt and tools.
  • Critic loops: one agent proposes, another critiques, the first revises. Useful when quality matters more than speed.
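The fan-out / fan-in pattern is a few lines with standard-library threads; run_agent here is a stand-in for a real model call:

```python
# Fan-out / fan-in sketch: run independent subtasks in parallel
# and merge the results in input order.
from concurrent.futures import ThreadPoolExecutor

def run_agent(subtask: str) -> str:
    # Placeholder for an actual agent/model invocation.
    return f"result for {subtask}"

def fan_out(subtasks: list[str]) -> list[str]:
    """Run each subtask concurrently and collect results in order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_agent, subtasks))
```

pool.map preserves input order, so the fan-in step can zip results back to their subtasks without extra bookkeeping.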

Step 7: Test, Observe, Iterate

Agents fail in ways that deterministic programs don't. They hallucinate tool arguments, loop on the same action, or misread the system prompt. The cure is observability.

Three things to put in place before you trust an agent with anything real:

Log every tool call. Inputs, outputs, and the model's stated reasoning. When something goes wrong, this is your black box recorder.

Run on a fixed eval set. A folder of sample inputs and expected outputs. Every prompt change gets re-run against it. You'd be surprised how often a "small tweak" regresses an edge case.

Put irreversible actions behind confirmations. Sending email, deleting data, spending money. These should either require human approval or be locked behind a hard-coded allowlist until the agent has earned trust.
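An allowlist gate can be sketched like this; the tool names in SAFE_TOOLS are illustrative:

```python
# Gate irreversible actions: anything not on the allowlist needs a
# human confirmation callback before it runs.
SAFE_TOOLS = {"read_file", "web_fetch", "run_tests"}

def guarded_call(tool_name, run_tool, confirm):
    """Run tool_name only if it is allowlisted or a human approves it."""
    if tool_name in SAFE_TOOLS or confirm(tool_name):
        return run_tool()
    return {"status": "blocked", "tool": tool_name}
```

In a real loop, confirm would prompt a person (or check a policy); in tests it can simply be a lambda returning True or False.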

The Lesson I Keep Relearning

The agents that end up valuable aren't usually the most ambitious ones. They're the ones with a narrow job, clear boundaries, and a prompt that's been refined through a hundred real runs.

Treat the CLAUDE.md like a living document. Every bug you hit is a line you should add so the agent doesn't hit it again. Start small, ship it, and let the scope grow from there.

Getting Started

You can absolutely build all of this from first principles, and you should, at least once, to understand the moving parts. But for real projects, starting from a battle-tested template is faster and less error-prone.

I publish a collection of free, copy-paste Claude Code playbooks at claudecodehq.com, including an AI Agent Builder that scaffolds a custom agent from a plain-English description, an MCP Server Builder for exposing your own tools, and a Parallel Task Agents template for running multiple sub-agents concurrently. Worth a look if you want to skip the boilerplate and get to the interesting parts.

Originally published on claudecodehq.com

Build an AI Assistant Web App Using Streamlit

2026-04-16 03:03:15

In the previous blog, we built an AI assistant using PromptTemplate and LangChain to generate:

  • a clinic name
  • possible clinic locations

In this blog, we will take the same idea and turn it into a simple web app using Streamlit.

This makes the project more practical because users can:

  • select a clinic type
  • choose a city
  • set a distance range
  • instantly generate a clinic name and suggested locations

First, install Streamlit:

pip install streamlit

Store API Keys in a Separate File

To keep the code clean, we can store API keys in a separate Python file.

Create a file named secret_keys.py

arliai_api_key = "xxxx"
cerebras_api_key = "yyyy"
openai_api_key = "zzzz"

Create LLM Configuration File

Now create a file named llm_conf.py

This file will:

  • load API keys
  • configure providers
  • allow switching between LLMs easily

import os
from secret_keys import arliai_api_key, cerebras_api_key, openai_api_key
from langchain_openai import ChatOpenAI

os.environ['ARLIAI_API_KEY'] = arliai_api_key
os.environ['CEREBRAS_API_KEY'] = cerebras_api_key
os.environ['OPENAI_API_KEY'] = openai_api_key

llm_providers = {
    "arliai": {
        "api_key": os.getenv("ARLIAI_API_KEY"),
        "base_url": "https://api.arliai.com/v1",
        "model": "GLM-4.7",
    },
    "cerebras": {
        "api_key": os.getenv("CEREBRAS_API_KEY"),
        "base_url": "https://api.cerebras.ai/v1",
        "model": "llama3.1-8b",
    },
    "openai": {
        "api_key": os.getenv("OPENAI_API_KEY"),
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o-mini",
    }
}

def set_llm(llm_type, creative_level=0.7):
    my_provider = llm_providers[llm_type]
    llm = ChatOpenAI(
        model=my_provider["model"],
        api_key=my_provider["api_key"],
        base_url=my_provider["base_url"],
        temperature=creative_level,
    )
    return llm

This setup makes your app flexible.

You can switch providers like this:

llm = set_llm("openai")

or

llm = set_llm("cerebras")

Add Helper Assistant Function

Now create a file named helpers.py

This file will contain the logic to:

  • generate a clinic name
  • generate possible clinic locations
  • return both results together

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import PromptTemplate

def search_location_for_clinic_with_name(llm, clinic_type, city, km):
    parser = StrOutputParser()

    name_template = PromptTemplate.from_template(
        "Opening New {clinic_type} clinic. Generate only 1 professional name for that. "
        "No explanation, no numbering."
    )

    search_location_template = PromptTemplate.from_template(
        "Opening a new clinic for {new_clinic}, search 5 locations in {city} where no competing clinics are found within {km} km. "
        "Return ONLY comma-separated area names. No explanation, no numbering."
    )

    name_chain = name_template | llm | parser

    full_chain = (
        RunnablePassthrough()
        | {
            "new_clinic": name_chain,
            "city": lambda x: x["city"],
            "km": lambda x: x["km"],
        }
        | {
            "clinic_name": lambda x: x["new_clinic"],
            "locations": search_location_template | llm | parser,
        }
    )

    response = full_chain.invoke({
        "clinic_type": clinic_type,
        "city": city,
        "km": km
    })

    return response

Build the Streamlit Web App

Now create the main file named clinic.py

This file will:

  • create the web interface
  • show select boxes and inputs
  • call the helper function
  • display the result

import streamlit as st

from llm_conf import set_llm
from helpers import search_location_for_clinic_with_name

llm = set_llm('openai')

st.title("Generate Clinic Name & Locations")

clinic_type = st.sidebar.selectbox(
    "Pick a Type of Clinic",
    ("Ayurvedic", "Dental", "Eye", "Gynaecology", "Orthopedic", "Pediatric", "Physiotherapy", "Psychiatry", "Urology")
)

city = st.sidebar.selectbox(
    "Pick a City",
    ("Chennai", "Bangalore", "Mumbai", "Delhi", "Hyderabad", "Pune", "Madurai", "Jaipur", "Kochi", "Coimbatore")
)

km = st.sidebar.number_input(
    "Pick a Distance in KM",
    min_value=1,
    max_value=100,
    value=7
)

if clinic_type and city and km:
    response = search_location_for_clinic_with_name(llm, clinic_type, city, km)
    locations = [loc.strip() for loc in response['locations'].split(",")]

    clinic_name = response['clinic_name'].strip()
    st.header(f"{clinic_type} Clinic: {clinic_name}")

    st.subheader("Generated Locations")
    for item in locations:
        st.write(f"- {item}")

Run the Streamlit App

streamlit run clinic.py

Final Thoughts

In this blog, we converted our AI helper into a Streamlit web app.

We created:

  • a file for API keys
  • a file for LLM configuration
  • a helper function for clinic name and location generation
  • a simple web app UI to run everything

This is a great starting point for building AI-powered tools with a real interface.

In the next blog, we can improve this further by adding LangChain Tools.

SFMC Data Extension Sync: Monitoring Hidden Delays

2026-04-16 03:02:04

SFMC Data Extension Sync: Monitoring Hidden Delays

Marketing campaigns fail in silence more often than they fail with alarms. While Salesforce Marketing Cloud's monitoring dashboard alerts you to outright Data Extension import failures, it remains frustratingly quiet about the performance degradation that can destroy campaign timing and segmentation accuracy.

The challenge with monitoring SFMC Data Extension sync delays lies in distinguishing between acceptable processing times and problematic slowdowns that haven't yet crossed into failure territory. A Data Extension that typically syncs in 15 minutes but suddenly takes 90 minutes won't trigger error notifications—yet it can derail time-sensitive campaigns and create downstream segmentation issues that surface hours later.

Understanding SFMC's Sync Blindspots

Traditional SFMC monitoring focuses on binary outcomes: success or failure. But sync delays operate in a gray zone where technical processes complete successfully while business requirements fail catastrophically.

Consider this scenario: Your morning batch import from Salesforce CRM typically completes by 6:00 AM, allowing Journey Builder to process updated contact attributes for an 8:00 AM campaign send. When that sync stretches to 7:30 AM due to increased data volume or platform congestion, Journey Builder executes on stale data without generating warnings.

The system reports success across all components:

  • Data Extension import: ✓ Completed
  • Journey activation: ✓ Active
  • Send execution: ✓ Delivered

Yet your campaign targets contacts based on outdated information, potentially sending renewal offers to customers who upgraded yesterday or promotional emails to subscribers who opted out hours earlier.

Establishing Baseline Sync Windows

Effective monitoring begins with establishing performance baselines for each critical Data Extension sync operation. This requires capturing metrics beyond SFMC's standard success/failure reporting.

Key metrics to track include:

Processing Duration: Track actual sync completion times against historical averages. A Data Extension consistently importing 50,000 records in 12-18 minutes that suddenly requires 45 minutes signals infrastructure stress or data complexity changes.

Queue Position Impact: Monitor how Data Extension import requests queue during peak processing periods. Morning batch operations often compete with overnight automation activities, creating unpredictable delays.

Record Processing Rate: Calculate records processed per minute to identify performance degradation independent of data volume changes. Declining processing rates often precede visible performance issues.

Implementing Custom Monitoring Dashboards

SFMC's native monitoring lacks the granularity needed for proactive delay detection. Building custom monitoring requires leveraging SFMC's REST API endpoints combined with external monitoring tools.

The Data Extension import status can be queried using:

GET /hub/v1/dataevents/{requestId}

This returns detailed timing information typically hidden in SFMC's interface:

  • CreatedDate: When the import request was submitted
  • StatusLastUpdated: Most recent status change timestamp
  • Request.Status: Current processing state

Cross-reference these timestamps with your established baselines to calculate performance variance in real-time.

For automation-triggered imports, query the automation execution details:

GET /automation/v1/automations/{automationId}/history

The response includes startTime and endTime values for each step, enabling precise delay attribution to specific components within complex automation chains.
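Reducing such a history payload to per-step durations takes only a few lines. The sketch below uses illustrative field names ('name', 'startTime', 'endTime'), not the exact SFMC response schema, so adjust the keys to match what the API actually returns:

```python
from datetime import datetime

def step_durations(history_steps):
    """Compute each step's duration in minutes from ISO-8601 timestamps.

    history_steps: list of dicts with 'name', 'startTime', 'endTime' keys
    (hypothetical field names; map them to the real API response).
    """
    durations = {}
    for step in history_steps:
        start = datetime.fromisoformat(step["startTime"])
        end = datetime.fromisoformat(step["endTime"])
        durations[step["name"]] = (end - start).total_seconds() / 60
    return durations
```

Comparing these per-step numbers run over run is what makes delay attribution precise: a chain that slows down overall usually has one step doing most of the slowing.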

Detecting Pre-Failure Performance Degradation

Data Extension sync delays become most critical when they indicate broader platform performance issues before they escalate to visible failures.

Implement threshold-based alerting that triggers when sync durations exceed 150% of historical averages. This provides early warning while maintaining reasonable tolerance for normal variance.
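Once you keep a rolling history of sync durations, the 150% rule is a one-line check. This is a sketch of the alerting logic only; the threshold value and the history format are choices you make, not SFMC features:

```python
from statistics import mean

def sync_delay_alert(duration_minutes, history_minutes, threshold=1.5):
    """Flag a sync whose duration exceeds threshold x the historical average."""
    baseline = mean(history_minutes)
    return duration_minutes > threshold * baseline
```

For example, a Data Extension averaging 15 minutes across recent runs would alert at anything over 22.5 minutes, while normal variance below that passes silently.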

Monitor these specific delay patterns:

Progressive Slowdown: Sync times gradually increasing over days or weeks often indicate database indexing issues or accumulating platform debt requiring proactive intervention.

Sudden Spikes: Abrupt duration increases typically correlate with platform maintenance, infrastructure changes, or competing high-volume operations from other business units.

Time-of-Day Degradation: Consistent slowdowns during specific hours reveal capacity constraints that can be addressed through scheduling optimization.
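Of these patterns, progressive slowdown is the hardest to spot by eye, because each individual run still looks normal. A least-squares slope over recent run durations gives a simple trend signal (pure-Python sketch; the window size you feed it is up to you):

```python
def trend_slope(durations):
    """Least-squares slope of sync duration vs. run index (minutes per run).

    durations: chronological list of sync durations; needs at least 2 points.
    A clearly positive slope means each run is slower than the last.
    """
    n = len(durations)
    x_mean = (n - 1) / 2
    y_mean = sum(durations) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(durations))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den
```

Run this over, say, the last 30 syncs: a flat series returns roughly zero, while a steady positive slope is the early signature of indexing issues or accumulating platform debt worth investigating before it trips the spike threshold.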

Real-World Impact Scenarios

A financial services client experienced this exact challenge when their daily customer status updates began processing 90 minutes later than baseline without triggering alerts. Their compliance-critical communications continued sending based on previous-day data, creating regulatory exposure and customer confusion.

The delay stemmed from SFMC platform maintenance affecting their data processing pod, but the lack of proactive monitoring meant discovery only occurred when customer service reported incorrect campaign targeting—six hours after the issue began.

Another scenario involved an e-commerce retailer whose product inventory Data Extension sync delays caused promotional emails to advertise sold-out items. The 45-minute delay in inventory updates didn't register as a technical failure, but resulted in customer frustration and decreased campaign performance metrics.

Segmentation Error Prevention

Delayed Data Extension syncs create segmentation errors that manifest as campaign performance issues rather than technical alerts. These errors typically present as:

  • Incorrect audience sizes in Journey Builder entry criteria
  • Outdated personalization attributes in email content
  • Inconsistent contact records across related Data Extensions

Prevent these issues by implementing sync dependency validation. Before campaign execution, verify that prerequisite Data Extensions completed updates within acceptable timeframes. This can be automated using SSJS validation scripts that check LastUpdated timestamps against campaign requirements.
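The article suggests SSJS for running this check inside SFMC; the underlying logic is just a timestamp comparison, sketched here in Python. The 60-minute maximum age is an assumed business rule, not an SFMC default:

```python
from datetime import datetime, timedelta

def data_is_fresh(last_updated, send_time, max_age_minutes=60):
    """Return True if the Data Extension was updated recently enough before the send.

    last_updated, send_time: datetime objects; max_age_minutes is an assumed
    business threshold you tune per campaign.
    """
    return send_time - last_updated <= timedelta(minutes=max_age_minutes)
```

Gating campaign execution on a check like this converts a silent stale-data send into an explicit, actionable hold.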

Conclusion

Monitoring SFMC Data Extension sync delays requires proactive strategies that extend beyond the platform's built-in alerting capabilities. The platform's focus on success/failure binary outcomes masks performance degradation that can compromise campaign effectiveness and data accuracy.

Implementing baseline performance tracking, custom monitoring dashboards, and threshold-based alerting transforms reactive fire-fighting into predictive performance management. The investment in enhanced monitoring infrastructure pays dividends through improved campaign timing, reduced segmentation errors, and early detection of platform issues before they impact business operations.

The question isn't whether sync delays will affect your SFMC environment—it's whether you'll detect and address them before they compromise your marketing objectives.

Stop SFMC fires before they start. Get monitoring alerts, troubleshooting guides, and platform updates delivered to your inbox.

Subscribe to MarTech Monitoring

Claude Mythos Preview: Capability, Cybersecurity, and the Governance Gap

2026-04-16 03:01:42

Why Claude Mythos Preview Deserves Serious Attention

Claude Mythos Preview is not just another model release cycle headline.

It is a useful case for discussing a harder question in AI: what happens when software intelligence scales faster than institutional controls.

Anthropic introduced Mythos under a restricted-access program through Project Glasswing, emphasizing defensive cybersecurity workflows instead of broad public rollout. That decision alone is meaningful: when a model’s capabilities raise risk, deployment strategy becomes part of the technical story.

What Makes This Case Different

Based on Anthropic’s public materials, Mythos shows strong performance in software reasoning and vulnerability-related tasks. The important point is not a single benchmark score; it is the combination of capabilities:

  • advanced code understanding
  • long-horizon task execution
  • higher autonomy in technical workflows

This combination matters because it is inherently dual-use.

A system that can accelerate secure coding and vulnerability remediation can also reduce the operational barrier for offensive misuse. That is not a side effect. It is a structural property of high-capability software models.

Real Opportunities

1) Security work at machine scale

Defensive security still depends on scarce human expertise and slow audit cycles.

If used responsibly, models in this class can reduce time between:

  1. discovery,
  2. triage,
  3. patching,
  4. validation.

That is a practical gain, not a theoretical one.

2) Better support for under-resourced maintainers

Critical infrastructure often relies on open-source components maintained by small teams.

A strong AI assistant, when properly constrained, can reduce asymmetry between well-funded organizations and smaller maintainers.

3) Spillover to broader engineering quality

Capabilities relevant to security often improve adjacent workflows too:

  • code review depth
  • test generation
  • architectural analysis
  • refactoring support

In the best scenario, these systems augment engineering judgment instead of replacing it.

Risks That Should Not Be Minimized

1) Dual-use is unavoidable

The same mechanism that supports defense can also support exploitation.

Ignoring this is not optimism; it is poor risk analysis.

2) Skill-threshold compression

As model guidance improves, fewer specialized skills may be needed to execute sophisticated technical paths. This can expand the pool of actors capable of harmful operations.

3) Transparency asymmetry

Restricted deployment may be justified for safety reasons, but it also limits independent verification.

The result is a governance paradox: higher public impact, lower public auditability.

4) Bad framing on both extremes

Two weak positions dominate discussion:

  • “This changes nothing.”
  • “This is immediate catastrophe.”

A more defensible position is in between: meaningful capability shift, meaningful governance debt.

Governance Is the Core Technical Problem

For high-impact models, governance cannot be an afterthought or a policy PDF.

It has to be implemented in operations:

  • access tiering by risk profile
  • audit logs and traceability
  • sandboxed execution for sensitive tasks
  • mandatory human-in-the-loop checkpoints
  • continuous post-deployment monitoring
  • clear criteria to throttle, limit, or suspend usage

Frameworks like NIST AI RMF and OECD AI principles are useful references, but execution quality is what determines real-world safety.

Final Position

Claude Mythos Preview is better understood as a transition signal than as an isolated product event.

The central issue is no longer just model capability.

It is governance maturity: who can use these systems, under which constraints, with what accountability, and with what external scrutiny.

If institutions evolve slower than capability, technical progress will increase systemic exposure.

If governance and capability advance together, the same technology can materially strengthen defensive security.

That tradeoff is the real frontier.