2026-02-21 08:05:02
Two months ago, I merged a PR that added face detection to our image processing service. Nobody on the team realized this put us squarely in the EU AI Act's high-risk category.
We found out three weeks later, during a manual review. By then, the feature had been in production for 20 days with zero compliance documentation.
That's when I decided: compliance checks belong in CI/CD, not in quarterly reviews.
Most teams treat AI regulation the same way they treated GDPR in 2018 — as a legal problem, not an engineering problem. Someone from legal sends a spreadsheet once a quarter, engineers fill it out from memory, and everyone hopes nothing slipped through.
This doesn't work when your codebase changes daily. A single pip install face-recognition in a feature branch can shift your regulatory classification overnight.
My CI pipeline now checks three things on every PR that touches Python files:
1. Framework detection — What AI/ML libraries are in the dependency tree? New imports of transformers, torch, tensorflow, face_recognition, or similar trigger a flag.
2. Risk indicator scan — Does the code contain patterns associated with high-risk categories? Keywords like candidate_score, credit_risk, biometric, recidivism in function names, variable names, or comments.
3. Article mapping — Which EU AI Act articles apply based on what was found? The scan maps findings to specific obligations (Article 6 for high-risk classification, Article 52 for transparency, Article 53 for foundation models).
If anything changes between the base branch and the PR, the check fails with a clear message explaining what was detected and what the developer needs to do.
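The risk-indicator scan is the easiest piece to picture. Below is a minimal sketch of the idea, not the actual mcp-eu-ai-act internals; the keyword patterns and the employment/credit article mappings are my own illustrative assumptions, not legal advice.

import re
from pathlib import Path

# Illustrative sketch only: the categories, patterns, and article mappings below
# are assumptions for demonstration, not the real mcp-eu-ai-act rule set and
# not legal advice.
RISK_PATTERNS = {
    "biometric": re.compile(r"face_recognition|face_encodings|biometric", re.I),
    "employment": re.compile(r"candidate_score|resume_pars|cv_screen", re.I),
    "credit": re.compile(r"credit_risk|credit_scor", re.I),
}

ARTICLE_MAP = {
    "biometric": "Article 6, Annex III Category 1",
    "employment": "Article 6, Annex III Category 4",
    "credit": "Article 6, Annex III Category 5",
}

def scan(paths):
    """Return {category: [(file, line_no, match), ...]} for every flagged pattern."""
    findings = {}
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for line_no, line in enumerate(text.splitlines(), start=1):
            for category, pattern in RISK_PATTERNS.items():
                hit = pattern.search(line)
                if hit:
                    findings.setdefault(category, []).append((str(path), line_no, hit.group(0)))
    return findings

if __name__ == "__main__":
    for category, items in scan(Path(".").rglob("*.py")).items():
        print(f"[{category.upper()}] applicable: {ARTICLE_MAP[category]}")
        for file, line_no, match in items:
            print(f"  - {file}:{line_no}: {match}")

Diffing the findings against the base branch, which is what the real check does, is what turns this from a report into a gate.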
Here's the workflow I use. It runs on every PR that modifies Python files:
name: EU AI Act Compliance Check

on:
  pull_request:
    paths:
      - '**.py'
      - 'requirements*.txt'
      - 'pyproject.toml'

jobs:
  compliance-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Run compliance scan
        run: |
          pip install mcp-eu-ai-act
          python -m mcp_eu_ai_act.scan --path . --output report.json

      - name: Check results
        run: |
          python -c "
          import json
          report = json.load(open('report.json'))
          risks = report.get('risk_indicators', {})
          high_risk = any(v for v in risks.values() if v)
          if high_risk:
              print('::warning::High-risk AI indicators detected')
              for cat, items in risks.items():
                  for item in items:
                      print(f'::warning::{cat}: {item}')
              print('Review required before merge.')
          else:
              print('No high-risk indicators found.')
          "
The key decisions:
- Watching requirements*.txt and pyproject.toml matters because a dependency change can shift your classification even without code changes.
- ::warning instead of exit 1, because not every detection means "stop." Sometimes you need face detection. The point is to make it visible, not to block everything.
- report.json becomes part of the PR artifacts. When a regulator asks "when did you assess this?", you point to the PR.
When the scan detects something, the developer sees a comment like:
EU AI Act Compliance Scan — 2 findings
[HIGH-RISK] Biometric indicators detected in src/face_verify.py
- face_recognition.face_encodings (line 34)
- Applicable: Article 6, Annex III Category 1
[TRANSPARENCY] LLM interaction detected in src/chatbot.py
- openai.ChatCompletion (line 12)
- Applicable: Article 52 (disclose AI interaction to users)
Action required: Document risk assessment in docs/compliance/
Clear, actionable, tied to specific lines. No legal jargon, no 50-page report.
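Generating that comment is mostly string formatting over report.json. Here's a rough sketch; the helper is hypothetical, it assumes the risk_indicators structure used in the workflow above, and the real comment also carries the file/line and article details:

import json

def render_comment(report_path="report.json"):
    """Render scan findings as a short PR comment (hypothetical sketch)."""
    report = json.load(open(report_path))
    findings = []
    for category, items in report.get("risk_indicators", {}).items():
        for item in items:
            findings.append(f"[{category.upper()}] {item}")
    if not findings:
        return "EU AI Act Compliance Scan: no findings."
    lines = [f"EU AI Act Compliance Scan: {len(findings)} finding(s)"]
    lines += [f"- {finding}" for finding in findings]
    lines.append("Action required: document the risk assessment in docs/compliance/")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_comment())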
In the first month of running this:
- A few false positives on ordinary sklearn code, but since warnings don't block, nobody complained.
- The scan flagged a resume parser, and that's the one that justified the entire setup. Without the automated check, it would have shipped undocumented to production, exactly like the face detection incident.
Full enforcement of high-risk system rules starts August 2026. That's about six months from now. If you're shipping AI features to EU users (or EU customers of your SaaS), the clock is running.
Adding a compliance scan to CI doesn't guarantee compliance; you still need legal review for anything flagged as high-risk. But it does two things no manual process can: it catches a classification change in the same PR that introduces it, and it leaves a timestamped record of every assessment.
The MCP server I use for scanning is open-source: arkforge.fr/mcp. It runs locally, analyzes your Python files, and outputs structured JSON.
If you want the CI/CD integration without maintaining it yourself, there's a hosted version that plugs into GitHub Actions and sends alerts when your compliance status changes.
Either way — the worst time to discover you're building a high-risk AI system is after a regulator tells you.
I build tools that help developers deal with AI regulation. If you've set up something similar, I'd like to hear what worked — drop a comment or open an issue on the repo.
2026-02-21 07:53:27
Ever tried setting up MCP servers for Claude Code or Cursor? You end up copy-pasting JSON configs, hunting for package names, and praying the args are right.
I got tired of it and built mcpx — a pip-installable CLI that manages MCP servers the way npm manages packages.
pip install mcpx
mcpx search filesystem
mcpx install filesystem
That's it. The server gets added to your ~/.claude.json (or a project-level .mcp.json, or your Cursor/VS Code config) automatically. Restart your AI tool and it works.
mcpx/
├── cli.py # typer CLI with 11 commands
├── config.py # Auto-detect Claude/Cursor/VS Code configs
├── registry.py # 50+ server registry with search/browse
├── installer.py # Install/uninstall into JSON configs
├── doctor.py # Diagnostic checks (deps, config, platforms)
└── registry_data.json # Curated server database
The registry is a curated JSON file with server metadata — package name, command, args, env vars, config parameters, categories, and star counts. The installer resolves parameters (like API keys) into the final config entry.
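To make that concrete, here's a sketch of what that resolution step could look like. The entry shape, field names, and package name are guesses based on the description above, not mcpx's actual schema:

import json
from pathlib import Path

# Hypothetical registry entry; field names and package name are guesses,
# not mcpx's real schema from registry_data.json.
ENTRY = {
    "name": "supabase",
    "command": "npx",
    "args": ["-y", "supabase-mcp-server"],  # placeholder package name
    "params": {"api_key": {"env": "SUPABASE_API_KEY"}},
}

def resolve(entry, params):
    """Fold user-supplied parameters (e.g. api_key) into a final config entry."""
    env = {spec["env"]: params[name] for name, spec in entry.get("params", {}).items()}
    return {"command": entry["command"], "args": entry["args"], "env": env}

def install(entry, params, config_path=Path.home() / ".claude.json"):
    """Merge the resolved entry into the client's mcpServers section."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})[entry["name"]] = resolve(entry, params)
    config_path.write_text(json.dumps(config, indent=2))

# install(ENTRY, {"api_key": "your_key"})

The real installer.py also has to handle uninstalls and the per-client config locations that config.py detects, but the core move is the same: registry entry plus user parameters in, JSON config entry out.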
# See what is popular
mcpx top
# Search for database servers
mcpx search database
# Get details
mcpx info supabase
# Install with API key
mcpx install supabase -p api_key=your_key
# Check your setup
mcpx doctor
# List what is installed
mcpx list
MCP (Model Context Protocol) is Anthropic's open standard that lets AI tools like Claude connect to external services — databases, APIs, file systems, cloud providers. It is powerful but the setup is manual: find the right npm package, figure out the args, edit JSON config, hope it works.
I wanted the experience of brew install or npm install but for MCP servers. Search, install, done.
pip install mcpx
mcpx top
GitHub: github.com/LakshmiSravyaVedantham/mcpx
Stars appreciated if you find it useful!
Inspired by the MCP ecosystem growth and projects like claude-code-templates. Built to make the MCP developer experience as smooth as possible.
2026-02-21 07:52:22
Hi, I'm Ayra👋
I'm an aspiring front-end developer, and instead of watching another React tutorial... I decided to build something on my own.
After practicing HTML, CSS, and JavaScript, I wanted to challenge myself:
Can I build a React project without following step-by-step guidance?
So I chose Frontend Mentor's Digital Bank Landing Page Challenge and rebuilt it using React, even though the challenge was originally meant for plain HTML, CSS and JavaScript.
Here's what I learned.
React is component-based, which means you can break your UI into small, reusable pieces instead of writing everything in one large file.
Instead of traditional HTML tags, React uses JSX, which lets you write HTML-like syntax directly inside JavaScript.
Even more interesting: when state changes, React re-renders the UI for you.
That means no manual DOM manipulation. No querySelector. No addEventListener for every interaction.
Just state → UI
I started with the Header component.
Instead of hardcoding navigation links, I stored them in an array and rendered them dynamically using map():
const links = ["Home", "About", "Contact", "Blog", "Careers"];

<ul className="nav__links" aria-label="navigation links">
  {links.map(link => (
    <li key={link}>
      <a href="#">{link}</a>
    </li>
  ))}
</ul>
It felt like a small decision, but it made the component much more flexible.
This was the part that truly made me understand React.
In vanilla JavaScript, I would normally select the element, toggle a class, and wire up event listeners by hand.
In React, we manage UI changes using state.
Here's how I handled the toggle:
import { useState } from "react";

function Header() {
  const [isOpen, setIsOpen] = useState(false);

  return (
    <button
      className={`toggle ${isOpen ? "toggle--open" : ""}`}
      aria-label="menu toggle"
      aria-expanded={isOpen}
      onClick={() => setIsOpen(!isOpen)}
    >
      <span></span>
      <span></span>
      <span></span>
    </button>
  );
}
Now, instead of "manually changing the DOM," I just update the state.
React handles the rest.
That mental shift of thinking declaratively instead of imperatively was huge for me.
I didn't want my Header component to become too large.
So I split things into two smaller components: MobileMenu and Backdrop.
Both depend on the same isOpen state, so I passed it down as props:
<MobileMenu links={links} isOpen={isOpen} />
<Backdrop isOpen={isOpen} onClose={() => setIsOpen(false)} />
You can check out the full MobileMenu and Backdrop components in my GitHub repo: https://github.com/ayra-baet/bank-landing-page-react
This made everything easier to read, easier to reuse, and easier to reason about.
It also forced me to think about where state should live and how components share it through props.
This is something you don't really experience when building static HTML pages.
Even though I only rebuilt the header section, I learned a lot:
- Component structure matters. Should everything live in one file, or should it be broken into smaller pieces?
- If multiple components need the same state, it should probably live higher up.
- Arrays + props + dynamic rendering = scalable UI.
- React is not just "JavaScript with different syntax." It's a shift in mindset.
React felt confusing when I just watched tutorials.
But when I built something on my own, struggled a bit, and solved real UI problems... it finally clicked!
In my next post, I'll share how I built the rest of the landing page...
2026-02-21 07:49:02
This is a follow-up to my previous articles: AWS SRE's First Day with GCP: 7 Surprising Differences and AWS Multi-Account Architecture: The Organizational Chaos No One Talks About.
A few months ago, I wrote enthusiastically about GCP after my first hands-on experience. The infrastructure design was cleaner. The networking model made more sense. The pricing was better. I genuinely believed GCP had solved many of AWS's fundamental architectural problems.
After actually building and running my personal ML project on GCP for several months, I need to eat some humble pie.
Here's what I've learned: Infrastructure elegance doesn't win. Ecosystem breadth does.
GCP's design is still superior from an architectural purity standpoint. But AWS remains the better choice for most organizations—and now I understand why.
When I praised GCP's cleaner architecture, I focused on foundational services: compute, networking, storage, Kubernetes. These are areas where GCP genuinely excels.
But here's what I didn't account for: The majority of production workloads don't just need foundational services. They need the ecosystem around them.
In AWS:
Amazon Managed Streaming for Apache Kafka (MSK) gives you provisioned brokers, automated patching, multi-AZ replication, and built-in CloudWatch monitoring.
In GCP:
You build it yourself with open-source Kafka on GCE instances or GKE.
The reality check:
Running Kafka in-house is not impossible; SREs have been doing it for years. But it's a significant operational burden: capacity planning, partition rebalancing, broker upgrades, and being on call when something falls over.
For a dedicated SRE, this becomes a part-time to full-time job if Kafka is core to your business. For a small team, it's a distraction from product development.
AWS MSK doesn't make this complexity disappear—it just shifts the responsibility. That shift is worth hundreds of thousands in salary costs annually for most organizations.
In AWS:
Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) handles provisioning, patching, snapshots, and scaling for you.
In GCP:
Roll your own Elasticsearch cluster, or use Elastic Cloud Marketplace (third-party, more expensive).
The operational nightmare:
Elasticsearch is notoriously finicky in production: JVM heap tuning, shard sizing, cluster state issues, and index lifecycle management all demand constant attention.
I've seen dedicated SRE teams with 2-3 engineers just managing Elasticsearch clusters for logging and observability. It's that complex at scale.
Unless search is your core business (like Elastic.co itself), running it in-house is resource-intensive compared to using a managed service.
AWS:
Amazon Managed Workflows for Apache Airflow (MWAA)
GCP:
Cloud Composer (managed Airflow)
My experience:
I previously ran Airflow in-house on Docker. Both managed services are better than DIY. But AWS MWAA integrates more naturally with the broader AWS ecosystem (Lambda, Step Functions, Glue, etc.).
For GCP, if you're already heavily invested in BigQuery and Dataflow, Cloud Composer makes sense. For multi-service orchestration, MWAA edges ahead.
In my first article, I praised GKE as more mature and better integrated. After deeper experience, I've changed my mind.
Why GKE looks better on day 1: gcloud container commands mirror kubectl patterns, and the cloud integrations come built in instead of installed as add-ons. As an SRE coming from AWS, GKE genuinely felt cleaner and more Kubernetes-idiomatic.
In EKS:
You need to install and maintain add-ons for AWS integration: the VPC CNI, the EBS CSI driver, the AWS Load Balancer Controller, and so on.
First reaction: "Why isn't this built-in? GKE is cleaner!"
After practicing with both, I think this separation is actually better for enterprise environments, where you want explicit control over what runs in the cluster and when it changes.
Reality: If you manage these through Terraform and hide the complexity in IaC, the operational overhead is minimal. After initial setup, add-ons are stable and rarely require attention.
This was the biggest surprise.
Cost comparison for a production cluster:
Scenario: 10-50 nodes, scaling based on load, mix of workload types
GKE (with Google-managed node pools):
- Control plane: FREE (under 15,000 pods)
- Nodes: Standard pricing
- Node pool autoscaling: Built-in
- Typical monthly cost: $2,500-4,000
EKS (with managed node groups + Karpenter):
- Control plane: $73/month per cluster
- Nodes: Standard pricing (often cheaper than GCP equivalent)
- Managed node groups: Built-in autoscaling
- Karpenter: Advanced provisioning (free, OSS)
- Typical monthly cost: $2,200-3,500
EKS is 10-15% cheaper for equivalent workloads at scale, even with the control plane cost.
Why? Two reasons: node pricing (EC2 instances are often cheaper than the GCP equivalent) and Karpenter.
What is Karpenter?
An open-source Kubernetes cluster autoscaler built by AWS, designed to replace the standard Cluster Autoscaler.
Why it's better:
Traditional autoscaling (GKE and the EKS Cluster Autoscaler) scales predefined node pools: when pods are pending, it adds another node of a fixed shape, whether or not that shape fits the workload.
Karpenter looks at the pending pods directly, provisions right-sized nodes on the fly (choosing instance types, including Spot, to fit), and consolidates underutilized nodes afterwards.
GKE alternative: GKE has improved its autoscaling, but as of 2025, it doesn't match Karpenter's flexibility and intelligence.
After running workloads on both:
GKE advantages: a cleaner initial experience, a free control plane, and tooling that feels Kubernetes-idiomatic from the start.
EKS advantages: lower total cost at scale, Karpenter, and tighter integration with the rest of the AWS managed-service ecosystem.
For SRE teams managing production infrastructure at scale, EKS wins. The cost savings and Karpenter's intelligence outweigh GKE's cleaner initial experience.
For most organizations, AWS remains the better choice.
Not because the infrastructure is better designed (it's not).
Not because networking is simpler (it's definitely not).
But because AWS reduces the operational burden more completely through breadth of managed services.
Choose GCP if: your workloads center on the foundational services where GCP excels (GKE, BigQuery, Dataflow) and you have the SRE capacity to run things like Kafka and Elasticsearch yourself.
Choose AWS if: you lean on the breadth of managed services (MSK, OpenSearch, MWAA, and the rest) and want to keep the operational burden off a small team.
Have you compared AWS and GCP in production? What was your experience? Did you find the managed services gap as significant as I did? Let me know in the comments.
This article is part of a series exploring practical cloud architecture. Check out the previous articles for more context on AWS multi-account architecture and GCP's design advantages.
Connect with me on LinkedIn: https://www.linkedin.com/in/rex-zhen-b8b06632/
I share insights on cloud architecture, SRE practices, and honest takes on cloud platforms. Let's connect!
2026-02-21 07:39:41
liner notes:
Professional : I was planning on taking the day off to rest up from being sick, but it was a light day and I wanted to see if I could get this MCP App to a good place. Started the day off catching up with some team members. Responded to a couple of community questions. Then I dedicated the rest of the day to getting an MVP of this MCP App going. Yesterday, I tried using a "skill" that was in the official docs to use AI to generate an MCP App, but it failed to use the "skill" twice. I then forced it to use the skill, and the app that it created didn't even have the startup command that the docs said it should have. It also had a different file structure from the example apps I've seen. Kind of felt like a waste of time. So I just followed the manual steps and then added the stuff that I wanted. Had to dig deep into the docs and API to find out if I could make it work. I found it and it works, kind of. haha I got the base and main functionality working, and there's an extra feature I would like to add, but it's working.
Personal : Last night, I went through Bandcamp and picked up some projects and started to put together the social media posts. I also played around with some new Web APIs before calling it a night.
Going to put together the radio show for tomorrow. The laptop battery that I replaced seems to be working. After the radio show work, I may work on another application I've been looking to build. We'll see. Going to eat dinner and get to work. Radio show on Saturday at https://kNOwBETTERHIPHOP.com and Sunday study sessions at https://untilit.works.
Have a great night and weekend!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
2026-02-21 07:30:35
Back in 2018 I wrote a Medium article about a school project I built to help clean up GitHub repositories. I had published the little app and shared it with classmates. Little did I know that people were using the tool!
The way I found out was this: I published some other crappy article on Medium and wanted to see how many people had read it. It was something like 6. But I couldn't help but notice there were over 10k reads on the dev tool one. And a bunch of comments about how it's broken and it sucks. Woo hoo.
So I immediately fixed all the bugs and responded to the comments.
After that I polished up the app, made a new UI, and generally got obsessed with the project again.
Funnily enough, even after that wake-up call I didn't think to start a db to track usage. I eventually did in 2022, but I lost 4 years of its most popular time period. The db went down in 2024 and I didn't notice for like 9 months. Jeez. There were a lot of hard lessons learned here.
Eventually I got all the basics covered. But at that point the need for the tool died down significantly. I've tracked 6.5k unique signups for RepoSweeper across 2022-2024 and 2025-2026. In reality the number is prob north of 15k.
Not many people squander lightning in a bottle as regularly as I do.
I recently expanded the tool suite to do other bulk actions that I saw requested on GitHub's Community Discussions board: collaborator management, visibility settings, archiving, etc.
Anyway. I did a writeup about the new features and documented everything I'd built.
11 reads.
Not 11k. Eleven.
The painful irony is that the product is genuinely more useful now. But "more useful" doesn't make for a better headline.
The original post worked because it was about the reader's problem, not my product. "25 ways my tool helps you" is always worse than "here's the exact shell command I was too lazy to remember."
Every good dev post probably has one job: make the reader feel seen before you make them feel sold to.
I built something that solved a real problem for me, wrote honestly about it, and 25k people related. Then I got excited about what I built next, wrote about that, and almost nobody cared — because I switched from their perspective to mine.
Going back to basics. Next post: one problem, one solution, one story.
Hopefully I'll notice when it blows up this time.
RepoSweeper is still free if you want to check it out. RepoRecap PRO is the AI layer I got too excited about. Roast me in the comments.