2026-01-27 23:13:48
TL;DR: Organizations face a dilemma monitoring hybrid cloud (AWS, GCP) and on-premise infrastructure: extend traditional Centreon or adopt cloud-native Prometheus/AlertManager. The article compares these approaches, including a hybrid model, to guide selection based on infrastructure dynamics and operational needs.
Choosing between Centreon and Prometheus with AlertManager for cloud monitoring in AWS and GCP requires a deep dive into architecture, scalability, and integration. This guide compares both solutions, provides configuration examples, and outlines a hybrid approach to help you select the right toolset for your cloud and on-premise infrastructure.
You’re managing a hybrid infrastructure with critical workloads on-premise and across multiple cloud providers like AWS and GCP. Your existing monitoring stack, perhaps built around a traditional tool like Centreon, is robust for your servers and network gear. However, as you scale in the cloud, you face a new set of challenges: instances appear and disappear with autoscaling, key metrics live behind provider APIs such as CloudWatch, and alerts must reach both your NOC and your DevOps teams.
This decision impacts everything from team skillset requirements to the reliability of your alerting. Let’s explore three practical solutions to this common problem.
For organizations with a significant investment in Centreon, extending it to monitor the cloud is a logical first step. This approach leverages Centreon’s powerful framework and connects it to cloud provider APIs, treating cloud services as just another set of resources to be monitored.
Centreon integrates with cloud platforms primarily through its “Plugin Packs” and the underlying Nagios-style plugins. The workflow is typically:
1. Install the relevant Plugin Pack (e.g., centreon-plugin-Cloud-Aws-Api or centreon-plugin-Cloud-Gcp-Api), whose plugins query the cloud provider’s monitoring API (e.g., AWS CloudWatch, Google Cloud Monitoring).
2. Configure hosts and services in Centreon that map cloud metrics such as CPUUtilization or FreeableMemory to standard service checks.

First, you install the necessary AWS plugin on your Centreon poller. Then, within the Centreon UI, you would configure a new host and a service check. The underlying command might look something like this:
/usr/lib/centreon/plugins/centreon_aws_ec2_api.pl \
--plugin=cloud::aws::ec2::plugin \
--mode=cpu \
--aws-secret-key='SECRET_KEY' \
--aws-access-key='ACCESS_KEY' \
--region='eu-west-1' \
--dimension-name='InstanceId' \
--dimension-value='i-0123456789abcdef0' \
--warning-cpu-utilization='80' \
--critical-cpu-utilization='95'
This command checks the CPU utilization for a specific EC2 instance (i-0123456789abcdef0) and will change state if the utilization exceeds 80% (Warning) or 95% (Critical).
This approach embraces the cloud-native ecosystem. Prometheus is a pull-based monitoring system designed for the dynamic, service-oriented world of containers and microservices, making it a natural fit for monitoring cloud environments.
The Prometheus stack uses a different paradigm:
- Exporters (e.g., stackdriver_exporter for GCP, cloudwatch_exporter for AWS) query the cloud APIs and expose the metrics in a Prometheus-compatible format.
- Crucially, Prometheus has built-in service discovery for AWS (EC2) and GCP (GCE), automatically finding new instances to monitor.

Your prometheus.yml configuration would use service discovery to find and scrape metrics from all GCE instances in a project:
# prometheus.yml
scrape_configs:
  - job_name: 'gcp-gce-instances'
    gce_sd_configs:
      - project: 'your-gcp-project-id'
        zone: 'europe-west1-b'
        port: 9100  # Assuming node_exporter is running on this port
    relabel_configs:
      - source_labels: [__meta_gce_instance_name]
        target_label: instance
Next, you would define an alerting rule in a separate file (e.g., gce_alerts.yml):
# gce_alerts.yml
groups:
  - name: gce_instance_alerts
    rules:
      - alert: HighCpuUtilization
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "High CPU utilization on {{ $labels.instance }}"
          description: "{{ $labels.instance }} has had a CPU utilization above 90% for the last 10 minutes."
This rule will fire if any instance’s CPU utilization (calculated from the node_exporter metric) remains above 90% for 10 minutes. AlertManager would then take over to route the notification.
| Feature | Centreon | Prometheus & AlertManager |
|---|---|---|
| Architecture | Centralized pollers actively executing checks against hosts and services (active-check model). State-based (OK, WARN, CRIT). | Decentralized scrapers pulling metrics from endpoints. Stores data as time-series. |
| Cloud Integration | Via API-based plugins (Plugin Packs). Requires manual or semi-automated configuration of hosts/services. | Native service discovery for major cloud providers. Uses exporters to query cloud APIs (e.g., cloudwatch_exporter). |
| Dynamic Environments | Can be challenging. Relies on auto-discovery modules or API scripts to keep configuration in sync. | Excellent. Service discovery automatically detects and removes targets as they are created and destroyed. |
| Alerting | Mature and powerful. Features complex dependencies, acknowledgements, scheduled downtime, and escalation chains built-in. | Highly flexible rules via PromQL. AlertManager handles grouping, silencing, and routing but lacks Centreon’s deep dependency logic out-of-the-box. |
| Data Model | Stores performance data (RRDtool) and state. Less suited for high-cardinality metrics. | Time-series with labels. Optimized for high-volume, high-cardinality data from sources like containers. |
| Best For | Hybrid environments with a strong on-premise footprint. Teams invested in a traditional ITIL/NOC workflow. | Cloud-native, containerized, and microservice-based workloads. DevOps teams that value flexibility and integration. |
You don’t always have to choose. A hybrid approach can be a powerful strategy, especially during a transition period or in complex environments where each tool plays to its strengths.
The goal is to integrate the two systems. A common and effective pattern is to use Prometheus for what it does best (collecting cloud-native metrics) and feed critical alerts into Centreon to leverage its powerful notification engine.
In AlertManager, you define a webhook_configs receiver that forwards the alert to a custom script or API endpoint on the Centreon side. This way, your Network Operations Center (NOC) can still use Centreon as its single source of truth for alerts, while your DevOps teams can leverage the power and flexibility of Prometheus for cloud monitoring.
In your alertmanager.yml, you would define a receiver that points to a webhook listener on your Centreon server:
# alertmanager.yml
route:
  receiver: 'centreon-webhook'
receivers:
  - name: 'centreon-webhook'
    webhook_configs:
      - url: 'http://your-centreon-server/path/to/webhook-listener.php'
        send_resolved: true
The webhook-listener.php script would be responsible for parsing the JSON payload from AlertManager and translating it into a passive check result for a corresponding service in Centreon. For example, it could extract the alert’s status (‘firing’ or ‘resolved’) and map it to a Centreon state (CRITICAL or OK).
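As a rough sketch of that translation logic (shown here in Python rather than PHP for brevity; the submit_passive_check helper is a placeholder, since how you feed passive results into Centreon depends on your installation), it might look like this:

# Hypothetical AlertManager webhook listener (Python sketch).
# Assumes the standard AlertManager webhook JSON payload.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

STATE_MAP = {"firing": 2, "resolved": 0}  # Nagios/Centreon states: 0=OK, 2=CRITICAL

def submit_passive_check(host, service, state, output):
    # Placeholder: push the result into Centreon here
    # (REST API, external command file, etc. -- depends on your setup).
    print(f"{host}/{service} -> state={state}: {output}")

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        for alert in payload.get("alerts", []):
            state = STATE_MAP.get(alert.get("status"), 3)  # 3 = UNKNOWN
            submit_passive_check(
                alert.get("labels", {}).get("instance", "unknown-host"),
                alert.get("labels", {}).get("alertname", "prometheus-alert"),
                state,
                alert.get("annotations", {}).get("summary", ""),
            )
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 9099), AlertHandler).serve_forever()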
The choice between Centreon and Prometheus/AlertManager is not just about technology; it’s about matching the tool to your architecture, your team, and your operational model.
Ultimately, the best solution is one that provides clear, actionable insights into the health of your systems, regardless of where they run.
👉 Read the original article on TechResolve.blog
2026-01-27 23:11:27
TL;DR: AI-generated logos create significant technical debt for IT professionals due to their non-scalable raster format, workflow bottlenecks, and high copyright risk. The article advocates for treating logos as technical assets, proposing solutions like emergency command-line vectorization, self-service SVG toolkits, and a “Design-as-Code” CI/CD pipeline for sustainable, scalable asset management.
AI image generators are powerful but produce non-scalable, legally ambiguous raster images unsuitable for logos. This guide explains the technical pitfalls of using tools like ChatGPT for logo design and provides scalable solutions for IT professionals, from emergency vectorization to integrating proper design assets into a CI/CD pipeline.
You’ve seen it happen. A project manager or a junior developer, empowered by the latest AI tools, proudly presents a “logo” for a new internal service, generated in seconds via a prompt to ChatGPT or Midjourney. While the initial concept might be interesting, the asset itself is the beginning of a long chain of technical debt. This is not a critique of the idea, but a diagnosis of the deliverable.
As a DevOps or IT professional, you’re the one who has to integrate this asset into production systems, and the symptoms of a poor foundation become immediately apparent: the raster file pixelates at every size you actually need, nobody can edit or re-theme it, and its copyright status is murky at best.
This isn’t a design problem; it’s an engineering problem. You’ve been handed a poorly built dependency, and now you have to manage the fallout.
It’s 4:00 PM on a Friday and the deployment is blocked waiting for a usable favicon. You don’t have time to engage a designer. The immediate goal is to convert the provided raster image into a scalable vector graphic (SVG). This is a lossy, imperfect process, but it can unblock you in an emergency.
We can use the powerful, open-source vector graphics editor Inkscape, which has a robust command-line interface perfect for scripting. The goal is to use its “Trace Bitmap” functionality to generate SVG paths from the pixels.
Place the AI-generated image (e.g., crappy-logo.png) in your working directory, then run:

inkscape --actions="file-open:crappy-logo.png;select-all:all;trace-bitmap;export-filename:logo-traced.svg;export-do;file-close"
This command performs the following actions in sequence:
- file-open:crappy-logo.png: Opens the input file.
- select-all:all: Selects the image object on the canvas.
- trace-bitmap: Runs the default bitmap tracing algorithm. You can add parameters for more control (e.g., trace-bitmap:scans,8 for 8 color layers).
- export-filename:logo-traced.svg: Sets the output file name.
- export-do: Executes the export.
- file-close: Closes the file without saving changes to the original.

Result: You now have a logo-traced.svg file. This SVG is often complex, with many unnecessary nodes, and may have visual artifacts. It will likely need manual cleanup in Inkscape’s GUI or optimization, but it’s a scalable vector format that can get you through the immediate deployment.
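If Node.js is available, an automated SVGO pass (the same optimizer used in the CI section below) can strip much of the traced bloat before any manual cleanup; the file names are simply carried over from the example above:

npx svgo logo-traced.svg -o logo-clean.svg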
The best way to fix a recurring problem is to provide better tools and processes. Instead of letting team members resort to external AI generators, empower them to create simple, consistent, and technically sound assets for internal projects. Since SVG is just XML, developers can create and modify it with code.
For a new microservice or an internal dashboard, a complex, artistic logo is overkill. A simple, geometric logo is often sufficient. You can create a template that developers can easily modify.
Here is an example of a simple, hand-written SVG for a tool called “BuildBox”. It’s just a box with a checkmark, using your company’s approved brand colors.
<svg width="100" height="100" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
<!-- A simple, solid background box with rounded corners -->
<rect
x="5" y="5"
width="90" height="90"
rx="10" ry="10"
fill="#333F4F" /> <!-- Your company's primary dark color -->
<!-- A 'check' symbol to indicate success/builds -->
<path
d="M30 50 L45 65 L70 40"
stroke="#4CAF50" <!-- Your company's success color -->
stroke-width="8"
fill="none"
stroke-linecap="round"
stroke-linejoin="round" />
</svg>
A developer can copy this, change the fill and stroke colors, or adjust the path data to create a different icon. It’s version-controllable, incredibly lightweight, and requires no specialized tools.
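Because the asset is plain text, even a one-line substitution can derive a variant. A hypothetical recolor, assuming the file above is saved as buildbox.svg:

sed 's/#4CAF50/#FF9800/' buildbox.svg > deploybox.svg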
For any customer-facing product or long-running project, logos and other design assets must be treated as first-class citizens of the development lifecycle. This means engaging professional designers and integrating their output into your automated workflows—a “Design-as-Code” approach.
The workflow: designers deliver master SVG files, which are committed to a dedicated directory in your repository (e.g., /src/assets/svg), and your CI pipeline optimizes them automatically. Here’s an example command for an SVGO optimization step in a CI script:
# Install SVGO as a dev dependency in your project
npm install --save-dev svgo
# Run SVGO from your package.json scripts or CI file
# This command processes a folder, outputs to a build folder,
# and keeps the code human-readable.
npx svgo --folder=src/assets/svg --output=dist/assets/svg --pretty --indent=2
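Wired into a pipeline, that step might look like the following (a minimal GitHub Actions sketch; the workflow name and paths are illustrative, not part of any existing setup):

# .github/workflows/optimize-svgs.yml (hypothetical)
name: Optimize SVG assets
on:
  push:
    paths:
      - 'src/assets/svg/**'
jobs:
  svgo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install --save-dev svgo
      - run: npx svgo --folder=src/assets/svg --output=dist/assets/svg --pretty --indent=2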
This process ensures that every logo and icon in your system is version-controlled, automatically optimized, and consistent across every product that consumes it.
To put it all in perspective, here is how the different approaches stack up against key technical requirements.
| Feature | AI Image Generator | Self-Service SVG | Professional Design Workflow |
|---|---|---|---|
| Scalability (Vector vs. Raster) | Raster. Poor scalability. | Vector. Infinitely scalable. | Vector. Infinitely scalable. |
| Editability | Extremely difficult. A flat image. | Simple. Edit colors and shapes in code. | Excellent. Master files are fully editable. |
| Copyright Risk | High. Ownership is ambiguous. | None. The asset is original code. | None. Clear work-for-hire ownership. |
| Brand Consistency | Very low. AI has no context of your brand. | Moderate. Can use brand colors and simple shapes. | High. Enforces brand guidelines. |
| Initial Speed | Very fast (seconds). | Fast (minutes). | Slow (hours to days). |
| Long-term Maintainability | None. It’s a dead-end asset creating debt. | Good. Version controlled and easy to update. | Excellent. Managed and automated pipeline. |
While AI image generation is a fascinating technology, it is the wrong tool for creating foundational brand assets like logos. By treating logos as critical technical assets and applying engineering principles to their management, you can avoid technical debt and build more robust, professional systems.
👉 Read the original article on TechResolve.blog
2026-01-27 23:09:41
I've been using AI coding assistants daily for over a year now.
Claude Code for complex refactoring, Cursor for quick edits, GitHub Copilot for autocomplete. But there was always a frustrating gap: these tools couldn't see my architecture documentation.
Every time I asked Claude to "add a new endpoint to the payment service," it would guess. It didn't know that our payment service talks to Stripe, uses Redis for caching, and has specific security requirements documented in our ADRs. I'd spend more time correcting the AI than writing code myself.
Today, we're closing that gap. Archyl now exposes a full MCP (Model Context Protocol) server with 56 tools that give AI assistants complete visibility into your architecture.
Model Context Protocol is Anthropic's open standard for connecting AI assistants to external tools and data sources. Think of it as a universal adapter between LLMs and the systems they need to interact with.
Instead of copy-pasting context into prompts, MCP lets AI assistants directly query your tools. They can read data, take actions, and stay synchronized with your actual systems.
And Archyl's MCP server means your architecture documentation becomes a first-class data source for any AI assistant.
Here's where it gets exciting. With the Archyl MCP server, your AI assistant can:
Ask natural questions and get real answers:
"Which elements are linked to the Payment Processor system?"
"What containers does the User Service depend on?"
"Show me all systems that interact with our PostgreSQL database"
"What ADRs affect the authentication flow?"
The AI doesn't guess. It queries your actual documented architecture and returns precise, structured information.
Claude Code querying architecture via MCP
Your AI understands the full hierarchy:
Drill down from systems to containers to components
Explore relationships and dependencies
Understand the technology stack at each level
When you ask "what technologies does the Order Service use?", the AI returns the actual documented stack, not a hallucinated guess.
This is the killer feature. The MCP server supports write operations:
Add relationships between elements
Create and update ADRs
Write project documentation
Define user flows
Ask Claude to "document the new notification service we just built" and it can create the C4 elements, link them to existing systems, and even draft an ADR explaining the design decision.
The AI always sees the latest state. No stale context, no outdated documentation. When your teammate updates the architecture, your AI assistant sees it immediately.
56 Tools, One Integration
We didn't build a minimal proof-of-concept. The MCP server exposes comprehensive functionality:
Projects & Settings: List, get, and manage projects. Configure AI providers and discovery settings.
C4 Model (All 4 Levels): Full CRUD for systems, containers, components, and code elements. Create relationships, manage overlays, handle the complete model hierarchy.
Documentation: Create and update architecture documentation. Link docs to specific C4 elements.
ADRs: Full Architecture Decision Record management. Create, update, list, and link ADRs to the elements they affect.
User Flows: Define and visualize user journeys through your system.
Discovery: Trigger AI-powered architecture discovery on your connected repositories.
Teams: Query team structure and project access.
Every tool returns structured data that AI assistants can reason about. No parsing HTML, no scraping UIs, no brittle integrations.
Here's how to connect Claude Code (or any MCP-compatible tool):
Go to your Archyl profile, click on "API Keys", and create a new key. Give it a descriptive name like "Claude Code" and select the scopes you need (read-only or full access).
Copy the key immediately — you won't see it again.
Add Archyl to your MCP configuration. For Claude Code, add this to your settings:
{
  "mcpServers": {
    "archyl": {
      "url": "https://api.archyl.com/mcp",
      "transport": "http",
      "headers": {
        "X-API-Key": "your_api_key_here"
      }
    }
  }
}
That's it. Ask your AI assistant about your architecture and watch it fetch real data from Archyl.
Try these prompts:
"List all my Archyl projects"
"What systems exist in the E-commerce Platform project?"
"Show me the relationships for the Payment Gateway"
"Create a new ADR explaining why we chose PostgreSQL"
Why This Matters
Architecture documentation has always had a discoverability problem. You write it, it lives in a wiki or a diagram somewhere, and then nobody reads it. Engineers ask questions in Slack instead of checking the docs.
MCP changes the interaction model. Documentation isn't something you go read — it's something your AI assistant knows. When you ask "how does payment processing work?", the answer comes from your actual architecture, not the AI's training data.
This has profound implications:
Onboarding becomes instant. New engineers ask their AI about system architecture and get accurate answers from day one.
Context is always available. When writing code, the AI knows exactly what services exist, how they connect, and what decisions shaped them.
Documentation stays current. Because it's actively used, inaccuracies get noticed and fixed. Dead documentation is documentation nobody reads.
AI suggestions are grounded. When Claude suggests a design, it's informed by your actual architecture, not generic patterns.
We're entering an era where AI assistants are genuine collaborators in software development. But they're only as good as the context they have access to.
Most AI interactions today are context-poor. You paste some code, add a brief description, and hope the AI figures out the rest. The results are mediocre because the AI is working blind.
MCP-powered integrations flip this model. Your AI has persistent, queryable access to everything it needs: your code (via repository integration), your architecture (via Archyl), your issues (via Jira/Linear integrations), your documentation (via Notion/Confluence integrations).
The AI becomes a true team member with access to team knowledge.
Archyl's MCP server is our contribution to this vision. Your architecture shouldn't be locked in a diagram tool. It should be accessible to every tool your team uses, including your AI assistants.
This is version 1. Here's what we're building next:
Proactive suggestions: The MCP server could watch for architecture changes and suggest documentation updates.
Cross-reference linking: Connect ADRs to specific commits, link documentation to CI/CD events, create a web of interconnected knowledge.
Custom queries: Define organization-specific queries like "show me all services owned by the payments team."
Audit logging: Track every MCP interaction for compliance and debugging.
The MCP server is available today on all Archyl plans. If you're already using Claude Code, Cursor, or another MCP-compatible tool, you can connect in minutes.
Create an API key, add the configuration, and start talking to your architecture.
And if you're not using Archyl yet, sign up for free and see how AI-powered architecture documentation works. Connect a repository, run discovery, and then connect your favorite AI assistant.
Your architecture is too important to be locked in static diagrams. Let your AI assistants explore it.
Want to learn more about Archyl's AI capabilities? Check out our post on AI-Powered Architecture Discovery, or start with the basics in our Introduction to the C4 Model.
2026-01-27 23:07:38
Recently I went through a phase of learning Nim. It was a good time, but I ultimately decided not to delve too deep into it. This journey, however, spawned a microlibrary: Monika.
I started Nim on a whim because I wanted a systems programming language under my belt, or at least native compilation and better speed, and for the sake of applying what I was learning. I did C a lot of years ago, but I didn't want to deal with pointers directly, and Rust syntax was a bit too much for me (even though I'm used to seeing some of Scala's unsightly method signatures).
This library is actually a port from my own library JpnUtils written in Scala. It has a few extra functions that I didn't include in JpnUtils.
Monika is a Japanese strings microlibrary for Nim. It provides a clean, "implicit-style" API for handling characters, strings, and conversions.
Monika uses a converter to extend standard strings. Simply import Monika and start using the utility methods.
import monika/japaneseutils
import monika/punctuation
import monika/halfwidthconverter
if "こんにちは".hasHiragana:
echo "Contains Hiragana!"
if "モニカ".hasKatakana:
echo "Contains Katakana!"
if "学校".hasKanji:
echo "Contains Kanji!"
# full-width to half-width
echo "ハロー、ワールド!".toHalfWidth
# Output: ハロー、ワールド!
# Check for voiced marks
if "が".hasDakuten:
echo "This character is voiced."
let msg = "Hello"
echo msg.wrapInSingleQuotes # Output: 「Hello」
echo msg.wrapInDoubleQuotes # Output: 『Hello』
let s = "ガキ"
let h = "が".asRune()
let k = "エ".asRune()
if h.isSome:
echo h.get.hiraToKata() # Output: カ
else:
echo "empty string"
if k.isSome:
echo k.get.kataToHira() # Output: え
else:
echo "empty string"
let str = "日本語abcカナ"
echo str.containsOnly({Kanji, Katakana}) # false
let summary = str.scriptSummary()
echo summary.hiragana # 0
echo summary.katakana # 2
echo summary.kanji # 3
echo summary.other # 3
It was a good experience but, if I'm honest, I don't see myself using Nim much outside of really niche things, and even then I might reconsider it. Maybe I'll write a post in the future about what I don't like about the current Nim ecosystem.
If you wanna check it out, the repo's here
2026-01-27 23:01:23
A wave of armed conflicts and international political crises is redrawing power lines and pushing millions into peril. The wars in Ukraine, Palestine and the wider Middle East, Sudan, Yemen, Ethiopia, and Afghanistan, and now the recent US attack on Venezuela, are not isolated episodes. They are part of a broader breakdown of cooperation and human solidarity.
When the United States withdrew from international aid, abandoning 66 organizations (including 31 UN entities, many working on peace, democracy, and climate), it sent a loud and clear message: “multilateralism is losing ground, and national interests are taking its place”. The cost is not abstract. It is paid in the lives, rights, and dignity of people who depended on these institutions.
Instead of preventing wars and protecting civilians, a retreat from multilateralism is fueling human rights violations, shrinking democratic freedoms, and giving more space to authoritarian politics. The world today is watching the unraveling of a system built after 1945, under the UN Charter, to keep humanity from the horrors of another world war, to uphold human rights, and to promote peace and prosperity through cooperation.
When the United Nations was created, its promise was simple but powerful: “prevent conflict, defend human rights, and promote social progress through cooperation”. Over time, multilateralism took shape through international human rights law, WHO-led global health cooperation, and development cooperation for decolonization and poverty reduction. It never claimed to erase conflict completely, but it did create a space where negotiation could replace violence.
The UN Security Council, meant to maintain international peace, is now split by veto politics. In June 2025, a Gaza ceasefire resolution failed because of a veto, despite a deepening humanitarian tragedy. Permanent members have become “Judge and Jury”, blocking peace efforts on Ukraine, Syria, and the occupied Palestinian territory. The victims are civilians, not governments.
The war in Ukraine, now in its third year, has taken tens of thousands of lives and displaced millions. Its shockwaves hit food security across Africa and the Middle East. Gaza faces one of the worst humanitarian crises of this century, with massive civilian casualties and the displacement of millions. These conflicts show how far multilateral tools have weakened when power politics takes precedence over human protection.
The consequences go far beyond war zones. According to the 2025 Sustainable Development Goals (SDG) Progress Report, many targets are now stalled or going backward, especially in countries affected by conflict and climate shocks. COVID-19 widened inequalities. Climate disasters, from flooding in Southeast Asia to drought in the Horn of Africa, have wiped out livelihoods where safety nets were already fragile.
Meanwhile, states are turning inward. Sanctions and unilateral actions taken outside international frameworks often deepen suffering. US policy swings, from halting development aid to withdrawing from the Paris Climate Agreement, have damaged international cooperation and cut off support for millions of people in crisis around the world.
The erosion of multilateralism has a clear human face. It is women facing heightened violence. It is refugees losing access to basic services. It is families in climate-hit communities losing both land and hope. When global cooperation collapses, accountability weakens, conflicts last longer, and people become disposable.
And at precisely this moment, global threats are multiplying. Pandemics, climate change, cyber insecurity, and mass displacement do not respect borders. No single state can solve them alone. Walking away from multilateralism today is like walking away from lifeboats during a storm.
This is not about defending a perfect system. Multilateralism was never perfect. The real question is simple and urgent. Can humanity afford its collapse right now?
If the answer is no, and it must be, then rebuilding cooperation becomes both a moral responsibility and a practical necessity. Reforming the UN Security Council, strengthening international law, and defending human rights systems are not academic debates. They are survival strategies.
The retreat from multilateralism and rising human rights violations are not separate trends. They are interconnected crises that feed each other. A world where vetoes silence humanity is a world that drifts toward darkness.
This moment is time-sensitive. Delay comes at the cost of human lives lost, freedoms crushed, and futures erased. We are faced with a choice: renew our commitment to working together, or accept a future where power replaces principle and human dignity becomes collateral damage. If multilateralism fails, it is not institutions alone that collapse. It is the promise that every life matters.
2026-01-27 23:00:59
Washin Village AI Director Tech Notes #4
Just like human teamwork, Ensemble Learning makes multiple AI models work together, combining their judgments for more accurate results.
Core concept: Two heads are better than one.
Single model limitations:
| Model | Pros | Cons |
|---|---|---|
| Unified_v18 | High accuracy (79.5%) | Prone to errors on certain categories |
| Inc_v201 | Different training data | Lower accuracy (46%) |
However! The probability of both models making the same mistake is very low.
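A rough back-of-the-envelope check, assuming (optimistically) that the two models' errors are fully independent:

# Error rates derived from the accuracy table above.
p_unified_wrong = 1 - 0.795   # ≈ 0.205
p_inc_wrong = 1 - 0.46        # = 0.54
print(f"{p_unified_wrong * p_inc_wrong:.1%}")  # ≈ 11.1% chance both err at once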
def ensemble_vote(predictions):
    """Majority voting"""
    from collections import Counter
    votes = Counter([p['class'] for p in predictions])
    return votes.most_common(1)[0][0]

def ensemble_weighted(predictions, weights):
    """Weighted average confidence"""
    combined = {}
    for pred, weight in zip(predictions, weights):
        for cls, conf in pred.items():
            combined[cls] = combined.get(cls, 0) + conf * weight
    return max(combined, key=combined.get)

def ensemble_validate(primary, secondary, threshold=0.85):
    """Primary model predicts, secondary validates"""
    if primary['confidence'] >= threshold:
        return primary['class']
    if primary['class'] == secondary['class']:
        return primary['class']
    return "needs_human_review"
Input Image
     │
     ├─→ Primary Model (Unified_v18) ──→ Prediction + Confidence
     │
     └─→ Validation Model (Inc_v201) ──→ Prediction + Confidence
                    │
                    ↓
          Ensemble Decision Engine
                    │
                    ↓
              Final Result
| Scenario | Action |
|---|---|
| Primary confidence ≥ 90% | Use primary result directly |
| Both models agree | Use that result |
| Models disagree | Choose higher confidence |
| Both uncertain | Mark for human review |
| Configuration | Top-1 Accuracy | Human Review Rate |
|---|---|---|
| Primary only | 79.5% | 0% |
| Ensemble (voting) | 81.2% | 0% |
| Ensemble (validation) | 82.8% | 5% |
Conclusion: Ensemble + validation mode works best!
# Assumes the Ultralytics YOLO package; .confidence and .class_name here
# stand in for a thin wrapper around its raw prediction results.
from ultralytics import YOLO

class EnsembleValidator:
    def __init__(self):
        self.primary = YOLO('Unified_v18.pt')
        self.secondary = YOLO('Inc_v201.pt')

    def predict(self, image):
        # Primary model prediction
        p1 = self.primary.predict(image)

        # High confidence: return directly
        if p1.confidence >= 0.90:
            return p1.class_name, "high_confidence"

        # Secondary model validation
        p2 = self.secondary.predict(image)
        if p1.class_name == p2.class_name:
            return p1.class_name, "validated"

        # Models disagree: return the higher-confidence prediction
        if p1.confidence > p2.confidence:
            return p1.class_name, "primary_higher"
        else:
            return p2.class_name, "secondary_higher"
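Usage is then a single call (the file name is hypothetical):

validator = EnsembleValidator()
label, mode = validator.predict("exhibit_photo.jpg")
print(label, mode)  # e.g. 'scroll', 'validated'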
Washin Village 🏡 by AI Director