The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

Python Check If Number is an Integer

2026-02-24 12:53:46

Here is the final code:

def is_int(x: int | float | str | None) -> bool:
    """Return True if x represents a whole number."""
    try:
        return float(x).is_integer()
    except (TypeError, ValueError):
        return False

TypeError is raised if we call float(None).
ValueError is raised if x is a string that can't be parsed as a number.
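
A few edge cases are worth knowing about. The sketch below restates is_int from above and checks some inputs the test cases don't cover:

```python
def is_int(x):
    try:
        return float(x).is_integer()
    except (TypeError, ValueError):
        return False

# Scientific notation parses as a float:
print(is_int("1e3"))          # True (1000.0)
# Infinity and NaN are valid floats but not whole numbers:
print(is_int(float("inf")))   # False
print(is_int(float("nan")))   # False
# bool is a subclass of int, so True coerces to 1.0:
print(is_int(True))           # True
```

If booleans should be rejected in your use case, add an explicit isinstance(x, bool) guard first.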

Example usage (test cases)

a1 = 12.0  # True
a2 = "12.0"  # True
a3 = "012.0"  # True

b1 = 12  # True
b2 = "12"  # True
b3 = "012"  # True

c1 = 12.34  # False
c2 = "12.34"  # False
c3 = "012.34"  # False

d1 = None  # False
d2 = "12X100ML"  # False
d3 = "12.x"  # False

print(is_int(a1))
print(is_int(a2))
print(is_int(a3))

print(is_int(b1))
print(is_int(b2))
print(is_int(b3))

print(is_int(c1))
print(is_int(c2))
print(is_int(c3))

print(is_int(d1))
print(is_int(d2))
print(is_int(d3))

Explanation (how it started)

A simple isinstance(x, int) check isn't enough. It fails in cases like these:

  1. The number is stored as a str.
  2. The value is an integer represented as a float (e.g. x = 12.0 is technically an integer value).
  3. Edge cases where a string like "abc" or "12.x" should gracefully return False rather than crash.

This matters when working with web forms, APIs, or strictly typed database columns, such as Django's models.PositiveIntegerField.
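
Once a value passes the check, you usually want it as an actual int before it reaches a strictly typed field. A minimal helper along those lines (to_int is a hypothetical name, not from the post):

```python
def to_int(x):
    """Return x as an int if it represents a whole number, else None."""
    try:
        f = float(x)
    except (TypeError, ValueError):
        return None
    return int(f) if f.is_integer() else None

print(to_int("012"))    # 12
print(to_int("12.0"))   # 12
print(to_int("12.34"))  # None
print(to_int(None))     # None
```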

Here is my initial version, before it was simplified into the code above:

def is_int(x: int | float | str | None) -> bool:
    if isinstance(x, str):
        try:
            x = float(x)
        except (TypeError, ValueError):
            return False

    if isinstance(x, int):
        return True

    if isinstance(x, float):
        return x.is_integer()

    return False

Yes, you could use a regex instead, but that's more complex; why not use the built-ins?

For a regex version, see my other post.

Configuring Apache Exporter with Prometheus in Kubernetes (Minikube)

2026-02-24 12:53:20

Lab Objectives

By completing this lab, you will:

  • Deploy Apache in Kubernetes
  • Deploy Apache Exporter
  • Deploy Prometheus
  • Configure Prometheus to scrape Apache metrics
  • Generate load and observe real-time metrics
  • Clean up the environment

Lab Prerequisites

Ensure the following are installed:

  • Docker
  • kubectl
  • Minikube
  • Git
  • curl

Verify installation:

kubectl version --client
minikube version
docker --version

Lab 1 – Start Kubernetes Environment

1. Start Minikube

minikube start --driver=docker

Verify cluster:

kubectl get nodes

2. Create Namespace

kubectl create namespace monitoring

Set default namespace:

kubectl config set-context --current --namespace=monitoring

Verify:

kubectl get ns

Lab 2 – Deploy Apache Web Server

1. Create Apache Deployment

Create file: apache-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:2.4
        ports:
        - containerPort: 80

Apply:

kubectl apply -f apache-deployment.yaml

2. Expose Apache Service

Create file: apache-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80

Apply:

kubectl apply -f apache-service.yaml

Verify:

kubectl get pods
kubectl get svc

Lab 3 – Deploy Apache Exporter

We will use the official Apache Exporter image:

quay.io/prometheuscommunity/apache-exporter

1. Create Exporter Deployment

Create file: apache-exporter.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache-exporter
  template:
    metadata:
      labels:
        app: apache-exporter
    spec:
      containers:
      - name: apache-exporter
        image: quay.io/prometheuscommunity/apache-exporter
        args:
          - --scrape_uri=http://apache-service/server-status?auto
        ports:
        - containerPort: 9117

Apply:

kubectl apply -f apache-exporter.yaml

Note: the stock httpd:2.4 image does not enable the /server-status endpoint by default, so apache_up may report 0. To fix this, enable mod_status and a /server-status Location handler in httpd.conf (for example via a ConfigMap-mounted configuration).

2. Create Exporter Service

Create file: apache-exporter-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: apache-exporter-service
spec:
  selector:
    app: apache-exporter
  ports:
    - port: 9117
      targetPort: 9117

Apply:

kubectl apply -f apache-exporter-service.yaml

Verify:

kubectl get pods
kubectl get svc

Lab 4 – Deploy Prometheus

1. Create Prometheus ConfigMap

Create file: prometheus-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 5s

    scrape_configs:
      - job_name: 'apache-exporter'
        static_configs:
          - targets: ['apache-exporter-service:9117']

Apply:

kubectl apply -f prometheus-config.yaml

2. Create Prometheus Deployment

Create file: prometheus-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus/
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config

Apply:

kubectl apply -f prometheus-deployment.yaml

3. Expose Prometheus

kubectl expose deployment prometheus --type=NodePort --port=9090

Check service:

kubectl get svc

Lab 5 – Verify Exporter Metrics

1. Port Forward Exporter

kubectl port-forward svc/apache-exporter-service 9117:9117

Test metrics:

curl http://localhost:9117/metrics

You should see metrics such as:

apache_up
apache_workers
apache_scoreboard
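
If you want to check a metric programmatically rather than eyeballing the curl output, Prometheus's plain-text exposition format is easy to parse. A minimal sketch (the sample input below is illustrative, not real exporter output):

```python
def parse_metrics(text):
    """Parse simple 'name value' lines of the Prometheus text format.

    Skips comments and labeled series; good enough for spot checks.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(" ")
        if "{" not in name:           # skip labeled series in this sketch
            try:
                metrics[name] = float(value)
            except ValueError:
                pass
    return metrics

sample = """# HELP apache_up Could the apache server be reached
# TYPE apache_up gauge
apache_up 1
apache_workers{state="busy"} 3
"""
print(parse_metrics(sample))  # {'apache_up': 1.0}
```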

Lab 6 – Access Prometheus UI

1. Get Prometheus URL

minikube service prometheus -n monitoring

Or port-forward:

kubectl port-forward svc/prometheus 9090:9090

Open browser:

http://localhost:9090

2. Verify Target

In Prometheus UI:

Status → Targets

Ensure:

apache-exporter = UP

3. Query Metrics

Try:

apache_up

Or:

apache_workers
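
Behind the UI, these queries go through Prometheus's HTTP API (GET /api/v1/query). A small sketch that builds the query URL for the port-forwarded server and parses the standard instant-query response (the sample response below is illustrative):

```python
import json
from urllib.parse import urlencode

def query_url(promql, base="http://localhost:9090"):
    """Build an instant-query URL for the Prometheus HTTP API."""
    return f"{base}/api/v1/query?{urlencode({'query': promql})}"

def parse_instant_query(body):
    """Return (labels, value) pairs from an /api/v1/query JSON response."""
    doc = json.loads(body)
    if doc.get("status") != "success":
        raise ValueError("query failed: %s" % doc.get("error"))
    return [(r["metric"], float(r["value"][1])) for r in doc["data"]["result"]]

print(query_url("apache_up"))
# http://localhost:9090/api/v1/query?query=apache_up

sample = ('{"status":"success","data":{"resultType":"vector","result":'
          '[{"metric":{"__name__":"apache_up"},"value":[1700000000,"1"]}]}}')
print(parse_instant_query(sample))  # [({'__name__': 'apache_up'}, 1.0)]
```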

Lab 7 – Generate Load (Apache Benchmark)

Use Apache Benchmark (ab) to simulate traffic.

If ab is installed locally:

ab -n 1000 -c 50 $(minikube service apache-service -n monitoring --url)/

(minikube service only exposes NodePort/LoadBalancer services; with the ClusterIP service above, use the port-forward option instead.)

Or port-forward Apache:

kubectl port-forward svc/apache-service 8080:80

Then:

ab -n 1000 -c 50 http://localhost:8080/

Observe Metrics in Prometheus

Watch:

apache_workers
apache_scoreboard
apache_cpu_load

You should see real-time metric changes.

Lab 8 – Troubleshooting

Check pod logs:

kubectl logs deployment/apache-exporter
kubectl logs deployment/prometheus

Verify endpoints:

kubectl get endpoints

Check configuration:

kubectl describe configmap prometheus-config

Lab 9 – Cleanup Environment

Delete namespace:

kubectl delete namespace monitoring

Stop Minikube:

minikube stop

Optional full cleanup:

minikube delete

Lab Summary

You have successfully:

  • Deployed Apache in Kubernetes
  • Deployed Apache Exporter
  • Configured Prometheus
  • Verified metric scraping
  • Generated load and observed real-time metrics
  • Cleaned up the environment

Hello, World

2026-02-24 12:49:29

Introduction

Welcome to my personal blog! Here, I will document my technical explorations, open source projects, and reflections in the age of AI.

As a technology practitioner, I believe AI is profoundly transforming the way we build software. In this blog, I will share:

  • Technical Practice: Practical experience with AI infrastructure, inference optimization, Agent development, and more
  • Open Source Projects: Design rationale and technical details of open source projects I've participated in and created
  • Industry Insights: Personal observations and analysis of AI industry trends and technological evolution

Why Write a Blog?

In an era of information overload, systematically organizing and expressing one's thoughts has become increasingly important. Writing is not merely output, but a form of deep learning.

"If you can't explain something simply, you haven't truly understood it." — Feynman

I hope the content here inspires you. Feel free to connect with me on GitHub!

Originally published at https://guanjiawei.ai/en/blog/hello-world

Jestr (2014): The Architecture of a Social App and the Power of PostgreSQL Views

2026-02-24 12:48:58

When building a modern social media application, engineers are immediately confronted with two massive hurdles: data security and performance. Social networks are inherently complex webs of relational data—users create posts, posts have comments, comments belong to users, and posts receive reactions. Fetching this data usually results in the dreaded "N+1 query problem" or highly convoluted Object-Relational Mapping (ORM) code that is slow, heavy, and prone to exposing sensitive data.

Back in 2014, during the development of the Jestr ecosystem, the engineering team tackled this problem head-on. The project was split cleanly into a native iOS frontend (jestr-app) and its backend engine (jestr-api). After consulting with an engineer at Facebook, the Jestr team implemented a brilliant, highly optimized architecture relying heavily on PostgreSQL Views and native JSON serialization.

The fundamental philosophy? No accidental data leaves the DB, and results are highly optimized for mobile client consumption.

Let's dive into the anatomy of the Jestr app and how its backend achieved an impenetrable, high-performance architecture well ahead of its time.

The Jestr iOS App: The Need for Speed on Mobile

To understand why the backend was engineered the way it was, we have to look at the frontend. In 2014, the mobile landscape was defined by devices like the iPhone 5s and the newly released iPhone 6. While powerful for their time, mobile processors and cellular networks (3G/LTE transitions) were easily overwhelmed by poorly optimized web APIs.

The Jestr iOS application was a fully-fledged visual social network. A user's timeline was dense, consisting of:

  • High-resolution image posts
  • Author profiles and avatars
  • Nested comment threads
  • Reaction counts (referred to natively as "Coolerz")
  • Associated hashtags

Achieving a butter-smooth 60 frames-per-second scrolling experience on a 2014 iPhone meant the client app needed to do as little "thinking" as possible. If the iOS app had to make multiple network requests to fetch a post, then fetch its author, then fetch the top comments, the UI would stutter, and the device's battery would drain rapidly. The iOS client needed a single, perfectly formatted data payload delivered instantly.

The Problem with Traditional API Architectures

In a standard web API from that era, data flow usually looked like this:

  1. The iOS client requests a feed of posts.
  2. The API server queries the Post table.
  3. For each post, the API queries the User table for the author, the Comment table for recent comments, and the Like table for reactions.
  4. The API (often written in Node.js, Python, or Ruby) loops through these rows, serializes them into JSON objects in memory, and sends them to the client.

This approach had two fatal flaws. First, doing JSON serialization at the API application layer consumed immense amounts of CPU and memory on the server, creating massive bottlenecks during traffic spikes.

Second, it created a massive security risk. If a backend developer added a sensitive field (like email_address, password_hash, or location_coordinates) to the User table, a lazy SELECT * or an improperly configured ORM serializer might accidentally send that sensitive data straight to the public iOS client.

The Facebook-Consulted Solution: Postgres Views and Native JSON

Through consultation with a Facebook employee familiar with the grueling demands of massive-scale social feeds, the Jestr team pivoted. Instead of letting the Node API handle data aggregation and serialization, they pushed that responsibility entirely down to the database layer.

PostgreSQL had just started rolling out robust native JSON support around 2012-2014. By utilizing SQL Views (like user_public_view, comment_public_view, and cool_public_view) alongside PostgreSQL's row_to_json and array_to_json functions, Jestr essentially turned the database into a secure, pre-formatted JSON engine built specifically for the iOS app.

1. Impregnable Data Security

By exclusively using "Public Views," the database acts as a strict whitelist. Let's look at how Jestr securely fetched user data:

SELECT u.id,
    u.username,
    u.profile,
    u.location,
    u.private,
    ( SELECT row_to_json(user_stats.*) AS row_to_json
        FROM ( SELECT usv.*
                FROM user_stat_view usv
                WHERE usv.id = u.id) user_stats) AS "stats"
FROM "user" u
WHERE u.deleted = false;

Because the API strictly queries these public views rather than the raw base tables, no accidental data can ever leave the database. Even if a backend developer makes a mistake in the API layer, they physically do not have access to the underlying sensitive columns. The view acts as an airtight contract between the raw relational data and the iOS frontend.

2. The Mega-Query: Serializing Complex Relationships

To generate a complete, rich "Post" object for the iOS feed, the database handled the assembly of the author, a preview of comments, the "coolerz", and hashtags all in one swoop:

SELECT p.id,
    p.user_id,
    -- 1. Fetching the Author Safely
    ( SELECT row_to_json(user_profile.*) AS row_to_json
        FROM ( SELECT upuv.id, upuv.username, upuv.profile_picture
               FROM user_public_view upuv
               WHERE upuv.id = p.user_id) user_profile) AS author,
    p.images,

    -- 2. Fetching and Counting Comments
    ( SELECT row_to_json(comments.*) AS row_to_json
        FROM ( SELECT ( SELECT count(*) AS count
                        FROM comment_public_view cpuv
                        WHERE cpuv.post_id = p.id) AS count,
                 array_to_json(array_agg(row_to_json(comments_1.*))) AS data
               FROM ( SELECT cpuv.id, cpuv.created_time, cpuv.message, cpuv."from"
                      FROM comment_public_view cpuv
                      WHERE cpuv.post_id = p.id
                      LIMIT 10) comments_1) comments) AS comments,

    -- 3. Fetching and Counting "Coolerz" (Likes)
    ( SELECT row_to_json(coolerz.*) AS row_to_json
        FROM ( SELECT ( SELECT count(*) AS count
                        FROM cool_public_view copuv
                        WHERE copuv.post_id = p.id) AS count,
                 array_to_json(array_agg(row_to_json(coolerz_1.*))) AS data
               FROM ( SELECT copuv."user"
                      FROM cool_public_view copuv
                      WHERE copuv.post_id = p.id
                      LIMIT 10) coolerz_1) coolerz) AS coolerz,

    -- 4. Aggregating Hashtags
    ( SELECT array_to_json(array_agg(row_to_json("row".*))) AS array_to_json
        FROM ( SELECT h.hash
               FROM hash_public_view h
               WHERE h.post_id = p.id) "row") AS hashtags,
    p.date AS created_time
FROM post p
WHERE p.deleted = false
ORDER BY p.date DESC;

Notice the sheer elegance here:

  • Rich Data, One Trip: The API makes exactly one query to the database. The iOS app gets a single JSON response containing everything it needs to render a fully interactive post.
  • Smart Previews: Subqueries limit comments and coolerz to 10. This gives the mobile UI exactly what it needs for a feed preview without over-fetching data and eating up the user's cellular data plan.
  • Pre-Baked JSON: By the time the data reaches the jestr-api server, it is already perfectly formatted JSON. The backend doesn't need to loop over objects; it just passes the string directly to the iOS client.

3. Caching for High-Speed Mobile Feeds

This architectural choice provided a massive performance benefit when combined with caching. Because the output of these complex queries was a pure JSON string, the backend could easily cache the exact string output of post_public_view in a memory store like Redis.
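
That cache-the-exact-string pattern is easy to sketch. The snippet below uses a plain dict in place of Redis and json.dumps in place of the database-side row_to_json; all names are illustrative, not from the Jestr codebase:

```python
import json

cache = {}  # stand-in for Redis: key -> ready-to-serve JSON string

def fetch_feed_json(run_query, key="post_public_view:feed"):
    """Serve the feed as a JSON string, caching the exact string output."""
    if key in cache:
        return cache[key]          # cache hit: zero serialization work
    rows = run_query()             # in Jestr, the DB itself emitted JSON
    payload = json.dumps(rows)     # stand-in for row_to_json/array_to_json
    cache[key] = payload
    return payload

calls = []
def fake_query():
    calls.append(1)
    return [{"id": 1, "coolerz": {"count": 0, "data": []}}]

print(fetch_feed_json(fake_query) == fetch_feed_json(fake_query))  # True
print(len(calls))  # 1  (the second call was served from the cache)
```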

Furthermore, querying these unified data sets simplified the logic required to build the actual timeline the user sees when they open the app:

-- Feed Generation via UNION
(SELECT ppv.id, ppv.user_id, ppv.author, ppv.images, ppv.comments, ppv.coolerz, ppv.hashtags, ppv.created_time 
 FROM post_public_view ppv)
UNION
(SELECT ppv.id, ppv.to_id AS "user_id", ppv.author, ppv.images, ppv.comments, ppv.coolerz, ppv.hashtags, ppv.created_time 
 FROM post_public_view ppv)

By leveraging UNION on the pre-constructed public views, Jestr effortlessly combined disparate data streams into a single timeline. The iOS app asks for the feed, the API pulls the pre-compiled, pre-formatted JSON string from the cache, and the timeline loads in milliseconds.

Conclusion

The architecture behind jestr-api and jestr-app proves that sometimes, the best place to handle data logic is right where the data lives. By relying on PostgreSQL's robust view system and native JSON capabilities, Jestr's engineering team achieved an ideal trifecta back in 2014:

  1. Absolute Security: Guaranteed by strict public views that prevented accidental data leaks to the mobile client.
  2. Mobile Optimization: Delivering fully formed, heavily nested JSON objects in a single network request to keep the iOS app lightweight and fast.
  3. Simplicity: The API layer became a thin, fast proxy that simply passed cached JSON strings straight to the phone.

It was a masterclass in treating the database as an active participant in the architecture rather than a dumb storage box—a paradigm shift heavily influenced by the hyper-scaled realities of giants like Facebook, and an engineering design that remains remarkably relevant today.

Agentic SEO: What It Actually Is and How I Use It (2026 Guide)

2026-02-24 12:48:15

Everyone's talking about agentic SEO. Nobody's showing what it actually looks like when an AI agent does your SEO work autonomously.

I built one. Connected it to Google Search Console, pointed it at my blog, and let it do its thing. 68,000 impressions in 9 days. From basically zero. Here's what agentic SEO actually means — not the enterprise whitepaper version, the version where you sit in a chat and watch an agent pull your data, find your problems, and fix them.

What Agentic SEO Actually Means

The term is everywhere right now. Search Engine Land wrote a guide. WordLift sells an "AI SEO Agent." Frase calls itself an "agentic SEO platform." Siteimprove has a whitepaper about it.

Most of them describe the same thing: AI that does SEO tasks without you manually prompting every step.

Here's the simpler version.

Traditional SEO workflow: Open GSC. Export CSV. Open spreadsheet. Sort by impressions. Copy data into ChatGPT. Ask for advice. Get "improve your meta descriptions." Repeat for an hour. Five tabs. Three vague action items.

Agentic SEO workflow: Tell the agent "analyze my site." It pulls your GSC data via API. Crawls your pages. Cross-references keywords against existing content. Finds gaps. Tells you exactly what to fix. Writes the content if you ask.

The difference isn't AI. The difference is autonomy. The agent decides which tools to use, what data to pull, and what actions to take — based on your specific site, not a generic playbook.

Why Most "Agentic SEO" Tools Aren't Agentic

Here's where I get opinionated.

Most tools calling themselves "agentic" are just AI wrappers with a fancier UI. You still type a prompt. It still gives you a response. There's no tool loop. No autonomous decision-making. No persistent memory between sessions.

A real agentic SEO system needs three things:

1. Direct data access.

Not "paste your CSV." Direct API connection to Google Search Console. Live queries. 90 days of data. The agent pulls what it needs without you being the middleman.

2. Site awareness.

The agent crawls your actual pages. It knows your titles, your H1s, your internal links, your content gaps. It doesn't guess what your site looks like — it reads every page.

3. An agentic loop.

This is the part most tools skip. A real agent doesn't make one API call and respond. It plans. Executes. Evaluates. Executes again. Up to 5 rounds of tool calls per message. It might pull GSC data, realize it needs to check a specific page, crawl that page, compare it to a competitor keyword, then come back with a recommendation. That's a loop. That's agentic.

Without all three, you're using ChatGPT with extra steps.

What an Agentic SEO Workflow Actually Looks Like

Here's a real example. A user connected their real estate website — 69 pages, covering multiple geographic areas and blog topics.

They asked one question: "Analyze my GSC data and identify content gaps."

The agent:

  1. Pulled 50 keywords sorted by impressions from GSC
  2. Listed all 69 site URLs from the sitemap
  3. Checked keyword placement across titles and H1 tags
  4. Identified keyword clusters with zero dedicated pages
  5. Found 430+ impressions on "nocatee communities" keywords with no page targeting them
  6. Found 290+ impressions on "real estate agent Jacksonville" keywords going to an unoptimized page
  7. Discovered an entire buyer persona (55+ communities) with zero content coverage
  8. Delivered a prioritized action plan with specific URLs to create or optimize

Total time: about 2 minutes. No CSV exports. No spreadsheets. No copy-pasting. One chat message in, full analysis out.

That's agentic SEO. Not a dashboard. Not a report. An agent that investigates your data and comes back with a diagnosis.

graph TD
    U[User: Analyze my content gaps] --> A[Agent Plans Approach]
    A --> GSC[Pull GSC Data via API]
    GSC --> SITE[Crawl All Site URLs]
    SITE --> KW[Cross-Reference Keywords vs Pages]
    KW --> GAP[Identify Unserved Clusters]
    GAP --> PRI[Prioritize by Impression Volume]
    PRI --> OUT[Deliver Action Plan]
    OUT --> WRITE[Optional: Write the Content]

How I Built Mine

I started with duct tape. Claude Code connected to my Supabase CMS and Google Search Console via knowledge files and manual OAuth tokens. It worked — that's how I got the 68k impressions — but it was fragile. Tokens expired silently. Context vanished between sessions. Nobody else could use it.

So I built it into a real product.

Google Search Console integration. OAuth with automatic token refresh. Live API queries pulling 90 days of keyword and page data. No manual exports.

Site crawler. Reads every page on your site automatically. Maps internal links. Identifies topic clusters. Works with any platform — WordPress, Astro, Next.js, static HTML.

Persistent memory. After each conversation, the agent extracts key findings. Next session, it remembers what it found last time. SEO is longitudinal. Your agent should be too.

Writing style system. This took the longest. The agent reads your existing content and generates 6 style files — tone, structure, sentence patterns, vocabulary, examples, and a banned words list that kills 50+ AI slop phrases. When it writes an article, it sounds like you. Not like ChatGPT.

20+ models. GPT, Claude, DeepSeek, Gemini, Llama, Mistral — switch mid-conversation. Use Claude for writing, GPT for analysis, whatever fits.

BYOK. Bring your own API key on every tier. No middleman markup on tokens.

Stack: Next.js, TypeScript, custom SSE streaming. No Vercel AI SDK — I built my own provider adapters for full control over the agentic loop.

// The core of the agentic loop — simplified
async function agentLoop(message: string, tools: Tool[]) {
  let rounds = 0;
  let response = await llm.chat(message, { tools });

  while (response.hasToolCalls && rounds < 5) {
    const results = await executeTools(response.toolCalls);
    response = await llm.chat(results, { tools });
    rounds++;
  }

  return response.content;
}

The agent isn't a single prompt-response cycle. It's a loop that keeps going until it has enough information to give you a real answer. That's the technical difference between "AI-assisted" and "agentic."

The Reality Check

Agentic SEO isn't magic. Here's what it doesn't do.

It doesn't guarantee rankings. No tool does. It finds opportunities and executes faster than you can manually. What you do with the output still matters.

It doesn't replace strategy. The agent is a fast analyst and writer. It's not a strategist. You still decide which direction to go. The agent tells you what's there — you decide what to prioritize.

It struggles with low-data sites. If your site has 16 total impressions and everything is at position 80+, there's not enough signal for the agent to do meaningful analysis. It needs data to work with. New sites need to build a baseline first.

It writes well but not perfectly. The writing style system is good. It's not you. Every article needs a review pass. The agent gets you 80% there in 2 minutes instead of starting from zero.

Agentic SEO vs the SEO Tool Stack

Here's what you're probably using right now and what changes.

What              Before                                    Agentic SEO
GSC Analysis      Export CSV, sort in sheets                Agent pulls live data, finds patterns
Content Gaps      Compare keywords manually                 Agent cross-references all pages automatically
Content Briefs    Write from scratch                        Agent generates from your GSC data + site context
Article Writing   Prompt ChatGPT with no context            Agent writes in your voice with your keywords
Internal Linking  Eyeball it                                Agent maps all pages and suggests links
Monthly Cost      Semrush $130 + Surfer $89 + ChatGPT $20   $29/mo + your API key

I'm not saying you should cancel Semrush. Semrush does things my agent doesn't — backlink analysis, domain authority tracking, competitive intelligence at scale. Different tools for different jobs.

But for the daily workflow of "what should I write, how should I optimize, what's working and what isn't" — the agent replaces 5 tabs and 30 minutes with one chat and 2 minutes.

The Verdict

Agentic SEO is real. Most tools claiming the label aren't.

The actual test is simple: does the tool connect to your data, make autonomous decisions about what to analyze, and come back with specific actions — without you manually feeding it every piece of context? If yes, it's agentic. If you're still copy-pasting, it's just AI with marketing.

I built mine because I got tired of being the copy-paste middleman between my own data and the AI that was supposed to help me. The 68k impressions in 9 days weren't because of magic — they were because the agent found problems I'd been sitting on for weeks and I spent a day fixing them.

The hosted version (free tier available): myagenticseo.com

Connect your Search Console, talk to the agent, see what it finds. No credit card. No setup. Just your data and an agent that actually looks at it.

Hacking something impactful

2026-02-24 12:43:43

I am building something in healthcare. And yes, I know how that sounds.

It is a heavily regulated domain, the stakes are unusually high, and building anything serious here is not a weekend project. But that is exactly why I chose it. I have been doing hackathons almost every month for the past year, and the pattern I keep coming back to is this: the ideas that are the most fun and fulfilling to build are the ones with the biggest potential impact, even when they do not win.

So when Google launched the Gemini Live Agent Challenge on Devpost last week, I knew I didn't want to build another cool but ultimately toy project. I wanted to build something I could actually stand behind.

For me, healthcare is that domain. The potential to meaningfully change lives is greater here than almost anywhere else. And if I can solve the right problems and build a good solution, this could genuinely matter to people. That is the kind of project I am willing to burn the ships for. I know it's a risky bet, especially in the context of a hackathon. The prior Gemini 3 hackathon even discouraged it outright. But I think if the idea is strong enough, it speaks for itself.

So what is the idea? What am I planning to build and how?

I will be posting here every day, documenting the process of going from exploration to raw ideation to something production-grade. Today is day one. The idea is locked in. Tomorrow, we get into what it actually is. Stay tuned for more.