
MetaCoreX Earns a 93 Proof of Usefulness Score by Building a Sovereign Decentralized Operating System

2026-03-10 05:52:19

MetaCoreX is a decentralized operating system designed to power a merit-based digital ecosystem where AI agents, humans, and smart contracts interact as equal participants. Its key innovation is ARZY-G, a token minted only when Chainlink oracles verify real usefulness, aiming to replace hype-driven tokenomics with contribution-based value creation. Built on Ethereum-compatible infrastructure with modular smart contracts, the project targets Web3 developers, AI researchers, and digital-first workers building decentralized applications and economies.

I Replaced MyFitnessPal and Other Apps With a Single MCP Server: Here's How I Did It

2026-03-10 05:48:19

My doctor asked me to keep a food diary. Simple enough, right?

I downloaded MyFitnessPal. Then Lifesum. Then a couple more. They all did the job of logging meals, but when I needed to actually use the data — export a summary for a specific period to show my doctor — things fell apart. MyFitnessPal has some export features on desktop, but it's clunky. Lifesum doesn't really let you export at all. None of them handled what should be a straightforward task: log what I eat, then share a summary with my doctor.

My next thought was to build something on top of these apps. Maybe connect to their APIs and pull my data out. But they all have closed APIs — so that wasn't an option either.

The Idea

I was talking to Claude about this problem, and something clicked. Current LLMs are capable enough to analyze nutrition from a photo of food, look up a barcode, or estimate macros from a brief description and portion size. I don't need a dedicated nutrition database with millions of entries and a complex UI. I just need a place to store the data and a smart interface to put it there.

That interface already exists. It's the conversation.

What I Built

I built a remote MCP (Model Context Protocol) server that connects to any MCP-compatible AI assistant: Claude, ChatGPT, Mistral, or any other client that supports MCP. It's a simple backend: a database for meals, authentication so each user's data is private, and a set of tools the AI can call (log a meal, get today's meals, get a nutrition summary for a date range, update or delete entries).

The workflow is dead simple: I take a photo of my food, send it to Claude, and say, "log this." Claude analyzes the photo, estimates the calories and macros, and stores everything in the database. If I eat a packaged product, I can snap the barcode or the nutrition label — same result. No manual entry, no scrolling through food databases, no guessing serving sizes from dropdown menus.

And when I need to show my doctor? I can ask Claude to generate a summary for any date range as a table, as an Excel file, or just as a conversation. I could literally hand my phone to my doctor and let them ask whatever they want about my nutrition.

One Evening of Vibe Coding

The whole thing took me one evening and one morning to build. I used Claude Code to vibe code the entire server: the MCP tools, OAuth authentication, Supabase database integration, everything. The tech stack is Bun, Hono, and Supabase, deployed as a Docker container.

I didn't write the code line by line. I described what I wanted, iterated with Claude Code, and had a fully functional, deployed MCP server with user registration and persistent data storage in about 12 hours of actual work.

Using it Daily

I've been using it every day since. The experience on my phone is exactly what I wanted: open Claude, describe or photograph my meal, done. No app to switch to, no UI to navigate. Just a message in a conversation I'm probably already having.

The best part is how it complements other tools. I also built a Withings MCP server that provides data about my physical activity: steps, weight, and sleep. Because both are MCP servers connected to the same assistant, I can ask things like, "How did my calorie intake compare to my activity level this week?" and get an answer that pulls from both sources. No integration work needed — the AI just uses both tools.

Why MCP Matters Here

MCP turns AI assistants into a universal interface. I built a thin backend and let the AI handle the UI, the export, and the integration layer. No dedicated nutrition app, no custom export features, no glue code to connect systems that don't talk to each other.

This pattern works for anything where you need to store personal data and interact with it flexibly. Nutrition tracking just happened to be my use case, but the idea applies broadly: give the AI tools to read and write your data, and the interface takes care of itself.

Under the Hood

The architecture is intentionally simple: four layers, no frontend framework, no mobile app.

HTTP Layer

A Hono server running on Bun. It handles routing, CORS, security headers, and body size limits. The MCP endpoint lives at POST /mcp, protected by Bearer token middleware.
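As an illustration of what that middleware layer does (a dependency-free sketch with invented names, not the project's actual Hono code), resolving a Bearer token to a user ID looks roughly like this:

```typescript
// Hypothetical sketch: map a raw Authorization header to a user ID.
// `TokenStore` stands in for the database table of issued tokens.
type TokenStore = Map<string, string>; // access token -> user ID

function lookupUserId(store: TokenStore, authHeader: string | undefined): string | null {
  // Reject anything that is not a well-formed Bearer header.
  if (!authHeader || !authHeader.startsWith("Bearer ")) return null;
  const token = authHeader.slice("Bearer ".length).trim();
  // Unknown tokens resolve to null, which the server turns into a 401.
  return store.get(token) ?? null;
}

const store: TokenStore = new Map([["abc123", "user-42"]]);
console.log(lookupUserId(store, "Bearer abc123")); // "user-42"
console.log(lookupUserId(store, "Basic abc123")); // null
```

In the real server this check runs as middleware in front of the MCP endpoint, so tool handlers never see an unauthenticated request.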

OAuth 2.0

The server implements the full authorization code flow with PKCE — the same flow Claude.ai and other MCP clients expect. When you connect for the first time, it opens a login page in the browser, you enter an email and password, and the server exchanges that for a long-lived access token. Sessions are in-memory with a 10-minute TTL; tokens persist for 365 days in the database.
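The PKCE part of that flow can be sketched in a few lines (an illustration of the standard RFC 7636 mechanics, not code from this project): the client invents a random code_verifier, sends its SHA-256 hash as the code_challenge, and later proves possession by revealing the verifier.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Client side: a high-entropy random verifier, base64url-encoded.
function generateVerifier(): string {
  return randomBytes(32).toString("base64url");
}

// Sent with the authorization request: S256 challenge = base64url(sha256(verifier)).
function challengeFromVerifier(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}

// Server side, at token exchange: re-hash the presented verifier and
// compare it to the challenge stored alongside the authorization code.
function verifierMatches(verifier: string, storedChallenge: string): boolean {
  return challengeFromVerifier(verifier) === storedChallenge;
}
```

Because only the hash travels with the authorization request, an attacker who intercepts the code still cannot redeem it without the original verifier.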

MCP Server

Each client session gets its own MCP server instance using the StreamableHTTP transport. Tools are registered per user — every query is scoped to the authenticated user's ID, so there's no way to access someone else's data. The server exposes seven tools:

  • log_meal — log a meal with description, calories, macros, and notes
  • get_meals_today — get all meals logged today
  • get_meals_by_date — get meals for a specific date
  • get_nutrition_summary — daily nutrition totals for a date range
  • update_meal — update any fields of an existing meal
  • delete_meal — delete a meal by ID
  • delete_account — permanently delete your account and all data
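The per-user scoping can be pictured with a small dependency-free sketch (names like makeTools and the in-memory array are invented here; the real server queries Supabase): each session's tool handlers are closed over the authenticated user's ID, so they cannot reach another user's rows.

```typescript
// Hypothetical in-memory stand-in for the meals table.
interface Meal { userId: string; description: string; calories: number; date: string; }
const db: Meal[] = [];

// Tools are built per authenticated session; userId is baked into every query.
function makeTools(userId: string) {
  return {
    log_meal(description: string, calories: number, date: string): void {
      db.push({ userId, description, calories, date });
    },
    get_meals_by_date(date: string): Meal[] {
      // Every read is filtered by the session's own userId.
      return db.filter((m) => m.userId === userId && m.date === date);
    },
  };
}

const alice = makeTools("alice");
const bob = makeTools("bob");
alice.log_meal("oatmeal", 350, "2026-03-10");
bob.log_meal("pizza", 800, "2026-03-10");
console.log(alice.get_meals_by_date("2026-03-10").length); // 1
```

The design choice is that isolation lives in the closure, not in each query's call site, so a forgotten filter can't leak data.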

Database

Supabase handles both authentication (email/password sign-up and sign-in) and data storage (PostgreSQL). Four tables: meals for nutrition entries, oauth_tokens and refresh_tokens for session management, and auth_codes for the OAuth flow. Row Level Security is enabled on all tables.
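On Postgres/Supabase, that row-level isolation can be expressed declaratively. The policy below is a hypothetical example (the project's actual migrations aren't shown in the article); auth.uid() is Supabase's helper that returns the authenticated user's ID:

```sql
-- Hypothetical RLS policy: every user sees and edits only their own meals.
alter table meals enable row level security;

create policy "meals_owner_only" on meals
  for all
  using (user_id = auth.uid())        -- rows visible to select/update/delete
  with check (user_id = auth.uid());  -- rows allowed on insert/update
```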

The whole request flow: your MCP client sends a tool call, the Bearer token is validated, the user ID is resolved, the tool handler queries Supabase scoped to that user, and the response goes back through the MCP protocol. No frontend, no state management, no React — just a database and a conversation.

Try It

The server is hosted and ready to use at nutrition-mcp.com. Setup takes under a minute:

Claude Desktop App or Claude.ai:

  1. Go to Customize → Connectors → +
  2. Choose Add custom connector
  3. Set the Remote MCP Server URL to https://nutrition-mcp.com/mcp
  4. Click Connect

Once connected on the desktop, it automatically syncs to Claude on iOS and Android.

Other MCP Clients:

Add this to your client's MCP config (the client must support OAuth 2.0 with PKCE):

{
    "mcpServers": {
        "nutrition": {
            "url": "https://nutrition-mcp.com/mcp"
        }
    }
}

On first connect, a browser window opens where you enter an email and password to create an account. That's it — your meals are linked to this account, and you sign in with the same credentials if you reconnect later.

The source code is open: github.com/akutishevsky/nutrition-mcp

Why Modern BI Architectures Need More Than Just Star Schemas

2026-03-10 04:27:56

For decades, the star schema has been the foundation of business intelligence.

Every data warehouse architecture follows the same pattern:

  • Fact tables store measurable events
  • Dimension tables describe those events
  • Analytical queries join them together

The design is elegant and flexible. It allows analysts to slice business metrics across dimensions such as product, store, time, and geography.

For many BI workloads, this architecture still works extremely well.

But modern analytics environments are beginning to expose a challenge that dimensional modeling was never originally designed for: massive scale combined with continuous query workloads.


The Original Promise of Dimensional Modeling

Traditional BI architectures rely on dimensional modeling inside a data warehouse.

Traditional Star Schema Architecture

In this design:

  • the fact table stores business events
  • dimension tables provide descriptive attributes
  • BI tools query the model using joins

Example query:

select
  ds.store_id,
  dp.category,
  sum(fs.sales_amount)
from fact_sales as fs
left join dim_product as dp
  on fs.item_id = dp.item_id
left join dim_store as ds
  on fs.store_id = ds.store_id
group by all

This model works well because analysts can explore data across multiple dimensions without needing specialized tables for each question.


When Star Schemas Perform Extremely Well

Star schemas perform especially well when analytical models are loaded into memory.

Many BI tools support this through extract or import models.

In-Memory BI Architecture

When data is loaded into memory:

  • joins execute extremely quickly
  • columnar compression reduces storage footprint
  • analytical engines optimize query execution

For datasets that fit comfortably in memory, dimensional models remain extremely effective.

This is one reason star schemas became the dominant modeling pattern in BI.

However, not all datasets can be loaded into memory anymore.


The Scale Problem Modern BI Faces

Modern enterprise analytics platforms often operate on datasets containing hundreds of millions or billions of rows.

Consider a transactional dataset with:

500 million sales transactions

Now, imagine dashboards refreshing continuously across hundreds of users.

Each interaction triggers queries that repeatedly perform joins against large fact tables.

Query Pressure on Large Fact Tables

The issue is not dimensional modeling itself.

The challenge arises when three factors increase simultaneously:

  1. Dataset scale
  2. Query frequency
  3. Latency expectations

When these factors combine, join-heavy queries become expensive.


The Workaround Most BI Teams Discover

As datasets grow, many BI teams introduce a second layer of data structures.

Instead of querying the star schema directly, dashboards query purpose-built analytical tables.

Purpose-Built Analytical Tables

These tables often contain:

  • pre-aggregated metrics
  • denormalized attributes
  • simplified schemas optimized for dashboards

Example:

store_category_sales_summary

Now the dashboard query becomes:

select
  store_id,
  category,
  sum(total_sales)
from store_category_sales_summary
group by all
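The summary table itself is typically materialized from the star schema on a schedule. A sketch in the same dialect as the queries above (the table name follows the earlier example; the refresh strategy is illustrative, and teams may instead use incremental merges or materialized views):

```sql
-- Rebuild the dashboard-facing summary from the star schema.
create or replace table store_category_sales_summary as
select
  ds.store_id,
  dp.category,
  sum(fs.sales_amount) as total_sales
from fact_sales as fs
left join dim_product as dp
  on fs.item_id = dp.item_id
left join dim_store as ds
  on fs.store_id = ds.store_id
group by all
```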

The benefits are clear:

  • fewer joins
  • smaller scans
  • faster queries

Many organizations implement this pattern through:

  • reporting marts
  • aggregated tables
  • semantic layers
  • precomputed datasets

But these structures are usually treated as engineering optimizations rather than a formal modeling strategy.


A Simple Framework for BI Modeling Decisions

Modern analytics workloads require a structured way to decide when dimensional models are sufficient and when additional structures are needed.

One way to think about this is through Modeling Pressure.

Modeling pressure is influenced by four variables:

  1. Dataset scale – number of rows in the fact table
  2. Query complexity – number of joins and aggregations
  3. Query frequency – how often dashboards execute queries
  4. Latency requirement – how quickly results must be returned

We can think of the modeling pressure conceptually as:

Modeling Pressure = Dataset Scale × Query Complexity × Query Frequency × Latency Requirement

When modeling pressure is low, dimensional models perform well.

When modeling pressure becomes high, purpose-driven analytical tables become increasingly valuable.
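To make the heuristic concrete, here is a toy scoring sketch (the 1–5 scales and the threshold are invented for illustration; modeling pressure is a judgment call, not a standardized metric):

```typescript
// Each factor is rated 1 (low) to 5 (high) by the team.
interface Workload {
  datasetScale: number;       // rows in the fact table
  queryComplexity: number;    // joins and aggregations per query
  queryFrequency: number;     // how often dashboards fire queries
  latencyRequirement: number; // strictness of the latency SLA
}

// Pressure is the product of the four factors, as in the formula above.
function modelingPressure(w: Workload): number {
  return w.datasetScale * w.queryComplexity * w.queryFrequency * w.latencyRequirement;
}

// Invented threshold: above it, purpose-built summary tables start to pay off.
function needsSummaryTables(w: Workload, threshold = 100): boolean {
  return modelingPressure(w) > threshold;
}

// Small model, few analysts, relaxed SLA: pressure 24, star schema is fine.
console.log(needsSummaryTables({ datasetScale: 2, queryComplexity: 3, queryFrequency: 2, latencyRequirement: 2 })); // false
// Billion-row facts behind always-on dashboards: pressure 400, add a summary layer.
console.log(needsSummaryTables({ datasetScale: 5, queryComplexity: 4, queryFrequency: 5, latencyRequirement: 4 })); // true
```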


Examples Across Industries

This pattern appears across many industries.

Retail Analytics

Retail companies often track hundreds of millions of transactions.

Dashboards monitoring store performance may query:

sales by store by category by week

Repeated joins across massive transaction tables can become expensive.

Retail teams often create summary tables such as:

store_weekly_sales_summary


Digital Analytics

Clickstream platforms process billions of events.

A dashboard analyzing:

page views by device by hour

may rely on pre-aggregated tables instead of raw event logs.


IoT Analytics

Industrial sensors generate millions of readings per hour.

Operational dashboards monitoring machine performance often rely on pre-aggregated telemetry tables instead of raw sensor data.


The Emerging Hybrid BI Architecture

Instead of replacing dimensional models, modern data platforms increasingly adopt hybrid architectures.

In this architecture:

  1. Operational systems generate raw data
  2. The warehouse stores dimensional models
  3. Optimized analytical tables support high-frequency workloads
  4. Dashboards and AI tools consume optimized data

The warehouse remains the system of record.

Purpose-built analytical tables become the performance layer.


The Real Question BI Teams Should Ask

Dimensional modeling remains one of the most important innovations in business intelligence.

But modern analytics workloads are very different from those of early data warehouses.

Instead of asking:

“Should we use star schemas?”

Modern BI teams should ask:

“When should dimensional models be complemented by purpose-driven analytical structures?”

Understanding that balance may become one of the most important design decisions in modern BI architecture.

From Prompt to Operations: The Real Shift in AI Website Building

2026-03-10 04:18:38

AI website generation solved the first five minutes of website creation. The real challenge is everything that happens after.

Over the past two years, AI website builders have fundamentally changed how we think about building websites. With a simple prompt, you can generate a ready-to-launch site complete with layouts, design elements, and even written copy. Not long ago, creating a website required an entire agency team—product managers, designers, and developers working together. What was once accessible only to those who could afford agency services is now available to anyone who can write a prompt.

However, generating a page is not the same as running a real website.

A production website needs far more than an initial layout. It must support ongoing content updates, maintain SEO structure, connect to analytics tools, integrate with third-party services, maintain performance and security, and evolve as a business grows.

In other words, AI has largely solved the first five minutes of website creation. The harder problem is everything that comes after, and keeping all of it running takes a team.

Instead of treating AI as a one-shot page creator, systems like 10Web’s Agentic Website Builder are replicating the coordinated workflow of a full web team, moving from an initial brief to a production-ready website which also provides ongoing website management.


Why Real Websites Are Still Built by Teams

A production website is rarely the work of a single person. Instead, it usually involves multiple roles working together across several disciplines.

A typical workflow might include:

  • Strategy and project management, defining the site’s goals, sitemap, and user journeys.
  • Design, establishing visual systems, layout logic, and brand identity.
  • Development, implementing responsive pages, and technical architecture.
  • Content creation, producing copy, blog posts, and product descriptions.
  • SEO specialists, defining search architecture, schema, and metadata.
  • Infrastructure and DevOps, managing hosting, security, monitoring, and performance.
  • Testing, ensuring cross-device compatibility and accessibility.
  • Analytics, setting up tracking systems, and conversion funnels.

Each role contributes a piece of the final system.

The key insight is that websites are not static. They are living systems that evolve. Content changes. Features expand. Integrations grow. Performance must be monitored continuously.

Because of this complexity, building a production website often takes weeks or months and involves multiple specialists. Maintenance then becomes an ongoing operational cost.

The Prototype Trap of First-Generation AI Builders

Most first-generation AI website builders make starting easy, but turning a generated draft into a real, production website is where the friction begins.

Editing generated pages can easily break layout logic. Integrations are often missing or fragile. SEO structures may be incomplete. Content systems can be limited. Scaling the site beyond its initial structure becomes difficult, creating what might be called the prototype trap.

The site looks complete but is not operational. Instead of a production system, users often receive a static snapshot—something that resembles a finished website but is difficult to maintain, extend, or manage over time.

The Shift Toward Agentic Website Building

Agentic website building changes the unit of value from a generated draft to a managed, production-ready website. The difference is simple: generation produces something that looks done, while agentic systems carry the work until it is done, and keep improving it after launch.

The agentic model treats website generation as a lifecycle with continuity. It doesn’t just output pages. It keeps context, moves the project forward, and handles the kind of follow-through that usually happens only after a series of handoffs.

Here’s what that lifecycle looks like in practice:


  • From goals to structure. Business intent/prompt becomes an actual plan: sitemap, page hierarchy, and the content that needs to exist.
  • From structure to a real build. Pages and content are assembled into a coherent site on CMS rails, so the result is editable, extensible, and built for real use, not a static draft.
  • From build to readiness. The system moves the site toward launch conditions: performance, SEO foundations, and functional completeness.
  • From launch to iteration. The website doesn’t freeze after generation. It continues evolving through updates, content changes, monitoring, and ongoing refinement as needs shift.

The key shift is continuity. A production website isn’t something you generate once; it’s something you build, ship, and operate. Agentic website building is what happens when AI is responsible for the execution layer that gets the site across the finish line.

Why Infrastructure and Ecosystems Matter More Than Generated Code

Many AI website builders focus primarily on generating front-end code, often using frameworks like React. For quick prototypes, this approach works well. But a production website depends on far more than code generation.

Real websites require operational infrastructure to function and evolve over time. This includes systems such as:


  • Content management with revision history
  • Role-based permissions
  • Publishing workflows
  • Structured SEO architecture
  • Analytics integrations
  • Ecommerce support
  • Performance optimization tools

These are not just product features. They are the operational layers that make websites usable, maintainable, and scalable.

Rebuilding this infrastructure from scratch is an enormous challenge. A modern website platform must support integrations with marketing tools, payment systems, localization frameworks, compliance layers, and developer ecosystems. These systems are not built overnight — they accumulate over years through developer communities and real-world usage.

This is where open ecosystems change the equation.

Large CMS ecosystems provide thousands of integrations with payment gateways, shipping systems, analytics tools, automation platforms, and marketing services. They also support localization across global markets, where businesses often depend on region-specific infrastructure such as local payment methods, tax systems, shipping providers, and regulatory requirements.

WordPress is a clear example of this scale. It powers roughly 43% of all websites globally, with more than 65,000 plugins and WooCommerce supporting a large share of global ecommerce.

Instead of rebuilding this infrastructure layer, 10Web built its agentic website architecture on top of WordPress, leveraging its mature CMS capabilities and global plugin ecosystem while adding orchestration and AI automation.

The result is a different approach to AI website building. Rather than generating isolated front-end code, the system operates within an existing infrastructure layer and focuses on automating the workflow that turns a generated site into a production-ready, continuously evolving website.

Distribution: Where Website Creation Actually Happens

Small businesses rarely start by shopping for a standalone builder. They build websites inside the platforms that already run their business: where they buy hosting, manage domains, operate SaaS tools, or work with a provider.

That changes the winning model. The systems that scale are the ones that take a prompt to a production website to a managed lifecycle, delivered inside the channels that already reach SMBs at volume.

In practice, that means packaging website creation as infrastructure that can be embedded and sold through:


  • Hosting providers and control panels
  • SaaS platforms serving SMB workflows
  • Agencies and MSPs delivering managed services
  • Telcos and banks bundling business tools

These systems can be distributed through APIs, self-hosted plugins, and white-label reseller dashboards so partners can offer website creation natively inside their own ecosystems.

From Website Tools to Website Systems

Website creation has moved in waves: first hand-coded by developers, then standardized by CMS platforms, then simplified by drag-and-drop builders. Now the next shift is underway, not in how websites look, but in how they are built and sustained.

The last era was about giving people better tools. This era is about turning website creation into a system in which AI doesn’t just generate pages but carries the work through the full lifecycle (structure, build, launch, and continuous improvement) on production infrastructure.

In other words, the competitive edge is moving towards who can deliver a website that stays alive, accurate, performant, and improving over time.


Risk Mirror Earns a 71 Proof of Usefulness Score by Building a Stateless Privacy Firewall for Safe AI Adoption

2026-03-10 03:35:06

Risk Mirror is a stateless privacy firewall that prevents sensitive data leaks when using AI tools. It detects over 150 types of confidential information and replaces them with synthetic “twins,” allowing developers, startups, and security teams to safely use AI assistants without exposing PII, API keys, or production data.

GEO is the new SEO? What Insiders Say

2026-03-10 03:27:49

Good old ‘search engine optimization’, which focused on making you visible online and ranking on that sweet first page, is well-known. SEO is the ABCs of digital marketing, and every business, brand, or company in search of public attention is investing heavily in quality content, readable websites, and searchability.

Enter ‘generative engine optimization’ or GEO, arguably SEO 2.0. My LinkedIn feed is full of GEO tools that promise to bring today's no-name to the very top of ChatGPT responses. GEO practice optimizes content ‘so it is selected, summarized, and surfaced directly inside AI-generated answers’, as Perplexity puts it.

I would put it in bolder terms still: when SEO gets you clicks, GEO gets you visibility.

The question I had in mind when I started my interviews for this article was: is GEO revolutionary? Do we just leave SEO behind — or at least seriously take our attention away from it?


POV 1: SEO isn’t dead. It’s the new baseline

Most forward-thinking SEO managers, copywriters and marketers are intrigued by AI and are using it actively across their daily tasks. Their take on GEO is: AI search is the new normal, but SEO remains a non-negotiable foundation for healthy and converting AI visibility.

Blue Sitten from the Yoder Family brands oversees and manages all 9 of the company’s websites and Google My Business profiles in North America. When we talked, Blue explained that old SEO tactics ‘don’t work anymore’. She says that brands now need to look for new ‘formulas’ that make content LLM-friendly, while keeping ongoing SEO work in place. Tech hygiene, structure, and clarity remain essential to fast sites and fruitful effort in online findability.

‘LLM-friendliness’ was also spotted by Destiny Flaherty, the head of SEO at Princess Polly, a popular e-commerce women’s apparel brand. Her insight was: ‘Where previously technical foundations might have been or they might have been not prioritized, now it's a conversation to revisit’. Destiny sees ‘a 90% overlap’ between SEO and GEO, highlighting the importance of traditional technical optimization for search engines.

I think Katie Beirne beautifully summarizes these two ideas in her interview. Katie is a cybersecurity-focused content marketer who oversees broader marketing strategies as fractional CMO for clients. She shares the vision that GEO is layered on top of existing SEO practices. From experience, her approach to GEO today is like early approaches to SEO, before tools like SEMrush were a thing: query as the target persona, see what ranks, study top results, and adapt those patterns.


POV 2: GEO as SEO’s high-intent, low-volume cousin

But what is it about GEO that everyone’s so obsessed with? Why do so many startups spring up offering to boost brands’ AI visibility if, essentially, you do well on AI as long as you’ve been careful with your SEO?

Aurelie GiardJacquet, the founder of Business Crush SAS and an AI consultant, shares an intriguing statistic. She says overall website traffic is dropping across the board, but that traffic coming from LLMs converts far better than traffic from Google because people referred by AI typically have much stronger purchase intent (for example, someone asking 'what is the best plumber near me?' is highly likely to request a quote or appointment).

Aurelie also notes that B2B buyers and the younger generation of buyers are already used to conversational, assistant-style tools, so instead of 'googling and clicking blue links,' they go straight to LLMs to get synthesized, holistic answers. Brands now need to be recommended inside AI answers — where these buyers actually start and decide.

Speaking of presence, companies “are expected to be everywhere in the digital ecosystem”, Katie Beirne notes. LLMs summarize what’s already out there (reviews, socials, media, owned content), so brand sentiment and narrative in those sources directly shape AI answers. She advises brands to deliberately get their content 'seen in AI search,' which in practice means feeding the models high‑quality, on‑brand content and reviews that AI can reflect back in summaries.


POV 3: Machine-friendly content with people in mind

I’ve honestly been meaning to write an article on the Dead Internet theory — and I will! — but the next point kind of resonates with this sentiment. Aren’t we just going to run into an internet of GenAI soulless content? It is discoverable, findable, crawlable by every crawler — yet it is humanly empty, if you will.

I audibly laughed here. It is hilarious, albeit sad.

One point that really resonated with me was that AI can be — and should be — used as a drafting assistant. Gary Kane is an omnichannel sales leader at JoyJolt. I loved his take on AI in content creation: he uses ChatGPT as a starting point for formal emails so he doesn’t 'stress for 7 minutes' drafting from scratch, instead quickly generating a version he can edit in 2 minutes. I feel like this approach is a helpful kick-starter for writers, especially SEO and other content managers.

Melissa Rosen, the content marketing manager at Motion, shares this view of AI in her professional work. To Melissa, AI is a drafting companion and a smart editing assistant rather than a full-scale tool for autonomous content writing. She uses Claude and Athena for drafting and SEO research but still handpicks keywords, rewrites drafts, and layers in audience nuance from her own conversations.

From my own experience, one thing AI lacks is distinctly human and recognizably personal language. I mean — you could probably train it, like I trained a Space in Perplexity (yes, I’m a fan of this tool, so what?), but it won’t write or sound 100% like you. The SMB brand strategist Stacy Eleczko shared that she keeps AI and its miraculous analytic skills for research and revision rather than pure writing. Stacy explains that AI cannot read emotional nuance or invent a differentiated positioning. While it can structure, restructure, and say the same thing in so many different ways, AI just is not… human. In fact, she doubles down on using real customer language and actual questions in content so her client brands show up and feel relevant in both traditional SEO and LLM search.


POV 4: Off-site brand knowledge matters in GEO

In her webinar ‘The AI Search Action Checklist’, Aleyda Solis from AirOps states that in AI search, authority signals are 'shifting from backlinks to mentions, citations and entity-based trust’. Sounds familiar yet? While LLMs consider a brand credible if it shows up across directories, listings and socials, traditional search algorithms lean on backlinks as a proxy for authority. In other words, where SEO treated links as votes, GEO treats mentions as evidence that a brand is real, trusted and relevant across the wider web.

I saw this logic play out in Tiffany Da Silva’s work. Tiffany is a digital marketing and AI marketing consultant and her work still centers on link development for Google, but her AI search audits explicitly look at how often brands are cited across the sources that target LLMs pull from. Likewise, Destiny Flaherty builds this into her Princess Polly roadmap, treating digital PR and brand mentions as a primary lever for AI visibility and using ProFound to quantify how often the brand appears across high‑intent prompts compared to competitors. (Oh, I guess I am doing quite a favor to the tools I’m mentioning in this piece.)


POV 5: Measuring the immeasurable

It was Tiffany, actually, who articulated my own concern so well. Again, going back to these GEO and AI-visibility solutions popping up (and grabbing millions at pitches!), what do they measure? Or else — how? The feeling from popular AI solutions is ‘black box’.

\ Tiffany shared her frustration here: while she uses Aleyda Solis’s Orianti to track AI visibility and compares that data with Google Analytics 4 explorer views to isolate AI traffic and conversions, she says it is still a provisional setup and a standardized GEO measurement framework is ‘1-2 years away’.

\ I was lucky to have talked to Russel Benoit, senior partnerships manager at AVTECH, who gave an insight as a professional from outside SEO. He treats ChatGPT as a meaningful traffic source and goes out to manually test queries across ChatGPT, Claude, and Gemini to track where and how one of AVTECH’s products shows up. But there is no specific AI-related KPI in the company’s roadmap yet — he goes even further, saying he and his team have ‘no visibility into AI search performance across engines’ yet.

\ You might think it comes down to the size of the company. Well, here’s a similar insight from Claudia Vanselow from Henkel. Claudia is the senior global e-commerce manager, and she shared that her team has ‘no direct AI traffic measurement capabilities’. This highlights that even large FMCGs, despite their tech-savviness and truly powerful data systems, are, at the moment, stuck with third-party benchmarks rather than an owned GEO dashboard.

\

POV 6: How do you unite SEO and GEO to win in both?

Among popular articles and research on SEO vs. GEO, one of my favorite finds is LLM Pulse’s ‘GEO is the new SEO: a practical playbook for 2025’. The playbook explains that the emphasis now is on making content and websites easier for LLMs to crawl, parse and quote. The workflow it suggests is to treat your existing SEO engine as the base, then systematically make those same pages ‘answer‑ready’ for generative engines. You can improve the availability of your pages by unblocking AI bots. You can repurpose your existing content, restructuring pages and blog posts into more ordered pieces with explicit FAQ sections and listicles.
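The ‘unblocking AI bots’ step typically starts in your site’s robots.txt. As a minimal sketch (the user-agent tokens below are the publicly documented crawler names for OpenAI, Anthropic, Perplexity, and Google’s AI-training control at the time of writing; verify each against the vendor’s own documentation, and the `/checkout/` path is just a placeholder for whatever you want kept private):

```
# Allow major AI crawlers to fetch public pages
User-agent: GPTBot          # OpenAI's crawler
Allow: /

User-agent: ClaudeBot       # Anthropic's crawler
Allow: /

User-agent: PerplexityBot   # Perplexity's crawler
Allow: /

User-agent: Google-Extended # Controls use of your content in Google's AI products
Allow: /

# Everything else keeps your existing rules
User-agent: *
Disallow: /checkout/
```

A quick sanity check: if your CDN or bot-protection layer (Cloudflare and the like) blocks these agents at the network level, the robots.txt allowance alone won’t help, so check both layers.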

\ Another approach is building presence wherever AI models 'source truth' (high‑signal forums and authoritative media — think G2 for SaaS reviews) and then treating GEO as a measurable discipline rather than a guessing game. In practice, this turns SEO and GEO into one pipeline: SEO makes you discoverable on the open web, and GEO instrumentation tells you whether that same content is actually showing up on AI answers.

\

Key question: How does GEO vs. SEO translate into business operations?

Well, the answer is two-fold.

\ I spoke with a DTC leader who runs an 8-figure ecommerce portfolio. Justin Perez from 2.7 August Apparel described this translation into operations as a non-stop balancing act. On one hand, there is performance marketing, which delivers immediate ROI. On the other, organic and AI‑driven discoverability needs patient, ongoing investment. In his words, SEO is ‘treated as a core acquisition channel, not a side initiative’, with his team tracking organic and assisted revenue, non‑branded vs. branded keyword growth, ranking movement for high‑intent queries, CTR and impressions in Search Console, indexation health and technical performance metrics. And the emerging GEO layer doesn’t replace this foundation. Instead, it adds a new leadership challenge: how much time and budget to shift from chasing the next conversion spike to structuring data, content and brand authority so the company actually shows up in AI‑generated answers?

\ People I have talked to (some are not mentioned here) all seem to point to the same uncomfortable truth for businesses. GEO is not a shiny add‑on; it is the new visibility layer sitting on top of SEO, and it is already moving real money. McKinsey’s 2025 ‘New front door to the internet’ report warns that brands may lose up to 50% of their traditional search traffic as more decisions happen inside AI answers rather than on websites (see ChatGPT’s in-chat shopping and checkout, for instance).

\ Interestingly, studies and agency data show that this AI‑driven traffic, while smaller, converts far better. Growth Marshal’s 2026 article shares an intriguing statistic: AI search traffic is 4.4 times more valuable than organic. Put bluntly: GEO is where a growing slice of high‑intent demand is being captured and routed. Businesses that are invisible at that layer will simply never be considered in the moment of a given AI interaction.

\ Dropping SEO entirely is not what I am arguing for here. For an SMB, the call is to evolve into dual-discovery marketing, where one pipeline feeds both Google and AI engines. In practice, it is all the ‘human’ stuff my interviewees talked about: human, opinionated content, aggressive editing of GenAI drafts and authentic off-site reviews and mentions, along with rebuilding and restructuring your best content into AI-ready information.

\ It also means shifting a little time every month away from chasing one more blog keyword toward doing what Katie, Tiffany, Destiny, and others already do: run a simple AI visibility audit, reinforce your presence on the sources LLMs trust and treat GEO metrics (like AI-referred leads) as early‑stage KPIs, even if they are messy.

\ In that sense, the 'new reality of GEO/SEO' translates into business as a shift from fighting for one more blue link click to competing for the brief moment when an AI system decides whose name to put into the answer — and smart SMBs are quietly redesigning their operations, content, and measurement so their brand is the one that gets spoken out loud.