
Wrapper vs. Embedded AI Apps

2025-10-07 06:00:00

As we saw during the PC, Web, and mobile technology shifts, how software applications are built, used, and distributed will change with AI. Most AI applications built to date have adopted a wrapper approach but increasingly, being embedded is becoming a viable option too. So let's look at both...


Wrapper AI Apps

Wrapper apps build a custom software experience around one or more AI models. Think of these as traditional applications that use AI as their primary processing engine for most core tasks. These apps have a familiar user interface, but their features use one or more AI models for processing user input, retrieving information, generating output, and more. With wrapper apps, you need to build the application's input and output capabilities, integrations, and user interface, and decide how all these pieces work with the underlying AI models.
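
As a minimal sketch of the wrapper pattern, assuming the OpenAI Python client as the model layer (the model name, prompt, and summarize helper here are illustrative, not from the post): the app owns the feature's interface and plumbing and hands one core task to a hosted model.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(text: str) -> str:
    """One wrapper-app feature: the app owns the UI and data flow,
    and delegates the core processing to a hosted model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content or ""
```

Everything around that call, like auth, history, file handling, and search, is yours to build and maintain.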

The upside is you can make whatever interface you want, tailored to your specific users' needs, and continually improve it through your visibility into all user interactions. The cost is that you need to build and maintain the ever-growing list of capabilities users expect from AI applications: image understanding as input, Web search, creating PDFs or Word docs, and much more.

Embedded AI Apps

Embedded AI apps operate within existing AI clients like Claude.ai or ChatGPT. Rather than building a standalone experience, these apps leverage the host client for input, output, and tool capabilities, alongside a constantly growing list of integrations.
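
One common way to embed, sketched under the assumption that the host client speaks the Model Context Protocol (MCP): expose your app as an MCP server and let the client provide the chat UI, input/output handling, and integrations. The server name and make_chart tool below are hypothetical, using the Python MCP SDK's FastMCP.

```python
from mcp.server.fastmcp import FastMCP

# The host client (Claude.ai, ChatGPT, etc.) supplies the conversation UI,
# input/output handling, and its own integrations; this server only
# exposes the app's specific capability as a tool.
mcp = FastMCP("chart-maker")

@mcp.tool()
def make_chart(title: str, values: list[float]) -> str:
    """Return a simple chart description the host client can present."""
    return f"Chart '{title}' with {len(values)} points: {values}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```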

To use concrete examples, if ChatGPT lets people turn their conversation results into a podcast, your app's results can be turned into a podcast. With a wrapper app, you'd be building that capability yourself. Similarly, if Claude.ai has an integration with Google Drive, your app running in Claude.ai has an integration with Google Drive; no need for you to build it. If ChatGPT can do deep research, your app can ... you get the idea.

So what's the price? For starters, your app's interface is limited by what the client you're operating in allows. ChatGPT Apps, for instance, have a set of design guidelines and developer requirements not unlike those found in other app stores. This also means that how your app is found and used is managed by the client you operate in. And since the client manages context throughout any task that involves your app, you lose the ability to see and use that context to improve your product.

Doing Both

Like the choice between native mobile apps and mobile Web sites during the mobile era... you can do both. Native mobile apps are great for rich interactions and the mobile Web is great for reach. Most software apps benefit from both. Likewise, an AI application can work both ways. ChatDB illustrates this and the trade-offs involved. ChatDB is a service that lets people easily make chat dashboards from sets of data.

People can use ChatDB as a standalone wrapper app or embedded within their favorite AI client like Claude or ChatGPT (as illustrated in the two embedded videos). The ChatDB wrapper app allows people to make charts, pin them to a dashboard, rearrange them and more. It's a UI and product experience focused solely on making awesome dashboards and sharing them.

The embedded ChatDB experience allows people to make use of tools like Search and Browse or integrations with data sources like Linear to find new data and add it to their dashboards. These capabilities don't exist in the ChatDB wrapper app and maybe never will because of the time required to build and maintain them. But they do exist in Claude.ai and ChatGPT.

The capabilities and constraints of both wrapper and embedded AI apps are going to continue evolving quickly, so today's tradeoffs might be pretty different from tomorrow's. But it's clear that what an application is, how it's distributed, and how it's used are changing once again with AI. That means everyone will be rewriting their apps like they did for the PC, Web, and mobile, and that's a lot of opportunity for new design and development approaches.

We're (Still) Not Giving Data Enough Credit

2025-10-02 08:00:00

In his AI Speaker Series presentation at Sutter Hill Ventures, UC Berkeley's Alexei Efros argued that data, not algorithms, drives AI progress in visual computing. Here are my notes from his talk: We're (Still) Not Giving Data Enough Credit.

Large data is necessary but not sufficient. We need to learn to be humble and to give the data the credit that it deserves. The visual computing field's algorithmic bias has obscured data's fundamental role. Recognizing this reality is crucial for evaluating where AI breakthroughs will emerge.

AI Speaker Series presentation at Sutter Hill Ventures with Alexei Efros

The Role of Data

  • Data got little respect in academia until recently as researchers spent years on algorithms, then scrambled for datasets at the last minute
  • This mentality hurt us and stifled progress for a long time.
  • Scientific Narcissism in AI: we prefer giving credit to human cleverness over data's role
  • Human understanding relies heavily on stored experience, not just incoming sensory data.
  • People see detailed steam engines in Monet's blurry brushstrokes, but the steam engine is in your head. Each person sees different versions based on childhood experiences
  • People easily interpret heavily pixelated footage with brains filling in all the missing pieces
  • "Mind is largely an emergent property of data" -Lance Williams
  • Three landmark face detection papers achieved similar performance with completely different algorithms: neural networks, naive Bayes, and boosted cascades
  • The real breakthrough wasn't algorithmic sophistication. It was realizing we needed negative data (images without faces). But 25 years later, we still credit the fancy algorithm.
  • Efros's team demonstrated hole-filling in images using 2 million Flickr images with basic nearest-neighbor lookup: "The stupidest thing and it works." (see the sketch after this list)
  • Comparing approaches with identical datasets revealed that fancy neural networks performed similarly to simple nearest neighbors.
  • The solution was all in the data: sophisticated algorithms often just perform fast lookup, because the data being looked up already contains the problem's solution.
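
A toy version of that result, with random stand-in data (the 64-dimensional descriptors and database size are made up): the whole "algorithm" is a brute-force nearest-neighbor lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for descriptors of a large photo collection (e.g. millions of
# Flickr scenes); a real system would use image features, not random data.
database = rng.random((100_000, 64)).astype(np.float32)

def nearest_neighbor(query: np.ndarray) -> int:
    """Brute-force L2 lookup: index of the closest scene in the database."""
    distances = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(distances))

query = rng.random(64).astype(np.float32)
print(f"closest scene: database item {nearest_neighbor(query)}")
```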

Interpolation vs. Intelligence

  • MIT's Aude Oliva's experiments reveal extraordinary human capacity for remembering natural images.
  • But memory works selectively: high recognition rates for natural, meaningful images vs. near-chance performance on random textures.
  • We don't have photographic memory. We remember things that are somehow on the manifold of natural experience.
  • This suggests human intelligence is profoundly data-driven, but focused on meaningful experiences.
  • Psychologist Alison Gopnik reframes AI as cultural technologies. Like printing presses, they collect human knowledge and make it easier to interface with it
  • They're not creating truly new things; they're sophisticated interpolation systems.
  • "Interpolation in sufficiently high dimensional space is indistinguishable from magic" but the magic sits in the data, not the algorithms
  • Perhaps visual and textual spaces are smaller than we imagine, explaining data's effectiveness.
  • A PCA basis of just 200 faces could model the whole space of human faces (see the sketch after this list). This extends to linear subspaces not just of pixels, but of model weights themselves.
  • Startup algorithm: "Is there enough data for this problem?" Text: lots of data, excellent performance. Images: less data, getting there. Video/Robotics: harder data, slower progress
  • Current systems are "distillation machines" compressing human data into models.
  • True intelligence may require starting from scratch: remove human civilization's artifacts and bootstrap from primitive urges like hunger, jealousy, and happiness
  • "AI is not when a computer can write poetry. AI is when the computer will want to write poetry"

Podcast: Generative AI in the Real World

2025-09-30 08:00:00

I recently had the pleasure of speaking with Ben Lorica on O'Reilly's Generative AI in the Real World podcast about how software applications are changing in the age of AI. We discussed a number of topics including:

  • The shift from "running code + database" to "URL + model" as the new definition of an application
  • How this transition mirrors earlier platform shifts like the web and mobile, where initial applications looked less robust but evolved significantly over time
  • How a database system designed for AI agents instead of humans operates
  • The "flipped" software development process where AI coding agents allow teams to build working prototypes rapidly first, then design and integrate them into products
  • How this impacts design and engineering roles, requiring new skill sets but creating more opportunities for creation
  • The importance of taste and human oversight in AI systems
  • And more...


You can listen to the podcast Generative AI in the Real World: Luke Wroblewski on When Databases Talk Agent-Speak (29min) on O'Reilly's site. Thanks to all the folks there for the invitation.

Future Product Days: How to solve the right problem with AI

2025-09-26 08:00:00

In his How to solve the right problem with AI presentation at Future Product Days, Dave Crawford shared insights on how to effectively integrate AI into established products without falling into common traps. Here are my notes from his talk:

  • Many teams have been given the directive to "go add some AI" to their products. With AI as a technology, it's very easy to fall into the trap of having an AI hammer where every problem looks like an AI nail.
  • We need to focus on using AI where it's going to give the most value to users. It's not what we can do with AI, it's what makes sense to do with AI.

AI Interaction Patterns

  • People typically encounter AI through four main interaction types
  • Discovery AI: Helps people find, connect, and learn information, often taking the place of search
  • Analytical AI: Analyzes data to provide insights, such as detecting cancer from medical scans
  • Generative AI: Creates content like images, text, video, and more
  • Functional AI: Actually gets stuff done by performing actions directly or interacting with other services
  • AI interaction patterns exist on a context spectrum from high user burden to low user burden
  • Open Text-Box Chat: Users must provide all context (ChatGPT, Copilot) - high overhead for users
  • Sidecar Experience: Has some context about what's happening in the rest of the app, but still requires context switching
  • Embedded: Highly contextual AI that appears directly in the flow of user's work
  • Background: Agents that perform tasks autonomously without direct user interaction

Principles for AI Product Development

  • Think Simply: Make something that makes sense and provides clear value. Users need to know what to expect from your AI experience
  • Think Contextually: Can you make the experience more relevant for people using available context? Customize experiences within the user's workflow
  • Think Big: AI can do a lot, so start big and work backwards.
  • Mine, Reason, Infer: Make use of the information people give you.
  • Think Proactively: What kinds of things can you do for people before they ask?
  • Think Responsibly: Consider environmental and cost impacts of using AI.
  • We should focus on delivering value first over delightful experiences

Problems for AI to Solve

  • Boring tasks that users find tedious
  • Complex activities users currently offload to other services
  • Long-winded processes that take too much time
  • Frustrating experiences that cause user pain
  • Repetitive tasks that could be automated
  • Don't solve problems that are already well-solved with simpler solutions
  • Not all AI needs to be a chat interface. Sometimes traditional UI is better than AI
  • Users' tolerance and forgiveness of AI is really low. It takes around 8 months for a user to want to try an AI product again after a bad experience
  • We're now trying to find the right problems to solve rather than finding the right solutions to problems. Build things that solve real problems, not just showcase AI capabilities

Future Product Days: Hidden Forces Driving User Behavior

2025-09-26 08:00:00

In her talk Reveal the Hidden Forces Driving User Behavior at Future Product Days, Sarah Thompson shared insights on how to leverage behavioral science to create more effective user experiences. Here are my notes from her talk:

  • While AI technology evolves exponentially, the human brain has not had a meaningful update in approximately 40,000 years, so we're still designing for the "caveman brain"
  • This unchanging human element provides a stable foundation for design that doesn't change with every wave of technology
  • Behavioral science matters more than ever because we now have tools that allow us to scale faster than ever
  • All decisions are emotional because a System 1 (emotional) part of the brain makes decisions first. This part of the brain lights up 10 seconds before a person is even aware they made a decision
  • System 1 thinking is fast, automatic, and helped us survive through gut reactions. It still runs the show today but uses shortcuts and over 180 known cognitive biases to navigate complexity
  • Every time someone makes a decision, the emotional brain instantly predicts whether there are more costs or gains to taking action. More costs? Don't do it. More gains? Move forward
  • The emotional brain only cares about six intuitive categories of costs and gains: internal (mental, emotional, physical) and external (social, material, temporal)
  • Mental: "Thinking is hard" We evolved to conserve mental effort - people drop off with too many choices, stick with defaults. Can the user understand what they need to do immediately?
  • Social: "We are wired to belong" We evolved to treat social costs as life or death situations. Does this make users feel safe, seen, or part of a group? Or does it raise embarrassment or exclusion?
  • Emotional: "Automatic triggers" Imagery and visuals are the fastest way to set emotional tone. What automatic trigger (positive or negative) might this design bring up for someone?
  • Physical: "We're wired to conserve physical effort" Physical gains include tap-to-pay, facial recognition, wearable data collection. Can I remove real or perceived physical effort?
  • Material: "Our brains evolved in scarcity" Scarcity tactics like "Bob booked this three minutes ago" drive immediate action. Are we asking people to give something up or are we giving them something in return?
  • Temporal: "We crave immediate rewards" Any time people have to wait, we see drop off. Can we give immediate reward or make people feel like they're saving time?
  • You can't escape the caveman brain, but you can design for it.

Future Product Days: Future of Product Creators

2025-09-25 08:00:00

In his talk The Future of Product Creators at Future Product Days in Copenhagen, Tobias Ahlin argued that divergent opinions and debate, not just raw capability, are the missing factors for achieving useful outcomes from AI systems. Here are my notes from his presentation:

  • Many people are espousing a future vision where parallel agents create products and features on demand.
  • 2025 marked the year when agentic workflows became part of daily product development. AI agents quantifiably outperform humans on standardized tests: reading, writing, math, coding, and even specialized fields.
  • Yet we face the 100 interns problem: managing agents that are individually smarter but "have no idea where they're going"

Limitations of Current Systems

  • Fundamental reasoning gaps: AI can calculate rock-paper-scissors odds while failing to understand it has a built-in disadvantage by going second.
  • Fatal mistakes in real-world applications: suggesting toxic glue for pizza, recommending eating rocks for minerals.
  • Performance plateau problem: Unlike humans who improve with sustained effort, AI agents plateau after initial success and cannot meaningfully progress even with more time
  • Real-world vs. benchmark performance: Research from Monitor shows 63% of AI-generated code fails tests, with 0% working without human intervention

Social Nature of Reasoning

  • True reasoning is fundamentally a social function, "optimized for debate and communication, not thinking in isolation"
  • Court systems exemplify this: adversarial arguments sharpen and improve each other through conflict
  • Individual biases can complement each other when structured through critical scrutiny systems
  • Teams naturally create conflicting interests: designers want to do more, developers prefer efficiency, PMs balance scope. This tension drives better outcomes
  • AI significantly outperforms humans in creativity tests. In a Cornell study, GPT-4 performed better than 90.6% of humans in idea generation, with AI ideas being seven times more likely to rank in the top 10%
  • So the cost of generating ideas is moving towards zero but human capability remains capped by our ability to evaluate and synthesize those ideas

Future of AI Agents

  • Current agents primarily help with production, but future productivity requires an equal amount of effort in evaluation and synthesis.
  • Institutionalized disconfirmation: creating systems where disagreement drives clarity, similar to scientific peer review
  • Agents designed to disagree in loops: one agent produces code, another evaluates it, creating feedback systems that can overcome performance plateaus (see the sketch after this list)
  • True reasoning will come from agents that are designed to disagree in loops rather than simple chain-of-thought approaches
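
A skeletal sketch of that disagreement loop, with the agents stubbed out as plain functions since the talk didn't prescribe an implementation: one role produces, the other objects, and the loop ends when the critic runs out of objections.

```python
def produce(task: str, feedback: str | None) -> str:
    """Generator agent: draft or revise a solution (model call stubbed out)."""
    draft = f"solution for {task!r}"
    return f"{draft}, revised per: {feedback}" if feedback else draft

def evaluate(solution: str) -> str | None:
    """Critic agent: return an objection, or None to accept (stubbed out)."""
    return None if "revised" in solution else "missing edge-case handling"

def disagreement_loop(task: str, max_rounds: int = 5) -> str:
    feedback = None
    for _ in range(max_rounds):
        solution = produce(task, feedback)
        feedback = evaluate(solution)
        if feedback is None:  # no remaining objections: accept
            return solution
    return solution  # plateaued: hand off to a human reviewer

print(disagreement_loop("parse the log file"))
```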