Blog of Geoffrey Litt

Geoffrey Litt is a researcher exploring malleable software and AI, with a PhD from MIT and work at Ink & Switch.

Avoid the nightmare bicycle

2025-03-04 06:13:00

In my opinion, one of the most important ideas in product design is to avoid the “nightmare bicycle”.

Imagine a bicycle where the product manager said: “people don’t get math so we can’t have numbered gears. We need labeled buttons for gravel mode, downhill mode, …”

This is the hypothetical “nightmare bicycle” that Andrea diSessa imagines in his book Changing Minds.

As he points out: it would be terrible! We’d lose the intuitive understanding of how to use the gears to solve any situation we encounter. Which mode do you use for gravel + downhill?

It turns out, anyone can understand numbered gears totally fine after a bit of practice. People are capable!

Along the same lines: one of the worst misconceptions in product design is that a microwave needs to have a button for every thing you could possibly cook: “popcorn”, “chicken”, “potato”, “frozen vegetable”, bla bla bla.

You really don’t! You can just have a time (and power) button. People will figure out how to cook stuff.

Good designs expose systematic structure; they lean on their users’ ability to understand this structure and apply it to new situations. We were born for this.

Bad designs paper over the structure with superficial labels that hide the underlying system, inhibiting their users’ ability to actually build a clear model in their heads.

Two pages from a book describing the nightmare bicycle concept

p.s. Changing Minds is one of the best books ever written about design and computational thinking, you should go read it.

AI-generated tools can make programming more fun

2024-12-22 22:05:00

I want to tell you about a neat experience I had with AI-assisted programming this week. What’s unusual here is: the AI didn’t write a single line of my code. Instead, I used AI to build a custom debugger UI… which made it more fun for me to do the coding myself.

* * *

I was hacking on a Prolog interpreter as a learning project. Prolog is a logic language where the user defines facts and rules, and then the system helps answer queries. A basic interpreter for this language turns out to be an elegant little program with surprising power—a perfect project for a fun learning experience.

The trouble is: it’s also a bit finicky to get the details right. I encountered some bugs in my implementation of a key step called unification—solving symbolic equations—which was leading to weird behavior downstream. I tried logging some information at each step of execution, but I was still parsing through screens of text output looking for patterns.
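
For the curious: unification is the step that takes two terms and tries to make them equal by binding variables. Here’s a rough TypeScript sketch of the idea (purely illustrative, not my actual interpreter code, which is exactly where the bugs were hiding):

```typescript
// A rough sketch of unification: make two terms equal by binding variables.
// Illustrative only -- not my actual interpreter code.

type Term =
  | { kind: "var"; name: string }                        // a logic variable, e.g. X
  | { kind: "compound"; functor: string; args: Term[] }; // e.g. parent(tom, X); atoms have no args

type Bindings = Map<string, Term>;

// Follow a chain of variable bindings until we reach an unbound variable or a compound term.
function resolve(term: Term, env: Bindings): Term {
  while (term.kind === "var" && env.has(term.name)) {
    term = env.get(term.name)!;
  }
  return term;
}

// Returns the extended bindings on success, or null if the terms can't be unified.
// (A real implementation also needs an occurs check -- details like this are
// where things tend to go subtly wrong.)
function unify(a: Term, b: Term, env: Bindings): Bindings | null {
  a = resolve(a, env);
  b = resolve(b, env);
  if (a.kind === "var") return new Map(env).set(a.name, b);
  if (b.kind === "var") return new Map(env).set(b.name, a);
  if (a.functor !== b.functor || a.args.length !== b.args.length) return null;

  let current = env;
  for (let i = 0; i < a.args.length; i++) {
    const next = unify(a.args[i], b.args[i], current);
    if (next === null) return null;
    current = next;
  }
  return current;
}
```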

I needed better visibility. So, I asked Claude Artifacts to whip up a custom UI for viewing one of my execution traces. After a few iterations, here’s where it ended up:

I could step through an execution and see a clear visualization of my interpreter’s stack: how it has broken down goals to solve; which rule it’s currently evaluating; variable assignments active in the current context; when it’s come across a solution. The timeline shows an overview of the execution, letting me manually jump to any point to inspect the state. I could even leave a note annotating that point of the trace.

Oh yeah, and don’t forget the most important feature: the retro design 😎.

Using this interactive debug UI gave me far clearer visibility than a terminal of print statements. I caught a couple bugs immediately just by being able to see variable assignments more clearly. A repeating pattern of solutions in the timeline view led me to discover an infinite loop bug.

And, above all: I started having more fun! When I got stuck on bugs, it felt like I was getting stuck in interesting, essential ways, not on dumb mistakes. I was able to get an intuitive grasp of my interpreter’s operation, and then home in on problems. As a bonus, the visual aesthetic made debugging feel more like a puzzle game than a depressing slog.

* * *

Two things that stick out to me about this experience are 1) how fast it was to get started, and 2) how fast it was to iterate.

When I first had the idea, I just copy-pasted my interpreter code and a sample execution trace into Claude, and asked it to build a React web UI with the rough functionality I wanted. I also specified “a fun hacker vibe, like the matrix”, because why not? About a minute later (after a single iteration for a UI bug which Claude fixed on its own), I had a solid first version up and running:

My prompt to Claude

That fast turnaround is absolutely critical, because it meant I didn’t need to break focus from the main task at hand. I was trying to write a Prolog interpreter here, not build a debug UI. Without AI support, I would have just muddled through with my existing tools, lacking the time or focus to build a debug UI. Simon Willison says: “AI-enhanced development makes me more ambitious with my projects”. In this case: AI-enhanced development made me more ambitious with my dev tools.

By the way: I was confident Claude 3.5-Sonnet would do well at this task, because it’s great at building straightforward web UIs. That’s all this debugger is, at the end of the day: a simple view of a JSON blob; an easy task for a competent web developer. In some sense, you can think of this workflow as a technique for turning that narrow, limited programming capability—rapidly and automatically building straightforward UIs—into an accelerant for more advanced kinds of programming.
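
To make that concrete: each step of my trace was essentially a snapshot the UI could render directly. Something like this shape (illustrative TypeScript, not my exact trace format):

```typescript
// An illustrative shape for one step of an interpreter trace -- not my actual
// format, just the kind of JSON blob a debugger UI like this has to render.
interface TraceStep {
  step: number;                      // position in the timeline
  goals: string[];                   // remaining goals, e.g. ["ancestor(X, bob)"]
  currentRule: string | null;        // rule being tried, e.g. "ancestor(X, Y) :- parent(X, Y)."
  bindings: Record<string, string>;  // variable assignments active in this context
  solution: string | null;           // filled in when this step produced a solution
  note?: string;                     // a manual annotation on this point of the trace
}
```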

Whether you’re an AI-programming skeptic or an enthusiast, the reality is that many programming tasks are beyond the reach of today’s models. But many decent dev tools are actually quite easy for AI to build, and can help the rest of the programming go smoother. In general, these days any time I’m spending more than a minute staring at a JSON blob, I consider whether it’s worth building a custom UI for it.

* * *

As I used the tool in my debugging, I would notice small things I wanted to visualize differently: improving the syntax display for the program, allocating screen real estate better, adding the timeline view to get a sense of the full history.

Each time, I would just switch windows, spend a few seconds asking Claude to make the change, and then switch back to my code editor and resume working. When I came back at my next breaking point, I’d have a new debugger waiting for me. Usually things would just work the first time. Sometimes a minor bug fix was necessary, but I let Claude handle it every time. I still haven’t looked at the UI code.

Eventually we landed on a fairly nice design, where each feature had been motivated by an immediate need that I had felt during use:

Claude wasn’t perfect—it did get stuck one time when I asked it to add a flamegraph view of the stack trace changing over time. Perhaps I could have prodded it into building this better, or even resorted to building it myself. But instead I just decided to abandon that idea and carry on. AI development works well when your requirements are flexible and you’re OK changing course to work within the current limits of the model.

Overall, it felt incredible that it only took seconds to go from noticing something I wanted in my debugger to having it there in the UI. The AI support let me stay in flow the whole time; I was free to think about interpreter code and not debug tool code. I had a yak-shaving intern at my disposal.

This is the dream of malleable software: editing software at the speed of thought. Starting with just the minimal thing we need for our particular use case, adding things immediately as we come across new requirements. Ending up with a tool that’s molded to our needs like a leather shoe, not some complicated generic thing designed for a million users.


Your pie doesn't need to be original (unless you claim it so)

2024-08-25 23:39:00

Imagine you bake a delicious peach pie over the weekend, and you offer a slice to your friend. They respond:

“Wait, how is this different from every other peach pie that’s ever been baked? It seems really similar to another pie I had recently.”

This is obviously an absurd reaction!

But this exact dynamic happens all the time in creative software projects. Someone shares a project they made, and the first reaction is: how’s it different?

The problem here is a mismatch in values.

The friend has assumed that your goal is to “efficiently” reach the goal of a delicious pie, or perhaps even to create a new kind of pie. But that’s not the goal at all!

Baking a pie is a creative act. It’s personal, it’s inherently delightful, it’s an act of caring for others. It’s also a craft that one can improve at over time. Just buying the “best” pie would defeat the point.


The next day, you find out there’s a scientific conference in town: CRISP, the Conference for Research on Innovative Sweet Pastries. This is where the world’s foremost experts push forward the frontier of pie-baking technique.

You show up with your delicious peach pie, and the first question from the judging panel is:

“Wait, how is this different from every other peach pie that’s ever been baked? It seems really similar to another pie I had recently.”

You respond: “I have no idea, I just enjoyed baking it and thought it was delicious! I don’t even know what recipe I used. Why does it matter to you, huh?”

The expert says: “Well then, you’re welcome to bake pies all you want at home, but your pie is not welcome at CRISP. The community cannot understand your contribution or build on your work.”

You might be upset about this outcome, but you’d be wrong. In this context, the judge’s criticism is totally fair.

The goal of CRISP isn’t just to enjoy pies, it’s to build up a community of practice. Part of being a good citizen of that community is being able to explain how your pie is different—which in turn requires learning about all the other ways of making pie. This isn’t merely a higher bar than amateur weekend baking, it’s a totally different frame of mind.


I think mixing up these two situations is the source of a lot of unfortunate confusion.

I work on prototyping new kinds of software interfaces and programming tools, and I spend time in various communities that span across the cultures of playful exploration (sharing demos on Twitter) and academic research (writing formal papers).

Often I see creative people share personal projects and get their spirits weakened by “how’s it different?” The question can be well-meaning; it isn’t necessarily cynical! It’s just misunderstanding the goal.

On the other hand, I see people submit cool work to academic research venues and get confused by, or even chafe at, the stringent requirement of situating the work in context. I used to be pretty dismissive of Related Work sections myself, until I went through grad school and realized how valuable they are to the world.


So, as creators and feedback-givers, how can we avoid this confusion?

There’s an answer that seems obvious: clearly set the goal up front. Are you trying to do a personal project for fun, or are you trying to make a novel research contribution? Just proactively broadcast your intent, and most people will be better at asking questions that are aligned with your goals.

Unfortunately, in my experience, things don’t work out this cleanly. Many of the best new ideas start out as playful explorations, and over time snowball into a larger project that is worthy of a serious research contribution.

A strategy I’ve found helpful is to start from a place of personal creativity. If the initial goal is playful exploration for its own sake, that creates free space to explore and quells early doubts (from both myself and others). It doesn’t matter if it’s new or good (yet), I’m just having fun.

Occasionally a project grows into something more. At that point it can be appropriate to apply a critical academic lens.

Starting from the other side seems a lot tougher. If you start off saying “we’re going to make a big serious contribution no one’s ever done before,” that sets up high stakes and invites harsh critique from the start. Maybe this approach works for some projects with narrower success criteria, but it doesn’t seem to work well for most of what I do.

A final thing to keep in mind: when I’m on the side of giving feedback, I always try to first understand the creator’s goals. This can be a subtle art when they don’t even know their own goals yet. The weekend baker may just need encouragement, not critique.

Related wisdom

Richard Feynman, on spinning plates:

Physics disgusts me a little bit now, but I used to enjoy doing physics. Why did I enjoy it? I used to play with it… I’m going to play with physics, whenever I want to, without worrying about any importance whatsoever.

Patrick Dubroy, on playing like a kid:

It was another instance of unconsciously adopting a restrictive set of assumptions, telling myself that if it wasn’t done “right”, it wasn’t worth doing at all… And guess what — when I decided to let go of those assumptions, I started having fun on my side projects again.

7 books that stood the test of time in 2023

2023-12-18 01:01:00

It’s the most wonderful time of the year: when people proudly announce how many books they have read in the past 12 months. 10 books, 20 books, 57 books! Worry not—I know you don’t care, and besides, I have no idea how many books I read this year.

In lieu of that, here’s a short list of some favorite books I read before 2023 that have stuck with me this year and changed the way I think. Seven masterpieces on AI, cooking, art, houses, product design, computational media, and trees:

Six books on a floor, corresponding to the list below in this post
Six of the seven books. The seventh I only have on Kindle, sorry Ken!

The Most Human Human, by Brian Christian

A book about humanity, disguised as a book about AI. It taught me how to have deeper conversations and find more meaning in my work. Amid a sea of spilled ink on AI, Brian Christian has simply asked more interesting questions. Notably, this book was written in 2011, before the current wave—yet it’s still remarkably relevant.

See it on Goodreads

An Everlasting Meal, by Tamar Adler

This book changed the way I cook. It teaches the correct way to think about home cooking – not as a chore, an “obstacle”, or an optimized process… but as a simple, natural act of creativity. One of the wisest books I know.

See it on Goodreads

Art & Fear, by David Bayles and Ted Orland

A slim little manual about how to overcome the fear and keep creating. Subtle tips on the role of talent, managing the vision-execution gap, quantity vs quality. I might not have kept going with research if I hadn’t read this book.

See it on Goodreads

The Production of Houses, by Christopher Alexander et al.

Christopher Alexander thought people could design their own homes. His most famous books, The Timeless Way of Building and A Pattern Language, are brilliant but can be a bit abstract. The Production of Houses shows what actually happened, concretely, when he and his team helped some people do the thing and design their own homes.

The result: some great successes, some strange contradictions to ponder.

See it on Goodreads

Creative Selection, by Ken Kocienda

This book shows that most product design is a dead end. It describes, in great detail, the Apple way—hard to achieve, but worth striving towards. I’m constantly remembering stories from this book in my own work. “Pick one keyboard!”

See it on Goodreads

Changing Minds, by Andy diSessa

A foundational text for my research. I am always amazed how many people have not even heard of it. If you care about “future of computing”, Bret Victor’s work, “computational literacy”… go read this book! I promise it will change your mind. I reference diSessa’s “nightmare bicycle” concept all the time.

See it on Goodreads

The Overstory, by Richard Powers

To the extent that it’s possible to see the world from the perspective of trees, this novel got me to that place. Every time I’m in a forest now, I think about the trees: how long they’ve been there, what they’re communicating to one another.

See it on Goodreads


Look, I could write so much more about any one of these books (and I’m happy to answer any questions!) but honestly, it feels hard to do them justice.

They’re all 5 stars, on both substance and prose. Well worth your time, and could be a great gift to the right person. I hope you have a great holiday season!

Codifying a ChatGPT workflow into a malleable GUI

2023-07-26 01:15:00

In my previous post, Malleable software in the age of LLMs, I laid out a theory for how LLMs might enable a new era of people creating their own personal software:

I think it’s likely that soon all computer users will have the ability to develop small software tools from scratch, and to describe modifications they’d like made to software they’re already using.

In other words, LLMs will represent a step change in tool support for end-user programming: the ability of normal people to fully harness the general power of computers without resorting to the complexity of normal programming. Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code; now that bottleneck is rapidly opening up thanks to LLMs.

Today I’ll share a real example where I found it useful to build custom personal software with an LLM. Earlier this week, I used GPT-4 to code an app that helps me draft text messages in English and translate them to Japanese. The basic idea: I paste in the context for the text thread and write my response in English; I get back a translation into Japanese. The app has a couple other neat features, too: I can drag a slider to tweak the formality of the language, and I can highlight any phrase to get a more detailed explanation.

The whole thing is ugly and thrown together in no time, but it has exactly the features I need, and I’ve found it quite useful for planning an upcoming trip to Japan.

The app uses the GPT-4 API to do the actual translations. So there are two usages of LLMs going on here: I used an LLM to code the app, and then the app also uses an LLM when it runs to do the translations. Sorry if that’s confusing, 2023 is weird.

You may ask: why bother making an app for this? Why not just ask ChatGPT to do the translations? I’m glad you asked—that’s what this post is all about! In fact, I started out doing these translations in ChatGPT, but I ended up finding this GUI nicer to use than raw ChatGPT for several reasons:

  • It encodes a prescriptive workflow so I don’t need to fuss with prompts as much.
  • It offers convenient direct manipulation affordances like text boxes and sliders.
  • It makes it easier to share a workflow with other people.

(Interestingly, these are similar to the reasons that so many startups are building products wrapping LLM prompts—the difference here is that I’m just building the tool for myself, and not trying to make a product.)

A key point is that making this personal GUI is only worth it because GPT also lowers the cost of making and iterating on the GUI! Even though I’m a programmer, I wouldn’t have made this tool without LLM support. It’s not only the time savings, it’s also the fact that I don’t need to turn on my “programmer brain” to make these tools; I can think at a higher level and let the LLM handle the details.

There are also tradeoffs to consider when moving from ChatGPT into a GUI tool: the resulting workflow is more rigid and less open-ended than a ChatGPT session. In a sense this is the whole point of a GUI. But the GUI isn’t necessarily as limiting as it might seem, because remember, it’s malleable—I built it myself using GPT and can quickly make further edits. This is a very different situation than using a fixed app that someone else made! Below I’ll share one example of how I edited this tool on the fly as I was using it.

Overall I think this experience suggests an intriguing workflow of codifying a ChatGPT workflow into a malleable GUI: starting out with ChatGPT, exploring the most useful way to solve a task, and then once you’ve landed on a good approach, codifying that approach in a GUI tool that you can use in a repeatable way going forward.

Alright, on to the story of how this app came about.


ChatGPT is a good translator (usually 🙃)

I’m going on a trip to Japan soon and have been on some text threads where I need to communicate in Japanese. I grew up in Japan but my writing is rusty and painfully slow these days. One particular challenge for me is using the appropriate level of formality with extended family and other family acquaintances—I have fluent schoolyard Japanese but the nuances of formal grown-up Japanese can be tricky.

I started using ChatGPT to make this process faster by asking it to produce draft messages in Japanese based on my English input. I quickly realized there are some neat benefits to ChatGPT vs. a traditional translation app. I can give it the full context of the text thread so it can incorporate that into its translation. I can steer it with prompting: asking it to tweak the formality or do a less word-for-word translation. I can ask follow-up questions about the meaning of a word. These capabilities were all gamechangers for this task; they really show why smart chatbots can be so useful!

You may be wondering: how good were the translations? I’d say: good enough to be spectacularly useful to me, given that I can verify and edit. Often they were basically perfect. Sometimes they were wrong in huge, hilarious ways—flipping the meaning of a sentence, or swapping the name of a train station for another one (sigh, LLMs…).

In practice these mistakes didn’t matter too much though. I’m slow at writing in Japanese but can read basic messages easily, so I just fix the errors and they aren’t dealbreakers. When creation is slow and verification is fast, it’s a sweet spot for using an LLM.

Honing the workflow

As I translated more messages and saw ways that the model failed, I developed some little prompting tricks that seemed to produce better translations. Things like this:

Below is some context for a text message thread:

…paste thread…

Now translate my message below to japanese. make it sound natural in the flow of this conversation. don’t translate word for word, translate the general meaning.

…write message…

I also learned some typical follow-up requests I would often make after receiving the initial translation: things like asking to adjust the formality level up or down.

Once I had landed on these specific prompt patterns, it made my interactions more scripted. Each time I would need to dig up my prompt text for this task, copy-paste it in, and fill in the blanks for this particular translation. When asking follow-up questions I’d also copy-paste phrasings from previous chats that had proven successful. At this point it didn’t feel like an open-ended conversation anymore; it felt like I was tediously executing a workflow made up of specific chat prompts.

I also found myself wanting more of the feeling of a solid tool that I could return to. ChatGPT chats feel a bit amorphous and hard to return to: where do I store my prompts? How do I even remember what useful workflows I’ve come up with? I basically wanted a window I could pop open and get a quick translation.

Making a GUI with GPT

So, I asked GPT-4 to build me a GUI codifying this workflow. The app is a frontend-only React.js web app. It’s hosted on Replit, which makes it easy to spin up a new project in one click and then share a link with people. (You can see the current code here if you’re curious.) I just copy-pasted the GPT-generated code into Replit.

The initial version of the app was very simple: it basically just accepted a text input and then made a request to the GPT-4 API asking for a natural-sounding translation. The early designs generated by ChatGPT were super primitive:

Asking it for a “professional and modern” redesign helped get the design looking passable. I then asked GPT to add a formality slider to the app. The new app requests three translations of varying formality, and then lets the user drag a slider to instantly choose between them 😎
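
Under the hood, the slider mostly amounts to asking for all three variants in a single request and then switching between them client-side. Roughly like this sketch (not the app’s exact prompt or code):

```typescript
// Sketch of the formality feature: request all three variants up front, then
// let the slider switch between them instantly. Not the app's exact prompt or code.
const FORMALITY_LEVELS = ["casual", "neutral", "formal"] as const;

function buildTranslationPrompt(thread: string, message: string): string {
  return [
    "Below is some context for a text message thread:",
    thread,
    "Now translate my message below to Japanese three times, once for each of",
    `these formality levels: ${FORMALITY_LEVELS.join(", ")}.`,
    "Make each translation sound natural in the flow of this conversation.",
    "Don't translate word for word; translate the general meaning.",
    "Return the three translations as a JSON array of strings.",
    message,
  ].join("\n\n");
}

// The slider then just indexes into the parsed array:
// const translation = translations[formalitySliderValue];
```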

GPT-4 did most of the coding of the UI. I didn’t measure how long it took, but subjectively, the whole thing felt pretty effortless; it felt more like asking a friend to build an app for me than building it myself, and I never engaged my detailed programmer brain. I still haven’t looked very closely at the code. GPT generally produced good results on every iteration. At one point it got confused about how to call the OpenAI API, but pasting in some recent documentation got it sorted out. I’ve included some of the coding prompts I used at the bottom of this post if you’re curious about the details.

At the same time, it’s important to note that my programming background did substantially help the process along and I don’t think it would have gone that well if I didn’t know how to make React UIs. I was able to give the LLM a detailed spec, which was natural for me to write. For example: I suggested storing the OpenAI key as a user-provided setting in the app UI rather than putting it in the code, because that would let us keep the app frontend-only. I also helped fix some minor bugs.
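
To give a sense of that design choice, the frontend-only approach boils down to something like this (a sketch of the idea, not the actual app code):

```typescript
// Sketch of the frontend-only design: the user pastes their OpenAI key into a
// settings pane, it lives in localStorage, and the browser calls the API
// directly. Illustrative only -- not the actual app code.
function getApiKey(): string | null {
  return localStorage.getItem("openai-api-key");
}

async function askGpt4(prompt: string): Promise<string> {
  const key = getApiKey();
  if (!key) throw new Error("Paste your OpenAI API key into Settings first.");

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${key}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```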

I do believe it’s possible to get to the point where an LLM can support non-programmers in building custom GUIs (and that’s in fact one of my main research goals at the moment). But it’s a much harder goal than supporting programmers, and will require a lot more work on tooling. More on this later.

Iterating on the fly

A few times I noticed that the Japanese translations included phrases I didn’t understand. Once this need came up a few times, I decided to add it as a feature in my GUI. I asked GPT to modify the code so that I can select a phrase and click a button to get an explanation in context:

This tight iteration loop felt awesome. Going from wanting the feature to having it in my app was accomplished in minutes with very little effort. This shows the benefit of having a malleable GUI which I control and I can quickly edit using an LLM. My feature requests aren’t trapped in a feedback queue, I can just build them for myself. It’s not the best-designed interaction ever, but it gets the job done.

I’ve found that having the button there encourages me to ask for explanations more often. Before, when I was doing the translations in ChatGPT, I would need to explicitly think to write a follow-up message asking for an explanation. Now I have a button reminding me to do it, and the button also uses a high-quality prompt that I’ve developed.
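
That prompt is nothing fancy; it’s roughly along these lines (a paraphrase, not the exact wording in the app):

```typescript
// Sketch of the "explain phrase" prompt -- a paraphrase, not the app's exact wording.
function buildExplainPrompt(translation: string, phrase: string): string {
  return [
    `Here is a Japanese message: ${translation}`,
    `Explain the meaning and nuance of the phrase "${phrase}" as it's used in this message,`,
    "in simple English, including the level of formality it conveys.",
  ].join("\n");
}
```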

Sharing the tool

My brother asked to try the tool. I sent him the Replit link and he was able to use it.

I think sharing a GUI is probably way more effective than trying to share a complex ChatGPT workflow with various prompts patched together. The UI encodes what I’ve learned about doing this particular task effectively, and provides clear affordances that anyone can pick up quickly.

From chatbot to GUI

What general lessons can we take away from my experience here? I think it gestures at two big ideas.

The first one is that chatbots are not always the best interface for a task, even one like translation that involves lots of natural language and text. Amelia Wattenberger wrote a great piece explaining some of the reasons. It’s worth reading the whole thing, but here’s a key excerpt about the value of affordances:

Good tools make it clear how they should be used. And more importantly, how they should not be used. If we think about a good pair of gloves, it’s immediately obvious how we should use them. They’re hand-shaped! We put them on our hands. And the specific material tells us more: metal mesh gloves are for preventing physical harm, rubber gloves are for preventing chemical harm, and leather gloves are for looking cool on a motorcycle.

Compare that to looking at a typical chat interface. The only clue we receive is that we should type characters into the textbox. The interface looks the same as a Google search box, a login form, and a credit card field.

This principle clearly holds when designing a product that other people are going to use. But perhaps surprisingly, in my experience, affordances are actually useful even when designing a tool for myself! Good affordances can help my future self remember how to use the tool. The “explain phrase” button reminds me that I should ask about words I don’t know.

I also find that making a UI makes a tool more memorable. My custom GUI is a visually distinctive artifact that lives at a URL; this helps me remember that I have the tool and can use it. Having a UI makes my tool feel more like a reusable artifact than a ChatGPT prompt.

Now, it’s not quite as simple as “GUI good, chatbot bad”—there are tradeoffs. For my translation use case, I found ChatGPT super helpful for my initial explorations. The open-endedness of the chatbot gave it a huge leg up over Google Translate, a more traditional application with more limited capabilities and clearer affordances. I was able to explore a wide space of useful features and find the ones that I wanted to keep using.

I think this suggests a natural workflow: start in chat, and then codify a UI if it’s getting annoying doing the same chat workflow repeatedly.

By the way, one more thing: there are obviously many other visual affordances to consider besides the ones I used in this particular example. For example, here’s another example of a GPT-powered GUI tool I built a couple months ago, where I can drag-and-drop in a file and see useful conversions of that file into different formats:

The joy of editing our tools

Another takeaway: it feels great to use a tiny GUI made just for my own needs. It does only what I want it to do, nothing more. The design isn’t going to win any awards or get VC funding, but it’s good enough for what I want. When I come across more things that the app needs to do, I can add them.

Robin Sloan has this delightful idea that an app can be a home-cooked meal:

When you liberate programming from the requirement to be professional and scalable, it becomes a different activity altogether, just as cooking at home is really nothing like cooking in a commercial kitchen. I can report to you: not only is this different activity rewarding in almost exactly the same way that cooking for someone you love is rewarding, there’s another feeling, too, specific to this realm. I have struggled to find words for this, but/and I think it might be the crux of the whole thing:

This messaging app I built for, and with, my family, it won’t change unless we want it to change. There will be no sudden redesign, no flood of ads, no pivot to chase a userbase inscrutable to us. It might go away at some point, but that will be our decision. What is this feeling? Independence? Security? Sovereignty?

Is it simply … the feeling of being home?

Software doesn’t always need to be mass-produced like restaurant food, it can be produced intimately at small scale. My translator app feels this way to me.

In this example, using GPT-4 to code and edit the app is what enabled the feeling of malleability for me. It feels magical describing an app and having it appear on-screen within seconds. Little React apps seem to be the kind of simple code that GPT-4 is good at producing. You could even argue that it’s “just regurgitating other code it’s already seen”, but I don’t care—it made me the tool that I wanted.

I’m a programmer and I could have built this app manually myself without too much trouble. And yet, I don’t think I would have. The LLM is an order of magnitude faster than me at getting the first draft out and producing new iterations, which makes me much more likely to just give it a shot. This reminds me of how Simon Willison says that AI-enhanced development makes him more ambitious with his projects:

In the past I’ve had plenty of ideas for projects which I’ve ruled out because they would take a day—or days—of work to get to a point where they’re useful. I have enough other stuff to build already!

But if ChatGPT can drop that down to an hour or less, those projects can suddenly become viable.

Which means I’m building all sorts of weird and interesting little things that previously I wouldn’t have invested the time in.

Simon’s description applies perfectly to my example.

It’s not just about the initial creation, it’s also about the fast iteration loop. I discussed the possibility of LLMs updating a GUI app in my previous post:

Next, consider LLMs applied to the app model. What if we started with an interactive analytics application, but this time we had a team of LLM developers at our disposal? As a start, we could ask the LLM questions about how to use the application, which could be easier than reading documentation.

But more profoundly than that, the LLM developers could go beyond that and update the application. When we give feedback about adding a new feature, our request wouldn’t get lost in an infinite queue. They would respond immediately, and we’d have some back and forth to get the feature implemented. Of course, the new functionality doesn’t need to be shipped to everyone; it can just be enabled for our team. This is economically viable now because we’re not relying on a centralized team of human developers to make the change.

It simply feels good to be using a GUI app, have an idea for how it could be different, and then have that new version running within seconds.

There’s a caveat worth acknowledging here: the story I shared in this post only worked under specific conditions. The app I made is extremely simple in functionality; a more complex app would be much harder to modify.

And I’m pretty confident that the coding workflow I shared in this post only worked because I’m a programmer. The LLM makes me much, much faster at building these simple kinds of utilities, but my programming knowledge still feels essential to keeping the process running. I’m writing fairly detailed technical specs, I’m making architectural choices, I’m occasionally directly editing the code or fixing a bug. The app is so small and simple that it’s easy for me to keep up with what’s going on.

I yearn for non-programmers to also experience software this way, as a malleable artifact they can change in the natural course of use. LLMs are clearly a big leap forward on this dimension, but there’s also a lot of work ahead. We’ll need to find ways for LLMs to work with non-programmers to specify intent, to help them understand what’s going on, and to fix things when they go wrong.

I’m optimistic that a combination of better tooling and improved models can get us there, at least for simpler use cases like my translator tool. I guess there’s only one way to find out 🤓 (Subscribe to my email newsletter if you want to follow along with my research in this area.)


Recently…

In the past few months I’ve given a couple talks relevant to the themes in this post.

In April I spoke at Causal Islands about Potluck, a programmable notes prototype I worked on with Max Schoening, Paul Shen, and Paul Sonnentag at Ink & Switch. In my talk I share a bunch of demos from our published essay, but I also show some newer demos of integrating LLMs to help author spreadsheets. (The embed below will jump you right to the LLM demos)

Also: a couple weeks ago, I presented my PhD thesis defense at MIT! I gave a talk called Building Personal Software with Reactive Databases. I talk about what makes spreadsheets great, and show a few projects I’ve worked on that aim to make it easier to build software using techniques from spreadsheets and databases.


Related reading

If you’re interested in diving deeper into ways of interacting with LLMs besides chatbots, I strongly recommend the following readings:

And for a more abstract angle on the example in this post, check out my previous post, Malleable software in the age of LLMs!


Appendix: prompts

Here are some of the prompts I used to make the translator app.

First, my general system prompt for UI coding:

You are a helpful AI coding assistant. Make sure to follow the user’s instructions precisely and to the letter. Always reason aloud about your plans before writing the final code.

Write code in ReactJS. Keep the whole app in one file. Only write a frontend, no backend.

If the specification is clear, you can generate code immediately. If there are ambiguities, ask key clarifying questions before proceeding.

When the user asks you to make edits, suggest minimal edits to the code, don’t regenerate the whole file.

Initial prompt for the texting app:

I’d like you to make me an app that helps me participate in a text message conversation in Japanese by using an LLM to translate. Here’s the basic idea:

  • I paste in a transcript of a text message thread into a box
  • I write the message I want to reply with (in english) into a different box
  • I click a button
  • the app shows me a Japanese translation of my message as output; there’s a copy button so i can copy-paste it easily.
  • the app talks to openai gpt-4 to do the translation. the prompt can be something like “here’s a text thread in japanese: . now translate my new message below to japanese. make it sound natural in the flow of this conversation. don’t translate word for word, translate the general meaning.” use the openai js library, some sample code pasted below.
  • the user can paste in their openai key in a settings pane, it gets stored in localstorage

One of the iterative edits for the texting app:

make the following edits and output new code:

  • write a css file and style the app to look professional and modern.
  • arrange the text thread in a tall box on the left, and then the new message and translation vertically stacked to the right
  • give the app a title: Japanese Texting Helper
  • hide the openai key behind a settings section that gets toggled open/closed at the bottom of the app

Malleable software in the age of LLMs

2023-03-26 03:05:00

A robot and a human coding together. Image from Midjourney.

It’s been a wild few weeks for large language models. OpenAI released GPT-4, which shows impressive gains on a variety of capabilities including coding. Microsoft Research released a paper showing how GPT-4 was able to produce quite sophisticated code like a 3D video game without much prompting at all. OpenAI also released plugins for ChatGPT, which are a productized version of the ReAct tool usage pattern I played around with in my previous post about querying NBA statistics using GPT.

Amid all this chaos, many people are naturally wondering: how will LLMs affect the creation of software?

One answer to that question is that LLMs will make skilled professional developers more productive. This is a safe bet since GitHub Copilot has already shown it’s viable. It’s also a comforting thought, because developers can feel secure in their future job prospects, and it doesn’t suggest structural upheaval in the way software is produced or distributed 😉

However, I suspect this won’t be the whole picture. While I’m confident that LLMs will become useful tools for professional programmers, I also think focusing too much on that narrow use risks missing the potential for bigger changes ahead.

Here’s why: I think it’s likely that soon all computer users will have the ability to develop small software tools from scratch, and to describe modifications they’d like made to software they’re already using. In other words, LLMs will represent a step change in tool support for end-user programming: the ability of normal people to fully harness the general power of computers without resorting to the complexity of normal programming. Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code; now that bottleneck is rapidly opening up thanks to LLMs.

If this hypothesis indeed comes true, we might start to see some surprising changes in the way people use software:

  • One-off scripts: Normal computer users have their AI create and execute scripts dozens of times a day, to perform tasks like data analysis, video editing, or automating tedious tasks.
  • One-off GUIs: People use AI to create entire GUI applications just for performing a single specific task—containing just the features they need, no bloat.
  • Build don’t buy: Businesses develop more software in-house that meets their custom needs, rather than buying SaaS off the shelf, since it’s now cheaper to get software tailored to the use case.
  • Modding/extensions: Consumers and businesses demand the ability to extend and mod their existing software, since it’s now easier to specify a new feature or a tweak to match a user’s workflow.
  • Recombination: Take the best parts of the different applications you like best, and create a new hybrid that composes them together.

All of these changes would go beyond just making our current software production process faster. They would change when software gets created, by whom, and for what purpose.

LLMs + malleable software: a series

Phew, there’s a lot to unpack here. 😅

In a series of posts starting with this one, I’ll dig in and explore these kinds of broad changes LLMs might enable in the creation and distribution of software, and even more generally in the way people interact with software. Some of the questions I’ll cover include:

  • Interaction models: Which interaction model will make sense for which tasks? When will people want a chatbot, a one-off script, or a custom throwaway GUI?
  • Software customization: How might LLMs enable malleable software that can be taken apart, recombined, and extended by users?
  • Intent specification: How will end-users work interactively with LLMs to specify their intent?
  • Fuzzy translators: How might the fuzzy data translation capabilities of LLMs enable shared data substrates which weren’t possible before?
  • User empowerment: How should we think about empowerment and agency vs delegation and automation in the age of LLMs?

If you want to subscribe to get future posts about these ideas, you can sign up for my email newsletter or subscribe via RSS. Posts should be fairly infrequent, monthly at most.

When to chatbot, when to not?

Today, we’ll start with a basic question: how will user interaction models evolve in the LLM era? In particular, what kinds of tasks might be taken over by chatbots? I think the answer matters a lot when we consider different ways to empower end-users.

As a preview of where this post is headed: I’ll argue that, while ChatGPT is far more capable than Siri, there are many tasks which aren’t well-served by a chat UI, for which we still need graphical user interfaces. Then I’ll discuss hybrid interaction models where LLMs help us construct UIs.

By the end, we’ll arrive at a point in the design space I find intriguing: open-ended computational media, directly learnable and moldable by users, with LLMs as collaborators within that media. And at that point this weird diagram will make sense 🙃:

One disclaimer before diving in: expect a lot of speculation and uncertainty. I’m not even trying to predict how fast these changes will happen, since I have no idea. The point is to imagine how a reasonable extrapolation from current AI might support new kinds of interactions with computers, and how we might apply this new technology to maximally empower end-users.

Opening up the programming bottleneck

Why might LLMs be a big deal for empowering users with computation?

For decades, pioneers of computing have been reaching towards a vision of end-user programming: normal people harnessing the full, general power of computers, not just using prefabricated applications handed down to them by the programmer elite. As Alan Kay wrote in 1984: “We now want to edit our tools as we have previously edited our documents.”

There are many manifestations of this idea. Modern examples of end-user programming systems you may have used include spreadsheets, Airtable, Glide, or iOS Shortcuts. Older examples include HyperCard, Smalltalk, and Yahoo Pipes. (See this excellent overview by my collaborators at Ink & Switch for a historical deep dive)

Although some of these efforts have been quite successful, until now they’ve also been limited by a fundamental challenge: it’s really hard to help people turn their rough ideas into formal executable code. System designers have tried super-high-level languages, friendly visual editors and better syntax, layered levels of complexity, and automatically generating simple code from examples. But it’s proven hard to get past a certain ceiling of complexity with these techniques.

Here’s one example of the programming bottleneck in my own work. A few years ago, I developed an end-user programming system called Wildcard which would let people customize any website through a spreadsheet interface. For example, in this short demo you can see a user sorting articles on Hacker News in a different order, and then adding read times to the articles in the page, all by manipulating a spreadsheet synced with the webpage.

Neat demo, right?

But if you look closely, there are two slightly awkward programming bottlenecks in this system. First, the user needs to be able to write small spreadsheet formulas to express computations. This is a lot easier than learning a full-fledged programming language, but it’s still a barrier to initial usage. Second, behind the scenes, Wildcard requires site-specific scraping code to connect the spreadsheet to the website. In theory these adapters could be written and maintained by developers and shared among a community of end-users, but that’s a lot of work.
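
To give a rough sense of what such an adapter does, here’s an illustrative sketch: it scrapes rows out of the page’s DOM into records the spreadsheet can display. The selectors are placeholders, and this is not Wildcard’s actual adapter code.

```typescript
// Illustrative sketch of a site-specific adapter: scrape rows out of the
// page's DOM into records a spreadsheet can display. The selectors are
// placeholders -- this is not Wildcard's actual code.
interface ScrapedRow {
  el: HTMLElement;                  // the DOM element backing this spreadsheet row
  values: Record<string, string>;   // column name -> cell value
}

function scrapeArticleRows(): ScrapedRow[] {
  return Array.from(document.querySelectorAll("tr.article-row")).map((row) => ({
    el: row as HTMLElement,
    values: {
      title: row.querySelector("a.title")?.textContent?.trim() ?? "",
      link: row.querySelector("a.title")?.getAttribute("href") ?? "",
    },
  }));
}
```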

Now, with LLMs, these kinds of programming bottlenecks are less of a limiting factor. Turning a natural language specification into web scraping code or a little spreadsheet formula is exactly the kind of code synthesis that current LLMs can already achieve. We could imagine having the LLM help with scraping code and generating formulas, making it possible to achieve the demo above without anyone writing manual code. When I made Wildcard, this kind of program synthesis was just a fantasy, and now it’s rapidly becoming a reality.

This example also suggests a deeper question, though. If we have LLMs that can modify a website for us, why bother with the Wildcard UI at all? Couldn’t we just ask ChatGPT to re-sort the website for us and add read times?

I don’t think the answer is that clear cut. There’s a lot of value to seeing the spreadsheet as an alternate view of the underlying data of the website, which we can directly look at and manipulate. Clicking around in a table and sorting by column headers feels good, and is faster than typing “sort by column X”. Having spreadsheet formulas that the user can directly see and edit gives them more control.

The basic point here is that user interfaces still matter. We can imagine specific, targeted roles for LLMs that help empower users to customize and build software, without carelessly throwing decades of interaction design out the window.

Next we’ll dive deeper into this question of user interfaces vs. chatbots. But first let’s briefly go on a tangent and ask: can GPT really code?

Cmon, can it really code though?

How good is GPT-4’s coding ability today? It’s hard to summarize in general terms. The best way to understand the current capabilities is to see many positive and negative examples to develop some fuzzy intuition, and ideally to try it yourself.

It’s not hard to find impressive examples. Personally, I’ve had success using GPT-4 to write one-off Python code for data processing, and I watched my wife use ChatGPT to write some Python code for scraping data from a website. A recent paper from Microsoft Research found GPT-4 could generate a sophisticated 3D game running in the browser, with a zero-shot prompt (shown below).

It’s also not hard to find failures. In my experience, GPT-4 still gets confused when solving relatively simple algorithms problems. I tried to use it the other day to make a React application for performing some simple video editing tasks, and it got 90% of the way there but couldn’t get some dragging/resizing interactions quite right. It’s very far from perfect. In general, GPT-4 feels like a junior developer who is very fast at typing and knows about a lot of libraries, but is careless and easily confused.

Depending on your perspective, this summary might seem miraculous or underwhelming. If you’re skeptical, I want to point out a couple reasons for optimism which weren’t immediately obvious to me.

First, iteration is a natural part of the process with LLMs. When the code doesn’t work the first time, you can simply paste in the error message you got, or describe the unexpected behavior, and GPT will adjust. For one example, see this Twitter thread where a designer (who can’t write game code) creates a video game over many iterations. There were also some examples of iterating with error messages in the GPT-4 developer livestream. When you think about it, this mirrors the way humans write code; it doesn’t always work on the first try.

A joke that comes up often among AI-skeptical programmers goes something like this: “Great, now no one will have to write code, they’ll only have to write exact, precise specifications of computer behavior…” (implied: oh wait, that is code!) I suspect we’ll look back on this view as short-sighted. LLMs can iteratively work with users and ask them questions to develop their specifications, and can also fill in underspecified details using common sense. This doesn’t mean those are trivial challenges, but I expect to see progress on those fronts. I’ve already had success prompting GPT-4 to ask me clarifying questions about my specifications.

Another important point: GPT-4 seems to be a lot better than GPT-3 at coding, per the MSR paper and my own limited experiments. The trend line is steep. If we’re not plateauing yet, then it’s very plausible that the next generation of models will be significantly better once again.

Coding difficulty varies by context, and we might expect to see differences between professional software engineering and end-user programming. On the one hand, one might expect end-user programming to be easier than professional coding, because lots of tasks can be achieved with simple coding that mostly involves gluing together libraries, and doesn’t require novel algorithmic innovation.

On the other hand, failures are more consequential when a novice end-user is driving the process than when a skilled programmer is wielding control. The skilled programmer can laugh off the LLM’s silly suggestion, write their own code, or apply their own skill to work with the LLM to debug. An end-user is more likely to get confused or not even notice problems in the first place. These are real problems, but I don’t think they’re intractable. End-users already write messy buggy spreadsheet programs all the time, and yet we somehow muddle through—even if that seems offensive or perhaps even immoral to a correctness-minded professional software developer.

Chat is an essentially limited interaction

Now, with those preliminaries out of the way, let’s move on to the main topic of this post: how will interaction models evolve in this new age of computing? We’ll start by assessing chat as an interaction mode. Is the future of computing just talking to our computers in natural language?

To think clearly about this question, I think it’s important to notice that chatbots are frustrating for two distinct reasons. First, it’s annoying when the chatbot is narrow in its capabilities (looking at you Siri) and can’t do the thing you want it to do. But more fundamentally than that, chat is an essentially limited interaction mode, regardless of the quality of the bot.

To show why, let’s pick on a specific example: this tweet from OpenAI’s Greg Brockman during the ChatGPT Plugins launch this week, where he uses ChatGPT to trim the first 5 seconds of a video using natural language:

On the one hand, this is an extremely impressive demo for anyone who knows how computers work, and I’m excited about all the possibilities it implies.

And yet… in another sense, this is also a silly demo, because we already have direct manipulation user interfaces for trimming videos, with rich interactive feedback. For example, consider the iPhone UI for trimming videos, which offers rich feedback and fine control over exactly where to trim. This is much better than going back and forth over chat saying “actually trim just 4.8 seconds please”!

Now, I get that the point of Greg’s demo wasn’t just to trim a video, it was to gesture at an expanse of possibilities. But there’s still something important to notice here: a chat interface is not only quite slow and imprecise, but also requires conscious awareness of your thought process.

When we use a good tool—a hammer, a paintbrush, a pair of skis, or a car steering wheel—we become one with the tool in a subconscious way. We can enter a flow state, apply muscle memory, achieve fine control, and maybe even produce creative or artistic output. Chat will never feel like driving a car, no matter how good the bot is. In their 1986 book Understanding Computers and Cognition, Terry Winograd and Fernando Flores elaborate on this point:

In driving a car, the control interaction is normally transparent. You do not think “How far should I turn the steering wheel to go around that curve?” In fact, you are not even aware (unless something intrudes) of using a steering wheel…The long evolution of the design of automobiles has led to this readiness-to-hand. It is not achieved by having a car communicate like a person, but by providing the right coupling between the driver and action in the relevant domain (motion down the road).

Consultants vs apps

Let’s zoom out a bit on this question of chat vs direct manipulation. One way to think about it is to reflect on what it’s like to interact with a team of human consultants over Slack, vs. just using an app to get the job done. Then we’ll see how LLMs might play in to that picture.

So, imagine you want to get some metrics about your business, maybe a sales forecast for next quarter. How do you do it?

One approach is to ask your skilled team of business analysts. You can send them a message asking your question. It probably takes hours to get a response because they’re busy, and it’s expensive because you’re paying for people’s time. Seems like overkill for a simple task, but the key benefit is flexibility: you’re hoping that the consultants have a broad, general intelligence and can perform lots of different tasks that you ask of them.

In contrast, another option is to use a self-serve analytics platform where you can click around in some dashboards. When this works, it’s way faster and cheaper than bothering the analysts. The dashboards offer you powerful direct manipulation interactions like sorting, filtering, and zooming. You can quickly think through the problem yourself.

So what’s the downside? Using the app is less flexible than working with the bespoke consultants. The moment you want to perform a task which this analytics platform doesn’t support, you’re stuck asking for help or switching to a different tool. You can try sending an email to the developers of the analysis platform, but usually nothing will come of it. You don’t have a meaningful feedback loop with the developers; you’re left wishing software were more flexible.

Now with that baseline comparison established, let’s imagine how LLMs might fit in.

Assume that we could replace our human analyst team with ChatGPT for the tasks we have in mind, while preserving the same degree of flexibility. (This isn’t true of today’s models, but will become increasingly true to some approximation.) How would that change the picture? Well, for one thing, the LLM is a lot cheaper to run than the humans. It’s also a lot faster at responding since it’s not busy taking a coffee break. These are major advantages. But still, dialogue back and forth with it takes seconds, if not minutes, of conscious thought—much slower than feedback loops you have with a GUI or a steering wheel.

Next, consider LLMs applied to the app model. What if we started with an interactive analytics application, but this time we had a team of LLM developers at our disposal? As a start, we could ask the LLM questions about how to use the application, which could be easier than reading documentation.

But more profoundly, the LLM developers could go beyond answering questions and update the application itself. When we give feedback about adding a new feature, our request wouldn’t get lost in an infinite queue. They would respond immediately, and we’d have some back and forth to get the feature implemented. Of course, the new functionality doesn’t need to be shipped to everyone; it can just be enabled for our team. This is economically viable now because we’re not relying on a centralized team of human developers to make the change.

Note that this is just a rough vision at this point. We’re missing a lot of details about how this model might be made real. A lot of the specifics of how software is built today make these kinds of on-the-fly customizations quite challenging.

The important thing, though, is that we’ve now established two loops in the interaction. On the inner loop, we can become one with the tool, using fast direct manipulation interfaces. On the outer loop, when we hit limits of the existing application, we can consciously offer feedback to the LLM developers and get new features built. This preserves the benefits of UIs, while adding more flexibility.
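To make the two loops a bit more concrete, here’s a minimal sketch in code. It’s purely hypothetical: `generateFeatureWithLLM` stands in for a call to a code-generating model, and the “feature” it returns is hard-coded for illustration.

```typescript
// A hypothetical sketch of the two interaction loops. Nothing here is a
// real product API: generateFeatureWithLLM stands in for a call to a
// code-generating model and simply returns a canned extension.

type Row = { region: string; sales: number };

// Inner loop: fast direct manipulation on features that already exist.
function filterByRegion(rows: Row[], region: string): Row[] {
  return rows.filter((r) => r.region === region);
}

// Outer loop: feedback becomes a new feature, enabled only for our team.
type Extension = { name: string; run: (rows: Row[]) => number };

async function generateFeatureWithLLM(request: string): Promise<Extension> {
  console.log(`(pretend an LLM wrote code for: "${request}")`);
  return {
    name: "Total sales",
    run: (rows) => rows.reduce((sum, r) => sum + r.sales, 0),
  };
}

const teamExtensions: Extension[] = [];

async function main() {
  const data: Row[] = [
    { region: "West", sales: 120 },
    { region: "East", sales: 80 },
  ];

  // Inner loop: immediate, no conversation required.
  console.log(filterByRegion(data, "West"));

  // Outer loop: we hit a limit, so we ask for a new capability.
  const ext = await generateFeatureWithLLM("add a total sales metric");
  teamExtensions.push(ext);
  console.log(ext.name, "=", ext.run(data));
}

main();
```

The specific API doesn’t matter; the point is that the inner loop stays instantaneous, and the outer loop only kicks in when we hit a wall.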

From apps to computational media

Does this double interaction loop remind you of anything?

Think about how a spreadsheet works. If you have a financial model in a spreadsheet, you can try changing a number in a cell to assess a scenario—this is the inner loop of direct manipulation at work.

But, you can also edit the formulas! A spreadsheet isn’t just an “app” focused on a specific task; it’s closer to a general computational medium which lets you flexibly express many kinds of tasks. The “platform developers”—the creators of the spreadsheet—have given you a set of general primitives that can be used to make many tools.

We might draw the double loop of the spreadsheet interaction like this. You can edit numbers in the spreadsheet, but you can also edit formulas, which edits the tool:

So far, I’ve labeled the spreadsheet in the above diagram as “kinda” flexible. Why? Well, when any individual user is working with a spreadsheet, it’s easy for them to hit the limits of their own knowledge. In real life, though, spreadsheets are way more flexible than this, because this diagram is missing a critical component of spreadsheet usage: collaboration.

Collaboration with local developers

Most teams have a mix of domain experts and technical experts who work together to build a spreadsheet. And, importantly, the people building a spreadsheet together have a very different relationship than a typical “developer” and “end-user”. Bonnie Nardi and James Miller explain in their 1990 paper on collaborative spreadsheet development, describing Betty, a CFO who knows finance, and Buzz, an expert in programming spreadsheets:

Betty and Buzz seem to be the stereotypical end-user/developer pair, and it is easy to imagine their development of a spreadsheet to be equally stereotypical: Betty specifies what the spreadsheet should do based on her knowledge of the domain, and Buzz implements it.

This is not the case. Their cooperative spreadsheet development departs from this scenario in two important ways:

(1) Betty constructs her basic spreadsheets without assistance from Buzz. She programs the parameters, data values and formulas into her models. In addition, Betty is completely responsible for the design and implementation of the user interface. She makes effective use of color, shading, fonts, outlines, and blank cells to structure and highlight the information in her spreadsheets.

(2) When Buzz helps Betty with a complex part of the spreadsheet such as graphing or a complex formula, his work is expressed in terms of Betty’s original work. He adds small, more advanced pieces of code to Betty’s basic spreadsheet; Betty is the main developer and he plays an adjunct role as consultant.

This is an important shift in the responsibility of system design and implementation. Non-programmers can be responsible for most of the development of a spreadsheet, implementing large applications that they would not undertake if they had to use conventional programming techniques. Non-programmers may never learn to program recursive functions and nested loops, but they can be extremely productive with spreadsheets. Because less experienced spreadsheet users become engaged and involved with their spreadsheets, they are motivated to reach out to more experienced users when they find themselves approaching the limits of their understanding of, or interest in, more sophisticated programming techniques.

So, a more accurate diagram of spreadsheet usage includes “local developers” like Buzz, who provide another outer layer of iteration, where the user can get help molding their tools. Because they’re on the same team as the user, it’s a lot easier to get help than appealing to third-party application or platform developers. And most importantly, over time, the user naturally learns to use more features of spreadsheets on their own, since they’re involved in the development process.

In general, the local developer makes the spreadsheet more flexible, although they also introduce cost, because now you have a human technical expert in the mix. What if you don’t have a local spreadsheet expert handy, perhaps because you can’t afford to hire that person? Then you’re back to doing web searches for complex spreadsheet programming…

In those cases, what if you had an LLM play the role of the local developer? That is, the user mainly drives the creation of the spreadsheet, but asks for technical help with some of the formulas when needed. The LLM wouldn’t just create an entire solution; it would also teach the user how to build it themselves next time.

This picture shows a world that I find pretty compelling. There’s an inner interaction loop that takes advantage of the full power of direct manipulation. There’s an outer loop where the user can also more deeply edit their tools within an open-ended medium. They can get AI support for making tool edits, and grow their own capacity to work in the medium. Over time, they can learn things like the basics of formulas, or how a VLOOKUP works. This structural knowledge helps the user think of possible use cases for the tool, and also helps them audit the output from the LLMs.
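As a rough sketch of what that exchange could look like, imagine the spreadsheet asking a model for help on the user’s behalf. Everything here is hypothetical: `askModel` stands in for a real model call, and the canned response just shows the shape of a useful answer, a formula plus an explanation the user can learn from.

```typescript
// A hypothetical sketch of asking an "LLM local developer" for help with a
// formula. askModel stands in for a real model call; the canned response
// shows the intended shape: a formula plus an explanation to learn from.

type FormulaHelp = { formula: string; explanation: string };

async function askModel(prompt: string): Promise<FormulaHelp> {
  // Canned response standing in for a real LLM call.
  return {
    formula: "=VLOOKUP(A2, Prices!A:B, 2, FALSE)",
    explanation:
      "Looks up the value in A2 within the first column of Prices!A:B " +
      "and returns the matching value from the second column.",
  };
}

async function helpWithSpreadsheet(userRequest: string) {
  const prompt =
    `You are a spreadsheet expert on my team. ${userRequest} ` +
    `Give me a formula I can paste in, plus a short explanation ` +
    `so I can write something similar myself next time.`;
  const help = await askModel(prompt);
  console.log("Formula:", help.formula);
  console.log("Why it works:", help.explanation);
}

helpWithSpreadsheet("I need to pull each product's price into my sales sheet.");
```

The teaching happens in the explanation: next time, the user might write the lookup themselves.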

In a ChatGPT world, the user is left entirely dependent on the AI, without any understanding of its inner mechanism. In a computational medium with AI as assistant, the user’s reliance on the AI gently decreases over time as they become more comfortable in the medium.

If you like this diagram too, then it suggests an interesting opportunity. Until now, the design of open-ended computational media has been restricted by the programming bottleneck problem. LLMs seem to offer a promising way to more flexibly turn natural language into code, which then raises the question: what kinds of powerful computational media might be a good fit for this new situation?

Demos of on-the-fly UI

Update 3/31: In the days after I originally posted this essay, I found a few neat demos on Twitter from people exploring ideas in this space; I’ve added them here.

OK, enough diagrams, what might on-the-fly UI generation actually feel like to use?

Here’s Sean Grove demonstrating on-the-fly generation of an interactive table view, a map view with a lat/long output, and a simple video editing UI:

And here’s Vasek Mlejnsky showing an IDE that can create a form for submitting server requests:

Finally, here’s a little video mockup I made of GPT answering a question by returning an interactive spreadsheet. Note how I can tweak numbers and get immediate feedback. I can also inspect the underlying formulas and ask the model to explain them to me to level up my spreadsheet knowledge. (GPT actually did generate this spreadsheet data; I just copied the raw data into Excel to demonstrate the interactive element.)

I think these demos nicely illustrate the general promise of on-the-fly UI, but there’s still a ton of work ahead. One particular challenge: interesting UIs usually can’t be generated in a single shot; there has to be an iterative process with the user. In my experience, that iteration process is still often quite rough.
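For a sense of the plumbing behind demos like these, here’s one hypothetical approach: instead of prose, the model returns a small declarative spec, and the client renders whatever arrives. `requestUISpec` and the `UISpec` type are assumptions for illustration, not anyone’s actual API.

```typescript
// A hypothetical sketch of on-the-fly UI generation: the model returns a
// small declarative spec instead of prose, and the client renders it.
// requestUISpec stands in for a real model call; the spec type is just
// one possible shape, not any real system's API.

type UISpec =
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "slider"; label: string; min: number; max: number };

async function requestUISpec(question: string): Promise<UISpec> {
  // A real system would prompt a model to emit JSON matching UISpec,
  // then validate it before rendering. Here the answer is canned.
  console.log(`(pretend a model answered: "${question}")`);
  return {
    kind: "table",
    columns: ["Quarter", "Forecast"],
    rows: [
      ["Q1", "120"],
      ["Q2", "135"],
    ],
  };
}

function render(spec: UISpec): string {
  switch (spec.kind) {
    case "table":
      return [spec.columns.join(" | "), ...spec.rows.map((r) => r.join(" | "))].join("\n");
    case "slider":
      return `[slider] ${spec.label} (${spec.min}-${spec.max})`;
  }
}

requestUISpec("Show me a sales forecast for next quarter").then((spec) => {
  console.log(render(spec));
});
```

A real system would also need to validate the model’s output before rendering it, and support iterating on the result with the user, which is exactly where things get hard.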

Next time: extensible software

That’s it for now. There are a lot of questions in the space that we still haven’t covered.

Next time I plan to discuss the architectural foundations required to make GUI applications extensible and composable by people using LLMs.

If you’re interested in that, you can sign up for my email newsletter or subscribe via RSS.

Related reading

Quick reads:

Deep, deep dives:

Designing and Programming Malleable Software: Philip Tchernavskij’s 2019 PhD thesis, which coined the term Malleable Software, and brilliantly motivates and defines the problem. “Malleable software aims to increase the power of existing adaptation behaviors by allowing users to pull apart and re-combine their interfaces at the granularity of individual UI elements”

The State of the Art in End-User Software Engineering: an academic paper from 2011 that illustrates many of the challenges ahead for supporting normal people in building software. “Although these end-user programmers may not have the same goals as professional developers, they do face many of the same software engineering challenges, including understanding their requirements, as well as making decisions about design, reuse, integration, testing, and debugging.”

The Malleable Systems Catalog, a list of projects exploring user-editable software, curated by J. Ryan Stinnett and co.