Hypercritical

Written by John Siracusa, a software developer, podcaster, and writer who started writing for Ars Technica in 1999.


Hyperspace Update

2025-05-01 23:43:31

Hyperspace icon on a star field

Two months ago, I launched Hyperspace, a Mac app for reclaiming disk space without removing files. The feature set of version 1.0 was intentionally very conservative. As I wrote in my launch post, Hyperspace modifies files that it did not create and does not own. This is an inherently risky proposition.

The first release of Hyperspace mitigated these risks, in part, by entirely avoiding certain files and file system locations. I knew lifting these limitations would be a common request from potential customers. My plan was to launch 1.0 with the safest possible feature set, then slowly expand the app’s capabilities until all these intentional 1.0 limitations were gone.

With the release of Hyperspace 1.3 earlier this week, I have accomplished that goal. Here’s the timeline for overcoming the three major 1.0 feature limitations:

  • 1.0: February 24, 2025 - Launch
  • 1.1: March 14, 2025 - Packages
  • 1.2: April 3, 2025 - Cloud storage
  • 1.3: April 28, 2025 - Libraries

Here’s an explanation of those limitations, why they existed, and what it took to overcome them.

Packages

A “package” is a directory that is presented to the user as a file. For example, an .rtfd document (a “Rich Text Document With Attachments”) created by TextEdit is actually a directory that contains an .rtf file plus any attachments (e.g., images included in the document). The Finder displays and handles this .rtfd directory as if it were a single file.
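
For developers curious how a scanner might recognize packages on disk, here's a minimal sketch using Foundation's isPackage resource value. It's illustrative only, not Hyperspace's actual code, and the home-directory starting point is just an example.

    import Foundation

    // Minimal sketch: walk a directory tree and report packages.
    // The .isPackageKey resource value is true for directories that the
    // system presents as a single document (e.g. .rtfd, .photoslibrary).
    func isPackage(_ url: URL) -> Bool {
        (try? url.resourceValues(forKeys: [.isPackageKey]))?.isPackage ?? false
    }

    let root = FileManager.default.homeDirectoryForCurrentUser
    if let enumerator = FileManager.default.enumerator(
        at: root,
        includingPropertiesForKeys: [.isPackageKey],
        options: [.skipsHiddenFiles],
        errorHandler: nil
    ) {
        for case let url as URL in enumerator where isPackage(url) {
            print("Package:", url.path)
            enumerator.skipDescendants() // treat the package as one item
        }
    }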

For a package to work, all its contents must be intact. Hyperspace works hard to handle and recover from all sorts of errors, but in the rare case that manual intervention is required, asking the user to fix a problem within a package is undesirable. Since packages appear as single files, most people are not accustomed to cracking them open and poking around in their guts.

This may all seem esoteric, but there are some kinds of packages that are widely used and often contain vast amounts of data. Let’s start with the big one: Apple Photos libraries are packages. So are iMovie libraries, some Logic projects, and so on. These packages are all ideal targets for Hyperspace in terms of potential space savings. But they also often contain some of people’s most precious data.

For the most part, files within packages don’t need to be treated any differently than “normal” files. The delay in lifting this limitation was to allow the app to mature a bit first. Though I had a very large set of beta testers, there’s nothing like real customer usage to find bugs and edge cases. After five 1.0.x releases, I finally felt confident enough in Hyperspace’s basic functionality to allow access to packages.

I did so cautiously, however, by adding settings to enable package access, but leaving them turned off by default. I also provided separate settings for scanning inside packages and reclaiming files within packages. Enabling scanning but not reclamation within packages allows files within packages to be used as “source files”, which are never modified.

Finally, macOS requires special permissions for accessing Photos libraries, so there’s a separate setting for that as well.

Oh, and there’s one more common package type that Hyperspace still ignores: applications (i.e., .app packages). The contents of app packages are subject to Apple’s code signing system and are very sensitive to changes. I still might tackle apps someday, but it hasn’t been a common customer request.

Cloud Storage

Any file under the control of Apple’s “file provider” system is considered to be backed by cloud storage. In the past, iCloud Drive was the only example. Today, third-party services also use Apple’s file provider system. Examples include Microsoft OneDrive, Google Drive, and some versions of Dropbox.

There’s always the potential for competition between Hyperspace and other processes when accessing a given file. But in the case of cloud storage, we know there’s some other process that has its eye on every cloud-backed file. Hyperspace must tread lightly. Also, files backed by cloud storage might not actually be fully downloaded to the local disk. And even if they are, they might not be up-to-date.

Unlike files within packages, files backed by cloud storage are not just like other files. They require special treatment using different APIs. After nailing down “normal” file handling, including files within packages, I was ready to tackle cloud storage.
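
As a rough illustration of that "special treatment", here's a sketch of the kind of pre-flight check an app might perform using the ubiquitous-item resource values. This is not Hyperspace's actual logic, and these are the iCloud-oriented keys; third-party file providers may require additional checks beyond them.

    import Foundation

    // Sketch: only treat a file as a reclamation candidate if we can get
    // an affirmative answer that its local copy is complete and current.
    func isSafeLocalCandidate(_ url: URL) -> Bool {
        guard let values = try? url.resourceValues(forKeys: [
            .isUbiquitousItemKey,
            .ubiquitousItemDownloadingStatusKey
        ]) else {
            return false // no affirmative assurance: skip the file
        }

        // Not cloud-backed at all: treat it as a normal local file.
        guard values.isUbiquitousItem == true else { return true }

        // Cloud-backed: proceed only if it is fully downloaded and
        // matches the latest version on the server.
        return values.ubiquitousItemDownloadingStatus == .current
    }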

In the end, there were no major problems. Apple’s APIs for wrangling cloud-backed files mostly seem to work, with only a few oddities. And if Hyperspace can’t get an affirmative assurance from those APIs that a file is a valid candidate for reclamation, it will err on the side of caution and skip the file instead.

Libraries

In the early years of Mac OS X, there were tragicomic tales of users finding a folder named “Library” in their home directory and deciding they didn’t need it or its contents, then moving them to the Trash. Today, macOS hides that folder by default—for good reason. Its contents are essential for the correct functionality of your Mac! The same goes for the “Library” directory at the top level of the boot volume.

Hyperspace avoided Library folders for so long because their contents are so important, and because those contents are updated with surprising frequency. As with packages, it was important for me to have confidence in the basic functionality of Hyperspace before I declared open season on Library folders.

This capability was added last because the other two were more highly requested. As usual, Library access is enabled with a setting, which is off by default. Due to the high potential for contention (running apps are constantly fiddling with their files within the Library folder), this is probably the riskiest of the three major features, which is another reason I saved it for last. I might not have added it at all, if not for the fact that Library folders are a surprisingly rich source for space savings.

The Future

There’s more to come, including user interface improvements and an attempt to overcome some of the limitations of sandboxing, potentially allowing Hyperspace to reclaim space across more than one user account. (That last one is a bit of a “stretch goal”, but I’ve done it before.)

If you want to know more about how Hyperspace works, please read the extensive documentation. If you're interested in beta testing future versions of Hyperspace, email me.

In some ways, Hyperspace version 1.3 is what I originally envisioned when I started the project. But software development is never a straight line. It’s a forest. And like a forest it’s easy to lose your way. Launching with a more limited version 1.0 led to some angry reviews and low ratings in the Mac App Store, but it made the app safer from day one, and ultimately better for every user, now and in the future.

Love, Death & Robots

2025-04-11 00:43:26

A frame from Love, Death & Robots, season 3, episode 3: “The Very Pulse of the Machine”. It shows a woman in a space suit on a yellow-tinted planet looking anxious and expectant.

Love, Death & Robots is an animated anthology series on Netflix. Each episode is a standalone story, though there is the barest of cross-season continuity in the form of one story featuring characters from a past season.

I love animation, but I’m hesitant to recommend Love, Death & Robots to casual viewers for a couple of reasons. First, this show is not for kids. It features a lot of violence, gore, nudity, and sex. That’s not what most people expect from animation.

Second, the quality is uneven. I don’t mean the quality of the animation, which is usually excellent. I mean how well they work as stories. Each episode has only a ten- to fifteen-minute runtime, during which it has to introduce its characters, its (usually sci-fi) setting, and then tell a satisfying story. It’s a challenging format.

Three seasons of Love, Death & Robots have been released since 2019. With season four set to debut in May, I thought I’d take a shot at convincing more people to give this show a chance. This is a rare case where I don’t recommend starting with season 1, episode 1 and viewing in order. The not-so-great episodes will surely drive most people away. Instead, I’m going to tell you where the gems are.

Here’s my list of the very best episodes of Love, Death & Robots in seasons 1–3. They’re standalone stories, so you can watch them in any order, but (back on brand) I do recommend that you watch them in the order listed below.

One last warning: Though not every episode is filled with gore and violence, most of them are—often including sexual violence. If this is not something you want to see, then I still recommend watching the handful of episodes that avoid these things. Remember, each episode is a standalone story, so watching even just one is fine.

  • Sonnie’s Edge (Season 1, Episode 1) - This is a perfect introduction to the series. It’s grim, violent, gory, beautifully animated, but with some unexpected emotional resonance.

  • Three Robots (Season 1, Episode 2) - The characters introduced in this story have become the unofficial mascots of the series. You’ll be seeing them again. The episode is lighthearted, cute, and undercut by a decidedly grim setting.

  • Good Hunting (Season 1, Episode 8) - Yes, traditional 2D animation is still a thing! But don’t expect something Disney-like. This story combines fantasy, myth, sci-fi, sex, love, death, and…well, cyborgs, at least.

  • Lucky 13 (Season 1, Episode 13) - If you like sci-fi action as seen in movies like Aliens and Edge of Tomorrow, this is the episode for you. As expected for this series, there’s a bit of a cerebral and emotional accent added to the stock sci-fi action.

  • Zima Blue (Season 1, Episode 14) - This is my favorite episode of the series, but it’s a weird one. I’m sure it doesn’t work at all for some people, but it got me. There’s no violence, sex, or gore—just a single, simple idea artfully realized.

  • Snow in the Desert (Season 2, Episode 4) - There’s a full movie’s worth of story crammed into this 18-minute episode, including some nice world-building and a lot of familiar themes and story beats. There’s nothing unexpected, but the level of execution is very high.

  • Three Robots: Exit Strategies (Season 3, Episode 1) - Our lovable robot friends are at it again, with an extra dose of black humor.

  • Bad Travelling (Season 3, Episode 2) - Lovecraftian horror on the high seas. It’s extremely dark and extremely gross.

  • The Very Pulse of the Machine (Season 3, Episode 3) - I guess I like the sappy, weird ones the best, because this is my second-favorite episode. It combines the kind of sci-fi ideas usually only encountered in novels with an emotional core. The animation is a beautiful blend of 3D modeling and cel shading. (As seen in Frame Game #75)

  • Swarm (Season 3, Episode 6) - I’ll see your Aliens-style sci-fi and raise you one pile of entomophobia and body horror. Upsetting and creepy.

  • In Vaulted Halls Entombed (Season 3, Episode 8) - “Space marines” meets Cthulhu. It goes about as well as you’d expect for our heroes.

  • Jibaro (Season 3, Episode 9) - The animation style in this episode is bonkers. I have never seen anything like it. The story, such as it is, is slight. This episode makes the list entirely based on its visuals, which are upsetting and baffling and amazing in equal measure. I’m not sure I even “like” this episode, but man, is it something.

If you’ve read all this and still can’t tell which are the “safest” episodes for those who want to avoid gore, sex, and violence, I’d recommend Three Robots (S1E2), Zima Blue (S1E14), Three Robots: Exit Strategies (S3E1), and The Very Pulse of the Machine (S3E3). But remember, none of these episodes are really suitable for children.

If you watch and enjoy any of these, then check out the rest of the episodes in the series. You may find some that you like more than any of my favorites.

Also, if you see these episodes in a different order in your Netflix client, the explanation is that Netflix rearranges episodes based on your viewing habits and history. Each person may see a different episode order within Netflix. Since viewing order doesn’t really matter in an anthology series, this doesn’t change much, but it is unexpected and, I think, ill-advised. Regardless, the links above should take you directly to each episode.

I’m so excited that a series like this even exists. It reminds me of Liquid Television from my teen years: a secret cache of odd, often willfully transgressive animation hiding in plain sight on a mainstream media platform. They’re not all winners, but I treasure the ones that succeed on their own terms.

Hyperspace

2025-02-25 23:00:10

Hyperspace screenshot

My interest in file systems started when I discovered how type and creator codes[1] and resource forks contributed to the fantastic user interface on my original Macintosh in 1984. In the late 1990s, when it looked like Apple might buy Be Inc. to solve its operating system problems, the Be File System was the part I was most excited about. When Apple bought NeXT instead and (eventually) created Mac OS X, I was extremely enthusiastic about the possibility of ZFS becoming the new file system for the Mac. But that didn’t happen either.

Finally, at WWDC 2017, Apple announced Apple File System (APFS) for macOS (after secretly test-converting everyone’s iPhones to APFS and then reverting them back to HFS+ as part of an earlier iOS 10.x update in one of the most audacious technological gambits in history).

APFS wasn’t ZFS, but it was still a huge leap over HFS+. Two of its most important features are point-in-time snapshots and copy-on-write clones. Snapshots allow for more reliable and efficient Time Machine backups. Copy-on-write clones are based on the same underlying architectural features that enable snapshots: a flexible arrangement between directory entries and their corresponding file contents.

Today, most Mac users don’t even notice that using the “Duplicate” command in the Finder to make a copy of a file doesn’t actually copy the file’s contents. Instead, it makes a “clone” file that shares its data with the original file. That’s why duplicating a file in the Finder is nearly instant, no matter how large the file is.
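
For the curious, here's a minimal sketch of making a copy-on-write clone programmatically with the clonefile(2) system call, the low-level primitive behind this behavior. I'm assuming clonefile is visible to Swift via the Darwin module (it may need a bridging header in some setups), and the paths below are placeholders.

    import Darwin
    import Foundation

    // Sketch: create an APFS clone. The new file shares its data blocks
    // with the original until either copy is modified.
    func makeClone(of source: String, at destination: String) -> Bool {
        clonefile(source, destination, 0) == 0
    }

    if makeClone(of: "/tmp/original.bin", at: "/tmp/clone.bin") {
        print("Cloned without duplicating any data on disk.")
    } else {
        perror("clonefile") // e.g. fails across volumes or on non-APFS
    }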

Despite knowing about clone files since the APFS introduction nearly eight years ago, I didn’t give them much thought beyond the tiny thrill of knowing that I wasn’t eating any more disk space when I duplicated a large file in the Finder. But late last year, as my Mac’s disk slowly filled, I started to muse about how I might be able to get some disk space back.

If I could find files that had the same content but were not clones of each other, I could convert them into clones that all shared a single instance of the data on disk. I took an afternoon to whip up a Perl script (that called out to a command-line tool written in C and another written in Swift) to run against my disk to see how much space I might be able to save by doing this. It turned out to be a lot: dozens of gigabytes.
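
To give a flavor of what that script was doing, here's a toy Swift sketch that groups files by content hash and estimates how much space duplicates are wasting. It's a deliberate simplification: the real job also has to determine whether files already share storage (i.e., are already clones), which this version skips, and it reads whole files into memory.

    import CryptoKit
    import Foundation

    // Toy sketch: hash every file, group identical contents, and sum the
    // bytes that could be reclaimed by cloning all but one copy per group.
    func estimatedSavings(for files: [URL]) -> Int {
        var groups: [String: [Int]] = [:] // content hash -> file sizes
        for url in files {
            guard let data = try? Data(contentsOf: url) else { continue }
            let digest = SHA256.hash(data: data)
                .map { String(format: "%02x", $0) }
                .joined()
            groups[digest, default: []].append(data.count)
        }
        return groups.values.reduce(0) { total, sizes in
            total + sizes.dropFirst().reduce(0, +)
        }
    }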

At this point, there was no turning back. I had to make this into an app. There are plenty of Mac apps that will save disk space by finding duplicate files and then deleting the duplicates. Using APFS clones, my app could reclaim disk space without removing any files! As a digital pack rat, this appealed to me immensely.

By the end of that week, I’d written a barebones Mac app to do the same thing my Perl script was doing. In the months that followed, I polished and tested the app, and christened it Hyperspace. I’m happy to announce that Hyperspace is now available in the Mac App Store.

The Hyperspace app icon, created by Iconfactory

Download Hyperspace from the Mac App Store

Hyperspace is a free download, and it’s free to scan to see how much space you might save. To actually reclaim any of that space, you will have to pay for the app.

Like all my apps, Hyperspace is a bit difficult to explain. I’ve attempted to do so, at length, in the Hyperspace documentation. I hope it makes enough sense to enough people that it will be a useful addition to the Mac ecosystem.

For my fellow developers who might be curious, this is my second Mac app that uses SwiftUI and my first that uses the SwiftUI life cycle. It’s also my second app to use Swift 6 and my first to do so since very early in its development. I found it much easier to use Swift 6 from (nearly) the start than to convert an existing, released app to Swift 6. Even so, there are still many rough edges to Swift 6, and I look forward to things being smoothed out a bit in the coming years.
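
For anyone unfamiliar with the term, the "SwiftUI life cycle" simply means the app's entry point is a SwiftUI App type marked @main, rather than an NSApplicationDelegate and a storyboard. A minimal sketch (ContentView here is a placeholder, not anything from Hyperspace):

    import SwiftUI

    @main
    struct SketchApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
        }
    }

    struct ContentView: View {
        var body: some View {
            Text("Hello from the SwiftUI life cycle").padding()
        }
    }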

In a recent episode of ATP, I described the then-unnamed Hyperspace as “An Incredibly Dangerous App.” Like the process of converting from HFS+ to APFS, Hyperspace modifies files that it did not create and does not own. It is, by far, the riskiest app I’ve created. (Reclaiming disk space ain’t like dusting crops…) But I also think it might be the most useful to the largest number of people. I hope you like it.


  1. Please note that type and creator codes are not stored in the resource fork.

The iMessage Halo Effect

2024-02-10 04:15:39

Pazu catches Sheeta as she floats down from the sky in the 1986 movie Castle in the Sky, written and directed by Hayao Miyazaki

The recent Beeper controversy briefly brought the “blue bubbles vs. green bubbles” topic back into the mainstream. Here’s a brief review for those of you who are (blessedly) unaware of this issue. Messages sent using the iMessage service appear in blue text bubbles within the Messages app. Messages sent using something other than the iMessage service (e.g., SMS, or (soon) RCS) appear in green text bubbles.

The iMessage service and the Messages app are only available on Apple devices. This is usually presented as a competitive advantage for the iPhone. If you want to use the iMessage service, the only (legitimate) way to do so is to buy an Apple device. If Apple were to make iMessage available on non-Apple platforms, that would remove one reason to buy an iPhone—or so the argument goes.

I think this popular conception of the issue is slightly wrong—or right for a different reason, at least. The iMessage service is not so good that it makes the iPhone more attractive to customers. It’s the iPhone that makes iMessage attractive. The iPhone gives iMessage its cachet, not the other way around.

This truth is plainly evident at the core of the “blue bubbles vs. green bubbles” debate. One of the biggest reasons green bubbles are looked down upon is that they indicate that the recipient doesn’t have an iPhone. iPhones are expensive, fancy, and desirable. Blue bubbles put the sender into the “in” crowd of iPhone owners.

The iMessage service itself, when considered in isolation, has considerably less draw. Here’s an assessment from 2013 from within Apple, as revealed during the recent Epic trial by internal emails discussing the idea of making iMessage work on non-Apple devices.

Eddy Cue: We have the best messaging app and we should make it the industry standard. […]

Craig Federighi: Do you have any thoughts on how we would make switching to iMessage (from WhatsApp) compelling to masses of Android users who don’t have a bunch of iOS friends? iMessage is a nice app/service, but to get users to switch social networks we’d need more than a marginally better app.

While I appreciate Eddy’s enthusiasm, I think Craig is closer to the mark: if iMessage is better than its competitors at all—and this is highly debatable—it is only marginally so.

Those Apple emails were written more than a decade ago. In the years since, iMessage has improved, but so has the competition. Today, it still feels like the iPhone is carrying iMessage. Anecdotally, both my teenage children have iPhones, but their group chats with their friends take place in WhatsApp.

Apple has almost certainly missed the most advantageous window of time to make iMessage “the industry standard” messaging service. But as the old saying goes, the best time to plant a tree is 30 years ago, and the second-best time is now. Apple has little to lose by expanding iMessage to other platforms, and there still may be something to be gained (even if it’s just making mixed Android/iPhone conversations in Messages a bit smoother).

Spatial Computing

2024-01-31 06:44:06

The "secret About screen" from the classic Macintosh Finder. It shows a somewhat abstract black-and-white image of the Sierra(?) mountain range with the sun setting behind it.

The graphical user interface on the original Macintosh was a revelation to me when I first used it at the tender age of 8 years old. Part of the magic was thanks to its use of "direct manipulation." This term was coined in the 1980s to describe the ability to control a computer without using the keyboard to explain what you wanted it to do. Instead of typing a command to move a file from one place to another, the user could just grab it and drag it to a new location.

The fact that I’m able to write the phrase “grab it and drag it to a new location” and most people will understand what I mean is a testament to the decades-long success of this kind of interface. In the context of personal computers like the Mac, we all understand what it means to “grab” something on the screen and drag it somewhere using a mouse. We understand that the little pictures represent things that have meaning to both us and the computer, and we know what it means to manipulate them in certain ways. For most of us, it has become second nature.

With the advent of the iPhone and ubiquitous touchscreen interfaces, the phrase “direct manipulation” is now used to draw a contrast between touch interfaces and Mac-style GUIs. The iPhone has “direct manipulation.” The Mac does not. On an iPhone, you literally touch the thing you want to manipulate with your actual finger—no “indirect” pointing device needed.

The magic, the attractiveness, the fundamental success of both of these forms of “direct manipulation” has a lot to do with the physical reality of our existence as human beings. The ability to reason about and manipulate objects in space is a cornerstone of our success as a species. It is an essential part of every aspect of our lives. Millions of years of natural selection have made these skills a foundational component of our very being. We need these skills to survive, and so all of us survivors are the ones who have these skills.

Compare this with the things we often put under the umbrella of “knowing how to use computers”: debugging Wi-Fi problems, understanding how formulas work in Excel, splitting a bezier curve in Illustrator, converting a color image to black and white in Photoshop, etc. These are all things we must learn how to do specifically for the purpose of using the computer. There has not been millions of years of reproductive selection to help produce a modern-day population that inherently knows how to convert a PDF into a Word document. Sure, the ability to reason and learn is in our genes, but the ability to perform any specific task on a computer is not.

Given this, interfaces that leverage the innate abilities we do have are incredibly powerful. They have lower cognitive load. They feel good. “Ease of use” was what we called it in the 1980s.

The success of the GUI was driven, in large part, by the fact that our entire lives—and the lives of all our ancestors—have prepared us with many of the skills necessary to work with interfaces where we see things and then use our hands to manipulate them. The “indirection” of the GUI—icons that represent files, windows that represent documents that scroll within their frames—fades away very quickly. The mechanical functions of interaction become second nature, allowing us to concentrate on figuring out how the heck to remove the borders on a table in Google Docs[1], or whatever.

The more a user interface presents a world that is understandable to us, where we can flex our millennia-old kinesthetic skills, the better it feels. The Spatial Finder, which had a simple, direct relationship between each Finder window and a location in the file hierarchy, was a defining part of the classic Macintosh interface. Decades later, the iPhone launched with a similarly relentlessly spatial home-screen interface: a grid of icons, recognizable by their position and appearance, that go where we move them and stay where we put them.

Now here we are, 40 years after the original Macintosh, and Apple is introducing what it calls its first "spatial computer." I haven’t tried the Vision Pro yet (regular customers won’t receive theirs for at least another three days), but the early reviews and Apple’s own guided tour provide a good overview of its capabilities.

How does the Vision Pro stack up, spatially speaking? Is it the new definition of “direct manipulation,” wresting the title from touch interfaces? In one obvious way, it takes spatial interfaces to the next level by committing to the simulation of a 3D world in a much more thorough way than the Mac or iPhone. Traditional GUIs are often described as being “2D,” but they’ve all taken advantage of our ability to parse and understand objects in 3D space by layering interface elements on top of each other, often deploying visual cues like shadows to drive home the illusion.

Vision Pro’s commitment to the bit goes much further. It breaks the rigid perpendicularity and shallow overall depth of the layered windows in a traditional GUI to provide a much deeper (literally) world within which to do our work.

Where Vision Pro may stumble is in its interface to the deep, spatial world it provides. We all know how to reach out and “directly manipulate” objects in the real world, but that’s not what Vision Pro asks us to do. Instead, Vision Pro requires us to first look at the thing we want to manipulate, and then perform an “indirect” gesture with our hands to operate on it.

Is this look-then-gesture interaction any different than using a mouse to “indirectly” manipulate a pointer? Does it leverage our innate spatial abilities to the same extent? Time will tell. But I feel comfortable saying that, in some ways, this kind of Vision Pro interaction is less “direct” than the iPhone’s touch interface, where we see a thing on a screen and then literally place our fingers on it. Will there be any interaction on the Vision Pro that’s as intuitive, efficient, and satisfying as flick-scrolling on an iPhone screen? It’s a high bar to clear, that’s for sure.

As the Vision Pro finally starts to arrive in customers’ hands, I can’t help but view it through this spatial-interface lens when comparing it to the Mac and the iPhone. Both its predecessors took advantage of our abilities to recognize and manipulate objects in space to a greater extent than any of the computing platforms that came before them. In its current form, I’m not sure the same can be said of the Vision Pro.

Of course, there’s a lot more to the Vision Pro than the degree to which it taps into this specific set of human skills. Its ability to fill literally the entire space around the user with its interface is something the Mac and iPhone cannot match, and it opens the door to new experiences and new kinds of interfaces.

But I do wonder if the Vision Pro’s current interaction model will hold up as well as that of the Mac and iPhone. Perhaps there’s still at least one technological leap yet to come to round out the story. Or perhaps the tools of the past (e.g., physical keyboards and pointing devices) will end up being an essential part of a productive, efficient Vision Pro experience. No matter how it turns out, I’m happy to see that the decades-old journey of “spatial computing” continues.


  1. Select the whole table, then click the “Border width” toolbar icon, then select 0pt.

I Made This

2024-01-12 02:51:57

The main character from Anthony Clark’s well-known comic about the Internet contemplating a strange sphere made by someone else.

While the utility of Generative AI is very clear at this point, the moral, ethical, and legal questions surrounding it are decidedly less so. I’m not a lawyer, and I’m not sure how the many current and future legal battles related to this topic will shake out. Right now, I’m still trying to understand the issue well enough to form a coherent opinion of how things should be. Writing this post is part of my process.

Generative AI needs to be trained on a vast amount of data that represents the kinds of things it will be asked to generate. The connection between that training data and the eventual generated output is a hotly debated topic. An AI model has no value until it’s trained. After training, how much of the model’s value is attributable to any given piece of training data? What legal rights, if any, can the owners of that training data exert on the creator of the model or its output?

A human’s creative work is inextricably linked to their life experiences: every piece of art they’ve ever seen, everything they’ve done, everyone they’ve ever met. And yet we still say the creative output of humans is worthy of legal protection (with some fairly narrow restrictions for works that are deemed insufficiently differentiated from existing works).

Some say that generative AI is no different. Its output is inextricably linked to its “life experience” (training data). Everything it creates is influenced by everything it has ever seen. It’s doing the same thing a human does, so why shouldn’t its output be treated the same as a human’s output?

And if it generates output that’s insufficiently differentiated from some existing work, well, we already have laws to handle that. But if not, then it’s in the clear. There’s no need for any sort of financial arrangement with the owners of the training data any more than an artist needs to pay every other artist whose work she’s seen each time she makes a new painting.

This argument does not sit well with me, for both practical and ethical reasons. Practically speaking, generative AI changes the economics and timescales of the market for creative works in a way that has the potential to disincentivize non-AI-generated art, both by making creative careers less viable and by narrowing the scope of creative skill that is valued by the market. Even if generative AI develops to the point where it is self-sustaining without (further) human input, the act of creation is an essential part of a life well-lived. Humans need to create, and we must foster a market that supports this.

Ethically, the argument that generative AI is “just doing what humans do” seems to draw an equivalence between computer programs and humans that doesn’t feel right to me. It was the pursuit of this feeling that led me to a key question at the center of this debate.

Computer programs don’t have rights[1], but people who use computer programs do. No one is suggesting that generative AI models should somehow have the rights to the things they create. It’s the humans using these AI models that are making claims about the output—either that they, the human, should own the output, or, at the very least, that the owners of the model’s training data should not have any rights to the output.

After all, what’s the difference between using generative AI to create a picture and using Photoshop? They’re both computer programs that help humans make more, better creative works in less time, right?

We’ve always had technology that empowers human creativity: pencils, paintbrushes, rulers, compasses, quills, typewriters, word processors, bitmapped and vector drawing programs—thousands of years of technological enhancement of creativity. Is generative AI any different?

At the heart of this question is the act of creation itself. Ownership and rights hinge on that act of creation. Who owns a creative work? Not the pencil, not the typewriter, not Adobe Photoshop. It’s the human who used those tools to create the work that owns it.

There can, of course, be legal arrangements to transfer ownership of the work created by one human to another human (or a legal entity like a corporation). And in this way, value is exchanged, forming a market for creativity.

Now then, when someone uses generative AI, who is the creator? Is writing the prompt for the generative AI the act of creation, thus conferring ownership of the output to the prompt-writer without any additional legal arrangements?

Suppose Bob writes an email to Sue, who has no existing business relationship with Bob, asking her to draw a picture of a polar bear wearing a cowboy hat while riding a bicycle. If Sue draws this picture, we all agree that Sue is the creator, and that some arrangement is required to transfer ownership of this picture to Bob. But if Bob types that same email into a generative AI, has he now become the creator of the generated image? If not, then who is the creator?

Where is the act of creation?

This question is at the emotional, ethical (and possibly legal) heart of the generative AI debate. I’m reminded of the well-known web comic in which one person hands something to another and says, “I made this.” The recipient accepts the item, saying “You made this?” The recipient then holds the item silently for a moment while the person who gave them the item departs. In the final frame of the comic, the recipient stands alone holding the item and says, “I made this.”

This comic resonates with people for many reasons. To me, the key is the second frame in which the recipient holds the item alone. It’s in that moment that possession of the item convinces the person that they own it. After all, they’re holding it. It’s theirs! And if they own it, and no one else is around, then they must have created it!

This leads me back to the same question. Where is the act of creation? The person in the comic would rather not think about it. But generative AI is forcing us all to do so.

I’m not focused on this point for reasons of fairness or tradition. Technology routinely changes markets. Our job as a society is to ensure that technology changes things for the better in the long run, while mitigating the inevitable short-term harm.

Every new technology has required new laws to ensure that it becomes and remains a net good for society. It’s rare that we can successfully adapt existing laws to fully manage a new technology, especially one that has the power to radically alter the shape of an existing market like generative AI does.

In its current state, generative AI breaks the value chain between creators and consumers. We don’t have to reconnect it in exactly the same way it was connected before, but we also can’t just leave it dangling. The historical practice of conferring ownership based on the act of creation still seems sound, but that means we must be able to unambiguously identify that act. And if the same act (absent any prior legal arrangements) confers ownership in one context but not in another, then perhaps it’s not the best candidate.

I’m not sure what the right answer is, but I think I’m getting closer to the right question. It’s a question I think we’re all going to encounter a lot more frequently in the future: Who made this?


  1. Non-sentient computer programs, that is. If we ever create sentient computer programs, we’ll have a whole host of other problems to deal with.