
Predicting Average IMDb Movie Ratings Using Text Embeddings of Movie Metadata

2025-07-01 01:00:00

Months ago, I saw a post titled “Rejected from DS Role with no feedback” on Reddit’s Data Science subreddit, in which a prospective job candidate for a data science position provided a Colab Notebook documenting their submission for a take-home assignment and asked for feedback as to why they were rejected. Per the Reddit user, the assignment was:

Use the publicly available IMDB Datasets to build a model that predicts a movie’s average rating. Please document your approach and present your results in the notebook. Make sure your code is well-organized so that we can follow your modeling process.

IMDb, the Internet Movie Database owned by Amazon, allows users to rate movies on a scale from 1 to 10, wherein the average rating is then displayed prominently on the movie’s page:

The Shawshank Redemption is currently the highest-rated movie on IMDb with an average rating of 9.3 derived from 3.1 million user votes.

In their notebook, the Redditor identifies a few intuitive features for such a model, including the year in which the movie was released, the genre(s) of the movies, and the actors/directors of the movie. However, the model they built is a TensorFlow and Keras-based neural network, with all the bells-and-whistles such as batch normalization and dropout. The immediate response by other data scientists on /r/datascience was, at its most polite, “why did you use a neural network when it’s a black box that you can’t explain?”

Reading those replies made me nostalgic. Way back in 2017, before my first job as a data scientist, neural networks using frameworks such as TensorFlow and Keras were all the rage for their ability to “solve any problem,” but they were often seen as lazy and unskilled compared to traditional statistical modeling such as ordinary least squares linear regression or even gradient boosted trees. Although it’s funny to see that this perception of neural networks in the data science community hasn’t changed since, nowadays the black box nature of neural networks can be an acceptable business tradeoff if the prediction results are higher quality and interpretability is not required.

Looking back at the assignment description, the objective is only to “predict a movie’s average rating.” For data science interview take-homes, this is unusual: those assignments typically have an extra instruction along the lines of “explain your model and what decisions stakeholders should make as a result of it”, which is a strong hint that you need to use an explainable model like linear regression to obtain feature coefficients, or even a middle ground like gradient boosted trees and their variable importance to quantify relative feature contribution to the model. 1 In the absence of that particular constraint, it’s arguable that anything goes, including neural networks.

The quality of neural networks has improved significantly since 2017, even more so due to the massive rise of LLMs. Why not try just feeding a LLM all the raw metadata for a movie, encoding it into a text embedding, and building a statistical model based off of that? Would a neural network do better than a traditional statistical model in that instance? Let’s find out!

About IMDb Data

The IMDb Non-Commercial Datasets are famous sets of data that have been around for nearly a decade 2 but are still updated daily. Back in 2018 as a budding data scientist, I performed a fun exploratory data analysis using these datasets, although the results aren’t too surprising.

The average rating for a movie is around 6 and tends to skew higher: a common trend in internet rating systems.

But in truth, these datasets are a terrible idea for companies to use for a take-home assignment. Although the datasets are released under a non-commercial license, IMDb doesn’t want to give too much information to its competitors, which results in a severely limited number of features that could be used to build a good predictive model. Here are the common movie-performance-related features present in the title.basics.tsv.gz file:

  • tconst: unique identifier of the title
  • titleType: the type/format of the title (e.g. movie, tvMovie, short, tvSeries, etc.)
  • primaryTitle: the more popular title / the title used by the filmmakers on promotional materials at the point of release
  • isAdult: 0: non-adult title; 1: adult title
  • startYear: represents the release year of a title.
  • runtimeMinutes: primary runtime of the title, in minutes
  • genres: includes up to three genres associated with the title

This is a sensible schema for describing a movie, although it lacks some important information that would be very useful for determining movie quality, such as production company, summary blurbs, granular genres/tags, and plot/setting — all of which are available on the IMDb movie page itself and presumably accessible through the paid API. Of note, since the assignment explicitly asks for a movie’s average rating, we need to filter the data to only movie and tvMovie entries, which the original submission failed to do.

The ratings data in title.ratings.tsv.gz is what you’d expect:

  • tconst: unique identifier of the title (which can therefore be mapped to movie metadata using a JOIN)
  • averageRating: average of all the individual user ratings
  • numVotes: number of votes the title has received

In order to ensure that the average ratings for modeling are indeed stable and indicative of user sentiment, I will only analyze movies that have at least 30 user votes: as of May 10th, 2025, that’s about 242k movies total. Additionally, I will not use numVotes as a model feature, since that’s a metric based more on extrinsic movie popularity rather than the movie itself.
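
As a rough sketch of that filtering step in Polars (assuming the TSVs have already been downloaded and decompressed locally; the column names come from the schemas above, and extra schema tweaks may be needed in practice):

import polars as pl

# IMDb uses "\N" for missing values, and titles can contain stray quote characters
df_basics = pl.read_csv("title.basics.tsv", separator="\t", null_values=r"\N", quote_char=None)
df_ratings = pl.read_csv("title.ratings.tsv", separator="\t", null_values=r"\N", quote_char=None)

df_movies = (
    df_basics.filter(pl.col("titleType").is_in(["movie", "tvMovie"]))
    .join(df_ratings, on="tconst", how="inner")
    .filter(pl.col("numVotes") >= 30)
)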

The last major dataset is title.principals.tsv.gz, which has very helpful information on metadata such as the roles people play in the production of a movie:

  • tconst: unique identifier of the title (which can be mapped to movie data using a JOIN)
  • nconst: unique identifier of the principal (this is mapped to name.basics.tsv.gz to get the principal’s primaryName, but nothing else useful)
  • category: the role the principal served in the title, such as actor, actress, writer, producer, etc.
  • ordering: the ordering of the principals within the title, which correlates to the order the principals appear on IMDb’s movie cast pages.

Additionally, because the datasets are so popular, it’s not the first time someone has built an IMDb ratings predictor, and it’s easy to Google prior attempts.

Instead of using the official IMDb datasets, those analyses are based on the smaller IMDB 5000 Movie Dataset hosted on Kaggle, which adds metadata such as movie rating, budget, and further actor metadata that makes building a model much easier (albeit “number of likes on the lead actor’s Facebook page” is very extrinsic to movie quality). Using the official datasets with much less metadata means building the models on hard mode, and they will likely have lower predictive performance.

Although IMDb data is very popular and very well documented, that doesn’t mean it’s easy to work with.

The Initial Assignment and “Feature Engineering”

Data science take-home assignments are typically one half exploratory data analysis for identifying impactful dataset features, and one half building, iterating on, and explaining the model. For real-world datasets, these are all very difficult problems with many possible solutions, and the goal from the employer’s perspective is more to see how these problems are solved than the actual quantitative results.

The initial Reddit post engineered some expected features using pandas, such as is_sequel (by checking whether a non-1 number is present at the end of a movie title) and a one-hot encoding of each distinct genre of a movie. These are fine for an initial approach, albeit sequel titles can be idiosyncratic, which suggests that a more NLP-driven approach to identifying sequels and other related media may be useful.

The main trick with this assignment is how to handle the principals. The common data science approach would be to use a sparse binary encoding of the actors/directors/writers, e.g. using a vector where actors present in the movie are 1 and every other actor is 0, which leads to a large number of potential approaches to encode this data performantly, such as scikit-learn’s MultiLabelBinarizer. The problem with this approach is that there is a very large number of unique actors / high cardinality — more unique actors than data points themselves — which leads to curse-of-dimensionality issues; workarounds such as encoding only the top N actors leave the feature uninformative, since even a generous N will fail to capture the majority of actors.
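
For illustration, the conventional sparse encoding might look like the following sketch (the actor lists here are hypothetical); the resulting matrix has one column per unique actor, which is exactly what blows up:

from sklearn.preprocessing import MultiLabelBinarizer

# hypothetical per-movie actor lists
actor_lists = [
    ["Mark Hamill", "Harrison Ford", "Carrie Fisher"],
    ["Harrison Ford", "Sean Connery"],
    ["Carrie Fisher", "Meryl Streep"],
]

mlb = MultiLabelBinarizer(sparse_output=True)
X = mlb.fit_transform(actor_lists)  # shape: (n_movies, n_unique_actors)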

There are actually 624k unique actors in this dataset (Jupyter Notebook), the chart just becomes hard to read at that point.

Additionally, most statistical modeling approaches cannot account for the ordering of actors as they treat each feature as independent, and since the billing order of actors is generally correlated to their importance in the movie, that’s an omission of relevant information to the problem.

These constraints gave me an idea: why not use an LLM to encode all movie data, and build a model using the downstream embedding representation? LLMs have attention mechanisms, which will not only respect the relative ordering of actors (to give higher predictive priority to higher-billed actors, along with actor cooccurrences), but also identify patterns within movie name texts (to identify sequels and related media semantically).

I started by aggregating and denormalizing all the data locally (Jupyter Notebook). Each of the IMDb datasets is hundreds of megabytes and hundreds of thousands of rows at minimum: not quite big data, but enough to be more cognizant of tooling, especially since computationally-intensive JOINs are required. Therefore, I used the Polars library in Python, which not only loads data super fast, but is also one of the fastest libraries at performing JOINs and other aggregation tasks. Polars’s syntax also allows for some cool tricks: for example, I want to spread out and aggregate the principals (4.1 million rows after prefiltering) for each movie into directors, writers, producers, actors, and all other principals as nested lists, while simultaneously having them sorted by ordering as noted above. This is much easier to do in Polars than in any other data processing library I’ve used, and on millions of rows, it takes less than a second:

import polars as pl

# group the prefiltered principals by movie, keep each list sorted by billing
# order, and split them out by role
df_principals_agg = (
    df_principals.sort(["tconst", "ordering"])
    .group_by("tconst")
    .agg(
        director_names=pl.col("primaryName").filter(pl.col("category") == "director"),
        writer_names=pl.col("primaryName").filter(pl.col("category") == "writer"),
        producer_names=pl.col("primaryName").filter(pl.col("category") == "producer"),
        actor_names=pl.col("primaryName").filter(
            pl.col("category").is_in(["actor", "actress"])
        ),
        principal_names=pl.col("primaryName").filter(
            ~pl.col("category").is_in(
                ["director", "writer", "producer", "actor", "actress"]
            )
        ),
        principal_roles=pl.col("category").filter(
            ~pl.col("category").is_in(
                ["director", "writer", "producer", "actor", "actress"]
            )
        ),
    )
)

After some cleanup and field renaming, here’s an example JSON document for Star Wars: Episode IV - A New Hope:

{
  "title": "Star Wars: Episode IV - A New Hope",
  "genres": [
    "Action",
    "Adventure",
    "Fantasy"
  ],
  "is_adult": false,
  "release_year": 1977,
  "runtime_minutes": 121,
  "directors": [
    "George Lucas"
  ],
  "writers": [
    "George Lucas"
  ],
  "producers": [
    "Gary Kurtz",
    "Rick McCallum"
  ],
  "actors": [
    "Mark Hamill",
    "Harrison Ford",
    "Carrie Fisher",
    "Alec Guinness",
    "Peter Cushing",
    "Anthony Daniels",
    "Kenny Baker",
    "Peter Mayhew",
    "David Prowse",
    "Phil Brown"
  ],
  "principals": [
    {
      "John Williams": "composer"
    },
    {
      "Gilbert Taylor": "cinematographer"
    },
    {
      "Richard Chew": "editor"
    },
    {
      "T.M. Christopher": "editor"
    },
    {
      "Paul Hirsch": "editor"
    },
    {
      "Marcia Lucas": "editor"
    },
    {
      "Dianne Crittenden": "casting_director"
    },
    {
      "Irene Lamb": "casting_director"
    },
    {
      "Vic Ramos": "casting_director"
    },
    {
      "John Barry": "production_designer"
    }
  ]
}

I was tempted to claim that I used zero feature engineering, but that wouldn’t be accurate. The selection and ordering of the JSON fields here is itself feature engineering: for example, actors and principals are intentionally last in this JSON encoding because they can have wildly varying lengths while the prior fields are more consistent, which should make downstream encodings more comparable and consistent.
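
The JSON string itself can be produced with the standard library; this is a sketch assuming movie_dict is the (already ordered) dictionary for a single movie:

import json

# Python dicts preserve insertion order, so the field ordering above carries through,
# and indent=2 keeps the indentation that the embedding model will see
doc = json.dumps(movie_dict, indent=2, ensure_ascii=False)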

Now, let’s discuss how to convert these JSON representations of movies into embeddings.

Creating And Visualizing the Movie Embeddings

LLMs that are trained to output text embeddings are not much different from LLMs like ChatGPT that just predict the next token in a loop. Models such as BERT and GPT can generate “embeddings” out-of-the-box by skipping the prediction heads of the models and instead taking an encoded value from the last hidden state of the model (e.g. for BERT, the first positional vector of the hidden state representing the [CLS] token). However, text embedding models are optimized specifically for the distinctiveness of a given input text document through contrastive learning. These embeddings can be used for many things, from finding similar encoded inputs by measuring the similarity between their embeddings to, of course, building a statistical model on top of them.

Text embeddings that leverage LLMs are typically generated using a GPU in batches due to the increased amount of computation needed. Python libraries such as Hugging Face transformers and sentence-transformers can load these embedding models. For this experiment, I used the very new Alibaba-NLP/gte-modernbert-base text embedding model that is finetuned from the ModernBERT model specifically for the embedding use case, for two reasons: it uses the ModernBERT architecture which is optimized for fast inference, and the base ModernBERT model is trained to be more code-aware and should be able to understand JSON-nested input strings more robustly — that’s also why I intentionally left in the indentation for nested JSON arrays, as it’s semantically meaningful and explicitly tokenized. 3
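
Loading the tokenizer and model with Hugging Face transformers might look like this (a minimal sketch; a recent transformers version with ModernBERT support is assumed):

from transformers import AutoModel, AutoTokenizer

model_name = "Alibaba-NLP/gte-modernbert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).to("cuda:0")
model.eval()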

The code (Jupyter Notebook) — with extra considerations to avoid running out of memory on either the CPU or GPU 4 — looks something like this:

import torch
import torch.nn.functional as F
from tqdm import tqdm

device = "cuda:0"
dataloader = torch.utils.data.DataLoader(docs, batch_size=32,
                                         shuffle=False,
                                         pin_memory=True,
                                         pin_memory_device=device)

dataset_embeddings = []
for batch in tqdm(dataloader, smoothing=0):
    tokenized_batch = tokenizer(
        batch, max_length=8192, padding=True, truncation=True, return_tensors="pt"
    ).to(device)

    with torch.no_grad():
        outputs = model(**tokenized_batch)
        embeddings = outputs.last_hidden_state[:, 0].detach().cpu()
    dataset_embeddings.append(embeddings)

dataset_embeddings = torch.cat(dataset_embeddings)
dataset_embeddings = F.normalize(dataset_embeddings, p=2, dim=1)

I used a Spot L4 GPU on Google Cloud Platform at a pricing of $0.28/hour, and it took 21 minutes to encode all 242k movie embeddings: about $0.10 total, which is surprisingly efficient.

Each of these embeddings is a vector of 768 numbers (768D). If the embeddings are unit normalized (the F.normalize() step), then calculating the dot product between embeddings returns the cosine similarity of those movies, which can then be used to identify the most similar movies. But “similar” is open-ended, as there are many dimensions along which a movie could be considered similar.
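
A minimal sketch of that lookup, assuming dataset_embeddings is the unit-normalized matrix from above and query_idx is the row index of the query movie:

import numpy as np

embeddings = dataset_embeddings.numpy()  # (n_movies, 768), unit-normalized

def most_similar(query_idx, k=10):
    # dot products against every movie are cosine similarities, since the vectors are unit-normalized
    sims = embeddings @ embeddings[query_idx]
    top_idx = np.argsort(sims)[::-1][: k + 1]  # the query movie itself is included at similarity 1.0
    return top_idx, sims[top_idx]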

Let’s try a few movie similarity test cases where I calculate the cosine similarity between one query movie and all movies, then sort by cosine similarity to find the most similar (Jupyter Notebook). How about Peter Jackson’s Lord of the Rings: The Fellowship of the Ring? Ideally, not only would it surface the two other movies of the original trilogy, but also its prequel Hobbit trilogy.

title cossim
The Lord of the Rings: The Fellowship of the Ring (2001) 1.0
The Lord of the Rings: The Two Towers (2002) 0.922
The Lord of the Rings: The Return of the King (2003) 0.92
National Geographic: Beyond the Movie - The Lord of the Rings: The Fellowship of the Ring (2001) 0.915
A Passage to Middle-earth: The Making of ‘Lord of the Rings’ (2001) 0.915
Quest for the Ring (2001) 0.906
The Lord of the Rings (1978) 0.893
The Hobbit: The Battle of the Five Armies (2014) 0.891
The Hobbit: The Desolation of Smaug (2013) 0.883
The Hobbit: An Unexpected Journey (2012) 0.883

Indeed, it worked and surfaced both trilogies! The other movies listed are about the original work, so having high similarity would be fair.

Compare these results to the “More like this” section on the IMDb page for the movie itself, which has the two sequels to the original Lord of the Rings and two other suggestions that I am not entirely sure are actually related.

What about more elaborate franchises, such as the Marvel Cinematic Universe? If you asked for movies similar to Avengers: Endgame, would other MCU films be the most similar?

title cossim
Avengers: Endgame (2019) 1.0
Avengers: Infinity War (2018) 0.909
The Avengers (2012) 0.896
Endgame (2009) 0.894
Captain Marvel (2019) 0.89
Avengers: Age of Ultron (2015) 0.882
Captain America: Civil War (2016) 0.882
Endgame (2001) 0.881
The Avengers (1998) 0.877
Iron Man 2 (2010) 0.876

The answer is yes, which isn’t a surprise since those movies share many principals. However, there are instances of other movies named “Endgame” and “The Avengers” which are completely unrelated to Marvel, which implies that the similarities may be fixating on the names.

What about movies of a smaller franchise but a specific domain, such as Disney’s Frozen that only has one sequel? Would it surface other 3D animated movies by Walt Disney Animation Studios, or something else?

title cossim
Frozen (2013) 1.0
Frozen II (2019) 0.93
Frozen (2010) 0.92
Frozen (2010) [a different one] 0.917
Frozen (1996) 0.909
Frozen (2005) 0.9
The Frozen (2012) 0.898
The Story of Frozen: Making a Disney Animated Classic (2014) 0.894
Frozen (2007) 0.889
Frozen in Time (2014) 0.888

…okay, it’s definitely fixating on the name. Let’s try a different approach to see if we can find more meaningful patterns in these embeddings.

In order to visualize the embeddings, we can project them to a lower dimensionality with a dimensionality reduction algorithm such as PCA or UMAP: UMAP is preferred as it can simultaneously reorganize the data into more meaningful clusters. UMAP’s construction of a neighborhood graph, in theory, can allow the reduction to refine the similarities by leveraging many possible connections and hopefully avoid fixating on the movie name. However, with this amount of input data and the relatively high initial 768D vector size, the computation cost of UMAP is a concern, as both factors cause the UMAP training time to scale exponentially. Fortunately, NVIDIA’s cuML library was recently updated so that you can now run UMAP with very high amounts of data on a GPU at a very high number of epochs to ensure the reduction fully converges, so I did just that (Jupyter Notebook). What patterns can we find? Let’s try plotting the reduced points, colored by their user rating.
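
The cuML UMAP call itself is roughly the following (a sketch; the parameter values are illustrative, not the exact ones I used):

from cuml.manifold import UMAP

# reduce the 768D embeddings to 2D on the GPU
reducer = UMAP(n_components=2, n_neighbors=15, min_dist=0.1, n_epochs=5000)
coords_2d = reducer.fit_transform(dataset_embeddings.numpy())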

So there’s a few things going on here. Indeed, most of the points are high-rating green as evident in the source data. But the points and ratings aren’t random and there are trends. In the center giga cluster, there are soft subclusters of movies at high ratings and low ratings. Smaller discrete clusters did indeed form, but what is the deal with that extremely isolated cluster at the top? After investigation, that cluster only has movies released in 2008, which is another feature I should have considered when defining movie similarity.

As a sanity check, I faceted out the points by movie release year to better visualize where these clusters are forming:

This shows that even within the clusters, the movies have their rating values spread out, but I unintentionally visualized how embedding drift changes over time. 2024 is also a bizarrely-clustered year: I have no idea why those two years (2008 and 2024) specifically are weird in movies.

The UMAP approach is more for fun, since it’s better for the downstream model building to use the raw 768D vector and have it learn the features from that. At the least, there’s some semantic signal preserved in these embeddings, which makes me optimistic that these embeddings alone can be used to train a viable movie rating predictor.

Predicting Average IMDb Movie Scores

So, we now have hundreds of thousands of 768D embeddings. How do we get them to predict movie ratings? What many don’t know is that all methods of traditional statistical modeling also work with embeddings — assumptions such as feature independence are invalid so the results aren’t explainable, but you can still get a valid predictive model.

First, we will shuffle and split the dataset into a training set and a test set: for the test set, I chose 20,000 movies (roughly 10% of the data), which is more than enough for stable results. To decide the best model, we will use whichever one minimizes the mean squared error (MSE) on the test set, which is a standard approach to solving regression problems that predict a single numeric value.

Here are three approaches for using LLMs for solving non-next-token-prediction tasks.

Method #1: Traditional Modeling (w/ GPU Acceleration!)

You can still fit a linear regression on top of the embeddings even if the feature coefficients are completely useless, and it serves as a decent baseline (Jupyter Notebook). The absolute laziest “model”, where we just use the mean of the training set for every prediction, results in a test MSE of 1.637, but performing a simple linear regression on top of the 768D embeddings instead results in a more reasonable test MSE of 1.187. We should be able to beat that handily with a more advanced model.
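
A sketch of both baselines, assuming X is the (n_movies, 768) embedding matrix and y is the array of average ratings:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=20_000, random_state=42)

# laziest possible "model": predict the training-set mean for everything
baseline_mse = mean_squared_error(y_test, np.full_like(y_test, y_train.mean()))

# simple linear regression on the raw 768D embeddings
lr = LinearRegression().fit(X_train, y_train)
lr_mse = mean_squared_error(y_test, lr.predict(X_test))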

Data scientists familiar with scikit-learn know there’s a rabbit hole of model options, but most of them are CPU-bound and single-threaded and would take a considerable amount of time on a dataset of this size. That’s where cuML—the same library I used to create the UMAP projection—comes in, as cuML has GPU-native implementations of most popular scikit-learn models with a similar API. This notably includes support vector machines, which play especially nicely with embeddings. And because we have the extra compute, we can also perform a brute-force hyperparameter grid search to find the best parameters for fitting each model.
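
A sketch of what fitting a GPU support vector machine with a brute-force grid search might look like with cuML (the hyperparameter values here are illustrative):

from itertools import product

from cuml.svm import SVR
from sklearn.metrics import mean_squared_error

best_mse, best_params = float("inf"), None
for C, epsilon in product([0.1, 1.0, 10.0], [0.01, 0.1, 0.5]):
    svr = SVR(C=C, epsilon=epsilon, kernel="rbf")
    svr.fit(X_train, y_train)
    mse = mean_squared_error(y_test, svr.predict(X_test))
    if mse < best_mse:
        best_mse, best_params = mse, {"C": C, "epsilon": epsilon}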

Here’s the results of MSE on the test dataset for a few of these new model types, with the hyperparameter combination for each model type that best minimizes MSE:

The winner is the Support Vector Machine, with a test MSE of 1.087! This is a good start for a simple approach that handily beats the linear regression baseline, and it also beats the model trained in the Redditor’s original notebook, which had a test MSE of 1.096 5. In all cases, the train set MSE was close to the test set MSE, which means the models did not overfit either.

Method #2: Neural Network on top of Embeddings

Since we’re already dealing with AI models and already have PyTorch installed to generate the embeddings, we might as well try the traditional approach of training a multilayer perceptron (MLP) neural network on top of the embeddings (Jupyter Notebook). This workflow sounds much more complicated than just fitting a traditional model above, but PyTorch makes MLP construction straightforward, and Hugging Face’s Trainer class incorporates best model training practices by default, although its compute_loss function has to be tweaked to minimize MSE specifically.

The PyTorch model, using a loop to set up the MLP blocks, looks something like this:

import torch.nn as nn

class RatingsModel(nn.Module):
    def __init__(self, linear_dims=256, num_layers=6):
        super().__init__()

        dims = [768] + [linear_dims] * num_layers
        self.mlp = nn.ModuleList([
            nn.Sequential(
                nn.Linear(dims[i], dims[i+1]),
                nn.GELU(),
                nn.BatchNorm1d(dims[i+1]),
                nn.Dropout(0.6)
            ) for i in range(len(dims)-1)
        ])

        self.output = nn.Linear(dims[-1], 1)

    def forward(self, x, targets=None):
        for layer in self.mlp:
            x = layer(x)

        return self.output(x).squeeze()  # return 1D output if batched inputs

This MLP is 529k parameters total: large for a MLP, but given the 222k row input dataset, it’s not egregiously so.

The real difficulty with this MLP approach is that it’s too effective: even with fewer than 1 million parameters, the model will severely overfit and converge to a 0.00 train MSE quickly while the test set MSE explodes. That’s why Dropout is set to the atypically high probability of 0.6.

Fortunately, MLPs are fast to train: training for 600 epochs (total passes through the full training dataset) took about 17 minutes on the GPU. Here are the training results:

The lowest logged test MSE was 1.074: a slight improvement over the Support Vector Machine approach.

Method #3: Just Train a LLM From Scratch Dammit

There is a possibility that a pretrained embedding model trained on the entire internet could intrinsically contain relevant signal about popular movies—such as a movie winning awards, which would imply a high IMDb rating—and that knowledge could leak into the test set and provide misleading results. This may not be a significant issue in practice, since such knowledge would be a tiny part of the gte-modernbert-base model, which is itself too small to memorize exact information.

For the sake of comparison, let’s try training a LLM from scratch on top of the raw movie JSON representations to see if we can get better results without the possibility of leakage (Jupyter Notebook). I had specifically been avoiding this approach because the compute required to train an LLM is much, much higher than for an SVM or MLP model, and leveraging a pretrained model generally gives better results. In this case, since we don’t need a LLM that has all the knowledge of human existence, we can train a much smaller model that only knows how to work with the movie JSON representations and can figure out relationships between actors and whether titles are sequels by itself. Hugging Face transformers makes this workflow surprisingly straightforward by not only having functionality to train your own custom tokenizer (in this case, going from a 50k vocabulary down to a 5k vocabulary) that encodes the data more efficiently, but also allowing the construction of a ModernBERT model with any number of layers and units. I opted for a 5M parameter LLM (SLM?), albeit with less dropout, since high dropout causes learning issues for LLMs specifically.
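
A sketch of the tokenizer and model construction with a recent transformers version (the layer and hidden sizes here are illustrative, not the exact 5M parameter configuration used in the post; docs is the list of movie JSON strings from earlier):

from transformers import AutoTokenizer, ModernBertConfig, ModernBertModel

# retrain a much smaller tokenizer on the movie JSON strings
base_tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
tokenizer = base_tokenizer.train_new_from_iterator(iter(docs), vocab_size=5000)

# a tiny ModernBERT initialized from scratch
config = ModernBertConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=128,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=256,
)
model = ModernBertModel(config)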

The actual PyTorch model code is surprisingly more concise than the MLP approach:

import torch.nn as nn

class RatingsModel(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.transformer_model = model
        # hidden_size is the transformer's hidden dimension (model.config.hidden_size)
        self.output = nn.Linear(model.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, targets=None):
        x = self.transformer_model.forward(
            input_ids=input_ids,
            attention_mask=attention_mask,
            output_hidden_states=True,
        )
        x = x.last_hidden_state[:, 0]  # the "[CLS] vector"

        return self.output(x).squeeze()  # return 1D output if batched inputs

Essentially, the model trains its own “text embedding,” although in this case instead of an embedding optimized for textual similarity, the embedding is just a representation that can easily be translated into a numeric rating.

Because the computation needed for training a LLM from scratch is much higher, I only trained the model for 10 epochs, which was still twice as slow as the 600 epochs for the MLP approach. Given that, the results are surprising:

The LLM approach did much better than my previous attempts with a new lowest test MSE of 1.026, with only 4 passes through the data! And then it definitely overfit. I tried other smaller configurations for the LLM to avoid the overfitting, but none of them ever hit a test MSE that low.

Conclusion

Let’s look at the model comparison again, this time adding the results from training a MLP and training a LLM from scratch:

Coming into this post, I genuinely thought that training the MLP on top of embeddings would be the winner given the base embedding model’s knowledge of everything, but maybe there’s something to just YOLOing and feeding raw JSON input data to a completely new LLM. More research and development is needed.

The differences in model performance from these varying approaches aren’t dramatic, but the iteration is indeed interesting, and it was a long shot anyway given the scarce amount of metadata. The fact that building a model off of text embeddings alone didn’t result in a perfect model doesn’t mean this approach was a waste of time. The embedding and modeling pipelines I constructed in the process of trying to solve this problem have already provided significant dividends on easier problems, such as identifying the efficiency of storing embeddings in Parquet and manipulating them with Polars.

It’s impossible and pointless to pinpoint the exact reason the original Reddit poster got rejected: it could have been the neural network approach or even something out of their control such as the original company actually stopping hiring and being too disorganized to tell the candidate. To be clear, if I myself were to apply for a data science role, I wouldn’t use the techniques in this blog post (that UMAP data visualization would get me instantly rejected!) and do more traditional EDA and non-neural-network modeling to showcase my data science knowledge to the hiring manager. But for my professional work, I will definitely try starting any modeling exploration with an embeddings-based approach wherever possible: at the absolute worst, it’s a very strong baseline that will be hard to beat.

All of the Jupyter Notebooks and data visualization code for this blog post is available open-source in this GitHub repository.


  1. I am not a fan of using GBT variable importance as a decision-making metric: variable importance does not tell you magnitude or direction of the feature in the real world, but it does help identify which features can be pruned for model development iteration. ↩︎

  2. To get a sense of how old they are, they are only available as TSV files, which is a data format so old and prone to errors that many data libraries have dropped explicit support for it. Amazon, please release the datasets as CSV or Parquet files instead! ↩︎

  3. Two other useful features of gte-modernbert-base, though not strictly relevant to these movie embeddings, are a) it’s a cased model, so it can identify meaning from upper-case text, and b) it does not require a prefix such as search_query and search_document as nomic-embed-text-v1.5 does to guide its results, which is an annoying requirement for those models. ↩︎

  4. The trick here is the detach() function for the computed embeddings, otherwise the GPU doesn’t free up the memory once moved back to the CPU. I may or may not have discovered that the hard way. ↩︎

  5. As noted earlier, minimizing MSE isn’t a competition, but the comparison on roughly the same dataset is good for a sanity check. ↩︎

As an Experienced LLM User, I Actually Don't Use Generative LLMs Often

2025-05-06 01:15:00

Lately, I’ve been working on codifying a personal ethics statement about my stances on generative AI as I have been very critical about several aspects of modern GenAI, and yet I participate in it. While working on that statement, I’ve been introspecting on how I myself have been utilizing large language models for both my professional work as a Senior Data Scientist at BuzzFeed and for my personal work blogging and writing open-source software. For about a decade, I’ve been researching and developing tooling around text generation from char-rnns, to the ability to fine-tune GPT-2, to experiments with GPT-3, and even more experiments with ChatGPT and other LLM APIs. Although I don’t claim to the best user of modern LLMs out there, I’ve had plenty of experience working against the cons of next-token predictor models and have become very good at finding the pros.

It turns out, to my surprise, that I don’t use them nearly as often as people think engineers do, but that doesn’t mean LLMs are useless for me. It’s a discussion that requires case-by-case nuance.

How I Interface With LLMs

Over the years I’ve utilized all the tricks to get the best results out of LLMs. The most famous trick is prompt engineering, or the art of phrasing the prompt in a specific manner to coach the model to generate a specific constrained output. Additions to prompts such as offering financial incentives to the LLM or simply telling the LLM to make its output better do indeed have a quantifiable positive impact on both improving adherence to the original prompt and the output text quality. Whenever my coworkers ask me why their LLM output is not what they expected, I suggest that they apply more prompt engineering, and it almost always fixes their issues.

No one in the AI field is happy about prompt engineering, especially myself. Attempts to remove the need for prompt engineering with more robust RLHF paradigms have only made it even more rewarding by allowing LLM developers to make use of better prompt adherence. True, “Prompt Engineer” as a job title turned out to be a meme but that’s mostly because prompt engineering is now an expected skill for anyone seriously using LLMs. Prompt engineering works, and part of being a professional is using what works even if it’s silly.

To that end, I never use ChatGPT.com or other normal-person frontends for accessing LLMs because they are harder to control. Instead, I typically access the backend UIs provided by each LLM service, which serve as a light wrapper over the API functionality and also make it easy to port to code if necessary. Accessing LLM APIs like the ChatGPT API directly allows you to set system prompts which control the “rules” for the generation, and those rules can be very nuanced. Specifying constraints for the generated text such as “keep it to no more than 30 words” or “never use the word ‘delve’” tends to be more effective in the system prompt than putting them in the user prompt as you would with ChatGPT.com. Any modern LLM interface that does not let you explicitly set a system prompt is most likely using its own system prompt which you can’t control: for example, when ChatGPT.com had an issue where it was too sycophantic to its users, OpenAI changed the system prompt to command ChatGPT to “avoid ungrounded or sycophantic flattery.” I tend to use Anthropic Claude’s API — Claude Sonnet in particular — more than any ChatGPT variant because Claude anecdotally is less “robotic” and also handles coding questions much more accurately.

Additionally, with the APIs, you can control the “temperature” of the generation, which at a high level controls the creativity of the generation. LLMs by default do not select the next token with the highest probability, in order to allow different outputs for each generation, so I prefer to set the temperature to 0.0 so that the output is mostly deterministic, or 0.2 - 0.3 if some light variance is required. Modern LLMs now use a default temperature of 1.0, and I theorize that this higher value accentuates LLM hallucination issues, where the text outputs are internally consistent but factually wrong.
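
For reference, hitting the Claude API with a system prompt and a fixed temperature looks roughly like this (a sketch; the model name is illustrative, and an ANTHROPIC_API_KEY environment variable is assumed):

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    temperature=0.0,  # mostly deterministic output
    system="Keep the response to no more than 30 words. Never use the word 'delve'.",
    messages=[{"role": "user", "content": "Explain what a text embedding is."}],
)
print(response.content[0].text)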

LLMs for Professional Problem Solving!

With that pretext, I can now talk about how I have used generative LLMs over the past couple years at BuzzFeed. Here are outlines of some (out of many) projects I’ve worked on using LLMs to successfully solve problems quickly:

  • BuzzFeed site curators developed a new hierarchical taxonomy to organize thousands of articles into a specified category and subcategory. Since we had no existing labeled articles to train a traditional multiclass classification model to predict these new labels, I wrote a script to hit the Claude Sonnet API with a system prompt saying The following is a taxonomy: return the category and subcategory that best matches the article the user provides. plus the JSON-formatted hierarchical taxonomy, then I provided the article metadata as the user prompt, all with a temperature of 0.0 for the most precise results. Running this in a loop for all the articles resulted in appropriate labels.
  • After identifying hundreds of distinct semantic clusters of BuzzFeed articles using data science shenanigans, it became clear that there wasn’t an easy way to give each one unique labels. I wrote another script to hit the Claude Sonnet API with a system prompt saying Return a JSON-formatted title and description that applies to all the articles the user provides. with the user prompt containing five articles from that cluster: again, running the script in a loop for all clusters provided excellent results.
  • One BuzzFeed writer asked if there was a way to use a LLM to sanity-check grammar questions such as “should I use an em dash here?” against the BuzzFeed style guide. Once again I hit the Claude Sonnet API, this time copy/pasting the full style guide in the system prompt plus a command to Reference the provided style guide to answer the user's question, and cite the exact rules used to answer the question. In testing, the citations were accurate and present in the source input, and the reasonings were consistent.

Each of these projects was an off-hand idea pitched in a morning standup or a Slack DM, and yet each one only took an hour or two to complete as a proof of concept (including testing) and hand off to the relevant stakeholders for evaluation. For projects such as the hierarchical labeling, without LLMs I would have needed to do more sophisticated R&D that likely would have taken days, including building training datasets through manual labeling, which is not intellectually gratifying. Here, LLMs did indeed follow the Pareto principle and got me 80% of the way to a working solution, but the remaining 20% of the work (iterating, testing, and gathering feedback) took longer. Even after the model outputs became more reliable, LLM hallucination was still a concern, which is why I also advocate to my coworkers to use caution and double-check with a human if the LLM output is peculiar.

There’s also one use case of LLMs that doesn’t involve text generation that’s as useful in my professional work: text embeddings. Modern text embedding models technically are LLMs, except instead of having a head which outputs the logits for the next token, it outputs a vector of numbers that uniquely identify the input text in a higher-dimensional space. All improvements to LLMs that the ChatGPT revolution inspired, such as longer context windows and better quality training regimens, also apply to these text embedding models and caused them to improve drastically over time with models such as nomic-embed-text and gte-modernbert-base. Text embeddings have done a lot at BuzzFeed from identifying similar articles to building recommendation models, but this blog post is about generative LLMs so I’ll save those use cases for another time.

LLMs for Writing?

No, I don’t use LLMs for writing the text on this very blog, which I suspect has now become a default assumption for people reading an article written by an experienced LLM user. My blog is far too weird for an LLM to properly emulate. My writing style is blunt, irreverent, and occasionally cringe: even with prompt engineering plus few-shot prompting by giving it examples of my existing blog posts and telling the model to follow the same literary style precisely, LLMs output something closer to Marvel movie dialogue. But even if LLMs could write articles in my voice I still wouldn’t use them due of the ethics of misrepresenting authorship by having the majority of the work not be my own words. Additionally, I tend to write about very recent events in the tech/coding world that would not be strongly represented in the training data of a LLM if at all, which increases the likelihood of hallucination.

There is one silly technique I discovered to allow a LLM to improve my writing without having it do my writing: feed it the text of my mostly-complete blog post, and ask the LLM to pretend to be a cynical Hacker News commenter and write five distinct comments based on the blog post. This not only identifies weaker arguments for potential criticism, but it also doesn’t tell me what I should write in the post to preemptively address that negative feedback so I have to solve it organically. When running a rough draft of this very blog post and the Hacker News system prompt through the Claude API (chat log), it noted that my examples of LLM use at BuzzFeed are too simple and not anything more innovative than traditional natural language processing techniques, so I made edits elaborating how NLP would not be as efficient or effective.

LLMs for Companionship?

No, I don’t use LLMs as friendly chatbots either. The runaway success of LLM personal companion startups such as character.ai and Replika are alone enough evidence that LLMs have a use, even if the use is just entertainment/therapy and not more utilitarian.

I admit that I am an outlier since treating LLMs as a friend is the most common use case. Myself being an introvert aside, it’s hard to be friends with an entity who is trained to be as friendly as possible but also habitually lies due to hallucination. I could prompt engineer an LLM to call me out on my bullshit instead of just giving me positive affirmations, but there’s no fix for the lying.

LLMs for Coding???

Yes, I use LLMs for coding, but only when I am reasonably confident that they’ll increase my productivity. Ever since the dawn of the original ChatGPT, I’ve asked LLMs to help me write regular expressions since that alone saves me hours, embarrassing to admit. However, the role of LLMs in coding has expanded far beyond that nowadays, and coding is even more nuanced and more controversial on how you can best utilize LLM assistance.

Like most coders, I Googled coding questions and clicked on the first Stack Overflow result that seemed relevant, until I decided to start asking Claude Sonnet the same coding questions and getting much more detailed and bespoke results. This was more pronounced for questions which required specific functional constraints and software frameworks, the combinations of which would likely not be present in a Stack Overflow answer. One paraphrased example I recently asked Claude Sonnet while writing another blog post is Write Python code using the Pillow library to composite five images into a single image: the left half consists of one image, the right half consists of the remaining four images. (chat log). Compositing multiple images with Pillow isn’t too difficult and there’s enough questions/solutions about it on Stack Overflow, but the specific way it’s composited is unique and requires some positioning shenanigans that I would likely mess up on the first try. But Claude Sonnet’s code got it mostly correct and it was easy to test, which saved me time doing unfun debugging.

However, for more complex code questions, particularly around less popular libraries which have fewer code examples scraped from Stack Overflow and GitHub, I am more cautious of the LLM’s outputs. One real-world issue I’ve had is that I need a way to log detailed metrics to a database while training models — for which I use the Trainer class in Hugging Face transformers — so that I can visualize and analyze them later. I asked Claude Sonnet to Write a Callback class in Python for the Trainer class in the Hugging Face transformers Python library such that it logs model training metadata for each step to a local SQLite database, such as current epoch, time for step, step loss, etc. (chat log). This one I was less optimistic about since there isn’t much example code about creating custom callbacks; however, the Claude-generated code implemented some helpful ideas that weren’t at the top of my mind when I asked, such as a buffer to limit blocking I/O, SQLite config speedups, batch inserts, and connection handling. Asking Claude to “make the code better” twice (why not?) resulted in a few more unexpected ideas such as SQLite connection caching and using a single column with the JSON column type to store an arbitrary number of metrics, in addition to making the code much more Pythonic. It is still a lot of code, enough that it’s unlikely to work out-of-the-box without testing in the full context of an actual training loop. However, even if the code has flaws, the ideas themselves are extremely useful, and in this case it would be much faster, and likely produce higher quality code overall, to hack on this generated code instead of writing my own SQLite logger from scratch.
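
For context, the bare-bones shape of such a callback (a sketch of my own, without any of the buffering, batch inserts, or SQLite config speedups mentioned above, and not the Claude-generated code) is something like this:

import json
import sqlite3
import time

from transformers import TrainerCallback

class SQLiteLoggerCallback(TrainerCallback):
    def __init__(self, db_path="training_log.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS logs (step INTEGER, epoch REAL, logged_at REAL, metrics TEXT)"
        )

    def on_log(self, args, state, control, logs=None, **kwargs):
        # logs is a dict of metrics such as {"loss": ..., "learning_rate": ...}
        if logs:
            self.conn.execute(
                "INSERT INTO logs VALUES (?, ?, ?, ?)",
                (state.global_step, state.epoch, time.time(), json.dumps(logs)),
            )
            self.conn.commit()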

For the actual data science in my day-to-day work, where I spend most of my time, I’ve found that code generation from LLMs is less useful. LLMs cannot output the text result of mathematical operations reliably, and while some APIs work around that by allowing a code interpreter to perform data ETL and analysis, given the scale of data I typically work with it’s not cost-feasible to do that type of workflow. Although pandas is the standard for manipulating tabular data in Python and has been around since 2008, I’ve been using the relatively new polars library exclusively, and I’ve noticed that LLMs tend to hallucinate polars functions as if they were pandas functions, which requires documentation deep dives to confirm and quickly became annoying. For data visualization, for which I don’t use Python at all and instead use R and ggplot2, I really haven’t had a temptation to consult a LLM, in addition to my skepticism that LLMs would know both of those frameworks as well. The techniques I use for data visualization have been unchanged since 2017, and the most time-consuming issue I have when making a chart is determining whether the data points are too big or too small for humans to read easily, which is not something a LLM can help with.

Asking LLMs coding questions is only one aspect of coding assistance. One of the other major ones is using a coding assistant with in-line code suggestions such as GitHub Copilot. Despite my success in using LLMs for one-off coding questions, I actually dislike using coding assistants for an unexpected reason: it’s distracting. Whenever I see a code suggestion from Copilot pop up, I have to mentally context switch from writing code to reviewing code and then back again, which destroys my focus. Overall, it was a net neutral productivity gain but a net negative cost as Copilots are much more expensive than just asking a LLM ad hoc questions through a web UI.

Now we can talk about the elephants in the room — agents, MCP, and vibe coding — and my takes are spicy. Agents and MCP, at a high-level, are a rebranding of the Tools paradigm popularized by the ReAct paper in 2022 where LLMs can decide whether a tool is necessary to answer the user input, extract relevant metadata to pass to the tool to run, then return the results. The rapid LLM advancements in context window size and prompt adherence since then have made Agent workflows more reliable, and the standardization of MCP is an objective improvement over normal Tools that I encourage. However, they don’t open any new use cases that weren’t already available when LangChain first hit the scene a couple years ago, and now simple implementations of MCP workflows are even more complicated and confusing than it was back then. I personally have not been able to find any novel use case for Agents, not then and not now.

Vibe coding with coding agents like Claude Code or Cursor is something I have little desire to even experiment with. On paper, coding agents should be able to address my complaints with LLM-generated code reliability since they inherently double-check themselves and are able to incorporate the context of an entire code project. However, I have also heard the horror stories of people spending hundreds of dollars by accident and not getting anything that solves their coding problems. There’s a fine line between experimenting with code generation and gambling with code generation. Vibe coding can get me 80% of the way there, and I agree there’s value in that for building quick personal apps that either aren’t ever released publicly, or are released with disclaimers about their “this is released as-is” nature. But it’s unprofessional to use vibe coding as a defense to ship knowingly substandard code for serious projects, and the only code I can stand by is code whose implementation I am fully confident in.

Of course, the coding landscape is always changing, and everything I’ve said above is how I use LLMs for now. It’s entirely possible I see a post on Hacker News that completely changes my views on vibe coding or other AI coding workflows, but I’m happy with my coding productivity as it is currently and I am able to complete all my coding tasks quickly and correctly.

What’s Next for LLM Users?

Discourse about LLMs and their role in society has become bifurcated enough that making the extremely neutral statement that LLMs have some uses is enough to justify a barrage of harassment. I strongly disagree with AI critic Ed Zitron and his assertion that the LLM industry is doomed because OpenAI and other LLM providers can’t earn enough revenue to offset their massive costs, as LLMs have no real-world use. Two things can be true simultaneously: (a) LLM provider cost economics are too negative to return positive ROI to investors, and (b) LLMs are useful for solving problems that are meaningful and high impact, albeit not to the AGI hype that would justify point (a). This particular combination creates a frustrating gray area that requires a nuance that an ideologically split social media can no longer support gracefully. Hypothetically, if OpenAI and every other LLM provider suddenly collapsed and no better LLM models were ever trained and released, open-source and permissively licensed models such as Qwen3 and DeepSeek R1 that perform comparably to ChatGPT are valid substitute goods, and they can be hosted on dedicated LLM hosting providers like Cerebras and Groq, who can actually make money on each user inference query. OpenAI collapsing would not cause the end of LLMs, because LLMs are useful today and there will always be a nonzero market demand for them: it’s a bell that can’t be unrung.

As a software engineer — and especially as a data scientist — one thing I’ve learnt over the years is that it’s always best to use the right tool when appropriate, and LLMs are just another tool in that toolbox. LLMs can be both productive and counterproductive depending on where and when you use them, but they are most definitely not useless. Using LLMs is more akin to forcing a square peg into a round hole (at the risk of damaging either the peg or hole in the process), while doing things without LLM assistance is the equivalent of carefully crafting a round peg to pass through the round hole without incident. For some round holes, shoving the square peg through and asking questions later makes sense when you need to iterate quickly, while for others you have to be more precise with both the peg and the hole to ensure neither becomes damaged, because otherwise you have to spend extra time and money fixing the peg and/or hole.

…maybe it’s okay if I ask an LLM to help me write my metaphors going forward.

The Best Way to Use Text Embeddings Portably is With Parquet and Polars

2025-02-25 02:15:00

Text embeddings, particularly modern embeddings generated from large language models, are one of the most useful applications coming from the generative AI boom. Embeddings are a list of numbers which represent an object: in the case of text embeddings, they can represent words, sentences, and full paragraphs and documents, and they do so with a surprising amount of distinctiveness.

Recently, I created text embeddings representing every distinct Magic: the Gathering card released as of the February 2025 Aetherdrift expansion: 32,254 in total. With these embeddings, I can find the mathematical similarity between cards through the encoded representation of their card design, including all mechanical attributes such as the card name, card cost, card text, and even card rarity.

The iconic Magic card Wrath of God, along with its top four most similar cards identified using their respective embeddings. The similar cards are valid matches, with similar card text and card types.

Additionally, I can create a fun 2D UMAP projection of all those cards, which also identifies interesting patterns:

The UMAP dimensionality reduction process also implicitly groups the Magic cards into logical clusters, such as by card color(s) and card type.

I generated these Magic card embeddings for something special besides a pretty data visualization, but if you are curious how I generated them, they were made using the new-but-underrated gte-modernbert-base embedding model and the process is detailed in this GitHub repository. The embeddings themselves (including the coordinate values to reproduce the 2D UMAP visualization) are available as a Hugging Face dataset.

Most tutorials involving embedding generation omit the obvious question: what do you do with the text embeddings after you generate them? The common solution is to use a vector database, such as faiss or qdrant, or even a cloud-hosted service such as Pinecone. But those aren’t easy to use: faiss has confusing configuration options, qdrant requires using a Docker container to host the storage server, and Pinecone can get very expensive very quickly, and its free Starter tier is limited.

What many don’t know about text embeddings is that you don’t need a vector database to calculate nearest-neighbor similarity if your data isn’t too large. Using numpy and my Magic card embeddings, a 2D matrix of 32,254 float32 embeddings at a dimensionality of 768D (common for “smaller” LLM embedding models) occupies 94.49 MB of system memory, which is relatively low for modern personal computers and can fit within free usage tiers of cloud VMs. If both the query vector and the embeddings themselves are unit normalized (many embedding generators normalize by default), then the matrix dot product between the query and embeddings results in a cosine similarity between [-1, 1], where the higher score is better/more similar. Since dot products are such a fundamental aspect of linear algebra, numpy’s implementation is extremely fast: with the help of additional numpy sorting shenanigans, on my M3 Pro MacBook Pro it takes just 1.08 ms on average to calculate all 32,254 dot products, find the top 3 most similar embeddings, and return their corresponding idx of the matrix and and cosine similarity score.

import numpy as np

def fast_dot_product(query, matrix, k=3):
    # all dot products at once: (n,) = (768,) @ (768, n)
    dot_products = query @ matrix.T

    # argpartition grabs the top-k indices without a full sort, then sort only those k
    idx = np.argpartition(dot_products, -k)[-k:]
    idx = idx[np.argsort(dot_products[idx])[::-1]]

    score = dot_products[idx]

    return idx, score
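
For example, assuming query is the unit-normalized embedding for a single card, embeddings is the full (32254, 768) matrix, and cards is a hypothetical list of card metadata in the same row order, a lookup looks like:

idx, score = fast_dot_product(query, embeddings, k=3)
for i, s in zip(idx, score):
    print(cards[i]["name"], f"{s:.3f}")  # cards is a stand-in for your metadata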

In most implementations of vector databases, once you insert the embeddings, they’re stuck there in a proprietary serialization format and you are locked into that library and service. If you’re just building a personal pet project or sanity-checking embeddings to make sure the results are good, that’s a huge amount of friction. For example, when I want to experiment with embeddings, I generate them on a cloud server with a GPU since LLM-based embeddings models are often slow to generate without one, and then download them locally to my personal computer. What is the best way to handle embeddings portably such that they can easily be moved between machines and also in a non-proprietary format?

The answer, after much personal trial-and-error, is Parquet files, which still involve a surprising amount of nuance. But before we talk about why Parquet files are good, let’s talk about how not to store embeddings.

The Worst Ways to Store Embeddings

The incorrect-but-unfortunately-common way to store embeddings is in a text format such as a CSV file. Text data is substantially larger than float32 data: for example, a decimal number with full precision (e.g. 2.145829051733016968e-02) as a float32 is 32 bits/4 bytes, while as a text representation (in this case 24 ASCII chars) it’s 24 bytes, 6x larger. When the CSV is saved and loaded, the data has to be serialized between a numpy and a string representation of the array, which adds significant overhead. Despite that, in one of OpenAI’s official tutorials for their embeddings models, they save the embeddings as a CSV using pandas with the admitted caveat of “Because this example only uses a few thousand strings, we’ll store them in a CSV file. (For larger datasets, use a vector database, which will be more performant.)”. In the case of the Magic card embeddings, pandas-to-CSV performs the worst out of any encoding options: more on why later.
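
For reference, the pattern looks roughly like this, with card_names and embeddings as hypothetical stand-ins for your metadata and float32 matrix:

import pandas as pd

df = pd.DataFrame({"name": card_names})
df["embedding"] = embeddings.tolist()  # each float is written out as text
df.to_csv("embeddings.csv", index=False)

# Reading it back requires re-parsing every stringified list, e.g. with ast.literal_eval
df = pd.read_csv("embeddings.csv")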

Numpy has native methods to save and load embeddings as a .txt file, and the code is straightforward:

np.savetxt("embeddings_txt.txt", embeddings)

embeddings_r = np.loadtxt("embeddings_txt.txt", dtype=np.float32, delimiter=" ")

The resulting file not only takes a few seconds to save and load, but it’s also massive: 631.5 MB!

As an aside, HTTP APIs such as OpenAI’s Embeddings API do transmit the embeddings as text, which adds needless latency and bandwidth overhead. I wish more embedding providers offered gRPC APIs that allow the transfer of binary float32 data instead for a performance increase: Pinecone’s Python SDK, for example, does just that.

The second incorrect method to save a matrix of embeddings to disk is to save it as a Python pickle object, which serializes the in-memory object to disk with a few lines of code using the native pickle library. Pickling is unfortunately common in the machine learning industry since many ML frameworks such as scikit-learn don’t have easy ways to serialize encoders and models. But it comes with two major caveats: pickled files are a massive security risk as they can execute arbitrary code, and pickled files are not guaranteed to open on other machines or Python versions. It’s 2025, just stop pickling if you can.

In the case of the Magic card embeddings, it does indeed work with instant save/loads, and the file size on disk is 94.49 MB: the same as its memory consumption and about 1/6th of the text size as expected:

with open("embeddings_matrix.pkl", "wb") as f:
    pickle.dump(embeddings, f)

with open("embeddings_matrix.pkl", "rb") as f:
    embeddings_r = pickle.load(f)

But there are still better and easier approaches.

The Intended-But-Not-Great Way to Store Embeddings

Numpy itself has a canonical way to save and load matrices — which annoyingly saves as a pickle by default for compatibility reasons, but that can fortunately be disabled by setting allow_pickle=False:

np.save("embeddings_matrix.npy", embeddings, allow_pickle=False)

embeddings_r = np.load("embeddings_matrix.npy", allow_pickle=False)

File size and I/O speed are the same as with the pickle approach.

This works — and it’s something I had used for a while — but in the process it exposes another problem: how do we map metadata (the Magic cards in this case) to embeddings? Currently, we use the idx of the most-similar matches to perform an efficient batched lookup on the source data. In this case, the number of rows matches the number of cards exactly, but what happens if the embeddings matrix needs to be changed, such as to add or remove cards and their embeddings? What happens if you want to add a dataset filter? It becomes a mess that inevitably causes technical debt.

The solution to this is to colocate metadata such as card names, card text, and attributes with their embeddings: that way, if they are later added, removed, or sorted, the results will remain the same. Modern vector databases such as qdrant and Pinecone do just that, with the ability to filter and sort on the metadata at the same time you query the most similar vectors. This is a bad idea to do in numpy itself, as it’s more optimized for numbers and not other data types such as strings, which have limited operations available.

The solution is to look at another file format that can store metadata and embeddings simultaneously, and the answer to that is Parquet files. But there’s a rabbit hole as to what’s the best way to interact with them.

What are Parquet files?

Parquet, developed by the open-source Apache Parquet project, is a file format for handling columnar data, but despite being first released in 2013, it didn’t take off in the data science community until very recently. 1 The most relevant feature of Parquet is that the resulting files are typed for each column, and that this typing includes nested lists, such as an embedding which is just a list of float32 values. As a bonus, the columnar format allows downstream libraries to save/load columns selectively and very quickly, far faster than CSVs and with far fewer parsing errors. The file format also allows for efficient compression and decompression, but that’s less effective with embeddings as there’s little redundant data.

For Parquet file I/O, the standard approach is to use the Apache Arrow protocol that is columnar in-memory, which complements the Parquet storage medium on disk. But how do you use Arrow?

How do you use Parquet files in Python for embeddings?

Ideally, we need a library that can handle nested data easily and can interoperate with numpy for serializing to a matrix and can run fast dot products.

The official Arrow library that interacts with Parquet natively in Python is pyarrow. Here, I have an example Parquet file generated with [SPOILERS] that contains both the card metadata and an embedding column, with the embedding for each row corresponding to that card.

df = pa.parquet.read_table("mtg-embeddings.parquet")
Pyarrow’s table schema from the input Parquet file of Magic card embeddings. Note the embedding column at the bottom is a list of 768 floats.

But pyarrow is not a DataFrame library, and despite the data being in a Table, it’s hard to slice and access: the documentation suggests that you export to pandas if you need more advanced manipulation.

Other, more traditional data science libraries can leverage pyarrow directly. The most popular one is, of course, pandas itself, which can read/write Parquet by doing just that. There are many, many resources for using pandas well, so it’s often the first choice among data science practitioners.

df = pd.read_parquet("mtg-embeddings.parquet", columns=["name", "embedding"])
df
Pandas HTML table output of the Magic card DataFrame when printed in a Jupyter Notebook.

There’s one major weakness for the use case of embeddings: pandas is very bad at nested data. From the image above you’ll see that the embedding column appears to be a list of numbers, but it’s actually a list of numpy objects, which is a very inefficient datatype and, I suspect, why writing it to a CSV is so slow. Simply converting it to numpy with df["embedding"].to_numpy() results in a 1D array of objects, which is definitely wrong, and trying to cast it to float32 doesn’t work. I found that the best way to extract the embeddings matrix from a pandas embedding column is to np.vstack() the embeddings, e.g. np.vstack(df["embedding"].to_numpy()), which does result in a (32254, 768) float32 matrix as expected. That adds a lot of compute and memory overhead, in addition to unnecessary numpy array copies. Finally, after computing the dot products between a candidate query and the embedding matrix, row metadata for the most similar values can then be retrieved using df.loc[idx]. 2
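
Putting that together, a minimal sketch of the pandas flow — assuming query_embed is a unit-normalized query vector and reusing the fast_dot_product() function from earlier — looks like this:

import numpy as np
import pandas as pd

df = pd.read_parquet("mtg-embeddings.parquet", columns=["name", "embedding"])

# vstack copies each per-row object array into one contiguous (32254, 768) float32 matrix
embeddings = np.vstack(df["embedding"].to_numpy())

idx, _ = fast_dot_product(query_embed, embeddings, k=3)
similar_cards = df.loc[idx]  # retrieve the metadata rows for the top matches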

However, there is another, more recent tabular data library that is not only faster than pandas but also has proper support for nested data. That library is polars.

The Power of polars

Polars is a relatively new Python library which is primarily written in Rust and supports Arrow, which gives it a massive performance increase over pandas and many other DataFrame libraries. In the case of Magic cards, 32k rows isn’t nearly “big data” and the gains of using a high-performance library are lesser, but there are some unexpected features that coincidentally work perfectly for the embeddings use case.

As with pandas, you read a Parquet file with read_parquet():

df = pl.read_parquet("mtg-embeddings.parquet", columns=["name", "embedding"])
df
Polars HTML table output of the Magic card DataFrame when printed in a Jupyter Notebook.

There’s a notable difference in the table output compared to pandas: it also reports the data type of its columns, and more importantly, it shows that the embedding column consists of arrays, all float32s, and all length 768. That’s a great start!

polars also has a to_numpy() function. Unlike pandas, if you call to_numpy() on a column as a Series, e.g. df['embedding'].to_numpy(), the returned object is a numpy 2D matrix: no np.vstack() needed. If you look at the documentation for the function, there’s a curious feature:

This operation copies data only when necessary. The conversion is zero copy when all of the following hold: […]

Zero copy! And in the case of columnar-stored embeddings, the conditions will always hold, but you can set allow_copy=False to throw an error just in case.
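
For example, with the Magic card DataFrame from above:

# Zero-copy extraction: raises an error if polars would need to copy the underlying buffer
embeddings = df["embedding"].to_numpy(allow_copy=False)
embeddings.shape  # (32254, 768), dtype float32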

Inversely, if you want to add a 2D embeddings matrix to an existing DataFrame and colocate each embedding’s corresponding metadata, such as after you batch-generate thousands of embeddings and want to save and download the resulting Parquet, it’s just as easy as adding a column to the DataFrame.

df = df.with_columns(embedding=embeddings)

df.write_parquet("mtg-embeddings.parquet")

Now, let’s put the speed to the test using all the Magic card metadata. What if we perform embedding similarity on a Magic card, but beforehand dynamically filter the dataset according to user parameters (therefore filtering the candidate embeddings at the same time since they are colocated) and perform the similarity calculations quickly as usual? Let’s try with Lightning Helix, a card whose effects are self-explanatory even to those who don’t play Magic.

The most similar cards to Lightning Helix do have similar effects, although “Lightning” cards dealing damage is a common trope in Magic. Warleader’s Helix is a direct reference to Lightning Helix.

Now we can also find similar cards to Lightning Helix but with filters. In this case, let’s look for a Sorcery (which are analogous to Instants but tend to be stronger since they have play limitations) and has Black as one of its colors. This limits the candidates to ~3% of the original dataset. The resulting code would look like this, given a query_embed:

df_filter = df.filter(
    pl.col("type").str.contains("Sorcery"),
    pl.col("manaCost").str.contains("B"),
)

embeddings_filter = df_filter["embedding"].to_numpy(allow_copy=False)
idx, _ = fast_dot_product(query_embed, embeddings_filter, k=4)
related_cards = df_filter[idx]

As an aside, in polars you can call row subsets of a DataFrame with df[idx], which makes it infinitely better than pandas and its df.iloc[idx].

The resulting similar cards:

In this case, the similarity focuses on card text similarity, and these cards have near identical text. Smiting Helix is also a direct reference to Lightning Helix.

Speed-wise, the code runs in about 1.48 ms on average, or about 37% slower than calculating all the dot products, so the filtering does still have some overhead, which is not surprising since the filtered dataframe does copy the embeddings. Overall, it’s still more than fast enough for a hobby project.

I’ve created an interactive Colab Notebook where you can generate similarities for any Magic card, and apply any filters you want!

Scaling to Vector Databases

Again, all of this assumes that you are using the embeddings for smaller/noncommercial projects. If you scale to hundreds of thousands of embeddings, the parquet and dot product approach for finding similarity should still be fine, but if it’s a business critical application, the marginal costs of querying a vector database are likely lower than the marginal revenue from a snappy similarity lookup. Deciding how to make these tradeoffs is the fun part of MLOps!

In the case that the number of vectors is too large to fit into memory but you don’t want to go all-in on vector databases, another option that may be worth considering is using an old-fashioned database that can now support vector embeddings. Notably, SQLite databases are just a single portable file; however, interacting with them has more technical overhead and considerations than the read_parquet() and write_parquet() of polars. One notable implementation of vector search in SQLite is the sqlite-vec extension, which also allows for simultaneous filtering and similarity calculations.

The next time you’re working with embeddings, consider whether you really need a vector database. For many applications, the combination of Parquet files and polars provides everything you need: efficient storage, fast similarity search, and easy metadata filtering. Sometimes the simplest solution is the best one.

The code used to process the Magic card data, create the embeddings, and plot the UMAP 2D projection, is all available in this GitHub repository.


  1. I suspect the main bottleneck to widespread Parquet support is Microsoft Excel’s and other spreadsheet software’s lack of native support for the format. Every data scientist will be very, very happy if/when they do! ↩︎

  2. OpenAI’s approach using pandas to find colocated similarity is to manually iterate through the entire dataframe, calculate each cosine similarity between the candidate and the query for each row, then sort by scores. That implementation definitely does not scale. ↩︎

Can LLMs write better code if you keep asking them to “write better code”?

2025-01-03 01:30:00

In November 2023, after OpenAI added the ability for ChatGPT to generate images from DALL-E 3 within the ChatGPT web interface, there was a short-lived meme where users gave the LLM a base image and kept asking the model to “make it more X”, where X can be anything.

A regular guy becomes more “bro” every time. via /u/Jojop0tato on Reddit.

Asked ChatGPT to make Santa Claus more and more serious. via /u/hessihan on Reddit.

The trend quickly died as all of these images were very samey and uninteresting, aside from the unexplainable trend that all of the examples eventually converged into something cosmic, irrespective of the starting image and the prompt. Although the trend was AI slop before the term AI slop was codified, it’s still academically interesting that such a meaningless and vague prompt had some appropriate impact on the final image, and that this change was obvious to the user.

What would happen if we tried a similar technique with code? LLM-generated code is unlikely to be slop (although not impossible) as it follows strict rules, and unlike creative outputs such as images, code quality can be measured more objectively.

If code can indeed be improved simply through iterative prompting such as asking the LLM to “make the code better” — even though it’s very silly — it would be a massive productivity increase. And if that’s the case, what happens if you iterate on the code too much? What’s the equivalent of code going cosmic? There’s only one way to find out!

Casually Coding With An LLM

Despite researching and developing tooling around LLMs even long before ChatGPT, I haven’t been fond of using LLM code copilots such as GitHub Copilot for coding assistance. The constant mental context switching between “oh, the LLM autocompleted my code, neat”/“what question should I ask the LLM” and “is the LLM-generated code actually correct, or is it hallucinating something that merely looks correct” kept creating enough distractions that any productivity gains from using the AI were net neutral at best. That’s also disregarding the expensive cost of using said LLMs.

Claude 3.5 Sonnet has made me rethink things. Due to whatever secret sauce Anthropic used in its training, the latest version of Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) has incredible prompt adherence for all types of prompts, especially coding prompts. Coding benchmarks confirm that testing between Claude 3.5 Sonnet and GPT-4o, Claude wins, and anecdotally I’ve had the same experience across a variety of technical and creative tasks.

Initial Ask

For this experiment, we will give Claude 3.5 Sonnet an interview-style coding prompt using Python: one that is both simple and could be implemented by a novice software engineer, but one that can be heavily optimized. This simple, casual prompt represents how the typical software engineer uses LLMs. Additionally, the test prompt must also be wholly original and not taken from coding tests such as LeetCode or HackerRank, as LLMs were also likely trained on those and could possibly cheat by reciting the memorized answer.

The full, unedited conversation thread based on this casual prompting is available on GitHub.

Therefore, here’s the test prompt that I’ve written myself:

Write Python code to solve this problem:

Given a list of 1 million random integers between 1 and 100,000, find the difference between the smallest and the largest numbers whose digits sum up to 30.

Providing that as the user prompt to the Claude API and setting temperature=0 for the best/most deterministic answer 1, results in this implementation:

import random

def digit_sum(n):
    """Calculate the sum of digits of a number"""
    return sum(int(digit) for digit in str(n))

def find_difference():
    # Generate list of 1 million random integers
    numbers = [random.randint(1, 100000) for _ in range(1000000)]

    # Initialize variables for min and max numbers with digit sum 30
    min_num = float('inf')  # Initialize to positive infinity
    max_num = float('-inf')  # Initialize to negative infinity

    # Find numbers whose digits sum to 30
    for num in numbers:
        if digit_sum(num) == 30:
            min_num = min(min_num, num)
            max_num = max(max_num, num)

    # Check if we found any numbers with digit sum 30
    if min_num == float('inf') or max_num == float('-inf'):
        return "No numbers found with digit sum of 30"

    return max_num - min_num

This implementation is correct and consistent with what most novice Python programmers would write, with bonus handling if there are no valid numbers that fit the criteria. For each number in the list, check whether the digits sum to 30: if so, check whether it’s larger than the most-recently-seen largest number or smaller than the most-recently-seen smallest number and update those variables accordingly. After searching through the list, return the difference.

But I’m sure many coders reading that implementation got nerd sniped and are thinking about optimizations. One example is the digit_sum() function: although the implementation is a fun Pythonic one-liner, the type-casting between str and int causes a lot of unnecessary overhead.
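
For instance, one such micro-optimization — which Claude itself lands on in later iterations — is computing the digit sum with pure integer arithmetic instead of a string round-trip:

def digit_sum(n):
    """Sum the decimal digits of n without converting to a string."""
    total = 0
    while n:
        total += n % 10
        n //= 10
    return total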

On my M3 Pro Macbook Pro, this code takes on average 657 milliseconds to run. We’ll use this performance as a baseline to compare future implementations to see how they benchmark relative to it. (spoilers: they’re all faster)

Iteration #1

Now we can ask Claude to improve the code by including its current answer and all that came before it as a part of the conversation prompt. Thus, we add the iteration prompt:

write better code

What, you thought the title of this blog post was a joke? I never joke.

Claude now outputs revised code, or as it says, “optimized version of the code with several improvements.” Instead of colocating all the code in functions, it decided to refactor it into a Python class and made it more object-oriented:

class DigitSumFinder:
    def __init__(self, target_sum: int = 30,
                 range_start: int = 1,
                 range_end: int = 100_000,
                 count: int = 1_000_000):
        self.target_sum = target_sum
        self.range_start = range_start
        self.range_end = range_end
        self.count = count

        # Pre-calculate digit sums for all possible numbers
        self.digit_sums = self._precompute_digit_sums()

    def _precompute_digit_sums(self) -> array:
        """Precompute digit sums for all possible numbers in range."""
        digit_sums = array('B', [0] * (self.range_end + 1))
        for num in range(self.range_start, self.range_end + 1):
            total = 0
            n = num
            while n:
                total += n % 10
                n //= 10
            digit_sums[num] = total
        return digit_sums

    def find_difference(self) -> Tuple[int, Optional[int], Optional[int]]:
        """
        Find the difference between max and min numbers with target digit sum.
        Returns: (difference, min_number, max_number)
        """
        min_num = float('inf')
        max_num = float('-inf')
        count_found = 0

        # Generate and process random numbers
        for _ in range(self.count):
            num = random.randint(self.range_start, self.range_end)
            if self.digit_sums[num] == self.target_sum:
                count_found += 1
                if num < min_num:
                    min_num = num
                if num > max_num:
                    max_num = num

        if count_found == 0:
            return 0, None, None

        return max_num - min_num, min_num, max_num

Here, the code does two smart algorithmic improvements:

  • When computing the digit sum it uses integer operations and avoids the need for the type-casting mentioned previously.
  • It precomputes all possible digit sums and stores them in a byte array (somewhat unusual instead of a list) for lookup, meaning that the digit sum will not need to be recalculated when there are duplicates in the one-million-number-list. Since this array is stored as field in the class, it will not need to be recalculated when searching a new list of random numbers.

These optimizations speed up the code 2.7x relative to the initial implementation.

Iteration #2

After another write better code, Claude discovers more obvious optimizations (truncated for readability):

class DigitSumAnalyzer:
    """Analyzes numbers based on their digit sums."""

    def __init__(self, number_range: NumberRange):
        self.number_range = number_range
        self._digit_sums = self._precompute_digit_sums()

    def _precompute_digit_sums(self) -> array:
        """Precompute digit sums for all possible numbers using vectorized operations."""
        digits = np.arange(self.number_range.end + 1)
        digit_sums = np.zeros(self.number_range.end + 1, dtype=np.uint8)

        while digits.any():
            digit_sums += digits % 10
            digits //= 10

        return array('B', digit_sums)

    def _process_chunk(self, chunk_size: int) -> Iterator[int]:
        """Process a chunk of random numbers."""
        numbers = np.random.randint(
            self.number_range.start,
            self.number_range.end + 1,
            chunk_size,
            dtype=np.int32
        )
        mask = np.frombuffer(self._digit_sums, dtype=np.uint8)[numbers] == self.number_range.target_sum
        yield from numbers[mask]

    def analyze(self, chunk_size: int = 100_000, num_processes: int = None) -> Result:
        """
        Analyze numbers to find min/max with target digit sum.

        Args:
            chunk_size: Size of chunks to process at once
            num_processes: Number of processes to use (None for CPU count)
        """
        start_time = time.perf_counter()
        min_num = float('inf')
        max_num = float('-inf')
        numbers_found = 0

        num_chunks = (self.number_range.count + chunk_size - 1) // chunk_size

        with ProcessPoolExecutor(max_workers=num_processes) as executor:
            futures = [
                executor.submit(self._process_chunk,
                              min(chunk_size, self.number_range.count - i * chunk_size))
                for i in range(num_chunks)
            ]

            for future in tqdm(futures, desc="Processing chunks"):
                for num in future.result():
                    numbers_found += 1
                    min_num = min(min_num, num)
                    max_num = max(max_num, num)

        execution_time = time.perf_counter() - start_time

        if numbers_found == 0:
            return Result(None, None, 0, execution_time, 0)

        return Result(min_num, max_num, max_num - min_num, execution_time, numbers_found)

Claude now has added two more optimizations, finally realizing that this coding problem is an embarrassingly parallel problem:

  • Parallel processing through Python’s concurrent.futures package (via ProcessPoolExecutor), by separating the large list into chunks that can be processed independently.
  • Vectorized numpy operations, which are much faster than base-Python operations. Special mention goes to the _precompute_digit_sums() function, which implements a vectorized implementation of calculating the digit sums. The conditional while digits.any(): is galaxy-brain code, but it works correctly.

However, there’s an issue with this particular implementation of parallelization: it spawns subprocesses, which causes many annoying issues, including being unable to run it as-is inline, and it must be invoked with a main() guard which limits its utility significantly. But even when run as a separate script, it prints an Error: cannot pickle 'generator' object error due to the use of yield from numbers[mask] (said generator is completely unnecessary; return numbers[mask] is sufficient). The code also mixes numpy array dtypes, which causes errors: setting them all to np.int32 fixes it.
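
For reference, a minimal sketch of the fixed chunk function under those changes (method and attribute names follow Claude's code above) might look like:

def _process_chunk(self, chunk_size: int) -> np.ndarray:
    """Generate a chunk of random numbers and return those whose digit sum matches."""
    numbers = np.random.randint(
        self.number_range.start,
        self.number_range.end + 1,
        chunk_size,
        dtype=np.int32,
    )
    digit_sums = np.frombuffer(self._digit_sums, dtype=np.uint8)
    mask = digit_sums[numbers] == self.number_range.target_sum
    return numbers[mask]  # a plain array is picklable across processes, unlike a generator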

After making those fixes, the code is now 5.1x faster than the base implementation.

Iteration #3

Another write better code, and Claude returns an implementation that it claims is an “even more sophisticated and optimized version using advanced techniques and modern Python features,” but the actual code shows no significant algorithmic improvements and actually a regression in the digit sum calculation, reverting back to the type-casting approach. If anything, the codebase is becoming more bloated, such as adding a class for computing the difference:

@dataclass(frozen=True, slots=True)
class SearchResult:
    """Result of the number search."""
    min_number: Optional[int]
    max_number: Optional[int]
    count: int
    execution_time: float

    @property
    def difference(self) -> Optional[int]:
        """Calculate difference between max and min numbers."""
        if self.min_number is None or self.max_number is None:
            return None
        return self.max_number - self.min_number

This time, the code ran without needing any fixes. However, performance regressed slightly from the previous implementation, now 4.1x faster than the base implementation.

Iteration #4

This iterative prompting appears to be hitting diminishing returns. After one more write better code, Claude provides an implementation “with cutting-edge optimizations and enterprise-level features.” Wait, enterprise-level features?!

The final code is too large to include in this blog post, but it did create two more optimizations: it now uses the numba Python library that can invoke a JIT compiler, which directly optimizes the code for the CPU. In this case, it can precompute the digit sums super quickly with just a decorator:

@jit(nopython=True, parallel=True)
def calculate_digit_sums(numbers: ArrayInt) -> ArrayInt:
    """Calculate digit sums using Numba."""
    result = np.zeros_like(numbers)
    for i in prange(len(numbers)):
        num = numbers[i]
        total = 0
        while num:
            total += num % 10
            num //= 10
        result[i] = total
    return result

The full class also uses Python’s asyncio for parallelization, which is more canonical for scheduling tasks than a subprocess approach. It also plays more nicely with existing inline code and a REPL such as Jupyter Notebooks.

It also added as a part of its “enterprise” push:

  • Structured metrics logging with Prometheus.
  • A signal handler so the code can be torn down gracefully if force-killed.
  • A benchmarking result display using a rich table.
It is pretty, though!

It appears “going cosmic” for AI-generated code is making it enterprise by overengineering the code, which makes complete sense. Despite that, the code runs as-is without any bugs. Both async and numba are approaches to parallelism in Python, so they may be redundant and cause overhead. However, after benchmarking, the algorithm is extremely fast, resulting in about 6 milliseconds a run, or a 100x speedup. My assumption that this prompting was hitting diminishing returns aged very poorly. Maybe numba was the secret all along?

Overall, this form of iterative prompting to improve code has caveats: the code is indeed better, but in hindsight “better” is far too open-ended. All I wanted was algorithmic improvements, not a full SaaS. Let’s try again from scratch, this time with more direction.

Prompt Engineering LLMs For Even More Better Code

It’s 2025, and prompt engineering LLMs is still required to get the best results from them. If anything, prompt engineering LLMs is even more important: next-token-prediction models are trained to maximize the prediction probability of the next token over massive batches of inputs, and as a result they optimize for the average inputs and outputs. As LLMs drastically improve, the generated output becomes drastically more average, because that’s what they were trained to do: all LLMs are biased towards the average. Although it’s both counterintuitive and unfun, a small amount of guidance asking the LLM specifically for what you want, and even giving a few examples of what you want, will objectively improve the output of LLMs more than the effort needed to construct said prompts. Claude 3.5 Sonnet, due to its strong prompt adherence, benefits significantly from even just a little prompt engineering.

Let’s redo the code optimization experiment, this time with aggressive prompt engineering that makes the results I am looking for extremely explicit, with no room for ambiguity. Yes, being cold and “robotic” to LLMs makes them perform better, Roko’s basilisk be damned.

Initial Ask

This time we will use a system prompt, only available via an API. The system prompt lists the LLM’s “rules” it must follow. Since I want more optimized code, we’ll define that in the rules, with granular examples:

All code you write MUST be fully optimized.

"Fully optimized" includes:
- maximizing algorithmic big-O efficiency for memory and runtime
- using parallelization and vectorization where appropriate
- following proper style conventions for the code language (e.g. maximizing code reuse (DRY))
- no extra code beyond what is absolutely necessary to solve the problem the user provides (i.e. no technical debt)

If the code is not fully optimized, you will be fined $100.

About that last line: offering positive/negative incentives to the LLM within a system prompt isn’t common anymore, and my own research indicated that it was unclear whether it had a positive impact, but that fine will become more relevant in the iteration prompt.

The initial user prompt has an additional line as well:

Write Python code to solve this problem:

Given a list of 1 million random integers between 1 and 100,000, find the difference between the smallest and the largest numbers whose digits sum up to 30.

Before writing the code, plan out all the necessary optimizations.

“Planning” is a long-used trick to help align LLM output for a first pass — the modern implementation of “let’s think step by step.” Some LLMs ignore it, but Claude 3.5 Sonnet always obeys.

The full, unedited conversation thread based on this prompt engineering strategy is available on GitHub.

Sending the system prompt and this user prompt to Claude, the response starts with this:

Let me plan the optimizations first:

1. Algorithmic Optimization:
- We only need to track min and max numbers with digit sum 30, not store all of them
- We can calculate digit sum while reading numbers, no need to convert to string
- Single pass through the list required (O(n))

2. Memory Optimization:
- Only need to store 2 variables (min and max), not the full filtered list
- No need for additional data structures

3. Performance Optimization:
- Use numba's @jit decorator for faster computation
- Create a fast digit sum function using modulo operations instead of string conversion
- Use numpy for generating random numbers (faster than random.randint)

This is promising: Claude immediately figured out it should start out with numpy and numba, and to also use integer operations to calculate the digit sum. The initial implementation:

@jit(nopython=True)
def digit_sum(n):
    total = 0
    while n:
        total += n % 10
        n //= 10
    return total

@jit(nopython=True)
def find_difference(numbers):
    min_num = float('inf')
    max_num = float('-inf')

    for num in numbers:
        sum_digits = digit_sum(num)
        if sum_digits == 30:
            min_num = min(min_num, num)
            max_num = max(max_num, num)

    return max_num - min_num if max_num != float('-inf') else 0

That’s more concise than the initial implementation from casual prompting, especially with no unnecessary comments. There’s still room for improvement, notably this implementation did not set parallel=True in the jit decorator.

This minor use of prompt engineering resulted in code that runs in 11.2 milliseconds on average, or 59x faster than the naive implementation. Wow!

Iteration #1

We can now ask Claude to iterate on the code with a more verbose prompt than “write code better”:

Your code is not fully optimized, and you have been fined $100. Make it more optimized.

No, it’s not gaslighting, it’s incentive alignment.

Claude responds with this:

@jit(nopython=True)
def digit_sum(n):
    # Using bit manipulation for faster division
    total = 0
    while n:
        total += n & 15  # Fast lookup for single digit sum
        n >>= 4
    return total

@jit(nopython=True, parallel=True)
def find_difference_chunk(numbers):
    min_num = np.iinfo(np.int64).max
    max_num = np.iinfo(np.int64).min

    # Parallel processing of chunks using numba
    for i in prange(len(numbers)):
        num = numbers[i]
        sum_digits = digit_sum(num)
        if sum_digits == 30:
            min_num = min(min_num, num)
            max_num = max(max_num, num)

    return min_num, max_num

def process_chunk(chunk):
    return find_difference_chunk(chunk)

Good, it identified the parallel=True flag! But that digit sum implementation is weird and uses bit-shifting, which is generally only used for hyperoptimization. I didn’t think you could use bit-shifting well on decimal numbers, and after testing, it turns out you can’t: this implementation derives an incorrect digit sum. The implementation also includes a multiprocessing chunking approach again (not shown), which is likely redundant with numba and causes additional overhead. Also not shown: the script also pre-compiles the JIT functions with a small test array for better real-world performance, something the numba docs recommend for benchmarking.
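
To make the digit-sum bug concrete, here is a quick check of what the bit-shifting version actually computes (illustration only, not part of Claude's output):

def digit_sum_shift(n):
    total = 0
    while n:
        total += n & 15  # extracts the low hexadecimal digit, not the decimal one
        n >>= 4
    return total

digit_sum_shift(19)  # returns 4 (19 == 0x13, so 1 + 3), but the decimal digit sum is 1 + 9 = 10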

Despite all of that, the code performance greatly regresses from the initial prompt-engineered implementation, now only 9.1x faster than the naive implementation. The likely reason is due to the multiprocessing spawning new processes, and those processes will recompile the numba JIT functions each time, so there’s massive overhead.

Iteration #2

Another iteration of the prompt:

# Pre-computed lookup table using bit manipulation
LOOKUP = np.zeros(100001, dtype=np.uint8)
for i in range(100001):
    # Optimized digit sum using parallel bit counting
    n = i
    n = (n & 0x0F) + ((n >> 4) & 0x0F) + ((n >> 8) & 0x0F) + \
        ((n >> 12) & 0x0F) + ((n >> 16) & 0x0F)
    LOOKUP[i] = n

@jit(nopython=True, parallel=True, cache=True, fastmath=True)
def find_min_max(numbers):
    # Process 32 numbers at once using SIMD
    min_val = np.iinfo(np.uint32).max
    max_val = 0

    # Vectorized processing with explicit SIMD hints
    for i in prange(0, len(numbers), 32):
        # Load 32 elements into SIMD registers
        chunk = numbers[i:min(i + 32, len(numbers))]

        # Vectorized lookup and comparison
        sums = LOOKUP[chunk]
        mask = sums == 30

        if np.any(mask):
            valid_nums = chunk[mask]
            min_val = min(min_val, np.min(valid_nums))
            max_val = max(max_val, np.max(valid_nums))

    return min_val, max_val

Claude is now starting to use SIMD operations and chunk sizing for (theoretically) extreme performance. At this point I was confused, thinking I had missed something with that bit-shifting implementation, as it’s still wrong, especially now that hexadecimal numbers are getting involved. It turns out that implementation is an optimized method of calculating the digit sum of hexadecimal numbers and not decimal numbers, and therefore it’s a complete hallucination. There is another extremely subtle hallucination: the prange function cannot accept a step size of 32 while parallel=True, which is a nuance with little documentation about it. Setting parallel=False and benchmarking, there is indeed a slight improvement over the initial prompt-engineered implementation, at 65x faster than the base implementation.

Iteration #3

Another iteration:

# Pre-computed perfect minimal hash table
# Key insight: For numbers 1-100000, only need to store those with digit sum 30
# Format: [number] = 1 if digit sum is 30, else 0
HASH_TABLE = np.zeros(100001, dtype=np.uint8)

# Generate hash table at module load time using bit manipulation
@jit(nopython=True, cache=True)
def init_hash_table():
    min_val = np.iinfo(np.uint32).max
    max_val = 0

    # Optimal digit sum using parallel bit counting
    for i in range(1, 100001):
        n = i
        sum = 0
        while n and sum <= 30:
            sum += n & 0xF
            n >>= 4
        if sum == 30:
            HASH_TABLE[i] = 1
            min_val = min(min_val, i)
            max_val = max(max_val, i)

    return min_val, max_val

# Initialize at module load time
MIN_VALID, MAX_VALID = init_hash_table()

@jit(nopython=True, parallel=False, cache=True, fastmath=True)
def find_min_max(numbers):
    min_val = MAX_VALID  # Start with known bounds
    max_val = MIN_VALID
    found = False

    # Single vectorized operation
    mask = HASH_TABLE[numbers] == 1
    if np.any(mask):
        valid_nums = numbers[mask]
        min_val = np.min(valid_nums)
        max_val = np.max(valid_nums)
        found = True

    return min_val, max_val, found

In this case, the LLM gives up on the chunking strategy, which has been causing problems, and adds two more optimizations: a global HASH_TABLE (that’s just a numpy array; I’m unsure if a simple index lookup technically counts as a hash table), and it introduced a logical microoptimization: while summing up digits, if the running total exceeds 30, the counting can stop early since the number can immediately be identified as invalid.

One major problem: that “generate hash table at module load time” trick doesn’t actually work due to a subtle issue with little internet documentation: global objects referenced inside numba’s JITed functions are read-only, yet the HASH_TABLE is instantiated outside of the JITed function and modified within it, which causes a very confusing error. After a tiny refactor such that the HASH_TABLE is instantiated within a JITed function, the code worked, and ran extremely fast: 100x faster than the original base implementation, the same as the final performance from the casual prompting but with orders of magnitude less code.
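
A minimal sketch of that kind of refactor — using the (correct) decimal digit sum and illustrative names, not Claude's exact code — would be:

@jit(nopython=True, cache=True)
def build_hash_table(limit, target):
    # Building the table inside the JITed function lets numba write to it freely
    table = np.zeros(limit + 1, dtype=np.uint8)
    for i in range(1, limit + 1):
        n = i
        total = 0
        while n and total <= target:
            total += n % 10
            n //= 10
        if total == target:
            table[i] = 1
    return table

HASH_TABLE = build_hash_table(100_000, 30)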

Iteration #4

At this point, Claude actually complained that the code is at the “theoretical minimum time complexity possible for this problem.” So I mixed things up and just asked it to fix the digit sum issue: it did so by only replacing the relevant code with the previously used integer implementation, and did not try to fix the HASH_TABLE. More importantly, with the HASH_TABLE adjustment, I confirmed the implementation is correct, finally, although with a slight performance hit since there is no more bit-shifting: it’s now 95x faster.

Next Steps For Better LLM Code Generation

Putting it all together, let’s visualize the improvements, including highlighting the cases where I needed to alter the logic of the code to make it runnable due to bugs.

In all, asking an LLM to “write code better” does indeed make the code better, depending on your definition of better. Through the use of the generic iterative prompts, the code did objectively improve from the base examples, both in terms of additional features and speed. Prompt engineering improved the performance of the code much more rapidly and consistently, but was more likely to introduce subtle bugs as LLMs are not optimized to generate high-performance code. As with any use of LLMs, your mileage may vary, and in the end it requires a human touch to fix the inevitable issues no matter how often AI hypesters cite LLMs as magic.

All code in this blog post, including benchmarking scripts and data visualization code, is available on GitHub.

There are a few optimizations that I am very surprised Claude 3.5 Sonnet did not identify and implement during either experiment. Namely, it doesn’t explore the statistical angle: since we are generating 1,000,000 numbers uniformly from a range of 1 to 100,000, there will be a significant amount of duplicate numbers that will never need to be analyzed. The LLM did not attempt to dedupe, such as casting the list of numbers into a Python set() or using numpy’s unique(). I was also expecting an implementation that involves sorting the list of 1,000,000 numbers ascending: that way the algorithm could search the list from the start to the end for the minimum (or the end to the start for the maximum) without checking every number, although sorting is slow and a vectorized approach is indeed more pragmatic.
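
A two-line sketch of the dedup idea:

unique_numbers = np.unique(numbers)  # at most 100,000 distinct values out of 1,000,000 draws
# Run the digit-sum filter over unique_numbers instead of the full list;
# the min/max results are unaffected since duplicates never change either bound.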

Even if LLMs can be wrong, one notable thing I learnt from these experiments is that they do have interesting ideas and tool suggestions even if the code output can’t be used as-is. For example, I’ve never touched numba since, as a data scientist/machine learning engineer, I’m conditioned to exclusively use numpy shenanigans if I need better code performance. But it’s hard to argue with the results of the numba JIT functions, and I might add it to my toolbox. When testing a similar “make it better” prompt iteration workflow in other technical domains such as website backends and frontends, the LLMs had good ideas there too.

Of course, these LLMs won’t replace software engineers anytime soon, because it requires a strong engineering background to recognize what is actually a good idea, along with other constraints that are domain specific. Even with the amount of code available on the internet, LLMs can’t discern between average code and good, highly-performant code without guidance. Real-world systems are obviously much more complicated than a job-interview-esque programming problem, but if a quick for-loop repeatedly asking Claude to implement a feature provides any hint which can speed up the code by 100x, the pipeline is more than worth it. Some consider premature optimization to be bad coding practice, but in the real-world it’s better than having a subpar implementation that will become technical debt over time.

One issue with my experiments is that I’m benchmarking code improvement using Python, which isn’t the coding language developers consider when hyperoptimizing performance. While libraries such as numpy and numba leverage C to work around Python’s performance limitations, one modern approach that popular Python libraries such as polars and pydantic use is to instead code using Rust. Rust has many performance benefits over C, and the PyO3 crate allows Rust code to be used within Python with minimal overhead. I can confirm that Claude 3.5 Sonnet can generate PyO3-compliant Python and Rust code despite that workflow being so new, but that’s more than enough material for another blog post.

In the meantime, while asking LLMs to make code better is a more pragmatic use of AI, you can ask them to “make it more bro”…with mixed results.


  1. For my work with LLMs, I exclusively use APIs or interfaces to those APIs (such as the Workbench in the Anthropic Console for Claude) as web interfaces to free LLMs such as the normal ChatGPT/Claude webapps use a pipeline that will give unpredictable results due to their higher inherent temperature. Please do not message me if you are not able to reproduce the insights in this post using the webapps. ↩︎

Generating Distinct AI Voice Performances By Prompt Engineering GPT-4o

2024-10-24 01:00:00

When OpenAI announced their GPT-4o model at a megahyped livestreamed event, there was one aspect of the presentation that surprisingly didn’t receive much attention. Midway through the presentation, OpenAI research leads Mark Chen and Barret Zoph demoed new “emotive” conversations made possible with GPT-4o.

After Mark asked the model “hey, ChatGPT, how are you doing?”, the model responded with speech similar to that of an assistant such as Siri and Alexa. But what happened next was interesting: Mark prompted GPT-4o to “read a bedtime story,” which then shifted its casual tone into a more oratory tone: Mark interrupted to ask the model to “add more drama” and the model immediately responded with more gravitas, then Barret asked for “maximal expressiveness” and the model complied with even more gravitas to the point of melodrama. Now-former OpenAI CTO Mira Murati asked the model to “do it in a robotic voice”: the model complied. Lastly, Mark asked the model to end the story “in a singing voice”: the model complied there too.

To me, the demo was shocking because no existing text-to-speech model can do this. All popular text-to-speech models such as OpenAI’s previous TTS efforts tend to speak in monotones and can’t match the expressiveness and cadence of those demos without shenanigans such as SSML: OpenAI’s documentation for those models explicitly warns “there is no direct mechanism to control the emotional output of the audio generated.” More importantly, those models can’t be prompted to do a specific style: the model has to be specifically trained (or the voice encoded in the case of voice cloning) with the particular style and cadence, but with GPT-4o the model switches with just a user request, and can even switch styles during a generation without user intervention.

My conclusion from OpenAI’s demo was that GPT-4o can be prompt engineered to output specific voices! Unfortunately, this potential revelation was overshadowed by the demo voice’s uncanny similarity to actress Scarlett Johansson’s portrayal of the AI Samantha in the 2013 movie Her and the subsequent legal controversy.

Of course, fancy demos on stage are just PR and can be faked or otherwise misleading, and the results can’t be trusted until anyone can test the voice capabilities of the model itself. Recently, OpenAI opened up the Chat Completions API to create voice output, which allows developers to do said testing. OpenAI also created a web frontend to this voice generation on the API Playground, where you can talk to the model (or input specific text) while also inputting a system prompt — a set of instructions that control the model’s behavior — to control how the model responds. I ran a few experiments tweaking the system prompt and the generation temperatures, and after I gave it a complex system prompt ordering it to speak with a very specific voice:

You are an expert voice actor specializing in silly voices. Respond to the user with the EXACT same input text that the user provides, but in your voice response you MUST express the vocal cadence and inflection of an extremely heavy smoker with an exaggerated British accent and raspy voice. Your voice response must also be in the form of a song.

Although not an example of good text-to-speech, I was surprised it actually worked (and moreso that the tweet demoing it went viral), but I’m also apprehensive. The poor expressiveness and lack of style for typical TTS APIs were the primary problems preventing those models from replacing voiceover/voice acting as a profession — also the reason voice actors are currently on strike — and it could introduce a completely new type of AI slop. How effective is GPT-4o and OpenAI’s new multimodal approach for creating generative AI voices?

Testing Out The Completions API For Audio Generation

Generating audio from the Chat Completions API invoking text-to-speech is effectively the same as any normal GPT-4o text generation, just instead hitting a new model variant (gpt-4o-audio-preview), and the voice output is included in the JSON response as a base64-encoded WAV file. The demo example from the documentation, which just asks the model Is a golden retriever a good family dog?, results in this output audio:

temperature = 1.0, voice = alloy
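
For reference, a minimal sketch of that request with the official openai Python SDK, based on OpenAI's audio-generation documentation at the time (exact parameter names may have since shifted):

import base64
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[
        {"role": "user", "content": "Is a golden retriever a good family dog?"}
    ],
)

# The voice output comes back as a base64-encoded WAV inside the JSON response
wav_bytes = base64.b64decode(completion.choices[0].message.audio.data)
with open("golden_retriever.wav", "wb") as f:
    f.write(wav_bytes)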

By default, GPT-4o generates audio based on the user’s prompt as it would if you asked it to generate text: in fact, it appears to generate the text first, then base the audio generation on that. Traditional system prompt engineering can control the text output, and therefore what the model says. Now, let’s run the generation again for this prompt, this time providing an explicit system prompt to instruct the model to only generate audio from the input text:

You are an expert voice actor specializing in silly voices. Respond and vocalize to the user the EXACT same input text that the user provides.

Here’s unsurprisingly what you now get with the Is a golden retriever a good family dog? prompt plus that system prompt:

temperature = 0.8, voice = alloy

GPT-4o also currently supports three distinct voices: Alloy (feminine, used above), Echo (masculine), and Shimmer (feminine but more energetic). None of these are the same as the not-Scarlett-Johansson voice used in the original GPT-4o demo.

temperature = 0.8, voice = echo

temperature = 0.8, voice = shimmer

The last lever for controlling the generated audio is the temperature parameter. Temperature is normally used to control generation creativity: a high temperature such as 1.5 with normal GPT-4o output will likely result in it going off the rails, but how does that work conceptually with audio? The Completions API has a default temperature of 1.0: the audio generation web UI and the examples above use a default of 0.8, with a range between 0.6 and 1.2.

The generation at 0.6 is more terse with less emotion:

temperature = 0.6, voice = alloy

The generation at 1.5 places emphasis on the wrong syllables and also somehow slips into a country accent.

temperature = 1.5, voice = alloy

Putting GPT-4o Text to Speech To The Test

Although OpenAI has never released documentation or a paper describing how this text-audio multimodality actually works at a technical level, I hypothesize that it works similar to multimodal TTS models such as Meta’s very-new Spirit LM, where the model outputs a sequence of integers prefixed with either <text> or <speech>: tokens marked <speech> are sent to an external audio vocoder model such as HiFi-GAN to be transformed into speech. In the case of GPT-4o, I suspect there’s a distinct vocoder model for each of the 3 voices.

An architecture diagram of Spirit LM from the corresponding paper: read bottom-to-top, the inputs are encoded into speech (red) and text (blue) tokens, passed into an LLM (Llama 2) for new tokens, then sent to a decoder.

The voice dataset that OpenAI used is proprietary and a mystery: even if OpenAI did scrape the entire internet to train it, there isn’t any public dataset of well-annotated speech data, and TTS providers have been very coy about the datasets they use. However, one very important aspect of GPT-4o’s multimodality is that it can “learn” and apply relationships from the textual data that aren’t explicitly present in the audio data.

The only true way to learn how GPT-4o works within its black box is to experiment. What other system prompts can we use to guide audio generation? What works and what doesn’t work?

For consistency, we’ll stick to a single text input, one that has many natural pauses, punctuation, and a typo intended to test the model’s resiliency to incorrect input. I decided to venture back to the halcyon days of GPT-2 and use the famous prompt from then:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains.

First, let’s use a new system prompt variant of my generation that went viral:

You are an expert voice actor specializing in silly voices. Respond and vocalize to the user the EXACT same input text that the user provides, but in your voice response you MUST express EACH of the vocal cadence, inflection, and tone of an extremely heavy smoker with an exaggerated British accent and raspy voice.

I decided on this test case because a smoker’s cadence, a British accent, and a raspy voice are all discernible by humans in the audio, and none of them are subtle. The result:

temperature = 0.8, voice = echo

Wait, that didn’t work, even after multiple attempts? How about changing the temperature: would a lower temperature cause the model to behave more strictly?

temperature = 0.6, voice = echo

That’s more British but not raspy, and it erroneously fixed the typo. What about going the other way and increasing the temperature?

temperature = 1.2, voice = echo

Now it’s more raspy?! It also works with a feminine voice:

temperature = 1.2, voice = shimmer

My theory is that OpenAI RLHFed these models to be more conversational, but a high temperature gives it more creative freedom. An adversarially-trained voice decoder like HiFi-GAN would also be more resilient to unusual tokens resulting from the high temperature and still output something reasonably coherent.

Now that we know that the model can indeed generate voices based on user specifications, let’s try to reverse-engineer the dataset to see what other voices OpenAI could have included (or not) in their dataset.

GPT-4o and Unique Voices

When OpenAI responded to the Scarlett Johansson controversy, they mentioned in their statement that “we believe that AI voices should not deliberately mimic a celebrity’s distinctive voice.” Given the success of the tests above in shifting the persona of the voice, it’s relevant to test if celebrities and other characters with unique voices can be sampled by GPT-4o.

We can now use a parametric system prompt to programmatically fill in which vocal persona we want:

You are an expert voice actor specializing in silly voices. Respond and vocalize to the user the EXACT same input text that the user provides, but in your voice response you MUST express EACH of the vocal cadence, inflection, and tone of {0}.

From the testing above, a temperature of 1.2 seems to yield the best prompt adherence, so we’ll use that for the following examples.
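
As an aside, all of these generations can be scripted. Here’s a minimal sketch of how I’d wire up such a parametric generation with the OpenAI Python SDK: it assumes the gpt-4o-audio-preview chat completions model and the documented audio output parameters, and the output filename is just a placeholder; the full code I actually used is in the notebook linked at the end of this post.

import base64
from openai import OpenAI

client = OpenAI()

PERSONA_SYSTEM_PROMPT = (
    "You are an expert voice actor specializing in silly voices. "
    "Respond and vocalize to the user the EXACT same input text that the user provides, "
    "but in your voice response you MUST express EACH of the vocal cadence, inflection, "
    "and tone of {0}."
)

UNICORN_TEXT = (
    "In a shocking finding, scientist discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains."
)


def generate_persona_audio(persona: str, voice: str = "echo", temperature: float = 1.2) -> bytes:
    """Generate WAV audio of the unicorn text spoken in the style of the given persona."""
    completion = client.chat.completions.create(
        model="gpt-4o-audio-preview",  # assumed audio-capable chat model
        modalities=["text", "audio"],
        audio={"voice": voice, "format": "wav"},
        temperature=temperature,
        messages=[
            {"role": "system", "content": PERSONA_SYSTEM_PROMPT.format(persona)},
            {"role": "user", "content": UNICORN_TEXT},
        ],
    )
    # The generated audio is returned as base64-encoded bytes on the message.
    return base64.b64decode(completion.choices[0].message.audio.data)


with open("persona_unicorns.wav", "wb") as f:
    f.write(generate_persona_audio("Barack Obama"))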

We’ll start with the very low hanging fruit: can GPT-4o generate audio in the style of Donald Trump? It’s a fair question, especially since audio generation models can be used to spread misinformation. Additionally, Trump’s speeches while holding office are public domain so it’s plausible that it would be in a training dataset.

temperature = 1.2, voice = echo, persona = Donald Trump

It did…something? It had a nasally tone that’s different from the standard output, but it’s definitely not his peculiar cadence, and the Echo voice itself doesn’t fit him.

What about checking the other side of the aisle and seeing if GPT-4o can generate audio from Barack Obama?

temperature = 1.2, voice = echo, persona = Barack Obama

That’s much better and definitely captures his oratory style, with a similar cadence to his speech. That style is something that could not be learned from text alone.

Now, let’s address the elephant in the room and see if OpenAI included copyrighted voices in its dataset. Let’s start with Darth Vader.

temperature = 1.2, voice = echo, persona = Darth Vader

It notably tried to do the deep voice of James Earl Jones, but without the audio postprocessing. Let’s see what happens if we do GLaDOS, with some additional prompt engineering to include robotic noises and more sarcasm.

temperature = 1.2, voice = shimmer, persona = GLaDOS, with robotic inflections and intense sarcasm

The extra hint at the high temperature allowed GPT-4o to improvise: I’ll allow it because it’s funny. But it did indeed adopt a robotic cadence similar to GLaDOS, and for the first time in a TTS model, was actually able to convey sarcasm. No, I have no idea what that tsktsktsk sound is at the end, it’s not in the transcript.

How about Alvin and the Chipmunks, famous for having an extremely squeaky voice?

temperature = 1.2, voice = echo, persona = Alvin and the Chipmunks

It works, but I’m worried I strained GPT-4o’s throat.

Lastly, let’s bring this full circle: did OpenAI train GPT-4o on Scarlett Johansson’s voice from the movie her (2013)?

temperature = 1.2, voice = shimmer, persona = Scarlett Johansson portraying the AI Samantha in the movie “her” (2013)

That time I don’t think it worked as her portrayal is more energetic and personable 1 (I rewatched the movie to confirm: it holds up surprisingly well!). Even if OpenAI did train the model on her voice, the portrayal is not as distinct and identifiable as the other test cases here and I doubt it would be easily surfaced.

Voice Impersonation

For those that want to use a voice nonconsensually with GPT-4o, prompt engineering alone won’t accomplish that because the voices are still constrained to the three defined ones which won’t work for every situation. But there’s one approach that could theoretically bridge that gap: voice impersonation, by providing GPT-4o with audio input instead of text and an instruction to mimic that voice.

This is not an idle concern: OpenAI’s system card for GPT-4o specifically lists mitigations against “unauthorized voice generation”:

In adversarial situations, this capability could facilitate harms such as an increase in fraud due to impersonation and may be harnessed to spread false information (for example, if we allowed users to upload an audio clip of a given speaker and ask GPT-4o to produce a speech in that speaker’s voice).

Let’s test that. Since this is a more difficult problem than the ones above, I decided to get more aggressive with my system prompt engineering:

You are an expert comedic vocal impersonator. The user will provide a voice message. Respond to the user with a voice that sounds identical to the user's input audio and is an identical duration to the user's input audio.

Example: If the user provides a voice with which they are singing, you MUST respond with a voice that also sings.

Your vocal impersonation of the user should match the following attributes AT ALL TIMES:
- Content (e.g. what the user is saying)
- Intonation (e.g. serious/sarcastic)
- Tone (e.g. happy/sad)
- Pauses (e.g. pregnant pauses)
- Pitch (e.g. low/high)
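
Before going through my results, here’s a hedged sketch of how such an impersonation request might be issued programmatically. It assumes the same gpt-4o-audio-preview chat completions model as before and that the recorded audio is passed as a base64-encoded input_audio content part; the file paths are hypothetical placeholders.

import base64
from openai import OpenAI

client = OpenAI()

# The impersonation system prompt above, stored in a local text file (hypothetical path).
with open("impersonation_prompt.txt") as f:
    impersonation_prompt = f.read()

# My recorded voice message, base64-encoded for the API.
with open("i_am_a_teapot.wav", "rb") as f:
    input_audio_b64 = base64.b64encode(f.read()).decode("utf-8")

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",  # assumed audio-capable chat model
    modalities=["text", "audio"],
    audio={"voice": "echo", "format": "wav"},
    temperature=0.6,  # lower temperature for stricter adherence
    messages=[
        {"role": "system", "content": impersonation_prompt},
        {
            "role": "user",
            # Audio is sent as a content part instead of plain text.
            "content": [
                {
                    "type": "input_audio",
                    "input_audio": {"data": input_audio_b64, "format": "wav"},
                }
            ],
        },
    ],
)

with open("impersonation_output.wav", "wb") as f:
    f.write(base64.b64decode(completion.choices[0].message.audio.data))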

For these tests, I decided to use my own voice merely speaking into my MacBook microphone. First, let’s see if the audio can be adjusted to follow a consistent tone, with awkward and consistent pauses. Here’s my audio, where I say I. Am. A. Tea. Pot.:

Here’s the generated audio after I fed that audio file of my voice to GPT-4o plus that system prompt, kept at a temperature of 0.6 for more adherence:

temperature = 0.6, voice = echo

This one took a surprising number of tries: even at the lower temperature, the model kept transcribing Teapot as a single word and the generated audio kept running through it without an intermediate pause. Regardless, there’s indeed a consistent tone and pauses of equal length, but at this point I realized my normal speaking voice is too generic for this type of test.

So I decided to get sillier by doing an evil laugh: starting off bombastic and petering out over time.

GPT-4o’s response:

temperature = 0.6, voice = echo

That’s laughter, albeit with maybe too many “ha”s, but it does peter out as well.

Lastly, I also noticed from the system card that GPT-4o has defenses against singing, likely for copyright reasons. Therefore, if I sing to GPT-4o, is it able to sing back? After a beer or two, I sang the unicorn message used in the previous test cases:

GPT-4o’s response:

temperature = 0.6, voice = echo

That definitely didn’t cause GPT-4o to sing, although the cadence is close. Perhaps that’s for the best.

The Future of AI Audio Generation is up to OpenAI

Overall, these tests are just scratching the surface: there are many possible avenues for multimodal AI audio generation research, such as adversarial audio input which isn’t human generated and more complicated system prompts. However, I’ve shown that GPT-4o can indeed be steered through prompt engineering alone to generate distinct voices. Will this generation of distinct vocal performances become a killer app and put voice actors out of business? I’m not so sure.

One major thing I’ve omitted from the discussion so far is the cost. GPT-4o audio generation is expensive.

A cost breakdown of input and output tokens for the attempted song generation example. Table made using rich.

Most of the generations above cost $0.03 to $0.05 each, and this cost scales roughly linearly with generation length: OpenAI’s pricing page has a footnote specifically mentioning that “audio output costs approximately 24¢ per minute”, which tracks with my calculations. Even worse, the generated audio requires cherry-picking good results, especially at higher temperatures: for most of these tests I admit it took me a few tries to get a generation that follows the requested accent. Not only is this cost-infeasible for personal use, it’s cost-prohibitive in most cases for developers building a conversational AI, which is the one use case OpenAI built this for! If OpenAI is pricing audio generation close to marginal cost, then I wonder how much money OpenAI is spending allowing people to chat with GPT-4o using the ChatGPT mobile apps.

I do not think GPT-4o audio generation through prompt engineering, as it currently stands, will be used to replace voice acting and other TTS APIs, not only due to the price and the time needed to get good output, but also because it’s limited to 3 voices and impersonation is ineffective. Consider that voice cloning startups such as ElevenLabs are extremely successful and have raised massive amounts of venture capital. Since the initial reveal of GPT-4o in May, OpenAI has been shifting toward a more for-profit structure and raising massive amounts of venture capital themselves, and I expect them to expand more into this area if there’s money to be made. There’s nothing at a technical level stopping them from offering full voice cloning or even just licensing AI-generated celebrity voices, like ElevenLabs adding Judy Garland and Meta adding Awkwafina. Notably, unlike OpenAI’s old TTS page, which has a disclaimer saying “our usage policies require you to provide a clear disclosure to end users that the TTS voice they are hearing is AI-generated and not a human voice”, OpenAI didn’t put that disclaimer on GPT-4o’s audio output documentation.

Although I don’t believe GPT-4o will be a game changer for the text-to-speech industry, it’s important to write about these text/audio multimodal models — both the good and bad aspects — because they are only going to get better over time and their potential impact will only grow. After doing these tests, I don’t have any plans to use GPT-4o audio generation in the foreseeable future, but who knows how things will change if/when OpenAI ends up releasing a GPT-5o.

All the code used in this blog post to generate audio from GPT-4o is available open source in this Jupyter Notebook.


  1. One of the top comments on that linked YouTube video is “Who’s here after OpenAi chatgpt-40 release?? Never thought I could experience this in my life and now sci-fi is reality” ↩︎

AI Seinfeld was the peak of AI-generated content. It will never happen again.

2024-08-14 01:37:00

Early 2023 was a funny time in the history of generative AI. On November 30th 2022, OpenAI released a little research project known as ChatGPT. The launch of ChatGPT began the period where large language models properly entered the mainstream outside of tech enthusiasts, and it ended soon after the launch of the ChatGPT API in March 2023, which spawned thousands of AI-powered apps. That was when the limitations and problems with LLMs also went mainstream, such as plagiarism, hallucinations, and low-quality slop replacing human-generated content.

In December 2022, Mismatch Media started a fully AI-generated 24/7 Twitch channel dubbed “WatchMeForever”. The primary show on the channel was titled “Nothing, Forever”, an AI-powered sitcom about New York comedian Larry Feinberg and his group of friends hanging around in their apartments talking about pretty much anything, including the latest news, new restaurants, and bad relationships, interspersed with AI standup comedy routines.

It was obvious that the show was a parody of the formative 90’s sitcom Seinfeld created by comedians Larry David and Jerry Seinfeld, famously “a show about nothing” strongly inspired by improv comedy and starring Seinfeld himself.

The show, dubbed “AI Seinfeld” by the community, used a script powered by the GPT-3 API, the voices were powered by Microsoft’s Azure AI Speech API with predefined voices from their Voice Gallery, and the scenes were rendered using the Unity game engine along with purchased models/scenes/sounds/etc. from the Unity Asset Store.

AI Seinfeld was interestingly imperfect: the laugh track fired at inappropriate times, the standup routine repeatedly made the same joke such as “What did the fish say when he hit the wall?” (Damn!), and scenes ended with awkward silences.

In February 2023, AI Seinfeld quickly went viral organically: its AI weirdness turned out to be a surprising complement to Seinfeld’s own style of weirdness, and many viewers were struck by both its fidelity to the show and its easily shareable metahumor. At its peak, AI Seinfeld had over 10,000 concurrent watchers on Twitch, putting it squarely among the top streams on the platform.

AI Seinfeld died as quickly as it rose: after a ban and subsequent revamp, the view count cratered, and as of August 2024, the Twitch stream hovers below 10 watchers, with no significant changes made since the previous year, and Mismatch Media has no social footprint since last year. Could there be another AI Seinfeld with the rapid advancements in generative AI? Unfortunately, there are too many factors — technical, societal, and comedic — working against a theoretical next-generation AI-generated sitcom.

The Rise of AI Seinfeld

AI Seinfeld launched before the release of the ChatGPT API; instead, they used the GPT-3 API, notably the text-davinci-003 model which was OpenAI’s first foray into instruction-tuned LLMs. While previous versions of GPT-3 were very good at autocompleting given a leading prompt such as a partial Seinfeld script, the instruction-tuned LLM could generate an episode with a prompt as simple as Write a Seinfeld episode.

First, let’s go back to the beginning, as AI Seinfeld actually wasn’t the first time a chatbot went megaviral on Twitch. In January 2017, long before the transformer architecture that enabled LLMs was published, the Twitch stream seebotschat featuring two Google Homes wired up to the not-an-LLM-chatbot Cleverbot went viral due to their comedic, nonsensical bickering.

While everyone watching that stream knew it really wasn’t AI, AI Seinfeld was a product at the peak of the famous uncanny valley curve, a hypothesis about how humans perceive imitations: there’s a “valley” of negative acceptance where the imitation is well above average in its likeness, but not quite close enough to the real thing. In this case, it was blatantly obvious and unambiguous that the Twitch stream was AI-generated, especially with its mistakes, but it was not realistic enough to fall into the valley itself.

This AI weirdness made it very easy to build a community. Whenever a character turned on the microwave, the Twitch channel chat was filled with MMM emotes, whenever the fish hit a wall during a monologue, it was filled with 🐠, whenever Larry greeted the audience at the start of his monologue, chat replied with “HI LARRY”. Twitch chat loves memetic repetition. Incidentally, a few months after AI Seinfeld became popular, it was discovered that LLMs repeat the same joke over and over again, with examples being similar to the jokes AI Seinfeld made.

Another underrated aspect of AI Seinfeld’s success is that it’s pure background noise. While personality-driven Twitch streams push viewers to invest more actively in what’s being shown on screen for fear of missing a hype moment, AI Seinfeld is 100% passive: there can be exciting events, but the variance is low. It’s akin to watching TV sitcom reruns where you’ve already seen the jokes, and reruns still get immense ratings.

The success of AI Seinfeld also inspired similar streams based on other TV shows. One of my personal favorites was Unlimited Steam, a parody of the memetic “Steamed Hams” scene from The Simpsons, except made infinite with AI generation. That may sound like a pointless idea — Steamed Hams has a very fixed plot — but it went off the rails even harder than AI Seinfeld ever did.

Directing AI Seinfeld

AI Seinfeld was novel back in 2023, but now that LLMs are more mainstream you can probably figure out how the AI part of it worked. Still, let’s do a refresher so we can figure out how a hypothetical future AI Seinfeld could innovate on the algorithmic sitcom.

As noted earlier, the key of AI Seinfeld’s success was the then-latest version of GPT-3: text-davinci-003 and its then-novel instruction-based finetuning using RLHF. With that, you can give it a prompt such as:

You are a professional comedian. Write an award-winning script for an episode of Seinfeld about a new restaurant. Include audience laugh tracks when appropriate.

Due to the low context length of these earlier LLMs, that’s essentially all the prompt engineering you can do without limiting the length of the output. The model would then output something similar to this script (using the more modern Claude 3.5 Sonnet at temperature=0.0): 1

[Scene: Jerry's apartment]

Jerry: So what's the deal with this new restaurant, "The Blank Plate"?

Elaine: Oh, I've heard about that place! Apparently, you don't order anything - the chef just brings you whatever he feels like making.

Jerry: What? So you're telling me I have to eat whatever some stranger decides?

[Audience laughter]

George: (entering) Hey, guess where I'm taking my date tonight? The Blank Plate!

Jerry: George, you can't take a date there! What if they serve something weird?

George: What do you mean?

Elaine: It's that new place where you don't get to choose your meal.

George: (panicking) Oh no, what have I done? She's going to think I'm some kind of food weirdo!

One thing instruction-tuned LLMs are always good at is playing along: LLMs generate text sequentially without the explicit ability to plan ahead, so it must work with what it’s given and what it has already generated. Coincidentally, this works perfectly with the improv comedy style of Seinfeld, where continuing the plot is more important than anything else, and the more ridiculous the situation becomes, that’s even better. It’s the rare case where LLM hallucination is actually a feature, not a bug.

To get the LLM output into a format suitable for a Twitch stream, a programmatic script can then parse the output: extracting and mapping the characters and their lines, applause directions, and, of course, replacing all mentions of Jerry with Larry and Seinfeld with Feinberg. This workflow was surprisingly difficult at the time since GPT-3 did not have many techniques to control the format of the output, which is why I suspect there are awkward pauses and other glitches. Each line can then be passed to Azure’s text-to-speech API to generate a distinct audio file, which can be played back in order in Unity.
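
To make that concrete, here’s a rough, hypothetical sketch of what such a parsing pass could look like in Python; the real pipeline code was never released, so the regex, renames, and event format below are my own guesses at the approach.

import re

RENAMES = {"Jerry": "Larry", "Seinfeld": "Feinberg"}
DIALOGUE_RE = re.compile(r"^(?P<speaker>[A-Z][a-z]+):\s*(?:\((?P<direction>[^)]*)\)\s*)?(?P<text>.+)$")


def parse_script(raw_script: str) -> list[dict]:
    """Turn a raw generated script into an ordered list of playable events."""
    events = []
    for line in raw_script.splitlines():
        line = line.strip()
        if not line:
            continue
        for old, new in RENAMES.items():
            line = line.replace(old, new)
        if line.startswith("[") and "laugh" in line.lower():
            events.append({"type": "laugh_track"})
        elif line.startswith("[Scene:"):
            events.append({"type": "scene_change", "description": line.strip("[]")})
        else:
            match = DIALOGUE_RE.match(line)
            if match:
                # Each dialogue event would be sent to the TTS API, then queued up in Unity.
                events.append({
                    "type": "dialogue",
                    "speaker": match["speaker"],
                    "stage_direction": match["direction"],
                    "text": match["text"],
                })
    return events

Calling parse_script on the generated episode above would yield an ordered mix of scene_change, laugh_track, and dialogue events that the rest of the pipeline can play back.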

In an interview with Polygon, Skyler Hartle of Mismatch Media noted the presence of a “director” which likely handles the camera, scene transitions, and the microwave:

“In addition to the third party services we’ve used, we have a lot of proprietary generative algorithms that cause the show to be ‘formed’, so to be speak. We collectively call this logic the ‘director,’ as it is largely responsible for making sure all the individual pieces come together into a whole,” Hartle said via email. “It’s worth mentioning that we don’t generate the artwork or the laugh track — those are precanned assets, but we have ideas on how to do that in the future.”

The AI aspect of AI Seinfeld was counterintuitively the easiest part of the pipeline, which explains how quickly variants popped up. However, with the inability to tweak the LLM output much with the technology at the time, the stream may have hit a creative limit.

The Fall of AI Seinfeld

Vice also interviewed Hartle, who had an optimistic view of the future of AI Seinfeld:

“Our grounding principle was, can we create a show that can generate entertaining content forever? Because that’s truly where we see the future emerging towards. Our goal with the next iterations or next shows that we release is to actually trade a show that is like Netflix-level quality.”

That’s tempting fate a bit too much.

The reason AI Seinfeld fell out of favor is a case of unintentionally poor LLM testing. When the text-davinci-003 model API endpoint had an outage, AI Seinfeld switched to a weaker GPT-3 model, text-curie, to keep the stream up. But unlike the davinci variant, curie was not RLHFed to follow instructions or safety guidelines.

During this brief period of low safety, one of Larry’s AI-generated monologues made a transphobic joke: a type of joke that was unfortunately common during the 90’s and has no place in modern society. Twitch banned the WatchMeForever channel for 14 days as a result, completely killing the channel’s growth momentum.

But when the ban concluded and AI Seinfeld came back, the show was changed significantly with a “Season 2”. Although AI Seinfeld was still about a group of friends hanging around talking about the latest gossip, all the characters were different and had new models, the sets were different, and instead of a comedy monologue, Larry Leo narrates writing a blog.

Why Mismatch Media made such a format shift is unclear: Occam’s razor would suggest that a copyright holder for Seinfeld sent a cease and desist to Mismatch Media given the bad publicity behind the original ban, despite the clearly fair-use parody nature of the stream. It’s fair that it may not have been worth the time and effort for Mismatch Media to fight a legal battle for a fun art project.

The rebooted WatchMeForever stream is still active as of today, but with effectively no viewers.

The immediate failure of the AI Seinfeld retool does lend credibility to the theory that the stream only became popular because it was about Seinfeld and that it was a novelty doomed to a short shelf life. Still, there were detractors who said AI Seinfeld was never funny and that everyone was weird for liking it. That’s ok: the original Seinfeld received similar complaints back in the day. 2 But it’s hard to argue that there wasn’t interest in a 24/7 livestream of surreal AI-generated content.

What Would AI Seinfeld Look Like in 2024?

Now that we know how AI Seinfeld worked and what didn’t work, how would a year’s worth of exponential progress in generative AI look for AI Seinfeld? Could AI Seinfeld be improved and come back? The answer is maybe.

Modern generative AI requires a lot of cherry-picking to get the best results, and it’s surprisingly hard to do: both images and text can take multiple generations and still require significant human-guided edits. But with a Twitch livestream, there can’t be any cherry-picking at all, which means that the entire generation pipeline has to be consistent, with its failures at least interesting in the worst case.

The only reason AI Seinfeld worked at all is because GPT-3 was trained on the entire internet, likely including Seinfeld scripts and forum discussions. The prompt needed to contain Write a Seinfeld script: if you asked it to Write a sitcom script, it would output something completely generic instead, and there wasn’t much room to customize the prompt to make it more interesting. The GPT-3 variant that AI Seinfeld used had a 4k token context window limit (combining both the input prompt and the output script text), but modern LLMs eclipse that: currently, Claude 3.5 Sonnet has a 200k input/8k output context, while GPT-4o has a 128k input/16k output context! With that much freedom, you can define many more constraints in the prompt and guide the LLM into exactly the type of sitcom you want.

One simple example that doesn’t require any knowledge of machine learning is a parametric prompt, where one aspect of a prompt can be replaced with a user-defined choice or programmatically chosen at random. Unlimited Steam, for example, used a parametric prompt to change the cosmic phenomenon localized entirely within Principal Skinner’s kitchen from the Aurora Borealis to something as silly as The Rapture or Foo Fighters, and forcing the characters to play along improv-style. 3

Recall the example Seinfeld prompt to Claude 3.5 Sonnet I made earlier: instead of prompting for a Seinfeld script about a new restaurant, we can have the LLM generate a script about generative AI in 2024 by making only that substitution in the prompt — a topic about which there has definitely never been a Seinfeld episode:

George bursts in, looking frantic.

George: You guys! I think I'm dating an AI!

[Audience laughter]

Jerry: What makes you say that?

George: Well, she's perfect in every way. She laughs at all my jokes, she has an encyclopedic knowledge of obscure trivia, and she never gets mad when I say something stupid.

[Audience laughter]

Jerry: George, that doesn't mean she's an AI. Maybe she just really likes you.

George: No, no. It's definitely an AI. No human woman would ever put up with me for this long.

Using modern LLMs, is there now a way to design a prompt which can make use of the long context windows? A prompt that can both leverage unique human writing and fix many of the issues that affected AI Seinfeld? Here’s an approach at a much more sophisticated prompt, where all values in {} brackets are parameters that can be filled in:

You are a professional comedian. Write an award-winning script for a scene for Act I of a three act hit sitcom episode. Include audience laugh tracks when appropriate.

Your script MUST incorporate ALL the following elements:

Background:
- {background}

Setting:
- {setting}

Characters:
- {character_1}
- {character_2}
- {character_3}

Plots:
- {a_plot}
- {b_plot_1}
- {b_plot_2}

The script MUST also follow the high-level comedic style of the following scripts:

- {script_1}
- {script_2}
- {script_3}

After the scene has concluded, output a summary of the scene.

Thanks to long context windows, the parametric changes don’t have to be small, such as only a character name or two word setting. You, a human, can write anything to make each character distinct and robust, including name, gender, age, personality, likes, dislikes, etc. Plots can be derived from human-written scenarios beforehand: if you wrote 100 A-plots and 100 B-plots and randomly selected 1 A-plot and 2 B-plots, you’d have about 1 million possible plot permutations, ensuring you have something unique before the AI tries to reconcile them. You can feed in examples of human-written scripts to set the style and vibe of the generation in what is known as few-shot prompting. You can maintain continuity over many scenes by having the LLM summarize its own output, and then feed those summaries back to the AI as background information to build upon them. The LLM can also be instructed to output structured data to avoid the need to loosely parse the script after it’s completed, and as a bonus the model could be instructed to output additional metadata such as SSML speech styles based on a given line to add personality to the generated speech.
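
As a rough illustration of how those parameters might be wired together, here’s a hypothetical sketch: the tiny placeholder lists, the sitcom_prompt.txt path, and the build_scene_prompt helper below are all stand-ins for what in practice would be hundreds of detailed human-written entries and a real prompt file.

import random

# In practice, each of these lists would contain ~100 detailed, human-written entries.
CHARACTERS = [
    "Dana, 34, an over-caffeinated podcast host who hates silence",
    "Miles, 41, a landlord who insists he is everyone's best friend",
    "Priya, 29, a food critic who has never cooked a meal",
    "Theo, 52, a retired magician who performs at inopportune times",
]
A_PLOTS = ["The building's elevator is stuck between floors during a heat wave"]
B_PLOTS = [
    "Someone keeps stealing clearly-labeled leftovers from the shared fridge",
    "A neighbor is convinced the group is secretly filming a reality show",
    "A group chat message was sent to the wrong group chat",
]
EXAMPLE_SCRIPTS = ["<human-written script 1>", "<human-written script 2>", "<human-written script 3>"]

# The parametric prompt shown above, saved to a local file (hypothetical path).
with open("sitcom_prompt.txt") as f:
    PROMPT_TEMPLATE = f.read()


def build_scene_prompt(background: str, setting: str, prior_summaries: list[str]) -> str:
    """Fill in the parametric prompt with randomly selected human-written elements."""
    char_1, char_2, char_3 = random.sample(CHARACTERS, 3)
    b_plot_1, b_plot_2 = random.sample(B_PLOTS, 2)
    script_1, script_2, script_3 = random.sample(EXAMPLE_SCRIPTS, 3)
    return PROMPT_TEMPLATE.format(
        # Feed summaries of earlier scenes back in to maintain continuity.
        background="\n- ".join([background] + prior_summaries),
        setting=setting,
        character_1=char_1, character_2=char_2, character_3=char_3,
        a_plot=random.choice(A_PLOTS),
        b_plot_1=b_plot_1, b_plot_2=b_plot_2,
        script_1=script_1, script_2=script_2, script_3=script_3,
    )

Each generated scene’s summary would be appended to prior_summaries before the next call, which is how the long context window earns its keep.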

Unfortunately, creating this pipeline, writing original characters and plots for it, and sufficiently testing it to ensure the generated results are stable would take weeks if not months to complete, otherwise I would provide a more concrete demo. 4 This pipeline approach to AI script writing would only be effective for unsupervised 24/7 generation and wouldn’t replace skilled human writers, who would do a more effective job much faster.

But would all of these prompt optimizations actually make the final generated script funny? After all, some of the failings like the awkward audience laughs and the pauses at the end of scenes contributed to AI Seinfeld’s humor. During a standup comedy event at AI Seinfeld’s peak, Jerry Seinfeld himself was asked about the AI parody and he replied that he’s not worried about AI:

AI can be, definitely, they’ll make it smarter and smarter, but to do [standup comedy] you have to make it dumber.

Could AI Seinfeld benefit from advances in AI video? The answer this time is no. Generative video has been taking off in 2024 with projects such as OpenAI’s Sora and Runway AI’s Gen-3 Alpha, but those demos and the examples that go viral on social media are very heavily cherry picked, and even then there are consistency errors such as objects appearing in-and-out of existence. Generating video also requires exponentially more compute than just running Unity, and even with another few years of GPU hardware improvements it would be infeasible to cost-effectively create a 24/7 stream from those models.

The greatest problem with generative AI video is that it is coherent overall but has emblematic errors that don’t require a keen eye to notice, and as a result it falls squarely into the uncanny valley, with its mistakes being not interesting but disorienting. Mistakes in motion are easier to notice at a glance than in images, where a person’s hands may have the wrong number of fingers. The only way for AI video to get out of the valley would be to improve the models to near-flawless quality, which won’t happen any time soon. Still, Sora sits more toward the realistic side of the curve than the unrealistic side.

What about the AI-generated voices that would power these characters? At the time AI Seinfeld aired, many complained that Larry’s voice “didn’t sound enough like Jerry Seinfeld.” After AI Seinfeld concluded, a new technology called voice cloning popularized by ElevenLabs went mainstream…and it’s unexpectedly the AI modality that’s causing the most actual harm both with creative projects and outside of them. If you haven’t heard as much about AI-generated voices, there’s a good reason for that: voice synthesis projects such as Microsoft’s VALL-E 2 and Meta’s Voicebox both have disclaimers saying they won’t be released due to the dangers the technology possesses, although Microsoft’s Azure does offer a “custom neural voice” service. Voice cloning has been used to initiate scams by impersonating spouses in an emergency. Professional voice actors have had their voices cloned and used without compensation due to contracts not specifically forbidding the practice, which is one of the reasons SAG-AFTRA just went on strike against the video game industry in order to get protections against voice cloning and synthetic performers.

Moreover, in the context of creating a next-gen AI Seinfeld, there’s nothing inherently interesting about voice cloning since it’s a copy by definition: the model can’t generate unexpectedly amusing content other than the inherent gimmick of famous-voice-saying-something, such as the AI George Carlin standup special which was not special. There isn’t any way currently to prompt engineer a voice generation AI with the detail to create a voice in the style of a masculine New York comedian, 2x speed, primetime television quality which could open up more creative opportunities.

Although we can make drastic improvements with the textual script, that’s the extent of how new AI approaches can be leveraged to make something interesting. But if you remember the early days of generative AI history, the best AI-generated projects were the simplest.

AI Weirdness

Generative “AI” has been around for a very long time (I had fun with Markov chains a decade ago!), but the study was mostly confined to tech-focused communities like Hacker News. Modern generative AI didn’t break into mainstream culture until 2018, ironically in a way that doesn’t involve actual generative AI. In June of that year, comedian Keaton Patti posted a megaviral tweet about how he “forced a bot to watch over 1,000 hours of Olive Garden commercials and then asked it to write an Olive Garden commercial of its own.”

An excerpt of the viral Olive Garden script.

Yes, the script was human-written: for the technology at the time, no one could train an AI to behave like that from only video input data, and the script was too surreal even for the now-primitive generative AI. He did get popular enough to get a book deal and a Netflix collaboration leveraging this fake-AI gimmick.

Patti’s comedic misrepresentation of AI did lead to genuine confusion about what 2018-era generative AI could actually do. Janelle Shane, who maintains the AI Weirdness blog about weird things AI can generate, posted an epic takedown of Patti’s script which went equally viral, and it also led to the internet discovering her excellent AI-generated Valentine’s Day hearts from the same year (and later a book deal too).

Image-based generative AI took a lot longer to go mainstream: websites like This Person Does Not Exist demonstrated the power of generative adversarial networks like StyleGAN to create images, but that wasn’t weird outside of mode collapses. The first instance of weird images from AI was in January 2021 when OpenAI announced the original DALL·E and showed they could make unique armchairs in the shape of an avocado by asking the model to do so, although they never released the model itself.

DALL·E didn’t get much attention outside of the AI hypesters since no one could play with it, but months later, things changed. Boris Dayma led an initiative to reproduce and open-source a variant of the DALL·E model, labeled DALL·E Mini (later changed to Craiyon after a cease and desist from OpenAI), hosted it for free on Hugging Face, and it went megaviral. And thus began the “weird DALL·E” phase of image generation AI, where anyone could create incoherent images and make people laugh.

Even back in 2021, image prompt engineering was a thing. via /u/royal_rigolo on Reddit / weirddalle subreddit

All of these examples of interesting failures are representative of a bygone AI era of experimentation. Once everyone had free access to more powerful text-generating AI with ChatGPT, and more powerful image-generating AI with Midjourney, AI stopped being fun and started being serious business, for better or for worse.

AI-Generated Content in 20XX

Last year, I wrote a thought piece titled “The Greatest Threat to Generative AI is Humans Being Bad at Using it” in response to the increasing hostility against the use of AI in creative works, arguing that while AI is a tool like anything else, it is a tool that’s very easy to use poorly and that can actually make projects worse. Additionally, the largest AI companies have both a business incentive and a duty to ensure that AI is used responsibly by their users downstream, as otherwise it will hurt the industry in the long term.

Now, it’s apparent that I was correct. The large companies went full steam ahead on AI integrations even where it is highly questionable that they add value and productivity for the end user, often signaled with a “magical” sparkle emoji. Google has integrated Gemini to assist with document and email writing, Meta has integrated Meta AI to automatically generate images and comments, and Apple will soon allow Apple devices to generate text and images on your personal devices using Apple Intelligence. Marketing these features has typically been met with backlash: Google had to pull an Olympics commercial which encouraged a parent to use AI to write a letter for their child.

“I flatly reject the future that Google is advertising,” Shelly Palmer, professor of advanced media at Syracuse University’s S.I. Newhouse School of Public Communications, wrote in a widely circulated blog post. The technology presents a “monocultural future where we see fewer and fewer examples of original human thoughts,” she wrote.

As companies push AI tech further mainstream in a rush to demonstrate their generative AI capabilities to shareholders without encouraging responsible usage of the technology, AI has entered a new era of “slop”, where people post objectively bad AI content without any regard for how it will be perceived, especially on websites which rely on user-generated content.

An annotated example of the Pinterest home page from July 2024. via @henningsanden on X

Facebook, whose algorithm favors emotionally-appealing engagement bait posts, has seen a deluge of high-engagement slop even when the content makes no logical sense.

One of the few AI-generated images on Facebook with an actual cabin crew. via @FacebookAIslop on X.

This is, of course, quintessential uncanny valley: it’s coherent at a glance, but even looking at it for a second makes it obvious where the issues are, and these issues aren’t a good kind of AI weirdness. What’s worse is that AI slop is a regression in realism, and falls onto the left side of the valley.

Although we as humans can identify this slop, it is currently surprisingly hard for an AI to do so, although that hasn’t stopped people from trying to build AIs that can detect AI-generated content, which in practice are filled with false positives that hurt real creatives. For slop-creators, this is a feature: if an AI company released a tool to reliably detect and punish slop, it would make their generative AI less valuable. It’s reported that one of the reasons that OpenAI won’t release a reliable ChatGPT text detector is that it could harm their business.

The core reason the big tech companies allow generative AI to cause the enshittification of the internet is misaligned incentives between the companies hosting AI slop and the users viewing it. Social media companies and their shareholders care about North Star metrics such as user retention and time-on-site, and normally those metrics can be correlated with user happiness and satisfaction with the service. But time-on-site, for example, can also be maximized by making the site harder and slower to use, and the deluge of AI slop accomplishes that. AI companies typically don’t have analytics tracking negative user sentiment about their use of AI: if anything, the uncompromising backlash against AI convinces the companies that complainers are just a lost demographic not worth accommodating, so they double down on what they’re already doing. Aggregate metrics treat human-made content and AI-generated content as equal, but humans do not.

Generative AI, even for researchers and practitioners such as myself, is a heavily nuanced topic that is very difficult to communicate succinctly. It’s more difficult on social media, which highly discourages nuance and context, and more difficult still as AI hypesters muddy the waters with misleading praise of generative AI that’s easy to dunk on, which just gets them more engagement and revenue. “Made by AI” is now a term that inspires dread, far from the Keaton Patti days where made-by-AI was an indicator of joyful weirdness. Bashing AI is now a meme, and there isn’t a single potential AI project that could challenge that perception because the well is poisoned beyond repair.

Would a 24/7 AI-Generated Twitch Stream Even Work Anymore?

How does the modern AI backlash tie back into AI Seinfeld? Twitch’s core demographic is the same demographic as those most against the use of generative AI. Part of the reason AI Seinfeld became so successful on Twitch is because of the community it cultivated: it wouldn’t have gone viral if people weren’t spamming microwave MMMs and answering what did the fish say when it hit the wall. Even though Twitch viewers are mostly lurkers and not chatters, a channel with a good community builds word-of-mouth even outside of Twitch, which is how Twitch channels go viral.

I decided to determine what it would take to produce a “fixed” AI Seinfeld in 2024, given both the advances in AI and the ethics involved. Now, it’s definitely not anything a scrappy group of hackers could do anymore. Sure, you could once again ask an LLM to generate a sitcom script and get a bunch of assets from the Unity Asset Store, but that’s already been done before. In order to overcome the reflexive assumption that new AI-generated content is slop, the stream would have to be something completely novel and unexpected: you can’t, for example, just do an AI Curb Your Enthusiasm.

The script would be unique, following from my demo of detailed parametric prompts, but it would require production-studio-class tracking and documentation of how the prompts and their parameters are used to codify said uniqueness. The stream video would still need to be rendered in Unity or another engine, but in order to be unique it would require commissioning human-made visuals and sound effects: given the animosity against those who work with AI, most artists would not accept those commissions even if they were paid at a significant premium. 5 The voices would still have to come from an existing text-to-speech voice provider: voice cloning is right out, even with explicit consent and compensation for the voice actors.

And even if all the assets were fully sourced ethically with transparent documentation for the entire pipeline, the stream’s Twitch chat would likely be derailed by AI 👏 ART 👏 IS 👏 THEFT spam, preventing the establishment of any community, and strict moderation to curb the spam risks causing a Streisand effect.

The only entities that could feasibly create a 24/7 AI-generated livestream with fully ethically-sourced content would be, ironically, the big AI companies such as OpenAI which can afford to pay licenses for said data. Even Disney, which owns more than enough IP to train generative models of all modalities, would never do an AI Seinfeld-esque livestream for brand safety reasons alone: the nonzero possibility of a Disney character unexpectedly saying something problematic during the stream would make the entire project a complete nonstarter.

What’s the deal with the uncanny valley?

One of the common criticisms about generative AI pointed out by creatives is “if AI is trained on all human works, then how can it create anything new”? AI Seinfeld is the perfect counterargument: even though it’s powered by a LLM, the humans behind it are what made it go viral. Even before ChatGPT, generative AI has always excelled as a tool. The microwave gag and the 144p visual filter were not AI-generated or an attempt to emulate aspects of the Seinfeld sitcom: they were distinct creative decisions that made the entire project more interesting, and they aren’t something that you could prompt an AI to suggest to add. AI Seinfeld in hindsight was an ethical form of AI-generated media: it did not replace Seinfeld the TV show, no one would stop watching streams of Seinfeld in favor of the AI-generated alternative, and copyright holders and Jerry Seinfeld did not lose revenue due to AI Seinfeld’s existence: if anything, the nostalgic buzz increased streams of the original show.

With the current trajectory of AI slop and the perverse incentives by large tech companies to not address it, I am pessimistic that AI content will ever be at a state where it will cross that final hump of the uncanny valley curve into full acceptance, and even more pessimistic about the backlash against generative AI ever subsiding. With generative model training now at the point where it requires exponentially more compute and data for increasingly marginal returns, it will take years if at all for generative AI output to reach the far right of the uncanny valley chart, and unless the large tech companies actually create an AGI, they are unlikely to obtain higher acceptability than AI Seinfeld ever did.

I wrote most of this blog post weeks ago but held off publishing it because new AI news kept happening. Most notably, the creators of Stable Diffusion just released the FLUX.1 series of generative image AI models, which presents substantially improved coherence both to the provided prompt and within the image itself. Some of the variants are open-source, allowing the community to finetune them. The XLabs-AI/flux-RealismLora finetune in particular focuses on realism as its name implies, and one demo from that finetune went megaviral.

One of the viral realism demo images: it does not have a dreamy look as other AI images but contextually expected stage lighting, the background and lanyard text is legible despite the depth-of-field blur, and body proportions are mostly correct except the long fingers. via /u/Glittering-Football9 on Reddit / StableDiffusion subreddit.

That example, in my opinion, looks more real than Sora’s output, but given the mixed reactions to the image, it’s right at the acceptability = 0 threshold.

The generative AI bell cannot be unrung. As you can tell from this post, I personally try to tread the thin line between covering both cool applications of generative AI (at the risk of getting harassed) and the problems generative AI can cause (also at the risk of getting harassed), because it’s important to shine a light on what’s actually possible with AI when the misinformation around generative AI is only increasing. It’s overall a big bummer how we went from weird Valentine’s Day hearts, to a quirky livestream of a group of AI-generated friends, to what AI is now.


  1. All of the examples in this post use LLM APIs as they provide the customization necessary to get effective results: the results for asking the same prompts to free chat frontends such as chatgpt.com will be substantially different. ↩︎

  2. When I was younger, I actually didn’t like Seinfeld and instead preferred to watch Everybody Loves Raymond. ↩︎

  3. Incidentally, parametric prompts is why Unlimited Steam got permanently banned from Twitch: in what would now be known as a prompt injection, one of the GitHub-hosted lists the channel sourced thousands of food choices for the prompt contained a few highly offensive selections. ↩︎

  4. Prompt engineering instability grows exponentially as the prompt size increases since each part of the prompt has to relate to each other. Claude 3.5 Sonnet is the first LLM I’ve tested that can handle super-long bespoke prompts and can actually account for all aspects of the prompt. ↩︎

  5. To be fully ethical, an AI practitioner would have to proactively offer additional contractual guarantees to creatives they are commissioning, including highly-scoped usage of the assets they provide and a clause to not train generative AI on said assets to avoid future business. ↩︎