2025-03-24 08:00:00
A tiny tool to calculate when your baby might arrive
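I haven't looked under the hood of this particular tool, but the standard arithmetic behind most due date estimates is Naegele's rule: the first day of the last menstrual period plus 280 days (40 weeks). A minimal sketch in Python (the tool itself may well do something fancier):

```python
# A minimal sketch of Naegele's rule: due date ≈ first day of the
# last menstrual period + 280 days (40 weeks).
from datetime import date, timedelta

def estimated_due_date(last_period: date) -> date:
    return last_period + timedelta(days=280)

print(estimated_due_date(date(2024, 6, 1)))  # -> 2025-03-08
```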
2025-03-14 00:30:58
Revealed: How the UK tech secretary uses ChatGPT for policy advice
The New Scientist used freedom of information laws to get the ChatGPT records of the UK's technology secretary.
The headline hints at a damning exposé, but the story ends up being about a politician making pretty reasonable and sensible use of language models to be more informed and make better policy decisions.
He asked it why small business owners are slow to adopt AI, which popular podcasts he should appear on, and to define terms like antimatter and digital inclusion.
This all seems extremely fine to me. Perhaps my standards for politicians are too low, but I assume they don't actually know much and rely heavily on advisors to define terms for them and decide on policy improvements. And I think ChatGPT connected to some grounded sources would be a decent policy advisor. Better than most human policy advisors. At least when it comes to consistency, rapidly searching and synthesising lots of documents, and avoiding personal bias. Models still carry the bias of their creators, but it all becomes a trade-off between human flaws and model flaws.
Claiming language models should have anything to do with national governance feels slightly insane. But we're also sitting in a moment where Trump and Musk are implementing policies that trigger trade wars and crash the U.S. economy. And I have to think "What if we just put Claude in charge?"
I joke. Kind of.
2025-03-05 08:00:00
Well, I've had a dramatic start to the year.
Normally – the design agency I joined a short eight months ago – unexpectedly closed down in January. Despite running for a decade and working with almost every major tech company, client work slowed down and the founders decided to close up shop.
It's been a sad time. Everyone I worked with there was exceptionally talented and kind. I'm thankful I got to build with them for a short while.
I was already due to start maternity leave in March, so Normally closing just moved that date up a bit. But I managed to fit in a couple of months of work with Deep Mirror before taking my baby break. They're a London-based startup using machine learning to speed up the drug discovery process, specifically by helping medicinal chemists generate ideas for new molecules.
While I was completely new to the field of drug discovery, many of the design challenges echoed the ones I'd worked on with Elicit – complex research workflows, information-dense interfaces, and making the inner workings of models and their reasoning process visible to users. I've learned I like this shape of work: AI/ML tools designed to help scientific researchers who have high standards and need to thoroughly understand how models “reason” and how answers are generated. It's fertile ground for responsible AI interface design.
My baby break has now started. Only two weeks remain until the new human arrives. A terrifyingly short timeline. Luckily, the excitement of meeting our child and the physical discomfort of late pregnancy outweigh any fears about birth or the impending marathon of sleep deprivation. I'd happily start labour tomorrow if I had any say in the matter.
Given that I won't be in a 9-5 job for the next six months, I've stocked up on new books. Though it's naïve to think I'll have the mental capacity to read any of them in between baby feedings and waking up a dozen times a night. But one can hope. I've added the full pile to my Antilibrary, but these are the ones I'm most excited about:
<a href="https://www.google.co.uk/books/edition/Soldiers_and_Kings/EzPBEAAAQBAJ"><strong>Soldiers and Kings: Survival and Hope in the World of Human Smuggling</strong></a> by Jason De Leon
This got my attention when it started popping up on all the “best of” ethnography lists in 2024, and then went on to win the National Book Award for non-fiction. I expect it to be a slightly intense read, but well-researched ethnographies are my favourite genre.
<a href="https://www.google.co.uk/books/edition/Cue_the_Sun/GObnEAAAQBAJ"><strong>Cue the Sun! The Invention of Reality TV</strong></a> by Emily Nussbaum
Like most of us, I have a love/hate/fascination/repulsion relationship with reality TV. I've watched my fair share of trash series, but will happily defend (most of) them as time well spent. They're always insightful windows into our collective value systems and cultural narratives, and I'm keen to read Nussbaum's critical take on the medium.
<a href="https://www.google.co.uk/books/edition/The_Invention_of_Nature/w1WNBQAAQBAJ"><strong>The Invention of Nature: The Adventures of Alexander von Humboldt</strong></a> by Andrea Wulf
Given my long-standing preoccupation with how we try to define and divide “nature” from “culture”, it's about time I did a bit more historical reading into the origins of this cultural dichotomy.
I've been using a bit of my pre-baby time to build as well. I added a new section to this garden called Smidgeons. These are teeny tiny posts: links with a bit of commentary, research papers I enjoyed, or one-liners that would otherwise go on Bluesky.
I'm also quite deep into a new research project and set of prototypes I'm calling Lodestone. It's an exploration of how language models might be able to get us to think more, not less. Specifically, I'm interested in whether models can enable me to be a better critical thinker and rigorous writer. Not by writing for me, but by guiding me through a well-defined process of understanding what claims I'm making, what evidence I have to support them, and how my argument structure fits together. I'm tackling it from a few angles, but here are some previews from the latest prototype:
The code is all open source on GitHub, though it'll evolve a lot from here. I'll publish more about it soon, but the ideas still feel early and my thesis is unproven. I'll wait until it all gels together a bit more.
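For a flavour of the kind of step I mean – and to be clear, this is an illustrative toy, not the actual Lodestone code – here's how you might ask a model to surface claims and evidence from a draft using the OpenAI Python SDK:

```python
# Illustrative toy only – not the Lodestone codebase. Assumes the
# OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

PROMPT = """Read the draft below. List each distinct claim the author
makes, and for each claim note what evidence (if any) the draft offers
to support it. Do not rewrite the draft or add your own arguments."""

def extract_claims(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```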
I should mention that starting this summer I'll be looking for a new role as a Design Engineer or technically-inclined Product Designer. I'm planning to be on maternity leave until early September, but I'm happy to start talking to companies, teams, and founders now if you think we could be a good fit. Just email hello at maggieappleton.com or DM me on Bluesky.
2025-02-20 19:42:03
We have a new(ish) benchmark, cutely named “Humanity's Last Exam.”
If you're not familiar with benchmarks, they're how we measure the capabilities of particular AI models like o1 or Claude 3.5 Sonnet. Each one is a standardised test designed to check a specific skill set.
For example: MMLU tests general knowledge across dozens of academic subjects, GSM8K tests grade-school maths word problems, and HumanEval tests whether generated Python code passes unit tests.
When you run a model on a benchmark it gets a score, which allows us to create leaderboards showing which model is currently the best for that test. To make scoring easy, the answers are usually formatted as multiple choice, true/false, or unit tests for programming tasks.
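To make that concrete, here's a minimal sketch of what a benchmark harness does under the hood – `query_model` is a hypothetical stand-in for whatever API you'd actually call:

```python
# A minimal sketch of multiple-choice benchmark scoring.
# `query_model` is a hypothetical stand-in for a real API call.

def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical: send the prompt to `model_name`, return its answer."""
    raise NotImplementedError

benchmark = [
    {
        "question": "Which planet in our solar system has the most moons?",
        "choices": ["A) Mars", "B) Saturn", "C) Venus", "D) Mercury"],
        "answer": "B",
    },
    # ...plus hundreds or thousands more questions
]

def score(model_name: str) -> float:
    correct = 0
    for item in benchmark:
        prompt = item["question"] + "\n" + "\n".join(item["choices"])
        reply = query_model(model_name, prompt)
        # Exact-match scoring: did the model pick the right letter?
        if reply.strip().upper().startswith(item["answer"]):
            correct += 1
    return correct / len(benchmark)  # the number that goes on the leaderboard
```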
Among the many problems with using benchmarks as a stand-in for “intelligence” (other than the fact they're multiple-choice standardised tests – do you think that's a reasonable measure of human capabilities in the real world?) is that our current benchmarks aren't hard enough.
New models routinely achieve 90%+ on the best ones we have. So there's a clear need for harder benchmarks to measure model performance against.
Hence, Humanity's Last Exam.
Made by Scale AI and the Center for AI Safety, the exam crowdsources "the hardest and broadest set of questions ever" from experts across domains – 2,700 questions at the moment, some of which are kept private to prevent future models training on the dataset and memorising answers ahead of time. Questions like this:
<img src="/images/smidgeons/last-exam-1.png" alt="Samples of the diverse and challenging questions submitted to Humanity's Last Exam." />
<img src="/images/smidgeons/last-exam-2.png" alt="Samples of the diverse and challenging questions submitted to Humanity's Last Exam." />
<img src="/images/smidgeons/last-exam-3.png" alt="Samples of the diverse and challenging questions submitted to Humanity's Last Exam." />
So far, it's doing its job well – the highest-scoring model is OpenAI's Deep Research at 26.6%, with other common models like GPT-4o, Grok, and Claude only getting 3–4% correct. Maybe it'll last a year before we have to design the next “last exam.”
When people make sweeping statements like “language models are bullshit machines” or “ChatGPT lies,” it usually tells me they're not seriously engaged in any kind of AI/ML work or productive discourse in this space.
First, because saying a machine “lies” or “bullshits” implies motivated intent in a social context, which language models don't have. Models doing statistical pattern matching aren't purposefully trying to deceive or manipulate their users.
And second, broad generalisations about the correctness, truthfulness, or usefulness of “AI” are meaningless outside of a specific context. Or rather, a specific model measured on a specific benchmark or reproducible test.
So, next time you hear someone making grand statements about AI capabilities (both critical and overhyped), ask: which model are they talking about? On what benchmark? With what prompting techniques? With what supporting infrastructure around the model? Everything is in the details, and the only way to be a sensible thinker in this space is to learn about the details.
2025-01-26 18:00:35
If you're not distressingly embedded in the torrent of AI news on Twixxer like I reluctantly am, you might not know what DeepSeek is yet. Bless you.
From what I've gathered: DeepSeek is a Chinese AI lab that has just released R1, an open-weights reasoning model that reportedly performs comparably to OpenAI's o1 on many benchmarks, was trained at a fraction of the usual cost, and shows its full chain-of-thought reasoning while it works.
<img src="/images/smidgeons/deepseek1.png" alt="DeepSeek R1 showing its thinking" />
TLDR: high-quality reasoning models are getting significantly cheaper and more open source. This means companies like Google, OpenAI, and Anthropic won't be able to maintain a monopoly on access to fast, cheap, good-quality reasoning. This is net good for everyone.
2025-01-12 22:52:12
Common Misconceptions About the Complexity in Robotics vs AI
A roboticist breaks down common misconceptions about what's hard and easy in robotics. A response to everyone asking “can't we just stick a large language model into its brain to make it more capable?”
Contrary to the assumptions of many people, making robots perceive and move through the world the way humans can turns out to be an extraordinarily hard problem to solve, while seemingly “hard” problems like scoring well on intelligence tests, winning at chess, and acing the GMAT turn out to be much easier.
Everyone thought it would be extremely hard and computationally expensive to teach computers language, and easy to teach them to identify objects visually. The opposite turned out to be true. This is known as Moravec's Paradox.
Especially liked the ending where Dan explores why people are so resistant to the idea that picking up a cup is more complex than solving logic puzzles. Partly anthropocentrism: humans are special because we can do higher-order thinking, while any lowly animal can sense the world and move through it. Partly social class bias: people who work manual labour jobs using their bodies are less valued than people who sit still using their intellect to solve problems.