
Understanding AI

By Timothy B. Lee, a tech reporter with a master's degree in computer science who covers AI progress and policy.

Tesla is (still) following in Waymo’s footsteps

2025-09-09 04:17:33

In the two and a half months since Tesla launched its robotaxi service in Austin, the company has made a flurry of follow-up announcements:

Read more

No, OpenAI is not doomed

2025-09-05 04:14:07

Thanks to everyone who submitted a question in response to my post a couple of weeks ago. Below are my answers to three great questions. I may answer some of the others in future articles.

OpenAI is going to be fine

OpenAI CEO Sam Altman. (Photo by Kevin Dietsch / Getty Images)

Nick McGreivy, the author of the excellent AI-for-science piece I published bac…

Read more

I chatted with the Argument's Kelsey Piper about AI and jobs

2025-08-29 04:06:29

Click the play button to watch the video!

New evidence strongly suggests AI is killing jobs for young programmers

2025-08-28 23:46:54

Last fall, I started hearing that demand for entry-level programmers was in free fall thanks to competition from AI. I decided to do some reporting to figure out if this was true.

One of my first calls was to Nicholas Bergson-Shilcock, CEO of the Recurse Center, which provides space for programmers to upgrade their skills and then connects them with potential employers.

“Starting in late 2022, it's been the worst market for hiring engineers since we started” in 2011, Bergson-Shilcock told me. He attributed the initial decline mainly to macroeconomic factors: the Fed raised interest rates in 2022 to ward off inflation, which led to a wave of Big Tech layoffs and less venture capital funding for startups.

When I talked to him in November, two years after that initial wave of layoffs, demand for programmers still hadn’t recovered. He thought this was “in part attributable to AI” because “companies, particularly early-stage companies that might have hired three to five people, now they're hiring one or two people and expecting them to get more done.”

As interesting as his perspective was, I didn’t feel comfortable writing about it without some hard data to back it up. And nobody could point me to clear evidence linking the weak market for programmers to AI adoption.

In May, I checked in with economists to see if there was new evidence of AI-driven job losses. But once again I didn’t feel like I’d gained enough clarity to merit a write-up.

Finally this week, I emailed Nicholas Bergson-Shilcock for an update.

“We've seen a big resurgence in 2025,” he responded. “The tech recruiting market is very much ‘back.’”

A couple of recent studies seemed to back this up. One from Stanford economist Bharat Chandar found that programmers had seen significant job growth over the last year. Another from the Economic Innovation Group found that across the economy, occupations that were more vulnerable to automation from generative AI did not seem to be suffering from higher unemployment rates or other negative outcomes.

So I decided to finally write an article about this topic. By Tuesday afternoon, I was putting the finishing touches on a piece arguing that the best available evidence suggested AI was probably not driving recent job losses in software development or other white-collar occupations—though I acknowledged there was quite a lot of uncertainty about this.

As I was getting ready to publish that article, I noticed that Chandar had just published another paper—this time with two colleagues. I started reading their study, and when I got to this chart, my heart sank. Because I immediately knew I’d have to re-write my article from scratch.

The dotted vertical line is set to November 2022, the month that ChatGPT was released. And the lines are normalized so that they’re all equal to 1 in that month.
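
To make that normalization concrete, here is a minimal sketch of the calculation (the numbers and column names are made up for illustration, not taken from the study): each employment series is divided by its own value in November 2022, so every line equals 1 in that month and later values show growth or decline relative to it.

    import pandas as pd

    # Hypothetical monthly headcount series for two groups of workers.
    # The actual study uses payroll data; these values only illustrate the step.
    data = pd.DataFrame(
        {
            "young_developers": [1050, 1000, 940, 900],
            "older_developers": [2000, 2000, 2010, 2030],
        },
        index=pd.PeriodIndex(["2022-10", "2022-11", "2022-12", "2023-01"], freq="M"),
    )

    # Divide every column by its value in the base month (ChatGPT's release),
    # so all series equal 1.0 in November 2022.
    base_month = pd.Period("2022-11", freq="M")
    normalized = data / data.loc[base_month]

    print(normalized)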

This chart doesn’t settle the debate over whether AI is undercutting demand for early-career programmers, but it gives us a much clearer picture of what’s happening. And that picture seems entirely consistent with the “blame AI” thesis.

Read more

I want to hear from you!

2025-08-22 00:33:59

If you are getting this email you are one of the 1,900 paying subscribers to Understanding AI. Thank you for enabling me to work on this newsletter full time with full editorial independence. I know this is a rare privilege, and I’m trying to deliver the best newsletter I can.

Read more

Is GPT-5 a "phenomenal" success or an "underwhelming" failure?

2025-08-15 03:53:48

It was inevitable that people would be disappointed with last week’s release of GPT-5. That’s not because OpenAI did a poor job, and it’s not even because OpenAI did anything in particular to hype up the new version. The problem was simply that OpenAI’s previous “major” model releases—GPT-2, GPT-3, and GPT-4—have been so consequential:

  • GPT-2 was the first language model that could write coherent sentences and paragraphs across a wide range of topics.

  • GPT-3 was the first language model that could be prompted to perform a wide range of tasks without retraining.

  • GPT-4 delivered such a dramatic performance gain that it took competitors a year to catch up.

So of course people had high expectations for GPT-5. And OpenAI seems to have worked hard to meet those expectations.

After OpenAI released GPT-4.5 back in February, I argued that you could think of it as the model everyone was expecting to be called GPT-5. It was a much larger model than GPT-4 and was trained with a lot more compute. Unfortunately, its performance was so disappointing that OpenAI called it GPT-4.5 instead. Sam Altman gave it a distinctly half-hearted introduction, calling it a “giant, expensive model” that “won’t crush benchmarks.”

OpenAI probably should have given the GPT-5 name to o1, the reasoning model OpenAI announced last September. That model really did deliver a dramatic performance improvement over previous models. It was followed by o3, which pushed this paradigm—based on reinforcement learning and long chains of thought—to new heights. But we haven't seen another big jump in performance over the last six months, suggesting that the reasoning paradigm may also be reaching a point of diminishing returns (though it’s hard to know for certain).

Regardless, OpenAI found itself in a tough spot in early 2025. It needed to release something it could call GPT-5, but it didn’t have anything that could meet the sky-high expectations that had developed around that name. So rather than using the GPT-5 name for a dramatically better model, it decided to use it to signal a reboot of ChatGPT as a product.

Sam Altman explained the new approach back in February. “We realize how complicated our model and product offerings have gotten,” Altman tweeted. “We hate the model picker as much as you do and want to return to magic unified intelligence.”

Altman added that “we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.”

This background helps to explain why the reactions to GPT-5 have been so varied. Smart, in-the-trenches technologists have praised the release, with Nathan Lambert calling it “phenomenal.” On the other hand, many AI pundits—especially those with a skeptical bent—have panned it. Gary Marcus, for example, called it “overdue, overhyped and underwhelming.”

The reality is that GPT-5 is a solid model (technically a suite of models, but we'll get to that) that performs as well as or better than anything else on the market today. In my own testing over the last week, I found GPT-5 to be the most capable model I've ever used. But it's not the kind of dramatic breakthrough people expected from the GPT-5 name. And it has some rough edges that OpenAI is still working to sand down.

A new product, not just a new model

When OpenAI released ChatGPT back in 2022, the organization was truly a research lab. It did have a commercial product—a version of GPT-3 developers could access via an API—but that product had not yet gained much traction. In 2022, OpenAI’s overwhelming focus was on cutting-edge research that advanced the capabilities of its models.

Things are different now. OpenAI still does a lot of research, of course. But it also runs the world's leading AI chatbot, with hundreds of millions of weekly active users. Indeed, some rankings show ChatGPT as the fifth most popular website in the world, beating out Wikipedia, Amazon, Reddit, and X.com.

So in building GPT-5, OpenAI executives were thinking not only about how to advance the model’s raw capabilities, but also how to make ChatGPT a more compelling product.

Read more