2026-04-30 16:08:34
In this era of powerful bug-finding tools, problems get discovered in large numbers and at high speed. This creates a problem for developers: keeping up with the growing list of issues is hard. Addressing a problem can take far longer than finding it, not to mention getting the fix into a release, and then it takes yet another extended time until users out in the wild actually get that updated version into their hands.
For tools to find many bugs fast, the bugs have to already exist in the source code. These new tools don't add or create the problems. They just find them, filter them out and bring them to the surface. A finer filter in the pool catches more rubbish.
The more bugs we fix, the fewer bugs remain in the code, assuming the developers manage to fix problems at a decent pace.
For every bugfix we merge, there is a risk that the change itself introduces one or more new, separate problems. We also keep adding features and changing behavior as we want to improve our products, and when doing so we occasionally slip up and introduce new problems as well.
Source code analysis tools are a concept as old as source code itself. There have always been tools that try to identify coding mistakes. They have just recently become good enough to find many more of them.
These new tools, like the old ones, don't find all the problems. Even the modern tools sometimes suggest fixes that are incomplete, and occasionally downright buggy.
Undoubtedly, code analyzer tooling will improve further. The tools of tomorrow will find even more bugs, some of which were not found when the current generation of tools scanned the code yesterday.
Of course, we now also introduce these tools in CI and general development pipelines, which should make us land better code with fewer mistakes going forward. Ideally.
If we assume that we fix bugs faster than we introduce new ones, and that the AI tools can improve further, the question becomes how much more they can improve and for how long that improvement can go on. Will the tools find 10% more bugs? 100%? 1000%? Is the improvement going to continue gradually for the next two, ten or fifty years? Can they actually find all bugs?
Can we reach the utopia where a given software project has no bugs left, and where, when we do merge a new one, it gets detected and fixed almost instantly?
If we assume that there is at least a theoretical chance to reach that point, how would we know when we reach it? Or even just if we are getting closer?
I propose that one way to measure if we are getting closer to zero bugs is to check the age of reported and fixed bugs. If the tools are this good, we should soon only be fixing bugs we introduced very recently.
In the curl project we don’t keep track of the age of regular bugs, but we do for vulnerabilities. The worst kind of bugs. If the tools can find almost all problems, they should soon only be finding very recently added vulnerabilities too. The age of new finds should plummet and go towards zero.
If newly reported vulnerabilities are getting younger, the average and median age of the total collection should go down over time.
The metric to watch is the average and median time vulnerabilities had existed in the curl source code by the time they were found and reported to the project.
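As a rough illustration of that metric, here is a minimal sketch of the computation. The data below is entirely made up for the example; the curl project publishes its real vulnerability data on its website, but the structure and field choices here are my own assumptions, not that format.

```python
from datetime import date
from statistics import mean, median

# Hypothetical example data: (date the bug was introduced, date it was reported).
# These entries are made up purely to illustrate the computation.
vulnerabilities = [
    (date(2016, 3, 14), date(2024, 1, 31)),
    (date(2021, 7, 2), date(2024, 6, 12)),
    (date(2023, 11, 20), date(2025, 2, 5)),
]

# Age of each vulnerability, in days, at the time it was reported.
ages = [(reported - introduced).days for introduced, reported in vulnerabilities]

print(f"average age: {mean(ages) / 365.25:.1f} years")
print(f"median age:  {median(ages) / 365.25:.1f} years")
```

If the tools really approach finding everything, both of these numbers for newly reported vulnerabilities should trend toward zero.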

When the tools have found most problems there should be fewer bugs left to fix. The bugfix rate should go down rapidly, independently of how we count them or how liberal we are about what exactly counts as a bugfix.

Given the data from the curl project, the number of bugfixes does not seem to be dropping yet. Maybe the bugfix speed goes up before it goes down?
Given the look of these graphs, I don't think we are close to zero bugs. The two curves do not even seem to have started falling yet.
Yes, these graphs are based on data from a single project, which makes them a weak basis for statistical conclusions, but this is all I have to work with.
I think that’s mostly an indication of what you believe the tooling can do and how good they can eventually end up becoming.
I don’t know. I will keep fixing bugs.
2026-04-30 14:49:47

In appendix A of the book Root cause: Stories and lessons from two decades of Backend Engineering Bugs, author Hussein Nasser has these wonderful words to say about me:
Daniel Stenberg is a Swedish engineer and the creator of curl (cURL), one of the most widely used tools and libraries for fetching content over various protocols. I’ve always admired Daniel’s work, reading his blogs and watching his talks on YouTube. He is one of the engineers who inspired me to start my own YouTube channel and teach backend engineering.
It warms my heart to read this. Words like this give me energy and motivation. My work has meaning.
2026-04-29 14:27:01
You always find the new curl releases on the curl site!
the 274th release
8 changes
49 days (total: 10,761)
282 bugfixes (total: 13,922)
521 commits (total: 38,545)
0 new public libcurl function (total: 100)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 273)
73 contributors, 45 new (total: 3,664)
28 authors, 12 new (total: 1,463)
8 security fixes (total: 188)
As mentioned elsewhere, the security reporting volume has been intense lately. We publish eight new curl vulnerabilities this time.
The official count says over 260 bugfixes were merged in this 49-day cycle. See the changelog for all the details.
Planned upcoming removals include:
If you are concerned about any of these, speak up on the curl-library mailing list ASAP.
Unless we messed this one up and need to do a patch release, the next release is scheduled for June 24.
2026-04-22 19:44:40
As I have been preparing slides for my upcoming talk at foss-north on April 28, 2026, I figured I could take the opportunity to share a glimpse of the current reality here on my blog. The high quality chaos era, as I call it.
I complained and I complained about the high-frequency junk submissions to the curl bug-bounty that grew really intense during 2025 and early 2026, to the degree that we shut it down completely on February 1st this year. At the time we speculated whether that would be sufficient or if the flood would go on.
Now we know.
In March 2026, the curl project went back to HackerOne, once we had figured out that GitHub was not good enough.
From that day, the nature of the security report submissions has changed.
The slop situation is not a problem anymore.

The report frequency is higher than ever. Recently it has been about double the rate we had through 2025, which was already more than double that of previous years.

The quality is higher. The rate of confirmed vulnerabilities is back to and even surpassing the 2024 pre-AI level, meaning somewhere in the 15-16% range.

In addition to that, the share of reports that identify a bug, meaning that they aren’t vulnerabilities but still some kind of problem, is significantly higher than before.

Almost every security report now uses AI to some degree. You can tell by the way they are worded, how the report is phrased, and by the fact that we now easily get very detailed duplicates in ways that would not happen had the reports been written by humans.
The difference compared to before, however, is that they are mostly of very high quality.
The reporters rarely mention exactly which AI tool or model they used (and really, we don’t care), but the evidence is strong that they used such help.
I did a quick, unscientific poll on Mastodon to see if other Open Source projects see the same trends and man, do they! Friends from the following projects confirmed that they too see this trend. Of course the exact numbers and volumes vary, but it shows it's not unique to any specific project.
Apache httpd, BIND, curl, Django, Elasticsearch Python client, Firefox, git, glibc, GnuTLS, GStreamer, Haproxy, Immich, libssh, libtiff, Linux kernel, OpenLDAP, PowerDNS, python, Prometheus, Ruby, Sequoia PGP, strongSwan, Temporal, Unbound, urllib3, Vikunja, Wireshark, wolfSSL, …
I bet this list of projects is just a random selection of those who happened to see my question. You will find many more experiencing and confirming this reality.
When we ship curl 8.20.0 in the middle of next week, at the end of April 2026, we expect to announce at least six new vulnerabilities. Assuming that the trend keeps up for at least the rest of the year, and I think that is a fair assumption, we are looking at an explosion: a record number of CVEs published by the curl project this year.
We might publish closer to 50 curl vulnerabilities in 2026. With a release roughly every seven weeks and six to eight vulnerabilities per release, the arithmetic takes us there.

Given this universal trend, I cannot see how this pattern would not also appear in many other projects.
The tools are still improving. We keep adding flaws when we do bugfixes and add new features.
Someone has suggested it might work like it did with fuzzing: that we will see a plateau within a few years. I suppose we just have to see how it goes.
This avalanche is going to make maintainer overload even worse. Some projects will have a hard time handling this kind of backlog expansion without added maintainers to help.
It is probably a good time for the bad guys, who can easily find these problems themselves just by using the same tools, before all the projects find the time, manpower and energy to fix them.
Then everyone needs to update to the newly released fixed versions of all packages, which we know is likely to take an even longer time.
We are up for a bumpy ride.
2026-03-26 18:09:07
Software and digital security should rely on verification rather than trust. I want to strongly encourage more users and consumers of software to verify curl, and ideally to require at least this level of verification of the other software components in your dependency chains.
With every source code commit and every software release, there are risks. There are also risks entirely independent of those.
Some of the things a widely used project can fall victim to include…
If any of these were to happen, they could of course also happen in combination and in rapid sequence.
curl, mostly in the shape of libcurl, runs in tens of billions of devices. Clearly one of the most widely used software components in the world.
People ask me how I sleep at night given the vast amount of nasty things that could occur virtually at any point.
There is only one way to combat this kind of insomnia: do everything possible and do it openly and transparently. Make it a little better this week than it was last week. Do software engineering right. Provide means for everyone to verify what we do and what we ship. Iterate, iterate, iterate.
If even just a few users verify that they got a curl release signed by the curl release manager, and that the release contents are untainted and only contain bits that originate from the git repository, then we are in a pretty good state. We need enough independent outside users to do this, so that one of them can blow the whistle if anything at any point looks wrong.
I can’t tell you who these users are, or in fact if they actually exist, as they are and must be completely independent from me and from the curl project. We do however provide all the means and we make it easy for such users to do this verification.
The few outsiders who verify that nothing was tampered with in the releases can only validate that the releases are made from what exists in git. It is our own job to make sure that what exists in git is the real thing. The secure and safe curl.
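To make this concrete, here is a minimal sketch of what such an independent check could look like, in Python. It is not the project's official procedure (the verify page on the curl website describes that); the file names, tag name and repository path below are placeholders, and the sketch assumes you have gpg and git installed, the release manager's public key imported, and a local clone of the curl repository.

```python
import subprocess
import sys
import tarfile

# Placeholder names -- adjust to the release and paths you are actually checking.
TARBALL = "curl-8.20.0.tar.xz"
SIGNATURE = "curl-8.20.0.tar.xz.asc"
GIT_TAG = "curl-8_20_0"
GIT_DIR = "curl"  # path to a local clone of the curl git repository

# Step 1: verify the detached GPG signature made by the release manager.
# The release manager's public key must already be imported into gpg.
if subprocess.run(["gpg", "--verify", SIGNATURE, TARBALL]).returncode != 0:
    sys.exit("signature verification FAILED")

# Step 2: list the files git knows about for the release tag.
git_files = set(
    subprocess.run(
        ["git", "-C", GIT_DIR, "ls-tree", "-r", "--name-only", GIT_TAG],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
)

# Step 3: list the files in the tarball, stripping the leading "curl-X.Y.Z/" part.
with tarfile.open(TARBALL) as tar:
    tar_files = {
        member.name.split("/", 1)[1]
        for member in tar.getmembers()
        if member.isfile() and "/" in member.name
    }

# Files present in the tarball but absent from git deserve scrutiny: they should
# only be files generated by the release process, nothing else.
for name in sorted(tar_files - git_files):
    print("only in tarball:", name)
```

Comparing the actual file contents against git, and scrutinizing the generated files that legitimately exist only in the tarball, would be the natural next steps. The point is that anyone can perform this kind of check independently.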
We must do a lot to make sure that whatever we land in git is okay. Here’s a list of activities we do.
- -Werror, which converts warnings to errors and fails the builds.
- zizmor and other code analyzer tools run on the CI job config scripts, to reduce the risk of us running or using insecure CI jobs.

All this is done in the open with full transparency and full accountability. Anyone can follow along and verify that we follow this.
Require this for all your dependencies.
We plan for the event that someone actually wants to, and tries to, hurt us and our users really badly. Or that it happens by mistake. A successful attack on curl can in theory reach widely.
This is not paranoia. This setup allows us to sleep well at night.
This is why users still rely on curl after thirty years in the making.
I recently added a verify page to the curl website explaining some of what I write about in this post.
2026-03-25 16:05:41
I hope I don't have to spell it out, but I will do it anyway: in these cases I don't know anything about their products and I cannot help them. Quite often I first need to search around just to figure out what the product the person is asking me about even is or does.
Over the years I have collected such emails that end up in my inbox. Out of those that I have received, I have cherry-picked my favorites: the best, the weirdest, the most offensive and the most confused ones, and I put them up online. A few of them also triggered separate blog posts of their own in the past.
They help us remember that the world is complicated and hard to understand.
Today, my online collection reached the magical number: 100 emails. The first one in the stash was received in 2009 and the latest arrived just the other day. I expect I'll keep adding the occasional new one going forward as well.
Enjoy!