Blog of Daniel Stenberg

Swedish open source developer and curl maintainer.

A family of forks

2025-06-23 17:26:27

curl supports getting built with eleven different TLS libraries. Six of these libraries are OpenSSL or forks of OpenSSL. Allow me to give you a glimpse of their differences, similarities and some insights into what it takes to support them all.

SSLeay

It all started with SSLeay. This was the first SSL library I learned existed, and we added the first HTTPS support to curl using this library in the spring of 1998. The SSLeay project itself had apparently started back in 1995.

This was back in the days we still only had SSL; TLS would come later.

OpenSSL

This project was created (forked) from the ashes of SSLeay in late 1998 and curl supported it from the start. SSLeay was abandoned.

OpenSSL always had a quirky, inconsistent and extremely large API set (a good chunk of it inherited from SSLeay). This is further complicated by documentation that is sparse at best, leaving a lot to the user's imagination and to their skill in digging through source code to get the last details answered (still true in 2025). In curl we keep getting occasional problems reported with how we use this library, even decades in. Presumably the same holds for every OpenSSL user out there.

The OpenSSL project is often criticized for having dropped the ball on performance since they went to version 3 a few years back. They have also been slow and/or unwilling to adopt new TLS technologies such as QUIC and ECH.

In spite of all this, OpenSSL has become a dominant TLS library, especially in Open Source.

LibreSSL

Back in the days of Heartbleed, LibreSSL forked off and became its own project. They trimmed away things they thought did not belong in the library and created their own TLS library API. A few years in, Apple started shipping curl on macOS built with LibreSSL, carrying some local patches that make it behave differently than other builds.

LibreSSL was late to offer QUIC support, does not support SSLKEYLOGFILE or ECH, and generally seems to be even slower than OpenSSL at implementing new things these days.

curl has worked perfectly with LibreSSL since it was created.

BoringSSL

Forked off by Google in the Heartbleed days. Made by Google for Google, without any public releases, they have cleaned up the prototypes and variable types a lot and were leading the QUIC API push. In general, most new TLS inventions have since been implemented and supported by BoringSSL before the other forks.

Google uses this in Android and in other places.

curl has worked perfectly with BoringSSL since it was created.

AmiSSL

A fork or flavor of OpenSSL done for the sole purpose of making it build and run properly on AmigaOS. I don’t know much about it but included it here for completeness. It seems to be more or less a port of OpenSSL for Amiga.

curl works with AmiSSL when built for AmigaOS.

QuicTLS

As OpenSSL dragged its feet and refused to provide the QUIC API that the other forks offered back in the early 2020s (for reasons I have yet to see anyone explain), Microsoft and Akamai forked OpenSSL and produced QuicTLS. It has since tried to be a light-weight fork that mostly just adds the QUIC API in the same style BoringSSL and LibreSSL support. Light-weight in the sense that they tracked upstream closely and did not intend to deviate from it in other ways than the QUIC API.

With OpenSSL 3.5 they finally shipped a QUIC API, but one that is different from the QUIC API the forks (including QuicTLS) provide. I believe this has triggered QuicTLS to reconsider its direction going forward, but we are still waiting to see exactly how.

curl has worked perfectly with QuicTLS since it was created.

AWS-LC

This is a fork off BoringSSL maintained by Amazon. As opposed to BoringSSL, they do actual (frequent) releases and therefore seem like a project even non-Amazon users could actually use and rely on – even though their stated purpose for existing is to maintain a secure libcrypto that is compatible with software and applications used at AWS. Strikingly, they maintain more than “just” libcrypto though.

This fork has shown a lot of activity recently, even in the core parts. Benchmarks done by the HAProxy team in May 2025 show that AWS-LC outperforms OpenSSL significantly.

The API AWS-LC offers is not identical to BoringSSL’s.

curl has worked perfectly with AWS-LC since early 2023.

Family tree

(figure: a family-tree diagram of SSLeay, OpenSSL and its forks appears here in the original post)

The family life

Each of these six forks has its own specifics, APIs and features, which also change and vary across their different versions. We continue to support all six for now, as people still seem to use them and the maintenance remains manageable.

We support all of them using the same single source code with an ever-growing #ifdef maze, and we verify builds using the forks in CI – albeit only with a limited set of recent versions.
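
To give a flavor of that maze: each fork defines its own identifying preprocessor macros, so shared code can test for them wherever behavior differs. A minimal hedged sketch (the macros are the real ones the forks define; the function itself is invented purely for illustration):

#include <openssl/ssl.h>

/* Hedged illustration of fork detection. AWS-LC also defines the
   BoringSSL macro for compatibility, so it must be tested first. */
static const char *tls_backend_name(void)
{
#if defined(OPENSSL_IS_AWSLC)
  return "AWS-LC";
#elif defined(OPENSSL_IS_BORINGSSL)
  return "BoringSSL";
#elif defined(LIBRESSL_VERSION_NUMBER)
  return "LibreSSL";
#elif defined(OPENSSL_VERSION_NUMBER)
  return "OpenSSL (or a close fork such as QuicTLS or AmiSSL)";
#else
  return "unknown";
#endif
}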

Over time, the forks seem to be slowly drifting further and further apart. I don't think it has become a concern yet, but we are of course monitoring the situation and might at some point have to do some internal refactoring to cater for this.

Future

I can’t foresee what is going to happen. If history is a lesson, we seem to be heading towards more forks rather than fewer. Every reader of this blog post is of course now pondering how much duplicated effort is spent on all these forks, and the inefficiencies that implies – for the libraries themselves, but also for users such as curl.

I suppose we just have to wait and see.

Dropping some TLS laggards

2025-06-11 15:10:26

In the curl project we have a long tradition of supporting a range of different third party libraries that provide similar functionality. The person who builds curl needs to decide which of the backends they want to use out of the provided alternatives. For example when selecting which TLS library to use.

This is a fundamental and appreciated design principle of curl. It allows different users to make different choices and priorities depending on their use cases.

Up until May 2025, curl supported thirteen different TLS libraries. They differ in features, footprint, speed and licenses.

Raising the bar

We implicitly tell users that they can use one of the libraries from this list and get good curl functionality. The libraries we support have met our approval. They passed the tests. They are okay.

As we support a large number of them, we can raise the bar and gradually increase the requirements we set for them to remain approved. For the good of our users. To make sure that the ones we support truly are good quality choices to build upon – ideally for years to come.

TLS 1.3

The latest TLS version is called TLS 1.3 and the corresponding RFC 8446 was published in August 2018, almost seven years ago. While there are no known major problems or security issues with its predecessor version 1.2, a modern TLS library that still has not implemented and provided support for TLS 1.3 is a laggard. It is behind.

We take this opportunity to raise the bar and say that starting June 2025, curl only supports TLS libraries that support TLS 1.3 (in their modern versions). The first curl release shipping with this change is the pending 8.15.0 release, scheduled for mid-July 2025.
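
Application authors who want to insist on TLS 1.3 for their own transfers can already do so through the libcurl API. A minimal sketch using the long-standing CURLOPT_SSLVERSION option:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  CURLcode res = CURLE_OK;
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* refuse to negotiate anything older than TLS 1.3 */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                     (long)CURL_SSLVERSION_TLSv1_3);
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return (int)res;
}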

This move has been announced, planned and repeatedly communicated for over a year. It should not come as a surprise, even if I have no doubt that it will be considered as such by some.

This makes sure that users and applications that decide to lean on curl are more future-proof. We no longer recommend using one of the laggards.

Removed

This action affects these two specific TLS backends:

  • BearSSL
  • Secure Transport

BearSSL

This embedded and small footprint focused library is probably best replaced by wolfSSL or mbedTLS.

Secure Transport

This is a native library in Apple operating systems that Apple themselves deprecated a long time ago. There is no obvious native replacement for it, but we would probably recommend either wolfSSL or an OpenSSL fork. Apple themselves have used LibreSSL for their curl builds for a long time.

The main feature users might miss from Secure Transport, not yet provided by any other backend, is the ability to use the native CA store on the Apple operating systems – iOS, macOS etc. We expect this feature to get implemented for other TLS backends soon.
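
For comparison, curl already exposes this kind of switch on Windows: CURLSSLOPT_NATIVE_CA (added in 7.71.0) makes an OpenSSL-family backend verify peers against the native Windows CA store. An Apple-store equivalent would presumably look similar. A hedged sketch of the existing option:

#include <curl/curl.h>

/* Hedged sketch: ask the TLS backend to use the native OS CA store
   (currently a Windows feature) instead of a CA bundle file. */
static void use_native_ca(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NATIVE_CA);
}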

Network framework

On Apple operating systems, there is a successor to Secure Transport: the Network framework. This is however much more than just a TLS layer, and because of its design decisions and API architecture it is totally unsuitable for curl’s purposes. It does not expose/use sockets properly, and the only way to use it would be to hand over connecting, name resolving and parts of the protocol management to it, which is totally unacceptable and would be a recipe for disaster. It is therefore highly unlikely that curl will again have support for a native TLS library on Apple operating systems.

Eleven remaining TLS backends in curl

In the order we added them.

  1. OpenSSL
  2. GnuTLS
  3. wolfSSL
  4. SChannel
  5. LibreSSL – an OpenSSL fork
  6. BoringSSL – an OpenSSL fork
  7. mbedTLS
  8. AmiSSL – an OpenSSL fork
  9. rustls
  10. QuicTLS – an OpenSSL fork
  11. AWS-LC – an OpenSSL fork

Eight removed TLS backends

With these two new removals, the set of TLS libraries we have removed support for over the years are, in the order we removed them:

  1. QsoSSL
  2. axTLS
  3. PolarSSL
  4. MesaLink
  5. NSS
  6. gskit
  7. BearSSL
  8. Secure Transport

Going forward

Currently we have no plans to remove support for any other TLS backend, but we of course reserve the right to do so when we feel the need, for the good of the project and our users.

We similarly have no plans to add support for any additional TLS libraries, but if someone were to bring such work to the project for one of the few remaining quality TLS libraries that curl does not already support, we would most probably welcome the effort.

What we can’t measure

2025-06-05 14:36:31

The curl project is an independent Open Source project. Our ambition is to do internet transfers right and securely with the features “people” want. But how do we know if we do this successfully or not?

Possibly one rough way to measure whether users are happy would be to know if the number of users goes up or down.

How do we know?

Number of users

We don’t actually know how many users we have – which devices, tools and services are powered by our code. We don’t know how many users install curl. We also don’t know how many install it and then immediately uninstall it again because there is something about it they don’t like.

Most of our users install curl and libcurl from a distribution, unless it was already installed from the beginning without them having to do anything. They don’t download anything from us. Most users likely never visit our website for any purpose.

No telemetry nor logs

We cannot, and will never try to, do any kind of telemetry in the command line tool or the library, so there is no automated way for us to know how much either of them is used unless we are told explicitly.

We can search the web, guess and ask around.

Tarball downloads

We can estimate how many people download the curl release tarballs from the website every month, but that is a nearly useless number. What does over a million downloads per month mean in this context? Presumably a fair share of them are just repeated CI jobs.

A single download of a curl tarball can be used to build curl for a long time, for countless products, and get installed in several billion devices – or never get used anywhere. Or somewhere in between. We will never know.

GitHub

Our GitHub repository has a certain number of stars. This number does not mean anything, as only a random subset of developers ever see it, and just some of those decide to perform the rather meaningless act of starring it. The git repository has been forked on GitHub several thousand times, but that is an almost equally pointless number.

We can get stats for how often our source code git repository is cloned, but then again that number probably gets heavily skewed as CI use of it goes up and down.

Binary downloads

We offer curl binaries for Windows, but since we run a website entirely without logs, those downloads are bundled with the tarballs in our rough stats: we only know how many objects in the 1M-10M size range are downloaded over a period of time. Besides, Windows ships with curl bundled, so most Windows users never download anything from us.

We provide curl containers and since they are hosted by others, we can get some “pull” numbers. They mostly tell us people use the containers – but growing and shrinking trends don’t help us much as we don’t know who or why.

Ecosystems

Because libcurl is a fairly low-level C library, it is usually left outside of all ecosystems. Most infrastructure tooling for listing, counting and tracking dependencies simply leaves libcurl out, invisible, as if it were not actually used. Presumably it is just assumed to be part of the operating system or something.

These tools are typically made for the Python, Node, Java, Rust, Perl etc. ecosystems, where dependencies are easy to track via their package systems. Therefore we cannot easily check how many projects or products depend on libcurl with these tools – that number would be strangely low.

Users

I try to avoid talking about the number of users, because for curl and libcurl I can’t really tell what a user is. curl is used directly by users, sure, but it is also used in countless scripts that run without a user directly invoking it.

libcurl is used many magnitudes more than the curl tool, as a component built into devices, tools and services that often operate without any user present.

Installations

I tend to make my (wild) guesses about the number of (lib)curl installations, even though that too is highly error-prone.

I don’t know even nearly all the tools, games, devices and services that use libcurl, because most of them never tell me or anyone else. They don’t have to. If we find out while searching the web, or someone points us to a credit mention, then we know. Otherwise we don’t.

I don’t know how many of those libcurl using applications exist in the world. New versions come, old versions die.

The largest-volume libcurl users are most probably the mobile phones: libcurl is part of the operating system in Apple’s iOS and in both Google’s and Samsung’s default Android setups. Probably in a few of the other popular Androids as well.

Since the mobile operating systems do not expose the libcurl API, a large number of mobile applications consequently build their own libcurl and ship it with their apps, on both iOS and Android. This way, a single mobile phone can easily contain a dozen different libcurl installations, depending on exactly which set of apps is used.

There are an estimated seven billion smartphones and one billion tablets in the world. Do they all have five applications on average that bundle libcurl? Who knows. If they do, that makes roughly eight billion devices times six installations each (five bundled copies plus the one in the operating system) – somewhere around forty-eight billion.

Also misleading

Staring at and focusing on that outrageously large number is also complicated, and it may not be a particularly good indicator that we are on the right path. Those ten or perhaps forty-eight billion libcurl installations are controlled and done by basically just a handful of applications and companies. Should some of them switch over to an alternative, the number would dwindle immediately. And similarly, if we got twice that amount of new users but on low-volume installations (compared to smartphones, everything is low volume), the total number of installations would not really change, but we might have more satisfied users.

Maybe the best indicator of us keeping on the right track is the number of different users or applications that use libcurl – where we would count Android, iOS and the mobile YouTube application as three. Of course we have no means to even guess how many different users there are. That is also a very time-specific question: maybe there are a few new ones since yesterday, and tomorrow a few existing users may ditch libcurl for something else.

We just don’t know and we can’t tell – and we have no expectation of this changing.

Success

In many ways this is of course a success beyond our wildest dreams and a luxury position many projects can only dream of. Don’t read this blog post as a complaint in any way. It just describes a challenge and a reality.

The old fashioned way

With no way to automatically or even half-decently guess how we are doing, we instead do it the old way: we rely on users to tell us what they think. We work on issues, we respond to questions and we run an annual survey. We try to be open to feedback and listen to how people and users want modern internet transfers done.

We make an effort to ship quality products and run a tight ship. To score top marks in each and every way you can evaluate a software project and our products.

Hopefully this will keep us on the right track. Let me know if you ever think we veer off.

curl 8.14.1

2025-06-04 13:48:34

This is a patch release done only a week after the previous version, with no changes merged – only bugfixes – because some of the regressions in 8.14.0 were a little too annoying to leave unattended for a full cycle.

Release presentation

Numbers

the 268th release
0 changes
7 days (total: 9,938)
35 bugfixes (total: 12,049)
48 commits (total: 35,238)
0 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 269)
20 contributors, 4 new (total: 3,431)
9 authors, 1 new (total: 1,376)
1 security fix (total: 167)

Security

CVE-2025-5399: WebSocket endless loop. A malicious WebSocket server can send a particularly crafted packet which makes libcurl get trapped in an endless busy-loop. Severity LOW.
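
For context, this is roughly what an affected receive loop looks like with libcurl’s WebSocket API in connect-only mode (a hedged sketch; with 8.14.1, a crafted packet can no longer keep libcurl spinning inside such a transfer):

#include <curl/curl.h>

/* Hedged sketch of a WebSocket receive using connect-only mode.
   Pre-8.14.1, a crafted server packet could trap libcurl in an
   endless busy-loop here (CVE-2025-5399). */
int ws_receive_once(void)
{
  CURL *curl = curl_easy_init();
  CURLcode res;
  if(!curl)
    return 1;
  curl_easy_setopt(curl, CURLOPT_URL, "wss://example.com/ws");
  curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 2L); /* WebSocket mode */
  res = curl_easy_perform(curl);
  if(!res) {
    char buf[256];
    size_t rlen;
    const struct curl_ws_frame *meta;
    res = curl_ws_recv(curl, buf, sizeof(buf), &rlen, &meta);
    /* a real application handles CURLE_AGAIN by waiting on the
       socket and retrying */
  }
  curl_easy_cleanup(curl);
  return (int)res;
}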

Bugfixes

We count about 31 bugfixes; view them all on the 8.14.1 changelog page.

Decomplexification

2025-05-30 01:57:06

(Clearly a much better word than simplification.)

I believe we generally accept it as true that we should write simple and easy-to-read code, in order to make it harder to create bugs and cause security problems. The more complicated code we write, the easier it gets to slip up, misunderstand or forget something along the line.

And yet, at the same time, functions tend over time to grow and become more and more complicated as we address edge cases and add new funky features we did not anticipate when the code was first created, in some cases decades ago.

Complexity

Cyclomatic complexity is a metric used to indicate the complexity of a program. You can click the link there and read all the fine details, but it boils down to: a higher number means a more complex function. A function that contains many statements and code paths.

There is a fine old command line tool called pmccabe that can scan C code and output a summary of all functions and their corresponding complexity scores.

Invoking this tool on your own C code is a perfect way to get a toplist of functions possibly in need of refactoring. While the idea of what a complex function is, and exactly how to compute the score, is not entirely objective, I believe this method works to a sufficient degree.
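
To make the metric concrete, here is my rough reading of how such a score comes about (an illustration, not pmccabe’s exact algorithm): a function starts at one, and every decision point adds one.

/* Illustration only: cyclomatic complexity starts at 1 for the
   function itself and adds 1 per decision point (if, while, for,
   case, &&, ||). This function would score roughly 4. */
static int classify_status(int code)
{
  if(code < 100)   /* +1 */
    return -1;
  if(code < 400)   /* +1 */
    return 0;
  if(code < 600)   /* +1 */
    return 1;
  return 2;
}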

curl

Last year I added a graph to the curl dashboard that shows the complexity score of the worst function in curl as well as the 99th percentile. Later, I also added a plot for the 90th percentile.

This graph shows how the worst complexity in curl has shifted over time – and as always when there is a graph or you measure something, suddenly we get the urge to do something about it. Because it looked bad.

The worst

I grabbed my scalpel and refactored a few of the most complex functions we had, and I basically halved the complexity score of the worst-in-curl functions. The steep drop at the right side of the graph felt nice.

I left it at that for a while, quite pleased with having at least improved the state of things.

A few months later I returned to the topic. I figured we could do more, as the worst was still quite bad. We should set a goal to extinguish (improve, really) all functions in the curl code with a score higher than N.

A goal

In my mail to the team I proposed 100 as the acceptable complexity limit, which is not super aggressive. When I sent the email, there were seven functions scoring over 100 and the worst offender scored 196. After all, only a couple of months earlier, the worst function had scored over 350.

Maybe we could start with 100 as a max and lower it going forward if that works?

To get additional visualization of the curl code complexity (and ideally how we improve the situation) I also created two more graphs for the dashboard.

Graph complexity distribution

The first one takes the function complexity score for every line of source code and shows what percentage of the source code has which complexity score. The ideal of course being that almost the entire thing should have low scores.

This graph shows 100% of the source code, independent of its size at any given time, because I think that is what is relevant: the complexity distribution at a particular point in time, independent of size. The code has grown almost linearly throughout the period the graph covers, so 50% of the code in 2010 was of course much less code than 50% is today.

This graph shows how we have had periods of quite a lot of code with complexity over 200 and that today we finally have erased all complexity above 100. It’s a little hard to see in the graph, but the yellow field goes all the way up as of May 28 2025.

Graph average complexity

The second graph takes the same per-line complexity scores and calculates the average complexity score over all lines of code at each point in time. Ideally, that line should shrink over time.

It now shows a rather steep drop in mid 2025 after our latest efforts. The average complexity has more than halved since 2022.

Analyzers like it too

Static code analyzers also produce better results and fewer false positives when they get to work with smaller and simpler functions. That, too, helps produce better code.

Refactors could shake things up

Of course, refactoring a complex function into several smaller and simpler functions can be anywhere from straightforward to quite complicated. A refactor in the name of simplification might itself be hard – an oxymoron of sorts – and it could shake things up and potentially add bugs rather than fix them.

Doing this of course needs to be done with care, and there needs to be a solid test suite around the functions to validate that the functionality is still there, with the same behavior as before the refactor.

Function length

The most complex functions also tend to be the longest; there is a strong correlation. For that reason, I also produce a graph of the worst and the 99th percentile function lengths in the curl source code.

(something is wrong in this graph, as the P99 cannot be higher than the worst but the plot seems to indicate it was that in late 2024?)

A CI job to keep us honest

To make absolutely sure not a single function accidentally increases complexity above the permitted level in a pull-request, we created a script that makes a CI job turn red if any function goes over 100 in the complexity check. It is now in place. Maybe we can lower the limit going forward?

Towards the goal

The goal is not so much a goal as a process. An attempt to make us write simpler code, which in turn should help us write better and more secure code. Let’s see where we are in ten years!

As of this writing, here is the toplist of the most complex functions in curl right now. The ones with scores over 70:

100  lib/vssh/libssh.c:myssh_statemach_act
 99  lib/setopt.c:setopt_long
 92  lib/urlapi.c:curl_url_get
 91  lib/ftplistparser.c:parse_unix
 88  lib/http.c:http_header
 83  src/tool_operate.c:single_transfer
 80  src/config2setopts.c:config2setopts
 79  lib/setopt.c:setopt_cptr
 79  lib/vtls/openssl.c:cert_stuff
 75  src/tool_cb_wrt.c:tool_write_cb
 73  lib/socks.c:do_SOCKS5
 72  lib/vssh/wolfssh.c:wssh_statemach_act
 71  lib/vtls/wolfssl.c:Curl_wssl_ctx_init
 71  lib/rtsp.c:rtsp_do
 71  lib/socks_sspi.c:Curl_SOCKS5_gssapi_negotiate

This is just a snapshot of the moment. I hope things will continue to improve going forward, even if perhaps a little more slowly now that we have fixed all the most terrible cases.

Everything is public

All the scripts for this, the graphs shown and the data behind them are of course publicly available.

curl 8.14.0

2025-05-28 13:48:12

Welcome to another curl release.

Release presentation

Numbers

the 267th release
6 changes
56 days (total: 9,931)
229 bugfixes (total: 12,015)
406 commits (total: 35,190)
0 new public libcurl function (total: 96)
1 new curl_easy_setopt() option (total: 308)
1 new curl command line option (total: 269)
91 contributors, 47 new (total: 3,426)
36 authors, 17 new (total: 1,375)
2 security fixes (total: 166)

Security

Changes

  • When doing MQTT, curl now sends pings
  • The Schannel backend now supports pkcs12 client certificates containing CA certificates
  • Added CURLOPT_SSL_SIGNATURE_ALGORITHMS and --sigalgs for the OpenSSL backend (see the sketch after this list)
  • ngtcp2 + OpenSSL’s new QUIC API is now supported. Requires OpenSSL 3.5 or later.
  • wcurl comes bundled in the curl tarball
  • WebSocket can now disable auto-pong
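
As a taste of the new signature algorithms option mentioned above: it takes a colon-separated list of algorithms for the OpenSSL backend. A hedged sketch (the algorithm-name syntax follows OpenSSL’s signature algorithm list format; this particular selection is just an example):

#include <curl/curl.h>

/* Hedged sketch of the new 8.14.0 option: restrict the TLS signature
   algorithms offered in the handshake (OpenSSL backend only). */
static void restrict_sigalgs(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_SSL_SIGNATURE_ALGORITHMS,
                   "ecdsa_secp256r1_sha256:rsa_pss_rsae_sha256");
}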

Bugfixes

See the changelog on the curl site for the full set, or watch the release presentation for a “best of” collection.