So the solution is to have a thread waiting on the future. Technically you'd need a thread per future, which is not exactly scalable. The article uses a pool which has its own problems.
The article even mentions an arguably better approach (check on a timer), but for some reason claims it is worse.
Those integrations are not exactly good designs regardless; the real solution is simply not to use std::future, and to use non-blocking async mechanisms that can cooperate on the same thread instead. Standard C++ has one, albeit somewhat overcomplicated: senders and receivers. Asio also works.
I think the idea is that you're using some external library (e.g. a database driver) which doesn't use asio but returns a std::future. You can't just "not use std::future" if that's what your library uses, short of fully rewriting the external library.
The other option is, as you mention, polling using a timer, but I don't see how that's better; I'd rather move the work off the event loop to a thread. With polling you also have to do the "latency vs. CPU time" tradeoff dance, trying to judge how often to poll vs. how much latency you're willing to accept.
>The article even mentions an arguably better approach (check on a timer), but for some reasons claims it is worse.
How do you know what timeout to use for the timer? You may end up with tons of unnecessary polling if your timeout is too short, or high latency if your timeout is too long.
>Standard C++ has one albeit somewhat overcomplicated, senders and receivers
*in C++26
Or CUDA, which is getting its own compute version.
An imperfect but really useful heuristic for whether or not a C++ feature is going to work out is how long and troublesome the standardization process is. Modules? Never happening, boys; the time when anyone cared has passed. Coroutines? Break gdb forever? Keep dreaming.
Look at what they can do when it's clearly a good idea and has the backing of the absolute apex predator experts: reflection. If reflection can fucking sail through and your thing struggles, it's not going to make it.
Andrew Kelley just proposed the first plausible IO monad for a systems language, and if I wanted to stay relevant in async C++ innovation I'd just go copy it. Maybe invert the lift/unlift direction.
The coroutines TS is heavily influenced by folly coroutines (or vice versa), a thing with which I have spent many a late night debugging segfaults. Not happening.
Besides, if threads are too slow or big now? Then everything but liburing is.
I'm not sure what you're trying to say. Coroutines have been standardized in C++20 and they are fully supported by all major compilers. They are successfully used in production. I've switched to coroutines for all networking in my personal projects and I'm not looking back.
Not everyone is using C++ on Linux with GCC.
There are people where modules and co-routines already happened, and there is better debugging experiences out there than gdb.
Coroutines are well supported in boost::asio and are deployed in production in more places than you would think.
A second heuristic for things that aren't going to work out well is stuff that came from Boost. Oh sure, there was a time back in the TR1 days when it was practically part of the standard. But if it's on anyone's "cool, we'll link that, no problem" list in 2025? I don't know them.
Asio is actually independent from boost and it is by far the most popular C++ networking library.
Boost also has a better future that at least allows composition.
But yes, do not use std::future except for the most simple tasks.
I work with C++, but the amount of "don't use standard feature X because reasons" is crazy.
It is a bit sad to see this for newer features. Maybe the committee should re-evaluate how quickly new designs are pushed into the standard and allow a bit more time for evaluation. Moving fast makes sense when it's OK to break things, not so much when you need to support the result forever.
Although C++ is one of my favourite languages, I feel the current WG21 process is broken. It is one of the few language evolution processes where proposals can be voted in without any kind of preview implementation for community feedback, or even any attempt to actually validate the idea.
I have to acknowledge that none of the other ISO languages, including C, are this radical.
That is how we have been getting so many warts lately.
Unfortunately there doesn't seem to be any willingness to change this, and eventually it will be too late to matter.
std::future was caught up in the coroutine/network/concurrency/parallelism master plan that has been redesigned too many times. Senders/receivers is the current direction, and while I don't dislike it, we are still far from a final design that covers all use cases (we still don't have a sender/receiver networking library proposal, I think).
Whatever we end up with, std::future just wasn't a good base for a high-performance async story. Still, just adding a readiness callback to std::future would make it infinitely more useful, even if suboptimal. At least it would be usable where performance is not a concern.
C++ really needs a fast-deprecate-and-remove strategy for features that have proven to be poor, whether by bad design or bad implementation. And compilers should automatically warn about such features.
On the contrary, I think they should move faster and provide more convenience functions that are "good enough" for 90% of use cases. For power users, there will always be a library that addresses domain-specific issues better than the standard could ever hope to.
Instead, the committee attempts to work towards perfect solutions that don't exist, and ends up releasing overengineered stuff that is neither the most convenient, nor the most performant or efficient solution. Like <random>.
And who gets to implement those ideas faster, many of which were never implemented before being added into the standard in first place?
The three surviving compilers are already lagging as it is: none of them is fully C++20 compliant, C++23 might only reach 100% on two of them, and let's see how C++26 compliance turns out. Meanwhile, C++17 parallel algorithms are only fully available in one of them, while the other two require TBB and libstdc++ to actually make use of them.
I'm obviously not talking about modules-level features that may never get to see the light of day.
A random(min, max) function isn't rocket science and would already be a major improvement over the three-liner that is currently necessary. The major compiler devs won't take long to implement these cases, just as it did not take them long to implement simple yet useful functionality in previous versions of the standard. And the standard library is full of these cases of missing convenience functions over deliberately over-engineered ones.
Modules have already seen the light of day in VC++ and clang.
Anyone using a recent version of Office is using code that was written with C++20 modules.
It is relatively easy to see how far behind compiler developers are regarding even basic features.
Note that two of the three major surviving compilers are open source projects, and in all three, the big names have ramped down their contributions, as they would rather invest in their own languages, seeing the current versions as good enough for existing codebases.
I wouldn't mind if they actually _fixed_ features afterward, even if it means a breaking change.
Every time I read an article like this I thank the day when I switched from C++ to Go. I know why C++ is like this, I understand all the hard work that went into evolving it over 40 years, but I simply refuse to deal with all this stuff anymore. I have better things to worry about in my life.
Yeah... I used C++ coroutines a bit and they're super powerful and can do anything you want... But... I mean look at how complex co_await is:
https://en.cppreference.com/w/cpp/language/coroutines.html#c...
It does about 20 different steps with a ton of opportunities for overloading and type conversion. Insanely complicated!
And they kept up the pattern of throwing UB everywhere:
> Falling off the end of the coroutine is equivalent to co_return;, except that the behavior is undefined if no declarations of return_void can be found in the scope of Promise.
Why?? Clearly they have learnt nothing from decades of C++ bugs.
Hopefully Rust gets coroutines soon...
> I mean look at how complex co_await is
It doesn't look meaningfully more complex than C#'s spec (which has absolutely horrendous stuff like :throw-up-emoji: inheriting from some weird vendor type like "System.Runtime.CompilerServices.INotifyCompletion")?
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
Out of curiosity, what would you use coroutines for, in Rust?
My personal use was clearly `async`/`await`, and this landed quite some time ago.
For stimulus generation for SystemVerilog tests. I think you might be able to use `async`/`await` for that but I'm not 100% sure - I haven't tried.
In modern C++ development, coroutines have brought revolutionary changes to asynchronous programming. However, when using boost::asio or standalone asio, we often encounter scenarios where we need to convert traditional std::future<T> to asio::awaitable<T>. This article will detail an efficient, thread-safe conversion method.
Did you just copy-paste the first paragraph of the article?
If you post to HN, you can choose between a text or a url. When you pick url, you can add some text, but it's added as a comment. I guess that's what happened.
It's literally his article.
Who is asking for C++ coroutines and how did we get to C++20 without it?
I'm interested in this too.
Coroutines in Python are fantastically useful and allow more reliable implementation of networking applications. There is a complexity cost to pay but it's small and resolves other complexity issues with using threads instead, so overall you end up with simpler code that is easier to debug. "Hello world" (e.g., with await sleep(1) to make it non-trivially async) is just a few lines.
But coroutines in C++ are so stupendously complicated I can't imagine using them in practice. The number of concepts you have to learn to write a "hello world" application is huge. Surely just using the callback-on-completion style already possible with ASIO (where the callback is usually another method in the same object as the current function) is going to lead to simpler code, even if it's a few lines longer than with coroutines?
Edit: We have a responsibility as senior devs (those of us that are) to ensure that code isn't just something we can write but that others can read, including those that don't spend their spare time reading about obscure C++ ideas. I can't imagine who in good faith thinks that C++ coroutines fall into this category.
As to the complexity: it is complex because it is very low-level. In JavaScript (using that as an example, but I suspect Python is the same) they built in async/await keywords such that they are in cahoots with the Promise class. C++ takes a different path: there isn't a built-in Promise class; instead it provides you lower-level primitives you can use to build a Promise class. You can build a library around it, and it will be as simple as in other languages - both for the awaiter and for implementing libraries that you can await on :) But I agree it is really complicated. I remember once in a while thinking "ah, it can't really be that complicated" only to dive into it again. It doesn't help that practically every term they use (promise, awaiter etc.) is used differently than in all other contexts I've worked in. If you expect it will be as easy as understanding JavaScript async/await/promise, you are in for a rude surprise. Raymond Chen has written coroutine tutorials which span three SERIES. Here's a map of those: https://devblogs.microsoft.com/oldnewthing/20210504-01/?p=10...
As for how we got to here without:
1) Using a large number of processes/threads

2) Raw callback-oriented mechanisms (with all the downsides)

3) Structured async where you pass in lambdas - the benefit is you preserve the sequential structure and can have proper error handling if you stick to the structure. The downside is you are effectively duplicating language facilities in the methods (e.g. .then(), .exception()). Stack traces are often unreadable.

4) Raw use of various callback-oriented mechanisms like epoll and such, with the cost in code readability etc., and/or coupled with custom-written strategies to ease readability (so a subset of #3 really)
With C++ coroutines the benefit is you can write it almost like you usually do (line by line, sequentially) even though it works asynchronously.
The majority of the complexity is in the library/executor, rather than in callers. We have an implementation at my company which is now being widely rolled out, and it's a pretty dramatic readability win to convert callback-based code to nearly-straight-line coroutine code.
That's very promising.
Boost ASIO seemed to be the first serious coroutine library for C++, and it seemed complex to use (I'm saying that as a long-time user of its traditional callback API), but that's perhaps not surprising given that it had to fit with its existing API. But then there was a library (I forget which) posted to HN that was supposed to be a clean, fresh coroutine library implementation, and that still seemed more complex than ASIO with callbacks - it seemed like you needed to know practically every underlying C++ coroutine concept. But maybe libraries just need time to mature a bit.
I was just going to mention ASIO.
> and that seemed complex to use
Actually, I found it pretty straightforward. I switched from callbacks to coroutines in my personal project and it is a massive win! Now I can write simple loops instead of nested callbacks. Also, most state can now stay in local variables.
There is another way to write code which lets you write simple loops and isn’t coroutines. Blocking code.
But the great thing about async (at least it's the killer feature for me) is the really top notch support for cancellation. You can also typically create and join async tasks more easily than spawning and joining threads.
Sure, but then you need one thread per socket, which has its own set of problems (most notably, the need for thread synchronization). I definitely prefer async + coroutines over blocking + thread-per-socket.
Java's new philosophy (in "Loom" - in production OpenJDK now) seems to be virtual threads that are cheap and can therefore be plentiful compared to native threads. This allows you to write the code in the old way without programmer-visible async.
That sounds interesting, I'll take a look! (although not using native threads is almost never about perf)
Ok, but virtual threads still need thread synchronization.
which isn't a problem unless you are abusing threads.
If you avoid synchronization like JavaScript does, then you also don't get pre-emption or parallelism.
Seems prone to deadlocking; I would avoid making the thread pool globally scoped, and instead provide it as an argument to the helper methods.