Rust provides only what’s necessary to model asynchronous operations in the language. Basically, it provides the following:
- A common interface representing an operation that will be completed in the future, through the Future trait
- An ergonomic way of creating tasks (stackless coroutines, to be precise) that can be suspended and resumed, through the async and await keywords
- A defined interface to wake up a suspended task, through the Waker type
That’s really all Rust’s standard library provides. As you can see, there is no definition of non-blocking I/O, how these tasks are created, or how they’re run. There is no non-blocking version of the standard library, so to actually run an asynchronous program, you have to either create a runtime yourself or decide on an existing one to use.
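To make this concrete, the Future trait at the heart of all this is essentially defined as follows in std::future (slightly simplified here; attributes are omitted):

use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    // The type of value the operation eventually produces
    type Output;
    // Called by the executor to drive the operation forward. Returns
    // Poll::Pending if the operation isn't finished yet, or
    // Poll::Ready(value) once it is. The Context carries the Waker
    // the task uses to notify the executor when it can make progress.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

Everything beyond this minimal contract, such as the reactor that notices I/O events and the executor that polls futures, is left to the runtime.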
I/O vs CPU-intensive tasks
As you now know, what you normally write are called non-leaf futures. Let’s take a look at this async block, using pseudo-Rust as an example:
let non_leaf = async {
    let mut stream = TcpStream::connect("127.0.0.1:3000").await.unwrap();
    // request a large dataset
    let result = stream.write(get_dataset_request).await.unwrap();
    // wait for the dataset
    let mut response = vec![];
    stream.read(&mut response).await.unwrap();
    // do some CPU-intensive analysis on the dataset
    let report = analyzer::analyze_data(response).unwrap();
    // send the results back
    stream.write(report).await.unwrap();
};
The points where we yield control to the runtime executor are the .await calls. It’s important to be aware that the code we write between these yield points runs on the same thread as our executor.
That means that while our analyzer is working on the dataset, the executor is busy doing calculations instead of handling new requests.
Fortunately, there are a few ways to handle this, and it’s not difficult, but it’s something you must be aware of:
- We could create a new leaf future, which sends our task to another thread and resolves when the task is finished. We could await this leaf future like any other future (see the sketch after this list).
- The runtime could have some kind of supervisor that monitors how much time different tasks take and moves the executor itself to a different thread so it can continue to run even though our analyzer task is blocking the original executor thread.
- We could create a reactor ourselves that is compatible with the runtime, which does the analysis any way we see fit and returns a future that can be awaited.
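To give an idea of what the first option involves under the hood, here is a minimal, hand-rolled sketch of such a leaf future, written against the standard library’s Future trait. All the names here (ThreadFuture, SharedState, spawn) are made up for illustration; in practice you’d reach for the facility your runtime already provides. The sketch spawns the work on an OS thread, stores the result in shared state, and uses the Waker to notify the executor once the result is ready:

use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use std::thread;

// State shared between the worker thread and the future
struct SharedState<T> {
    result: Option<T>,
    waker: Option<Waker>,
}

struct ThreadFuture<T> {
    shared: Arc<Mutex<SharedState<T>>>,
}

impl<T: Send + 'static> ThreadFuture<T> {
    fn spawn(work: impl FnOnce() -> T + Send + 'static) -> Self {
        let shared = Arc::new(Mutex::new(SharedState {
            result: None,
            waker: None,
        }));
        let worker_shared = Arc::clone(&shared);
        thread::spawn(move || {
            let value = work();
            let mut state = worker_shared.lock().unwrap();
            state.result = Some(value);
            // Notify the executor that this task can make progress again
            if let Some(waker) = state.waker.take() {
                waker.wake();
            }
        });
        ThreadFuture { shared }
    }
}

impl<T> Future for ThreadFuture<T> {
    type Output = T;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut state = self.shared.lock().unwrap();
        match state.result.take() {
            Some(value) => Poll::Ready(value),
            None => {
                // Store the latest Waker so the worker thread can wake us
                state.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

With this in place, the analysis step from the earlier example could be written as something like let report = ThreadFuture::spawn(move || analyzer::analyze_data(response)).await;, which suspends the task without blocking the executor thread while the analysis runs.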
Now, the first way is the usual way of handling this, but some executors implement the second method as well. The problem with #2 is that if you switch runtimes, you need to make sure that the new one supports this kind of supervision as well, or else you will end up blocking the executor.
The third method is mostly of theoretical interest; normally, you’d be happy to send the task to the thread pool that most runtimes provide.
Most executors have a way to accomplish #1 using methods such as spawn_blocking.
These methods send the task to a thread pool created by the runtime, where you can perform either CPU-intensive tasks or blocking tasks that the runtime doesn’t otherwise support.
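As an example, here is roughly how the analysis step could be offloaded with Tokio’s spawn_blocking, assuming we’re running inside a Tokio runtime (a sketch; analyzer is the hypothetical module from the pseudo-code above):

// offload the CPU-intensive analysis to Tokio's blocking thread pool
let report = tokio::task::spawn_blocking(move || {
    analyzer::analyze_data(response)
})
.await      // yields to the executor while the analysis runs elsewhere
.unwrap()   // panics if the blocking task itself panicked
.unwrap();  // unwraps the analyzer's own Result

While the closure runs on the blocking pool, the executor thread is free to poll other tasks, which is exactly what we wanted.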
Summary
So, in this short chapter, we introduced Rust’s futures to you. You should now have a basic idea of what Rust’s async design looks like, what the language provides for you, and what you need to get elsewhere. You should also have an idea of what a leaf future and a non-leaf future are.
These aspects are important as they’re design decisions built into the language. You know by now that Rust uses stackless coroutines to model asynchronous operations, but since a coroutine doesn’t do anything in and of itself, it’s important to know that the choice of how to schedule and run these coroutines is left up to you.
We’ll get a much better understanding of how this all works as we explain it in detail in the chapters to come.
Now that we’ve seen a high-level overview of Rust’s futures, we’ll start explaining how they work from the ground up. The next chapter will cover the concept of futures and how they’re connected with coroutines and the async/await keywords in Rust. We’ll see for ourselves how they represent tasks that can pause and resume their execution, which is a prerequisite for having multiple tasks in progress concurrently, and how they differ from the pausable/resumable tasks we implemented as fibers/green threads in Chapter 5.