Finishing thoughts – Creating Our Own Fibers

I want to round off this chapter by revisiting some of the advantages and disadvantages of this approach that we went through in Chapter 2, now that we have first-hand experience with the topic.

First of all, what we implemented here is an example of what we called a stackful coroutine. Each coroutine (or thread, as we call it in the example implementation) has its own stack. This also means that we can interrupt and resume execution at any point in time. It doesn’t matter if we’re in the middle of a stack frame (in the middle of executing a function); we can simply tell the CPU to save the state we need to the stack, return to a different stack, restore the state that stack needs, and resume as if nothing had happened.
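The “state we need to save” is surprisingly small: the stack pointer plus the callee-saved registers that the calling convention obliges us to preserve. As a reminder of roughly what that looks like for x86-64 System V, here is a sketch of the per-thread context (the struct and field names mirror the chapter’s example, but treat this as illustrative rather than definitive):

```rust
// A sketch of the saved per-fiber state for x86-64 System V.
// Only the stack pointer and the callee-saved registers need to
// survive a cooperative switch; the calling convention makes
// everything else the caller's responsibility.
#[derive(Debug, Default)]
#[repr(C)] // fields laid out in declaration order, which a
           // hand-written assembly switch relies on
struct ThreadContext {
    rsp: u64, // stack pointer: where this fiber resumes
    r15: u64,
    r14: u64,
    r13: u64,
    r12: u64,
    rbx: u64,
    rbp: u64,
}

fn main() {
    // Seven 8-byte registers: the entire state a switch must save.
    assert_eq!(std::mem::size_of::<ThreadContext>(), 7 * 8);
    println!("context is {} bytes", std::mem::size_of::<ThreadContext>());
}
```

Fifty-six bytes per fiber is a big part of why switching between fibers is so much cheaper than an OS thread context switch, which must go through the kernel.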

You can also see that we have to manage our stacks in some way. In our example, we simply allocate a fixed-size stack up front (much like the OS does when we ask it for a thread, only smaller), but for fibers to be more memory-efficient than OS threads, we need a strategy for sizing, growing, or reusing these stacks.
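The simplest strategy, and the one our example uses, is a fixed-size buffer per fiber, with the CPU handed a properly aligned pointer near its top. The size and names below are illustrative, but the alignment requirement is real (the System V ABI requires 16-byte stack alignment):

```rust
// 128 KiB per fiber: illustrative, but far smaller than the
// ~8 MiB default stack of a typical OS thread on Linux.
const STACK_SIZE: usize = 128 * 1024;

fn main() {
    // The stack is just a heap buffer we own.
    let mut stack = vec![0u8; STACK_SIZE];

    // Stacks grow downward on x86-64, so the usable "top" is the
    // end of the buffer, rounded down to 16-byte alignment as the
    // System V ABI requires at function entry.
    unsafe {
        let stack_bottom = stack.as_mut_ptr().add(STACK_SIZE);
        let stack_top = (stack_bottom as usize & !15) as *mut u8;
        assert_eq!(stack_top as usize % 16, 0);
        println!("stack top at {:p}", stack_top);
    }
}
```

Fixed-size stacks are cheap and simple but waste memory if sized generously and overflow if sized meanly; growable (segmented or relocated) stacks solve that at the cost of considerable extra complexity, which is the trade-off the text alludes to.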

If you look at our slightly expanded example in ch05/d-fibers-closure, you’ll notice that we can make the API pretty easy to use, much like the API for std::thread::spawn in the standard library. The flip side is, of course, the complexity of implementing this correctly on every combination of ISA and ABI that we want to support, and, while this point is specific to Rust, it’s challenging to create a great and safe API over these kinds of stackful coroutines without native language support for them.
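To make the API shape concrete, here is a sketch of what such a spawn-style interface looks like from the user’s side. This toy runtime just queues boxed closures and runs them to completion, with none of the actual stack switching from the chapter, but the signature bounds are the same ones a real fiber spawn needs: the closure must own its data (`'static`) and be callable once (`FnOnce`), just like `std::thread::spawn`:

```rust
// A sketch of the API shape only: no real fibers or stack
// switching here, just the surface a user would program against.
struct Runtime {
    tasks: Vec<Box<dyn FnOnce()>>,
}

impl Runtime {
    fn new() -> Self {
        Runtime { tasks: Vec::new() }
    }

    // Same bounds a real fiber spawn would need: the closure must
    // own everything it captures and be callable exactly once.
    fn spawn<F: FnOnce() + 'static>(&mut self, f: F) {
        self.tasks.push(Box::new(f));
    }

    fn run(&mut self) {
        for task in self.tasks.drain(..) {
            task();
        }
    }
}

fn main() {
    let mut rt = Runtime::new();
    rt.spawn(|| println!("fiber 1"));
    rt.spawn(|| println!("fiber 2"));
    rt.run();
}
```

The hard part a real implementation adds underneath this surface is exactly what the text describes: per-ISA/ABI assembly for the switch, and making the whole thing memory-safe without language support.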

To tie this into Chapter 3, where we discussed event queues and non-blocking calls, I want to point out that if you use fibers to handle concurrency, you would call yield right after registering interest in, say, a read event with your non-blocking call. Typically, a runtime would supply these non-blocking calls, and the fact that we yield would be invisible to the user, but the fiber is suspended at that point. We would probably add one more state to our State enum, called Pending or something else that signifies that the thread is waiting for some external event.

When the OS signals that the data is ready, we would mark the thread as State::Ready, and the scheduler would resume execution just as in this example.
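The extended state machine and the wake-up path could be sketched like this. The Pending variant is the hypothetical addition discussed above; the other variants mirror the chapter’s example, and the event-loop callback name is made up for illustration:

```rust
// State machine for a fiber, extended with the hypothetical
// Pending variant discussed in the text.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Available, // slot is free for a new task
    Running,   // currently executing
    Ready,     // suspended; the scheduler may resume it
    Pending,   // suspended; waiting on an external event (e.g. I/O)
}

struct Thread {
    id: usize,
    state: State,
}

// Illustrative: called by the event loop when the OS reports that
// the event a fiber was waiting on has completed.
fn wake(threads: &mut [Thread], id: usize) {
    for t in threads.iter_mut() {
        if t.id == id && t.state == State::Pending {
            t.state = State::Ready; // scheduler will now resume it
        }
    }
}

fn main() {
    let mut threads = vec![
        Thread { id: 1, state: State::Pending },
        Thread { id: 2, state: State::Ready },
    ];
    // The read the first fiber was waiting on completes:
    wake(&mut threads, 1);
    assert_eq!(threads[0].state, State::Ready);
    println!("thread 1 is now {:?}", threads[0].state);
}
```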

While it requires a more sophisticated scheduler and infrastructure, I hope that you have gotten a good idea of how such a system would work in practice.

Summary

First of all, congratulations! You have now implemented a super simple but working example of fibers. You’ve set up your own stack and learned about ISAs, ABIs, calling conventions, and inline assembly in Rust.

It was quite the ride, but if you made it this far and read through everything, you should give yourself a big pat on the back. This is not for the faint of heart, but you pulled through.

This example (and chapter) might take a little time to fully digest, but there is no rush for that. You can always go back to this example and read the code again to fully understand it. I really do recommend that you play around with the code yourself and get to know it. Change the scheduling algorithm around, add more context to the threads you create, and use your imagination.

You will probably find that debugging problems in low-level code like this can be pretty hard, but that’s part of the learning process, and you can always revert to a working version.

Now that we have covered one of the largest and most difficult examples in this book, we’ll go on to learn about another popular way of handling concurrency by looking into how futures and async/await work in Rust. The rest of this book is in fact dedicated solely to learning about futures and async/await in Rust, and since we’ve gained so much fundamental knowledge at this point, it will be much easier for us to get a good and deep understanding of how they work. You’ve done a great job so far!
