MPSC Channel API for painless usage of threads with GTK in Rust

A very common question that comes up on IRC or elsewhere by people trying to use the gtk-rs GTK bindings in Rust is how to modify UI state, or more specifically GTK widgets, from another thread.

Due to GTK only allowing access to its UI state from the main thread and Rust actually enforcing this, unlike other languages, this is less trivial than one might expect. To make this as painless as possible, while also encouraging a more robust threading architecture based on message-passing instead of shared state, I’ve added some new API to the glib-rs bindings: An MPSC (multi-producer/single-consumer) channel very similar to (and based on) the one in the standard library but integrated with the GLib/GTK main loop.

While I’ll mostly write about this in the context of GTK here, this can also be useful in other cases when working with a GLib main loop/context from Rust to have a more structured means of communication between different threads than shared mutable state.

This will be part of the next release, and you can find some example code making use of it at the very end. But first I’ll take this opportunity to explain why this is not so trivial in Rust, and to walk through another possible solution.

Table of Contents

  1. The Problem
  2. One Solution: Safely working around the type system
  3. A better solution: Message passing via channels

The Problem

Let’s consider the example of an application that has to perform a complicated operation and would like to do this from another thread (as it should to not block the UI!) and in the end report back the result to the user. For demonstration purposes let’s take a thread that simply sleeps for a while and then wants to update a label in the UI with a new value.

Naively, we might start with code like the following.
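Sketched out, the naive attempt might look as follows, assuming a gtk-rs application with a window containing a single label (the widget setup shown here is illustrative, not taken from the original post):

```rust
use std::{thread, time};

fn main() {
    gtk::init().expect("Failed to initialize GTK");

    let window = gtk::Window::new(gtk::WindowType::Toplevel);
    let label = gtk::Label::new("not finished");
    window.add(&label);
    window.show_all();

    // ERROR: this does not compile. `gtk::Label` is neither `Send`
    // nor `Sync`, so it cannot be moved into another thread.
    thread::spawn(move || {
        thread::sleep(time::Duration::from_secs(10));
        label.set_text("finished");
    });

    gtk::main();
}
```

The `thread::spawn()` call is exactly where the compiler stops us.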

This does not compile and the compiler tells us (between a wall of text containing all the details) that the label simply can’t be sent safely between threads. Which is absolutely correct.

In, e.g., C this would not be a problem at all: the compiler does not know that GTK widgets, and generally all GTK API, are only safely usable from the main thread, and would happily compile the above. It would then be our (the programmer’s) job to ensure that nothing is ever done with the widget from the other thread, other than passing it around. Among other things, it must also not be destroyed from that other thread (i.e. that thread must never hold the last reference to it and then drop it).

One Solution: Safely working around the type system

So why don’t we do the same as we would do in C and simply pass around raw pointers to the label and do all the memory management ourselves? Well, that would defeat one of the purposes of using Rust and would require quite some unsafe code.

We can do better than that and work around Rust’s type system with regards to thread-safety, letting the relevant checks (are we only ever using the label from the main thread?) happen at runtime instead. This allows for completely safe code; it might just panic at any time if we accidentally do anything with the label from the wrong thread (like calling a function on it, or dropping it) instead of only passing it around.

The fragile crate provides a type called Fragile for exactly this purpose. It’s a wrapper type like Box, RefCell, Rc, etc. but it allows for any contained type to be safely sent between threads and on access does runtime checks if this is done correctly. In our example this would look like this
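A sketch of the same code with the label wrapped in Fragile (assuming the fragile crate; the widget setup is the same illustrative one as before):

```rust
use fragile::Fragile;
use std::{thread, time};

fn main() {
    gtk::init().expect("Failed to initialize GTK");

    let window = gtk::Window::new(gtk::WindowType::Toplevel);
    let label = gtk::Label::new("not finished");
    window.add(&label);
    window.show_all();

    // Wrapping the label makes it `Send`: correct usage is now
    // checked at runtime instead of at compile time.
    let label = Fragile::new(label);
    thread::spawn(move || {
        thread::sleep(time::Duration::from_secs(10));
        // PANICS at runtime: we access the label from the wrong thread.
        label.get().set_text("finished");
    });

    gtk::main();
}
```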

Not many changes to the code, and it compiles… but at runtime we of course get a panic because we’re accessing the label from the wrong thread.

What we instead need to do here is to somehow defer the change of the label to the main thread, and GLib provides various API for doing exactly that, e.g. glib::MainContext::invoke() and glib::idle_add(). We’ll make use of the former here, but it’s mostly a matter of taste (and trait bounds: the former takes a FnOnce closure while the latter can be called multiple times and because of that only takes a FnMut closure).
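A sketch of how the worker thread could defer the widget access back to the main thread via glib::MainContext::invoke(), still using Fragile to move the label (the setup is the same illustrative one as before):

```rust
use fragile::Fragile;
use std::{thread, time};

fn main() {
    gtk::init().expect("Failed to initialize GTK");

    let window = gtk::Window::new(gtk::WindowType::Toplevel);
    let label = gtk::Label::new("not finished");
    window.add(&label);
    window.show_all();

    let label = Fragile::new(label);
    thread::spawn(move || {
        thread::sleep(time::Duration::from_secs(10));
        // Schedule the actual widget access on the default main
        // context, i.e. the one the main thread's loop is running.
        glib::MainContext::default().invoke(move || {
            // This closure runs on the main thread, so the
            // runtime check inside Fragile passes.
            label.get().set_text("finished");
        });
    });

    gtk::main();
}
```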

So far so good, this compiles and actually works too. But it feels kind of fragile, and that’s not only because of the name of the crate we use here. The label passed around in different threads is like a landmine only waiting to explode when we use it in the wrong way.

It’s also not very nice because now we conceptually share mutable state between different threads, which is the underlying cause of many thread-safety issues and generally increases the complexity of the software considerably.

Let’s try to do better; Rust is all about fearless concurrency, after all.

A better solution: Message passing via channels

As the title of this post probably made clear, the better solution is to use channels to do message passing. That’s also a pattern generally preferred in many other languages with a strong focus on concurrency, ranging from Erlang to Go, and is also the recommended approach according to the Rust Book.

So what would this look like? First of all, we have to create a channel for communicating with our main thread.

As the main thread is running a GLib main loop with its corresponding main context (the loop is the thing that actually is… a loop, and the context is what keeps track of all potential event sources the loop has to handle), we can’t make use of the standard library’s MPSC channel: its Receiver either blocks the thread, or we would have to poll it at intervals, which is rather inefficient.

The futures MPSC channel doesn’t have this problem but requires a futures executor to run on the thread where we want to handle the messages. While the GLib main context also implements a futures executor and we could actually use it, this would pull in the futures crate and all its dependencies and might seem like too much if we only ever use it for message passing anyway. Otherwise, if you use futures also for other parts of your code, go ahead and use the futures MPSC channel instead. It basically works the same as what follows.

For creating a GLib main context channel, two functions are available: glib::MainContext::channel() and glib::MainContext::sync_channel(). The latter takes a bound for the channel, and once that bound is reached, sending on the Sender will block until there is space in the channel again. Both return a tuple containing the Sender and Receiver for the channel, and the Sender works exactly like the one from the standard library: it can be cloned, sent to different threads (as long as the message type of the channel can be) and provides basically the same API.
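As a sketch, creating the two channel variants could look like this (`Message` stands for whatever message type we define for our application, and `5` is an arbitrary bound chosen for illustration):

```rust
// Unbounded channel: messages are queued until the receiver
// (once attached) processes them on the main context.
let (sender, receiver) =
    glib::MainContext::channel::<Message>(glib::PRIORITY_DEFAULT);

// Bounded variant: with at most 5 queued messages, further
// send() calls block until there is space in the channel again.
let (sync_sender, sync_receiver) =
    glib::MainContext::sync_channel::<Message>(glib::PRIORITY_DEFAULT, 5);
```

The priority passed here is used for dispatching the receiver’s callback once it is attached to a main context.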

The Receiver works a bit differently, closer to the for_each() combinator on the futures Receiver. It provides an attach() function that attaches it to a specific main context, taking a closure that is called from that context whenever an item is available.
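On the main thread, attaching the receiver could look like this sketch (assuming `receiver` and `label` from the setup above; passing None attaches it to the thread-default main context of the current thread):

```rust
receiver.attach(None, move |text: String| {
    // Runs on the main context whenever a message arrives,
    // so accessing the label here is safe.
    label.set_text(&text);
    // Continue(true) keeps the receiver attached;
    // Continue(false) would detach it and close the channel.
    glib::Continue(true)
});
```

Note that attach() consumes the Receiver, so it can only ever be attached to one main context.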

The other part we need to define on our side is what the messages we send through the channel should look like. Usually some kind of enum with all the different kinds of messages you want to handle is a good choice; in our case it could also simply be () as we only have a single kind of message without a payload. But to make it more interesting, let’s add the new string for the label as payload to our messages.

For example, it could look like this.
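Putting it all together, a complete sketch might look as follows (the `Message` enum and widget setup are our own illustrative choices, not taken verbatim from the original post):

```rust
use std::{thread, time};

// Our message type: for now only a single kind of message,
// carrying the new text for the label as payload.
enum Message {
    UpdateLabel(String),
}

fn main() {
    gtk::init().expect("Failed to initialize GTK");

    let window = gtk::Window::new(gtk::WindowType::Toplevel);
    let label = gtk::Label::new("not finished");
    window.add(&label);
    window.show_all();

    // Create the channel on the main thread...
    let (sender, receiver) =
        glib::MainContext::channel(glib::PRIORITY_DEFAULT);

    // ...hand only the Sender to the worker thread...
    thread::spawn(move || {
        thread::sleep(time::Duration::from_secs(10));
        let _ = sender.send(Message::UpdateLabel(String::from("finished")));
    });

    // ...and handle incoming messages on the main context.
    // The label itself never leaves the main thread.
    receiver.attach(None, move |msg| {
        match msg {
            Message::UpdateLabel(text) => label.set_text(&text),
        }
        glib::Continue(true)
    });

    gtk::main();
}
```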

While this is a bit more code than the previous solution, it will also be easier to maintain and generally allows for clearer code.

We keep all our GTK widgets inside the main thread now, threads only get access to a sender over which they can send messages to the main thread and the main thread handles these messages in whatever way it wants. There is no shared mutable state between the different threads here anymore, apart from the channel itself.

11 thoughts on “MPSC Channel API for painless usage of threads with GTK in Rust”

  1. I am certainly not a GTK expert (it’s been a long time since I even dabbled with it), and also not a Rust expert.

    But how scalable (code wise, not performance wise) is this solution?
    In a real world program I imagine there might be 10s to 100s of these background operations all with their specific extra data they need.

    So at some point you will end up having an enum with 100s of silly and maybe confusing names.
    I imagine one mitigation might be that each widget might create its own sender/receiver pair, so you keep the enums separate (e.g. if you have a class responsible for loading files, and one for saving files, you have a LoadingFileMessage and a SavingFileMessage, each with only maybe 2 or 3 operations/enum values).

    1. You’d only really use this for passing things from worker/background threads to update your UI state. There are not going to be too many different cases for this in your application usually, at least not in the same place, or otherwise you should probably consider re-designing it in a way to split up things a bit more.

      For async IO, look at the futures based async functions from GIO. Those also allow you to handle everything in a nice way from the main thread.

  2. Pingback: Pop!_Planet
  3. Shouldn’t one attach to the receiver queue *before* spawning the thread?
    In your example, where the worker thread actually sleeps for 10 seconds, this is certainly not a problem, but since people finding this blog post will have different use-cases, it might be better to give them a fool-proof example.

    1. Why do you think it generally matters to attach the receiver before the worker thread is started? If the worker thread is sending values to the channel before the receiver is attached, the channel will simply buffer them (or block if it’s full).

      1. Oh, right! Of course you are right, I didn’t realize that what matters is that the channel exists before the thread is started 🙂

  4. I have an issue: I cannot fork my thread. In Main: I set up the app, “connect_activate” to a closure and “connect_shutdown” to a closure. I need 2 channels: GTK -> backend, backend -> GTK. The channel for backend -> GTK I call the GTK-channel (for the other channel I can use the much more flexible crossbeam channel). The “receiver” of the GTK channel must be known in the “connect_activate” closure: I want to access ui elements when a signal is received. Unfortunately “attach” consumes the receiver. Thus I cannot move “receiver” into the closure (such could not be consumed) but I have to set it up inside the closure.

    Well, the sender of the GTK-channel must go into my thread. Since the receiver is set up inside the “connect_activate” closure, I will have to set up the background thread inside that closure as well.

    How do I get my JoinHandle into the “connect_shutdown” closure if I set up the thread in the “connect_activate” closure?

      1. Even in the given example that’s quite questionable: what if I shutdown the app long before the loop is done? The thread stays active.

        It will be stopped immediately once the main function returns. In this example that’s not a problem, there’s no work the threads have to finish before they’re stopped.

        You could use a channel or, for example, an Rc<RefCell<Vec<JoinHandle>>> that is shared between the main function (or shutdown signal handler) and the activate signal handler. The latter would fill in the JoinHandles, and the former would join on all of them.

    1. for the other channel I can use the much more flexible crossbeam channel

      You can also use the channels from the futures crate or any other futures-aware channel implementation on the GLib/GTK main loop. It is also a Rust futures executor.

      Unfortunately “attach” consumes the receiver. Thus I cannot move “receiver” into the closure (such could not be consumed) but I have to set it up inside the closure.

      You can store the receiver in a RefCell<Option<Receiver>> that you pass to the closure and then foo.borrow_mut().take() to get it out a single time.

      How do I get my JoinHandle into the “connect_shutdown” closure if I setup the thread in the “connect_activate” closure?

      You don’t have to spawn the thread there (see above), but if you decide to do so you can also use an Rc<RefCell<Option<JoinHandle>>> “receiver” / one-item channel that you create outside the two closures, pass to both, fill in one and empty in the other. Or you can use any of the oneshot (or other) channels to achieve the same.
