
Closely related questions have been asked before:

But the answers to those questions still leave me unclear on some points.

  • The asker of the first question asked whether multi-threading would help performance, and the answerers mostly said it would not, because it is very unlikely that the GUI would be the bottleneck in a 2D application on modern hardware. But this seems to me a sneaky debating tactic. Sure, if you have carefully structured your application to do nothing other than UI calls on the UI thread, you won't have a bottleneck. But that might take a lot of work and make your code more complicated, and if you had a faster core or could make UI calls from multiple threads, maybe it wouldn't be worth doing.

  • A commonly advocated architectural design is to have view components that don't have callbacks and don't need to lock anything except maybe their descendants. Under such an architecture, can't you let any thread invoke methods on view objects, using per-object locks, without fear of deadlock?

  • I am less confident about the situation with UI controls, but as long as their callbacks are only invoked by the system, why should they cause any special deadlock issues? After all, if the callbacks need to do anything time-consuming, they will delegate to another thread, and then we're right back in the multiple-threads case.

  • How much of the benefit of a multi-threaded UI would you get if you could just block on the UI thread? Various emerging abstractions over async in effect let you do that.

  • Almost all of the discussion I have seen assumes that concurrency will be dealt with using manual locking, but there is a broad consensus that manual locking is a bad way to manage concurrency in most contexts. How does the discussion change when we take into consideration the concurrency primitives that the experts are advising us to use more, such as software transactional memory, or eschewing shared memory in favor of message passing (possibly with synchronization, as in Go)?

gmr

3 Answers


TL;DR

It is a simple way to force sequencing in an activity that is ultimately going to happen in sequence anyway (the screen draws X times per second, in order).

Discussion

Handling long-held resources which have a single identity within a system is typically done by representing them with a single thread, process, "object" or whatever else represents an atomic unit with regard to concurrency in a given language. Back in the non-preemptive, negligent-kernel, non-timeshared, One True Thread days this was managed manually by polling/cycling or writing your own scheduling system. In such a system you still had a 1::1 mapping between function/object/thingy and singular resources (or you went mad before 8th grade).

This is the same approach used with handling network sockets, or any other long-lived resource. The GUI itself is but one of many such resources a typical program manages, and typically long-lived resources are places where the ordering of events matters.

For example, in a chat program you would usually not write a single-threaded program. You would have a GUI thread, a network thread, and maybe some other thread that deals with logging resources or whatever. It is not uncommon for a typical system to be so fast that it's easier to just put the logging and input into the same thread that makes GUI updates, but this is not always the case. In all cases, though, each category of resources is most easily reasoned about by granting it a single thread, and that means one thread for the network, one thread for the GUI, and however many other threads are necessary for long-lived operations or resources to be managed without blocking the others.

To make life easier it's common not to share data directly among these threads as much as possible. Queues are much easier to reason about than resource locks and can guarantee sequencing. Most GUI libraries either queue events to be handled (so they can be evaluated in order) or commit data changes required by events immediately, but take a lock on the state of the GUI prior to each pass of the repaint loop. It doesn't matter what happened before; the only thing that matters when painting the screen is the state of the world right then. This is slightly different from the typical network case, where all the data needs to be sent in order and forgetting about some of it is not an option.
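To make the queue idea concrete, here is a minimal Swing sketch of my own (not from the original answer, and not the only way to do it): a worker thread does the slow work off the GUI thread and hands the result back by enqueueing a Runnable on the event dispatch thread via SwingUtilities.invokeLater, so the label is only ever touched in order, by one thread.

```java
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class QueuedUpdateDemo {
    public static void main(String[] args) {
        // All GUI construction and mutation happens on the Event Dispatch Thread.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Queued updates");
            JLabel label = new JLabel("working...");
            frame.add(label);
            frame.setSize(300, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            // A worker thread does the slow part off the GUI thread...
            new Thread(() -> {
                String result = slowComputation();
                // ...and hands the result back by *enqueueing* a Runnable on the
                // EDT's event queue, rather than touching the label directly.
                SwingUtilities.invokeLater(() -> label.setText(result));
            }).start();
        });
    }

    private static String slowComputation() {
        try {
            Thread.sleep(2000); // stand-in for network or disk work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}
```

Qt's queued signal/slot connections and wx's event posting follow the same shape: any thread may enqueue, but only the GUI thread dequeues.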

So GUI frameworks are not multi-threaded per se; it is the GUI loop that needs to be a single thread to sanely manage that single long-held resource. Programming examples, typically being trivial by nature, are often single-threaded with all the program logic running in the same process/thread as the GUI loop, but this is not typical of more complex programs.
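Stripped of any real toolkit, the loop itself is tiny. The sketch below is a toy illustration of my own, not any framework's actual code: many threads may post events, but exactly one thread takes them off the queue and runs them, which is all the "single-threaded GUI" rule really amounts to.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy event loop: any thread may *post* events, but exactly one thread
// dequeues and handles them, so the shared resource is only ever touched in order.
public class ToyEventLoop {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public void post(Runnable event) {     // safe to call from any thread
        queue.add(event);
    }

    public void run() throws InterruptedException {
        while (true) {                      // loops forever; a real toolkit adds a quit event
            Runnable event = queue.take();  // blocks until an event arrives
            event.run();                    // handled serially, on this thread only
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ToyEventLoop loop = new ToyEventLoop();
        new Thread(() -> loop.post(() -> System.out.println("hello from another thread"))).start();
        loop.run(); // the "GUI thread": the single owner of the long-held resource
    }
}
```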

To sum up

Because scheduling is hard, shared-data management is even harder, and a single resource can only be accessed serially anyway, using a single thread to represent each long-held resource and each long-running procedure is a typical way to structure code. GUIs are only one resource among several that a typical program will manage. So "GUI programs" are by no means single-threaded, but GUI libraries typically are.

In trivial programs there is no realized penalty to putting other program logic in the GUI thread, but this approach falls apart when significant loads are experienced or when resource management requires a lot of blocking or polling, which is why you will often see event queues, signal/slot message abstractions, or other approaches to multi-threading/processing mentioned in the dusty corners of nearly any GUI library (and here I'm including game libraries -- while game libs typically expect that you want to essentially build your own widgets around your own UI concept, the basic principles are very similar, just a bit lower-level).

[As an aside, I've been doing a lot of Qt/C++ and Wx/Erlang lately. The Qt docs do a good job of explaining approaches to multi-threading, the role of the GUI loop, and where Qt's signal/slot approach fits into the abstraction (so you don't have to think about concurrency/locking/sequencing/scheduling/etc very much). Erlang is inherently concurrent, but wx itself is typically started as a single OS process that manages a GUI update loop and Erlang posts update events to it as messages, and GUI events are sent to the Erlang side as messages -- thus permitting normal Erlang concurrent coding, but providing a single point of GUI event sequencing so that wx can do its GUI update looping thing.]

zxq9
  • Thanks, I will check out these links tomorrow. Just to clarify, I'm certainly aware that GUI programs are often heavily multithreaded. That is part of why the single UI thread can feel restrictive for programmers! My understanding was/is that the main reason for the restriction is to avoid deadlocks (or other concurrency bugs). Maybe message-based async APIs are the right approach (but why have they not been more popular?). – gmr Aug 05 '15 at 03:12
  • @gmr In my experience the combination of message passing and *not sharing state* has been a big win because it allows one to isolate scheduling from state issues. One reason message passing isn't very popular is, I think, that the vocabulary of OOP conflates "messages" with "calls". They are not the same thing. A message between concurrent processes/objects is better represented by a queue running in its own thread. In any case, many GUI applications do run in a single thread simply because there is no/low execution penalty, and it's much easier to reason about in Algol-style languages. – zxq9 Aug 05 '15 at 03:26
  • I was thinking about this today. The problem with message passing (async) is that you never know what the actual state is, as it's a combination of the state stored in components + the pending state changes in their message pumps. If you aren't sharing state you're duplicating it of course. I like the conceptual simplicity of the pattern though. – Robinson Nov 18 '16 at 18:06

Because the GUI main-thread code is old. Very old, and therefore very much designed for low resource usage. If someone were to write everything from scratch again (and even Android, the most recent GUI OS, didn't), it would work well and handle multithreading better.

For example, the two best improvements that would help with multithreading are:

  1. We now have the MVVM (Model-View-ViewModel) pattern, which introduces an extra duplication of data. When the toolkits were developed, even a single duplication in MVC was highly debated. MVVM makes multithreading much easier; IMHO this, not data binding, was the main reason Microsoft invented it for .NET in the first place. (A sketch of this duplicate-instead-of-share idea follows the list.)

  2. The scene-graph approach. Android, iOS, Windows UWP (based on CoreWindow rather than hWnd, until Windows 11's Project Reunion), and GTK 4 decouple the GPU part from the model. Yes, it is in fact MVVMGM now (Model-View-ViewModel-GPUModel), so another memory-intensive layer. If you duplicate stuff, you need less synchronisation. Jetpack Compose on Android and SwiftUI on macOS/iOS now use immutability of GUI widgets to further improve this View->GPUModel step.
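As a rough illustration of the duplication point in item 1 (my own sketch, not taken from any particular framework; it assumes Java 16+ records): a background thread publishes a fresh, immutable view-model snapshot, and the render pass simply reads whatever snapshot is current, so no locks are needed while painting.

```java
import java.util.concurrent.atomic.AtomicReference;

// Immutable "view model" snapshot (Java 16+ record): once published it is never
// mutated, so a render pass can read it without taking any locks.
record ChatViewModel(int unreadCount, String lastMessage) {}

public class SnapshotDemo {
    // Writers replace the whole snapshot; readers just grab the current one.
    private static final AtomicReference<ChatViewModel> current =
            new AtomicReference<>(new ChatViewModel(0, ""));

    public static void main(String[] args) throws InterruptedException {
        // Background thread: builds fresh snapshots (the "duplicate, don't share" step).
        Thread network = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                current.set(new ChatViewModel(i, "message " + i));
            }
        });
        network.start();
        network.join();

        // "Render pass": reads whatever the world looks like right now.
        ChatViewModel snapshot = current.get();
        System.out.println(snapshot.unreadCount() + " unread, last: " + snapshot.lastMessage());
    }
}
```

Roughly speaking, this is what the extra GPU-model layer buys: the GUI side renders from its own copy while other threads prepare the next one.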

Especially with the GPU model / scene graph, the statement that GUIs are single-threaded is not true anymore.

Lothar

Two reasons, as far as I can tell:

  1. It is much easier to reason about single-threaded code; thus the event-loop model reduces the likelihood of bugs.

  2. 2D user interfaces are not CPU-intensive. An old computer with a wimpy graphics card can smoothly render all the windows, frames, widgets, etc. you could possibly desire without skipping a beat.

Basically, if single-threaded code is easier and tends to have fewer bugs, favor that over multithreaded code unless you have a compelling need for parallelization or speed. Your typical GUI frameworks don't have this need.

Now, of course, we've all experienced lagginess and freezes in GUI applications before. I'd argue that the vast majority of the time this is the fault of the developer: running long synchronous code in an event handler that should have delegated the work asynchronously (a mechanism all the major UI frameworks provide).
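As a hedged sketch of that point (my own example, using Swing's SwingWorker): the button handler below stays responsive because the slow work runs on a background worker, and only the final label update happens back on the GUI thread. Doing the Thread.sleep directly in the handler would freeze the whole UI for five seconds.

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;

public class ResponsiveHandlerDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Don't block the GUI thread");
            JLabel status = new JLabel("idle");
            JButton button = new JButton("Fetch");

            button.addActionListener(e -> {
                status.setText("loading...");
                // Wrong: Thread.sleep(5000) here would freeze every window of the app.
                // Right: hand the slow part to a background worker.
                new SwingWorker<String, Void>() {
                    @Override protected String doInBackground() throws Exception {
                        Thread.sleep(5000);        // stand-in for a slow request
                        return "fetched 42 items";
                    }
                    @Override protected void done() {
                        try {
                            status.setText(get()); // done() runs back on the GUI thread
                        } catch (Exception ex) {
                            status.setText("failed: " + ex.getMessage());
                        }
                    }
                }.execute();
            });

            frame.add(button, java.awt.BorderLayout.NORTH);
            frame.add(status, java.awt.BorderLayout.SOUTH);
            frame.setSize(300, 120);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```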