
https://en.wikipedia.org/wiki/Context_switch

In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state.[1] This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multitasking operating system.

The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.[2]: 28

And the second question:

If I understand correctly, on a single-core processor ONLY ONE thread can be executed AT A TIME (that's why context switching is INEVITABLE), so the parallelism is only virtual.

So, is it completely SAFE not to use locks (like mutex, etc) to access shared resources (variables) on single-core processors (there are almost no such processors nowadays but take it as a "theoretical" question)? Thanks

  • Please don't ask two independent questions in the same post. (Also, I don't think you need to use bold and caps quite as much as you do.) – Nate Eldredge Nov 04 '22 at 14:27
  • For the title, of course context switching can happen on a multi-core processor. How else could a machine with 4 cores run a program with 6 threads? – Nate Eldredge Nov 04 '22 at 14:27
  • hi @Nate Eldredge excellent +1 –  Nov 04 '22 at 14:40

2 Answers


is it completely SAFE not to use locks (like mutex, etc) to access shared resources (variables) on single-core processors?

Probably not. It can be safe if the code is running under the regime of cooperative multitasking, and if the programmer takes care to ensure that no thread executes any yield point while it has shared variables in some invalid state. But most operating systems these days use preemptive multitasking, in which the OS can take the CPU away from one thread and give it to another at any time, and with no warning.

When writing multi-threaded code for a single-CPU system (see below for more about that), one need not worry as much about the system's memory model as one must when programming for an SMP or NUMA architecture, but one still must take care to prevent the threads from interfering with each other.
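
As a rough illustration of that last point, here is a minimal sketch (not part of the original answer; the account variables and thread functions are made up for illustration) of two threads interfering with each other over shared state whose invariant spans more than one variable. If the scheduler preempts the transfer thread between the two writes, the auditor can observe the broken invariant, even on a single core; holding the mutex across both writes prevents that.

// Build with: g++ -std=c++17 -pthread invariant.cpp
#include <cstdio>
#include <mutex>
#include <thread>

static int account_a = 100;        // shared state; the invariant is account_a + account_b == 100
static int account_b = 0;
static std::mutex accounts_mutex;  // guards both accounts

void transfer_back_and_forth() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(accounts_mutex); // remove this line and the audit can fail
        account_a -= 10;           // invariant temporarily broken here...
        account_b += 10;           // ...and restored here
        account_a += 10;           // move the money back so the loop can repeat
        account_b -= 10;
    }
}

void audit() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(accounts_mutex);
        if (account_a + account_b != 100)
            std::puts("invariant violated");
    }
}

int main() {
    std::thread t1(transfer_back_and_forth);
    std::thread t2(audit);
    t1.join();
    t2.join();
    std::puts("audit finished");
}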

(there are almost no such processors nowadays...)

Ha! Try telling that to an embedded software developer (e.g., myself). There are single-CPU computers embedded in all manner of different things these days. Your microwave oven, your thermostat, a CPAP machine, a Bluetooth headset... Your car might contain dozens of them. So might a mobile robot or a complex, automated factory assembly line.

Solomon Slow
  • hi @Solomon Slow thanks, I have seen the terms time-slice/time slicing in context switching and Preemption (computing), what do they mean? –  Nov 10 '22 at 03:39
  • @GeorgeMeijer, "preemption" means that the operating system's scheduler can take the CPU away from a thread at _any_ time—literally, between any one instruction and the next. "time slice" means that each time the scheduler "restores" some thread to the/a CPU, it grants the thread a certain number of milliseconds to run before it will preempt the thread again. The length of the time slice usually will be the same for all threads that run at the same priority, but may be longer for lower priority threads or shorter for higher priority threads.... – Solomon Slow Nov 10 '22 at 04:09
  • A lower priority thread also can be preempted before its time-slice has been completed if a higher priority thread suddenly becomes runnable (e.g., because an I/O operation completed, because a mutex was released, etc.) – Solomon Slow Nov 10 '22 at 04:10
  • excellent @Solomon Slow "preempted", in this sense, means basically "interrupted", right? –  Nov 10 '22 at 04:25
  • @GeorgeMeijer, Yes. I'm pretty sure that the only way that the OS can preempt a running thread is by means of a hardware interrupt. It could be a timer interrupt that signals the end of a time slice, or an I/O completion interrupt that results in some higher-priority thread becoming runnable, or maybe* an interrupt requested by a different CPU on which a thread did something (e.g., released a mutex) that causes a higher-priority thread to become runnable. [* I'm less sure about that last case; I never studied the details of how operating systems typically work on SMP hardware.] – Solomon Slow Nov 10 '22 at 14:58

Yes, context switches occur on multicore processors, for the same reasons as on single core ones.

No, of course it's not always safe to have multiple threads access shared resources without locks. Doesn't matter how many cores you have. (Only maybe if you use very, very restricted definitions of what "safe" and "shared resource" mean.)

If you have two threads running code like the following with the same shared variable:

read variable
mutate value
write result back to variable

Then if a context switch happens in the middle of this sequence, and you have no mutex lock on the variable, you'll get inconsistent results. "Inconsistent" could easily include behavior that would cause memory leaks or crash the program: imagine if the variable is part of a data structure like a linked list or tree. Nothing about this needs a separate core.
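
Here is a minimal concrete version of that sequence, as a hedged sketch (the shared `counter`, the thread function, and the iteration counts are illustrative, not from the answer). Each `++counter` is a read, a modify, and a write back; with the lock removed, a context switch between the read and the write can discard the other thread's update, so the final total can come out below 200000 on any number of cores, including one (strictly speaking, the unlocked version is a data race and therefore undefined behavior in C++).

// Build with: g++ -std=c++17 -pthread counter.cpp
#include <cstdio>
#include <mutex>
#include <thread>

static long counter = 0;           // the shared variable
static std::mutex counter_mutex;   // protects counter

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(counter_mutex); // comment out to see lost updates
        ++counter;                  // read variable, mutate value, write result back
    }
}

int main() {
    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();
    std::printf("counter = %ld (expected 200000)\n", counter);
}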

Dan Getz
  • hi @Dan Getz, thanks I think I see it better now. "Then if a context switch happens in the middle of this sequence, and you have no mutex lock on the variable, you'll get inconsistent results" - but, the mutex would ONLY help to PROTECT the shared resource, wouldn't it? or does it ALSO PREVENT CONTEXT CHANGES? –  Nov 04 '22 at 13:55
  • @GeorgeMeijer I haven't heard of ways to prevent context changes. Might exist, but I'm used to seeing code that assumes the switch might happen, and is written to ensure the code works the same regardless – Dan Getz Nov 04 '22 at 13:58
  • understand, @Dan Getz +1. "Then if a context switch happens in the middle of this sequence, and you have no mutex lock on the variable, you'll get inconsistent results" - the danger of another thread modifying that variable WILL STILL EXIST because in the context switch (in that other thread to be resumed) the variable may be modified, right? –  Nov 04 '22 at 14:09
  • @GeorgeMeijer: Generally the only way to *prevent* context switches is to disable interrupts, which is a privileged operation and can only be done in kernel or bare-metal programming. There are also ways to do similar things in real-time programming, which blurs the line between kernel and user space. But if an ordinary user-level program on a multiuser system could prevent context switches, it could prevent any other process from running, and thus lock up the entire system. – Nate Eldredge Nov 04 '22 at 14:23
  • @GeorgeMeijer: So no, locking a mutex doesn't prevent a context switch. It just ensures that if a context switch occurs and another thread wakes up, then if that thread should try to lock the same mutex, it will go back to sleep instead of proceeding into the critical section. Eventually the first thread will be switched back to, and complete its work. (A small sketch of this behavior follows after these comments.) – Nate Eldredge Nov 04 '22 at 14:26
  • hello @Nate Eldredge thank you, + 2 I understand. In the context switch, that other thread (the one that is about to start) could modify the same variable as the previous thread (the one that was paused) and there the operation would not be atomic, hence it would need the mutex to do the atomic operations no matter if they get interrupted, right? –  Nov 04 '22 at 14:39
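
To make the last few comments concrete, here is a small sketch (the thread names, sleep durations, and messages are invented for illustration) showing that a locked mutex does not stop the scheduler from switching threads; it only makes the other thread block when it tries to lock the same mutex. Thread A holds the lock across a sleep, so context switches certainly happen while it owns the mutex, yet B's "got the mutex" message can only appear after A has released it.

// Build with: g++ -std=c++17 -pthread mutex_block.cpp
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

static std::mutex m;

void thread_a() {
    std::lock_guard<std::mutex> guard(m);
    std::puts("A: acquired the mutex");
    // A sleeps while holding the lock; the OS switches to other threads here.
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    std::puts("A: releasing the mutex");
}   // guard goes out of scope, mutex released

void thread_b() {
    // Give A a head start so it grabs the lock first.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    std::puts("B: trying to lock (will sleep until A releases)");
    std::lock_guard<std::mutex> guard(m);
    std::puts("B: got the mutex, entering the critical section");
}

int main() {
    std::thread a(thread_a);
    std::thread b(thread_b);
    a.join();
    b.join();
}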