
So what I would like to know is: when the thread scheduler performs a context switch, how is the CPU interrupted so that the task of switching over to a different process gets accomplished? Does the CPU get paused or go into sleep mode while the new context is loaded onto the CPU?

  • The CPU has built-in circuitry for interrupting the current program. The scheduler is itself a program and typically sets up periodic interrupts (say, every 1/100th of a second), which then call the scheduler. – PiRocks Oct 30 '20 at 03:42

1 Answer


It actually is "interrupted," in the true sense of the word.

The CPU never "pauses" during a context switch. It's actually quite busy doing the switch. The context switch starts when a timer interrupt is triggered. Virtually every CPU in existence has a configurable timer that triggers an interrupt when it goes off.

When an interrupt is triggered on a CPU for any reason, the result is that the CPU looks up a specific memory address, which is the "interrupt vector" for that interrupt. This is a table of addresses, one for each interrupt that could occur (there is a small number of them, so this table is not huge). It simply says that when the interrupt occurs, the next instruction is whatever that address is. The CPU stops running the interrupted code at that point.

This address is the address of an interrupt handler or interrupt service routine (ISR), depending on who you talk to. This is a very specialized function which obeys some very strict rules in order to be able to function right on top of any arbitrary stack. In the case of this handler, it calls the scheduler, asking it to do a context switch.

The scheduler is also designed very carefully to allow one to save off the "context," which includes things like the IP (instruction pointer), stack configuration, registers, and virtual memory layout. It then chooses a thread to run next, loads up its information, and finally sets the IP to where it left off the last time that thread was suspended.

This process keeps the CPU very busy. It is anything but idle. In particular, it has to flush many architecture-specific caches, which accounts for much of the latency one sees when switching contexts.

Cort Ammon
  • Re, "the next instruction is whatever that address is" Might be worth mentioning that the interrupt works like a "call" instruction, rather than a "jmp." The hardware saves enough of the interrupted thread's context so that the ISR can "return" to it if and when it is appropriate to do so. The ISR may choose to save more of the context if needed, and if it's the scheduler deciding which thread to run next, it certainly will save _all_ of the interrupted context. – Solomon Slow Oct 30 '20 at 21:39
  • 1
    Re, "...IP addresses..." That could be confusing to a newbie who is familiar with the Internet Protocol, but who is not accustomed to thinking about CPU registers such as the Instruction Pointer. – Solomon Slow Oct 30 '20 at 21:40
  • @Solomon thanks for the response, but I have a couple of questions to ask. In your first reply you say that the ISR does the saving of the running context, but I was under the impression that it's the thread scheduler, not the ISR, that does the saving. Which is the correct implementation? Secondly, how does a running thread get preempted before it has completed a time slice? – Vikki Mehra Oct 31 '20 at 03:17
  • 1
    @VikkiMehra For those details, Solomon's description is more precise than mine was. When the interrupt is triggered, the hardware basically "calls" the ISR, pushing the instruction pointer onto the stack just like it would if you called a function. If a context switch is required, additional code (part of the scheduler) will take the time to save off the rest of the data properly. – Cort Ammon Oct 31 '20 at 04:08
  • As for how a running thread gets preempted before its time slice ends, the general answer is that it doesn't. That's what time slices are for. A thread typically only gives up its time slice if it calls an OS function, like acquiring a mutex or reading a file, which cannot proceed until another thread or piece of hardware completes its task. *Usually* the only time a thread is truly preempted is at the end of its time slice. However, there's nothing that prohibits an OS from preempting it earlier, other than that it would need to trigger an interrupt in some way, shape, or form. – Cort Ammon Oct 31 '20 at 04:11
  • 1
    As an example, a real-time operating system may be monitoring some interrupt, and processing that data on a real-time thread (meaning it has hard real-time latency requirements). It may be running some background threads when the interrupt signaling that the data is available occurs. That real-time OS may choose to immediately end the background thread's time slice and start doing real-time processing. It's still an interrupt that caused the context switch; it's just not the interrupt associated with the time slicing. – Cort Ammon Oct 31 '20 at 04:13
  • And if it helps, interrupt triggering is typically done at the micro-architecture level. When an interrupt is triggered, it tells the CPU, in no uncertain terms, that the next instruction *will* be servicing an interrupt, and the CPU acts on this before executing the next fetched instruction. – Cort Ammon Oct 31 '20 at 04:14
  • One thing that might help is to split the problem into two parts. On most processors, *anything* related to preemption is done using interrupts as the hardware mechanism to do the preemption. The second half is the idea of threads and context switches, which is an OS-level concept. Your question, regarding the scheduler interrupting the CPU, is what you get when you layer these on top of each other, using interrupts to drive the preemptive context switching. But you may have better luck with it by thinking about the two parts separately, then combining them later. – Cort Ammon Oct 31 '20 at 04:45
  • @CortAmmon thank you. Can you recommend where I can read more about context switching ? Would a resource such as the book Windows Internals 5th or 6th or 7th edition help ? – Vikki Mehra Nov 01 '20 at 03:08
  • I think Windows Internals may have a section on Windows thread scheduling. Personally, I learned from websites which showed the assembly code for various implementations like [this one](https://samwho.dev/blog/context-switching-on-x86/). I also recommend looking at a microkernel like L4. Monolithic OSs like Linux and Windows tend to have slower, more bloated context switch routines, and they just don't invoke them as often. Microkernels, like L4, have to *constantly* be context switching. They take an effort to streamline... – Cort Ammon Nov 01 '20 at 07:56
  • ... the process, which can help make the examples seem a bit more focused. – Cort Ammon Nov 01 '20 at 07:57
  • @CortAmmon thank you. I hope I can ask another question. You said in your first reply the following: "In the case of this handler, it calls the scheduler, asking it to do an interrupt." Can you elaborate on what you meant by this? What sort of interrupt does the scheduler do? – Vikki Mehra Nov 01 '20 at 15:45
  • @VikkiMehra Sorry, that was a typo. I should have said "... asking it to do a context switch." I've fixed that. – Cort Ammon Nov 01 '20 at 17:06
  • @CortAmmon many thanks. I do have one last question to ask. In the very first response you said the following “The context switch starts when a timer interrupt is triggered.” Does this mean that every time the timer interrupt is triggered a context switch takes place ? What if there are no ready threads in the queue with a higher priority ? – Vikki Mehra Nov 02 '20 at 15:16
  • That will get into the nitty-gritty implementation details, so you are not guaranteed to see the same behavior on all OSs. One can think of the situation with no other threads ready as a context switch from the thread to itself, saving off all of the settings and then restoring them. Or an OS may be smart enough to realize that there are no other threads ready, and simply not do a context switch at all. The former is simpler, just less efficient. – Cort Ammon Nov 02 '20 at 15:35
  • As a concrete example of how the implementation details differ, [the Linux scheduler](https://www.cs.montana.edu/~chandrima.sarkar/AdvancedOS/CSCI560_Proj_main/index.html) minimizes context switching by giving threads up to 0.4s before context switching in CPU-bound situations. This yields maximum performance, but is poor for interactive behaviors like mouse clicks. So what ends up happening is the timer is set up to go off every 20ms (a period known as a "quantum"), and the thread keeps a counter. Every 20 timer interrupts (0.4s), the scheduler will evict the current thread. However, if a mouse ... – Cort Ammon Nov 02 '20 at 15:42
  • ... click or a keyboard press occurs, it sets a flag which makes it so that, on the next 20ms timer, the CPU bound task is evicted, regardless of how many 20ms quantums it has left before it hit 0.4s. Then the scheduler context switches to the thread which was waiting for that mouse/keyboard event. – Cort Ammon Nov 02 '20 at 15:43
  • @CortAmmon thanks again but can you tell me for either Windows or Linux where does the thread scheduler save the context? – Vikki Mehra Nov 03 '20 at 17:20
  • They save it off into so-called "kernel memory," which is just normal memory which the kernel marked as not readable/writable/executable by user-space processes to protect it. The specifics of linux [are described here](https://www.star.bnl.gov/~liuzx/lki/lki-2.html), but the answer is basically "they dynamically allocate the memory." – Cort Ammon Nov 03 '20 at 17:42