
Which interrupt handler is responsible for context switching in a multitasking system?

Sorry for my English.

Nik Novák
  • What operating system are you talking about? It's up to the OS what it does on every timer interrupt. But as I understand it, "context switch" usually means running a different user-space thread on the CPU, not just switching from user to kernel (and back) for a system call or interrupt. – Peter Cordes Dec 10 '15 at 21:05
  • Thank you for the answer. I know what a context switch is, but let me try to explain what I mean. Assume we have a CPU running an OS. When the system-tick interrupt occurs, can its interrupt routine perform a context switch? Or is there one timer dedicated to context switches and a separate one for the system tick? Or am I completely confused? – Nik Novák Dec 10 '15 at 21:46

1 Answer


The OS can do a context switch from the timer IRQ handler if it wants to. This is generally what happens to CPU-hog processes when they've used up their timeslice. (The kernel returns to a different process / thread, instead of back to the context that was running when the IRQ fired.)

Maybe https://en.wikipedia.org/wiki/Preemption_(computing)#Time_slice can help?
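Here's a minimal, runnable userspace sketch of that idea (not real kernel code; the `task` struct, `TIMESLICE`, and `on_timer_tick` are all made-up names): each timer tick decrements the running task's remaining timeslice, and when it reaches zero the handler "switches" to the next task, just as a kernel's timer IRQ handler can invoke the scheduler.

```c
/* Userspace simulation of timeslice-driven preemption (hypothetical names). */
#include <stdio.h>

#define TIMESLICE 3   /* ticks a task may run before being preempted */
#define NTASKS    2

struct task { int id; int ticks_left; };

static struct task tasks[NTASKS] = { {0, TIMESLICE}, {1, TIMESLICE} };
static int current = 0;   /* index of the task "on the CPU" */

/* Stand-in for a timer IRQ handler: account one tick against the
 * current task; if its timeslice is used up, "context switch". */
static void on_timer_tick(void)
{
    if (--tasks[current].ticks_left == 0) {
        tasks[current].ticks_left = TIMESLICE;   /* refill for next time */
        current = (current + 1) % NTASKS;        /* round-robin switch */
        printf("tick: context switch to task %d\n", tasks[current].id);
    }
}

int main(void)
{
    for (int tick = 0; tick < 10; tick++)   /* simulate 10 timer IRQs */
        on_timer_tick();
    return 0;
}
```

In a real kernel the handler would also acknowledge the interrupt controller, and the "switch" would save and restore register state, but the decision point is the same: the timer IRQ is where the scheduler gets a chance to preempt.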

Peter Cordes
  • Thank you for the answer, but as I wrote above, I know what a context switch is and how it occurs. So please, can you answer my simple question: what is a system tick? – Nik Novák Dec 10 '15 at 22:21
  • @NikNovák: "system tick" isn't a normal English phrase (at least not that I'm familiar with). There are timer interrupts (hardware IRQ triggered by a timer), and there are timeslices that the OS's scheduler allows processes to run for. – Peter Cordes Dec 10 '15 at 22:58
  • Thank you for your patience, but I don't need an explanation of how a context switch works. Sorry for my English; I understand that you probably can't know what I meant if I wrote it wrong. OK, let's take a simple example that can answer my question indirectly and that I can describe correctly. How does it work at the hardware level when an application wants to sleep for 1 second? How do the hardware and the OS ensure that this application will be woken after about 1 second, while still serving context switches for other threads? – Nik Novák Dec 10 '15 at 23:56
  • Hardware isn't involved in a process sleeping for 1 sec. That's all software. Every time slice, the OS's scheduler decides what should be running *now*. Scheduler algorithms like Linux's keep some history data about each process, but they don't plan exactly what's going to happen in the future. A sleeping process just has a time-stamp for when it should wake. Every timer tick, the OS checks if any sleeping processes should wake up now (using a priority queue sorted on wake-time; see the first sketch after these comments), and adds them into the pool of threads that want CPU time. – Peter Cordes Dec 11 '15 at 00:02
  • If that doesn't answer your question, then I think the language barrier is a bigger problem than anything else. There's probably someone/somewhere you can ask about this in your native language, or at least someone who can help translate. It's hard for me to figure out what's a misunderstanding of how things work, and what's a correct understanding badly described in English. – Peter Cordes Dec 11 '15 at 00:05
  • Oh yes, I am really close to an answer. Again, thank you very much for your patience. My last question is probably about a term you used: the timer tick. So, with all this, are you saying that a system can have only one hardware timer, which produces only timer ticks, and that all other time scheduling (for applications, for example) is derived from those ticks? That would mean a user application cannot measure or schedule events with higher precision than the timer-tick precision? – Nik Novák Dec 11 '15 at 00:51
  • @NikNovák: No, that's not right either. I was simplifying things. For high-precision wake-ups, the OS can program another HW timer to interrupt it at a certain time (like x86's [HPET](http://wiki.osdev.org/HPET)). For high-precision `gettimeofday()` stuff, *querying* the current time can be done *much* more efficiently than taking timer interrupts that often. (For this, x86 has [a ~3GHz counter](http://stackoverflow.com/a/31831555/224132) that takes only ~25 clock cycles to query, and maybe another 5 or 10 to map to seconds/nanoseconds; see the second sketch below.) – Peter Cordes Dec 11 '15 at 01:13
  • Also note that each CPU (in a multi-core system) has its own local timer interrupts. In Linux, each CPU runs the scheduler function to decide what to run next, rather than having one CPU plan what the others should do. – Peter Cordes Dec 11 '15 at 01:15
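The sleep/wake-up bookkeeping described in the comments can be sketched as follows. This is a minimal, runnable userspace illustration under assumed names (`sleeper`, `sleep_until`, and `on_timer_tick` are all hypothetical; a real kernel would use a more efficient structure such as a timer wheel or red-black tree): sleepers are kept sorted by wake time, and each timer tick pops everything whose wake time has passed.

```c
/* Sorted sleep queue + per-tick wake-up check (hypothetical names). */
#include <stdio.h>

#define MAX_SLEEPERS 8

struct sleeper { int task_id; unsigned long wake_tick; };

static struct sleeper queue[MAX_SLEEPERS]; /* sorted ascending by wake_tick */
static int nsleepers = 0;

/* Insert a task into the sleep queue, keeping it sorted by wake time.
 * (No overflow check, for brevity.) */
static void sleep_until(int task_id, unsigned long wake_tick)
{
    int i = nsleepers++;
    while (i > 0 && queue[i - 1].wake_tick > wake_tick) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i] = (struct sleeper){ task_id, wake_tick };
}

/* Called on every timer tick: wake everything whose time has come. */
static void on_timer_tick(unsigned long now)
{
    while (nsleepers > 0 && queue[0].wake_tick <= now) {
        printf("tick %lu: waking task %d\n", now, queue[0].task_id);
        /* ...a real kernel would move the task to the run queue here... */
        for (int i = 1; i < nsleepers; i++)  /* pop the front; O(n) for brevity */
            queue[i - 1] = queue[i];
        nsleepers--;
    }
}

int main(void)
{
    sleep_until(1, 5);   /* e.g. "sleep for 5 ticks" */
    sleep_until(2, 3);
    for (unsigned long tick = 1; tick <= 6; tick++)
        on_timer_tick(tick);
    return 0;
}
```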
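And on the cheap-to-query counter mentioned in the later comments: on x86 with GCC or Clang, the time-stamp counter can be read with the `__rdtsc()` intrinsic. A tiny x86-only sketch (converting the raw count to seconds/nanoseconds needs calibration against the counter's frequency, which is omitted here):

```c
/* Read the x86 time-stamp counter twice and show the raw delta. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(), GCC/Clang on x86 */

int main(void)
{
    uint64_t t0 = __rdtsc();
    uint64_t t1 = __rdtsc();
    printf("back-to-back rdtsc delta: %llu reference cycles\n",
           (unsigned long long)(t1 - t0));
    return 0;
}
```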