
I have a student question about how blocking works.

I recently learned how a system call such as read can block a process: the process is put in the Blocked state until data becomes available from whatever it is reading. Similarly, waitpid can block a process until a child process's state has changed.
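
For example, here is a minimal sketch of what I mean (the pipe is only there to give read something to block on, and the child exists just to eventually produce data and exit):

```c
/* block_demo.c -- the parent blocks first in read(), then in waitpid() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* child: wait a bit, then make data available, then exit */
        sleep(2);
        write(fds[1], "hi", 2);
        _exit(0);
    }

    char buf[2];
    /* Parent sits in the Blocked state here until the child's
       write() makes data available in the pipe. */
    read(fds[0], buf, sizeof buf);

    /* Blocked again until the child's state changes (it exits). */
    waitpid(pid, NULL, 0);
    puts("child wrote and exited");
    return 0;
}
```

During both calls the parent is not scheduled at all, which is what made me wonder how the kernel knows when to wake it up.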

More generally, Wikipedia says:

"A process transitions to a blocked state when it cannot carry on without an external change in state or event occurring." (https://en.wikipedia.org/wiki/Process_state)

How does the kernel listen for these state changes or events in a way that minimizes overhead? I am still learning, but surely it can't be using polling, since that would be too slow, right? Also, "events" seems to be an abstract term, so I can't assume they are all hardware events, for which hardware interrupts could be used.
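
From what I've pieced together so far, Linux seems to do this with wait queues: the blocking path puts the task to sleep on a queue, and whoever produces the data (often an interrupt handler) wakes the queue up. Below is my rough driver-side sketch of that pattern (my_read, my_irq_handler, my_wq and data_ready are just placeholder names, not real kernel symbols). Is this roughly the right picture?

```c
/* Rough sketch of the wait-queue pattern inside a Linux driver. */
#include <linux/wait.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/interrupt.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);  /* tasks sleeping for the event */
static int data_ready;                  /* the "event" condition */

/* Called when a process read()s the device. */
static ssize_t my_read(struct file *f, char __user *buf,
                       size_t len, loff_t *off)
{
    /* If data_ready is 0, the task is put on my_wq, marked as
       sleeping, and the scheduler simply stops running it.
       There is no polling loop burning CPU here. */
    if (wait_event_interruptible(my_wq, data_ready != 0))
        return -ERESTARTSYS;

    /* ...copy the data to buf here... */
    return 0;
}

/* Hardware side: the device raises an interrupt when data arrives. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    data_ready = 1;
    wake_up_interruptible(&my_wq);  /* make the sleepers runnable again */
    return IRQ_HANDLED;
}
```

If I'm reading it right, the "listening" is really done by the hardware interrupt, so the kernel itself never has to poll.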

  • "How does the kernel listen to these state changes or events" - All these unblocking events (e.g. data become available) are performed inside the kernel. Once an event is fired, all processes waited for it are awoken, one by one. – Tsyvarev Feb 08 '21 at 08:05
  • *"Also "events" seem to be abstract"* -- The *"event"* is whatever the process is waiting for. In your own example of *"a system call such as read can block a process"*, the *"event"* that can/will unblock that process is the *"data [when it] becomes available"*. The kernel is processing this *"data"*, so it can also treat it as an *"event"*. *"I recently learned ..."* -- Re-evaluate that if the Wikipedia article confuses you. – sawdust Feb 08 '21 at 08:13
  • @sawdust well I was just reading the Linux man pages for read and waitpid, which simply say they will block the process. I ended up digging deeper and landed on that wiki page, and then my next question was how the kernel actually does the orchestration. Obviously I'm bouncing around here and there and not learning efficiently. If there's a more structured way of learning this, I'm open to recommendations. – user3884723 Feb 08 '21 at 08:58
  • By definition, an *"event"* simply occurs, and then has to be recognized/detected (if from an external source) and handled. Hardware events (i.e. interrupts) are handled by drivers, i.e. *low-level* kernel code. Process scheduling is its own distinct subsystem. The kernel has many layers and subsystems, so beware of conflation. The kernel uses distinct internal interfaces to convey information between layers and subsystems. Perhaps a book on operating system concepts should be studied. Otherwise, recommendations for study are off-topic for this site. – sawdust Feb 09 '21 at 00:10

0 Answers