How does the process/thread scheduler work on a typical system, with respect to fairness and granularity? Does the scheduler dispatch work to the processor by switching between processes or between threads? If the latter, can I improve the performance of my compute-bound jobs by spawning more threads in my process? The literature on this topic seems to use process and thread interchangeably, so for clarity I use the definition of a process as a collection of one or more threads of execution.
I presume that multiple threads do NOT improve compute-bound jobs on a single processor, which would imply that the scheduler's granularity is at the process level. For example, if there are N processes, each process gets 1/N-th of the processor, no matter how many threads it spawns (assuming the scheduler is fair, of course).
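To make the question concrete, here is a minimal sketch of the experiment I have in mind (the file name, the NTHREADS/RUN_SECONDS values, and the idea of pinning with `taskset` are my own assumptions, not from any reference): each thread spins in a busy loop and the program reports how many iterations the whole process completed in a fixed window. Running two copies pinned to the same CPU, one built with NTHREADS=1 and one with NTHREADS=4, should reveal whether the 4-thread process gets half the CPU (process-level fairness) or four fifths of it (thread-level fairness).

```c
/* busyshare.c -- sketch of a CPU-share test, assuming NTHREADS and
 * RUN_SECONDS as defined below. Compile: gcc -O0 -pthread busyshare.c -o busyshare
 * Run two copies on one CPU, e.g.: taskset -c 0 ./busyshare
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 4
#define RUN_SECONDS 5

static atomic_int stop = 0;                 /* set to 1 when the timing window ends */
static unsigned long long counts[NTHREADS]; /* per-thread iteration counts */

static void *busy(void *arg)
{
    int id = *(int *)arg;
    unsigned long long n = 0;
    while (!atomic_load(&stop))             /* pure compute-bound loop */
        n++;
    counts[id] = n;
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&tids[i], NULL, busy, &ids[i]);
    }

    sleep(RUN_SECONDS);                     /* let the threads compete for the CPU */
    atomic_store(&stop, 1);

    unsigned long long total = 0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tids[i], NULL);
        total += counts[i];
    }
    printf("%d thread(s): %llu iterations in %d s\n", NTHREADS, total, RUN_SECONDS);
    return 0;
}
```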
I found a related conversation here: How Linux handles threads and process scheduling
According to that discussion, Linux doesn't differentiate between threads and processes for scheduling purposes; threads are effectively treated like processes that share memory. If that is the case, it would seem I could improve the run time of my compute-bound job (relative to competing processes) by spawning more threads, since each thread would compete for CPU time as its own schedulable entity. Am I interpreting this correctly?
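The part I'd like to confirm is that each pthread really does appear to the kernel as its own schedulable task. My understanding (and this small sketch is just my assumption of how to observe it) is that every thread gets its own kernel thread ID, distinct from the process ID, and shows up as a separate entry under /proc/&lt;pid&gt;/task/:

```c
/* tids.c -- print the process ID and kernel thread ID from each thread.
 * Compile: gcc -pthread tids.c -o tids
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *report(void *arg)
{
    (void)arg;
    /* use syscall(SYS_gettid) since older glibc has no gettid() wrapper */
    printf("pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    printf("main: pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
    pthread_create(&t1, NULL, report, NULL);
    pthread_create(&t2, NULL, report, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

If the three lines print the same pid but three different tids, that would seem to confirm the "threads are processes with shared memory" reading, which is the premise behind my question above.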