
I know this question is only partially programming-related, because the answer I am really looking for comes down to these two questions:

Why is the number of CPU cores so low (vs. GPUs)? And why aren't we using GPUs instead of CPUs, GPUs only, or CPUs only? (I know that GPUs are specialized while CPUs are more for multi-tasking, etc.) I also know that there are memory limitations (host vs. GPU), along with differences in precision and cache capability. But in terms of raw hardware, comparing high-end to high-end, GPUs are much, much more performant.

So my question is: could we use GPUs instead of CPUs for the OS, applications, etc.?

The reason I am asking this question is that I would like to know why current computers still use two main processing units (CPU and GPU) with two separate memory and caching systems, even though that is not something a programmer would want.

Maiss
  • Short answer: General CPU vs. Specialized CPU. – asawyer Jun 12 '12 at 22:28
  • I agree, but then why not make "general purpose GPUs"? Programmers have to learn both CPU (C++, Matlab, Python, etc.) and GPU (OpenGL, OpenCL, DirectX, etc.) languages and APIs because of the specialized vs. general-purpose split, while one general-purpose processing system would do both. – Maiss Jun 12 '12 at 22:31
  • I asked a related question a while back which has some good responses... http://stackoverflow.com/questions/1126989/what-future-does-the-gpu-have-in-computing – Steve Wortham Jun 12 '12 at 22:42
  • @Steve: Very helpful. The point I want to fill out is more about high-level programming (C++, Python, Java, etc.) together with low-level programming using CUDA, OpenCL, or DirectCompute, like you specified, which would then be general purpose. Is it just a question of time, as in the answer you got? – Maiss Jun 12 '12 at 22:57
  • @Maiss - I don't know. I mean, today it's a question of determining whether the algorithm is not only parallel-friendly but also GPU-friendly, as some of the answers describe. Moving forward, I think this is something we'll always have to test for. Some algorithms are just going to be better suited to the CPU. But the prevalence of and support for SDKs like OpenCL and DirectCompute will surely grow in the future, at least giving us more options. – Steve Wortham Jun 12 '12 at 23:19

4 Answers


Current GPUs lack many of the facilities of a modern CPU that are generally considered important (crucial, really) to things like an OS.

Just for example, an OS normally uses virtual memory and paging to manage processes. Paging allows the OS to give each process its own address space, (almost) completely isolated from every other process. At least based on publicly available information, most GPUs don't support paging at all (or at least not in the way an OS needs).
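
To make that concrete, here is a minimal sketch (plain C, POSIX; illustrative only) of what paging-based address-space isolation gives an OS: after `fork()`, parent and child write to the "same" variable, but each touches its own physical page, so neither can corrupt the other.

```
// Minimal sketch: per-process address spaces via paging.
// After fork(), the child's write lands on its own copy of the page.
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 42;
    pid_t pid = fork();              // create a second process
    if (pid == 0) {                  // child: writes to its own page
        x = 99;
        printf("child sees  x = %d\n", x);   // prints 99
        return 0;
    }
    wait(NULL);                      // parent: its page is untouched
    printf("parent sees x = %d\n", x);       // still prints 42
    return 0;
}
```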

GPUs also operate at much lower clock speeds than CPUs, so they only provide high performance for embarrassingly parallel problems; CPUs generally provide much higher performance for single-threaded code. Most of the code in an OS isn't highly parallel -- in fact, a lot of it is quite difficult to make parallel at all (e.g., for years, Linux had a giant lock to ensure only one thread executed most kernel code at any given time). For this kind of task, a GPU would be unlikely to provide any benefit.
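
A hedged illustration of that difference: the first loop below is embarrassingly parallel (every iteration is independent, so a GPU could give each element its own thread), while the second carries a dependency from one iteration to the next, so extra cores do not help at all; only single-thread speed does.

```
// Sketch: parallel-friendly vs. inherently serial code.
void scale(float *a, const float *b, int n) {
    for (int i = 0; i < n; ++i)
        a[i] = 2.0f * b[i];          // iterations independent: GPU-friendly
}

float running_mix(const float *b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s = 0.5f * s + b[i];         // each step needs the previous result
    return s;
}
```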

From a programming viewpoint, a GPU is a mixed blessing (at best). People have spent years working on programming models to make programming a GPU even halfway sane, and even so it's much more difficult (in general) than CPU programming. Given the difficulty of getting even relatively trivial things to work well on a GPU, I can't imagine attempting to write anything even close to as large and complex as an operating system to run on one.
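
As a rough illustration of that difficulty, here is what "add two arrays" looks like in CUDA (a sketch, with error checking omitted): explicit device allocation, host-to-device copies, a launch geometry, and cleanup, where a CPU version would be a single loop.

```
// Sketch: CUDA vector addition and its unavoidable ceremony.
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

void add_on_gpu(const float *a, const float *b, float *c, int n) {
    float *da, *db, *dc;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&da, bytes);                          // device buffers
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);    // launch geometry
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    cudaFree(da); cudaFree(db); cudaFree(dc);
}
```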

Jerry Coffin
  • Thanks Jerry, both answers: AUAnonymous's and your updated one were the ones I was looking for. I appreciate it. – Maiss Jun 12 '12 at 23:17
  • You’re probably aware that some CPUs [such as Rigel](https://github.com/keean/zenscript/issues/41#issuecomment-407587313) have been designed as co-processors without the virtual paging, virtualization, and other features needed by a modern OS. The distinction between the GPU and CPU seems to be more saliently focused on vectorized, SIMD versus general-purpose programming, which you alluded to in the last paragraph of your answer. The Rigel paper explains this in slightly more detail in the discussion of “throughput processors”. – Shelby Moore III Jul 27 '18 at 06:45

GPUs are designed for graphics-related processing (obviously), which inherently benefits from parallel processing (doing multiple tasks/calculations at once). This means that unlike modern CPUs, which as you probably know usually have 2-8 cores, GPUs have hundreds of cores. This makes them uniquely suited to things like ray tracing or anything else you might encounter in a 3D game or other graphics-intensive activity.
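
To sketch what that mapping looks like (a toy CUDA kernel, illustrative only): each pixel is computed independently, so the GPU can assign one lightweight thread per pixel and keep hundreds of cores busy at once.

```
// Toy sketch: one thread shades one pixel; no pixel depends on another.
__global__ void shade(uchar4 *img, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        img[y * w + x] = make_uchar4(x % 256, y % 256, 128, 255);
}
```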

CPUs on the other hand have a relatively limited number of cores because the tasks that a CPU faces usually do not benefit from parallel processing nearly as much as rendering a 3D scene would. In fact, having too many cores in a CPU could actually degrade the performance of a machine, because of the nature of the tasks a CPU usually does and the fact that a lot of programs would not be written to take advantage of the multitude of cores. This means that for internet browsing or most other desktop tasks, a CPU with a few powerful cores would be better suited for the job than a GPU with many, many smaller cores.
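
Amdahl's law puts a number on this: if only a fraction p of a task can run in parallel, n cores give at most a speedup of 1 / ((1 - p) + p / n). A quick sketch (the p value is an illustrative assumption, not a measurement):

```
// Sketch: Amdahl's law for a task that is only half parallelizable.
#include <stdio.h>

int main(void) {
    double p = 0.5;                   // assumed parallel fraction
    for (int n = 1; n <= 256; n *= 4)
        printf("%3d cores -> %.2fx speedup\n",
               n, 1.0 / ((1.0 - p) + p / n));
    return 0;                         // caps below 2x no matter the core count
}
```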

Another thing to note is that more cores usually means more power needed. This means that a 256-core phone or laptop would be pretty impractical from a power and heat standpoint, not to mention the manufacturing challenges and costs.
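
A back-of-envelope model of why (the constants below are illustrative assumptions, not datasheet figures): dynamic power scales roughly with C * V^2 * f, and raising the clock usually also requires raising the voltage, so clock increases cost disproportionately more power than adding cores does.

```
// Sketch: rough dynamic-power model, P ~ C * V^2 * f.
// Assumption (illustrative): doubling the clock needs ~1.3x the voltage.
#include <stdio.h>

int main(void) {
    printf("2x cores, same clock: ~%.1fx power\n", 2.0);
    printf("same cores, 2x clock: ~%.1fx power\n", 2.0 * 1.3 * 1.3);
    return 0;
}
```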

lyallcooper
  • Whether more cores in a CPU is beneficial depends on the computation and how amenable it is to parallelization. – Attila Jun 12 '12 at 22:48
  • Thanks AUAnonymous. Could you however elaborate on how a multi-core (>>10 cores) CPU would degrade performance? I am curious about that because of what I learned from OpenCL programming. The only limitation is memory access (speed and number of seeks), but wouldn't having 4 levels of caching/memory solve that problem? – Maiss Jun 12 '12 at 22:48
  • To try and answer your question: it only might degrade performance, depending on the task, but generally, assuming equal cost, a CPU with more cores will have lower clock speeds than one with fewer. This means that if a task does not greatly benefit from parallel processing, the CPU with fewer cores will do the task faster thanks to its higher clock speed. Another thing to note is that if a program has not been written to take advantage of many cores, then the extra cores will almost be "wasted" because they are not being used efficiently. – lyallcooper Jun 12 '12 at 22:53
  • @AUAnonymous: I learned that most computations are parallelizable (either by task- or data-parallelism); however, sometimes the language or the low-level operations may additionally require communication between operations/tasks, which often limits the parallelism. I take from this that a multi-core general-purpose processing unit (CPU or GPU) with >>50 cores, even at a lower clock (which is more a production/design limit), would be "better". Correct me if I am wrong. – Maiss Jun 12 '12 at 23:29
  • @Maiss to better understand the scaling issues of multi-core and how we’re contemplating to solve them, see [our Github discussion](https://github.com/keean/zenscript/issues/41#issuecomment-406995325). – Shelby Moore III Jul 27 '18 at 06:40
  • The last paragraph is moot. Yes, obviously more cores take more power, but it takes **much** less power to run a larger number of cores (say, 2x) than to increase the clock of existing cores by the same amount. So, a quad-core at 2GHz is much more efficient than a dual-core at 4GHz. – Marc.2377 Oct 22 '19 at 06:35

Operating systems are usually pretty simple, if you look at their structure. Parallelizing them will not improve speed much; only raw clock speed will.

GPUs simply lack many hardware facilities and instructions in their instruction sets that an OS needs; it's a matter of sophistication. Just think of virtualization features (Intel VT-x or AMD's AMD-V).

GPU cores are like dumb ants, whereas a CPU is like a complex human, so to speak. The two therefore differ in energy consumption and produce very different amounts of heat.

See this extensive Super User answer for more info.

sjas

Because nobody will spend the money and time on it. Except for a few enthusiasts, like this one: http://gerigeri.uw.hu/DawnOS/history.html (now here: http://users.atw.hu/gerigeri/DawnOS/history.html)

> Dawn now works on GPUs: with a new OpenCL-capable emulator, Dawn now boots and works on graphics cards, GPUs, and IGPs (with OpenCL 1.0). Dawn is the first and only operating system to boot and work fully on a graphics chip.