While going through research papers, I felt that microprocessor architecture is almost saturated. Could anyone explain what new research is happening in microprocessor design?
Seriously? All of it?! – Leeor Nov 15 '14 at 13:41
2 Answers
This could generate an interesting discussion. In my opinion, here are some research trends in microprocessor design:
- Power - Architects put a lot of effort into power-saving features (shutting down functional blocks when they are not used, e.g., turning off the L2 cache when your code is core-bound).
- Super "Scalarness" - Current CPUs can already execute several instructions in a single cycle, and there is research into executing even more per cycle in future chips (a small C sketch of this point follows the list).
- Lower latency - There is always research into reducing the latency of instructions.
- Scalability - We have CPUs with hundreds of cores; the challenge is getting scalable performance when parallelizing.
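To make the superscalar point concrete, here is a minimal C sketch of my own (an illustration, not taken from any particular chip or paper): summing an array with one accumulator forms a single serial dependency chain, while splitting the work across independent accumulators lets a wide out-of-order core issue several additions in the same cycle. Exact timings will of course depend on the compiler flags and the CPU.

```c
/* ilp_demo.c - illustrative only.
 * Build (example): cc -O2 ilp_demo.c -o ilp_demo */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* One accumulator: every add depends on the previous result, so the
 * loop is limited by the latency of a single chain of FP additions. */
static double sum_dependent(const double *a, long n) {
    double s = 0.0;
    for (long i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: the adds form four shorter chains,
 * so a superscalar, out-of-order core can keep several FP units busy. */
static double sum_independent(const double *a, long n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    long i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}

int main(void) {
    const long n = 50 * 1000 * 1000;
    double *a = malloc(n * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < n; i++)
        a[i] = 1.0 / (double)(i + 1);

    clock_t t0 = clock();
    double r1 = sum_dependent(a, n);
    clock_t t1 = clock();
    double r2 = sum_independent(a, n);
    clock_t t2 = clock();

    printf("dependent:   %f (%.3f s)\n", r1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("independent: %f (%.3f s)\n", r2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```

The wider the machine (more execution ports, deeper out-of-order window), the more such independent work it can exploit, which is exactly what the "execute even more instructions per cycle" research direction is about.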
Some more ongoing research:
- Adding more IP blocks to CPUs
- Improving integrated GPUs to the point where high-end games run on tablets
- Better I/O handling
- ISA improvements
One conclusion: chip makers are investing in serious research so that their architectures fit the software of tomorrow.

VAndrei
It's enough to keep dozens of research labs busy in industry and academia; it had better be enough to generate a discussion. Unfortunately, that's not what StackOverflow is for. – Leeor Nov 15 '14 at 13:46
A large part of the academic research community is devoting its time to developing hardware accelerators. We are moving away from traditional general-purpose computing toward processors that efficiently execute application-specific workloads. Some of the popular areas are:
- The microprocessors manufactured for data centers differ substantially from the ones we use in our laptops. A lot of work is being done to improve the efficiency of data-center microprocessors, given that an increasing amount of computation now happens in the cloud.
- Hardware acceleration for neural networks. For applications ranging from self-driving cars to voice assistants, the community is building special-purpose processors for training and inference on neural networks, targeting the fastest response time and the lowest energy consumption. These accelerators usually sit alongside the main processor; integrating such co-processors for seamless communication, and building the software stack above them, is another research area (a small sketch of the kind of kernel they accelerate follows this list).
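As a concrete illustration (my own sketch, not taken from the answer above), the core of these workloads is a dense multiply-accumulate loop. An inference accelerator essentially provides hardware that runs vast numbers of such operations in parallel at low energy per operation, while the CPU-side software stack dispatches the work to the device. Shown here as plain C running on the CPU; on a real system this loop nest is what gets offloaded to an NPU, GPU, or TPU-like device through a vendor runtime.

```c
/* dense_layer.c - illustrative only: one fully connected layer with ReLU,
 * the kind of regular, multiply-accumulate-heavy kernel that neural
 * network accelerators are designed to execute. */
#include <stdio.h>

#define IN  4   /* input features */
#define OUT 3   /* output neurons */

/* y = relu(W * x + b) */
static void dense_relu(const float W[OUT][IN], const float b[OUT],
                       const float x[IN], float y[OUT]) {
    for (int o = 0; o < OUT; o++) {
        float acc = b[o];
        for (int i = 0; i < IN; i++)
            acc += W[o][i] * x[i];          /* multiply-accumulate */
        y[o] = acc > 0.0f ? acc : 0.0f;     /* ReLU activation */
    }
}

int main(void) {
    const float W[OUT][IN] = {
        { 0.5f, -1.0f,  0.25f,  0.0f },
        { 1.0f,  0.5f, -0.5f,   2.0f },
        {-0.75f, 0.25f, 1.0f,  -1.0f },
    };
    const float b[OUT] = { 0.1f, -0.2f, 0.0f };
    const float x[IN]  = { 1.0f, 2.0f, 3.0f, 4.0f };
    float y[OUT];

    dense_relu(W, b, x, y);
    for (int o = 0; o < OUT; o++)
        printf("y[%d] = %f\n", o, y[o]);
    return 0;
}
```

Real networks repeat this pattern across millions of weights, which is why dedicating silicon to wide multiply-accumulate arrays pays off in both response time and energy.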

Azeus