The Wikipedia page on Turing machines states that a universal Turing machine is slower than the machines it simulates by at most a logarithmic factor. I was curious: what is the real-life equivalent, comparing a pure hardware solution (a non-stored-program computer, e.g. an ASIC) with a stored-program computer? Is it also a log factor?
-
Do you want to consider actual physical ASICs (which are, of course, finite in size) or a more theoretical "circuit complexity vs. RAM machine complexity"? Either way, this question is probably better suited to http://cs.stackexchange.com/ – harold Apr 05 '17 at 14:36
-
I'm interested in both - I am not familiar with RAM machines; I've just started reading about Turing machines and don't know much about hardware – Andrew Apr 05 '17 at 14:49
-
This question appears to be [off-topic](http://stackoverflow.com/help/on-topic): it is not about a specific programming problem, a software algorithm, or software tools commonly used by programmers, and it is not a practical, answerable problem unique to software development. – Apr 05 '17 at 23:36
1 Answer
The Wikipedia page is a bit overcautious. In fact, you can write a one-tape universal machine that simulates one-tape machines with no asymptotic overhead. I am not aware of a precise reference, but the result belongs to the folklore of the subject. The only small "gray" zone concerns the simulation of programs with subquadratic complexity. This is related to the fact that the universal machine may need to reshape the input before starting the simulation, and such an operation, on a single-tape machine, may already take quadratic time.
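To make the "universal machine as interpreter" idea concrete, here is a minimal Python sketch of a table-driven simulator for single-tape machines. The transition-table format, the `run` function, and the toy unary machine are my own illustration, not something from the answer; in particular, the sketch keeps the program and tape in ordinary data structures and does not attempt the single-tape encoding whose cost is at issue above.

```python
# A minimal sketch (illustration only) of a table-driven simulator for a
# single-tape Turing machine.  It shows the interpreter idea: each simulated
# step costs the simulator a constant amount of bookkeeping.  It does NOT
# model encoding the program and tape on one physical tape.

def run(delta, start, accept, tape_input, blank="_", max_steps=10_000):
    """Simulate a single-tape machine given its transition table `delta`.

    `delta` maps (state, symbol) -> (new_state, new_symbol, move),
    where move is "L" or "R".
    """
    tape = dict(enumerate(tape_input))   # sparse tape; missing cells are blank
    head, state = 0, start
    for _ in range(max_steps):
        if state == accept:
            break
        sym = tape.get(head, blank)
        if (state, sym) not in delta:    # no applicable rule: halt
            break
        state, new_sym, move = delta[(state, sym)]
        tape[head] = new_sym
        head += 1 if move == "R" else -1
    # Read back the non-blank cells, left to right.
    cells = [tape[i] for i in sorted(tape) if tape[i] != blank]
    return state, "".join(cells)

# Hypothetical example machine: append one '1' to a unary string.
delta = {
    ("scan", "1"): ("scan", "1", "R"),   # walk right over the input
    ("scan", "_"): ("done", "1", "R"),   # write one more '1' at the end
}
print(run(delta, "scan", "done", "111"))  # -> ('done', '1111')
```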
This is related to the general problem of encoding tuples on single-tape machines, which, as far as I can tell, has been somewhat neglected, even though it is central to complexity issues.
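As a rough illustration of the tuple-encoding point (again my own sketch, assuming the usual interleaved "tracks" encoding rather than anything stated in the answer), here is how a tuple of strings can be laid out on a single tape, together with a note on why preparing that layout can already cost quadratic time on a one-tape machine.

```python
# Sketch (illustration only) of the "tracks" encoding of a tuple of strings
# on one tape: pad all components to equal length, then interleave symbols.

def encode_tracks(strings, blank="_"):
    """Interleave a tuple of strings into a single string of k-symbol blocks."""
    width = max(len(s) for s in strings)
    padded = [s.ljust(width, blank) for s in strings]
    return "".join(sym for block in zip(*padded) for sym in block)

def decode_tracks(tape, k, blank="_"):
    """Recover the k original strings from an interleaved encoding."""
    return [tape[i::k].rstrip(blank) for i in range(k)]

print(encode_tracks(["abc", "01"]))   # -> 'a0b1c_'
print(decode_tracks("a0b1c_", 2))     # -> ['abc', '01']

# On a one-tape machine, producing this layout from plainly concatenated
# input is the expensive part: each of the n input symbols may have to be
# carried over a distance of up to Theta(n) cells, so the reshaping alone
# can take Theta(n^2) head moves - the "gray zone" mentioned above for
# simulating machines of subquadratic complexity.
```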
My personal feeling, at present, is that the problem of simulating machines of subquadratic complexity can be overcome, yielding a genuinely fair universal machine.
