A common measure of algorithm efficiency, the Big-O notation, lets you compare how fast the running times of different algorithms grow relative to each other, assuming that they run on the same hardware and ignoring constant factors.
When hardware speeds go up, all algorithms speed up by roughly the same constant: if the hardware becomes three times faster, every algorithm runs three times faster*. This means that an algorithm that was faster on the old hardware is still faster on the new hardware.
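In symbols (a minimal sketch, with $T_A(n)$ and $T_B(n)$ standing for the running times of two algorithms on the old hardware; this notation is introduced here for illustration), a uniform threefold speedup leaves the ratio of the running times unchanged:

$$\frac{T_A(n)/3}{T_B(n)/3} = \frac{T_A(n)}{T_B(n)},$$

so whichever algorithm was faster before remains faster after.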
Moreover, this time speedup is a constant, independent of the problem size. What is not constant is how much more data you can process in the same amount of time: depending on how the algorithm scales with the size of the data, the same hardware improvement translates into different capacity gains for different algorithms.
For example, consider two algorithms, X that grows as O(n) and Y that grows as O(n²). Say you measure the time each of them takes to process a fixed amount of data. Speeding up the CPU by a factor of four would let X process roughly four times as much data in the same time, while Y would be able to process only about twice as much.
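To make the arithmetic explicit, here is a short derivation under a simplified cost model (the constants $c$ and $d$ and the sizes $n_X$, $n_Y$ are assumptions introduced for illustration): suppose X runs in time $T_X(n) = c\,n$ and Y in time $T_Y(n) = d\,n^2$. A four-times-faster CPU divides each constant by four, so within the same time budget the new problem sizes $n_X'$ and $n_Y'$ satisfy

$$\frac{c}{4}\,n_X' = c\,n_X \;\Rightarrow\; n_X' = 4\,n_X, \qquad \frac{d}{4}\,(n_Y')^2 = d\,n_Y^2 \;\Rightarrow\; n_Y' = \sqrt{4}\,n_Y = 2\,n_Y,$$

that is, X handles four times the data, while Y handles only twice as much.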
At the same time, hardware optimizations can give disproportionately large speedups to some operations; if those operations are useful to algorithm X but not to algorithm Y, the relative speeds of the two algorithms will differ between the old and the new hardware.
* Unless your algorithm hits a different bottleneck: for example, a threefold CPU speedup might need a matching speedup in memory access for all algorithms to see the expected threefold improvement.