-2

Just out of curiosity...

Is there a platform-independent algorithm that produces a comparable value, so that I can implement the algorithm on machines that were introduced to the market every two years and see how well the returned values fit Moore's Law?

pencilCake
  • You want to count the number of transistors in your computer programmatically? – unkulunkulu Aug 31 '11 at 09:50
  • Moore's Law: "The number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years." – Jaydee Aug 31 '11 at 09:50
  • So I want to return a value that is based on the transistor count and is reliable. It does not have to be the number of transistors itself, but perhaps its exact impact on an operation. – pencilCake Aug 31 '11 at 09:53
  • Well, it really depends on what the transistors are doing. Doing arithmetic, signal processing, registers, cache memory... so no, I don't think that there will be a platform-independent way of measuring this. Try comparing a graphics card to a sound card. – Jaydee Aug 31 '11 at 09:57
  • Ultimately, do you want something like a "BogoMips" http://en.wikipedia.org/wiki/Bogomips value produced by the Linux kernel, but reliable? – fvu Aug 31 '11 at 10:02
  • I was thinking based on the assumption that "there should always be something in common in every PC", and if this common thing is made up of a bunch of transistors and is able to process an operation, its processing performance could be related to the transistor count. Stupid approach? (So I was asking: what is this common process, and how can I measure its performance?) – pencilCake Aug 31 '11 at 10:03

2 Answers

3

Most of the transistors that are put onto your CPU by Intel and AMD are put there with the purpose of speeding it up one way or another, so a possible proxy for "how many transistors are on there?" is, "how fast is it?". Often when people talk about Moore's law in relation to a CPU it's performance that they're talking about, even though that's not what Moore said.

Benchmarking a CPU is notoriously arbitrary, though. What weightings do you give to your various speed tests? Suppose that next year, Intel invents 20 new SIMD instructions and adds corresponding silicon to their chips to implement them. Unless your code uses those instructions, it's not going to notice that they're there, so they won't affect your results and you won't report an increase in your performance/transistor index. Since they were invented after you wrote your code, you can't execute them explicitly, so the only way they will be used is if an up-to-date compiler, with options to target the new version of the CPU, finds some code in your benchmark that it thinks will benefit from the new instructions. Not very reliable: you simply can't detect new transistors if you can't find a way to use them.

Performance of a single core of a CPU on simple benchmarks has in any case hit something of a roadblock in the last few years. CPU manufacturers are adding cores, and adding special-purpose instructions and silicon, so programs have more resources to draw on if they know how to use them, but boring old arithmetic isn't getting much faster. It's hard to know for what special purposes CPU manufacturers will be adding transistors in 5 or 10 years time, but if you can do that then you could possibly write benchmarks now that will tell you when they've done it.
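Core count, at least, is something a portable program can query today. A minimal Python sketch (a crude proxy at best: `os.cpu_count()` reports logical cores, so hyper-threading inflates the number, and it says nothing about transistors per core):

```python
import os

# Logical core count as reported by the OS.
# Note: with hyper-threading, each hardware thread counts as a "core",
# so this overstates the number of physical cores.
cores = os.cpu_count()
print(f"Logical CPU cores: {cores}")
```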

I don't know much about GPUs, but if you can somehow detect the number of GPU cores on your machine (counting parallel shaders and whatnot), that might actually be the best proxy for raw number of transistors. I guess the number of transistors in each core does go up over time too, but the number of cores on modern graphics cards is rocketing, so actually that might account for the bulk of the new transistors related to processing. Whether that will still be the case in 5 or 10 years, again, who knows.

Another big transistor count is RAM - presumably for a given type of RAM, the number of transistors is pretty much proportional to capacity, and that at least is easily measured using OS-specific functions.
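For example, on a POSIX system the total physical RAM can be read as page size times physical page count; a hedged Python sketch (POSIX-only, so not the platform-independent answer the question asks for — Windows would need something like `GlobalMemoryStatusEx` via `ctypes` instead):

```python
import os

# POSIX-only: total physical RAM = page size * number of physical pages.
page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per page
phys_pages = os.sysconf("SC_PHYS_PAGES")  # number of physical pages
total_bytes = page_size * phys_pages
print(f"Physical RAM: {total_bytes / 2**30:.2f} GiB")
```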

If you stick a SSD in a machine, I bet you pile on the transistor count too. Is that the sort of thing you're interested in, though? Really Moore's law was about single ICs, not the total contents of a beige (well, white or silver these days) box at a given price point.

Steve Jessop
-1

Well, the algorithm could be really simple, like calculating FLOPS (floating-point operations per second). Just get the system time, perform a million floating-point operations, get the time again, and take the difference (or use the LINPACK benchmark, which is used to rate supercomputers). However, implementing this in a platform-independent way would be tricky.
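A toy version of that timing loop, sketched in Python (illustrative only: in pure Python the interpreter overhead dwarfs the arithmetic, so a compiled kernel or LINPACK is needed for a meaningful number):

```python
import time

def rough_flops(n=1_000_000):
    """Very rough FLOPS estimate: time n iterations of a multiply-add.

    Interpreter overhead dominates in pure Python, so treat the
    result as a toy number, not a real benchmark."""
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += x * x  # two floating-point operations per iteration
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed

print(f"~{rough_flops():,.0f} floating-point ops/sec")
```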

Ivan
  • Not really, because Moore's Law is not about performance, it's about the density of transistors on an IC, so it could be applied to e.g. how much memory you can fit on a given piece of silicon. – Paul R Aug 31 '11 at 10:02
  • @PaulR Please read the 3rd comment on the question. The author adds that he would be alright with measuring something which is directly impacted by the number of transistors. PS: there is nothing in Moore's law about physical density; Moore's law is about money, e.g. how much memory you can fit on a piece of silicon costing, say, $100. – Ivan Aug 31 '11 at 10:07
  • Read the original statement of Moore's Law and consider that `number of transistors` is directly proportional to `amount of memory`. This has nothing to do with performance. The OP needs to remove the reference to Moore's Law if all he is interested in is benchmarking, e.g. MIPS or FLOPS. – Paul R Aug 31 '11 at 10:12