
I'm interested in running a program at a specific frequency (like 25 MHz) on my 2 GHz+ processor. The only method I can think of for doing something like this is using a microsecond-precision sleep function, but I am unsure of how to calculate how long the thread should sleep for in order to match a specific frequency. Any tips or other ideas? I'm doing this in C on an x86 Linux OS.

ytrp

6 Answers


There are a couple of problems here. The first is what you are trying to simulate. Modern processors are clocked at 2 GHz but pipeline instructions, so an individual instruction may take 10-30 clocks to finish. By putting a sleep in the thread you break the pipeline. The second is how granular you want your simulation to be. Do you need instruction-level timing, or can we fake it by putting some space between functions?

My last thought is that you likely do not want to simulate a modern processor running at 25 MHz, but some type of ARM chip on an embedded device. If this is the case, there are very good simulators for most of these chips already on the market. Compile your code to native instructions for your target chip, then use an existing simulator if one is available.


Edit:

So as I now understand it, you want to execute the instructions of a virtual processor 25 million times a second. What I might try is an adaptive approach. You have lots of time to "mess around" between instructions. Start by putting some spacing (sleep will probably work) between each instruction. Record in an array, with as much precision as possible, when each virtual clock cycle started, and keep a rolling average of, say, the last 25, 100 or 1000 cycles. If the average rate rises above 25 MHz, start adding more space; if it is too slow, reduce the space.
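
A rough sketch of that adaptive loop, assuming a hypothetical execute_one_instruction() for the virtual CPU (not part of the original answer): timestamps go into a circular array, and the spacing is nudged up or down based on the rolling rate.

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    #define WINDOW    1000           /* cycles in the rolling average */
    #define TARGET_HZ 25000000.0     /* 25 MHz */

    extern void execute_one_instruction(void);  /* hypothetical CPU step */

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    void run_adaptive(void)
    {
        double stamps[WINDOW];
        long i = 0;
        struct timespec gap = { 0, 0 };  /* adaptive spacing, starts at zero */

        for (;;) {
            stamps[i % WINDOW] = now_sec();
            execute_one_instruction();

            if (i >= WINDOW) {
                /* newest minus oldest stamp spans WINDOW-1 cycles */
                double elapsed = stamps[i % WINDOW] - stamps[(i + 1) % WINDOW];
                double rate = (WINDOW - 1) / elapsed;
                if (rate > TARGET_HZ)           /* too fast: add spacing */
                    gap.tv_nsec += 10;
                else if (gap.tv_nsec >= 10)     /* too slow: reduce spacing */
                    gap.tv_nsec -= 10;
            }
            if (gap.tv_nsec > 0)
                nanosleep(&gap, NULL);
            i++;
        }
    }

In practice a sleep of a few nanoseconds is dominated by syscall overhead, so you would likely apply the accumulated spacing once every few hundred instructions rather than after each one.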

As I said originally, it is very hard to calculate the amount of time an instruction takes on a modern processor. The first set of instructions may run a little too fast or slow, but a technique like this should keep it as close to the right speed as a typical oscillator on a comparable hardware implementation would.

John F. Miller
  • This kind of answer is not helpful. The author asked how, not where to purchase pre-existing software. – subwar May 03 '11 at 22:46
  • Subwar, when I wrote this answer, the comment "I made a virtual processor…" had not been posted. I answered the question I thought was asked, which was how to simulate running a 2 GHz x86 processor at 25 MHz. The second paragraph was a guess that this was not exactly what he wanted to do. It turns out I was half right. He did indeed want to simulate a different processor, just not a preexisting one. – John F. Miller May 03 '11 at 23:30
  • I jumped the gun. Apologies friend :) – subwar May 06 '11 at 21:35

I would suggest an event-driven architecture: on each STEP (1/Hz), fire one instruction operation.
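
A minimal sketch of that loop in C, assuming a hypothetical step() that executes one instruction of the virtual CPU: each iteration advances an absolute deadline by the step period and sleeps until it.

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    #define HZ        25000000L
    #define PERIOD_NS (1000000000L / HZ)   /* 40 ns per step at 25 MHz */

    extern void step(void);   /* hypothetical: execute one instruction */

    void run(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            step();                         /* fire one instruction per STEP */
            next.tv_nsec += PERIOD_NS;      /* advance the deadline by 1/Hz */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* absolute deadline avoids drift from scheduling latency */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }

Note that a 40 ns period is far below Linux's practical timer resolution, so a real implementation would fire a batch of instructions per wakeup, which shades into the burst approach in the next answer.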

Paul Nathan

I would simply run the simulation in bursts. For example, you could run 250 thousand cycles, then sleep for the remainder of a 10 ms interval. You could adjust the view of the clock that the simulation sees to make this completely transparent, unless it interacts with some sort of external hardware that has to be driven at a particular rate (in which case this becomes a much more difficult problem).
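
A sketch of that burst loop, assuming a hypothetical run_cycles(n) that executes n virtual cycles; 250,000 cycles per 10 ms slice averages out to 25 MHz.

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    #define SLICE_NS 10000000L   /* 10 ms per burst */
    #define CYCLES   250000L     /* 250k cycles per slice -> 25 MHz average */

    extern void run_cycles(long n);  /* hypothetical: run n virtual cycles */

    void run_bursts(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            run_cycles(CYCLES);          /* burst at full host speed */
            next.tv_nsec += SLICE_NS;    /* then sleep out the 10 ms slice */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }

Because the deadline is absolute, a slice that overruns is automatically compensated by a shorter sleep on the next one.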

R.. GitHub STOP HELPING ICE

To sum up what the above answers have said: if you are in user mode attempting to emulate a virtual processor at a specific frequency, you should implement some sort of manual "scheduling" of the thread that processes CPU instructions, either via sleep calls or via more advanced features such as fibers on Windows. A caveat to look out for is that some OS sleep calls do not sleep for the exact amount of time you specify, so you may have to add code that calibrates the deviation from computer to computer in order to get closer to the target frequency. More often than not, you will not be able to schedule your virtual processor to run at a steady 25 MHz (22-28 MHz is more likely). Anyway, I agree with both Nathan and the burst idea. Good luck with whichever path you use!
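
As a sketch of that calibration, you could measure how much longer nanosleep actually takes than requested and subtract the average overshoot from future requests (the helper below is illustrative, not from the original answer):

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    /* Average nanosleep overshoot for a given request (< 1 s), in ns. */
    static long sleep_overshoot_ns(long request_ns, int trials)
    {
        struct timespec req = { 0, request_ns }, t0, t1;
        long total = 0;

        for (int i = 0; i < trials; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            total += (t1.tv_sec - t0.tv_sec) * 1000000000L
                   + (t1.tv_nsec - t0.tv_nsec) - request_ns;
        }
        return total / trials;   /* subtract this from future requests */
    }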

subwar
  • To calculate the number of seconds, you should use some assembly to time the execution of some dummy code (code that will likely be executed). Then derive an algorithm (through experimentation) relating the measured execution time to the size of the dummy code executed. You can use this algorithm to calculate how long it will take to run your virtual processor code, or better yet, re-use it dynamically in your emulation loop to determine how long to sleep (factoring in the calibration). – subwar May 03 '11 at 22:55

For a virtual machine, everything is virtual, including time. For example, in 123 real seconds, you might emulate 5432 virtual seconds of processing. A common way of measuring virtual time is to increment (or add something to) a "number of cycles" counter each time a virtual instruction is emulated.

Every now and then you'd try to synchronise virtual time with real time. If virtual time is too far ahead of real time, you insert a delay to let real time catch up. If virtual time is behind real time, then you need to find some excuse for the slow-down. Depending on the emulated architecture there may be nothing you can do; but some architectures have power management features like thermal throttling (e.g. you might pretend the virtual CPU got hot and is running slower to cool down).
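
A minimal sketch of that bookkeeping, assuming the emulation loop adds each instruction's cost to a global cycles counter (names are illustrative):

    #define _POSIX_C_SOURCE 200809L
    #include <stdint.h>
    #include <time.h>

    #define VIRT_HZ 25000000.0        /* emulated clock rate */

    static uint64_t cycles;           /* virtual cycles emulated so far */
    static struct timespec start;     /* real time when emulation began */

    void emu_start(void) { clock_gettime(CLOCK_MONOTONIC, &start); }

    /* Call every few thousand instructions. */
    void sync_virtual_time(void)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        double real = (now.tv_sec - start.tv_sec)
                    + (now.tv_nsec - start.tv_nsec) / 1e9;
        double virt = cycles / VIRT_HZ;

        if (virt > real) {            /* ahead: let real time catch up */
            double diff = virt - real;
            struct timespec d = { (time_t)diff,
                                  (long)((diff - (time_t)diff) * 1e9) };
            nanosleep(&d, NULL);
        }
        /* virt < real means the emulator is running slow; see above */
    }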

You also probably want to have an event queue, where different emulated devices can say "at some specific time some event will occur"; so that if the emulated CPU is idle (waiting for an event to occur) you can skip ahead to when the next event will happen. This provides a natural way for the virtual machine to catch up if it's running slow.
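
A sketch of such a queue with hypothetical names; when the emulated CPU idles, virtual time jumps straight to the next due event:

    #include <stdint.h>

    struct event {
        uint64_t due_cycle;          /* virtual cycle when the event fires */
        void (*fire)(void);
        struct event *next;          /* singly linked, sorted by due_cycle */
    };

    extern struct event *queue;      /* hypothetical sorted event queue */
    extern uint64_t cycles;          /* current virtual cycle counter */

    /* When the CPU is idle, skip ahead to the next event. */
    void skip_to_next_event(void)
    {
        if (queue && queue->due_cycle > cycles)
            cycles = queue->due_cycle;   /* virtual time leaps forward */
        while (queue && queue->due_cycle <= cycles) {
            struct event *e = queue;
            queue = e->next;
            e->fire();
        }
    }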

The next step is to determine places where timing matters and only synchronise virtual time with real time at those specific places. If the emulated machine is doing heavy processing and not doing anything that is visible to an outside observer, then an outside observer has no way to tell if virtual time is close to real time or not. When the virtual machine does do something that is visible to an outside observer (e.g. send a network packet, update the video/screen, make a sound, etc) you synchronise virtual time with real time first.

The step beyond that is using buffering to decouple when things occur inside the emulator from when they're visible to an outside observer. For an (exaggerated) example, imagine the emulated machine thinks it's 8:23 in the morning and it wants to send a network packet, but it's actually only 8:00 in the morning. The simple solution is to delay emulation for 23 minutes and then send the packet. That sounds good, but if (after the virtual machine sends the packet) the emulator struggles to keep up with real time (due to other processes running on the real computer or any other reason) the emulator can fall behind and you can have problems maintaining the illusion that virtual time is the same as real time. Alternatively you could pretend the packet was sent and put the packet into a buffer and continue emulating other things, and then send the packet later (when it actually is 8:23 in the morning in the real world). In this case, if (after the virtual machine sends the packet) the emulator struggles to keep up with real time you've still got 23 minutes of leeway.

Brendan

See the Fracas CPU emulator for an approach to this. The authors presented it at the HeteroPar workshop, part of Euro-Par 2010. They essentially modify the OS scheduler so that user programs can use only a fraction of the real CPU frequency.

grrussel