7

I can't really find anything useful on this question, but I've been wondering for quite some time now how timers and delays in any programming language work at a low level.

As far as I understand it, a CPU continuously executes instructions on all of its cores, as fast as its clock speed allows, for as long as there are instructions to execute (i.e. there is a running, active thread).

I don't see a straightforward way to tie this flow to real time, which makes me wonder how things like animations work. You encounter them in many, many situations:

  • In the Windows 7 OS, the Start menu button gradually glows brighter when you move the mouse over it;
  • In Flash, there is a timeline, and all objects in the Flash document are animated according to the FPS setting and that timeline;
  • jQuery supports various animations;
  • Delays in code execution...

Do computers (motherboards) have physical timers, the way a CPU has registers to do its operations and keep data between calculations? I haven't found anything about that on the internet. Or does the OS have some really complex programming that provides the lowest-level API for everything related to timing?

I'm really curious about the answer.

MarioDS

3 Answers

3

Most (maybe all) CPUs are driven by a clock on the motherboard that "ticks" (generates a signal) every so often. This is what the megahertz (MHz) or gigahertz (GHz) rating on the processor tells you: how fast this clock runs. It is also what "overclocking" refers to, when you read that a processor can safely be overclocked to some higher GHz setting. Most of what you describe above is triggered by the "ticks" generated from this clock. It governs how often the CPU attempts to execute the next instruction; how often it does everything, in fact...

Do not confuse this clock with the Real-Time Clock, which keeps track of what time it is. All references to "system time" or "server time" use the real-time clock, which is a separate piece of hardware on your motherboard that keeps track of the time, even when the computer is turned off.

These two "clocks" are independent of one another and are used for two completely different purposes. One drives all CPU processing. If a given operation (say, multiplying two integers) takes 127 CPU cycles, then how much real time it takes depends entirely on what frequency the CPU clock runs at. If it is set to, say, 3.0 GHz, the CPU executes 3 billion cycles per second, so something that takes 127 cycles will take 127 / 3 billion seconds. Put a CPU with a different clock speed on the motherboard, and the same multiplication will take more (or less) time. None of this has anything at all to do with the real-time clock, which just keeps track of what time it is.
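To make that arithmetic concrete, here is a tiny C sketch of the cycles-to-time calculation from the paragraph above (the 127-cycle figure and the 3.0 GHz clock are just the example numbers from this answer, not measurements):

```c
#include <stdio.h>

int main(void) {
    const double cycles   = 127.0;  /* cycles the hypothetical multiplication takes */
    const double clock_hz = 3.0e9;  /* the 3.0 GHz clock from the example           */

    /* elapsed time = cycles / frequency; shown in nanoseconds */
    printf("%.2f ns\n", cycles / clock_hz * 1e9);  /* prints ~42.33 ns */
    return 0;
}
```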

Charles Bretana
  • I knew what the CPU clock was and that it's different from real time. But how would the OS go about using those ticks and the hardware clock on the motherboard to measure a set amount of time? Does it compare against the real-time value every few CPU ticks and fire triggers every few (real) milliseconds, or something? Isn't that quite heavy for any system? – MarioDS Nov 02 '12 at 17:18
  • I'm pretty sure the real-time clock (on the motherboard) keeps track of the current time itself. When you reset the machine's server time, that OS process is updating the value stored in this clock chip or clock circuit. So yes, the code in the OS compares a specified time with the current time as reported by the real-time clock. Again, this is only for processes that need real time, current time, or elapsed time. CPU processing is driven by the other clock, which runs independently at whatever speed it was manufactured for. It does not know (or care) what the current time is. – Charles Bretana Nov 02 '12 at 20:39
  • Is this CPU clock responsible for the monotonic clock, i.e. the one used to measure elapsed time, like Java's `System.nanoTime()`? – Minh Nghĩa Aug 19 '23 at 08:13
  • No. The CPU clock is more or less just a "ticker" that generates a signal or pulse every so often, based on some physical process, kind of like the metronome pianists put on their pianos, although it's electronic, not mechanical. Each generated "tick" causes the CPU to initiate another clock cycle. – Charles Bretana Aug 20 '23 at 02:54
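For what it's worth, on a POSIX system the distinction discussed in these comments is visible directly in the API: the OS exposes a wall-clock time (backed by the real-time clock) alongside a separate monotonic, elapsed-time clock, which is what `System.nanoTime()` typically maps to on Linux. A minimal sketch, assuming Linux/POSIX:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec wall, mono;

    clock_gettime(CLOCK_REALTIME,  &wall);  /* "what time is it" - backed by the RTC      */
    clock_gettime(CLOCK_MONOTONIC, &mono);  /* elapsed-time clock - never jumps backwards */

    printf("wall clock: %lld.%09ld s since the epoch\n",
           (long long)wall.tv_sec, wall.tv_nsec);
    printf("monotonic:  %lld.%09ld s since some arbitrary start point\n",
           (long long)mono.tv_sec, mono.tv_nsec);
    return 0;
}
```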
0

I can't promise that this is definitely how it works, but if I were to design it this is where I would start.

First you need some kind of known clock. This can be the same hardware clock that runs the CPU or an independent crystal clock.

Next, you need a basic counter. This is just an adder that adds 1 on every tick. You are free to then apply a multiplier if you want to vary the scale of your timer. The counter might overflow at a given rate, or more likely, it can be reset when your timer goes off.

Then you need a register that will store the timer value. This is where the programmer will enter the value they want to watch for. Since you're low enough down to only concern yourself with asynchronous logic, you can now continuously compare each bit in the counter to the corresponding bit in the register. You can do this with an equality comparer.

When they match, the comparer will send a high signal that can trigger an interrupt (basically a very low level hook or callback function that will be run immediately -- hence the name "interrupt").
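Here is a toy C simulation of the counter / compare-register / interrupt scheme sketched above. Everything in it (the names, the reset-on-match behaviour) is illustrative only; real hardware does this in logic gates rather than in a loop:

```c
#include <stdint.h>
#include <stdio.h>

typedef void (*isr_fn)(void);      /* stands in for a hardware interrupt handler */

static uint32_t counter;           /* the adder: +1 on every tick                */
static uint32_t compare;           /* the register the programmer loads          */
static isr_fn   handler;

static void tick(void) {
    counter++;
    if (counter == compare && handler != NULL) {  /* the equality comparer */
        handler();                                /* raise the "interrupt" */
        counter = 0;                              /* reset, as described   */
    }
}

static void on_timer(void) { puts("timer fired"); }

int main(void) {
    compare = 5;                        /* fire every 5 ticks             */
    handler = on_timer;
    for (int i = 0; i < 12; i++) {
        tick();                         /* drive 12 simulated clock ticks */
    }
    return 0;
}
```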

If you're working at the scale where you have an OS, it can create its own set of timers. The OS is already using such timers to set up the "quanta" that the scheduler uses to divvy up time between threads on the same core. It might even have its own pure software implementation that it can make available to client software. The OS can set the hardware timer to 1 µs (for example) and allow clients to register callbacks at multiples of that interval, to be run on their next quantum.
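As one concrete example of that OS-provided layer, Linux exposes kernel-backed timers through the `timerfd` API. A sketch (Linux-specific; the 100 ms interval is an arbitrary choice for illustration):

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void) {
    /* a kernel-backed timer object, counting on the monotonic clock */
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    struct itimerspec spec = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 },  /* first expiry: 100 ms */
        .it_interval = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 },  /* then every 100 ms    */
    };
    timerfd_settime(fd, 0, &spec, NULL);

    for (int i = 0; i < 5; i++) {
        uint64_t expirations;
        read(fd, &expirations, sizeof expirations);  /* blocks until the timer fires */
        printf("tick %d\n", i);
    }

    close(fd);
    return 0;
}
```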

Greg
-1

Operating systems support some variant of a "sleep" call, which relinquishes execution to other processes running on the system. If all processes are asleep, then the kernel tells the processor to sleep for a while—modern processors have an instruction for that explicit purpose.
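From user space, that call might look like the following (POSIX `nanosleep` shown as one example; the 5-second value echoes the comments below):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec req = { .tv_sec = 5, .tv_nsec = 0 };  /* "sleep for 5 seconds" */

    puts("going to sleep...");
    nanosleep(&req, NULL);   /* the kernel deschedules this thread and wakes it later */
    puts("awake again");
    return 0;
}
```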

user4815162342
  • I understand, but that does not explain how an OS determines the duration of a thread's sleep. I.e., how does it make the thread sleep for, say, 5 seconds? – MarioDS Nov 02 '12 at 17:15
  • By instructing the scheduler to wake it at a specific point in time. In the meantime, other threads will run as usual. – user4815162342 Nov 02 '12 at 17:17
  • Then what exactly does the scheduler use as a time reference? I guess it would check the motherboard clock. But how often can it really check while staying reasonably accurate and not putting too much strain on the system? – MarioDS Nov 02 '12 at 17:21
  • Don't forget that other threads are still running, so the check can happen at a context switch. If they're not, then it tells the CPU to sleep for a while, as pointed out in the answer. – user4815162342 Nov 02 '12 at 17:24
  • How often does a context switch occur, then? Timers can usually be set with millisecond accuracy. Doesn't that mean the check must happen at least once every millisecond? – MarioDS Nov 02 '12 at 17:26
  • This depends on the OS and can be configurable. Linux calls the tick frequency "HZ"; at HZ=1000 the tick period is 1 ms. – user4815162342 Nov 02 '12 at 17:28