
I am using the Linux real-time (RT) patch set to patch and compile my own Linux kernel, and I was wondering what the performance implications of the following settings are.

CPU timer frequency (100 Hz / 300 Hz / 1000 Hz): is lower better? What about a tickless system (dynamic ticks)?

I am running math simulations and was wondering which kernel settings would be best or recommended for RT.

Thanks in advance.

james moore

2 Answers


Bear in mind that the last time I did this was back in 1999, so this will need verifying, but as I remember it, the timer frequency dictates how many times per second the kernel's timer interrupt fires, i.e. how often the kernel wakes up to make scheduling decisions.

When I used to run gaming servers, one of the issues we faced was that some daemonized game servers could not have their "tic rate" raised beyond that of the underlying kernel. Because of that, we applied a patch and rolled our own custom kernels with a 100 Hz rate, which allowed us to raise the "tic rate" to much higher values.

In short, if you plan on doing this, I would look at how many times per second you expect the kernel to need to update, and how that compares to current deployments of the Linux kernel. I am sorry I cannot offer more on this.

Oneiroi

If you're running "math simulations", why do you think that the real time patchset will help?

Real time does not imply faster than normal or less overhead; in fact, the opposite is true. What real time gives you is a deterministic upper bound on interrupt latency.
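
To make that concrete, here is a minimal cyclictest-style sketch (illustrative code, assuming Linux with glibc and enough privilege for SCHED_FIFO and mlockall): it sleeps to an absolute 1 ms deadline in a loop and reports how late each wakeup was. The figure PREEMPT_RT is designed to bound is the maximum latency, not the average.

```c
/* Minimal cyclictest-style sketch: measure how late periodic wakeups are.
 * Needs root (or CAP_SYS_NICE + CAP_IPC_LOCK) for SCHED_FIFO and mlockall.
 * Compile: gcc -O2 -o wakeup_jitter wakeup_jitter.c
 */
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000LL
#define PERIOD_NS    1000000LL   /* 1 ms period */
#define LOOPS        10000

static int64_t ts_diff_ns(struct timespec a, struct timespec b)
{
    return (a.tv_sec - b.tv_sec) * NSEC_PER_SEC + (a.tv_nsec - b.tv_nsec);
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    struct timespec next, now;
    int64_t lat, max_lat = 0, sum_lat = 0;

    /* Real-time scheduling class and locked memory: without these,
     * page faults and other tasks can add unbounded latency. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler (continuing without SCHED_FIFO)");
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall (continuing without locked memory)");

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < LOOPS; i++) {
        /* Advance the absolute deadline by one period. */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= NSEC_PER_SEC) {
            next.tv_nsec -= NSEC_PER_SEC;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        lat = ts_diff_ns(now, next);   /* how late we actually woke up */
        if (lat > max_lat)
            max_lat = lat;
        sum_lat += lat;
    }

    printf("avg wakeup latency: %lld ns, max: %lld ns\n",
           (long long)(sum_lat / LOOPS), (long long)max_lat);
    return 0;
}
```

Run it on a loaded stock kernel and on a PREEMPT_RT kernel: the averages should look similar, but on the RT kernel the maximum should stay within a small, repeatable bound.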

The timer frequency gives you the scheduling granularity. With a higher frequency you get finer-grained scheduling, but also more overhead from timer interrupts and context switching.
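
A rough way to see what granularity you are actually getting is to time very short sleeps, as in the small sketch below (illustrative; the 100 µs request and loop count are arbitrary). On an old, purely tick-driven kernel the request is rounded up to the next tick (10 ms at 100 Hz, 1 ms at 1000 Hz); on kernels built with CONFIG_HIGH_RES_TIMERS short sleeps are largely decoupled from HZ, and HZ mainly affects tick overhead and scheduler bookkeeping.

```c
/* Sketch: observe the effective granularity of short sleeps.
 * On a purely tick-driven kernel a 100 us nanosleep is rounded up to
 * the next timer tick (1/HZ); with high-resolution timers it is not.
 * Compile: gcc -O2 -o sleep_granularity sleep_granularity.c
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000LL

int main(void)
{
    struct timespec req = { .tv_sec = 0, .tv_nsec = 100000 }; /* ask for 100 us */
    struct timespec t0, t1;
    int64_t d, min_ns = INT64_MAX, max_ns = 0;

    for (int i = 0; i < 1000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        d = (t1.tv_sec - t0.tv_sec) * NSEC_PER_SEC + (t1.tv_nsec - t0.tv_nsec);
        if (d < min_ns) min_ns = d;
        if (d > max_ns) max_ns = d;
    }

    printf("requested 100000 ns, observed min %lld ns, max %lld ns\n",
           (long long)min_ns, (long long)max_ns);
    return 0;
}
```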

Dynticks helps reduce idle power consumption. Depending on how you configure the system, dynticks allows an idle CPU to go into a lower power state, at the cost of increased wakeup latency. Other than that, it should have no effect.
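
If that extra wakeup latency matters more to you than the power savings, one well-known knob (separate from dynticks itself) is the PM QoS interface: a process that keeps /dev/cpu_dma_latency open with a low value prevents the CPUs from entering deep idle states while it runs. A minimal sketch:

```c
/* Sketch: cap CPU idle-state exit latency via the PM QoS interface.
 * Writing a latency bound (in microseconds) to /dev/cpu_dma_latency
 * and keeping the fd open stops the kernel from entering idle states
 * whose wakeup latency exceeds that bound. Needs root.
 * Compile: gcc -O2 -o qos_hold qos_hold.c
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int32_t max_latency_us = 0;   /* 0 = only the shallowest idle states */

    int fd = open("/dev/cpu_dma_latency", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/cpu_dma_latency");
        return 1;
    }
    if (write(fd, &max_latency_us, sizeof(max_latency_us)) != sizeof(max_latency_us)) {
        perror("write");
        return 1;
    }

    /* The request stays in effect only while the fd is open,
     * so a real program would hold it for its whole lifetime. */
    printf("Holding cpu_dma_latency at %d us; press Ctrl-C to release.\n",
           (int)max_latency_us);
    pause();

    close(fd);
    return 0;
}
```

The request lasts only as long as the file descriptor stays open, which is why the sketch simply pauses until interrupted.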

janneb