I want to measure the computational speed of a TelosB at different temperatures. To program the device I use Contiki. My idea was to let it send messages at fixed intervals, with the return value of

clock_time (void)

as content. A second device reads that message and stores it in a file, together with its own clock_time (void) return value. With that I could say:

At temperature A, a device needed 500 clock ticks to send 100 messages and the second device needed 600 clock ticks to receive 100 messages.

At temperature B, a device needed 500 clock ticks to send 100 messages and the second device needed 800 clock ticks to receive 100 messages.

This would mean that the device truly is slower at temperature B, because the receiver had to wait longer.

I am stuck right now, because I get different results every time I run my setup, and the results get distorted as soon as the link quality is not perfect and some packets are lost. Is there a solution, maybe with a different setup, that would help me prove the idea?

schande
  • It seems like the receiver in both cases can't keep up anyway? Is the solution to acknowledge the messages, so that the transmitter does not get ahead of the receiver? Isn't it not that "the receiver had to wait longer" but that the transmitter should have waited? – Weather Vane May 20 '18 at 20:24
  • Maybe I expressed my question wrong. My assumption was that the frequency of a clock changes at different temperatures. And I want to measure it somehow. The whole idea with the second mote was just a way to measure the relative time difference between two actions that should take the same time. But if some packets are getting lost, the receiver has less work to do because its callback function is not called, which means fewer ticks on its side, but the same number of ticks on the sender side. – schande May 20 '18 at 20:36

3 Answers

You can send the messages with a constant interval between them (e.g. every 5 milliseconds in A's time), and add a sequence number to each message. This way, you will know the expected time of each received message relative to the previous messages, even if some messages are missing.
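
As a rough sketch (not from the answer itself), the sender side could look like the following, written against Contiki 3.0's Rime broadcast API (older versions use "net/rime.h" and rimeaddr_t). The Rime channel number, message layout, and interval are illustrative assumptions:

```c
#include "contiki.h"
#include "net/rime/rime.h"   /* "net/rime.h" on older Contiki versions */
#include <stdint.h>

/* Illustrative message layout: running sequence number + sender clock. */
struct probe_msg {
  uint16_t seqno;
  clock_time_t sent_at;
};

static void
bc_recv(struct broadcast_conn *c, const linkaddr_t *from)
{
  /* the sender ignores incoming packets */
}

static struct broadcast_conn bc;
static const struct broadcast_callbacks bc_call = { bc_recv };

PROCESS(probe_sender_process, "Probe sender");
AUTOSTART_PROCESSES(&probe_sender_process);

PROCESS_THREAD(probe_sender_process, ev, data)
{
  static struct etimer et;
  static struct probe_msg msg = { 0, 0 };

  PROCESS_EXITHANDLER(broadcast_close(&bc);)
  PROCESS_BEGIN();

  broadcast_open(&bc, 129, &bc_call);        /* channel 129 is an arbitrary choice */

  while(1) {
    etimer_set(&et, CLOCK_SECOND / 2);       /* fixed interval in the sender's own time */
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));

    msg.seqno++;                             /* lets the receiver spot lost packets */
    msg.sent_at = clock_time();
    packetbuf_copyfrom(&msg, sizeof(msg));
    broadcast_send(&bc);
  }

  PROCESS_END();
}
```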

To measure the CPU speed, you don't really need to measure the time to receive and process each message. It would not be an objective measure anyway, because much of the communication time is spent on the actual reception, which is clocked from the radio's clock, not from the one driving the CPU.

And if the CPU speed is really what you want to measure, don't use clock_time(). You need to configure an MSP430 hardware timer to be sourced from the DCO.
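
For illustration only, a minimal sketch of such a timer setup on the msp430f1611; the function names are made up, the register names come from the standard MSP430 headers, and the 16-bit counter limitation is discussed in the comments below:

```c
#include <msp430.h>
#include <stdint.h>

/* Sketch: run Timer B from SMCLK (driven by the DCO) and use its counter as
 * a CPU-clock timestamp. TBR is only 16 bits, so at a few MHz it wraps many
 * times per second; only short intervals between two readings are meaningful. */
void
dco_timer_init(void)
{
  TBCTL = TBSSEL_2 | MC_2 | TBCLR;   /* Timer B: SMCLK source, continuous mode */
}

uint16_t
dco_timestamp(void)
{
  return TBR;                        /* raw DCO-driven tick count */
}
```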

Another option, if carrying out your idea seems too complex, is to compare the DCO frequency with the frequency of the low-frequency crystal oscillator. This will not give the most accurate results, since the crystal is also affected by temperature, but it is good enough if you want to measure the CPU speed to an accuracy of percent, not ppm. See the function msp430_sync_dco() for an example of how to do it.
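
A sketch of that comparison, under the assumption that Timer A is sourced from ACLK (the 32 kHz crystal) and Timer B from SMCLK (the DCO). It is shown standalone, not as a Contiki process, because it reconfigures timers that the OS itself may be using; the function name is illustrative:

```c
#include <msp430.h>
#include <stdint.h>

/* Sketch: estimate the DCO (CPU clock) frequency against the 32768 Hz
 * crystal by counting SMCLK ticks during a fixed number of ACLK ticks. */
unsigned long
estimate_dco_hz(void)
{
  const uint16_t window = 256;              /* ~7.8 ms at 32768 Hz */
  uint16_t a0, b0, dco_ticks;

  TACTL = TASSEL_1 | MC_2 | TACLR;          /* Timer A <- ACLK (crystal) */
  TBCTL = TBSSEL_2 | MC_2 | TBCLR;          /* Timer B <- SMCLK (DCO)    */

  a0 = TAR;
  b0 = TBR;
  while((uint16_t)(TAR - a0) < window) {
    /* busy-wait for the measurement window */
  }
  dco_ticks = (uint16_t)(TBR - b0);

  /* scale the ticks counted in one window up to one second */
  return (unsigned long)dco_ticks * 32768UL / window;
}
```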

kfx
  • I do not really understand your suggestion. The sequence number and time between the messages does make sense. I already use an etimer to control the time between them `etimer_set(&et, CLOCK_SECOND*0.5); PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));` So you mean that I should get a set of N messages and calculate the average/mean time between two messages? What do you mean with "...which is not clocked from the radio clock..."? Is it even possible to configure the msp430 on this platform? If yes, how is it possible? – schande May 20 '18 at 22:03
  • You might have ended up too deep for your current knowledge. Download the msp430f1611 code samples, they show how to configure timers. You would need to source a hardware timer (for example, Timer B) from SMCLK. Then by reading `TBR` you can access the DCO counter. Actually, since `TBR` is 16 bits, it will overflow many times per second, so the time between subsequent packets must be very short (a few milliseconds - I was wrong to suggest "a second"). What you want to do is not a trivial task. – kfx May 20 '18 at 22:42
  • With the "radio clock" comment I mean that the radio is sourced from a different hardware clock source. Since you want to measure the CPU clock, the radio one is irrelevant. – kfx May 20 '18 at 22:46

"My assumption was that the frequency of a clock changes at different temperatures. And I want to measure it somehow."

If you want to measure deviations in the frequency of a clock, use a frequency counter. This is what it's made for, and it can measure frequencies to a much higher degree of precision than your microcontroller could ever achieve.

Generally speaking, most clock sources should be stable enough over a device's operating temperature range that differences in runtime should be negligible. If you're running your device at extreme enough temperatures that clock speeds are drifting by even a few percent, this is likely to prevent radios from operating, as their transmit/receive frequency will drift as well.

  • Good point, but the radio uses a different clock. In terms of CPU speed, they promise 0.1% typical error per degree Celsius, so a few % difference is nothing extraordinary. – kfx May 20 '18 at 23:02

The MSP430 typically has a free-running internal oscillator (the DCO) that you program to give an approximate operating frequency for the processor clock; the modulator alternates between two frequencies to approximate the desired operating frequency. Having looked at the circuit diagram for the TelosB, it has a 32kHz crystal that can be used to provide a more accurate time source. I am not familiar with the operating system or other software on the board that you are using, but this 32kHz oscillator can be used as a calibration source for the main processor clock: the software uses an internal timer to calculate the actual main processor clock rate and tweaks the programmed clock frequency to bring it back to the desired value. The TI MSP training material has a page that describes the DCO operation and calibration. If the OS includes this functionality then the drift in operating frequency with temperature will depend upon the 32kHz crystal characteristic and the frequency tracking algorithm. It may well not be monotonic.

There is also the software structure of the application sending the messages: how is the decision to send a message triggered? The normal method that I would use is to have a timer, driven by the crystal, generate an event that triggers the main-loop software to send out the message at a defined periodicity. The main software then generates the message on the event. Assuming that there are enough processor cycles between events to allow for the message generation, the actual clock frequency is irrelevant.

You say that the link is not perfect and some of the messages are getting lost. In both cases you are sending 100 messages in 500 ticks. The device cannot miss a transmission, as it is the originator. The difference in the received data times looks like it could be due to the number of missed messages on the receive side. You are sending one message every 5 OS ticks; presumably the receive interval is similar, so you can detect missed messages whenever the time between any two messages is more than 6 ticks.
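
A rough sketch of that receive-side check, assuming the sender includes a running sequence number as suggested in the other answer; the interval constant and function name are illustrative:

```c
#include "contiki.h"
#include <stdint.h>
#include <stdio.h>

/* Sketch: flag missed messages on the receiver. The sender is assumed to
 * transmit one message every SEND_INTERVAL OS ticks and to include a
 * running sequence number. */
#define SEND_INTERVAL 5                 /* OS ticks between transmissions */

static uint16_t last_seqno;
static clock_time_t last_arrival;

void
record_message(uint16_t seqno)
{
  clock_time_t now = clock_time();
  uint16_t step = (uint16_t)(seqno - last_seqno);   /* 1 means nothing was lost */

  if(step > 1 || now - last_arrival > SEND_INTERVAL + 1) {
    printf("gap: %u message(s) missing before seqno %u\n", step - 1, seqno);
  }

  last_seqno = seqno;
  last_arrival = now;
}
```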

Another thought: how is the OS tick generated? If it is derived from the processor clock then the tick duration will change along with the processor clock. If it is generated from the 32kHz LFXT1 oscillator then its period will vary with the crystal's frequency characteristic.

uɐɪ