
I need help on this.

I am seeing latency, or context-switch time, when a function exported by a Windows DLL is called from a Windows exe.

How I concluded this: most of the time the DLL's exported function completes in under 1 ms. But sometimes, time-stamping from the point the DLL function is called to the point it returns shows as much as 600 ms. This is causing buffer overflow and loss of data on the slave side. I am using a USB-to-SPI converter: the DLL takes in the USB feed and emits SPI data at the other end. So if this function takes up to 600 ms to return, I lose data on the SPI slave.

On profiling the DLL's functions, none of them takes more than 15 ms (though even that is a lot for an SPI read/write of this size, considering an SPI clock of 15 MHz and a 4-byte read).

So is it context-switch time? Would incorporating the DLL's code into my exe itself help? The only delay I see is across this DLL's function call. Is there any way I can prevent preemption, so that my application gets more CPU time on a Windows 7 machine? I am using Visual Studio.

Please suggest. I appreciate your help on this.

Thanks, Sakul

2 Answers


It's doubtful that 600 ms is incurred as a result of calling through to a DLL. Show us the code of the DLL function and that might help. The delay is more probably due to one of these things:

  1. Your DLL function is actually doing something that occasionally takes that long. I/O operations, logging, waits on kernel objects, etc... You didn't show us any code, so this is only a guess, but it is highly probable.

  2. Your program isn't the only thing running on the computer. If your app is taking up an entire CPU core (or the whole processor), of course it's going to get interrupted so that other threads on the system can run. Have you called SetThreadPriority and SetPriorityClass to see if that helps? You can also look at the REALTIME_PRIORITY_CLASS.

  3. The system is low on memory and your code is getting paged in and out.

Have you profiled your DLL with any profiling tools to see why it takes that long?

selbie

I too stumbled upon the suggestion "Have you called SetThreadPriority and SetPriorityClass to see if that helps? You can also look at the REALTIME_PRIORITY_CLASS." I have yet to use REALTIME_PRIORITY_CLASS, but I too believe it should help. (Presently I also see -1 coming back as the priority of the threads in question, by doing:

    i8_priority = GetThreadPriority(pDpc->threadHandle);
    printf("\n 0 (threadHandle:0x%08X) i8_priority = %d \n", pDpc->threadHandle, i8_priority);

    SetThreadPriority(pDpc->threadHandle, 2);

)

Both before and after calling SetThreadPriority, GetThreadPriority returns -1.

I'm not sure why. I need to check the error code, if it returned one.

One reason we saw for the latency, from looking at the USB trace as well, is that the device sometimes doesn't see a response from the hardware, and hence takes the 600 ms, and even up to 4 secs. So this is being looked into by the FPGA lads. Is this the same as your point about I/O handling being done by the DLL?

I want to try the thread and class priority settings to see how much they help; they should.

  • Don't pass "2" to SetThreadPriority. Pass THREAD_PRIORITY_TIME_CRITICAL or THREAD_PRIORITY_HIGHEST. In any case, check the value of GetLastError() to get the error code of what went wrong. My psychic powers tell me that pDpc->threadHandle is not valid. – selbie Feb 07 '13 at 17:17