This question might be a bit weird, but I wonder if there is a way NOT to use the cache in C++.
I'm doing some tests: I load 2 GB (512 matrices of 4 MB each) into memory, do some correlations among them, and measure the performance. On the 1st run the total running time is t1 + x seconds; on the 2nd run it is t2 + x seconds, where t1 and t2 are the loading times of the 2 GB of matrices and t1 > t2 (approx. t1 = 20 s, t2 = 5 s). My assumption is that this is because the cache is used in the 2nd run. (I don't know of any other reason that could decrease the loading time like that.)
My problem with this is that since the loading times are not consistent, the results are deceptive in some cases. So I want a standard, repeatable IO time, and the only thing that comes to my mind is not to use the cache, if there is a way.
Is there a way to standardize my IO time?
I'm using Windows 7 x64 and working in Visual Studio 2010; my machine has 32 GB of RAM.
TEST RESULTS: I compared the average loading time of a 4 MB binary file under 5 options: the 1st run with my original code, the 2nd run with the original code, using FILE_FLAG_NO_BUFFERING, and the 1st and 2nd runs using the cache as Roy Longbottom suggested.
1st run : 39.1 ms
2nd run : 10.4 ms
no_buffer : 127.8 ms
cache_1st run : 27.4 ms
cache_2nd run : 19.2 ms
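For reference, the no_buffer option opens the file through the Win32 API with FILE_FLAG_NO_BUFFERING, roughly like the sketch below (not my exact test code). Unbuffered IO requires the file offset, the transfer size, and the buffer address to all be multiples of the volume sector size, which is why the buffer comes from VirtualAlloc (page-aligned):

#include <windows.h>

// Read 'bytes' bytes from 'path' into a page-aligned buffer while
// bypassing the Windows file cache. Returns the buffer, or NULL on failure.
// Assumes 'bytes' is a multiple of the volume sector size (a 4 MB read
// satisfies this on typical 512-byte or 4-KB sector volumes).
float* readUncached(const char* path, DWORD bytes)
{
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) return NULL;

    // VirtualAlloc returns page-aligned memory, which satisfies the
    // sector-alignment requirement of unbuffered reads.
    float* buffer = (float*)VirtualAlloc(NULL, bytes,
                                         MEM_COMMIT | MEM_RESERVE,
                                         PAGE_READWRITE);
    DWORD got = 0;
    if (buffer != NULL && !ReadFile(h, buffer, bytes, &got, NULL)) {
        VirtualFree(buffer, 0, MEM_RELEASE);
        buffer = NULL;
    }
    CloseHandle(h);
    return buffer;   // caller releases with VirtualFree(buffer, 0, MEM_RELEASE)
}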
My original read code is as follows:
#include <cstdio>
#include <string>
#include <opencv2/core/core.hpp>

using namespace std;
using namespace cv;

// size is the matrix dimension (rows == cols), defined elsewhere.
void readNoise(string fpath, Mat& data) {
    FILE* fp = fopen(fpath.c_str(), "rb");
    if (!fp) { perror("fopen"); return; }   // don't read from a null FILE*
    float* buffer = new float[size];
    for (int i = 0; i < size; ++i) {
        fread(buffer, sizeof(float), size, fp);   // read one row of floats
        for (int j = 0; j < size; ++j) {
            data.at<float>(i, j) = buffer[j];
        }
    }
    fclose(fp);
    delete[] buffer;   // memory from new[] must be released with delete[], not free()
}
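For context, a hypothetical call site (the file name is made up) preallocates the destination as a 32-bit float OpenCV matrix:

Mat data(size, size, CV_32F);
readNoise("noise.bin", data);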
I noticed a mistake in my code, which is the dynamic allocation: when I change the dynamic allocation to a static allocation, the running time of readNoise becomes the same as the cache-using version from Roy Longbottom.
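That change amounts to something like the sketch below (my reading of what "static allocation" means here; it assumes size is a compile-time constant):

// dynamic allocation, paid on every call:
//     float* buffer = new float[size];  ...  delete[] buffer;
// static allocation, one buffer reused across calls:
static float buffer[size];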
The difference between the two runs decreased, but the question remains the same: how can I standardize the running time of both the first and the second run?