
I used to work with dynamically allocated 3D arrays, but I read that a 1D array has better performance, so I tried to transform my 3D array into a 1D array.

I work with two big arrays (about 240*320*1000 doubles each) and the allocation looks like this:

int numberFiles; //about 1000, can change
int nRow; //240
int nCol; //320
double *inputData;
double *outputData;

inputData = (double*)malloc(nCol*nRow*numberFiles*sizeof(double)); //Always works
outputData = (double*)malloc(nCol*nRow*numberFiles*sizeof(double)); //Always fails

The second allocation always fails. Since the first one doesn't, it's not a size problem. And since the same data fits when I allocate it as a 3D array, it's not a lack of memory either.

Any ideas why the second allocation fails?

Thanks.

Chen Song.

PS: Sorry for the poor English.

PS2: Tried `new double[]`, same problem.

Edit: Allocation in a simple standalone program works, but as soon as I put it in my app, it fails again. I'm working with Qt Creator and mingw32, and I use matio (a library for reading MATLAB files) to read the data. Here is a code example from my program that gives me the error:

// Includes needed for this example
#include <QApplication>
#include <QDir>
#include <QFileDialog>
#include <QString>
#include <QStringList>
#include <matio.h>
#include <cstdlib>
#include <string>
using namespace std;

int main(int argc, char ** argv)
{

    int k,i,j;
    double *matriceBrute,*matriceSortie;
    double* matData;
    const char * c;
    mat_t *matfp;
    matvar_t *matvar;

    QApplication app(argc,argv);
    // Getting the directory of the data and the files in it
    QString directoryPath = QFileDialog::getExistingDirectory();
    QDir myDir(directoryPath);
    myDir.setNameFilters(QStringList()<<"*.MAT");
    QStringList filesList = myDir.entryList(QDir::Files | QDir::Hidden);
    string fullPath= (directoryPath+"/"+filesList[0]).toLocal8Bit().constData();
    c = fullPath.c_str();
    // Loading the first one to get the size of the data
    matfp = Mat_Open(c,MAT_ACC_RDONLY);
    matvar = Mat_VarReadNextInfo(matfp);
    int read_err = Mat_VarReadDataAll(matfp,matvar);
    int numberFiles = (int)filesList.size();
    int nCol = matvar->dims[1];
    int nRow = matvar->dims[0];
    //Allocating
    matriceBrute = (double*)malloc(nCol*nRow*numberFiles*sizeof(double));

    matriceSortie = (double*)malloc(nCol*nRow*numberFiles*sizeof(double));
    //the first one works but not the second

    //loading the data in the allocated memory
    for (k=0;k<numberFiles;k++)
    {
        fullPath= (directoryPath+"/"+filesList[k]).toLocal8Bit().constData();
        c = fullPath.c_str();
        matfp = Mat_Open(c,MAT_ACC_RDONLY);
        matvar = Mat_VarReadNext(matfp);
        matData = (double*)(matvar->data);
        for (i=0;i<nRow;i++)
        {
            for (j=0;j<nCol;j++)
            {
                matriceBrute[i + nRow*(j+k*nCol)]=*(matData+(i+j*nRow));
                matriceSortie[i + nRow*(j+k*nCol)]=*(matData+(i+j*nRow));
            }

        }
        Mat_VarFree(matvar);
        Mat_Close(matfp);
    }
}
SongCn
  • Can you show a short complete example that demonstrates the problem? Code out of context often doesn't include the actual problem. – Retired Ninja Apr 09 '14 at 09:13
  • @jsantander 240*320*1000*8 in my calculator is 614400000, i.e. 6E8 byte, i.e. 0.6E9 byte, i.e. 0.6 GB (not GiB btw) – dyp Apr 09 '14 at 09:16
  • @dyp you're right... I calculated with 2400 instead of 240 – jsantander Apr 09 '14 at 09:17
  • What operating system are you using? – amdn Apr 09 '14 at 09:17
  • Why do you need to load _all_ files into memory? – user877329 Apr 09 '14 at 09:27
  • @amdn I'm on Windows 7 64bits – SongCn Apr 09 '14 at 09:41
  • @user877329 My software is a data (temperature) processing application. I think I would lose too much performance if I had to reload the files for every processing step (Fourier, derivative). – SongCn Apr 09 '14 at 09:45
  • I am not familiar with Windows - it could be that you are low on disk space for a `swap` partition (whatever that is in Windows) - see http://stackoverflow.com/questions/833234/64-bit-large-mallocs – amdn Apr 09 '14 at 09:57
  • In Windows the space on disk used to back virtual pages when they need to be swapped is called a `pagefile`. Here's some information on performance counters you can apparently access in Windows to see how much of your physical memory is committed: http://support.microsoft.com/kb/2860880 – amdn Apr 09 '14 at 10:11
  • Being on a 64 bit OS does not necessarily mean that you're using a 64 bit compiler and running a 64 bit executable... you may well be running 32 bit then having address space fragmentation issues, which would have been far less troublesome with your old "3D" approach. Try to make sure you're actually doing a 64 bit build. Exactly how that's done depends on the compiler you've chosen. For example, VS2013 adds a Start menu option for a "VS2013 x64 Native Tools Command Prompt" where the cl.exe is 64 bit. – Tony Delroy Apr 09 '14 at 10:15
  • @RetiredNinja Added the complete example. – SongCn Apr 09 '14 at 13:15
  • @SongCn What happens if you gdb it? Also, how does it fail? Does it return NULL, or does the program crash? – user877329 Apr 09 '14 at 14:19
  • @user877329 It returns NULL – SongCn Apr 09 '14 at 14:55
  • Seems like it's a QApplication problem. When I remove the call or use QCoreApplication instead, the allocation succeeds. Still looking for the cause. – SongCn Apr 10 '14 at 09:16

2 Answers


No error, it works.

#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    int numberFiles = 1000; //about 1000, can change
    int nRow = 240;
    int nCol = 320;
    double *inputData;
    double *outputData;

    inputData = (double*)malloc(nCol*nRow*numberFiles*sizeof(double));
    outputData = (double*)malloc(nCol*nRow*numberFiles*sizeof(double));
    if (inputData == NULL)
        cout << "inputData alloc failed" << endl;
    if (outputData == NULL)
        cout << "outputData alloc failed" << endl;
    return 0;
}

It does not print either failure message, so, don't worry, be happy.

Dr. Debasish Jana

Could it be a memory fragmentation problem? For example, that you can't allocate ~0.6 GB as a single block twice: the first allocation succeeds, but the second time the allocator just can't find such a big contiguous chunk of free memory in your physical memory. Which error do you receive?

Arkady
  • That could be a problem in an embedded system without virtual memory, which is why I asked about the operating system. – amdn Apr 09 '14 at 09:28
  • As I understand it, even a system with virtual memory can hit this problem when it tries to "connect" physical memory to the virtual allocation. – Arkady Apr 09 '14 at 09:41
  • No, contiguous virtual pages do not have to come from contiguous physical pages. The mapping is done via a page table, see http://en.wikipedia.org/wiki/Page_table. On x86 pages are 4KB. You could have Huge Pages, which typically are 2MB - those require contiguous 2MB of physical memory. – amdn Apr 09 '14 at 09:51