When using a very large vector of vectors, we've found that part of the memory is not released after the vector goes out of scope:
#include <iostream>
#include <vector>
#include <unistd.h>

void foo()
{
    std::vector<std::vector<unsigned int> > voxelToPixel;
    unsigned int numElem = 1 << 27;
    voxelToPixel.resize( numElem );

    // One tiny heap allocation per inner vector, ~134 million in total.
    for (unsigned int idx = 0; idx < numElem; idx++)
        voxelToPixel.at(idx).push_back(idx);
}

int main()
{
    foo();
    std::cout << "End" << std::endl;
    sleep(30);   // keep the process alive so memory usage can be inspected
    return 0;
}
That leaves around 4 GB of memory hanging until the process ends.
If we change the loop body to

    for (unsigned int idx = 0; idx < numElem; idx++)
        voxelToPixel.at(0).push_back(idx);

the memory is released.
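For completeness, the modified function reads as follows. This is just a sketch of the variant we tested; fooSingleInner is a hypothetical name, in practice we simply edited foo in place.

#include <vector>

// Same as foo() above, except that every element is appended to the first
// inner vector, so that single buffer grows by reallocation instead of the
// loop creating ~134 million separate small allocations. With this version
// the memory is released when the function returns.
void fooSingleInner()
{
    std::vector<std::vector<unsigned int> > voxelToPixel;
    unsigned int numElem = 1 << 27;
    voxelToPixel.resize( numElem );

    for (unsigned int idx = 0; idx < numElem; idx++)
        voxelToPixel.at(0).push_back(idx);
}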
We are using gcc-4.8 on a Linux machine, and we've used htop to track the memory usage on a computer with 100 GB of RAM. You will need around 8 GB of RAM to run the code. Can you reproduce the problem? Any ideas on why this is happening?
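In case it helps with reproducing this, memory usage can also be checked from inside the program instead of htop. A minimal sketch, assuming Linux (it reads the VmRSS line from /proc/self/status; printRss is a hypothetical helper, not part of our original code):

#include <fstream>
#include <iostream>
#include <string>

// Hypothetical helper: print the process's resident set size as reported by
// the kernel in /proc/self/status (Linux only).
void printRss(const char* label)
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
        if (line.compare(0, 6, "VmRSS:") == 0)
            std::cout << label << ": " << line << std::endl;
}

Calling printRss("before foo") and printRss("after foo") around the call in main should show the same numbers we see in htop.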
EDIT:
We've seen that this does not happen on a Mac (with either gcc or clang). Also, on Linux, the memory is freed if we call foo twice, but the problem appears again on the third call.
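To reproduce that observation, a driver along these lines can be used (a sketch that reuses foo and the includes from the first listing; the sleep calls only leave time to inspect htop between runs):

int main()
{
    for (int run = 1; run <= 3; ++run)
    {
        foo();
        std::cout << "End of run " << run << std::endl;
        sleep(30);   // check memory usage in htop after each run
    }
    return 0;
}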