
I need to create a very large array in my project. I tried three methods, but all of them threw bad_alloc. I can't understand why, since my PC has 10 GB of RAM.

Here are my implementations, built with MSVC 2015 in x86 mode.

CODE1

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
using namespace std;
const long long MEM_SIZE = 1LL * 1024LL * 1024LL * 1024LL; // available memory 1GB
struct MyClass {
    int a;
    unsigned char b, c, d;
    size_t e, f;
    double g, h;
};
int main() {
    MyClass *mc = new MyClass[MEM_SIZE / sizeof(MyClass)];
    cout << "done!" << endl;
    return 0;
}

CODE2

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

const long long MEM_SIZE = 1LL * 1024LL * 1024LL * 1024LL; // available memory 1GB
struct MyClass {
    int a;
    unsigned char b, c, d;
    size_t e, f;
    double g, h;
};
int main() {
    vector<MyClass> myv;
    myv.resize(MEM_SIZE / sizeof(MyClass));
    cout << "done!" << endl;
    return 0;
}

CODE3

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
using namespace std;
const long long MEM_SIZE = 1LL * 1024LL * 1024LL * 1024LL; // available memory 1GB
struct MyClass {
    int a;
    unsigned char b, c, d;
    size_t e, f;
    double g, h;
};
int main() {
    vector<MyClass> myv;
    MyClass tmp;
    for (int i = 0; i < 12000000; i++){
        tmp.a = i;
        myv.push_back(tmp);
    }
    cout << "done!" << endl;
    return 0;
}

The size of MyClass is 32 bytes. I set the available memory to 1 GB, so the array length is 1 GB / 32 B = 33554432.

For CODE1 and CODE2, the array size is 1 GB, far less than the PC's RAM, so why bad_alloc?

For CODE3, I know the vector's capacity doubles when push_back outgrows it, but that is still less than the PC's RAM. In CODE3 the crash happens when i == 11958657.

But when I build and run in x64 mode, everything is fine. To my knowledge, an x86 process's heap is around 2 GB, so why does my 1 GB array crash?

What can I do in x86 mode?

user1024

1 Answer


An array has to be contiguous in memory, so you don't just require 1 GB of memory; you need it in one single block. Even if you have enough free virtual address space (physical memory doesn't matter much), memory fragmentation may prevent that allocation.

Tannin
  • Very good point; hence my comment that `std::deque` should be tried. However, I am pretty sure that you meant to write "contiguous" and not "continuous". – Christian Hackl Mar 05 '16 at 15:57
  • A deque won't solve the problem if memory contiguity is the issue. Although the standard doesn't require that a deque allocate contiguous memory, the operating system still tries to allocate a contiguous block when you request memory. (At least on Windows, not sure if that's true for Linux et al.) – Cody Gray - on strike Mar 06 '16 at 06:46
  • @CodyGray: I'm fairly certain that is wrong. The standard doesn't "forbid" the memory to be contiguous but to my knowledge it requires that insert operations at the front and end leave all pointers valid which wouldn't be possible if it required a re-allocation of the whole memory block. Also, it requires insertion at the beginning to be (amortized) constant complexity. – Tannin Mar 06 '16 at 11:32
  • You may be right, but is it reasonable? I have so much memory; how can I make full use of it? Shouldn't the OS do that for me? – user1024 Mar 07 '16 at 02:47
  • You can of course use all of your memory, just not in one huge block. The OS can't take this problem away. Say you malloc three separate blocks, 100 MB each. When you then release the second block, the OS cannot merge the now-free 100 MB block with other free memory blocks without breaking your pointers (= virtual memory addresses). And since C allows pointer arithmetic (i.e. you can calculate the distance between two memory blocks) and lets you cast pointers to other types, it can't magically update everything that may be derived from those pointers. – Tannin Mar 08 '16 at 17:48