10

I was checking how big an array I can create in an x64 application. My understanding was that I can create arrays larger than 2^31 in an x64 process, but I'm getting a compilation error with the VS2010 compiler. The code below

#include <cstddef>

const size_t ARRSIZE = size_t(1)<<32; // 2^32 elements of one byte each, i.e. 4 GB
int main()
{
    char *cp = new char[ARRSIZE];
    return 0;
}

gives the compiler error "error C2148: total size of array must not exceed 0x7fffffff bytes" on VS2010 when targeting the x64 platform. The largest size I can create is `(size_t(1)<<32 - 1)`.

I have Linker->Advanced->Target Machine set to MachineX64, and Linker->System->Enable Large Addresses set to Yes (not sure if this really matters). Does the paging file or physical RAM present in the PC matter here? (I'm sure it is a 64-bit app, because if I remove that line and just have `char* cp;`, the pointer is 8 bytes.) Am I missing some setting?
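For reference, here is roughly how I verified the pointer size (a minimal sketch; on a 64-bit build both values should print as 8):

#include <cstdio>

int main()
{
    char* cp = 0;
    // On an x64 build, pointers and size_t are both 8 bytes wide.
    std::printf("sizeof(cp) = %u, sizeof(size_t) = %u\n",
                static_cast<unsigned>(sizeof cp),
                static_cast<unsigned>(sizeof(std::size_t)));
    return 0;
}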

Coder777
    I think this may be an issue with the 32-bit compiler, [see here](http://connect.microsoft.com/VisualStudio/feedback/details/553756/invalid-check-for-maximum-array-size-in-x64-compiler-c2148). The workaround is to use the 64-bit native compiler. Not sure if it's fixed in a newer version of the compiler. – icabod Nov 06 '13 at 17:16
    @icabod This is **not** an issue with a 32-bit compiler (as the link you posted explains). This happens with a 64-bit compiler. Still, though, is this **really** a bug? I didn't find anything in the C++ standard that would mandate a conforming implementation to guarantee a minimum size for arrays. `size_t` must be able to store any index into an array. This does not imply that it must be possible for an array to be large enough to use up the entire range. – IInspectable Nov 06 '13 at 18:00
  • @IInspectable: The link I posted states that, in VS2010, cross-compiling for 64-bit using the x86 compiler gives that error, but compiling for 64-bit using the x64 compiler doesn't. That reads to me like an issue with the 32-bit cross-compiler (which is what I meant :)). – icabod Nov 06 '13 at 23:23
  • @icabod I see where I went wrong. I just verified this, and the results are unchanged with Visual Studio 2012: building on the x64 command line succeeds, while building from the 32-bit x64 cross-compile command line fails, as does building from the IDE. This is somewhat awkward: I would have assumed that VS would spawn the native 64-bit compiler, and I can't see why it doesn't take advantage of the increased address space available to it. So I would agree, this is a bug, and with VS launching the cross-compiler, a pretty significant one at that. – IInspectable Nov 07 '13 at 00:18
  • I'm curious to know whether you could perform the allocation using [HeapAlloc](http://msdn.microsoft.com/en-us/library/windows/desktop/aa366597.aspx)? The size parameter _I think_ typedefs as a `LONG_PTR`, and the only size restriction noted is if you specify a non-growing heap. (See the sketch after these comments.) – icabod Nov 07 '13 at 09:30
  • The C++ language recommended minimum for the size of an array is 262,144, so MSVC is well within its rights to refuse your request for an array with 4 billion elements. – Raymond Chen Nov 08 '13 at 22:21
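A minimal, untested sketch of icabod's HeapAlloc suggestion (note that HeapAlloc's size parameter is actually a `SIZE_T`, which is 8 bytes on x64; the default process heap is growable, so the documented size restriction for non-growing heaps does not apply):

#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T bytes = SIZE_T(1) << 32; // 4 GB, on an x64 build

    // GetProcessHeap() returns the default (growable) process heap.
    void* p = HeapAlloc(GetProcessHeap(), 0, bytes);
    std::printf(p ? "allocated 4 GB\n" : "allocation failed\n");

    if (p)
        HeapFree(GetProcessHeap(), 0, p);
    return 0;
}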

2 Answers

7

This appears to be a defect in the 32-bit cross compiler for x64 targets. The Microsoft Connect link posted by icabod in the comments above addresses this particular issue. Unfortunately the bug's status has been set to Closed - Won't Fix.

The following code snippets will fail to compile using the 32-bit cross compiler for x64:

char* p = new char[(size_t)1 << 32];

and

const size_t sz = (size_t)1 << 32;
char* p = new char[sz];

Both of the above fail with the error message error C2148: total size of array must not exceed 0x7fffffff bytes when compiled with the 32-bit cross compiler for x64. Unfortunately, Visual Studio does launch the 32-bit compiler even when running on a 64-bit version of Windows and targeting x64.

The following workarounds can be applied:

  • Compiling the code using the native 64-bit compiler for x64 from the command line (see the sketch following this list).
  • Changing the code to either of the following:

    size_t sz = (size_t)1 << 32;  // sz is non-const
    char* p = new char[sz];
    

    or

    std::vector<char> v( (size_t)1 << 32 );
    

The bug is still present in Visual Studio 2012, and all workarounds still apply.
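As a rough sketch of the first workaround, assuming the 64-bit tools are installed and a default Visual Studio 2010 install path (`bigarray.cpp` stands in for your source file), initialize the native x64 toolchain from a command prompt and compile:

call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" amd64
cl /EHsc bigarray.cpp

The amd64 argument selects the native 64-bit tools; x86_amd64 would select the 32-bit cross-compiler that exhibits the bug.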

IInspectable
1

The compiler is likely trying to optimize, since your ARRSIZE value is a constant, and then it hits its own static initialization limit. You could probably just take out the `const` keyword and it will work.
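For example, a minimal sketch of the non-const variant:

#include <cstddef>

int main()
{
    size_t arrsize = size_t(1) << 32; // not const, so no longer a compile-time constant expression
    char *cp = new char[arrsize];
    delete[] cp;
    return 0;
}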

If not, something like this will likely work.

#include <cstddef>  // for size_t

extern size_t GetArraySize();

int main()
{
    size_t allocationsize = GetArraySize();

    char *cp = new char[allocationsize];
    return 0;
}

size_t GetArraySize()
{
    // compile time assert to validate that size_t can hold a 64-bit value
    char compile_time_assert_64_bit[(sizeof(size_t) == 8)?1:-1];

    size_t allocsize = 0x100000000ULL; // 64-bit literal (ULL avoids overflowing a 32-bit unsigned long)

    return allocsize;
}
selbie
  • This is not what's happening. (How would a compiler even implement static initialization for an array dynamically allocated on the free store? And what initialization, anyway?) This seems to be a bug in the cross-compiler: the native 64-bit compiler for x64 does not exhibit this behavior; it's only the 32-bit cross-compiler for x64 that does. – IInspectable Nov 07 '13 at 08:42
  • I just validated: taking out the `const` keyword, at least on Visual Studio 2012 as a 64-bit compile, resolves the compile issue. I updated my answer. – selbie Nov 07 '13 at 16:55
  • The explanation is still not correct: the compiler cannot optimize the expression `new char[allocationsize]` and apply some sort of static initialization. Slightly off-topic: the comment and your compile-time assertion do not match. It should read `static_assert(sizeof(size_t)>=8);` instead. – IInspectable Nov 08 '13 at 21:31
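For reference, a sketch of the assertion suggested in the last comment (C++11; Visual Studio 2012 requires the message argument):

#include <cstddef>

// Replaces the negative-size-array trick with a C++11 static assertion.
static_assert(sizeof(std::size_t) >= 8,
              "size_t must be able to hold a 64-bit value");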