I'm having trouble with memory allocation and can't find any references to this case online. In particular, if I allocate an array of 65,536 elements (not bytes) or more, any subsequent allocation (even a small one) "fails": it executes without error, but returns a pointer that overlaps the recently allocated array.
I'm using Array<f32>. I'm not sure whether switching between StaticArray, Array, and Float32Array changes the behaviour here, but I've tried all three (sketched after the AssemblyScript snippet below) and saw no improvement.
AssemblyScript:
// type aliases for the flat f32 containers used below
type t = Array<f32>
type arr = Array<f32>

export function empty(): t {
  return new Array<f32>(16)
}

export function makeArray(count: u32): arr {
  let arr = new Array<f32>(count * 16)
  for (let i: u32 = 0; i < count; i++) {
    for (let j: u32 = 0; j < 16; j++) {
      // logU32(i * 16 + j)
      arr[i * 16 + j] = (j as f32) + 1;
    }
  }
  return arr
}
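For reference, the StaticArray and Float32Array variants I tried looked roughly like this (a sketch; makeStaticArray / makeFloat32Array are just illustrative names, only the container type changes and the fill pattern is the same as in makeArray):

// Same allocation pattern with the other containers I tried
export function makeStaticArray(count: u32): StaticArray<f32> {
  let out = new StaticArray<f32>(count * 16)
  for (let i: u32 = 0; i < count * 16; i++) {
    out[i] = ((i % 16) + 1) as f32
  }
  return out
}

export function makeFloat32Array(count: u32): Float32Array {
  let out = new Float32Array(count * 16)
  for (let i: u32 = 0; i < count * 16; i++) {
    out[i] = ((i % 16) + 1) as f32
  }
  return out
}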
Host JS:
console.log("memory.buffer.byteLength",LinAlg.memory.buffer.byteLength)
matrixBuffer = LinAlg.Matrix4.makeArray(6000)
console.log("matrixBuffer pointer", matrixBuffer)
console.log("empty pointer", LinAlg.Matrix4.empty())
Some relevant logging from my script:
- memory.buffer.byteLength (logged in JS): 655,360
- Requested Array size in elements (logged in WASM; see the sketch after this list): 96,000
- Array.length after initialising each buffer (logged in WASM): 96,000
- Pointer value returned to JS: 21,216
- Pointer value of a 16-element Array subsequently allocated: 21,216
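The two values logged in WASM above come from inside makeArray, roughly like this (a sketch; logU32 is my own host-provided logging import, not an AssemblyScript built-in, and makeArrayLogged is just an illustrative name):

@external("env", "logU32")
declare function logU32(v: u32): void

export function makeArrayLogged(count: u32): Array<f32> {
  logU32(count * 16)          // requested element count: 96,000 for count = 6000
  let out = new Array<f32>(count * 16)
  logU32(out.length as u32)   // length right after construction: also 96,000
  return out
}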
If I don't allocate that second array, the original one is usable in JS as a 96,000-element array via __getArrayView(). It's as if allocating the large array succeeds but breaks the memory allocator for every subsequent allocation.
In theory, the heap should only be in use up to byte 21,216 + 4 * 96,000 = 405,216, so I should still have about 250 KB of the 655,360-byte memory left.
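Both observations are easy to reproduce from the host (a sketch of the checks I'm doing; view and dataBytes are just local names, and __getArrayView comes from the loader / --exportRuntime as above):

// With the second allocation skipped, the big array reads back fine:
const view = LinAlg.__getArrayView(matrixBuffer)  // Float32Array view, length 96000
console.log(view.length, view[0], view[15])       // 96000 1 16

// Rough heap budget based on the logged pointer:
const dataBytes = 4 * 96000                       // 384,000 bytes of f32 data
console.log(matrixBuffer + dataBytes)             // 405,216
console.log(LinAlg.memory.buffer.byteLength - (matrixBuffer + dataBytes)) // 250,144 bytes left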
Thanks in advance for any help you can provide!