I know these are implementation details, and some people think one shouldn't take an interest in them. But I nevertheless want to find references for, and confirmation of, the following:
The large object heap maintains a free list of the holes in a segment and uses it to fulfill allocation requests for large objects. Doesn't that also mean that such allocations are potentially more expensive than regular allocations from the small object heap, which only bump an allocation pointer? (Reference) A small sketch of what I mean is below.
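For illustration, here is a minimal C# sketch of the behavior I'm asking about. It relies only on the documented 85,000-byte LOH threshold and on `GC.GetGeneration` reporting generation 2 for LOH objects; the `LohThreshold` constant and `Time` helper are my own illustrative names, and the timing loop is only indicative, not a rigorous benchmark:

```csharp
using System;
using System.Diagnostics;

class LohAllocationSketch
{
    // Documented LOH threshold: objects of 85,000 bytes or more
    // are allocated on the large object heap.
    const int LohThreshold = 85_000;

    static void Main()
    {
        // Freshly allocated LOH objects typically report generation 2,
        // while small-object-heap allocations start in generation 0.
        byte[] small = new byte[LohThreshold - 1000];
        byte[] large = new byte[LohThreshold];
        Console.WriteLine($"small array: gen {GC.GetGeneration(small)}"); // typically 0
        Console.WriteLine($"large array: gen {GC.GetGeneration(large)}"); // 2 (LOH)

        // Rough cost comparison; GC pauses and JIT warm-up dominate
        // micro-benchmarks like this, so treat the numbers as indicative only.
        Time("SOH (pointer bump)", () => { var a = new byte[1024]; });
        Time("LOH (free list)",    () => { var a = new byte[LohThreshold]; });
    }

    static void Time(string label, Action allocate)
    {
        const int iterations = 10_000;
        allocate(); // warm-up
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) allocate();
        sw.Stop();
        Console.WriteLine($"{label}: {sw.Elapsed.TotalMilliseconds:F1} ms for {iterations} allocations");
    }
}
```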
In 32-bit processes, the minimum segment size is 16 MB. What is that lower limit for 64-bit processes?
Remark: This question is not asking for object-design workarounds such as pooling.