Since it is recommended to use the IDisposable pattern for large objects, I am wondering why there seems to be no reliable way to determine the limit above which an object is to be considered "large".
Internally such a distinction exists: the threshold above which objects are allocated on the LOH. Although it is publicly communicated as 85,000 bytes, we are at the same time told not to rely on that number.
Especially for applications handling a lot of "larger" arrays, that limit is needed to implement proper memory management and to prevent LOH fragmentation. For "smaller" arrays, on the other hand, IDisposable makes no sense from a memory-consumption point of view; there, the compacting GC does a much better job. (See the sketch below for the kind of check I mean.)
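To make the problem concrete, this is roughly the check one is forced to write today. It is only a minimal sketch: the 85,000-byte constant is an assumption about the current CLR, and the element size is passed in by the caller, which is exactly the kind of magic-number guesswork I would like to avoid:

    static class LohHeuristics
    {
        // Assumption: the commonly cited LOH threshold of 85,000 bytes.
        // This is an undocumented implementation detail and may change.
        private const int AssumedLohThreshold = 85000;

        // Rough guess whether an array of 'length' elements, each
        // 'elementSize' bytes, would be allocated on the LOH. Ignores
        // the object header overhead and runtime-specific special cases
        // (e.g. the treatment of double[] on 32-bit runtimes).
        public static bool ProbablyLandsOnLoh(int elementSize, int length)
        {
            return (long)elementSize * length >= AssumedLohThreshold;
        }
    }

With such a check, e.g. `ProbablyLandsOnLoh(sizeof(double), n)`, "large" buffers could be routed to whatever reuse/disposal scheme the application uses, while everything else is simply left to the compacting GC.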
Why is there no such thing as
GC.GetLOHLimit()
or even better:
bool GC.ArrayTargetForManualDisposal(Type type, int length);
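For illustration, this is how I imagine such an API being used (purely hypothetical, of course, since neither method exists in the BCL):

    double[] buffer = new double[length];

    // Hypothetical API from this question; it does not exist today.
    if (GC.ArrayTargetForManualDisposal(typeof(double), length))
    {
        // "large": reuse/pool the buffer and release it deterministically
    }
    else
    {
        // "small": just let the compacting GC collect it
    }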
Edit: I know the IDisposable pattern is just a recommendation for proper handling of special objects (e.g. "large" or unmanaged objects). My question does not assume that the runtime gives such objects any special handling. Rather, I am asking for runtime support so that implementors of the pattern (and perhaps others as well) can tell whether an object should receive special memory management or not.