I am writing an HDF5 file using the HDF5 C++ API and comparing the results against the h5py Python library.
In h5py, auto-chunking is applied by default whenever a compression filter such as GZIP or LZF is used.
Does the same behaviour apply to the HDF5 C++ API? If so, how can I verify that chunks were automatically created when a compression filter such as GZIP was applied to the datasets?
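
For reference, this is roughly how I would expect to check it by inspecting a dataset's creation property list with the C++ API; the file name `example.h5` and dataset name `data` are placeholders for illustration only:

```cpp
#include <iostream>
#include "H5Cpp.h"

int main() {
    // Placeholder file and dataset names, for illustration only.
    H5::H5File file("example.h5", H5F_ACC_RDONLY);
    H5::DataSet dataset = file.openDataSet("data");

    // The dataset creation property list records the storage layout
    // and the filter pipeline that were in effect when the dataset
    // was created.
    H5::DSetCreatPropList plist = dataset.getCreatePlist();

    if (plist.getLayout() == H5D_CHUNKED) {
        hsize_t chunk_dims[H5S_MAX_RANK];
        int rank = plist.getChunk(H5S_MAX_RANK, chunk_dims);
        std::cout << "Chunked layout, chunk shape:";
        for (int i = 0; i < rank; ++i)
            std::cout << " " << chunk_dims[i];
        std::cout << "\n";
    } else {
        std::cout << "Not chunked (contiguous or compact layout)\n";
    }

    // List the filters applied to the dataset; GZIP appears as
    // H5Z_FILTER_DEFLATE (filter id 1).
    for (int i = 0; i < plist.getNfilters(); ++i) {
        unsigned int flags = 0, filter_config = 0;
        unsigned int cd_values[8];
        size_t cd_nelmts = 8;
        char name[64] = {0};
        H5Z_filter_t filter_id = plist.getFilter(
            i, flags, cd_nelmts, cd_values, sizeof(name), name, filter_config);
        std::cout << "Filter " << i << ": id " << filter_id
                  << " (" << name << ")\n";
    }
    return 0;
}
```

I am aware that `h5dump -p example.h5` prints the storage layout and filter pipeline for each dataset, and that in h5py the `chunks` and `compression` attributes of a `Dataset` expose the same information, but I would like to confirm it from the C++ side.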