I have documents that exceed 16MB. They consist of many key/value pairs, with subdocuments (dicts) and arrays (lists) nested several levels deep.
If I try to insert one of these over-16MB documents, I get an error saying the document exceeds the 16MB size limit, so I started looking into GridFS. GridFS seems great for chunking up opaque files such as binary data, but I am not clear on how I would "chunk up" the highly nested key/value documents described above. I am starting to think I may just need to break these huge docs into smaller ones and bite the bullet and implement transactions, since inserting multiple documents is not atomic.
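For reference, here is a minimal sketch of how I imagine the GridFS route would work, using pymongo's `gridfs` module (the connection string, database name, filename, and the tiny sample doc are all placeholders for my real data):

```python
import json

import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["mydb"]                                # placeholder database name
fs = gridfs.GridFS(db)

huge_doc = {"key": {"nested": [1, 2, 3]}}  # stands in for my >16MB nested doc

# GridFS stores opaque bytes, so the nested structure would have to be
# serialized first; the chunking then happens on the byte stream, not on
# the key/value structure.
file_id = fs.put(json.dumps(huge_doc).encode("utf-8"), filename="doc-123.json")

# Reading it back means fetching and deserializing the whole blob; as far
# as I can tell, no server-side querying of the nested keys is possible.
raw = fs.get(file_id).read()
restored = json.loads(raw)
```

If that's right, GridFS would solve the size limit but cost me the ability to query inside the document, which is why I'm hesitating.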
Is my understanding of GridFS way off? Is breaking up the doc into smaller documents with transaction support the best way forward, or is there a way to use GridFS here?
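For the break-it-up route, here is a rough sketch of what I have in mind, assuming PyMongo against a replica set (which I understand multi-document transactions require); the `parent_id` tagging scheme and collection name are just my hypothetical way of keeping the pieces reassemblable:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # transactions need a replica set
db = client["mydb"]          # placeholder database name
coll = db["doc_fragments"]   # placeholder collection for the broken-up pieces

# Hypothetical split: each top-level key of the huge doc becomes its own
# document, tagged with a shared parent_id so the pieces can be rejoined.
huge_doc = {"section_a": {"nested": [1, 2]}, "section_b": {"more": "data"}}
fragments = [
    {"parent_id": "doc-123", "key": k, "value": v} for k, v in huge_doc.items()
]

# insert_many alone is not atomic across documents, hence the transaction:
# either every fragment lands or none do.
with client.start_session() as session:
    with session.start_transaction():
        coll.insert_many(fragments, session=session)
```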
Kind thanks for your attention.