In most cases, MarkLogic works best when each row is a separate document, rather than storing all of the rows of a table in a single document.
By setting a collection on the row documents, it's easy to reconstruct the table on read. You also get much more flexibility in taking subsets of rows.
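For instance, here is a minimal sketch of writing one row as its own JSON document tagged with a collection, using the MarkLogic Java Client API; the host, port, credentials, collection name, and document URI are all hypothetical placeholders:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.DocumentMetadataHandle;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

public class RowLoader {
    public static void main(String[] args) {
        // Hypothetical connection details
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));

        JSONDocumentManager docMgr = client.newJSONDocumentManager();

        // Tag every row document with a collection named after its table,
        // so the whole table can be reassembled with one collection query
        DocumentMetadataHandle metadata = new DocumentMetadataHandle()
                .withCollections("table/people");

        String row = "{\"name\": \"Rogers\", \"age\": 30}";
        docMgr.write("/people/row-0001.json",
                metadata,
                new StringHandle(row).withFormat(Format.JSON));

        client.release();
    }
}
```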
Coordinated queries (such as "find all of the row documents where the name is 'Rogers' and the age is 30") are much more efficient: they can be resolved from the indexes without projecting each row out of a large document, and without the risk of false positives, such as a document where one row has the name "Rogers" and a different row has the age 30.
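To illustrate, here is a sketch of such a coordinated query using the Java Client API's structured query builder; the property names and collection match the hypothetical row documents above:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.io.SearchHandle;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.StructuredQueryBuilder;
import com.marklogic.client.query.StructuredQueryDefinition;

public class RowQuery {
    public static void main(String[] args) {
        // Hypothetical connection details, as in the previous sketch
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));

        QueryManager queryMgr = client.newQueryManager();
        StructuredQueryBuilder qb = queryMgr.newStructuredQueryBuilder();

        // Both values must match within the same row document, so a "Rogers"
        // in one row and a 30 in another cannot combine into a false positive
        StructuredQueryDefinition query = qb.and(
                qb.collection("table/people"),
                qb.value(qb.jsonProperty("name"), "Rogers"),
                qb.value(qb.jsonProperty("age"), 30));

        SearchHandle results = queryMgr.search(query, new SearchHandle());
        System.out.println("Matching rows: " + results.getTotalResults());

        client.release();
    }
}
```

Because the and-query is scoped to a whole document, and each document is one row, both conditions must hold in the same row for it to match.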
In short, consider whether a requirement to keep all of the rows in one document will really prove to be the best approach in the long term.
All that said, I believe that client libraries like jackson-csv (Jackson's CSV dataformat module) can convert a CSV file into one large JSON document, which you could then write with mlcp or the MarkLogic Java API; see the sketch below.
But, again, that approach is likely to prove problematic down the road.
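If you do choose that route, here is a minimal sketch using jackson-dataformat-csv; the file name is a placeholder, and the CSV is assumed to have a header row that supplies the property names:

```java
import com.fasterxml.jackson.databind.MappingIterator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;
import com.fasterxml.jackson.dataformat.csv.CsvSchema;

import java.io.File;
import java.util.List;
import java.util.Map;

public class CsvToJson {
    public static void main(String[] args) throws Exception {
        CsvMapper csvMapper = new CsvMapper();
        // Treat the first CSV line as the property names
        CsvSchema schema = CsvSchema.emptySchema().withHeader();

        MappingIterator<Map<String, String>> rows = csvMapper
                .readerFor(Map.class)
                .with(schema)
                .readValues(new File("people.csv"));

        // One large JSON array holding every row of the table
        List<Map<String, String>> table = rows.readAll();
        String json = new ObjectMapper().writeValueAsString(table);

        // This single JSON document could now be written with mlcp
        // or the MarkLogic Java API
        System.out.println(json);
    }
}
```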