The accepted answer in the post I mentioned in my comment (Converting Json to CSV in Javascript) is completely usable in arangosh; you are programming JavaScript in the shell, after all.
Here is a console transcript of an example session using the technique:
Prepare the `db` object and the `aql` template string helper:
127.0.0.1:8529@_system> var db = require("@arangodb").db
127.0.0.1:8529@_system> var aql = require("@arangodb").aql;
The `separator` could be `\t`, but we use a comma:
127.0.0.1:8529@_system> const separator = ',';
`headers` contains the names of the fields/properties of the documents we want to write to the CSV file; they are used as the column headers as well:
127.0.0.1:8529@_system> let headers = ['prop1', 'prop2'];
The simple example collection I created for this example only contains two documents, so I don't filter anything. The `.toArray()` call is important here; without it, the `map` call below is not possible. This can bite you if the amount of data you need to extract does not fit into the available memory (see the cursor-based sketch after the query):
127.0.0.1:8529@_system> var data = db._query(
aql`FOR doc IN test RETURN doc`
).toArray();
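If the result does not fit into memory, you don't have to materialize it: `db._query` returns a cursor that can be consumed one document at a time via `hasNext()` and `next()`. A minimal sketch of that approach, writing each line to the file as it is produced (the filename `mydata.csv` is just an example here, and I'm assuming `fs.append` from arangosh's `fs` module):

var fs = require("fs");
// Keep the cursor instead of converting the whole result to an array
var cursor = db._query(aql`FOR doc IN test RETURN doc`);
// Write the header line, then append one CSV line per document
fs.write("mydata.csv", headers.join(separator) + "\n");
while (cursor.hasNext()) {
  var row = cursor.next();
  fs.append("mydata.csv", headers.map(field => `${row[field]}`).join(separator) + "\n");
}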
This creates an array of strings. The first entry is the column header line; then follows the `map` call that extracts the properties from the documents and builds a CSV string for each. If a field value can contain a comma, you would enclose the values in quotes here, and you would then also have to escape any quote characters inside the values (see the quoting sketch after the snippet):
127.0.0.1:8529@_system> const csv = [
headers.join(separator),
...data.map(
row => headers.map(
field => `${row[field]}`
).join(separator)
)
];
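As a sketch of that quoting (RFC 4180 style: enclose every value in double quotes and double any embedded quote characters; the `quote` helper is introduced here for illustration and is not part of the session above):

const quote = value => `"${String(value).replace(/"/g, '""')}"`;
// Same construction as above, but with every value quoted and escaped
const csvQuoted = [
  headers.map(quote).join(separator),
  ...data.map(
    row => headers.map(field => quote(row[field])).join(separator)
  )
];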
Finally, we join the lines into one string separated by newlines:
127.0.0.1:8529@_system> csv.join("\n");
prop1,prop2
val,other
foo,baz
To write the CSV data to a file from within arangosh:
127.0.0.1:8529@_system> var fs = require("fs");
127.0.0.1:8529@_system> fs.write("mydata.csv", csv.join("\n"));
true
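To verify, the file can be read back with `fs.read` from the same `fs` module; it returns the file contents as a string:

127.0.0.1:8529@_system> fs.read("mydata.csv");
prop1,prop2
val,other
foo,baz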