I'm trying to import a huge dataset into ArangoDB via arangoimp: about 55 million edges. I already increased the WAL logfile size (--wal.logfile-size from 32k to 1024k), which solved the previous error. But now I get the following error:
    WARNING {collector} got unexpected error in MMFilesCollectorThread::collect: no journal
    ERROR cannot create datafile '/usr/local/var/lib/arangodb3/databases/database-1/collection-2088918365-385765492/temp-2153337069.db': Too many open files
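From what I've read, "Too many open files" usually means the per-process file descriptor limit is too low for the number of datafiles ArangoDB wants to keep open. As a first attempt I raised the limit in the shell before starting arangod; a minimal sketch (the 8192 value is just a guess on my part):

    # check the current per-process open-files limit
    ulimit -n
    # raise it for this shell session before starting arangod (value is a guess)
    ulimit -n 8192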
The import statement was:

    arangoimp --file links_de.csv --type csv --collection links
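I did notice arangoimp seems to have a --batch-size option (if I understand it correctly, it controls how many bytes are sent per request), so maybe something like the following already counts as chunking, though I'm not sure it helps with the open-files problem (the 1048576 value is just an example):

    arangoimp --file links_de.csv --type csv --collection links --batch-size 1048576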
Is there a way to make arangoimp work more iteratively, e.g. processing the data in chunks? Splitting the CSV into parts would be quite complicated because of its size... If splitting really is the only option, see the rough sketch below.
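A rough sketch of how I'd split it with coreutils, assuming the CSV has a single header line that every chunk needs (the chunk size of 5 million lines is arbitrary):

    # keep the header aside (assuming a single header line)
    head -n 1 links_de.csv > header.csv
    # split the remaining rows into 5M-line chunks
    tail -n +2 links_de.csv | split -l 5000000 - chunk_
    # prepend the header to each chunk before importing
    for f in chunk_*; do cat header.csv "$f" > "$f.csv"; done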
Thanks a lot!