I am trying to transfer all the documents out of my large CouchDB database, and I appear to hit a serious slowdown shortly after starting. The request being used to get the documents is:
url = 'http://<ip>:5984/marketwatch_weekly/_all_docs?include_docs=true&limit=4000&skip=%s' % skip
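For context, the loop driving these requests just bumps skip by the limit on each iteration and times each call; a simplified version (the actual doc handling is omitted) is roughly:

    import time
    import requests

    limit = 4000
    skip = 0
    while True:
        url = 'http://<ip>:5984/marketwatch_weekly/_all_docs?include_docs=true&limit=%s&skip=%s' % (limit, skip)
        start = time.time()
        rows = requests.get(url).json()['rows']
        if not rows:
            break
        # ... hand the docs off to the transform/insert step ...
        skip += limit
        print('getting', skip, time.time() - start)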
The printout of the slowdown is below. The rightmost column is the time in seconds for the request to complete; the column next to it is the skip amount.
getting 2018-03-22 20:53:31.523599 16833 364000 89.11844325065613
getting 2018-03-22 20:55:02.698881 17478 368000 89.88783812522888
getting 2018-03-22 20:56:33.738854 19864 372000 90.0836386680603
getting 2018-03-22 20:57:56.869204 21151 376000 82.24904656410217
getting 2018-03-22 20:59:09.616417 23135 380000 72.10899209976196
getting 2018-03-22 21:00:18.940941 24875 384000 68.40224647521973
getting 2018-03-22 21:01:41.423078 25589 388000 81.92294359207153
getting 2018-03-22 21:11:47.979055 6395 392000 605.9177582263947
getting 2018-03-22 21:31:37.420515 1425 396000 1188.589150428772
getting 2018-03-22 21:46:11.717596 0 400000 873.0646567344666
getting 2018-03-22 22:02:38.413917 0 404000 985.686975479126
getting 2018-03-22 22:20:19.832703 0 408000 1060.2585520744324
getting 2018-03-22 22:39:29.712637 0 412000 1148.8915960788727
getting 2018-03-22 22:59:27.880014 0 416000 1197.4601407051086
getting 2018-03-22 23:21:09.851654 0 420000 1300.9372861385345
getting 2018-03-22 23:45:07.953314 0 424000 1436.5531301498413
What might be causing this, and are there any tips to correct it? Should I set include_docs to false and request each doc by its _id instead?
I'm using my own data transfer script because I'm also changing the schema of the database, so I can't really use any of the replication tools.
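If it helps, the _id-based alternative I'm asking about would look roughly like this (just a sketch; the id is quoted in case it contains special characters):

    import requests
    from urllib.parse import quote

    base = 'http://<ip>:5984/marketwatch_weekly'
    limit = 4000
    skip = 0
    # page through the ids only (no doc bodies), then GET each doc individually
    page = requests.get(base + '/_all_docs?include_docs=false&limit=%s&skip=%s' % (limit, skip)).json()
    for row in page['rows']:
        doc = requests.get(base + '/' + quote(row['id'], safe='')).json()
        # ... apply the schema changes and write the doc to the new database ...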
Thanks!