It seems you are shelling out to flatten your JSON into a single line on the command line. This may work for small amounts of data, but you might have just gone too big.
IMHO, sending a large JSON file on the command line just seems wrong, as there are command-line limits in Linux. You might look at some of those limits:
getconf ARG_MAX
But for the true single-argument limit on the command line, look at MAX_ARG_STRLEN, which on my Debian system is 131072 (from "binfmts.h").
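If you do not want to dig through kernel headers, you can derive that number yourself; a minimal sketch, assuming a stock Linux kernel where MAX_ARG_STRLEN is defined as 32 * PAGE_SIZE:
# MAX_ARG_STRLEN = 32 * PAGE_SIZE (see linux/binfmts.h)
echo $(( $(getconf PAGE_SIZE) * 32 ))
On a typical 4K-page system that prints 131072.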
To test things out WRT argument lengths, do the following:
/bin/echo "$(printf "%*s" 131071 ".")">/dev/null
/bin/echo "$(printf "%*s" 131072 ".")">/dev/null
-bash: /bin/echo: Argument list too long
I needed some moderately large data, so I just grabbed some from the net for testing:
curl 'https://data.ny.gov/api/views/pxa9-czw8/rows.json?accessType=DOWNLOAD&utm_medium=referral&utm_campaign=ZEEF&utm_source=https%3A%2F%2Fjson-datasets.zeef.com%2Fjdorfman' -o local.json
ls -ltr local.json
-rw-r--r-- 1 linuxbrew linuxbrew 251492 Jan 10 10:28 local.json
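To get a feel for whether the flattened document alone would blow past MAX_ARG_STRLEN, you could check its compacted size first; a quick sketch, using the local.json downloaded above:
# byte count of the compacted (single-line) document
jq -c -M . local.json | wc -c
If that number is anywhere near 131072, passing the document as a single argument is going to be trouble.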
What if you did the following? Note that $keyspace is just your oddly named bucket com.src.test.default:
echo 'statement=INSERT INTO `'$keyspace'` (KEY, VALUE) VALUES ( "'$docId'",' > tmp.cmd
cat local.json >> tmp.cmd
echo ');' >> tmp.cmd
curl -v localhost:8093/query/service -u ${username}:${password} -d @./tmp.cmd
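Note that curl's -d @file strips carriage returns and newlines from the file it reads, so the pretty-printed JSON inside tmp.cmd is flattened for you in transit. To double check that the document actually landed, a quick sketch against the same query service (this SELECT is just my own check, not part of your original flow):
# does the key now exist in the bucket?
curl -s localhost:8093/query/service -u ${username}:${password} -d 'statement=SELECT META().id FROM `com.src.test.default` USE KEYS ["'$docId'"];'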
But since you are already using curl from the command line, why not use cbimport instead? You could merely do the following:
jq -c -M . local.json > tmp.json
/opt/couchbase/bin/cbimport json -c localhost -u $username -p $password -b "com.src.test.default" -f lines -g $docId -d file://./tmp.json
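As a side note, the -g option is a key generator, so you are not limited to a fixed key; a hedged sketch, assuming a hypothetical many.json with one JSON document per line, where #MONO_INCR# yields an incrementing counter per document:
# many.json is a hypothetical file: one JSON document per line
/opt/couchbase/bin/cbimport json -c localhost -u $username -p $password -b "com.src.test.default" -f lines -g 'doc::#MONO_INCR#' -d file://./many.json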
Another method you could use:
cat local.json | /opt/couchbase/bin/cbc-create --username $username --password $password --spec 'couchbase://localhost/com.src.test.default' "$docId"
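Whichever route you take, you can quickly fetch the document back by key; a sketch with cbc-cat, assuming the same connection details as above:
# read the document back by key
/opt/couchbase/bin/cbc-cat --username $username --password $password --spec 'couchbase://localhost/com.src.test.default' "$docId"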
Some other thoughts ...
The command jq colorizes its output when writing to a terminal, so if you insist on compacting with jq, use the -M (monochrome) flag to be safe.
Your bucket, although it has a legal name, uses the "." character; in version 7.0+ it will be a keyspace of `com.src.test.default`._default._default, and the dots in your bucket name caused me a bit of confusion. Just be aware that when you upgrade we use bucketname.scope.collection, and your pre-7.0 data will be automatically migrated to bucketname._default._default.
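For example, on 7.0+ the same insert would target the fully qualified keyspace; a sketch, assuming your data sits in the default scope and collection (the key and tiny value here are just placeholders):
-- placeholder key and value, only to show the 7.0+ keyspace path
INSERT INTO `com.src.test.default`._default._default (KEY, VALUE) VALUES ("somedocid", {"a": 1});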
You might consider using an SDK and writing a simple program in your favorite language.