Every time I try to execute a transaction or query whose payload is larger than ~2MB, I get the following errors:
Immediately upon executing the query, from the Docker container running the business network application:
[ERROR] lib/handler.js - Chat stream with peer - on error:
"Error: 8 RESOURCE_EXHAUSTED: Received message larger than max (19090846 vs. 4194304)\n
at createStatusError (/usr/local/src/node_modules/grpc/src/client.js:64:15)\n
at ClientDuplexStream._emitStatusIfDone (/usr/local/src/node_modules/grpc/src/client.js:270:19)\n
at ClientDuplexStream._receiveStatus (/usr/local/src/node_modules/grpc/src/client.js:248:8)\n
at /usr/local/src/node_modules/grpc/src/client.js:804:12"
Then, from the application side, once the timeout has been reached:
{ Error: 2 UNKNOWN: error executing chaincode: failed to execute transaction: timeout expired while executing transaction
at new createStatusError (C:\Users\jean5\Fabric\Qostodian\qostodian-analyzer\node_modules\grpc\src\client.js:64:15)
at C:\Users\jean5\Fabric\Qostodian\qostodian-analyzer\node_modules\grpc\src\client.js:583:15
code: 2,
metadata: Metadata { _internal_repr: {} },
details: 'error executing chaincode: failed to execute transaction: timeout expired while executing transaction' }
These errors show that the default gRPC limit of 4MB is exceeded when I try to retrieve ~18.2MB of data from the query (19090846 vs. 4194304 bytes).
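For what it's worth, this is how I understand the 4MB receive default can be raised on a plain grpc-node client, via channel options at construction time (the endpoint below is just a placeholder, and I don't know where in the Composer/Fabric stack the relevant clients are actually created):

// Sketch only: raising the default gRPC message size limits through channel options
// when constructing a grpc-node client. The address is a placeholder.
const grpc = require('grpc');

const client = new grpc.Client(
  'peer0.org1.example.com:7051',          // placeholder address
  grpc.credentials.createInsecure(),
  {
    'grpc.max_receive_message_length': 100 * 1024 * 1024,  // bytes; -1 means unlimited
    'grpc.max_send_message_length': 100 * 1024 * 1024
  }
);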
From what I've seen, Fabric itself is already hardcoded to allow up to 100MB:
MaxRecvMsgSize = 100 * 1024 * 1024
MaxSendMsgSize = 100 * 1024 * 1024
I've also found a JIRA task (FAB-5049) on hyperledger.org that describes the same issue, but there is no discussion of a potential fix for the 4MB limit.
Question 1: If Fabric is hardcoded to 100MB, where does the 4MB limit come from?
Question 2: How can I make sure that the gRPC limit is indeed 100MB?
I would also like to know whether it's possible to explicitly set the gRPC limit, for example in connection.json or when installing/starting the network with the Composer CLI.
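For instance, I was imagining something along these lines in the peers section of the connection profile (the property names here are a guess on my part; I don't know whether a grpcOptions block like this is actually honored):

"peers": {
  "peer0.org1.example.com": {
    "url": "grpc://localhost:7051",
    "grpcOptions": {
      "grpc.max_receive_message_length": 104857600,
      "grpc.max_send_message_length": 104857600
    }
  }
}

Again, this is only to illustrate what I mean by setting the limit explicitly; I haven't found documentation confirming these options.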