I'm running into a situation where querying the incoming and outgoing edges of a node times out easily once the edge count exceeds a certain threshold, such as 1000. At the same time, RPC timeouts occur immediately during writes, even though I added a LIMIT of 500 to the query.
I would like to ask two questions:
- Is this tendency to time out caused by a misconfiguration somewhere?
- Write performance is also slow: only about 1200 edges are stored in 3-5 seconds. What aspects can I investigate to improve this?
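For reference, the edge lookup is essentially of this shape. This is a minimal nGQL sketch, not my exact statement: the vertex ID `"v100"` and edge type `follow` are placeholders, and `BIDIRECT` is how nGQL fetches both incoming and outgoing edges in one `GO`:

```ngql
-- Placeholder vertex ID and edge type; LIMIT matches the 500 mentioned above
GO FROM "v100" OVER follow BIDIRECT
YIELD edge AS e | LIMIT 500;
```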
Server situation:
3 NebulaGraph servers, each with 3 mechanical hard drives (HDDs) as NebulaGraph data disks and 128 GB of RAM; NebulaGraph memory is set to 40 GB per server.
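In case it is relevant to the RPC timeouts, these are the timeout-related flags I believe apply in `nebula-graphd.conf` (flag names and defaults are version-dependent, so treat this as a sketch of where to look rather than my actual settings):

```
########## assumed fragment of nebula-graphd.conf ##########
# RPC timeout from graphd to storaged, in milliseconds
--storage_client_timeout_ms=60000
# Idle timeout for client sessions, in seconds
--client_idle_timeout_secs=28800
```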
Storage situation:
Data is imported through Kafka.
Graph space schema: