
In a NebulaGraph cluster, when the disk on one storage node fills up, the Exchange ingestion job can no longer write data.
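For context on the mechanism, nebula-storaged stops accepting writes once the free space on a data path falls below its reserved threshold. Below is a minimal sketch of the relevant nebula-storaged.conf entries; the path and default value are taken from the 3.x docs as I recall them, so verify against your own deployment:

```
########## Disk ##########
# Data path(s) for this storaged instance (comma-separated if multiple).
--data_path=/usr/local/nebula/data/storage
# Minimum free bytes each data path must keep; writes start failing
# once free space drops below this value (default is 256 MiB).
--minimum_reserved_bytes=268435456
```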

  • Nebula version: v3.5.0
  • Deployment mode: Distributed
  • Installation method: Docker
  • In production environment: Yes
  • Hardware information:
    • Disk: SATA
    • Processor: 64 cores
    • Memory: 128 GB

The storage cluster has 4 nodes, and one of them has run out of disk space. When a spark-submit job is launched to consume topic data through Spark Streaming, the data cannot be written because that node's data disk is full. How should this be handled so that data can still be inserted even when one node's data disk is full?
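For reference, the ingestion job is launched roughly as in the sketch below. The master URL, jar name, and config file name are placeholders for this setup, and the HOCON config (kafka_to_nebula.conf here) is assumed to define the Kafka source and the Nebula sink:

```
# Sketch of the Exchange submission; adjust paths and versions to your deployment.
spark-submit \
  --master spark://spark-master:7077 \
  --class com.vesoft.nebula.exchange.Exchange \
  nebula-exchange_spark_2.4-3.5.0.jar \
  -c kafka_to_nebula.conf
```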
