
In certain container boxes Chronicle Queue is not working. I am seeing this exception:

2018-11-17 16:30:57.825 [failsafe-sender] WARN n.o.c.q.i.s.SingleChronicleQueueExcerpts$StoreTailer - Unable to append EOF, skipping
java.util.concurrent.TimeoutException: header: 80000000, pos: 104666
    at net.openhft.chronicle.wire.AbstractWire.writeEndOfWire(AbstractWire.java:459)
    at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeEOF(SingleChronicleQueueStore.java:349)
    at

I want to understand why this happens only in certain VMs.

Note: we are using an NFS file system.

I have tried to understand this behaviour on NFS.

2 Answers

Chronicle Queue does not support operating off any network file system, be it NFS, AFS, SAN-based storage or anything else. The reason is that those file systems do not provide all the primitives required by the memory-mapped files Chronicle Queue uses.

Put another way: Chronicle Queue uses off-heap memory-mapped files, and those files rely on memory-mapped CAS-based locks. These CAS operations are usually not atomic between processes when using network-attached storage, and they are certainly not atomic between processes hosted on different machines. If your test sometimes works on particular combinations of file system and OS, then either your test did not hit a concurrency race, or that combination of NAS and OS happened to honour the CAS operations; we feel the latter is very unlikely.

As a solution to this, we have created a product called Chronicle Queue Enterprise. It is a commercial product that lets you share a queue between machines over TCP/IP. Please contact sales@chronicle.software for more information on Chronicle Queue Enterprise.
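To illustrate what "memory-mapped CAS-based locks" means, here is a minimal JDK-only sketch (this is not Chronicle Queue's actual code) that performs a compare-and-set on an int inside a memory-mapped file, the way a lock word in a queue header might be claimed. On a local file system the second CAS reliably fails because the first one is atomic; over NFS there is no such guarantee between machines.

```java
import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedCasDemo {
    // VarHandle providing atomic int access on a direct (mapped) ByteBuffer.
    private static final VarHandle INT_HANDLE =
            MethodHandles.byteBufferViewVarHandle(int[].class, ByteOrder.nativeOrder());

    /**
     * Maps the given file and treats the int at byte offset 0 as a lock word:
     * 0 = free, 1 = held. Returns the results of two successive CAS attempts.
     */
    public static boolean[] demo(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            ByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            // First CAS: expect 0 (free), set to 1 (held) -> succeeds.
            boolean first = INT_HANDLE.compareAndSet(map, 0, 0, 1);
            // Second CAS with the same expected value -> fails, lock already held.
            boolean second = INT_HANDLE.compareAndSet(map, 0, 0, 1);
            return new boolean[] { first, second };
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("cas-demo", ".dat");
        try {
            boolean[] r = demo(file);
            System.out.println("first CAS acquired lock:  " + r[0]);
            System.out.println("second CAS acquired lock: " + r[1]);
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

The atomicity here comes from the CPU's cache-coherency protocol, which only covers processes mapping the same page on the same machine; NFS clients each see their own cached copy of the page, so the CAS is no longer a single point of arbitration.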

Rob Austin

For reliable distribution of data between machines, you need to use Chronicle Queue Enterprise. NFS doesn't support atomic memory operations between machines.
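Given that NFS is unsupported, one practical safeguard is to fail fast when the queue directory resolves to network storage. This is not part of Chronicle Queue's API; it is a hypothetical guard using the JDK's `Files.getFileStore`, and the set of file-store type strings is a best-effort assumption since the names are OS-dependent.

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;

public class FileSystemCheck {
    // File-store types that indicate network-attached storage.
    // The exact strings vary by OS, so this list is an assumption.
    private static final Set<String> NETWORK_FS_TYPES =
            Set.of("nfs", "nfs4", "cifs", "smbfs", "smb", "afs");

    /** Returns true if the given path lives on a known network file system. */
    public static boolean isNetworkFileSystem(Path path) throws IOException {
        FileStore store = Files.getFileStore(path);
        return NETWORK_FS_TYPES.contains(store.type().toLowerCase());
    }

    public static void main(String[] args) throws IOException {
        Path queueDir = Path.of(System.getProperty("java.io.tmpdir"));
        if (isNetworkFileSystem(queueDir)) {
            throw new IllegalStateException(
                    "Refusing to create a queue on network storage: " + queueDir);
        }
        System.out.println(queueDir + " is local; safe for memory-mapped files");
    }
}
```

Running such a check at startup turns a subtle, intermittent corruption (the `TimeoutException` in the question) into an immediate, explicit configuration error.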

Peter Lawrey