
For my web application, I have two instances that I have defined in the hazelcast.xml. When I start one server, it starts properly, but when I start the second server I get the following error:

SEVERE: [192.168.1.32]:5701 [dev] [3.5] java.io.EOFException: Cannot read 4 bytes!
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.EOFException: Cannot read 4 bytes!
    at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:380)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:282)
    at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:200)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:294)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:142)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:115)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.doRun(OperationThread.java:101)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.run(OperationThread.java:76)
Caused by: java.io.EOFException: Cannot read 4 bytes!
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.checkAvailable(ByteArrayObjectDataInput.java:543)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:255)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:249)
    at com.hazelcast.cluster.impl.ConfigCheck.readData(ConfigCheck.java:217)
    at com.hazelcast.cluster.impl.JoinMessage.readData(JoinMessage.java:80)
    at com.hazelcast.cluster.impl.operations.MasterDiscoveryOperation.readInternal(MasterDiscoveryOperation.java:46)
    at com.hazelcast.spi.Operation.readData(Operation.java:451)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
    ... 6 more

Can someone help me? I am not able to find anything :(

Here is my hazelcast.xml:

<!--
    The default Hazelcast configuration. This is used when:
    - no hazelcast.xml is present
-->

[Note: the XML markup of the group, management-center, network and executor-service sections was
lost when the file was pasted; only the element values survived, in this order:
dev, dev-pass, http://localhost:8080/mancenter, 5701, 0, 224.2.2.3, 54327,
192.168.1.67, 192.168.1.75, my-access-key, my-secret-key, us-west-1, ec2.amazonaws.com,
hazelcast-sg, type, hz-nodes, 10.10.1.*, PBEWithMD5AndDES, thesalt, thepass, 19, 16, 0.
These appear to correspond to the group name and password, the management-center URL, the port,
the outbound ports, the multicast group and port, the tcp-ip members 192.168.1.67 and 192.168.1.75,
the aws settings, the interfaces setting, the symmetric-encryption settings and the
executor-service pool size and queue capacity of the default configuration file.]

<queue name="default">
    <!--
        Maximum size of the queue. Any integer between 0 and Integer.MAX_VALUE.
        0 means Integer.MAX_VALUE. Default is 0.
    -->
    <max-size>0</max-size>
    <!--
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the queue will be copied to another JVM for
        fail-safety. 0 means no backup.
    -->
    <backup-count>1</backup-count>

    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>

    <empty-queue-ttl>-1</empty-queue-ttl>
</queue>
 <map name="persistent.*">
    <!--
       Data type that will be used for storing recordMap.
       Possible values:
       BINARY (default): keys and values will be stored as binary data
       OBJECT : values will be stored in their object forms
       NATIVE : values will be stored in non-heap region of JVM
    -->
    <in-memory-format>BINARY</in-memory-format>

    <!--
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the map will be copied to another JVM for
        fail-safety. 0 means no backup.
    -->
    <backup-count>1</backup-count>
    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>
    <!--
        Maximum number of seconds for each entry to stay in the map. Entries that are
        older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
        will get automatically evicted from the map.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <time-to-live-seconds>0</time-to-live-seconds>
    <!--
        Maximum number of seconds for each entry to stay idle in the map. Entries that are
        idle(not touched) for more than <max-idle-seconds> will get
        automatically evicted from the map. Entry is touched if get, put or containsKey is called.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <max-idle-seconds>0</max-idle-seconds>
    <!--
        Valid values are:
        NONE (no eviction),
        LRU (Least Recently Used),
        LFU (Least Frequently Used).
        NONE is the default.
    -->
    <eviction-policy>NONE</eviction-policy>
    <!--
        Maximum size of the map. When max size is reached,
        map is evicted based on the policy defined.
        Any integer between 0 and Integer.MAX_VALUE. 0 means
        Integer.MAX_VALUE. Default is 0.
    -->
    <max-size policy="PER_NODE">0</max-size>
    <!--
        When max. size is reached, specified percentage of
        the map will be evicted. Any integer between 0 and 100.
        If 25 is set for example, 25% of the entries will
        get evicted.
    -->
    <eviction-percentage>25</eviction-percentage>
    <!--
        Minimum time in milliseconds which should pass before checking
        if a partition of this map is evictable or not.
        Default value is 100 millis.
    -->
    <min-eviction-check-millis>100</min-eviction-check-millis>
    <!--
        While recovering from split-brain (network partitioning),
        map entries in the small cluster will merge into the bigger cluster
        based on the policy set here. When an entry merges into the
        cluster, there might already be an existing entry with the same key.
        Values of these entries might be different for that same key.
        Which value should be set for the key? Conflict is resolved by
        the policy set here. Default policy is PutIfAbsentMapMergePolicy

        There are built-in merge policies such as
        com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
        com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
        com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
        com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
    -->
  <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
     <map-store enabled="true">
        <factory-class-name>com.adeptia.indigo.services.hazelcast.PersistentMapStoreFactory</factory-class-name>
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>

</map>

<multimap name="default">
    <backup-count>1</backup-count>
    <value-collection-type>SET</value-collection-type>
</multimap>

<list name="default">
    <backup-count>1</backup-count>
</list>

<set name="default">
    <backup-count>1</backup-count>
</set>

<jobtracker name="default">
    <max-thread-size>0</max-thread-size>
    <!-- Queue size 0 means number of partitions * 2 -->
    <queue-size>0</queue-size>
    <retry-count>0</retry-count>
    <chunk-size>1000</chunk-size>
    <communicate-stats>true</communicate-stats>
    <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
</jobtracker>

<semaphore name="default">
    <initial-permits>0</initial-permits>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
</semaphore>

<reliable-topic name="default">
    <read-batch-size>10</read-batch-size>
    <topic-overload-policy>BLOCK</topic-overload-policy>
    <statistics-enabled>true</statistics-enabled>
</reliable-topic>

<ringbuffer name="default">
    <capacity>10000</capacity>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>30</time-to-live-seconds>
    <in-memory-format>BINARY</in-memory-format>
</ringbuffer>

<serialization>
    <portable-version>0</portable-version>
</serialization>

<services enable-defaults="true"/>
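
The factory-class-name in the map-store section refers to a MapStoreFactory implementation. A simplified, hypothetical sketch of such a factory (this is not the actual com.adeptia.indigo.services.hazelcast.PersistentMapStoreFactory, whose code is not shown here) would look roughly like this:

import com.hazelcast.core.MapLoader;
import com.hazelcast.core.MapStore;
import com.hazelcast.core.MapStoreFactory;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical factory: returns one MapStore per map name matching "persistent.*".
public class ExampleMapStoreFactory implements MapStoreFactory<String, Object> {

    @Override
    public MapLoader<String, Object> newMapStore(String mapName, Properties properties) {
        return new ExampleMapStore(mapName);
    }

    // Hypothetical store backed by an in-memory map; a real one would write to disk or a database.
    static class ExampleMapStore implements MapStore<String, Object> {

        private final Map<String, Object> backing = new ConcurrentHashMap<String, Object>();

        ExampleMapStore(String mapName) { }

        public void store(String key, Object value)    { backing.put(key, value); }
        public void storeAll(Map<String, Object> map)  { backing.putAll(map); }
        public void delete(String key)                 { backing.remove(key); }
        public void deleteAll(Collection<String> keys) { for (String k : keys) backing.remove(k); }

        public Object load(String key) { return backing.get(key); }

        public Map<String, Object> loadAll(Collection<String> keys) {
            Map<String, Object> result = new HashMap<String, Object>();
            for (String k : keys) {
                Object v = backing.get(k);
                if (v != null) result.put(k, v);
            }
            return result;
        }

        public Iterable<String> loadAllKeys() { return backing.keySet(); }
    }
}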


1 Answer


I had the same problem. I tried to store the following data structure in Hazelcast using portables (row and cell are different Portable implementations):

row { cell { 'name' : 'cell_0_0', 'value' : 'cell_value_0_0' }, cell { 'name' : 'cell_0_1', 'value' : 1} }, ...

The problem is that for the first cell, Hazelcast registers the field named 'value' with the field type UTF. While storing the second cell, Hazelcast retrieves the already stored field definition for the field name 'value', and that definition still says UTF. So the recorded field type is not INT but UTF, and when the stored portables were read back from the map, readUTF was used. That caused the exception for me, because the stored field value and the stored field type did not correspond to each other.
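
A minimal sketch of the kind of Portable that triggers this; the Cell class, the ids and the field handling below are made up for illustration, not my real implementation:

import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;

import java.io.IOException;

// Hypothetical cell portable: the field "value" is sometimes a String and sometimes an int.
public class Cell implements Portable {

    private String name;
    private Object value; // String for cell_0_0, Integer for cell_0_1

    public Cell() { } // needed by the PortableFactory

    public Cell(String name, Object value) {
        this.name = name;
        this.value = value;
    }

    @Override public int getFactoryId() { return 1; } // made-up ids
    @Override public int getClassId()   { return 2; }

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writeUTF("name", name);
        if (value instanceof Integer) {
            // second cell: written as INT, but the class definition Hazelcast
            // registered while writing the first cell already says "value" is UTF
            writer.writeInt("value", (Integer) value);
        } else {
            // first cell: registers the field "value" with type UTF
            writer.writeUTF("value", (String) value);
        }
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        name = reader.readUTF("name");
        // reading follows the registered field type (UTF), so readUTF is applied
        // to data that was actually written as an int -> "Cannot read 4 bytes!"
        value = reader.readUTF("value");
    }
}

The point is that a Portable class definition is created once per class id, so every instance of the class has to write the same field names with the same field types.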

EDIT: In your case, after the second instance starts, the stored objects are exchanged between the members and, of course, read. Perhaps the problem lies at that point.
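
Roughly how that surfaces, assuming the hypothetical Cell sketch above; the factory registration and the map name are again made up:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableFactory;

public class CellRepro {

    public static void main(String[] args) {
        Config config = new Config();
        // register the (made-up) portable factory for the Cell class above
        config.getSerializationConfig().addPortableFactory(1, new PortableFactory() {
            @Override
            public Portable create(int classId) {
                return classId == 2 ? new Cell() : null;
            }
        });
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        IMap<String, Cell> cells = hz.getMap("cells");
        cells.put("cell_0_0", new Cell("cell_0_0", "cell_value_0_0")); // "value" registered as UTF
        cells.put("cell_0_1", new Cell("cell_0_1", 1));                // "value" written as an int

        // when another member (such as your second instance) receives and
        // deserializes these entries, readUTF is applied to the int data and
        // the "Cannot read 4 bytes!" exception is thrown
    }
}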
