
I want to use a Vert.x cluster with Hazelcast on Karaf. When I try to write messages to the event bus (after the cluster is formed) I get the serialization error below. I was thinking about adding a class definition to Hazelcast to tell it where to find the Vert.x server ID class (`io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID`), but I am not sure how.

On Karaf I had to wrap the vertx-hazelcast jar because it doesn't ship with a proper OSGi manifest:

    <bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}</bundle>
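
As a side note, the Pax URL `wrap:` handler also accepts explicit Bnd instructions after a `$`, which can give the wrapped bundle a predictable symbolic name and version instead of the generated defaults. A hedged sketch (the header values here are illustrative, not taken from a working setup; note that `&` must be escaped as `&amp;` inside the feature XML):

    <bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}$Bundle-SymbolicName=io.vertx.vertx-hazelcast&amp;Bundle-Version=${vertx.version}</bundle>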

Here is my error:

    com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID', exception: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
        at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:130)[11:com.hazelcast:3.6.3]
        at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:47)[11:com.hazelcast:3.6.3]
        at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:46)[11:com.hazelcast:3.6.3]
        at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:170)[11:com.hazelcast:3.6.3]
        at com.hazelcast.map.impl.DataAwareEntryEvent.getOldValue(DataAwareEntryEvent.java:82)[11:com.hazelcast:3.6.3]
        at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.entryRemoved(HazelcastAsyncMultiMap.java:147)[64:wrap_file__C__Users_gadei_development_github_effectus.io_effectus-core_core.test_core.test.exam_target_paxexam_unpack_5bf4439f-01ff-4db4-bd3d-e3b6a1542596_system_io_vertx_vertx-hazelcast_3.4.0-SNAPSHOT_vertx-hazelcast-3.4.0-SNAPSHOT.jar:0.0.0]
        at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatch0(MultiMapEventsDispatcher.java:111)[11:com.hazelcast:3.6.3]
        at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEntryEventData(MultiMapEventsDispatcher.java:84)[11:com.hazelcast:3.6.3]
        at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEvent(MultiMapEventsDispatcher.java:55)[11:com.hazelcast:3.6.3]
        at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:371)[11:com.hazelcast:3.6.3]
        at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:65)[11:com.hazelcast:3.6.3]
        at com.hazelcast.spi.impl.eventservice.impl.LocalEventDispatcher.run(LocalEventDispatcher.java:56)[11:com.hazelcast:3.6.3]
        at com.hazelcast.util.executor.StripedExecutor$Worker.process(StripedExecutor.java:187)[11:com.hazelcast:3.6.3]
        at com.hazelcast.util.executor.StripedExecutor$Worker.run(StripedExecutor.java:171)[11:com.hazelcast:3.6.3]
    Caused by: java.lang.ClassNotFoundException: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)[:1.8.0_101]
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)[:1.8.0_101]
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)[:1.8.0_101]
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_101]
        at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:137)[11:com.hazelcast:3.6.3]
        at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:115)[11:com.hazelcast:3.6.3]
        at com.hazelcast.nio.ClassLoaderUtil.newInstance(ClassLoaderUtil.java:68)[11:com.hazelcast:3.6.3]
        at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:119)[11:com.hazelcast:3.6.3]
        ... 13 more

Any suggestions appreciated. Thanks.

Gadi

2 Answers


This normally happens if an object has asymmetric serialization (reading one fewer or one more property than was written). In that case you end up at the wrong stream position, which means you read the wrong datatype.

Another possible reason is multiple different Hazelcast versions on the classpath (please check that), or different versions on different nodes.
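
To rule out duplicate Hazelcast bundles in a Karaf deployment, a quick check from the Karaf console should list exactly one Hazelcast bundle (sketch; the exact listing format varies between Karaf versions):

    karaf@root()> bundle:list -t 0 | grep -i hazelcast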

noctarius
  • Thanks @noctarius, I only use Hazelcast 3.6.3, which is what Vert.x uses internally. The error happens even with a single node in the cluster, and only on Karaf/OSGi, so I think it is safe to assume that neither Vert.x nor Hazelcast is at fault here; it is just the combination on Karaf that is not working. – Gadi Mar 01 '17 at 10:20
  • Possibly; I would have to look into this myself. I have never actually tried Vert.x on top of Karaf. – noctarius Mar 02 '17 at 06:52

The solution involved classloading magic!

            .setClassLoader(HazelcastClusterManager.class.getClassLoader())

I ended up rolling my own Hazelcast instance, configured the way the Vert.x documentation instructs, plus the additional classloader configuration trick.

```
ServiceReference<HazelcastOSGiService> serviceRef =
        context.getServiceReference(HazelcastOSGiService.class);
log.info("Hazelcast OSGi Service Reference: {}", serviceRef);
hazelcastOsgiService = context.getService(serviceRef);
log.info("Hazelcast OSGi Service: {}", hazelcastOsgiService);

Map<String, SemaphoreConfig> semaphores = new HashMap<>();
semaphores.put("__vertx.*", new SemaphoreConfig().setInitialPermits(1));

Config hazelcastConfig = new Config("effectus-instance")
        // The essential part: make Hazelcast resolve classes such as
        // HazelcastServerID through the vertx-hazelcast bundle's classloader.
        .setClassLoader(HazelcastClusterManager.class.getClassLoader())
        .setGroupConfig(new GroupConfig("dev").setPassword("effectus"))
        //.setSerializationConfig(new SerializationConfig().addClassDefinition()
        .addMapConfig(new MapConfig()
                .setName("__vertx.subs")
                .setBackupCount(1)
                .setTimeToLiveSeconds(0)
                .setMaxIdleSeconds(0)
                .setEvictionPolicy(EvictionPolicy.NONE)
                .setMaxSizeConfig(new MaxSizeConfig()
                        .setSize(0)
                        .setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.PER_NODE))
                .setEvictionPercentage(25)
                .setMergePolicy("com.hazelcast.map.merge.LatestUpdateMapMergePolicy"))
        .setSemaphoreConfigs(semaphores);

hazelcastOSGiInstance = hazelcastOsgiService.newHazelcastInstance(hazelcastConfig);
log.info("New Hazelcast OSGi instance: {}", hazelcastOSGiInstance);
hazelcastOsgiService.getAllHazelcastInstances()
        .forEach(instance -> log.info("Registered Hazelcast OSGi instance: {}", instance.getName()));

clusterManager = new HazelcastClusterManager(hazelcastOSGiInstance);
VertxOptions options = new VertxOptions()
        .setClusterManager(clusterManager)
        .setHAGroup("effectus");

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx v = res.result();
        log.info("Vert.x is running in cluster mode: {}", v);

        // some more code...
```

So the issue is that the Hazelcast instance doesn't have access to the classes inside the vertx-hazelcast bundle.
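
The underlying failure can be reproduced with plain JDK classloading, independent of Hazelcast or OSGi: a class lookup only succeeds on a loader that can actually see the class. A minimal sketch (the class name `ClassLoaderDemo` is purely illustrative; the bootstrap loader stands in for "a loader that cannot see our bundle's classes"):

```java
// Minimal sketch of the underlying problem: Class.forName fails on a loader
// that cannot see the class, and succeeds once the right loader is supplied.
// This mirrors why Config.setClassLoader(...) fixes the ClassNotFoundException.
public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        String name = "ClassLoaderDemo";

        // String.class.getClassLoader() returns null, i.e. the bootstrap
        // loader, which cannot see application classes -- just like the
        // Hazelcast bundle's loader cannot see vertx-hazelcast classes.
        try {
            Class.forName(name, false, String.class.getClassLoader());
            System.out.println("unexpectedly visible");
        } catch (ClassNotFoundException e) {
            System.out.println("not visible to the wrong loader");
        }

        // Supplying the loader that owns the class resolves it, which is what
        // setClassLoader(HazelcastClusterManager.class.getClassLoader()) does
        // for Hazelcast's deserialization path.
        Class<?> c = Class.forName(name, false, ClassLoaderDemo.class.getClassLoader());
        System.out.println("resolved via explicit loader: " + c.getName());
    }
}
```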

I am sure there is a shorter, cleaner way somewhere..

any better suggestions would be great.

Gadi