
I use KafkaEmbedded in an integration test, and I get a FileNotFoundException:

java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp 
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_141]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:162) ~[na:1.8.0_141]
at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:43) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:58) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1118) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) [scala-library-2.11.11.jar:na]
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116) [scala-library-2.11.11.jar:na]
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) [scala-library-2.11.11.jar:na]
at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$1.apply$mcV$sp(ReplicaManager.scala:211) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57) [kafka_2.11-0.11.0.0.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_141]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]

My tests pass, but I get this error at the end of my build.

After many hours of research I found this:

  • Kafka's TestUtils.tempDirectory method is used to create the temporary directory for the embedded Kafka broker. It also registers a shutdown hook that deletes this directory when the JVM exits.
  • When the unit tests finish execution, the JVM calls System.exit, which in turn executes all registered shutdown hooks.

If the Kafka broker is still running at the end of the unit tests, it will attempt to write/read data in a directory that has already been deleted, producing these various FileNotFoundExceptions, as the sketch below illustrates.
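To make the race concrete, here is a minimal standalone sketch (hypothetical class and file names, not Kafka's actual code): one shutdown hook deletes the temporary directory while a second hook, standing in for the broker's checkpoint writer, is still writing into it.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

public class ShutdownHookRaceSketch {

    public static void main(String[] args) throws Exception {
        File tempDir = Files.createTempDirectory("kafka-").toFile();

        // Like TestUtils.tempDirectory: delete the directory when the JVM exits
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            File[] files = tempDir.listFiles();
            if (files != null) {
                for (File file : files) {
                    file.delete();
                }
            }
            tempDir.delete();
        }));

        // Stand-in for the broker's checkpoint task, still writing at exit
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try (FileOutputStream out = new FileOutputStream(
                    new File(tempDir, "replication-offset-checkpoint.tmp"))) {
                out.write('0');
            } catch (IOException e) {
                // java.io.FileNotFoundException if the deleting hook ran first
                e.printStackTrace();
            }
        }));

        System.exit(0); // runs both hooks; their relative order is unspecified
    }
}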

My config class:

@Configuration
public class KafkaEmbeddedConfiguration {

    private final KafkaEmbedded kafkaEmbedded;

    public KafkaEmbeddedConfiguration() throws Exception {
        kafkaEmbedded = new KafkaEmbedded(1, true, "topic1");
        kafkaEmbedded.before();
    }

    @Bean
    public KafkaTemplate<String, Message> sender(ProtobufSerializer protobufSerializer,
            KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry) throws Exception {
        KafkaTemplate<String, Message> sender = KafkaTestUtils.newTemplate(kafkaEmbedded,
                new StringSerializer(), protobufSerializer);
        for (MessageListenerContainer listenerContainer :
                kafkaListenerEndpointRegistry.getListenerContainers()) {
            ContainerTestUtils.waitForAssignment(listenerContainer,
                    kafkaEmbedded.getPartitionsPerTopic());
        }
        return sender;
    }
}

Test class:

@RunWith(SpringRunner.class)
public class DeviceEnergyKafkaListenerIT {
    ...

    @Autowired
    private KafkaTemplate<String, Message> sender;

    @Test
    public void test() {
        ...
        sender.send(topic, msg);
        sender.flush();
    }
}

Any ideas how to resolve this, please?

qasmi

3 Answers


With a @ClassRule broker, add an @AfterClass method...

@AfterClass
public static void tearDown() {
    // Stop the brokers before the JVM shutdown hooks delete the log directories
    embeddedKafka.getKafkaServers().forEach(b -> b.shutdown());
    embeddedKafka.getKafkaServers().forEach(b -> b.awaitShutdown());
}

For a @Rule or bean, use an @After method.
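For example, with a broker held as a @Rule (a minimal sketch mirroring the @ClassRule example above; the field name and topic are illustrative):

@Rule
public KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "topic1");

@After
public void tearDown() {
    embeddedKafka.getKafkaServers().forEach(b -> b.shutdown());
    embeddedKafka.getKafkaServers().forEach(b -> b.awaitShutdown());
}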

Gary Russell
  • I have multiple integration tests which share the same Spring test context. I do not want to shut down the Kafka broker in each test. I also have a Cassandra dependency in my context, and I can't shut down the Cassandra cluster between tests. – qasmi Feb 27 '18 at 14:09
  • If you declare it as a `@Bean` instead of using `new`, the `destroy()` method should shut it down when the application context is closed (it's a `DisposableBean`, since 1.3.x; sketched below). – Gary Russell Feb 27 '18 at 14:32
  • I put KafkaEmbedded in a wrapper and use it as a bean to avoid calling KafkaEmbedded's after() method, and I have a @PreDestroy method to shut down the Kafka server, but I still have the build failure. I have this problem only when I have the Cassandra dependency in the context. – qasmi Feb 27 '18 at 14:47
  • For those using the `@EmbeddedKafka` annotation (available from spring-kafka 2.0), you can add `controlledShutdown = true` to the annotation to achieve the same effect Gary described (also sketched below). – Tobias Gies Mar 27 '19 at 14:54
  • The `controlledShutdown` didn't work for me; `@DirtiesContext` did the trick (I put it on a method, but this will depend on your situation). – jvwilge Mar 28 '19 at 07:46
  • Adding `@DirtiesContext` also worked for me (at the class level), as `controlledShutdown = true` had no effect. – Bruno Gasparotto Dec 20 '20 at 22:37
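To sketch the two suggestions from these comments (assuming spring-kafka 1.3.x+ for the bean approach and 2.0+ for the annotation; class and method names are illustrative):

// 1) Declare the broker as a @Bean instead of creating it with `new`:
//    KafkaEmbedded is a DisposableBean, so closing the application context
//    calls destroy() and stops the broker before the temp dir is deleted.
@Bean
public KafkaEmbedded kafkaEmbedded() {
    return new KafkaEmbedded(1, true, "topic1");
}

// 2) With the @EmbeddedKafka annotation, request a controlled shutdown:
@RunWith(SpringRunner.class)
@EmbeddedKafka(controlledShutdown = true)
public class DeviceEnergyKafkaListenerIT {
    // ...
}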
Shut down the ReplicaManager via reflection before calling after(), so the scheduled checkpoint task can no longer touch the deleted directory:

final KafkaServer server =
        embeddedKafka.getKafkaServers().stream().findFirst().orElse(null);
if (server != null) {
    // Stop the replica manager so it stops checkpointing high watermarks
    server.replicaManager().shutdown(false);
    // Null out the field via reflection so nothing can reuse it afterwards
    final Field replicaManagerField = server.getClass().getDeclaredField("replicaManager");
    replicaManagerField.setAccessible(true);
    replicaManagerField.set(server, null);
}
embeddedKafka.after();

For a more detailed discussion, you can refer to this thread: Embedded kafka issue with multiple tests using the same context.

pannu

The following solution provided by mhyeon-lee has worked for me:

import org.apache.kafka.common.utils.Exit;
import org.junit.jupiter.api.Test;

class SomeTest {
    static {
        // Swallow the halt(1) triggered by the log-dir shutdown race;
        // pass any other status code through to Runtime.halt.
        Exit.setHaltProcedure((statusCode, message) -> {
            if (statusCode != 1) {
                Runtime.getRuntime().halt(statusCode);
            }
        });
    }

    @Test
    void test1() {
    }

    @Test
    void test2() {
    }
}

When the JVM shutdown hooks run, the Kafka log files are deleted, and Exit.halt(1) is called when another shutdown hook accesses the Kafka log files at the same time.

Since halt is called here with status 1, I only defend against status 1: https://github.com/a0x8o/kafka/blob/master/core/src/main/scala/kafka/log/LogManager.scala#L193

If you encounter a situation where the tests fail with a different status value, you can add defensive code for it.

An error may still be logged, but the tests will not fail, because the halt is not propagated to Runtime.halt.

References:

https://github.com/spring-projects/spring-kafka/issues/194#issuecomment-612875646
https://github.com/spring-projects/spring-kafka/issues/194#issuecomment-613548108

driver733