
I am currently working on a project that uses Spring Boot, Apache Kafka, and GemFire. In this project I have to subscribe to a topic from Kafka and delete some matching keys from a GemFire Region.

I am able to successfully subscribe to the topic, but when I try to delete the keys from the GemFire Region it throws a "no such bean" exception. In the GemFire configuration I am using @EnableClusterDefinedRegions. The issue is that Spring has a weird behavior in that it loads the GemFire Regions after the Spring application context is loaded. To overcome this I made a custom repository implementing ApplicationContextAware, overrode setApplicationContext, and wrote a getRegion method where I get the Region via context.getBean("Region Name"), but I am still not able to load the required Region bean. Can someone suggest something?

John Blum
  • 7,381
  • 1
  • 20
  • 30

1 Answer


Regarding...

The issue is that Spring has a weird behavior that it loads the GemFire Regions after the Spring ApplicationContext is loaded.

Technically (from here, to here, and finally, here), this happens after the ClientCache bean is initialized, not necessarily after the Spring ApplicationContext is (fully) loaded, or rather after the ContextRefreshedEvent. It is an issue in your Spring application configuration.

The feature to which you are referring is from Spring Data for Apache Geode, or alternatively VMware Tanzu GemFire (SDG).

The feature is used by declaring the SDG @EnableClusterDefinedRegions annotation (Javadoc) in your Spring application configuration.

The behavior might seem "weird", but is in fact quite necessary.

PREREQUISITE KNOWLEDGE

With Spring configuration, regardless of the source (XML, JavaConfig, Groovy, annotations, or otherwise), there are two primary phases: parsing and initialization.

Spring uses a generic, common representation to model the configuration (i.e. BeanDefinition) for each bean defined, declared and managed by the Spring container when parsing the bean definition(s) from any configuration source. This model is then used to create the resolved beans during initialization.
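The two phases can be observed with plain Spring. Below is a minimal sketch, assuming only spring-context on the classpath; the ExampleConfiguration class and the greeting bean are hypothetical names used purely for illustration:

```java
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration used only to illustrate the two phases
@Configuration
class ExampleConfiguration {

  @Bean
  String greeting() {
    return "hello";
  }
}

public class BeanDefinitionDemo {

  public static void main(String[] args) {

    try (AnnotationConfigApplicationContext context =
        new AnnotationConfigApplicationContext(ExampleConfiguration.class)) {

      // Parsing produced this generic metadata model of the bean...
      BeanDefinition definition = context.getBeanDefinition("greeting");
      System.out.println(definition.isSingleton());

      // ...which initialization then used to create the resolved bean instance
      System.out.println(context.getBean("greeting"));
    }
  }
}
```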

Parsing allows the Spring container to determine (for one) the necessary dependencies between beans and the proper order of initialization on startup.

When using SDG's @EnableClusterDefinedRegions annotation, the Spring application (a GemFire/Geode ClientCache application) must be connected to an existing GemFire/Geode cluster, where the Regions have already been defined, in order to create matching client-side Regions.

In order to connect to a cluster from the client, you would have to have defined (explicitly or implicitly) a connection (or connections) to the cluster using a GemFire/Geode Pool (Javadoc). This Pool (or Pools) is also registered as a bean in the Spring container by SDG.

The ClientCache or client Pool beans contain the metadata used to create connections to the cluster. The connections are necessary to perform Region data access operations, or even determine the Regions that need to be created on the client-side to be able to perform Region data access operations and persist data on the server-side in the first place.

All of this cannot happen until the client Pools are "initialized", thereby forming connections to the cluster where the necessary request can then be made to determine the Regions in the cluster. This is not unlike how the Gfsh list regions command works, in fact. Gfsh must be connected to execute the list regions command.

The main purpose of using SDG's @EnableClusterDefinedRegions annotation is so you do not have to explicitly define client-side ([CACHING_]PROXY) Regions for Regions that have already been defined in an (existing) cluster. It is a convenience. But it does not mean there are no (implied) dependencies on the resulting (client) Regions imposed by your Spring application, dependencies that must be carefully considered and ordered.

Now...

I suspect your Spring application is using Spring for Apache Kafka (??) to define Kafka Topic subscriptions/listeners to receive messages, and that you somehow loosely coupled the Kafka Topic listener receiving messages from the Kafka topic to the GemFire/Geode client Region.

The real question then is, how did you initially get a reference to the client Region from which you delete keys when an event is received from the Kafka topic?

You say that, "I am able to successfully subscribe the topic but while deleting the keys from the Gemfire region it throws No bean such exception when I try to delete from that region."

Do you mean the NoSuchBeanDefinitionException? This Exception is typically thrown on startup when using Spring container dependency injection, such as when defining a @KafkaListener as described here, like so:

@Component
class MyApplicationListeners {

  @Autowired
  @Qualifier("myRegion")
  private Region<String, Object> clientRegion;

  @KafkaListener(id = "foo", topics = "myTopic")
  public void listener(String key) {
    clientRegion.remove(key);
  }
}

However, when you specifically say, "..while deleting the keys from the GemFire Region..", that would imply you were initially doing some sort of lookup (e.g. clientCache.getRegion(..)):

@Component
class MyApplicationListeners {

  @Autowired
  private ApplicationContext applicationContext;

  @KafkaListener(id = "foo", topics = "myTopic")
  public void listener(String key) {
    applicationContext.getBean("myRegion", Region.class).remove(key);
  }
}

Not unlike your attempted workaround using an ApplicationContextAware implementation.
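For reference, the workaround described in the question presumably looked something like the following sketch (the class and method names here are hypothetical, reconstructed from the description). It suffers from the same timing problem, because looking the Region up through the ApplicationContext still requires the Region bean to have been registered first:

```java
import org.apache.geode.cache.Region;
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Hypothetical reconstruction of the custom repository from the question
@Component
class GemfireRegionLookup implements ApplicationContextAware {

  private ApplicationContext applicationContext;

  @Override
  public void setApplicationContext(ApplicationContext applicationContext)
      throws BeansException {

    this.applicationContext = applicationContext;
  }

  Region<String, Object> getRegion(String regionName) {
    // Still throws NoSuchBeanDefinitionException if called before SDG's
    // BeanPostProcessor has registered the cluster-defined Region beans
    return this.applicationContext.getBean(regionName, Region.class);
  }
}
```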

At any rate, you definitely have a bean initialization ordering problem, and I am nearly certain it is caused by a loose coupling between the bean dependencies (not to be confused with "tight coupling in code").

Not knowing all your Spring application configuration details for sure, you can solve this in one of several ways.

  1. First, the easiest and most explicit (and recommended) way to solve this is to declare an explicit client-side Region bean definition matching the server-side Region:
@Configuration
@EnableClusterDefinedRegions
class MyApplicationConfiguration {

  @Bean("myRegion")
  ClientRegionFactoryBean myRegion(ClientCache cache) {

    ClientRegionFactoryBean myRegion = new ClientRegionFactoryBean();

    myRegion.setCache(cache);
    myRegion.setName("myRegion");
    myRegion.setShortcut(ClientRegionShortcut.PROXY);

    return myRegion;
  }

  // other declared application bean definitions
}

Then when the Region is injected by the Spring container in:

  @Autowired
  @Qualifier("myRegion")
  private Region<String, Object> clientRegion;

  @KafkaListener(id = "foo", topics = "myTopic")
  public void listener(String key) {
    clientRegion.remove(key);
  }
}

It will definitely exist!

SDG's @EnableClusterDefinedRegions is also careful not to stomp on explicit Region bean definitions if a Region bean is already defined (explicitly) in your Spring application configuration, as demonstrated above. Just be careful that the client Region (bean name) matches the server-side Region by "name".

Otherwise, you can take advantage of the fact that the SDG framework attempts to eagerly initialize the client Regions from the cluster in a BeanPostProcessor, by defining an "order"; see https://github.com/spring-projects/spring-data-geode/blob/2.7.1/spring-data-geode/src/main/java/org/springframework/data/gemfire/config/annotation/ClusterDefinedRegionsConfiguration.java#L90.

Then, you could simply do:

@Component
@Order(1)
class MyApplicationListeners {

  @Autowired
  @Qualifier("myRegion")
  private Region<String, Object> clientRegion;

  @KafkaListener(id = "foo", topics = "myTopic")
  public void listener(String key) {
    clientRegion.remove(key);
  }
}

That is, use the Spring Framework @Order annotation on the MyApplicationListeners class containing the Kafka listener used to delete keys from the cluster/server Region through the client Region.

In this case, no explicit client-side Region bean definition is necessary.

Of course, some other, perhaps non-obvious, dependency on your MyApplicationListeners class in your Spring application configuration could force an eager initialization of the MyApplicationListeners class, and you could potentially still hit a NoSuchBeanDefinitionException on startup during DI. In that case, the Spring container must respect the dependency order and will therefore override the @Order declaration on the MyApplicationListeners class (bean).

Still, you could also delay the reception of events from the Kafka topic subscriptions by setting autoStartup to false; see here. Then, you could subsequently listen for the Spring container's ContextRefreshedEvent to start the Kafka listener container and begin receiving events in your @KafkaListener methods once the Spring application is properly initialized. Remember, all automatic client Region bean creation using the SDG @EnableClusterDefinedRegions annotation happens inside a BeanPostProcessor, and all BeanPostProcessors are called by the Spring container before the context is completely refreshed (i.e. before the ContextRefreshedEvent is published). See the Spring Framework documentation for more details on BPPs.
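A minimal sketch of that last option, assuming Spring for Apache Kafka's KafkaListenerEndpointRegistry and reusing the hypothetical "myRegion"/"myTopic" names from the earlier examples; the listener container is declared with autoStartup = "false" and only started after the ContextRefreshedEvent:

```java
import org.apache.geode.cache.Region;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
class MyApplicationListeners {

  @Autowired
  private ApplicationContext applicationContext;

  @Autowired
  private KafkaListenerEndpointRegistry registry;

  // autoStartup = "false" delays message reception until we explicitly start
  @KafkaListener(id = "foo", topics = "myTopic", autoStartup = "false")
  public void listener(String key) {
    // Lazy lookup; by the time messages arrive, the Region bean exists
    this.applicationContext.getBean("myRegion", Region.class).remove(key);
  }

  // ContextRefreshedEvent fires after all BeanPostProcessors have run,
  // so the cluster-defined Region beans are registered by now
  @EventListener(ContextRefreshedEvent.class)
  public void startKafkaListeners() {
    this.registry.getListenerContainer("foo").start();
  }
}
```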

Anyway, you have a lot of options.

  • Hi John. I tried the first way of explicitly making a bean. With that I am facing an exception when autowiring the @Autowired @Qualifier("myRegion") private Region clientRegion; I am getting NoSuchBeanDefinitionException: no qualifying bean of org.apache.geode.cache.Region available; expected at least 1 bean which qualifies as autowire candidate. Can you please help with this? – JavaLearner Jun 27 '22 at 07:52
  • The second way, ordering, is not working; it is still giving me "no bean". Can you please help me with the first way? I am getting the exception I mentioned above. – JavaLearner Jun 27 '22 at 08:14
  • We are trying to create a bean of ClientRegionFactoryBean and we are autowiring Region; I don't think this will work. Can you please provide a solution to this? – JavaLearner Jun 27 '22 at 09:58
  • The first approach should work as expected. The Spring container might be (IIRC) using the generic type signature to be exact about the Region bean that is being autowired, so you probably need to be explicit about the Region bean definition by doing the following: `@Bean("myRegion") ClientRegionFactoryBean myRegion(ClientCache cache) { ... }`. – John Blum Jun 27 '22 at 15:39
  • Here is an example test class from the SDG test suite: https://github.com/spring-projects/spring-data-geode/blob/2.7.1/spring-data-geode/src/test/java/org/springframework/data/gemfire/client/ClientRegionIntegrationTests.java – John Blum Jun 27 '22 at 15:42
  • I just played around with the generic typing of both the Region bean injection as well as the Region bean definition declared in the Spring configuration, and with or without strong typing on either seems to work as expected. Perhaps you have another configuration problem going on here? If you could send me a link to a Gist reproducing the issue, I could further evaluate. Thanks. – John Blum Jun 27 '22 at 15:48
  • Additionally, here is another applicable test class from SDG's test suite that you can review: https://github.com/spring-projects/spring-data-geode/blob/main/spring-data-geode/src/test/java/org/springframework/data/gemfire/config/annotation/EnableClusterDefinedRegionsIntegrationTests.java. I added an additional client-only Region bean definition to the client's (test class's) Spring configuration, in addition to the client (`PROXY`) Region beans that will be created from the Regions defined on the server. All works as expected. – John Blum Jun 27 '22 at 16:38