
Using gfsh I start a locator:

start locator --name=LocatorUAT --properties-file=..\config\gemfire.properties

Then I start a server with a properties file and cache.xml:

start server --name=ServerUAT --properties-file=..\config\gemfire.properties

where the properties file has use-cluster-configuration=true and the cache.xml defines a number of regions like this:

<region name="deal" refid="REPLICATE_PERSISTENT">
  <region-attributes disk-store-name="deal" disk-synchronous="false"></region-attributes>
</region>

Then I start 2 more servers like this:

start server --name=ServerUAT2 --server-port=40405

start server --name=ServerUAT3 --server-port=40406

These all start fine: I can list members, and clients can connect to the cluster. Then I go to Pulse and see the topology with the 3 servers, and also see there are 47 regions:

[Pulse topology screenshot]

However, when I drill down into server2 or server3, Pulse shows regions=0. I was expecting to see the replicated regions on server2 and server3 as well. Why is that?

This is server1 with 47 regions:

[server1 screenshot]

This is server2 with 0 regions:

[server2 screenshot]

This is server3 with 0 regions:

[server3 screenshot]

rupweb

1 Answer


First things first: mixing the cluster configuration service with individual cache.xml files is not fully supported and can cause several problems. I'd recommend using a single approach when configuring your cluster, preferably the cluster configuration service, since individual cache.xml files will likely be deprecated in the future.

That said, the second and third servers don't appear to have been started with a cache.xml file. Regions are only created locally on a server when they are defined in that server's own cache.xml file, or when the locator pushes them to the server through the cluster configuration service. That's probably what's happening here.

How to fix the problem (assuming the above is correct): don't use individual cache.xml files for any member. Instead, create all the regions using gfsh commands so they get persisted within the cluster configuration service and "pushed" to the servers whenever they come up.
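As a sketch, the region from the question could be created once through gfsh instead of cache.xml; the cluster configuration service then replays it on every server that joins (the disk-store directory name here is a hypothetical path, not from the question):

create disk-store --name=deal --dir=deal-store

create region --name=deal --type=REPLICATE_PERSISTENT --disk-store=deal --enable-synchronous-disk=false

After that, list regions should show the region on every member, and newly started servers pick it up automatically.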

Juan Ramos
  • Thanks, there is just the one cache.xml file, which is picked up by the first server to start; I was expecting the 2nd and 3rd servers to take the same configuration from the cluster configuration service... do you know what we are missing? Also, if we had to create all the regions from `gfsh` it could take quite a long time - not a few seconds! – rupweb Sep 03 '20 at 08:46
  • The `cache.xml` files used by servers during startup are neither pushed to nor stored within the cluster configuration service; it's actually the other way around: whatever you have defined within the cluster configuration service (managed by `gfsh` commands) is pushed to the servers when they start up. You can get more details about how this works [here](https://gemfire.docs.pivotal.io/910/geode/configuring/cluster_config/gfsh_persist.html). Long story short, you have two options: – Juan Ramos Sep 03 '20 at 09:49
  • 1. Entirely configure your cluster using the cluster configuration service through `gfsh` commands. It might take a while, but it's done only once, as the changes are persisted on the locators and picked up by the servers upon each restart. – Juan Ramos Sep 03 '20 at 09:49
  • 2. Replicate the `cache.xml` file and make it available to every single server during startup so they all create the same regions with the same configurations (you might want to use relative paths for `disk-stores` in order to avoid problems). – Juan Ramos Sep 03 '20 at 09:51
  • I'm certainly an advocate for option 1: it's easier and it's the recommended mechanism to configure and manage your cluster. Having individual `cache.xml` files per server can be cumbersome and error-prone (every time you change something, it has to be changed on the other servers as well), but it's up to you. – Juan Ramos Sep 03 '20 at 09:52