Hmmm.
First, it is hard to describe exactly what problem you are experiencing, but I am nearly certain it has far less to do with Spring Data, or technically Spring's Cache Abstraction in this case (especially since you mention "caching" using the @Cacheable annotation), than it does with Pivotal GemFire itself, or more likely with your application domain model specifically.
Second, the problem you are experiencing has very little to do with your configuration shown above. Essentially, in your configuration, you are creating a "peer" Cache instance along with Regions for each of the caches identified in the @Cacheable annotations declared on your application service methods, which is not particularly interesting in this case.
TIP: Regarding configuration, it would have been better to do this:
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableCachingDefinedRegions;

@SpringBootApplication
@EnableCachingDefinedRegions
public class MyCachingSpringBootApplication { ... }
See here, here and here for more information.
NOTE: SBDG creates a ClientCache instance by default, not a "peer" Cache instance. If you truly want your Spring application to contain an embedded peer Cache instance and be part of the server cluster, then you would additionally override SBDG's preference of auto-configuring a ClientCache instance by declaring the @PeerCacheApplication annotation, as sketched below. See here for more details.
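For example, a minimal sketch (assuming SDG's @PeerCacheApplication from org.springframework.data.gemfire.config.annotation; the application name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableCachingDefinedRegions;
import org.springframework.data.gemfire.config.annotation.PeerCacheApplication;

@SpringBootApplication
@EnableCachingDefinedRegions
// Overrides SBDG's auto-configured ClientCache with an embedded peer Cache
@PeerCacheApplication(name = "MyCachingSpringBootApplication")
public class MyCachingSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyCachingSpringBootApplication.class, args);
    }
}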
Next, you mention that you "overrode" equals and hashCode, which seems to suggest you are using some complex key. In general, it is better to stick with simple key types when using Pivotal GemFire, such as Long, Integer, String, etc., for reasons like what you are experiencing.
A better option, if you need to influence your partitioning strategy or data organization across the cluster (e.g. perhaps for collocation), is to implement GemFire's PartitionResolver interface and register it with the PARTITION Region (PR), as in the sketch below.
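A minimal sketch, assuming Pivotal GemFire 9+ (org.apache.geode packages) and hypothetical CustomerKey and Account types, where CustomerKey exposes an illustrative getAccountId() accessor:

import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionResolver;

public class AccountPartitionResolver implements PartitionResolver<CustomerKey, Account> {

    // Entries returning the same routing object land on the same partition,
    // collocating all of an account's data in the cluster.
    @Override
    public Object getRoutingObject(EntryOperation<CustomerKey, Account> opDetails) {
        return opDetails.getKey().getAccountId();
    }

    @Override
    public String getName() {
        return getClass().getName();
    }
}

You would then register the resolver on the PARTITION Region, for example via PartitionAttributesFactory.setPartitionResolver(..).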
However, it is not uncommon for your cacheable service methods to look like the following:
@Cacheable("CustomersByAccount")
Account findBy(Customer customer) { ... }
As you may well know, the "key" to the @Cacheable "findBy" service method shown above is Customer, which is clearly a complex object and must have valid equals and hashCode methods when used as a key in the GemFire cache Region backing the application cache "CustomersByAccount".
A few questions:
Is it possible that A) your complex key's class definition (e.g. Customer) changed, such as by adding/removing a field or by changing a field type, and B) the PARTITION Region backing the cache (e.g. "CustomersByAccount") is persistent?
Are your equals and hashCode methods consistent? That is, do they declare and use the same fields to determine the result of equals and hashCode?
For example, this would not be valid:
class Customer {

    private Long id;

    private String firstName;
    private String lastName;

    ...

    @Override
    public boolean equals(Object obj) {

        if (this == obj) {
            return true;
        }

        if (!(obj instanceof Customer)) {
            return false;
        }

        Customer that = (Customer) obj;

        // equals(..) is determined solely by id...
        return this.id.equals(that.id);
    }

    @Override
    public int hashCode() {

        // ...but hashCode() is computed from firstName and lastName,
        // so two "equal" Customers can produce different hash codes.
        int hashValue = 17;
        hashValue = 37 * hashValue + this.firstName.hashCode();
        hashValue = 37 * hashValue + this.lastName.hashCode();
        return hashValue;
    }

    ...
}
Or any other combination where equals/hashCode could potentially yield a different result depending on state previously stored in GemFire.
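By contrast, a consistent pair bases both methods on the same field(s); for example, a sketch keyed on id alone:

@Override
public boolean equals(Object obj) {

    if (this == obj) {
        return true;
    }

    if (!(obj instanceof Customer)) {
        return false;
    }

    Customer that = (Customer) obj;

    return this.id.equals(that.id);
}

@Override
public int hashCode() {

    // Uses the same field (id) as equals(..), so the contract holds.
    int hashValue = 17;
    hashValue = 37 * hashValue + this.id.hashCode();
    return hashValue;
}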
You might also try clearing the cache and rehydrating (eagerly or lazily as necessary), particularly if your class definitions have changed and especially if some of those class types are used as keys.
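If you go that route, Spring's @CacheEvict can clear the backing Region; a sketch (the service class and method name are illustrative):

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.stereotype.Service;

@Service
public class CustomerService {

    // Removes all entries from the "CustomersByAccount" cache so that
    // subsequent @Cacheable calls recompute and re-cache their results.
    @CacheEvict(cacheNames = "CustomersByAccount", allEntries = true)
    public void evictAll() { }
}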
Also, in general, I would recommend immutable keys as much as possible if it is not possible to strictly stick to simple/scalar types (e.g. Long or String), as in the sketch below.
Perhaps, if you could share a few more details about your application domain model classes, such as the types used as keys, along with your use of Spring's Cache Abstraction on your service methods, that might help.
Also, any examples or test cases reproducing the problem are greatly appreciated.
Thanks!