I'm working on a Kafka record consumer and comparator. I'm using Caffeine to store the consumed messages with a unique ID for later comparison. Here's a simplified example.
// Spring cache configuration
Cache<String, LinkedList<Data>> cache = Caffeine.newBuilder().build();
// Consumer: the check that the key is already present is done above and omitted here.
LinkedList<Data> dataLinkedList = cache.getIfPresent("KEY");
// Directly update the list with consumed data, ordered by the timestamp on the Data object (mocked in this example).
dataLinkedList.add(new Data(UUID.randomUUID().toString(), rand.nextLong(1000) - 10));
Is this approach of directly updating the list reference completely stupid, or do I have to call put again with the new collection? The application is not multi-threaded.
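For reference, the "call put again" variant I'm asking about would look roughly like this (just a sketch reusing the same cache, rand, and Data from above):

// Alternative: read, modify, and explicitly put the list back under the same key.
LinkedList<Data> list = cache.getIfPresent("KEY");
if (list == null) {
    list = new LinkedList<>();
}
list.add(new Data(UUID.randomUUID().toString(), rand.nextLong(1000) - 10));
cache.put("KEY", list);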
Following @Ben Manes's suggestion, I added this:
public static BiFunction<String, LinkedList<Data>, LinkedList<Data>> mapperFunc(Data data) {
    return (key, list) -> {
        // Start a fresh list when the key has no mapping yet; compute passes null in that case.
        if (list == null) {
            list = new LinkedList<>();
        }
        // Find the insertion point so the list stays ordered by timestamp (newest first).
        int current;
        for (current = 0; current < list.size(); current++) {
            if (data.timeStamp() > list.get(current).timeStamp())
                break;
        }
        list.add(current, data);
        return list;
    };
}
// add elements
cache.asMap().compute("KEY", mapperFunc(new Data(UUID.randomUUID().toString(), 1234)));
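For completeness, Data in these snippets is just a mocked immutable holder, roughly like this (I'm on a recent JDK, so a record):

// Mocked message wrapper: a unique ID plus the timestamp used for ordering.
public record Data(String id, long timeStamp) { }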