I have the following question:
I have a Java service that reads from a queue and pushes data to Redis (SADD). We originally used Jedis, but I wanted to give Lettuce a try. Right now I am facing some performance issues, which I believe are caused by the amount of data we are pushing.
We use Spring Data Redis, and we have a Java POJO that we store as JSON. The insertion code looks like this:
public void add(final UUID uuid, final MyPojo... values) {
    final String key = getKey(uuid);
    final long startTime = System.currentTimeMillis();
    final List<Object> response = redisTemplate.executePipelined(new SessionCallback<List<Object>>() {
        @Override
        public <K, V> List<Object> execute(final RedisOperations<K, V> operations) throws DataAccessException {
            final BoundSetOperations setOperations = operations.boundSetOps((K) key);
            setOperations.add(values);
            setOperations.expire(expiration, expirationUnit);
            return null;
        }
    });
    final Long added = (Long) response.get(0);
    final Boolean expirationSet = (Boolean) response.get(1);
    if (added != values.length || !expirationSet) {
        final String msg = String.format("Error executing commands: Added %d, expected = %d. Expiration set = %b",
                added, values.length, expirationSet);
        throw new DataIntegrityViolationException(msg);
    }
    if (log.isInfoEnabled()) {
        log.info("add took = {} millis", System.currentTimeMillis() - startTime);
    }
}
The connection factory is configured like this:
final ClientResources clientResources = DefaultClientResources.builder()
        .ioThreadPoolSize(4)
        .computationThreadPoolSize(4)
        .build();

final LettuceConnectionFactory connectionFactory = new LettuceConnectionFactory();
connectionFactory.setHostName(getRedis().getHostname());
connectionFactory.setPort(getRedis().getPort());
connectionFactory.setShareNativeConnection(true);
connectionFactory.setTimeout(30000);
connectionFactory.setClientResources(clientResources);
Some SADD operations are taking too long (around 10 seconds). My main question is: is there any improvement I can apply to increase performance? Maybe partitioning the data and sending a pre-defined number of values at a time? What else can I try?
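To illustrate what I mean by partitioning, here is a rough sketch. The batch size of 1000 is just a guess, and `PartitionExample`/`partition` are names I made up for the example; each batch would go through its own `executePipelined` call instead of one huge SADD:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionExample {

    // Split an array into chunks of at most batchSize elements,
    // preserving order; the last chunk may be smaller.
    static <T> List<List<T>> partition(T[] values, int batchSize) {
        final List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < values.length; i += batchSize) {
            final List<T> batch = new ArrayList<>();
            final int end = Math.min(i + batchSize, values.length);
            for (int j = i; j < end; j++) {
                batch.add(values[j]);
            }
            batches.add(batch);
        }
        return batches;
    }

    public static void main(String[] args) {
        final Integer[] values = new Integer[2500];
        for (int i = 0; i < values.length; i++) {
            values[i] = i;
        }
        final List<List<Integer>> batches = partition(values, 1000);
        System.out.println(batches.size());          // prints 3
        System.out.println(batches.get(2).size());   // prints 500
        // Each batch would then be sent separately, e.g.:
        //   setOperations.add(batch.toArray());
        // inside its own executePipelined(...) call, so each
        // round-trip stays small and the EXPIRE is set once at the end.
    }
}
```

Is this the right direction, or does pipelining already make this unnecessary?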