
First, a link to the library: ServiceStack.Redis

Now, I'm working on a generic cache mechanism which, for now, supports 4 methods:

Put, Get, PutMany, GetMany

The problem is that whenever I want to insert numerous records, I don't have an option (visible to me) to add an expiration, unlike Put, where I can.

The code is pretty straightforward:

public void PutMany(ICollection<T> items)
{
    TimeSpan expiration = GetExpiration();
    DateTime expire = DateTime.UtcNow.Add(expiration);

    Dictionary<string, CachedItem<T>> dict = new Dictionary<string, CachedItem<T>>();
    foreach (T item in items)
    {
        CachedItem<T> cacheItem = new CachedItem<T>(item, expire);

        if (!dict.ContainsKey(cacheItem.Id))
            dict.Add(cacheItem.Id, cacheItem);
    }

    // Store items in cache - SetAll has no overload that accepts an expiration
    _client.SetAll(dict);
}

The model CachedItem<T> is mine; just think of it as an arbitrary object.

As you can see, I don't have a way to set the expiration. Is there a way (besides inserting them one by one using _client.Set()) to achieve this?

TIA.

P.S.

I know I can store all the records in a list or a hash, but I don't want all the records to share a single expiration date (that would be wrong, and it can cause very serious performance issues when they all expire at once).

Ori Refael

1 Answer


Redis does not have any command that lets you set an expiry as part of a bulk insert, nor do any of its EXPIRE commands allow you to apply an expiry to multiple keys.

To avoid an N+1 operation, you'll need to queue multiple SET commands in a Redis transaction or pipeline, setting each entry individually with an expiry, e.g.:

var expireIn = GetExpiration(); // the TimeSpan each entry should live for

using (var trans = Redis.CreateTransaction())
{
    foreach (var entry in dict)
    {
        trans.QueueCommand(r => r.SetValue(entry.Key, entry.Value, expireIn));
    }

    trans.Commit();
}

ServiceStack.Redis will still send the multiple SET operations to Redis in a single bulk transaction.
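For completeness, here is a sketch of how the asker's PutMany could be adapted to this approach. It assumes CachedItem<T> is serialized to a string before storage (here with ServiceStack.Text's ToJson() extension, as an example); GetExpiration and CachedItem<T> are names from the question, not part of the library:

```csharp
public void PutMany(ICollection<T> items)
{
    TimeSpan expireIn = GetExpiration();

    // De-duplicate by Id, exactly as in the original PutMany
    var dict = new Dictionary<string, CachedItem<T>>();
    foreach (T item in items)
    {
        var cacheItem = new CachedItem<T>(item, DateTime.UtcNow.Add(expireIn));
        if (!dict.ContainsKey(cacheItem.Id))
            dict.Add(cacheItem.Id, cacheItem);
    }

    // Queue one SET-with-expiry per entry; all queued commands are
    // sent to Redis in a single network write when Commit() is called
    using (var trans = _client.CreateTransaction())
    {
        foreach (var entry in dict)
        {
            var key = entry.Key;
            var json = entry.Value.ToJson(); // assumption: JSON serialization

            trans.QueueCommand(r => r.SetValue(key, json, expireIn));
        }

        trans.Commit();
    }
}
```

Note that the key and value are copied into locals before the lambda: QueueCommand captures the lambda for later execution at Commit(), so capturing loop-scoped locals makes the intent explicit and safe.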

mythz
  • as long as its 1 request, thats fine. – Ori Refael May 10 '17 at 14:37
  • @MichaC ServiceStack.Redis batches all commands in a single network write so it doesn't have N+1 request latency. LUA would be more overhead. – mythz May 12 '17 at 08:26
  • @mythz Redis still has to do n+1, that's what I meant. Sure, network latency is one thing, but the Redis Server still has to run all those commands ^^. Don't see why LUA would be more overhead? It would be exactly one command/request which then runs the loop on the server. – MichaC May 12 '17 at 09:01
  • @MichaC The slowest part about executing N+1 requests is the latency of a separate network write/read per request, this doesn't happen with Redis pipelining as explained. LUA still has to execute the same commands on the server but with the additional script processing overhead. – mythz May 12 '17 at 09:05
  • @mythz there is no script processing overhead if you pre-load the LUA script. In addition, you don't have to put a "transaction" around all the commands because LUA scripts already run atomic which can improve performance. But hey, that's just another flavor ^^ – MichaC May 12 '17 at 09:11
  • @MichaC what do you think Redis does faster, executing Redis commands natively or running them through a LUA interpreter to execute them through a scripting proxy object? Transactions are also atomic, but that's not where the latency comes from as it would be marginally faster executing them through a non-atomic pipelined request – mythz May 12 '17 at 09:14