
I am using the ServiceStack.Redis pooled client manager in a ServiceStack API, and after a couple of hours of traffic at about 3,000 rpm I receive a connection timeout exception from the pool manager. The implementation is as follows:

In AppStart:

    container.Register<IRedisClientsManager>(
        p => new RedisManagerPool(Configuration.Config.Instance.RedisConfig.Server)
        {
            MaxPoolSize = 10000,
            PoolTimeoutMs = 2000
        }).ReusedWithin(ReuseScope.Container);

In the service:

    Pool = (RedisManagerPool)GetResolver().TryResolve<IRedisClientsManager>();
    RedisClient = (RedisClient)Pool.GetClient();

....

RedisClient.Dispose();

I also tried disposing the client with Pool.DisposeClient(RedisClient) in order to return it to the pool, but I see the same results.

I've also checked the Redis server, but there are no issues there: CPU and memory usage are normal and there are 0 refused connections.

Can you please let me know if anybody else has encountered this?

Thank you

Radu Cotofana

1 Answer


I wouldn't use a pool size that big; keeping 10,000 open connections seems worse than having no connection pooling at all.

You also don't need to specify ReuseScope.Container, since the default is a singleton, which is the correct scope for a manager/factory. So I would first try the default configuration:

container.Register<IRedisClientsManager>(c => 
    new RedisManagerPool(Configuration.Config.Instance.RedisConfig.Server));

The pool timeout exception suggests that the connection pool has filled up and no connections were freed up (i.e. disposed) within the pool timeout.
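The most common cause of the pool filling up is clients that are never disposed on some code path (e.g. when an exception is thrown before Dispose() is reached). A minimal sketch of a pattern that guarantees the client is returned to the pool, using a `using` block (the key name here is just for illustration):

```csharp
// Disposing a pooled client returns it to the pool, so a using block
// guarantees it's released even if an exception is thrown mid-request.
using (var redis = TryResolve<IRedisClientsManager>().GetClient())
{
    redis.SetValue("last-request", DateTime.UtcNow.ToString("o"));
}
```

If every code path that calls GetClient() disposes the client like this, the pool should never exhaust under normal load.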

I recommend using the latest v4.0.34 on MyGet, where RedisManagerPool has an alternate pooling strategy: once the connection pool is full it will create new unmanaged client instances instead of blocking and throwing after the pool timeout has been reached.

Also, in your Service you can access the Redis client using base.Redis, since it automatically creates an instance when first accessed which is disposed after the Service is executed, e.g.:

public class Service : IDisposable
{
    private IRedisClient redis;

    // Lazily resolves a client from the pool on first access
    public virtual IRedisClient Redis
    {
        get { return redis ?? (redis = TryResolve<IRedisClientsManager>().GetClient()); }
    }

    //...

    // Called at the end of each request, returning the client to the pool
    public virtual void Dispose()
    {
        if (redis != null)
            redis.Dispose();
    }
}

This helps to ensure that the Redis Client is properly disposed of after each request.
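With that base class in place, your own services just use the Redis property and never manage the client's lifetime themselves. A sketch of what that looks like (the request/response DTOs here are hypothetical, for illustration only):

```csharp
public class CounterService : Service
{
    // Hypothetical DTOs to illustrate usage
    public object Any(GetCounter request)
    {
        // base.Redis resolves a pooled client on first access;
        // the base Dispose() returns it to the pool after the request
        return new GetCounterResponse { Value = Redis.Get<long>("counter") };
    }
}
```

Because the client is created lazily, services that don't touch Redis never take a connection from the pool.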

mythz
  • hello mythz. Thank you for your answer. I will try this solution tomorrow. Btw, even if the pool is set to 10000, I saw a maximum of 130 connections to my Redis server, so I don't think that is part of the solution. Anyway, I will try tomorrow and keep you posted. Good luck with your framework, which btw is great ;) – Radu Cotofana Nov 28 '14 at 19:55
  • Hello mythz. I have deployed the latest version with your suggested changes and until now there was no timeout, but it usually occurred after a couple of hours. Anyway, I noticed a big change in the connections to the Redis server which makes me believe the pooled manager is not working as it should: https://www.dropbox.com/s/o9skt1n9r3xl6py/Screenshot%202014-11-29%2010.58.53.png?dl=0. https://www.dropbox.com/s/68vrfo2v5cxdyfq/Screenshot%202014-11-29%2010.59.22.png?dl=0. As you can see, there is a huge increase in the current number of connections and especially in the New Connections count. – Radu Cotofana Nov 29 '14 at 08:58
  • This makes me believe a new connection is opened on each request. What do you think? – Radu Cotofana Nov 29 '14 at 09:00
  • @RaduCotofana ok the default pool size of **20** might be low in your use-case, since when the max pool size has been reached new connections are created. You can try increasing it to **50** to see how much that helps. I'd also be looking to make sure that all Redis connections are properly disposed, you can call `GetStatus()` or `GetClientPoolActiveStates()` on `RedisManagerPool` to view stats about current open connections. – mythz Nov 29 '14 at 10:34
  • the max pool size was set to 200 because we usually have 4k rpm on this API. Btw, before applying your changes the number of connections was much lower, but unfortunately it threw that error. Please check the link below with the increase in connections count: https://www.dropbox.com/s/7itpgxxuthw9aof/Screenshot%202014-11-30%2011.24.55.png?dl=0. Thank you – Radu Cotofana Nov 30 '14 at 09:25
  • @RaduCotofana it's odd that it's so high, it should only start creating new connections once the pool gets full, which shouldn't be happening if you've got less than 200 concurrent connections open. Are you sure you're disposing the Redis clients correctly? i.e. have you switched to using `base.Redis` in Services, i.e. so they're automatically disposed? – mythz Nov 30 '14 at 09:53
  • Hi @mythz. Everything is working now. The missing piece was the IDisposable declaration on the Service. I've edited your answer so that anyone can see your solution. Thank you – Radu Cotofana Dec 02 '14 at 20:46
  • Hello @mythz. Things look ok now but when we encounter traffic spikes the following exception is thrown:Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index. Stack: at System.Collections.Generic.List`1.get_Item(Int32 index) at ServiceStack.Redis.RedisManagerPool.GetClient(). After a couple of minutes errors occur less frequently but are still thrown. Do you have any idea why this could happen? Thank you – Radu Cotofana Dec 10 '14 at 21:18
  • @RaduCotofana I've resolved a bug in the Hosts lookup (when the pool has overflowed) in `GetClient()` which could've caused this. An update release with this fix is now [available on MyGet](https://github.com/ServiceStack/ServiceStack/wiki/MyGet). – mythz Dec 10 '14 at 23:16
  • we are deploying the latest version now. Hope this will solve this issue because since upgrading to v4 we had a lot of issues. The current pool size is 40 but if we increase it to let's say 100 we see a very high increase in response time so I think that there might be more issues related to this. What do you think? Thank you again. – Radu Cotofana Dec 11 '14 at 06:20
  • @RaduCotofana Note the latest fix is v4.0.35 on [MyGet](https://github.com/ServiceStack/ServiceStack/wiki/MyGet) (i.e. not NuGet). Your earlier issues were due to not disposing the Redis client properly. The 2 pooled Redis client managers [have different pooling behavior](https://github.com/ServiceStack/ServiceStack/blob/master/release-notes.md#updated-redismanagerpool-pooling-behavior); there's a trade-off between pool size vs connections open vs blocking for an available connection, so try both to see what's optimal for your usage. My personal pref is RedisManagerPool with a pool size 50% over avg concurrent connections. – mythz Dec 11 '14 at 08:15
  • the changes you pushed today fixed our issue. Thanks again for your help – Radu Cotofana Dec 11 '14 at 19:28