
I am using the Neo4j .NET driver.

Environment: .NET 6, Neo4j server 4.3.2, driver Neo4j.Driver 4.4.0

We create a single driver instance for the server with the following snippet and reuse it across all sessions.

Neo4j.Driver.IDriver _driver = GraphDatabase.Driver("neo4j://*.*.*.*:7687", AuthTokens.Basic("neo4j", "*****"));
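(For context, a rough sketch of how such a process-wide singleton can be held; the class and field names here are illustrative, not our exact code:)

using Neo4j.Driver;

// One driver instance for the whole process; every session is created
// from this shared driver, so they all draw on the same connection pool.
public static class Neo4jConnection
{
    public static readonly IDriver Driver = GraphDatabase.Driver(
        "neo4j://*.*.*.*:7687",
        AuthTokens.Basic("neo4j", "*****"));
}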

And we open and close a session for each transaction, like this:

var session = _driver.AsyncSession(o => o.WithDatabase("pdb00"));
try
{
    return await session.ReadTransactionAsync(async tx =>
    {
        var result = await tx.RunAsync(query, parameters);

        // Materialize the records before consuming the result summary.
        var res = await result.ToListAsync();
        var counters = await result.ConsumeAsync();

        Console.WriteLine("Time taken to read query " + index + ": "
                          + counters.ResultConsumedAfter.TotalMilliseconds);
        return res;
    });
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    throw;
}
finally
{
    await session.CloseAsync();
}

However, when we monitor the number of active connections to the Neo4j server using the following command:

call dbms.listConnections()

we see as many connections as sessions created, and they are not dropped until the driver is closed.

For instance, if we run the transaction 100 times, the active connections increase by 100 and stay open even though session.CloseAsync() is invoked for each session.

Only after Driver.CloseAsync() is invoked at application shutdown are all the connections dropped.

Under heavy load, this behavior causes server overload and port exhaustion.

Snapshot of current connections: [Neo4j Browser screenshot]

Are we missing something here?

Thanks in advance.


1 Answer


The driver maintains a connection pool. CloseAsync on a session doesn't close the underlying connection; it just releases it back to the pool, so over time connections accumulate up to the pool's maximum.

I can't remember what the default is, but have you tried setting MaxConnectionPoolSize when constructing the driver?

Neo4j.Driver.IDriver _driver = GraphDatabase.Driver(
    "neo4j://*.*.*.*:7687",
    AuthTokens.Basic("neo4j", "*****"),
    config => config.WithMaxConnectionPoolSize(10)
);

You could also play around with the other config elements:

Neo4j.Driver.IDriver _driver = GraphDatabase.Driver(
    "neo4j://*.*.*.*:7687",
    AuthTokens.Basic("neo4j", "*****"),
    config => config
        .WithMaxConnectionLifetime(TimeSpan.FromSeconds(10))
        .WithMaxConnectionPoolSize(10)
        .WithMaxIdleConnectionPoolSize(10)
);
Charlotte Skardon
  • Thanks @Charlotte. The default for MaxConnectionPoolSize is 100. However, in an environment with many hundreds of parallel reads and writes, what is the ideal configuration for MaxConnectionPoolSize and MaxIdleConnectionPoolSize? Can we max them both and set a 5-second idle timeout via the ConnectionIdleTimeout setting? That would close idle connections after 5 seconds, so they aren't kept around when not really needed. Thoughts? – tam-cocomelon Aug 23 '22 at 09:23
  • "Try and see" is the only answer, unfortunately - you have to test it on your system and see what you get. – Charlotte Skardon Aug 24 '22 at 14:30
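Following up on the comment thread above, a minimal sketch combining a capped pool with a short idle timeout. The values are illustrative and need tuning against your own load; WithConnectionIdleTimeout is the builder method for the ConnectionIdleTimeout setting in the 4.x driver.

Neo4j.Driver.IDriver _driver = GraphDatabase.Driver(
    "neo4j://*.*.*.*:7687",
    AuthTokens.Basic("neo4j", "*****"),
    config => config
        .WithMaxConnectionPoolSize(100)     // upper bound on open connections
        .WithMaxIdleConnectionPoolSize(10)  // how many idle connections to keep warm
        .WithConnectionIdleTimeout(TimeSpan.FromSeconds(5))  // drop connections idle longer than 5 s
);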