
Sometimes when inserting a small batch of documents into different collections (synchronously), I get the following exception (full stack trace further down):

MongoDB.Driver.MongoWaitQueueFullException: The wait queue for acquiring a connection to server xyz.mongolab.com:54128 is full.

I am using a singleton MongoDatabase instance (and thus a single underlying connection pool) across all my repositories. Essentially, I am doing something like this (with no more than 20 documents in each collection):

Context.Collection<ClientDocument>("clients").InsertMany(clients);
Context.Collection<VendorDocument>("vendors").InsertMany(vendors);
Context.Collection<SaleDocument>("sales").InsertOne(sale);

Below is the singleton context:

public class MongoContext
{
    public IMongoDatabase Database { get; }

    public MongoContext(IOptions<MongoSettings> settings)
    {
        var url = MongoUrl.Create(settings.Value.EndpointUri);

        var client = new MongoClient(new MongoClientSettings()
        {
            Server = url.Server
        });

        Database = client.GetDatabase(url.DatabaseName);
    }

    public IMongoCollection<TDocument> Collection<TDocument>(string collection)
        where TDocument : IDocument
    {
        return Database.GetCollection<TDocument>(collection);
    }
}
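
For reference, the context is registered once as a singleton in the ASP.NET Core container, along these lines (the exact Startup wiring is paraphrased here):

// In Startup.ConfigureServices (paraphrased; names may differ):
services.Configure<MongoSettings>(Configuration.GetSection("MongoSettings"));
services.AddSingleton<MongoContext>();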

Something similar was filed on MongoDB's Jira (https://jira.mongodb.org/browse/CSHARP-1144), but those cases deal with huge bulk inserts (and often asynchronous ones).

I don't see the need to increase MaxConnectionPoolSize or WaitQueueSize for such small inserts.

What could be the cause of this?

I am using MongoDB 3.0.7 hosted on mLab. Our application is hosted in Azure (as a Web App) and I am using version 2.2.3 of the C# driver.

MongoDB.Driver.MongoWaitQueueFullException: The wait queue for acquiring a connection to server xyz.mongolab.com:54128 is full.
   at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionHelper.CheckingOutConnection()
   at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnection(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Servers.ClusterableServer.GetChannel(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Bindings.ServerChannelSource.GetChannel(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Bindings.ChannelSourceHandle.GetChannel(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken)
   at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)
   at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperation[TResult](IWriteOperation`1 operation, CancellationToken cancellationToken)
   at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)
   at MongoDB.Driver.MongoCollectionBase`1.InsertOne(TDocument document, InsertOneOptions options, CancellationToken cancellationToken)

EDIT:

If I set MaxConnectionPoolSize to 500 and WaitQueueSize to 2000, then I get the following exception:

MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions 191.235.xxx.xxx:54128

Instantiating the MongoClient:

var client = new MongoClient(new MongoClientSettings()
{
    Server = url.Server,
    Credentials = credentials,
    MaxConnectionPoolSize = 500,
    WaitQueueSize = 2000
});

I initially raised this problem here. That led me to try to figure out why on earth I have so many connections, which in turn led to this post (questioning whether Insert/InsertBulk could be a cause). Regardless, I still need to fix the original MongoWaitQueueFullException problem.

  • I'm attempting to reproduce this now. – Craig Wilson May 19 '16 at 12:25
  • Yeah, I'm not having any success at reproducing this. I guess I'll need to set up something at mLab, because local tests never manifest this. When you say "sometimes", what do you mean? How often? Can you get some other information, like the server logs? Could you enable .NET network tracing to see what is going on at the socket level? – Craig Wilson May 19 '16 at 12:34
  • Also, is this a replica set, sharded system, standalone? – Craig Wilson May 19 '16 at 12:46
  • @CraigWilson: See my edit. This is using the sandbox (free) database option. I have also tried monitoring connections using [this](https://github.com/WadGraphEs/AzurePlot/blob/99fdab7c050c33e6a0eb871014f0b31215d9fa57/AzurePlot/AzurePlot.Lib/ServicePointMonitor.cs) and there are no more than 2 or 3 connections open at any time. Funnily enough, the MongoDB connection doesn't show up here - I would assume this to return all open TCP connections. I will look into .NET network tracing - do you have any suggestions here? Thanks for your help! – Dave New May 19 '16 at 15:44
  • oh yeah, azure... network tracing: https://msdn.microsoft.com/en-us/library/ty48b824(v=vs.110).aspx. We'd care about System.Net and System.Net.Sockets. While a Database object isn't pinned to a mongodb connection, this should likely only be using one because you are using it serially. – Craig Wilson May 19 '16 at 16:06
  • @CraigWilson: It seems that 8 socket connections are established to my MongoDB instance and they certainly do remain open and are reused. I've taken out a snippet from one of these connections from the trace. See here http://pastebin.com/3p4GWj9V – Dave New May 20 '16 at 05:51
  • Don't know if you ever found a solution, but in my case, running against Atlas, I was using something like 10,000 calls to InsertOneAsync followed by a Task.WhenAll. This always ends up with this error (or the funky "Invalid BinaryConnection state transition from 4 to Failed"). I tried changing queueSize and maxServerSelectionWaitQueueSize to whatever, and it always fails. So I just changed my 10,000 calls to a BulkWriteAsync with 10,000 items and it now works like a charm. Looks like a design flaw/bug to me in the C# driver. – Simon Mourier Sep 22 '18 at 09:13
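
A rough sketch of the reshaping described in the last comment (many awaited InsertOneAsync calls replaced by a single BulkWriteAsync), using the question's SaleDocument type purely for illustration:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using MongoDB.Driver;

public static class BulkInsertExample
{
    // A single BulkWriteAsync sends the whole batch as one operation,
    // instead of racing thousands of InsertOneAsync tasks for the pool.
    public static Task InsertSalesAsync(
        IMongoCollection<SaleDocument> collection,
        IEnumerable<SaleDocument> sales)
    {
        var requests = sales
            .Select(s => new InsertOneModel<SaleDocument>(s))
            .ToList();
        return collection.BulkWriteAsync(requests);
    }
}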

1 Answer


A long-term way to solve your problem might be to introduce a throttling mechanism to ensure you aren't exceeding your maximum number of connections. Fortunately, this is pretty easy to implement with a Semaphore.

using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;

public interface IConnectionThrottlingPipeline
{
    Task AddRequest(Task task);
    Task<T> AddRequest<T>(Task<T> task);
}

public class ConnectionThrottlingPipeline : IConnectionThrottlingPipeline
{
    private readonly Semaphore openConnectionSemaphore;

    public ConnectionThrottlingPipeline(IMongoClient client)
    {
        // Only grabbing half the available connections to hedge against collisions.
        // If you send every operation through here
        // you should be able to use the entire connection pool.
        openConnectionSemaphore = new Semaphore(client.Settings.MaxConnectionPoolSize / 2,
            client.Settings.MaxConnectionPoolSize / 2);
    }

    // Overload for operations that don't return a value (e.g. InsertManyAsync).
    public async Task AddRequest(Task task)
    {
        openConnectionSemaphore.WaitOne();
        try
        {
            await task;
        }
        finally
        {
            openConnectionSemaphore.Release();
        }
    }

    public async Task<T> AddRequest<T>(Task<T> task)
    {
        openConnectionSemaphore.WaitOne();
        try
        {
            return await task;
        }
        finally
        {
            openConnectionSemaphore.Release();
        }
    }
}

If you send all of your requests to the database through this throttling pipeline, you should never hit the limit.

In your case, sending the operations through the pipeline might look like this (the big change you'd have to make would be making your database calls asynchronous):

await connectionThrottlingPipeline.AddRequest(
    Context.Collection<ClientDocument>("clients").InsertManyAsync(clients));
  • This won't actually block the request's connection. You need to defer the call to any database functions until the semaphore has been awaited, i.e. use "Func<Task<T>> task" as the parameter and then "await task();". Then you can call "AddRequest(() => ...)" and it will correctly await. – DwayneBull Mar 15 '21 at 14:10
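
A minimal sketch of that correction, assuming a SemaphoreSlim (whose WaitAsync avoids blocking a thread while waiting) and deferring the database call behind a Func<Task<T>>, as the comment suggests:

using System;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;

public class DeferredConnectionThrottlingPipeline
{
    private readonly SemaphoreSlim openConnectionSemaphore;

    public DeferredConnectionThrottlingPipeline(IMongoClient client)
    {
        // Same sizing as the answer above: half the pool as a hedge.
        var size = client.Settings.MaxConnectionPoolSize / 2;
        openConnectionSemaphore = new SemaphoreSlim(size, size);
    }

    public async Task<T> AddRequest<T>(Func<Task<T>> operation)
    {
        // Wait asynchronously; no thread is blocked while the pool is saturated.
        await openConnectionSemaphore.WaitAsync();
        try
        {
            // The database call only starts once the semaphore is held,
            // so the number of in-flight operations is genuinely bounded.
            return await operation();
        }
        finally
        {
            openConnectionSemaphore.Release();
        }
    }

    // Overload for operations that return a plain Task.
    public async Task AddRequest(Func<Task> operation)
    {
        await openConnectionSemaphore.WaitAsync();
        try
        {
            await operation();
        }
        finally
        {
            openConnectionSemaphore.Release();
        }
    }
}

Calls are then deferred inside a lambda, for example:

await pipeline.AddRequest(() =>
    Context.Collection<ClientDocument>("clients").InsertManyAsync(clients));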