I currently use two Azure Functions: one receives webhook requests and adds them to an Azure Service Bus queue, and the other processes that queue. The second function reads from and then writes to a MongoDB Atlas database.
My queue-processing function app does cache the MongoDB client, so each function host reuses its connection when it can. However, under load Azure Functions is presumably spinning up new host instances, each with its own client. For reference, here is the caching code:
const mongodb = require('mongodb');
const MongoClient = mongodb.MongoClient;
const uri = process.env["MONGODB_URI"];

let dbInstance;

// Cache the database handle so warm invocations on the same host reuse it
module.exports = async function () {
    if (!dbInstance) {
        const client = await MongoClient.connect(uri);
        dbInstance = client.db();
    }
    return dbInstance;
};
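Each queue-triggered invocation then gets the handle from that module, roughly like this (simplified; the require path, collection name, and message fields are just placeholders for my actual code):

const getDb = require('../shared/mongo'); // path to the caching module above (illustrative)

module.exports = async function (context, message) {
    const db = await getDb();
    // Read, then write back to Atlas for each queue message
    const existing = await db.collection('events').findOne({ eventId: message.eventId });
    await db.collection('events').updateOne(
        { eventId: message.eventId },
        { $set: { lastPayload: message, processedAt: new Date() } },
        { upsert: true }
    );
};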
Yesterday, I received an Atlas email notification saying I was nearing the connection limit. Here is the connection spike:
As you can see, it nears my MongoDB Atlas limit of 500 connections.
Is there any way to terminate these zombie connections, or perhaps reduce the connection TTL?
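For instance, would tightening the pool options on the cached client be enough? I was thinking of something along these lines, though I'm not sure these are the right knobs or whether my driver version supports them:

const client = await MongoClient.connect(uri, {
    maxPoolSize: 5,       // cap connections per function host (driver 4.x+ option)
    minPoolSize: 0,       // allow idle connections to be released entirely
    maxIdleTimeMS: 60000  // close connections that sit idle for a minute
});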
Alternatively, would it make more sense to run this queue processor on a traditional server that polls the queue continuously? I am currently handling ~500 executions a minute, and I assumed serverless would be far more scalable, but I am beginning to think a traditional server could handle that load without the risk of exhausting DB connections.
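What I have in mind is a single long-running worker that subscribes to the queue and shares one MongoClient for the life of the process. A rough sketch of the idea (using @azure/service-bus v7; the env var names, queue name, and collection are placeholders, and the handler just mirrors the read-then-write work the function does today):

const { ServiceBusClient } = require('@azure/service-bus');
const { MongoClient } = require('mongodb');

async function main() {
    // One client, and therefore one connection pool, for the life of the process
    const mongo = await MongoClient.connect(process.env.MONGODB_URI);
    const db = mongo.db();

    const sbClient = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING);
    const receiver = sbClient.createReceiver('webhook-queue'); // queue name is a placeholder

    receiver.subscribe({
        processMessage: async (message) => {
            // Same read-then-write work the queue-triggered function does today
            await db.collection('events').updateOne(
                { eventId: message.body.eventId },
                { $set: { lastPayload: message.body, processedAt: new Date() } },
                { upsert: true }
            );
        },
        processError: async (err) => {
            console.error('Queue error:', err);
        }
    });
}

main().catch(console.error);

With something like that, the DB connection count would stay flat no matter how deep the queue gets, which is the trade-off I'm trying to weigh against losing the automatic scale-out.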