UPDATE:
I found hidden code that shuts down the cluster. That was what caused the exception; there is no bug in the driver.
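For anyone hitting the same stack trace: the scenario above can be reproduced with a minimal sketch (contact point and query are placeholders, not from the original report). Shutting down the `Cluster` disposes its internal resources, including the `HashedWheelTimer`, so any later request through a still-held `ISession` fails with "Can not start timer after Disposed":

```csharp
using Cassandra;

var cluster = Cluster.Builder()
    .AddContactPoint("127.0.0.1") // placeholder contact point
    .Build();
ISession session = cluster.Connect();

// The "hidden code" path: something elsewhere shuts the cluster down...
cluster.Shutdown();

// ...while other components keep using the old session reference.
// This throws System.InvalidOperationException:
// "Can not start timer after Disposed".
session.Execute("SELECT release_version FROM system.local");
```

The takeaway is that the exception is not a driver fault but a lifetime issue: the `Cluster` (and the sessions it created) must outlive every component that issues queries through it.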
Driver version: 3.19.2
We have a very CPU-intensive application querying Cassandra. The database itself has low resource usage and is accessible at all times. However, at the application level we get the following exception:
```json
{
"Depth": 1,
"ClassName": "System.InvalidOperationException",
"Message": "Can not start timer after Disposed",
"Source": "Cassandra",
"StackTraceString": "
at Cassandra.Tasks.HashedWheelTimer.Start()\n
at Cassandra.Tasks.HashedWheelTimer.NewTimeout(Action`1 action, Object state, Int64 delay)\n
at Cassandra.Connections.Connection.Send(IRequest request, Action`2 callback, Int32 timeoutMillis)\n
--- End of stack trace from previous location ---\n
at Cassandra.Connections.Connection.SetKeyspace(String value)\n
at Cassandra.Connections.Connection.SetKeyspace(String value)\n
at Cassandra.Connections.HostConnectionPool.GetConnectionFromHostAsync(IDictionary`2 triedHosts, Func`1 getKeyspaceFunc, Boolean createIfNeeded)\n
at Cassandra.Requests.RequestHandler.GetConnectionFromHostInternalAsync(Host host, HostDistance distance, IInternalSession session, IDictionary`2 triedHosts, Boolean retry)\n
at Cassandra.Requests.RequestExecution.SendToNextHostAsync(ValidHost validHost)\n
at Elders.Cronus.Persistence.Cassandra.Preview.CassandraEventStore.LoadAggregateCommitsAsync(IBlobId id)",
"RemoteStackTraceString": null,
"RemoteStackIndex": 0,
"HResult": -2146233079,
"HelpURL": null
}
```
I am not sure what the real cause of this exception is, but when it happens the driver is unable to recover the connection and resume normal operation. There is no indication at the ISession level that anything is wrong.
The only workaround is to wrap every Cassandra call in a try/catch and establish a new Session.
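A minimal sketch of that workaround, assuming the exception type from the dump above (the `RecreateSession` helper and the contact point are hypothetical, not driver API):

```csharp
using Cassandra;

public class ResilientExecutor
{
    private readonly object _gate = new object();
    private ISession _session;

    public ResilientExecutor(ISession initial) => _session = initial;

    public RowSet Execute(string cql)
    {
        try
        {
            return _session.Execute(cql);
        }
        catch (InvalidOperationException)
        {
            // The driver cannot recover on its own once its timer is
            // disposed, so tear everything down and rebuild the session.
            lock (_gate)
            {
                _session = RecreateSession();
            }
            return _session.Execute(cql);
        }
    }

    // Hypothetical helper: builds a fresh Cluster and Session.
    private static ISession RecreateSession() =>
        Cluster.Builder().AddContactPoint("127.0.0.1").Build().Connect();
}
```

Rebuilding the Cluster on every failure is heavy (it re-runs discovery and pool warm-up), which is why this is a workaround rather than a fix.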
Have you ever encountered this issue?
Do you have any idea what might cause the disposal of the timer? Maybe a socket error, or too many pending requests in the queue?