
I'm getting this error when reading from a table in a 5-node cluster using the DataStax driver.

2015-02-19 03:24:09,908 ERROR [akka.actor.default-dispatcher-9] OneForOneStrategy akka://user/HealthServiceChecker-49e686b9-e189-48e3-9aeb-a574c875a8ab Can't use this Cluster instance because it was previously closed
java.lang.IllegalStateException: Can't use this Cluster instance because it was previously closed
  at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1128) ~[cassandra-driver-core-2.0.4.jar:na]
  at com.datastax.driver.core.Cluster.init(Cluster.java:149) ~[cassandra-driver-core-2.0.4.jar:na]
  at com.datastax.driver.core.Cluster.connect(Cluster.java:225) ~[cassandra-driver-core-2.0.4.jar:na]
  at com.datastax.driver.core.Cluster.connect(Cluster.java:258) ~[cassandra-driver-core-2.0.4.jar:na]

I am able to connect using cqlsh and perform read operations.

Any clue what could be the problem here?

Settings:

  • Consistency level: ONE
  • Keyspace replication strategy: 'class': 'NetworkTopologyStrategy', 'DC2': '1', 'DC1': '1'
  • Cassandra version: 2.0.6

The code managing Cassandra sessions is centralized:

trait ConfigCassandraCluster
  extends CassandraCluster
{
  def cassandraConf: CassandraConfig
  lazy val port = cassandraConf.port
  lazy val host = cassandraConf.host
  lazy val cluster: Cluster =
    Cluster.builder()
      .addContactPoints(host)
      .withReconnectionPolicy(new ExponentialReconnectionPolicy(100, 30000))
      .withPort(port)
      .withSocketOptions(new SocketOptions().setKeepAlive(true))
      .build()

  lazy val keyspace = cassandraConf.keyspace
  private lazy val casSession = cluster.connect(keyspace)
  val session = new SessionProvider(casSession)
}

class SessionProvider(casSession: => Session) extends Logging {
  var lastSuccessful: Long = 0
  var firstSuccessful: Long = -1
  def apply[T](fn: Session => T): T = {
    val result = retry(fn, 15)
    if(firstSuccessful < 0)
      firstSuccessful = System.currentTimeMillis()
    lastSuccessful = System.currentTimeMillis()
    result
  }

  private def retry[T](fn: Session => T, remainingAttempts: Int): T = {
    //retry logic
  }
}
Minar Mahmud
Kasun Kumara
  • Your code has some problem... we cannot magically know the cause without seeing the code. You are closing the cluster connection somewhere and are trying to query from a closed connection. – sarveshseri Feb 19 '15 at 12:04
  • 1
    Thanks for the reply. I forgot the mention that the code works elsewhere except this Cassandra configuration. I can confirm the code doesn't have any closing logic anywhere and sessions are managed centrally including retries in case of failures. (code above) – Kasun Kumara Feb 19 '15 at 13:33

2 Answers


The problem is that cluster.connect(keyspace) will close the Cluster itself if it experiences a NoHostAvailableException. Because of that, your retry logic then hits the IllegalStateException: it is retrying against a Cluster instance that has already been closed.

Have a look at the Cluster init() method and you will see why.

The solution would be to rebuild the cluster inside the retry logic: call Cluster.builder().addContactPoint(node).build().connect(keyspace) on each attempt. That way every retry gets a fresh Cluster object.
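A sketch of that retry shape (the name connectWithRetry is hypothetical, not the driver's API; the connect thunk stands in for the Cluster.builder()...connect(keyspace) call above):

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

// Sketch only: `connect` stands in for
//   () => Cluster.builder().addContactPoint(node).build().connect(keyspace)
// The key point is that the thunk is re-evaluated on every attempt, so each
// retry builds a brand-new Cluster instead of reusing one that a failed
// connect() has already closed.
@tailrec
def connectWithRetry[S](connect: () => S, remainingAttempts: Int): S =
  Try(connect()) match {
    case Success(session) => session
    case Failure(_) if remainingAttempts > 1 =>
      Thread.sleep(100) // back off briefly before rebuilding (tune as needed)
      connectWithRetry(connect, remainingAttempts - 1)
    case Failure(e) => throw e
  }
```

Because the builder call sits inside the thunk, a connect failure never leaves you holding a closed Cluster.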

happysathya

Search your code for session.close().

You are closing your connection somewhere, as stated in the comments. Once a session is closed, it can't be used again. Instead of closing connections, pool them so they can be reused.
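A minimal sketch of that reuse pattern (the SessionHolder name and the create factory are hypothetical; in the real code S would be the driver's Session):

```scala
// Hypothetical holder: the session is created once, handed out for every
// query, and refuses further use after close() -- mirroring the driver's
// "previously closed" IllegalStateException behaviour.
class SessionHolder[S](create: () => S) {
  private var closed = false
  private lazy val session: S = create()

  def withSession[T](f: S => T): T = synchronized {
    if (closed)
      throw new IllegalStateException(
        "Can't use this session because it was previously closed")
    f(session)
  }

  // Call exactly once, at application shutdown -- never per request.
  def close(): Unit = synchronized { closed = true }
}
```

With this shape, individual requests never call close(); only the application's shutdown hook does.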

Lyuben Todorov
  • 1
    Just for the benefit of this thread, it seems the problem is due to a network issue in this particular environment. As mentioned earlier we have tested this in multiple environments and seem to work quite well, except in this. While we haven't found the exact cause yet there is a firewall in between and there are slight differences in cassandra configs as well. I shall post further if we nail it down to the exact cause. – Kasun Kumara Feb 25 '15 at 08:34