
I have a 4-node Cassandra cluster. A single node is acting as a seed node for the Astyanax connection pool, provided via the setSeeds(...) method when building an AstyanaxContext. I have also plugged in my own connection pool monitoring implementation. It shows me one host added (the seed node), but not the other three nodes. When I take down the seed node, e.g. by disabling Thrift via nodetool on that particular node, every Astyanax request to Cassandra fails. I thought the connection pool learns via the seed node(s) what the cluster looks like and doesn't need the seed node up and running all the time?

I'm using RING_DESCRIBE as NodeDiscoveryType and TOKEN_AWARE as ConnectionPoolType.
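For reference, a minimal sketch of the setup described above (cluster/keyspace names and the pool name are placeholders; the port assumes the default Thrift port 9160, and the standard CountingConnectionPoolMonitor stands in for the custom monitor implementation):

```java
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolType;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
    .forCluster("MyCluster")    // placeholder
    .forKeyspace("MyKeyspace")  // placeholder
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
        .setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE))
    .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyPool")
        .setPort(9160)
        .setSeeds("127.0.0.1:9160")) // single seed node, as described
    .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
    .buildKeyspace(ThriftFamilyFactory.getInstance());

context.start();
Keyspace keyspace = context.getClient();
```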

tsteinmaurer
  • Does anybody have an idea why every Astyanax request fails after the single seed node goes down? I thought that Astyanax internally learns what the cluster/ring looks like and then fails over to other nodes, even if no seed node is available anymore. – tsteinmaurer Oct 04 '13 at 08:30
  • have you found a solution for this? – Adrian Jan 13 '14 at 18:12

1 Answer


The contact point you provide always needs to be running when you issue a query with Astyanax. The learning kicks in afterwards, when you start writing to multiple replicas and the driver needs to work out where to send the extra replicated data (because, as you know, if you have more than 1 node, data gets written to multiple partitions).

What exactly do I mean?

127.0.0.1 <--- seed
127.0.0.2
127.0.0.3
127.0.0.4

// code where you initialize Astyanax
...
.setSeeds("127.0.0.1") // this node always has to be available
...
Lyuben Todorov