
Is there a way to write to a particular node using the DataStax driver?

For example, I have three nodes in datacenter 1 and three nodes in datacenter 2.

Existing

If I build the cluster with any one of them as a seed, all of the nodes are detected by the DataStax Java driver. So if I insert data using the driver, it automatically chooses one of the nodes (preferably in the local datacenter) and uses it as the coordinator.

Requirement

I want a way to contact a node in datacenter 2 and have one of the nodes in datacenter 2 act as the coordinator.

Why I need this

I am trying to use the trigger functionality from datacenter 2 alone. Since triggers are handled by the coordinator, I want the coordinator to be selected from datacenter 2 so that datacenter 1 doesn't have to do this work.


2 Answers


You may be able to use the DCAwareRoundRobinPolicy load balancing policy to achieve this by creating the policy such that DC2 is considered the "local" DC.

Cluster.Builder builder = Cluster.builder().withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("dc2"));

In the above example, remote (non-DC2) nodes will be ignored.
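For completeness, a minimal sketch of connecting this way, assuming driver 2.0, a placeholder contact point in DC2, a datacenter named "dc2" by your snitch, and a hypothetical ks.tbl table:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

// "10.2.0.1" is a placeholder for any node in DC2; "dc2" must match the
// datacenter name reported by your snitch (check with nodetool status).
Cluster cluster = Cluster.builder()
        .addContactPoint("10.2.0.1")
        .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("dc2"))
        .build();
Session session = cluster.connect();

// With this policy, the coordinator for each request is chosen from DC2.
session.execute("INSERT INTO ks.tbl (id, val) VALUES (1, 'x')");  // ks.tbl is hypothetical

cluster.close();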

There is also a new WhiteListPolicy in driver version 2.0.2 that wraps another load balancing policy and restricts the nodes to a specific list you provide.

Cluster.Builder builder = Cluster.builder().withLoadBalancingPolicy(new WhiteListPolicy(new DCAwareRoundRobinPolicy("dc2"), whiteList));
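As a rough sketch of how whiteList could be built, assuming driver 2.0.2 and placeholder addresses for the three DC2 nodes (9042 being the default native transport port):

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Collection;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

// Placeholder addresses for the three DC2 nodes.
Collection<InetSocketAddress> whiteList = Arrays.asList(
        new InetSocketAddress("10.2.0.1", 9042),
        new InetSocketAddress("10.2.0.2", 9042),
        new InetSocketAddress("10.2.0.3", 9042));

// Only the whitelisted hosts are ever used, and among those the wrapped
// DCAwareRoundRobinPolicy keeps coordination in dc2.
Cluster cluster = Cluster.builder()
        .addContactPoint("10.2.0.1")
        .withLoadBalancingPolicy(
                new WhiteListPolicy(new DCAwareRoundRobinPolicy("dc2"), whiteList))
        .build();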
  • Meaning requests are still going to DC1 as well as DC2, or something else doesn't work? Did you use the DCAwareRoundRobinPolicy and specify DC2 when connecting? Some more details may help. – djatnieks Apr 08 '14 at 04:02
  • In DataStax, even if I add that policy, the driver still detects the other hosts in the topology and adds them to its list. So on insert, the data automatically gets written to nodes in both datacenters. I did restrict the consistency level to LOCAL_QUORUM, but that didn't help. – Ananth Apr 09 '14 at 07:07

For multi-DC scenarios Cassandra provides EACH and LOCAL consistency levels (e.g. EACH_QUORUM and LOCAL_QUORUM), where EACH acknowledges a successful operation only after every DC has, and LOCAL only requires the local DC.
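For illustration, a small sketch of setting these levels per statement with the Java driver, assuming an existing Session and a hypothetical ks.tbl table:

import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

// LOCAL_QUORUM only requires a quorum of replicas in the coordinator's DC;
// EACH_QUORUM requires a quorum in every DC before acknowledging the write.
Statement write = new SimpleStatement("INSERT INTO ks.tbl (id, val) VALUES (1, 'x')")
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
session.execute(write);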

If I understood correctly, what you are trying to achieve is DC failover in your application. This is not a good practice. Let's assume your application is hosted in DC1 alongside Cassandra. If DC1 goes down, your entire application is unavailable. If DC2 goes down, your application can still write with a LOCAL CL, and C* will replicate the changes when DC2 is back.

If you want to achieve HA, you need to deploy the application in each DC, use CL=LOCAL_X, and finally do failover at the DNS level (e.g. using AWS Route 53).

See data consistency docs and this blog post for more info about consistency levels for multiple DCs.
