
I have successfully set up BigCouch on two different machines. Both of them run fine locally. When I join them into a cluster using one of, or both of, these commands:

curl -X PUT machine1:5986/nodes/bigcouch@machine2 -d {}
curl -X PUT machine2:5986/nodes/bigcouch@machine1 -d {}

I always receive positive results: the nodes database contains the two documents bigcouch@machine2 and bigcouch@machine1. But in fact the connection always fails. I see this error message in the BigCouch console:

=ERROR REPORT==== 9-Dec-2011::20:01:40 ===
Error in process <0.3117.0> on node 'bigcouch@machine1.fr' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
<148>1 2011-12-09T19:01:40.559992Z machine1 twig <0.159.0> -------- - mem3_sync nodes -> 'bigcouch@machine2' {{rexi_DOWN,noconnect}, [{mem3_rep,rexi_call,2}, {mem3_rep,replicate_batch,1}, {mem3_rep,go,3}, {mem3_rep,go,2}]}
<148>1 2011-12-09T19:01:40.560106Z machine1 twig <0.159.0> -------- - mem3_sync dbs -> 'bigcouch@machine2' {{rexi_DOWN,noconnect}, [{mem3_rep,rexi_call,2}, {mem3_rep,replicate_batch,1}, {mem3_rep,go,3}, {mem3_rep,go,2}]}
<148>1 2011-12-09T19:01:40.560205Z machine1 twig <0.159.0> -------- - mem3_sync _users -> 'bigcouch@machine2' {{rexi_DOWN,noconnect}, [{mem3_rep,rexi_call,2}, {mem3_rep,replicate_batch,1}, {mem3_rep,go,3}, {mem3_rep,go,2}]}
[error] [emulator] [--------] Error in process <0.3198.0> on node 'bigcouch@machine2' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
<147>1 2011-12-09T19:01:45.560979Z machine1 twig emulator msg - Error in process <0.3198.0> on node 'bigcouch@machine1' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}

Maybe it's the firewall? If yes, please tell me the range of ports needed for the nodes to connect to each other. If not, please explain what is wrong and how to fix it so they can connect.

The documentation says that the nodes must be able to ping each other and must share the same magic cookie. My machines can ping each other, but what is a magic cookie?

CD Tran

1 Answer


Occasionally you can see this error when a node is first connected, as there are various processes that receive update messages and monitor the other nodes, as well as an internal replicator. These messages are harmless, but if you see "noconnect" persistently then something is wrong.

On each instance there is a file, /etc/vm.args, in which you will see two values of interest: -name and -setcookie. The first, -name, corresponds to the doc id you must use when connecting the nodes, and the second is the magic cookie that must be the same on all the Erlang nodes for them to talk to one another. If this cookie isn't set it defaults to the value in ~/.erlang.cookie.
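For illustration, a minimal sketch of how those two settings might look in vm.args (the hostnames and the cookie value "monster" here are placeholders, not values from your setup):

    # vm.args on machine1 (sketch; hostname and cookie are placeholders)
    -name bigcouch@machine1.example.com
    -setcookie monster

    # vm.args on machine2: the -name differs, the -setcookie must match
    -name bigcouch@machine2.example.com
    -setcookie monster

The -name value is also the doc id you PUT into the nodes database, e.g. nodes/bigcouch@machine2.example.com, so the two have to agree exactly.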

When you execute "make dev" it will build a 3-node cluster that you can inspect to see how these bits should be set.

Also, you only need to run the connect on one side, e.g. node2 to node1, as the internal replicator will sync the nodes database across the cluster.
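For example (hostnames are placeholders, mirroring the command in the question), a single PUT from either machine is enough, and you can then check that both node documents show up on each side:

    curl -X PUT machine1:5986/nodes/bigcouch@machine2 -d '{}'

    # verify on both machines; each should list bigcouch@machine1 and bigcouch@machine2
    curl machine1:5986/nodes/_all_docs
    curl machine2:5986/nodes/_all_docs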

  • Also, if you have a firewall, you should read this: http://www.erlang.org/faq/how_do_i.html#id55164. You'll need to open the EPMD port, 4369, as well as a range of ports (say, 9100-9105), and finally convince the Erlang VM to use only that range. The linked page describes one way, but I think you can also add '-kernel inet_dist_listen_min 9001' to the vm.args file. – Robert Newson Dec 11 '11 at 15:08
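If you do pin the distribution traffic to a fixed range, a sketch of the vm.args additions might look like the following (the 9100-9105 range is just the example from the comment above; the min and max limits are normally set together):

    # vm.args (sketch): pin Erlang distribution to a fixed port range
    -kernel inet_dist_listen_min 9100
    -kernel inet_dist_listen_max 9105

EPMD itself still listens on 4369, so that port plus the chosen range must be open between the machines in both directions.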