
I'm going through the book Seven Databases in Seven Weeks (a good read so far), and I'm confused about a Riak detail that the book passes over quickly.

Riak is supposed to, by default, break the data up into 64 partitions, and each of these partitions is supposed to be distributed among the nodes in the ring. (Correct me if I got the lingo wrong.)

I am using the 4 dev nodes that come with the Riak source. All of them are started, but when I `curl http://localhost:8091/stats | grep ring`, I see

"ring_ownership": "[{'dev1@127.0.0.1',64}]"

This is further confirmed by running `$RIAK_INSTALL/dev/dev4/bin/riak-admin member-status`:

================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
joining     0.0%      --      'dev2@127.0.0.1'
joining     0.0%      --      'dev3@127.0.0.1'
joining     0.0%      --      'dev4@127.0.0.1'
valid     100.0%      --      'dev1@127.0.0.1'
-------------------------------------------------------------------------------

What's going on? Why has the dev1 node claimed all of the partitions, and how can I make it share?

Perhaps related

I edited app.config for each node (in `RIAK_ROOT/dev/devN/etc/app.config`) to change pb_ip from 127.0.0.1 to 0.0.0.0, so that I could access Riak from a browser on my host machine even though Riak runs in a Vagrant VM. Even though I made the same change to every node, I can only access dev1 from my host's browser (not dev2, dev3, or dev4).
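
For reference, the edit was roughly this (a sketch assuming the stock devrel layout; the exact spacing of the pb_ip tuple in app.config may differ between builds, so check before running):

# Point pb_ip at 0.0.0.0 in each dev node's app.config (tuple spacing may vary)
for n in dev1 dev2 dev3 dev4; do
  sed -i 's/{pb_ip, *"127\.0\.0\.1"/{pb_ip, "0.0.0.0"/' "$RIAK_ROOT/dev/$n/etc/app.config"
done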

If you think it would help, I can package up this VM and make it available to help you help me troubleshoot. (One of the many reasons VMs are awesome.)

chadoh

2 Answers


I suspect you're seeing that output from riak-admin member-status because you have outstanding changes to your cluster that still need to be committed:

riak-admin cluster plan

riak-admin cluster commit

Running riak-admin cluster plan will show any outstanding transfers; you then need to commit those changes to your cluster with the second command.
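
Roughly, the full sequence on the devrel nodes looks like this (a sketch assuming the dev2-dev4 joins are already staged, as your member-status output suggests, and using the $RIAK_INSTALL/dev layout from your question):

# Review the staged changes and the partition transfers they would trigger
$RIAK_INSTALL/dev/dev1/bin/riak-admin cluster plan

# Apply the staged changes; the ring then rebalances across the four nodes
$RIAK_INSTALL/dev/dev1/bin/riak-admin cluster commit

# Afterwards, each node should own roughly a quarter of the 64 partitions
$RIAK_INSTALL/dev/dev1/bin/riak-admin member-status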

mafrosis
  • Nice! I'm curious as to how I ended up in that state, but I'm happy just getting past it for now. Each node now has 16 partitions. Any idea why only dev1 responds in my host browser? – chadoh Nov 27 '12 at 14:32
  • 1
  • Each dev node is listening on different ports. That's how a devrel cluster is set up by the build scripts. You're running in a test environment. In a deployed environment each node (read: separate system or virtual machine) will bind to the same port, the one that dev1 listens on, and you can load balance between them. – Greg Burd Nov 27 '12 at 17:48
  • @GregoryBurd I get that, but it seems to me that I ought to be able to hit `localhost:8092` to directly access dev2, `localhost:8093` to access dev3, etc. I can `curl` those URLs in my (headless) VM, but I can only access dev1 (at `localhost:8091`) from my host machine. Not a big deal, but I'm not sure why. – chadoh Nov 27 '12 at 18:48

The danger of publishing works on high-velocity projects is that the interface changes before the ink dries.

There was a major shift in cluster management between 1.0 (what the book covers) and 1.2 (the current version). The book just issued a join directly to dev1 and called it a day. Now you must go through the riak-admin cluster command, which batches multiple joins/leaves and executes them as a single transaction. Once you've joined, you must view the plan and commit the transaction, as mentioned in the other answer.
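
For a rough before-and-after (assuming the devrel node names from the question):

# Riak 1.0, as the book shows it: the join took effect immediately
#   riak-admin join dev1@127.0.0.1

# Riak 1.2: joins are staged, then planned and committed as one transaction
riak-admin cluster join dev1@127.0.0.1
riak-admin cluster plan
riak-admin cluster commit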

FWIW, most of the remaining Riak information is still the same.

Coderoshi