I am trying to add two bricks to a Gluster Volume. The two new nodes are in the network, and can be verified with:
root /# gluster peer status
The volume status also looks fine, apart from the stale brick:
Status of volume: mainvolume
Gluster process                         Port    Online  Pid
------------------------------------------------------------------------------
Brick Node-1:/storage                   49152   Y       1162
NFS Server on localhost                 2049    Y       4004
Self-heal Daemon on localhost           N/A     Y       4011
NFS Server on 104.xxx.xxx.xxx           2049    Y       3024
Self-heal Daemon on 104.xxx.xxx.xxx     N/A     Y       3031
Brick 45.xx.xx.xx:/storage-pool         N/A     N       N/A
NFS Server on 45.xx.xx.xx               N/A     N       N/A
There are no active volume tasks
The last brick was added accidentally and needs to be removed. I have been looking at the Gluster docs as well as someone's GitHub cheat sheet, but I can't seem to add the two nodes. I started off wanting to add only one node, but then I accidentally removed a node, so now I have two nodes to add. Below is a sample of what I am trying:
gluster volume add-brick mainvolume replica 2 Node-2:/storage Node-3:/storage
--> volume add-brick: failed:
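In case it helps clarify what I want to end up with: the following is a sketch of the sequence I believe should work, assuming the volume is currently replica 2 with the dead 45.xx.xx.xx:/storage-pool brick as the second copy, and that the dead brick holds no data worth migrating.

```shell
# Drop the offline brick, lowering the replica count from 2 to 1.
# 'force' skips data migration, which should be safe here because
# the brick process never came online.
gluster volume remove-brick mainvolume replica 1 45.xx.xx.xx:/storage-pool force

# With a single brick left, raising the replica count to 3 needs
# exactly two new bricks -- one for each additional copy:
gluster volume add-brick mainvolume replica 3 Node-2:/storage Node-3:/storage
```

If the new brick directories sit on the root partition, Gluster may also insist on `force` being appended to the add-brick command.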
Log File:
[2015-09-07 02:57:44.475415] I [input.c:36:cli_batch] 0-: Exiting with: -1
[2015-09-07 03:04:31.229023] I [input.c:36:cli_batch] 0-: Exiting with: -1
[2015-09-07 02:49:54.270231] E [glusterd-brick-ops.c:492:__glusterd_handle_add_brick] 0-management:
[2015-09-07 02:52:48.909897] E [glusterd-brick-ops.c:454:__glusterd_handle_add_brick] 0-management: Incorrect number of bricks supplied 1 with count 2
[2015-09-07 02:16:46.498829] E [client-handshake.c:1742:client_query_portmap_cbk] 1-mainvolume-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
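The "Incorrect number of bricks supplied 1 with count 2" line looks like an earlier attempt that passed a single brick while keeping replica 2. As I understand it, the bricks supplied must either be a multiple of the replica count (adding whole replica sets) or match the increase when the replica count itself is being raised. A sketch of the two valid shapes, assuming a single existing replica-2 set and hypothetical brick paths:

```shell
# Shape 1: add one whole replica-2 set (bricks must come in pairs):
gluster volume add-brick mainvolume Node-2:/storage Node-3:/storage

# Shape 2: raise replica 2 -> 3, which needs one new brick per
# existing replica set (here, just one):
gluster volume add-brick mainvolume replica 3 Node-2:/storage
```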
I am at a loss for what to do; my next step will be to recreate the network if I can't figure this out.