This is tricky with a two-node Gluster implementation.
The first problem you run into is the handling of split-brain conditions, which you would be creating intentionally. In the scenario you describe, you would modify one node, then the other, outside of Gluster's client-side replication. Gluster doesn't traditionally replicate between servers; it relies on clients to write to and read from all bricks simultaneously in a fan-out fashion. Because of this, the client machines, not server-to-server replication, are primarily responsible for "replication", so you really need to make sure that clients can reach all relevant Gluster nodes at all times.
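For illustration, a client would mount the volume with the native FUSE client so it can reach both servers; the host, volume, and path names below are placeholders, and the exact name of the backup-server option varies a bit between Gluster releases:

    # Hypothetical names: node1/node2 are the Gluster servers, "webvol" is the volume.
    # The native client fans writes out to every brick in the replica set itself,
    # so it must be able to reach both servers, not just the one named in the mount.
    mount -t glusterfs -o backup-volfile-servers=node2 node1:/webvol /var/www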
When you bring the volume online with both bricks attached after causing a split brain, you will find that Gluster refuses to serve the affected data until you fix the split-brain problem manually at the brick level. It will at least do you the favor of telling you which files are in split brain, but you'll already know that, since you were the directly responsible party. This happens because there are only two nodes and no third copy to act as a tiebreaker when deciding which "dominant" copy should overwrite the other. Three-node (replica 3) Gluster volumes can self-heal in most situations, but still wouldn't guarantee the kind of behavior you want.
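If you do end up in that state, this is roughly how Gluster reports it (the volume name "webvol" is a placeholder):

    # List the files/directories Gluster considers split-brain on the volume
    gluster volume heal webvol info split-brain
    # General heal status and pending entries
    gluster volume heal webvol info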
To avoid this, you need to seriously rethink the strategy if you still intend to use GlusterFS. Gluster isn't designed with willful split brain in mind, nor is it a traditional "failover" system. It's designed to be accessed from all nodes simultaneously, and it deals with a node failure by majority rule (or prolonged offline manual intervention).
A reasonable GlusterFS solution:
You could create a new GlusterFS volume across both nodes and mount it on the node you intend to write your new web content to, after stopping traffic to that node via HAProxy. Then switch traffic over to that node and mount the same GlusterFS volume on the other node. Discard the old Gluster volume once you're finished.
This would change your steps as follows:
1) Since HAProxy is your load balancer, put node1 into maintenance mode and let node2 handle all the traffic.
2) Create a new GlusterFS volume from new bricks on both nodes, and mount it in the app/web directory of the node in maintenance mode, unmounting the original volume first (see the command sketch after this list).
3) Copy all relevant new and unchanged data to this new Gluster volume.
4) In HAProxy, put node2 into maintenance and bring node1 back up to handle the traffic.
5) Mount the new Gluster volume on node2, unmounting the old one first.
6) Let HAProxy load balance across both nodes again; you now have a working active/active cluster.
7) Get rid of the old GlusterFS volume when you no longer need it.
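A rough sketch of the Gluster side of those steps, with placeholder host, volume, brick, and mount-point names (adjust paths to your own layout):

    # Step 2: create and start the new volume from new brick directories (names are placeholders)
    gluster volume create webvol_new replica 2 node1:/export/sdb1/brick_new node2:/export/sdb1/brick_new
    gluster volume start webvol_new

    # Step 2, on node1 (already out of the HAProxy pool): swap the mounts
    umount /var/www
    mount -t glusterfs node1:/webvol_new /var/www

    # Step 3: copy the new site content into the new volume
    rsync -a /path/to/new/content/ /var/www/

    # Step 5, on node2 (once it is out of the pool): the same mount swap
    umount /var/www
    mount -t glusterfs node2:/webvol_new /var/www

    # Step 7: retire the old volume once nothing uses it anymore
    gluster volume stop webvol_old
    gluster volume delete webvol_old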
This will ensure you keep your service online and don't end up with a horrible split-brain condition. Regarding your bricks: a brick is just a directory, not necessarily a separate filesystem. Your new brick can sit on the same filesystem as your old brick, simply in a different directory at the root of that filesystem, so you don't need a pile of extra disk space to do an online service update.
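For instance, nothing stops the old and new bricks from living side by side on the same filesystem (the directory names here are just illustrative):

    # Both bricks sit on the same mounted filesystem; only the directory differs
    mkdir -p /export/sdb1/brick_old   # brick backing the current volume
    mkdir -p /export/sdb1/brick_new   # brick for the replacement volume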
An alternate solution:
DRBD handles data in a server-to-server replication ring, and you can force one node to replicate to the others arbitrarily. It isn't quite as good a fit for active/active load-balanced clusters, since you'd have to layer a cluster filesystem like OCFS2 on top of it, but it is a traditional replication system that would happily accommodate your current plan.
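As a rough illustration of that difference, DRBD replicates between the servers themselves and lets you force a resync in a chosen direction; the resource name r0 below is a placeholder, and an active/active setup would still need OCFS2 or similar on top:

    # On the node whose data should win: make it primary, forcing if the peer disagrees
    drbdadm primary --force r0
    # On the node to be overwritten: discard its copy and resync from the peer
    drbdadm secondary r0
    drbdadm invalidate r0
    # Watch the resync progress
    drbdadm status r0        # or: cat /proc/drbd on older releases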
Clarification:
You do not need a three node cluster to implement the GlusterFS plan I described above. Two nodes will do fine.