Consider the following:
I (hypothetically) have one heavyweight server/master node with a couple of disks in it and a group of lightweight diskless clients/nodes on a 10 gigabit LAN.
The server runs DHCP and TFTP to serve the clients a kernel over PXE. The PXE-booted kernel then mounts an NFS export on the master server as its root filesystem.
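For concreteness, here is a minimal sketch of that boot plumbing using dnsmasq as the combined DHCP/TFTP server and pxelinux as the boot loader. The addresses, interface name, and paths (192.168.1.x, eth1, /srv/tftp, /srv/nfsroot) are placeholders, not a tested config:

    # /etc/dnsmasq.conf -- serve DHCP and TFTP on the internal NIC
    interface=eth1
    dhcp-range=192.168.1.50,192.168.1.100,12h
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/srv/tftp

    # /srv/tftp/pxelinux.cfg/default -- tell the kernel to mount NFS as root
    DEFAULT linux
    LABEL linux
      KERNEL vmlinuz
      APPEND initrd=initrd.img ip=dhcp root=/dev/nfs nfsroot=192.168.1.1:/srv/nfsroot,vers=3 rw

    # /etc/exports on the master -- the root filesystem export
    /srv/nfsroot 192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)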
I route all internet access through the master server and share it with the clients. Because I want to maximize I/O between the nodes and the master, the internal network stays private, trusted, and unfiltered, while the master firewalls it off from the public network; this is made possible by the master node's two gigE ports.
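As a sketch, the routing/firewalling piece is a few lines of iptables on the master, assuming eth0 is the public port and eth1 the trusted internal one (interface names are illustrative):

    # Forward and NAT traffic from the trusted LAN out the public interface
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -j DROP

This leaves the internal network unfiltered while only reply traffic from the internet can reach the clients.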
The goal is to set up a software development lab for a group of about six people for less than four thousand dollars. The idea is to share as much of the hardware as possible.
My question is this: which is better, compiling directly on the NFS mount, or compiling on a tmpfs (a filesystem held entirely in RAM, with no backing store) with swap hosted on the master node (and hoping I rarely need it)? Or is it worth just buying SSDs for the slave nodes so they can compile locally and then save the results to their NFS mounts on the master, keeping in mind that this is very expensive and contrary to the goals of the setup?
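To make the tmpfs option concrete, this is roughly what I have in mind on each client (the mount point, size, and project path are illustrative):

    # /etc/fstab on each client: RAM-backed scratch space for builds
    tmpfs  /build  tmpfs  size=2G,mode=1777  0  0

    # Copy the tree into RAM, build there, then push results back to NFS
    cp -a ~/project /build/project
    cd /build/project && make -j4
    cp -a /build/project ~/project

One caveat I'm aware of: swap for diskless clients usually means swapping over NBD or iSCSI rather than plain NFS, which would be another moving part to test.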