Modern Hadoop installations typically use several consumer-grade SATA drives per box.
Exactly how many disks per node depends a lot on your application. At Yahoo, for instance, workloads are mostly disk-space bound, so lots of disks per node makes sense. I have also seen stealth-mode technology that can saturate a large number of drive channels, so multiple backplanes with lots of disks make sense there.
If you are just starting, I would recommend either 6 x 2TB SATA or 12 x 2TB SATA. There are some nice Supermicro boxes that give you four nodes in a single 2U chassis with 12 drives on the front, which is nice and compact, but having only 2 x 2TB drives per node can be kind of limiting. That same 2U form factor can also host 1 or 2 nodes with the same 12 drives on the faceplate. Since the chassis itself costs money, this can make a real difference in cost per terabyte.
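To see how the chassis cost plays into this, here is a back-of-envelope sketch. Every dollar figure below is a made-up placeholder, not a quote; substitute real prices before drawing any conclusions.

```python
# Back-of-envelope cost per raw terabyte for two ways of filling
# the same 2U, 12-bay form factor. All prices are hypothetical.

DRIVE_COST = 120      # hypothetical price of one 2TB SATA drive
TB_PER_DRIVE = 2

def cost_per_raw_tb(chassis_cost, drive_count):
    """Total hardware cost divided by raw (unreplicated) capacity in TB."""
    total = chassis_cost + drive_count * DRIVE_COST
    return total / (drive_count * TB_PER_DRIVE)

# Hypothetical chassis prices: a 4-node twin chassis vs. a 1-node chassis.
quad_node = cost_per_raw_tb(chassis_cost=6000, drive_count=12)
single_node = cost_per_raw_tb(chassis_cost=3500, drive_count=12)
print(f"4-node 2U: ${quad_node:.0f}/TB raw, 1-node 2U: ${single_node:.0f}/TB raw")
```

The point is just that the chassis premium gets amortized over the same 24TB of raw disk either way, so the denser chassis has to buy you something else (CPU, NICs) to be worth it.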
Another consideration is that many data centers are limited by power per square foot. Power in a Hadoop cluster gets divided two ways: some goes to CPUs and memory, and a large portion goes to keeping the drives spinning. Since these limits are likely to keep you from filling a rack with the super-compact 4-node boxes, you might rather go ahead and get single-node boxes so that you can add drives later as you see fit.
If you aren't limited by disk space, you should consider total network bandwidth. Having more NICs per drive is good here, so the quad boxes are nice.
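The bandwidth-per-storage trade-off can be sketched the same way; assuming one gigabit NIC per node (a hypothetical baseline), the same 12 front bays look quite different split across four nodes versus one:

```python
# Aggregate NIC bandwidth per raw TB of storage.
# Assumes one 1 Gb/s NIC per node -- an illustrative baseline only.

NIC_GBPS = 1.0
TB_PER_DRIVE = 2

def gbps_per_raw_tb(nodes, total_drives):
    """Aggregate network bandwidth divided by raw storage capacity."""
    return nodes * NIC_GBPS / (total_drives * TB_PER_DRIVE)

# Same 12 drives in a 2U chassis, shared by 4 nodes vs. owned by 1 node.
print(f"quad box:   {gbps_per_raw_tb(4, 12):.3f} Gb/s per TB")
print(f"single box: {gbps_per_raw_tb(1, 12):.3f} Gb/s per TB")
```

The quad box gives four times the network bandwidth for the same raw capacity, which is exactly why it wins when you are network bound rather than space bound.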
In a similar vein, what are your memory requirements? 24GB of RAM for a dual quad-core machine is pretty standard these days, but you might need more or be able to get away with less. Having a larger aggregate amount of memory across the same number of drives might be good for your application.