For very large datasets you may find that a SAN is not optimal. SANs excel at handling lots of small random IOs very quickly; they're often only OK at sequential throughput unless you've got a very fast interconnect to them, and even then they frequently just aren't optimized for sequential work. I have tested sequential and random read/write performance on my machines with Areca and LSI RAID cards with battery-backed cache, with native SAS under Linux software RAID, and with a SAN on the back end. Random access was close to a tie between the SAN and the RAID cards, but for sequential throughput the Linux software RAID stomped them into the ground: where the HW RAID gets about 350MB/s and the SAN was in the 100MB/s range (it's connected over gigabit Ethernet), the native SAS with SW RAID gets around 1GB/s on reads and about 80% of that on writes. All sequential, of course. Don't assume your SAN is super fast for what you're doing; it might be, it might not be. Test it with bonnie++ or dd or something to get some idea of how fast it really is. If you're getting ~100MB/s sequential, it'll be painfully slow for analytics next to a cheaper machine with 4 or 8 7200RPM SATA drives in RAID-10.
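A quick-and-dirty sequential test looks something like the following (the mount point, file size, and bonnie++ invocation are just examples; make the test file bigger than RAM so you're measuring the disks, not the cache):

    # sequential write, bypassing the OS page cache
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=16384 oflag=direct
    # drop caches, then sequential read of the same file
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    dd if=/mnt/array/ddtest of=/dev/null bs=1M iflag=direct
    # or let bonnie++ do the work (-s should be roughly 2x RAM)
    bonnie++ -d /mnt/array -s 128g

dd prints the throughput when it finishes; compare the numbers from the SAN against a few local drives and the answer usually becomes obvious fast.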
When you say 8x 3GHz CPUs, do you mean 8 sockets each with 4 or 8 cores? Or 8 cores total? Or 4 cores with hyperthreading? For your kind of work, anything past 4 cores total is probably going to be a waste, and anything past 8 cores definitely is. With OLAP / analytics you want fewer, faster CPUs if you can get them.
On to your settings. shared_buffers doesn't need to be very big. On Windows the shared memory implementation is suboptimal for large values, and making it bigger seldom improves performance; that said, I'd test various values to see, but a few hundred megs is likely to be as fast as it gets. maintenance_work_mem can be in the gigabyte range, but the big gain is just getting it above 100MB or so. work_mem is the PostgreSQL foot gun. If you're going to crank it up, and I recommend at least 16 or 32MB on your machine, make sure you ARE limiting max_connections to a few dozen connections at most. If, somehow, someone launches a bunch of queries at once, you can quickly run out of RAM. Not good. OTOH, some testing will likely show that anything over a hundred megs or so doesn't really help a lot.
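As a rough starting point, something like this in postgresql.conf (the exact numbers are only illustrative; test against your own workload):

    shared_buffers = 512MB         # a few hundred MB; bigger rarely helps on Windows
    maintenance_work_mem = 1GB     # most of the win is just getting well past 100MB
    work_mem = 32MB                # per sort/hash operation, per query -- see the caveats below
    max_connections = 40           # keep this to a few dozen if work_mem is cranked up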
The danger with cranking work_mem up too high is that the memory it uses winds up pushing data out of the OS cache, only for that data to be reloaded again. The cost of hitting the disks to get it back is usually higher than the gain from a really big work_mem.
A good rule of thumb is to keep work_mem * max_connections * 2 < 1/4 of memory. So on a machine with 64GB RAM and 100 connections, you'd want work_mem * 200 < 16GB, or about 80MB max. This ensures that pathological behaviour, where all the connections run lots of sorts at once, doesn't kill the machine too readily.
If you find that a work_mem of 1GB works much better than 100MB etc., you can compromise by leaving the regular work_mem lower for safety and having the single session that runs the big queries set its own work_mem after connecting.
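For example, in the session (or script) that runs the big reporting queries, something along these lines (the 1GB value is just illustrative):

    -- give only this session the big sort/hash memory
    SET work_mem = '1GB';
    -- ... run the big analytics query here ...
    RESET work_mem;   -- optional; the setting goes away with the session anyway

Every other connection still runs with the safer server-wide default.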
I agree with the previous poster that Windows is suboptimal for pgsql, with the emphasis that it's much worse for OLAP, where being able to allocate more shared memory is an advantage for pg on Linux.