GFS2 or OCFS2 on top of DRBD allows a pair of servers to run in dual-primary mode as clustered storage, with your web frontends pulling from that shared pair. You could also have multiple heads sharing a single FC-attached device using either filesystem, or you could use NFS to present a single shared filesystem to each of the web frontends. If you use NFS with DRBD, remember that you can only run it in primary/secondary mode, because NFS lacks cluster-aware locking. That could cut your potential throughput in half.
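For the dual-primary DRBD piece, the essentials are synchronous replication and permitting two primaries. A minimal drbd.conf-style sketch, assuming DRBD 8.4 syntax; the resource name `r0` and hostnames `web1`/`web2` are placeholders:

```
resource r0 {
    net {
        protocol C;              # synchronous replication, required for dual-primary
        allow-two-primaries yes; # permit both nodes to be primary at once
    }
    startup {
        become-primary-on both;  # promote both nodes when the resource comes up
    }
    on web1 { ... }              # per-host device/disk/address sections go here
    on web2 { ... }
}
```

You still need a cluster filesystem (GFS2/OCFS2) and proper fencing on top of this; dual-primary DRBD without fencing is an invitation to split brain.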
GlusterFS sounds more like what you're looking for. It has some unique quirks, e.g. when a file is requested on a node that doesn't hold it yet, the metadata lookup reports that it exists, it gets transferred from one of the replica nodes, and then it is served. So the first request for a file on a given node will see some lag, depending on the file size.
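As a sketch of that setup, a two-node replicated GlusterFS volume could look like this; the hostnames, brick paths, volume name, and mount point are all placeholders:

```
# on one gluster node, after both nodes are installed:
gluster peer probe web2
gluster volume create webdata replica 2 web1:/export/brick1 web2:/export/brick1
gluster volume start webdata

# on each web frontend, mount with the native client:
mount -t glusterfs web1:/webdata /var/www/shared
```

With `replica 2`, writes land on both bricks, so each node ends up holding a full copy and can keep serving if the other fails.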
OpenAFS is another possibility. You have shared storage, and each client machine keeps a local cache of recently used files. If the storage server goes down, the local caches can still serve what they already hold.
Hadoop's HDFS is another alternative that just 'works'. It's a bit complicated to set up, but it would also meet your requirements. Bear in mind that any replicated distributed filesystem means storing multiple copies of your content.
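In HDFS the amount of duplication is controlled by the replication factor, set in `hdfs-site.xml`; the default of 3 means every block is stored on three datanodes:

```xml
<!-- hdfs-site.xml -->
<property>
    <name>dfs.replication</name>
    <value>3</value> <!-- each block is stored on 3 datanodes -->
</property>
```

Dropping it to 2 halves the overhead at the cost of tolerating only a single node failure.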
Another quick-and-dirty method is to serve static/uploaded content from a single machine and run Varnish on each of the frontends to maintain a RAM-cached copy of it. If the single machine fails, Varnish keeps serving already-cached items until their grace period expires, but new items would be unavailable.
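The grace behaviour is set in VCL. A minimal sketch, assuming Varnish 4+ syntax (older versions put this in `vcl_fetch`), that keeps objects servable for six hours past their TTL:

```
sub vcl_backend_response {
    # keep expired objects around so stale copies can be
    # served while the backend is unreachable
    set beresp.grace = 6h;
}
```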
Much of this comes down to how reliable a backend you need. Distributed filesystems where each local machine is a replica node will probably have the edge on speed, since reads don't require a network round trip, but with GigE and 10G cards being cheap, you probably won't see significant latency either way.