I have used WriteBack in the past for two primary reasons:
1) Faster writes from the host perspective.
2) Reordering disk writes.
Faster writes allow the host to write to enclosure RAM and then continue on (with battery backup, of course). Reordering allows those writes to hit the disks in a different order than they were received from the host: data can be written at leisure, when the read/write heads are already near the target location. Though I haven't read it anywhere specifically, I would speculate that some enclosures are better at reordering and deferring writes than others, depending on the understanding and skill of the team writing the firmware.
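To make the reordering idea concrete, here is a minimal toy sketch (not any vendor's actual firmware logic, just the elevator idea): flush queued writes in LBA order from the current head position instead of arrival order, so the heads travel far less overall.

```python
# Toy illustration of write reordering behind a write-back cache.
# NOT real controller firmware -- just the elevator/SCAN idea.

def seek_distance(head, queue):
    """Total head movement if the queued writes are flushed in this order."""
    total = 0
    for lba in queue:
        total += abs(lba - head)
        head = lba
    return total

arrival_order = [900, 10, 880, 30, 850, 50]   # order received from the host
head = 0

# Elevator-style flush: sort the pending writes by LBA and do one sweep.
elevator_order = sorted(arrival_order)

print("arrival order :", seek_distance(head, arrival_order), "tracks of movement")
print("elevator order:", seek_distance(head, elevator_order), "tracks of movement")
```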
Let's compare an SSD with a 15k SAS drive. Using an Intel 320 as an example, the specs show up to 38,000 random read IOPS (14,000 for writes), while a 15k disk might reach around 200 random read IOPS. That would make each SSD roughly as fast as 190 hard drives.
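The arithmetic behind that ratio, using the figures quoted above:

```python
# Rough IOPS comparison using the numbers quoted above.
ssd_random_read_iops = 38_000   # Intel 320 spec sheet (random reads)
hdd_random_read_iops = 200      # typical 15k SAS drive

print(ssd_random_read_iops / hdd_random_read_iops)  # -> 190.0 drives' worth of read IOPS
```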
Since SSDs don't gain speed from write reordering the way spinning disks do, and because of their high throughput, it would seem that the usefulness of WriteBack has been mostly eliminated. Based on this logic, and on what research I was able to find, I would recommend using WriteThrough for SSD SCSI enclosures, while still allowing read caching (debatable). I would also disable any Read Ahead caching schemes; it seems pointless to pre-read something that can already move almost 300 MB/sec.
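For what it's worth, on an LSI MegaRAID controller that policy change would look roughly like the commands below (wrapped in Python only to keep one language in this answer; the controller model, MegaCli install path, and applying it to all logical drives are my assumptions, and other vendors' tools use different syntax):

```python
# Hypothetical example: set WriteThrough and disable Read Ahead on an LSI
# MegaRAID controller via the MegaCli utility. Controller choice, install
# path, and scope (-LALL/-aALL) are assumptions -- check your vendor's docs.
import subprocess

MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # assumed install path

commands = [
    [MEGACLI, "-LDSetProp", "WT",   "-LALL", "-aALL"],  # WriteThrough on all logical drives
    [MEGACLI, "-LDSetProp", "NORA", "-LALL", "-aALL"],  # no Read Ahead
]

for cmd in commands:
    subprocess.run(cmd, check=True)
```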
With SSDs in RAID enclosures, the bottleneck moves from disk IOPS to the enclosure link (iSCSI/Fibre Channel), unless of course you are fortunate enough to have 10 Gb links.
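To put rough numbers on that (raw line rates only; real iSCSI/FC throughput is lower after protocol and encoding overhead, and the SSD figure is the single-drive sequential read number from above):

```python
# Back-of-the-envelope link math (raw line rates, ignoring protocol overhead).
ssd_seq_read_mb_s = 270           # one Intel 320, per the "almost 300 MB/sec" above
gige_iscsi_mb_s   = 1_000 / 8     # 1 Gb iSCSI  ~= 125 MB/s raw
tengig_mb_s       = 10_000 / 8    # 10 Gb link  ~= 1250 MB/s raw

for name, link in [("1 Gb iSCSI", gige_iscsi_mb_s), ("10 Gb", tengig_mb_s)]:
    print(f"{name}: saturated by ~{link / ssd_seq_read_mb_s:.1f} SSDs of sequential reads")
```

In other words, a single SSD can already swamp a gigabit iSCSI link, while 10 Gb leaves room for a few drives before the link becomes the limit.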