The main advantage of hardware RAID is the protected write-back cache, which boosts performance for synchronized writes (e.g. databases). You should absolutely avoid RAID cards without a protected write-back cache, as they are often much slower than software RAID. At the same time, not all RAID controllers play well with SSDs. The main reason is that, to give good performance, SSDs need their local private cache enabled, but as a safety measure some controllers forcibly disable disk caches. While this is perfectly reasonable with mechanical hard disks, it leads to much degraded performance with SSDs.
Not all controllers behave the same. Ideally, your controller should leave the disk cache enabled and flush it when transferring the contents of the controller cache to disk. My experience with LSI RAID controllers (Dell PERC cards are rebranded LSI) is that, while they disable the disks' private cache by default, they can be tuned to exactly the behavior described above (disk cache enabled + flush), so they should be very fast with SSDs without compromising data safety. Anyway, as this depends not only on the controller used but also on its firmware, you should consult the controller manual/guide to be sure.
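As an illustration, this is roughly how you would inspect and tune the cache policy on an LSI/PERC controller. Exact flags and tool names vary by firmware and generation, so treat this as a hedged sketch (the controller index `/c0` and the `MegaCli64`/`storcli` binaries are assumptions about your install):

```shell
# Show the current disk-cache policy of all logical drives (MegaCli syntax).
MegaCli64 -LDGetProp -DskCache -LAll -aAll

# Keep the controller's protected write-back cache in use...
MegaCli64 -LDSetProp WB -LAll -aAll

# ...and explicitly re-enable the disks' private caches for SSD arrays.
MegaCli64 -LDSetProp -EnDskCache -LAll -aAll

# Equivalent settings with the newer storcli tool:
storcli /c0/vall set wrcache=wb
storcli /c0/vall set pdcache=on
```

Double-check the resulting policy with `storcli /c0/vall show all` before putting the array into production.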
On the other side of the equation, you have software RAID. Its big advantages are a standardized on-disk format and the greater flexibility it provides, but lacking a protected write-back cache, performance can suffer in some scenarios. On the other hand, Linux MDRAID works very well with SSDs (recent versions even support TRIM).
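If you go the MDRAID route, you can quickly check that TRIM actually passes through the array to the underlying SSDs. A minimal sketch, assuming a hypothetical SSD-backed array `/dev/md0` mounted on `/mnt`:

```shell
# Non-zero DISC-GRAN/DISC-MAX values mean the md device propagates discards.
lsblk --discard /dev/md0

# Manually trim the mounted filesystem; -v reports how much was trimmed.
fstrim -v /mnt
```

If `fstrim` succeeds, you can schedule it periodically (many distributions ship an `fstrim.timer` for exactly this).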
One solution to all these performance problems, with software or hardware RAID alike, is to use SSDs with complete power-loss protection (read: enterprise-grade drives), such as the Intel 3700/3600/3500 series or the Micron M500/M600 DC (please note the "DC" part). With these disks you can safely leave the disk cache enabled and flushes disabled, because the drives themselves protect their cache with capacitors.
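In practice, "flushes disabled" on Linux usually means mounting without write barriers. This is only safe on drives with power-loss protection; note the `nobarrier` ext4 option is deprecated/removed in recent kernels, where flushes on such drives are nearly free anyway (the device path is a placeholder):

```shell
# Only do this on PLP-equipped drives: skip cache-flush barriers (older ext4 syntax).
mount -o nobarrier /dev/md0 /mnt
```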
Again, be sure to read the specifications before buying anything.