How long 1.5TB of data takes to copy depends very much on the type of data. If you have 1,500 1GB files, it will probably only take a few hours, but if you have a billion and a half 1KB files it can take days.
This is because of two contending specs on disks: throughput and average access time. A traditional disk with 100MB/sec throughput and a 10ms access time is fairly common. If you can stream data sequentially, you get the full 100MB/sec. However, every jump to another location costs 10ms, and in the time that one jump takes you could have streamed 1MB of data.
Creating a file can take several seeks, so making a 1KB file can "cost" as much as streaming several MB of data.
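To make the numbers concrete, here is a back-of-the-envelope calculation for both scenarios, assuming the 100MB/sec and 10ms figures above and (hypothetically) 3 seeks of overhead per file created. It's a pessimistic upper bound — write caching and metadata batching bring the small-file case down a lot in practice — but it shows how the seek term comes to dominate:

```shell
awk 'BEGIN {
  tput = 100 * 1024 * 1024        # throughput: 100 MB/sec in bytes/sec
  seek = 0.010                    # one seek: 10 ms
  perfile = 3                     # assumed seeks of overhead per file

  # 1,500 files of 1 GB each: almost pure streaming
  big = (1500 * 1024^3) / tput + 1500 * perfile * seek
  # 1.5 billion files of 1 KB each: almost pure seeking
  small = (1.5e9 * 1024) / tput + 1.5e9 * perfile * seek

  printf "1,500 x 1GB:   %.1f hours\n", big / 3600
  printf "1.5e9 x 1KB:   %.1f days (worst case)\n", small / 86400
}'
```

The streaming term is nearly identical in both cases (it's the same 1.5TB); the difference is entirely the per-file seek overhead.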
So, in some cases it's better to do a raw copy of the block device than to copy at the file-system level via something like rsync. If you have a lot of files on a file-system that is, say, 50% or more full, you're often better off, time-wise, just copying the full block device via "dd". Of course, you can't do this while the file-system is mounted, so this has drawbacks as well.
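A sketch of what that raw copy looks like. With real disks you'd point dd at the source and destination devices directly (e.g. if=/dev/sdX of=/dev/sdY — placeholder names, double-check them, since dd will happily overwrite the wrong disk) after unmounting the source; here the same command runs against two scratch image files so the example is safe to execute:

```shell
# Create a small scratch "disk" to stand in for the source device.
src=$(mktemp) ; dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=8 2>/dev/null

# The copy itself: a large block size keeps the drive streaming instead
# of seeking. status=progress needs GNU coreutils 8.24 or newer.
dd if="$src" of="$dst" bs=64M conv=noerror status=progress

cmp "$src" "$dst" && echo "copies match"
```

Note that dd copies free space too, so on a mostly-empty file-system rsync wins — the 50%-full rule of thumb above is about where the crossover tends to sit.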
SSDs can help mitigate this, because their access times are around 100 times faster, but MLC SSDs have complicated write behaviour that depends on the availability of a pool of pre-erased blocks; SLC SSDs suffer less from this.
RAID controllers with built-in cache can help with the seeks, as can something like the flashcache kernel module that lets you cache a block device via an SSD.
RAID systems can allow multiple seeks to proceed in parallel, effectively reducing the average access time, and can parallelize transfers to increase throughput. But your overall performance will still often depend on how many files are involved.
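So before picking a strategy, it's worth checking roughly how many files you're dealing with and how full the file-system is. On most Linux file-systems the used-inode count from df is a decent proxy for the file count (substitute the mount point you intend to copy for / below):

```shell
# Used inodes approximate the number of files + directories on
# ext4-style file-systems (xfs allocates inodes dynamically, so treat
# its numbers as a rough guide).
df -i /

# And how full it is, for the "50% or more full" rule of thumb above.
df -h /
```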