IMO, you should use striping by itself only when you don't care at all about your data.
when you stripe data across multiple drives, the failure of any one drive means the loss of the entire volume, so the more drives you stripe over, the greater the risk of losing everything.
accordingly, IMO, the answer to "when should i stripe?" is either "never" or "when you're striping mirrored volumes, as in RAID-10".
if both the data itself and IO performance are important to you, then get a good hardware SAS RAID card (e.g. an adaptec 3805 or 5805 or similar) with a large battery-backed write cache, and make a RAID-6 volume. RAID-6 uses two drives' worth of space for parity, so to get 4TB you'll need 6 x 1TB drives...plus one more as a hot or cold spare.
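the usable-capacity arithmetic is easy to sanity-check with a throwaway shell snippet (drive count and size taken from the setup above):

```shell
# RAID-6 usable space = (drives - 2 parity drives) * drive size
drives=6
parity=2
size_tb=1
raid6_tb=$(( (drives - parity) * size_tb ))
echo "RAID-6: ${raid6_tb}TB usable from ${drives} x ${size_tb}TB drives"
# prints: RAID-6: 4TB usable from 6 x 1TB drives
```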
SAS controllers support both SAS drives and SATA drives. the models mentioned above support up to 8 drives directly, and more through SAS expanders - at the cost of performance, since more drives share the same IO bandwidth. in practice you could probably expand to 16 drives or so without noticing any real hit: 3Gbps per SATA channel gives you maybe 250MB/s of IO, and current good non-SSD drives can each sustain about 100-120MB/s.
alternatively, use software RAID-10 (a striped array of mirrored volumes). a 4TB array would require 8x1TB drives. e.g. 4 x RAID-1 arrays striped together with RAID-0 (or LVM) for a single 4TB volume.
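that layered layout can be sketched with mdadm. this is a sketch only - the device names (/dev/sdb through /dev/sdi) are assumptions, so double-check against your own hardware before running anything this destructive:

```shell
# build four RAID-1 mirrored pairs (device names are assumptions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdh /dev/sdi

# stripe the four mirrors together with RAID-0 for one ~4TB volume
mdadm --create /dev/md4 --level=0 --raid-devices=4 \
    /dev/md0 /dev/md1 /dev/md2 /dev/md3
```

note that current mdadm can also build the whole thing in one step with --level=10 across all 8 drives, which is usually simpler to manage than the nested arrays.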
you can use LVM on top of these RAID arrays to manage the space. If you're going the RAID-10 route, then the striping can be done with LVM rather than RAID-0.
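a sketch of that LVM-striping variant, assuming the four RAID-1 mirrors already exist as /dev/md0 through /dev/md3 (the VG/LV names and stripe size here are made up for illustration):

```shell
# register the four mirrors as LVM physical volumes, pool them into a VG
pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3
vgcreate vg_data /dev/md0 /dev/md1 /dev/md2 /dev/md3

# -i 4 = stripe across all 4 PVs, -I 64 = 64KB stripe size
lvcreate -i 4 -I 64 -l 100%FREE -n lv_data vg_data
mkfs.ext3 /dev/vg_data/lv_data
```

the advantage over RAID-0 on top is that you can later grow the volume by adding more mirrored pairs to the VG.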
one other thing to consider is to separate the IO-consuming applications so that they don't compete for IO. e.g. keep your OS on one smallish drive, say 80GB (or a RAID-1 mirrored pair), your source code for compiler regression on another drive or RAID-1 pair, and your video data on either sw RAID-10 or hw RAID-6.
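the resulting separation might look something like this in /etc/fstab (device names and mount points are invented for illustration):

```
# OS on a small RAID-1 pair
/dev/md0               /        ext3  defaults  0 1
# source tree on its own RAID-1 pair
/dev/md1               /src     ext3  defaults  0 2
# video data on the big array (sw RAID-10/LVM or hw RAID-6)
/dev/vg_data/lv_data   /video   ext3  defaults  0 2
```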
and install as much memory as you possibly can into the machine as linux will use it all for disk buffering. most common motherboards support up to 4 DDR-2 or 6 DDR-3 memory sticks, so with 2GB sticks being far cheaper than 4GB sticks you can install a maximum of 8GB or 12GB at a reasonable price. if you need more than that, it's more cost-effective to replace the motherboard with a server MB (from Tyan or SuperMicro etc) with more RAM sockets than it is to use 4GB sticks.
oh, and hot swap bays are a good idea - when (not if, when) a drive fails you need to be able to replace it as quickly as possible. RAID-6 can cope with any two drives failing simultaneously, so with one drive dead you can still survive a second failure, but a third will take everything with it - don't run degraded any longer than you have to. RAID-10 can cope with more drives failing (up to half of them, as long as 1 of each mirrored pair survives), but loses everything if both halves of any one pair die.
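for the software RAID case, swapping a dead drive goes roughly like this with mdadm (array and device names are assumptions - use the ones from your setup):

```shell
# mark the dying drive as failed (if the kernel hasn't already)
mdadm --manage /dev/md0 --fail /dev/sdc
# remove it from the array
mdadm --manage /dev/md0 --remove /dev/sdc

# physically swap the drive in its hot-swap bay, then add the new one;
# the rebuild starts automatically
mdadm --manage /dev/md0 --add /dev/sdc

# watch rebuild progress
cat /proc/mdstat
```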
and, finally, backup. RAID, as has been mentioned many times before by many people, is NOT a substitute for backup. the only tape medium currently capable of backing up the quantity of data you have in a reasonable time, without spending days swapping cartridges, is LTO-4. the drives are expensive and the cartridges seem expensive, but the cartridges are actually cheaper than hard drives when you calculate the cost per gigabyte.

if your budget doesn't stretch to that, you could use multiple extra drives instead (connected via eSATA, firewire, a spare hot-swap bay...or even USB) - insert drive, run backup, remove drive, store on a shelf or off-site. current drive capacities are up to 2TB, and will get larger and cheaper over time.

BTW, at current prices (approx $1500-$2000 for a bare LTO drive vs approx $100 for a 1TB hard disk - approx. current australian dollar prices), the cost of an LTO drive would buy you 15 to 20 hard drives for backup...and you could buy them as you need them rather than all at once, with prices dropping noticeably each time.
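the insert-drive/run-backup/remove-drive routine is only a few lines of shell with rsync (the mount point and source path are assumptions - adjust to your layout):

```shell
# back up the video array to a removable drive, then unmount it
# /dev/sdj1 and /mnt/backup are assumptions
mount /dev/sdj1 /mnt/backup

# -a preserves permissions/times, --delete mirrors deletions so the
# backup stays an exact copy of the source
rsync -a --delete /video/ /mnt/backup/video/

umount /mnt/backup
# now put the drive on the shelf or take it off-site
```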