This kind of depends on the details of "huge folder": specifically, whether it's a large number of small files or physically large files, and how deep the directory tree is.
XFS is a very solid file system that is excellent at working efficiently with large files. It often gets knocked in production environments for its aggressive caching of data in RAM and the possible data loss on sudden power failure (not file system corruption, just loss of recently written data), although pretty much every file system suffers from this problem to some extent. The other gotcha is somewhat slower metadata operations when adding or deleting directories. This may be a deal-breaker if you have a deep directory tree, but I would suggest testing XFS before dismissing it.
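If you want a quick sanity check of that metadata cost before committing, a rough micro-benchmark along these lines will do. This is a minimal Python sketch, not a rigorous benchmark; the mount point and directory count are assumptions you should adjust for your setup:

```python
#!/usr/bin/env python3
"""Rough micro-benchmark of directory create/delete (metadata) speed.

Point TARGET at a path on the file system under test; the path and
COUNT below are placeholders, not recommendations.
"""
import os
import time

TARGET = "/mnt/xfs-test/meta-bench"  # hypothetical mount point
COUNT = 10000                        # number of directories to create/remove

os.makedirs(TARGET, exist_ok=True)

# Time directory creation (a pure metadata workload).
start = time.perf_counter()
for i in range(COUNT):
    os.mkdir(os.path.join(TARGET, f"d{i}"))
create = time.perf_counter() - start

# Time directory removal.
start = time.perf_counter()
for i in range(COUNT):
    os.rmdir(os.path.join(TARGET, f"d{i}"))
delete = time.perf_counter() - start

print(f"created {COUNT} dirs in {create:.2f}s ({COUNT / create:.0f}/s)")
print(f"removed {COUNT} dirs in {delete:.2f}s ({COUNT / delete:.0f}/s)")
```

Run it once per candidate file system and compare the per-second rates; if XFS holds up for your tree depth, the metadata concern is moot.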
JFS is a rock-solid file system noted for low CPU usage and well-rounded performance under many different loads. It's pretty much my go-to file system when I want the stability of ext3 but can't deal with the performance quirks (namely inefficient allocation and slow disk access) of the ext series of file systems. You may not find it quite as fast with large files as XFS, though.
Without further details on your targeted workload, I can't really give you a definitive suggestion, but I suspect JFS will be a very good choice if you don't have time for extensive tuning and benchmarking.
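That said, even a crude large-file test run once per candidate file system tells you more than general reputation. Here is a minimal Python sketch for sequential-write throughput; the path and file size are placeholders, and the `fsync` before the timer stops matters, since otherwise you mostly measure the RAM caching mentioned above:

```python
#!/usr/bin/env python3
"""Rough sequential-write throughput test for large files.

Run once per file system (XFS, JFS, ...) with PATH on that mount.
PATH, CHUNK, and TOTAL are illustrative values, not recommendations.
"""
import os
import time

PATH = "/mnt/test/bigfile.bin"    # hypothetical mount point
CHUNK = 1024 * 1024               # write in 1 MiB chunks
TOTAL = 2 * 1024 * 1024 * 1024    # 2 GiB test file

buf = os.urandom(CHUNK)
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)

start = time.perf_counter()
written = 0
while written < TOTAL:
    written += os.write(fd, buf)
os.fsync(fd)                      # force data to disk before stopping the clock
elapsed = time.perf_counter() - start

os.close(fd)
os.unlink(PATH)

print(f"wrote {written / 2**20:.0f} MiB in {elapsed:.2f}s "
      f"({written / 2**20 / elapsed:.0f} MiB/s)")
```

If the numbers between XFS and JFS are within a few percent for your workload, pick whichever you'd rather administer.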