Large numbers of small files are inefficient on multiple levels; could the neuroimaging software be changed to produce fewer, larger files?
If that's not an option, you can do several things. The first is to store the data on an SSD. These operations are slow because they have to query the status of every file in your repository, and an SSD makes each of those small metadata reads much, much faster.
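If you want to see how much of the wait is just per-file metadata lookups, a rough timing sweep like this sketch can make it concrete (the data path is a placeholder for wherever your images live):

```python
# Rough measurement: every file costs at least one stat() call,
# so the total time scales with the number of files.
import os
import time

data_dir = "/path/to/experiment"  # placeholder

start = time.monotonic()
count = 0
for root, dirs, files in os.walk(data_dir):
    for name in files:
        os.stat(os.path.join(root, name))  # one metadata read per file
        count += 1
elapsed = time.monotonic() - start

print(f"stat()ed {count} files in {elapsed:.2f} s")
```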
Another is to limit the number of files in any given directory. You may not be able to split up the files from a single experiment, but make sure you're not putting files from multiple experiments in the same directory. This matters because the time to look up a file in a directory is often proportional to the number of entries in that directory.
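If you're already stuck with one enormous flat directory, a common workaround is to shard it into subdirectories keyed on a hash prefix of each filename, so no single directory holds too many entries. A minimal sketch (the paths are made up; try it on a copy first):

```python
# Shard a flat directory into at most 256 subdirectories, keyed on the
# first two hex characters of a hash of each filename.
import hashlib
import os
import shutil

src = "/data/experiment1"          # placeholder: the huge flat directory
dst = "/data/experiment1_sharded"  # placeholder: the sharded copy

for name in os.listdir(src):
    path = os.path.join(src, name)
    if not os.path.isfile(path):
        continue
    shard = hashlib.md5(name.encode()).hexdigest()[:2]
    shard_dir = os.path.join(dst, shard)
    os.makedirs(shard_dir, exist_ok=True)
    shutil.move(path, os.path.join(shard_dir, name))
```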
Another would be to investigate different filesystems, or different filesystem configurations; not all filesystems cope well with large directories. For example, on ext3/4 you can enable the dir_index feature so that directories are indexed with hashed B-trees, which speeds up lookups in large directories. Use the tune2fs program to set it.
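Here is a minimal sketch of that check-and-enable sequence, wrapped in Python's subprocess only to match the other examples (plain shell works just as well). The device name is an assumption, and e2fsck -D, which rebuilds the indexes of directories that already exist, should be run on an unmounted filesystem:

```python
# Check whether dir_index is enabled on an ext3/4 filesystem and,
# if not, turn it on and reindex existing directories.
import subprocess

device = "/dev/sdb1"  # placeholder: the device holding the image data

features = subprocess.run(
    ["tune2fs", "-l", device],
    check=True, capture_output=True, text=True,
).stdout

if "dir_index" not in features:
    # Enable hashed B-tree directory indexes.
    subprocess.run(["tune2fs", "-O", "dir_index", device], check=True)
    # Rebuild the indexes of existing directories (unmounted filesystem only).
    subprocess.run(["e2fsck", "-fD", device], check=True)
```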
A last, more desperate option might be to combine all of these tiny files into archives, such as tarballs or zip files. This would complicate working with them, but it would greatly reduce the number of files you have to deal with. You may also be able to script away some of the complexity this causes; for example, when you need to view one of these images, a script could extract the tarball into a temporary directory, launch the viewer, and then delete the extracted files when the viewer exits.
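A minimal sketch of such a wrapper, using Python's tarfile and tempfile modules; the archive path and viewer command are placeholders for whatever you actually use:

```python
# Extract an archive to a temporary directory, open a viewer on it, and
# clean up the extracted files when the viewer exits.
import subprocess
import sys
import tarfile
import tempfile

def view_archive(archive_path, viewer="your-image-viewer"):
    with tempfile.TemporaryDirectory() as tmpdir:
        with tarfile.open(archive_path) as tar:
            tar.extractall(tmpdir)
        # Blocks until the viewer is closed; the temporary directory and
        # everything extracted into it are deleted when the block exits.
        subprocess.run([viewer, tmpdir], check=True)

if __name__ == "__main__":
    view_archive(sys.argv[1])
```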