Scenario:
With the Locky virus on the rampage, the computer center I work for has found that the only method of file recovery is using tools like Recuva. The problem with that is it dumps all the recovered files into a single directory. I would like to move all those files into categories based on their file extensions: all JPGs in one, all BMPs in another, and so on. I have looked around Stack Overflow, and based on various other questions and responses I managed to build a small bash script (sample provided) that kinda does that, but it takes forever to finish and I think I have the extensions messed up.
Code:
#!/bin/bash
path=$2       # Starting path to the directory of the junk files
var=0         # How many files were processed
SECONDS=0     # Reset the clock so we can time the event
clear
echo "Searching $path for file types and then moving all files into grouped folders."
# Only want to move files from the first level, as directories are OK where they are.
# Note: word-splitting find's output like this assumes filenames without whitespace.
for FILE in $(find "$path" -maxdepth 1 -type f)
do
    # Split the extension off for the directory name using awk
    DIR=$(awk -F. '{print $NF}' <<<"$FILE")
    # DEBUG ONLY
    # echo "Moving file: $FILE into directory $DIR"
    # Make a directory in our path, then move the file into it
    mkdir -p "$DIR"
    mv "$FILE" "$DIR"
    ((var++))
done
diff=$SECONDS   # Elapsed seconds since the reset above
echo "$var files found and organized in:"
echo "$((diff / 3600)) hours, $(((diff / 60) % 60)) minutes and $((diff % 60)) seconds."
Question:
How can I make this more efficient when dealing with 500,000+ files? The find takes forever to produce the list of files, and on every pass through the loop the script attempts to create a directory even if that path is already there. I would like to deal with those two particular aspects of the loop more efficiently, if at all possible.
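For reference, here is a minimal sketch of the direction I have been considering, in case it helps frame the question. It assumes bash 4+ (for the associative array), streams find's output instead of building the whole list up front, pulls the extension with parameter expansion instead of forking awk for every file, and only runs mkdir the first time an extension is seen; the no_extension folder name is just a placeholder I made up:

#!/bin/bash
# Sketch of a faster pass, assuming bash 4+ for the associative array.
path=$1                  # Directory holding the recovered files
declare -A made          # Extension folders we have already created
var=0
SECONDS=0

# -print0 plus a null-delimited read streams results as find produces them
# and keeps filenames with spaces intact.
while IFS= read -r -d '' FILE
do
    name=${FILE##*/}                 # basename without forking a subprocess
    case $name in
        *.*) DIR=${name##*.} ;;      # text after the last dot
        *)   DIR=no_extension ;;     # placeholder folder for dotless names
    esac
    # Create each extension folder once, not once per file
    if [[ -z ${made[$DIR]} ]]; then
        mkdir -p "$DIR"
        made[$DIR]=1
    fi
    mv "$FILE" "$DIR"
    ((var++))
done < <(find "$path" -maxdepth 1 -type f -print0)

echo "$var files organized in $SECONDS seconds."

The directory cache trades a little memory for skipping half a million redundant mkdir calls, which seems like the right trade at this scale. Is that roughly the right direction, or is there a better way?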