I tried recovering a disk image from an NTFS hard drive that is 50% unreadable. A side effect is that many of the recovered files have the correct filename, type, and file size as the originals, but instead of containing any useful data they are just filled with 00 00 00 00 etc. when viewed in a hex editor. Since these files are of no use but still take up disk space, is there a way to automate finding and deleting them all?

  • Do an inverse binary search for a zero byte. Don't delete if anything is found in the file. Look up the man pages for which flags to use (a sketch of this idea follows these comments). – Mad Physicist Jul 15 '18 at 05:17
  • Let @tripleee help you... https://stackoverflow.com/a/20225032/2836621 – Mark Setchell Jul 15 '18 at 09:23
  • Possible duplicate of [How to check if a file contains only zeros in a Linux shell?](https://stackoverflow.com/questions/20224822/how-to-check-if-a-file-contains-only-zeros-in-a-linux-shell) – tripleee Jul 23 '18 at 07:03
  • I'm looking through old backups and also seeing ~50 GB files inside System Volume Information that have nothing but zeros when viewed in a hex editor. Easy to delete those! – endolith Feb 28 '19 at 17:12
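
A minimal sketch of the check from the first comment, assuming GNU coreutils (`stat -c%s`). It only lists candidates instead of deleting them, so the results can be reviewed first:

#!/bin/sh
# List files that consist entirely of NUL bytes (empty files match too).
for f in ./*; do
    [ -f "$f" ] || continue
    # Compare the file against an equal-length stream of zero bytes.
    if head -c "$(stat -c%s "$f")" /dev/zero | cmp -s - "$f"; then
        printf '%s\n' "$f"    # all zeros: candidate for deletion
    fi
done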

3 Answers

After some research, I came up with:

grep --ignore-case -r -L --null '[^0]' * | xargs -0 rm
  • This removes files which contain only 0x30 bytes (the ASCII code for the character `0`). Some variants of `grep` might support the notation `[^\000]` or if you have Bash you can say `$'[^\x00]'` – tripleee Jul 23 '18 at 07:06
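
A hedged variant that folds in that correction, assuming GNU grep with PCRE support (-P) and GNU xargs (-r). Note that grep loads whole lines into memory, so multi-gigabyte files with no newlines can be slow:

# -P makes \x00 a literal NUL; -L lists files with no non-NUL byte;
# -Z (--null) NUL-terminates the names so xargs -0 can consume them.
grep -r -L -Z -P '[^\x00]' . | xargs -0 -r rm --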

I tested the [^0] and [^\000] patterns on zero-filled files of lengths 1000 B, 2000 B, 3000 B, ..., 5000000 B.

Looks like both work properly.
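
For reference, a sketch of how such test files can be generated (file names are illustrative):

# Zero-filled test files of increasing size.
for size in 1000 2000 3000 5000000; do
    head -c "$size" /dev/zero > "zeros_${size}B"
done
# A control file with real content, which must not be flagged.
printf 'payload\n' > control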

Finding the files, using the approach from https://stackoverflow.com/a/20226139/969504:

find . -type f -size +0c -exec bash -c '<"$0" tr -d "\0" | read -n 1 || echo "$0"' {} ';' 2>/dev/null
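
That command only prints the matching files; a hedged extension that deletes them as well (GNU xargs assumed, for -r):

# Emit NUL-terminated names of all-zero files, then remove them.
find . -type f -size +0c -exec bash -c \
    '<"$0" tr -d "\0" | read -n 1 || printf "%s\0" "$0"' {} ';' |
    xargs -0 -r rm --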