
I have a RHEL 7.9 server I'm using as a file server. It exposes an XFS partition over NFS for clients to mount and use. I am willing to use a different filesystem than XFS if need be, but I cannot escape using NFS.

I'd like to force a Recycle Bin so that accidentally-deleted files are recoverable. Most of the results I see when looking for how to do this under NFS are simply "you can't, use CIFS/Samba".

I thought I could maybe use inotifywait to intercept file delete calls and create a hard link in a Recycle Bin directory to "save" the file from deletion, but it seems inotifywait runs after the file is already gone.

tjlds
  • > inotifywait runs after the file is already gone. Undelete the file and move it – gapsf Sep 13 '22 at 06:09
  • I guess it is possible with eBPF. I read something about syscall interception but don't remember exactly – gapsf Sep 13 '22 at 06:15

2 Answers


Let me say it again: in UNIX/Linux file operations there is no such thing as a "Recycle Bin". By design!
You can create a directory and move files there instead of deleting them. Or rewrite the file functions of Linux.

Romeo Ninov
  • This doesn't answer my question though. I know there is nothing automatic that Linux does; I want to _set up_ something to do this. – tjlds Sep 13 '22 at 05:42

The simplest way to emulate a recycle bin is with hard links. Just create one directory and add one command to cron.

Create a "shadow" dir in the same filesystem but outside of the exported NFS directory. If users should have access to deleted files, export it read-only.

When a file is deleted from the NFS dir, its data stays on disk because it is still linked to a filename in the shadow dir, from where you can "restore" it.
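A quick way to see the hard-link trick in action (the paths here are illustrative; any two directories on the same filesystem will do):

```shell
# Create a file on the "share" and a second name for the same inode
# in the shadow dir (both paths are hypothetical examples)
mkdir -p /tmp/nfs_share /tmp/shadow_dir
echo "important data" > /tmp/nfs_share/report.txt
ln /tmp/nfs_share/report.txt /tmp/shadow_dir/report.txt

# "Delete" the file from the share; the data survives because the
# inode's link count only drops from 2 to 1
rm /tmp/nfs_share/report.txt
cat /tmp/shadow_dir/report.txt    # prints: important data
```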

There are options:

  • create the hard links on a schedule, or on inotify events (after a new file is created)

  • copy the full directory, or hard-link each new file individually.

The simplest way is a full directory copy on a schedule with cp or rsync.

Add a cron entry, or a systemd timer and service, with something like:

# -a preserves attributes, -l creates hard links instead of copying data;
# the trailing /. copies the contents instead of nesting a subdirectory
cp -al /path/to/nfs_share/. /path/to/shadow_dir/
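For the scheduling part, a single crontab line is enough; a systemd timer/service pair is equivalent. The unit names and the 15-minute interval below are made-up examples, not anything RHEL ships:

```
# crontab entry: refresh the shadow dir every 15 minutes
*/15 * * * * cp -al /path/to/nfs_share/. /path/to/shadow_dir/

# or, as systemd units, e.g. /etc/systemd/system/shadow-copy.service:
[Unit]
Description=Hard-link NFS share into shadow dir

[Service]
Type=oneshot
ExecStart=/usr/bin/cp -al /path/to/nfs_share/. /path/to/shadow_dir/

# and /etc/systemd/system/shadow-copy.timer:
[Unit]
Description=Run shadow-copy periodically

[Timer]
OnCalendar=*:0/15

[Install]
WantedBy=timers.target
```

With the timer variant, enable it with `systemctl enable --now shadow-copy.timer`.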

And you will have hard links in the shadow dir to all file(names) ever created on the NFS share (except new files deleted before the shadow dir was updated).

Of course, if a new file on the NFS share matches a filename already in the shadow dir, that name will be re-linked to the new data.

It should be fast, since no file data is copied at all, only metadata.

With an inotify create event and cp -al --backup=numbered you get a 'versioned bin' which stores different versions of files that share the same filename.
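A minimal sketch of that event-driven variant, assuming inotify-tools is installed. The paths are illustrative, and note one caveat: cp -l can fail with "are the same file" if an in-place write leaves the share file and the shadow link pointing at the same inode.

```shell
#!/bin/sh
# Watch the share recursively; on each finished write, hard-link the
# file into the shadow dir, keeping old versions as name.~1~, name.~2~, ...
SHARE=/path/to/nfs_share     # hypothetical paths
SHADOW=/path/to/shadow_dir

inotifywait -m -r -e close_write --format '%w%f' "$SHARE" |
while read -r file; do
    rel=${file#"$SHARE"/}
    mkdir -p "$SHADOW/$(dirname "$rel")"
    # --backup=numbered moves an existing shadow copy aside
    # before linking the new version in
    cp -al --backup=numbered "$file" "$SHADOW/$rel"
done
```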

gapsf