
Here is the scenario:

I need to authorize a user to work in /opt/dev-folder/. Developers make frequent changes to files, and sometimes they replace or remove the original file.

I have assigned privileges with sudo. Now I would like some ideas on how to take an automatic backup of a file when a developer runs rm -f file-name.

For example, when the developer runs:

sudo rm -rf file-name.php

in the background, it should do something like this:

mv file-name.php /backup/

Any ideas or better recommendations would be highly appreciated.

voretaq7
Email geek
  • Use an actual revision control system, such as git. – Michael Hampton Jan 17 '13 at 05:55
  • @MichaelHampton we do use it for code. We need this as a precautionary measure. – Email geek Jan 17 '13 at 06:33
  • Revision control isn't just about code. You can alias rm to something, sure, but it *will* bite you in the ass at some point. – Sirex Jan 17 '13 at 07:12
  • Otherwise, try something like http://en.wikipedia.org/wiki/Ext3cow – Sirex Jan 17 '13 at 07:14
  • +1 for revision control and for alias. If this folder is important to you, keep it regularly backed up. Use cron to schedule a regular check-in to a Git/SVN repository, or even just to copy that directory to another date/time-named directory. – jimbobmcgee Jan 17 '13 at 11:21

1 Answer


You can't.
(Well, you can: there are hacks for some filesystems, like the EXT3 copy-on-write extension Sirex pointed out, but there's no universal way to do this on all Unix platforms.)

Let's look at some of the possible implementations and issues/circumventions, just for fun:

  1. Copy on Write Filesystem Extensions
    These are the most foolproof, but they require you to be using a filesystem that has a copy-on-write extension. The implementation varies from OS to OS (and may not even be available on your target platform), and they chew up lots of disk space keeping the shadow copies lying around.

  2. Alias rm to something else
    This works great until someone runs /bin/rm directly (or deletes a file with a tool other than the rm command: GUI file managers, the unlink() system call, etc.).
    This approach is also likely to break when sudo gets involved: sudo executes the real command, not the caller's shell alias.

  3. Replace rm with a script or custom binary
    A nice idea, but replacing core system binaries is generally a Bad Thing.
    It might work, but you'll still need something that actually makes data go away, or you're back in the same boat as option (1), chewing up gobs of disk space. That means either rm sticks around renamed to something like /bin/realrm (which reduces to case (2): someone can just run realrm, though you can mitigate that a bit by making it executable only by root), or you need some other cleanup process.
    You can do it, but it's a lot of work.
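To make options (2) and (3) concrete, here is a minimal sketch of a "move instead of delete" wrapper. The function name safe_rm and the /backup location are illustrative assumptions, not something from the original question, and this carries all the caveats above: it only catches deletions that go through this wrapper.

```shell
# Sketch of a trash-style wrapper: files are moved into $BACKUP_DIR
# instead of being unlinked. safe_rm and /backup are example names.
BACKUP_DIR="${BACKUP_DIR:-/backup}"

safe_rm() {
    mkdir -p "$BACKUP_DIR"
    for f in "$@"; do
        case "$f" in -*) continue ;; esac   # ignore option flags like -f or -rf
        if [ -e "$f" ]; then
            # timestamp the saved copy so repeated deletes don't clobber each other
            mv -- "$f" "$BACKUP_DIR/$(basename "$f").$(date +%Y%m%d%H%M%S)"
        fi
    done
}

# only takes effect in interactive shells, and not under sudo
alias rm='safe_rm'
```

Remember that the alias is cosmetic: scripts, sudo, and anything calling /bin/rm or unlink() directly will bypass it entirely.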


My analysis above is by no means exhaustive. Given the litany of problems, version control is the better option: put the sensitive directories under revision control, and when something is deleted you can always restore it by pulling the pre-deletion version.
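As a hedged illustration of the revision-control approach, assuming the directory is a git working tree and the deleted file was committed at some point (the helper name below is mine, not a git command):

```shell
# Restore a file that was deleted in the most recent commit.
# Assumes the deletion was the last change; in general, check out
# from the last commit that still contained the file.
restore_from_git() {
    git checkout HEAD~1 -- "$1"
}
```

Usage would be something like `restore_from_git file-name.php` from inside the repository, followed by a commit to record the restoration.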

Another traditional precaution against loose nuts on the keyboard is having a BACKUP from which you can restore accidentally deleted files.
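In the spirit of jimbobmcgee's comment, a scheduled copy is a cheap safety net. The paths and schedule below are examples, not anything from the original post:

```shell
# Illustrative crontab entry: mirror the dev folder to a backup
# location every night at 02:00 (run `crontab -e` to install it).
# 0 2 * * * rsync -a /opt/dev-folder/ /backup/dev-folder/
```

Note this only protects files that survived until the last run; anything created and deleted between runs is still lost.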

voretaq7