I use Git to manage the development cycle of my web application, and for deployments where a git pull is enough (no migrations or build steps needed), I naturally just leave the site up and pull the changes in place.
But suppose 100 files have changed, and there is some unexpected load on the disk: what happens if a request comes in at the moment git is still writing the updated files? Couldn't the application read a mix of old and new files, making it vulnerable to malformed responses or even data corruption?
This is very unlikely on a low-traffic website, but with really high traffic, especially if it coincides with a spike in server load, there is some real risk.
I did think of a solution, though rolling my own implementation is probably not ideal, and I'm not sure it's practical. I remember reading an article about why we never see half-drawn artifacts on our screens: new frame data is written to an entirely separate block of GPU memory, and then the pointer to the current screen data (the front buffer, I believe it's called) is simply switched over to the new block, with the old block discarded or reused afterwards. That way, even if the GPU lags, no half-written frame is ever displayed.
Would it be practical to implement something similar for deployments?
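For what it's worth, the double-buffering idea translates quite directly to filesystems: prepare the new version in a separate directory (the "back buffer"), then atomically flip a symlink that the web server serves from. A minimal sketch, assuming the server's document root points at an `app_root/current` symlink (all paths and the `deploy` helper are hypothetical names, not from any particular tool):

```shell
#!/bin/sh
# Sketch of a "buffer swap" deploy. Assumes the web server serves the site
# through a symlink at $app_root/current; directory layout is hypothetical.
set -e

deploy() {
    app_root=$1
    repo=$2
    new="$app_root/releases/$(date +%s)"

    # 1. Write the new "frame" off to the side, where no request can see it.
    git clone --quiet "$repo" "$new"

    # 2. Flip the pointer. rename(2) is atomic, so every request sees either
    #    the old tree or the new one, never a half-updated mix of files.
    ln -s "$new" "$app_root/current.tmp"
    mv -T "$app_root/current.tmp" "$app_root/current"

    # 3. Old release directories can be cleaned up later,
    #    like discarding or reusing the old buffer.
}
```

Note that `mv -T` is a GNU coreutils option (it renames over the destination symlink instead of moving into the directory it points to); on other systems you would need an equivalent rename(2) call. Deploy tools such as Capistrano use essentially this releases-plus-symlink scheme, so the approach is well established rather than something you would have to invent from scratch.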