We have moved to Git after using SVN for years and at times, I must admit, it is confusing. Take the following example:
- User1 makes a change to a.java and pushes to the remote server.
- User2 makes a change to b.java. He can't push straight away (a deviation from SVN, but that is OK). He first needs to pull from the remote server and then push his change. The pull shows up as a separate merge commit, which has been beautifully explained here on Stack Overflow itself.
- Now comes the interesting part. If we extrapolate this to multiple files, there is a possibility of a conflict in one of the files changed by User2. This time, Git can't create the merge commit automatically; User2 has to resolve the conflicts and then commit the merge himself (see the sketch after this list).
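
To make the steps concrete, here is a minimal sketch of the sequence from User2's side. The branch name `master`, the remote name `origin`, and a third file `c.java` that both users changed are assumptions for illustration only:

```
# User2's local commit to b.java (and the shared file c.java)
git commit -am "Change b.java and c.java"

# The push is rejected because User1's commit is already on the remote
git push origin master

# Pull fetches User1's commit and attempts an automatic merge;
# a.java merges cleanly, but c.java (edited by both users) conflicts
git pull origin master

# User2 edits c.java to resolve the conflict markers, marks it
# resolved, and completes the merge with a merge commit
git add c.java
git commit

# Now the push succeeds; the merge commit goes to the remote
git push origin master
```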
This is confusing, since a user who hasn't made changes to most of these files is skeptical about committing them as part of this merge commit (especially with the SVN background). If he now commits only the files whose conflicts he resolved and pushes to the remote, the remote no longer carries the latest versions of the files he left out. This gives the rest of the team the perception of "I lost my work".
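
For reference, this is a small sketch of how one might look at the pending merge at that point (standard commands, no assumptions beyond the scenario above); it shows why files the user never touched are sitting in his commit:

```
# While the merge is still in progress, the files Git merged
# cleanly (e.g. a.java from User1) are already staged, and the
# conflicted ones are listed under "Unmerged paths"
git status

# Shows everything the merge commit is about to record,
# including changes User2 never made himself
git diff --cached
```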
My question, after this long story, is: why does Git do this? Why shouldn't it keep the other files at their latest revision? Shouldn't Git know that the user is not committing all the files it brought to his machine as part of this auto-merge? Could there be a mechanism by which we can avoid making this mistake?