I'm serving a fairly large repository (9.5 GB for the bare .git), and every time someone needs to clone it the server spends a large amount of memory (and time) in the "Compressing objects" phase.
Is there a way to optimize the repository's files so the data is stored as ready for transmission as possible? I want to avoid redoing that work every time a clone is requested.
I'm using GitLab as the central server, but the same thing happens when cloning directly over SSH from another machine.
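
For reference, this is roughly the kind of server-side maintenance I've been considering (the path is just a placeholder); I'm not sure whether the resulting pack actually gets reused as-is when serving clones:

```
# run inside the bare repository on the server (placeholder path)
cd /path/to/repo.git

# repack everything into a single pack and write a reachability bitmap,
# in the hope that packed data can be streamed to clients without recompression
git repack -a -d --write-bitmap-index
```

If that's the right direction, I'd also like to know how often it needs to be rerun as new objects arrive.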