
I'm configuring a Jenkins environment for my company. We use Gitea 1.9.3, Jenkins 2.194, and Git 2.23. When I build a repository from Source Code Management, the build succeeds, but the fetch command takes between 120 and 150 seconds. Any suggestions on what to do?

I tried changing Git clients, made sure the credentials are OK, and tested with an empty and a full repository, with submodules, and with a single-branch configuration.

The problematic part (build log screenshot not included):

Edit 1: The build takes 4:27 whether the repository is empty or full of content; the git fetch command takes around 3 seconds if I run it manually on the server.
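For reference, this is how the manual measurement can be reproduced from the Jenkins workspace on the server (the remote name origin is an assumption):

    # Time a fetch by hand to compare against the build log.
    # (The remote name 'origin' is an assumption.)
    time git fetch origin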

3 Answers


Git is pretty fast, but if your repo is very large, downloading the entire history can take some time.

One option is to pass the --depth 1 argument to git clone or git fetch. It tells Git to download only the most recent commit rather than the full history. You won't be able to switch between commits, view diffs, and so on, but since this is a build environment, you don't need any of that.
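For example, from the command line (the repository URL below is a placeholder):

    # Clone only the latest commit instead of the full history.
    # (URL is a placeholder for your Gitea repository.)
    git clone --depth 1 https://gitea.example.com/org/repo.git

    # Or shallow-fetch a single branch into an existing clone.
    git fetch --depth 1 origin master

In Jenkins, the Git plugin exposes the same thing as a shallow-clone setting (with a configurable depth) under the advanced clone behaviours of the SCM configuration.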

– Dan
  • Thanks for the answer. I tried that, but the build still takes around 4.5 minutes. Any suggestions on something else to try? – Barak Valzer Sep 16 '19 at 08:07

I tried installing Jenkins on Ubuntu Server 18.04, and it shortened the time from 4.5 minutes to around 20 seconds for a full build with submodules.
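A minimal sketch of that setup, assuming the standard Jenkins LTS apt repository (URLs as they were at the time; verify against the current Jenkins documentation):

    # Add the Jenkins LTS apt repository on Ubuntu 18.04 and install.
    # (Repository URL and key as of 2019; check the Jenkins docs for the current ones.)
    wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
    echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
    sudo apt-get update
    sudo apt-get install -y jenkins git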


Quoting the previous answer: "I tried installing Jenkins on Ubuntu Server 18.04, and it shortened the time from 4.5 minutes to around 20 seconds for a full build with submodules."

Indeed, there is a performance regression reported on the Git mailing list for any Git version newer than 2.20 on Windows.
See also git-for-windows/git issue 2199.

With Git 2.29 (Q4 2020), an optimization around submodule handling makes that much faster on Windows.
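The operation that benefits is a recursive fetch in a superproject with submodules and many refs, e.g. (remote name is an assumption):

    # Fetch the superproject and recurse into submodules whose
    # referenced commits changed. ('origin' is an assumption.)
    git fetch --recurse-submodules=on-demand origin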

See commit 7ea0c2f (04 Sep 2020) by Orgad Shaneh (orgads).
(Merged by Junio C Hamano -- gitster -- in commit bcb68bf, 22 Sep 2020)

fetch: do not look for submodule changes in unchanged refs

Signed-off-by: Orgad Shaneh

When fetching recursively with submodules, for each ref in the superproject, we call check_for_new_submodule_commits() which collects all the objects that have to be checked for submodule changes on calculate_changed_submodule_paths().
On the first call, it also collects all the existing refs for excluding them from the scan.

calculate_changed_submodule_paths() creates an argument array with all the collected new objects, followed by --not and all the old objects. This argv is passed to setup_revisions, which parses each argument, converts it back to an oid and resolves the object.
The parsing itself also does redundant work, because it is treated like user input, while in fact it is a full oid. So it needlessly attempts to look it up as ref (checks if it has ^, ~ etc.), checks if it is a file name etc.

For a repository with many refs, all of this is expensive. But if the fetch in the superproject did not update the ref (i.e. the objects that are required to exist in the submodule did not change), there is no need to include it in the list.

Before commit be76c212 ("fetch: ensure submodule objects fetched", 2018-12-06, Git v2.21.0-rc0 -- merge listed in batch #4), submodule reference changes were only detected for refs that were changed, but not for new refs. This commit covered also this case, but what it did was to just include every ref.

This change should reduce the number of scanned refs by about half (except the case of a no-op fetch, which will not scan any ref), because all the existing refs will still be listed after --not.
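To illustrate the shape of that revision walk (the object IDs below are hypothetical placeholders, not real values):

    # What calculate_changed_submodule_paths() effectively asks for:
    # new objects on the left, existing refs excluded after --not.
    # (<new-tip-*> and <old-ref-*> are hypothetical placeholders.)
    git rev-list <new-tip-1> <new-tip-2> --not <old-ref-1> <old-ref-2>

After the fix, only refs that the fetch actually updated appear on the left side, while all pre-existing refs are still listed after --not.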

– VonC