
I would like to verify that an rpm is available from Nexus 3 after it is uploaded.

When an rpm is uploaded to Nexus 3, the following events happen (looking at the logs):

Scheduling rebuild of yum metadata to start in 60 seconds
Rebuilding yum metadata for repository rpm
...
Finished rebuilding yum metadata for repository rpm

This takes a while. In my CI pipeline I would like to check periodically until the artifact is available to be installed.

The pipeline builds the rpm, uploads it to Nexus 3, and then checks every 10 seconds whether the rpm is available. To check the availability of the rpm I run the following command:

yum clean all && yum --disablerepo="*" --enablerepo="repo-i-care-about" list --showduplicates | grep <name_of_artifact> | grep <expected_version>
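
For reference, here is a minimal sketch of the polling loop the pipeline runs. The package name, version, and the 30-attempt limit are placeholders, not the exact values from my pipeline:

#!/bin/bash
# Poll until the freshly uploaded rpm shows up in the repo, or give up.
PACKAGE=name_of_artifact
VERSION=expected_version

for attempt in $(seq 1 30); do
    # Drop local metadata before each attempt so yum has to ask Nexus again.
    yum clean all > /dev/null
    if yum --disablerepo="*" --enablerepo="repo-i-care-about" \
           list --showduplicates 2>/dev/null \
        | grep "${PACKAGE}" | grep -q "${VERSION}"; then
        echo "rpm ${PACKAGE}-${VERSION} is available"
        exit 0
    fi
    sleep 10
done
echo "Timed out waiting for ${PACKAGE}-${VERSION}" >&2
exit 1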

The /etc/yum.conf contains:

cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
distroverpkg=centos-release
http_caching=none

The /etc/yum.repos.d/repo-i-care-about.repo contains:

[repo-i-care-about]
name=Repo I care about
enabled=1
gpgcheck=0
baseurl=https://somewhere.com
metadata_expire=5
mirrorlist_expire=5
http_caching=none

The problem I'm experiencing is that the list response seems to return stale information.

The metadata rebuild takes about 70 seconds (the initial 60-second delay is configurable; I will tweak it eventually), and I'm checking every 10 seconds. The response from the yum repo sometimes looks cached somewhere: when that happens, if I perform the same search on another box with the same repo settings I get the expected artifact version.

The fact that another machine gets the expected result on its first attempt with the same list command, while the machine that checks every 10 seconds never seems to receive the expected result (even several minutes after the artifact is visible from the other box), makes me think the response is being cached.

I would like to avoid waiting 90 seconds or so before making the first list request (so that the artifact is most likely ready the very first time I run the list command and I don't cache a stale result), especially because the initial scheduling delay of the metadata rebuild might change (from 60 seconds we might move to a lower value).

The flakiness of this check improved after I added http_caching=none to yum.conf and to the repo definition, but that still hasn't made the problem go away reliably.

Are there any other caching settings I'm supposed to configure to get more reliable results from the list command? At this point I don't really care how long the list command takes, as long as it does not return stale information.

  • The `metadata_expire` is telling it 5 seconds before it can expire the cache, which should be fast enough. Does it say it is fetching every time? If not, it might be the server itself caching the page/request. Is there a proxy in between somewhere? – Aaron D. Marasco Dec 02 '18 at 22:47

1 Answer


Deleting the /var/cache/yum/* directories seems to make the check more reliable. Still, it feels like I'm missing some setting that would achieve this in a neater way.
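
For completeness, a rough sketch of the check with the cache wipe folded in (repo id, package name, and version are the same placeholders as above; this assumes the CI user is allowed to remove the cache directories):

# Wipe yum's on-disk cache so the next list call has to fetch
# fresh metadata from Nexus instead of reusing a stale copy.
rm -rf /var/cache/yum/*
yum clean all
yum --disablerepo="*" --enablerepo="repo-i-care-about" \
    list --showduplicates | grep name_of_artifact | grep expected_version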