I have a Python build pipeline set up, with a JFrog Artifactory Python repository serving our internal packages.
The pipeline uses this repository to resolve internal dependencies and upload build artefacts.
This is currently done with python setup.py bdist_wheel sdist upload -r local. This command requires the credentials to be on disk twice: once for resolving local dependencies via ~/.pydistutils.cfg:
[easy_install]
index_url=https://username:password@artifactory/api/repo/path/simple
...and again in ~/.pypirc for uploading the build package:
[distutils]
index-servers = local
[local]
repository: https://artifactory/api/repo/path
username: build_agent
password: ****
For security reasons I would like the username and password to never touch the disk. We already have a secure way of injecting secrets into the build environment from (Hashicorp) Vault, and I'd like to leverage this for the Artifactory credentials.
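For illustration, assembling a credential-bearing index URL from injected environment variables is the easy part; the hard part is getting setuptools to consume it. In this sketch, ARTIFACTORY_USER and ARTIFACTORY_PASS are hypothetical variable names (with dummy defaults so it runs outside the pipeline), standing in for whatever Vault injects:

```python
import os
from urllib.parse import quote

# Hypothetical env var names; in the real pipeline Vault injects these.
# The defaults are dummies so this sketch runs standalone.
user = quote(os.environ.get("ARTIFACTORY_USER", "build_agent"), safe="")
password = quote(os.environ.get("ARTIFACTORY_PASS", "dummy-password"), safe="")

# Equivalent to the index_url line in ~/.pydistutils.cfg, but never written to disk
index_url = f"https://{user}:{password}@artifactory/api/repo/path/simple"
```

The quote() calls matter because a password containing characters like @ or : would otherwise corrupt the URL.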
However, while I can set PIP_INDEX_URL for installing packages with pip, there is no equivalent for setuptools that I can see. Without these files defined, the command python setup.py sdist results in a stack trace ending with the following error:
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('local-package==0.0.3')
Clearly, it is attempting to resolve a local dependency on the public PyPI, which is undesirable for a number of reasons (the local repository proxies PyPI, so we should never read the public index directly, to prevent dependency hijacking).
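For reference, this is the sort of thing that works for pip but, as far as I can tell, has no setuptools analogue. ARTIFACTORY_USER and ARTIFACTORY_PASS here are placeholder names for the Vault-injected secrets:

```shell
# pip honors PIP_INDEX_URL and resolves through the authenticated index...
export PIP_INDEX_URL="https://${ARTIFACTORY_USER}:${ARTIFACTORY_PASS}@artifactory/api/repo/path/simple"
pip install local-package==0.0.3

# ...but setuptools/easy_install ignores it, so this still fails without the config files:
python setup.py sdist
```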
Essentially, then, my question is: how can I build and upload a Python package that has dependencies in a local repository to that same repository, using credentials held in environment variables rather than in ~/.pypirc or ~/.pydistutils.cfg?
Additional background
The original implementation baked .pypirc and .pydistutils.cfg in the build agent AMI, which is built with Packer.
The problem with this is that the username and password for the repository are readable in plain text by anyone with access to the AWS account and the ability to download the image or launch a new instance from it.
"That's easy, just add a build step which sets up .pypirc before starting the build," you say. I would like to avoid this approach, because it is brittle: it creates a global run-time build dependency. In future, someone could set up a job that assumes the config files have already been created, without creating them itself, and it will almost always work because another job has run first. Nor can I remove the files after each build, because concurrent builds may require them to be present.
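For completeness, the kind of build-step workaround I'm trying to avoid would look roughly like the sketch below (again, ARTIFACTORY_USER and ARTIFACTORY_PASS are placeholders for the Vault-injected values). Giving each build a throwaway HOME sidesteps the concurrency problem, but the secrets still touch disk for the duration of the build:

```shell
# Per-build HOME so config files are ephemeral and concurrent builds don't collide
BUILD_HOME="$(mktemp -d)"
trap 'rm -rf "$BUILD_HOME"' EXIT

cat > "$BUILD_HOME/.pydistutils.cfg" <<EOF
[easy_install]
index_url=https://$ARTIFACTORY_USER:$ARTIFACTORY_PASS@artifactory/api/repo/path/simple
EOF

cat > "$BUILD_HOME/.pypirc" <<EOF
[distutils]
index-servers = local
[local]
repository: https://artifactory/api/repo/path
username: $ARTIFACTORY_USER
password: $ARTIFACTORY_PASS
EOF

HOME="$BUILD_HOME" python setup.py bdist_wheel sdist upload -r local
```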