
I'm developing a Python package my_package that depends upon some other package foo_package.

There are a few versions of foo_package in use in the community, and unfortunately they're not backwards compatible with each other (because of changes to the C interface that my code is compiled against).

So I'd like to distribute multiple copies of my_package, corresponding to the different versions of foo_package.

I can distinguish these different copies of my_package from each other using post-release tags. For example I can give my_package the version 1.1.4-foo_package1.2, corresponding to version 1.1.4 of my_package compiled against version 1.2 of foo_package.

So far so good. The caveat now is that when it comes to installing this with pip, end users have to specify this full version string to be able to get the correct version of my_package. That is, they have to know that the latest release of my_package is 1.1.4 and that the release of foo_package that they're using is 1.2 and thus use the command pip install my_package==1.1.4-foo_package1.2.

Obviously this isn't ideal for all sorts of reasons (end-user friendliness, avoiding dependency hell...). After all, all of this can be determined programmatically!

Is there any sensible way to handle this issue, so that an end user can just run pip install my_package and have the correct copy downloaded automatically?

There is one unsatisfactory answer to a similar question here.

FWIW the best solution I've come up with so far is to create another package, my_package_installer, which, as part of its setup.py, checks which version of foo_package is installed and then specifies the relevant version of my_package in the install_requires argument to setuptools.setup. But that's completely asinine and seems quite fragile. I can't be the only one with this issue.
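To make concrete what I mean, here's a rough sketch of that installer-shim setup.py. It assumes foo_package exposes a __version__ attribute at runtime (that's an assumption about foo_package, not something guaranteed):

```python
# Sketch of the "installer shim" setup.py described above: detect the
# installed foo_package and pin the matching build of my_package.
import setuptools

def matching_requirement(my_version, foo_version):
    # e.g. ('1.1.4', '1.2') -> 'my_package==1.1.4-foo_package1.2'
    return 'my_package=={}-foo_package{}'.format(my_version, foo_version)

def run_setup():
    import foo_package  # must already be installed for this to work
    setuptools.setup(
        name='my_package_installer',
        version='1.1.4',
        install_requires=[matching_requirement('1.1.4', foo_package.__version__)],
    )

# In the real setup.py you would call run_setup() unconditionally.
```

The fragility I mentioned is visible here: this only works if foo_package is installed before my_package_installer, and pip gives no guarantee about that ordering.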

2 Answers


From a design point of view, it seems to me that you should be specifying one version of foo as a dependency, because that's what works best with your package. If other people have different versions of foo, that's their responsibility.

If it's not so simple and you have a constraint for compatibility with multiple versions of foo, then what I think most developers do is try to detect foo at initialization and adapt to it from there.
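A minimal sketch of that detect-and-adapt pattern, for pure-Python code paths (the module name foo and its __version__ attribute are assumptions about the library):

```python
# Sketch: inspect foo's version once at initialization and pick an
# API code path from it. foo.__version__ is an assumed attribute.
def parse_major_minor(version):
    # '1.4.2' -> (1, 4)
    return tuple(int(part) for part in version.split('.')[:2])

def select_api(version):
    # Route to the newer code path for foo >= 1.4, else the old one.
    return 'new-api' if parse_major_minor(version) >= (1, 4) else 'old-api'

def init_foo_compat():
    import foo  # detection happens when your package is first imported
    return select_api(getattr(foo, '__version__', '0.0'))
```

As the comment below points out, this only helps when the difference can be handled at the Python level, not when the extension is already compiled against one C interface.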

nightsh4de
  • `my_package` is really just a plugin for `foo_package`. I don't want to constrain people to use just the version I happen to have compiled against. (And the fact that it's a compilation-time problem means there's no adaptation that can be done during initialisation - unless the end user has a compiler available then it's too late.) – latexisdifficult Oct 22 '19 at 00:17

What about creating multiple Python projects from the same code base?

Let's say you have a source code repository MyProject.git, and you want to distribute it for libfoo1.2 and libfoo1.4. Then maybe a setup.py that looks like this would do the trick:

#!/usr/bin/env python3

import setuptools

def get_foo_version():
    # Placeholder: replace with real detection of the target libfoo,
    # e.g. from an environment variable or a pkg-config query.
    return '1.2' if True else '1.4'

foo_version = get_foo_version()

foo_module = setuptools.Extension(
    'foo',
    define_macros=[('FOO_VERSION', foo_version)],
    libraries=['foo{}'.format(foo_version)],
    sources=['foo.c'],
    # ...
)

setuptools.setup(
    name='MyProjectForFoo{}'.format(foo_version),
    ext_modules=[foo_module],
    # ...
)

In this scenario you would end up with two Python projects MyProjectForFoo1.2 and MyProjectForFoo1.4. The users of your project would still have to pick the right project, but it would be less prone to confusion than sorting by version numbers.

You could probably use tox and/or a good CI/CD system to help you automate the creation and publication of the distributions (wheels) for the two (or more) projects.
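For instance, a tox configuration along these lines could drive both builds (this is a hypothetical fragment; it assumes get_foo_version() in setup.py is changed to read a FOO_VERSION environment variable, which the answer's placeholder does not do as written):

```ini
# Hypothetical tox.ini: one environment per supported libfoo version.
[tox]
envlist = foo12,foo14
skipsdist = true

[testenv:foo12]
setenv =
    FOO_VERSION = 1.2
commands = python setup.py bdist_wheel

[testenv:foo14]
setenv =
    FOO_VERSION = 1.4
commands = python setup.py bdist_wheel
```

Running tox would then produce one wheel per project name, which your CI/CD system could publish separately.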

sinoroc