I got a hint to use optional requirements and a conditional import to provide a function that works with or without pandas, depending on whether it is available.
See here for reference:
https://stackoverflow.com/a/74862141/10576322
This solution works, but when I test this code I always get bad coverage, since pandas is either installed or it isn't. So even if I configure hatch to create environments for both cases, the tests don't sufficiently cover the if/else function definition.
Is there a proper way around this, e.g. combining the two results? Or can I tell coverage that the result is expected for that block of code?
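For context, by "telling coverage" I mean something like coverage's exclude_lines setting. A sketch of what that could look like (though statically excluding one branch in every environment is exactly what I'd rather avoid):

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if PANDAS_INSTALLED:",  # would hide the pandas branch everywhere
]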
Code
The module looks like this:
try:
    import pandas as pd
    PANDAS_INSTALLED = True
except ImportError:
    PANDAS_INSTALLED = False

if PANDAS_INSTALLED:
    def your_function(...):
        # magic with pandas
        return output
else:
    def your_function(...):
        # magic without pandas
        return output
The idea is that the two versions of the function behave exactly the same apart from their inner workings. So everybody, no matter the environment, can use my_module.my_function and nobody has to write code that depends on whether pandas is available.
The same is true for testing: I can write tests against my_module.my_function, and if the venv has pandas they exercise one implementation, and if not they exercise the other.
from mypackage import my_module

def test_my_function():
    res = 'foo'
    assert my_module.my_function() == res
That is working fine, but coverage evaluation is complicated.
Paths to solution
So far I am aware of two solutions.
1. mocking the behavior
@TYZ suggested always having pandas as a test dependency and mocking the global variable.
I tried that, but it didn't work as I expected. The reason is that I can of course mock the PANDAS_INSTALLED variable, but the function definition already took place at import time and is no longer affected by the variable.
I also tried to mock the import itself from another test module, but didn't succeed.
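The direction I was experimenting with looked roughly like this (a sketch; an entry of None in sys.modules makes a subsequent import pandas raise ImportError, and reloading re-runs the module-level if/else):

import importlib
import sys
from unittest import mock

from mypackage import my_module


def test_my_function_without_pandas():
    # hide pandas, then re-execute the module's try/except and if/else
    with mock.patch.dict(sys.modules, {"pandas": None}):
        importlib.reload(my_module)
        assert my_module.my_function() == 'foo'
    importlib.reload(my_module)  # restore the real state for other tests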
2. defining venvs with and without pandas and combining results
I found that coverage and pytest-cov have the ability to append test results across environments or to combine separate result files.
As a first test I changed the pytest-cov script in hatch to include --cov-append. That worked, but it's completely global: if I get complete coverage on Python 3.8, but for whatever reason the switch doesn't work on Python 3.9, I wouldn't see it.
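For illustration, the relevant part of my hatch config looks roughly like this (the feature variable, the cov script name and mypackage are placeholders):

[tool.hatch.envs.test.scripts]
cov = "pytest --cov=mypackage --cov-append"

[[tool.hatch.envs.test.matrix]]
python = ["3.8", "3.9"]
feature = ["core", "pandas"]

[tool.hatch.envs.test.overrides]
matrix.feature.dependencies = [
  { value = "pandas", if = ["pandas"] },
]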
What I would like to do instead is combine the results by some logic derived from hatch's test.matrix, like coverage combine py38.core py38.pandas, and the same for 3.9. Then I would see whether I get the same coverage for every tested version.
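Roughly what I have in mind (a sketch; the per-environment data files are named via the standard COVERAGE_FILE variable, and the env names are what I'd expect hatch to generate from the matrix above):

COVERAGE_FILE=.coverage.py38.core   hatch run test.py3.8-core:cov
COVERAGE_FILE=.coverage.py38.pandas hatch run test.py3.8-pandas:cov

coverage combine --keep .coverage.py38.core .coverage.py38.pandas
coverage report --fail-under=100    # same again for the 3.9 environments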
I guess there are probably ways to do this with tox, but maybe I don't need to pull in another tool.