
I have the below test_dss.py file, which is used for pytest:

import dataikuapi
import pytest


def setup_list():
    client = dataikuapi.DSSClient("{DSS_URL}", "{API_KEY}")
    client._session.verify = False
    project = client.get_project("{DSS_PROJECT}")
    # Check that there is at least one scenario TEST_XXXXX & that all test scenarios pass
    scenarios = project.list_scenarios()
    scenarios_filter = [obj for obj in scenarios if obj["name"].startswith("TEST")]
    return scenarios_filter


def test_check_scenario_exist():
    assert len(setup_list()) > 0, "You need at least one test scenario (name starts with 'TEST_')"


@pytest.mark.parametrize("scenario", setup_list())
def test_scenario_run(scenario, params):
    client = dataikuapi.DSSClient(params['host'], params['api'])
    client._session.verify = False
    project = client.get_project(params['project'])
    scenario_id = scenario["id"]
    print("Executing scenario ", scenario["name"])
    scenario_result = project.get_scenario(scenario_id).run_and_wait()
    assert scenario_result.get_details()["scenarioRun"]["result"]["outcome"] == "SUCCESS", "test " + scenario["name"] + " failed"

My issue is with the setup_list function, which only works with hard-coded values for {DSS_URL}, {API_KEY} and {DSS_PROJECT}. Is there any way to pass the params to this function as well, like in test_scenario_run?

rel.foo.fighters
  • I'm not sure I understand what you are doing, but I assume that you want to access the config values passed via the command line in the code used for parametrization. This could be done by parametrizing the test dynamically using `pytest_generate_tests`. – MrBean Bremen Jan 10 '21 at 16:30
  • I want to be able to use the params also in the function `setup_list` but it only works hard coded... – rel.foo.fighters Jan 12 '21 at 15:50
  • You cannot use these params in a function, because the function knows nothing about them. As I wrote, you can do this in `pytest_generate_tests`, if you need the function only for the parametrization. Is that correct? – MrBean Bremen Jan 12 '21 at 17:08
  • correct. can you show me example on how it should look on my code? – A. Man Jan 12 '21 at 17:41
  • Tell me if I understand correctly. I should add to `conftest.py`: `def pytest_generate_tests(metafunc): metafunc.parametrize(host,api,project)` and to `test_dss.py` this addition to the function: `def setup_list(host,api,project):` – A. Man Jan 12 '21 at 18:17
  • I edited the question to be more clear for other users which run into the same issue. @MrBeanBremen tell me if it's clear enough – rel.foo.fighters Jan 12 '21 at 20:25

1 Answer


The parameters in the mark.parametrize marker are read at load time, when the information about the config parameters is not yet available. Therefore you have to parametrize the test at runtime, where you have access to the configuration.

This can be done in pytest_generate_tests (which can live in your test module):

@pytest.hookimpl
def pytest_generate_tests(metafunc):
    if "scenario" in metafunc.fixturenames:
        host = metafunc.config.getoption('--host')
        api = metafunc.config.getoption('--api')
        project = metafunc.config.getoption('--project')
        metafunc.parametrize("scenario", setup_list(host, api, project))

This implies that your setup_list function takes these parameters:

def setup_list(host, api, project):
    client = dataikuapi.DSSClient(host, api)
    client._session.verify = False
    project = client.get_project(project)
    ...
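The elided body keeps the question's original filtering step. As a self-contained sketch of just that list comprehension, with hypothetical scenario dicts standing in for what project.list_scenarios() returns:

```python
# Hypothetical shape of what project.list_scenarios() returns: a list of
# dicts with at least "id" and "name" keys.
scenarios = [
    {"id": "sc1", "name": "TEST_ingest"},
    {"id": "sc2", "name": "nightly_build"},
    {"id": "sc3", "name": "TEST_export"},
]

# The same filter the question uses: keep only scenarios whose name
# starts with "TEST".
scenarios_filter = [obj for obj in scenarios if obj["name"].startswith("TEST")]
print([s["name"] for s in scenarios_filter])  # ['TEST_ingest', 'TEST_export']
```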

And your test just looks like this (without the parametrize marker, as the parametrization is now done in pytest_generate_tests):

def test_scenario_run(scenario, params):
    scenario_id = scenario["id"]
    ...

The parametrization is now done at run-time, so it behaves the same as if you had placed a parametrize marker in the test.
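As a side note, `metafunc.parametrize` also accepts an `ids` argument, which is useful if you want the scenario name to appear in the generated test ID instead of `scenario0`, `scenario1`, and so on. A small sketch with hypothetical scenario dicts (inside `pytest_generate_tests` you would pass `ids=ids` to `metafunc.parametrize`):

```python
# Hypothetical scenario dicts, shaped like the entries setup_list() returns.
scenarios = [
    {"id": "sc1", "name": "TEST_ingest"},
    {"id": "sc2", "name": "TEST_export"},
]

# Inside pytest_generate_tests this would become:
#   metafunc.parametrize("scenario", scenarios, ids=[s["name"] for s in scenarios])
# so each test ID reads e.g. test_scenario_run[TEST_ingest].
ids = [s["name"] for s in scenarios]
print(ids)  # ['TEST_ingest', 'TEST_export']
```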

And the other test that tests setup_list now has also to use the params fixture to get the needed arguments:

def test_check_scenario_exist(params):
    assert len(setup_list(params["host"], params["api"], params["project"])) > 0, "You need at least ..."
MrBean Bremen
  • one row above the `def test_scenario_run(scenario):` I have: `@pytest.mark.parametrize("scenario", setup_list())` . I'm using it so I can run in loop on this list which returns from the function `setup_list` with the iterator `scenario`. what I should modify there? – A. Man Jan 12 '21 at 19:18
  • As I wrote in the answer - you don't need this, as you do the parametrization at run-time. Just remove it, as shown above. Added another note. – MrBean Bremen Jan 12 '21 at 19:26
  • but then I have this line after `scenario_id = scenario["id"]` : `scenario_result = project.get_scenario(scenario_id).run_and_wait()`, and it throws error: `NameError: name 'project' is not defined` – A. Man Jan 12 '21 at 19:30
  • Ok, if you need the project, you also have to use the `params` fixture - I didn't see this in your question. Edited the answer again... It looks a bit strange that you use the same code in the test and in `setup_list`. – MrBean Bremen Jan 12 '21 at 19:33
  • Looks better (you can edit your answer with that). Now the only issue left is with the len assert test.: `def test_check_scenario_exist(): > assert len(setup_list()) > 0, "You need at least one test scenario (name starts with 'TEST_')" E TypeError: setup_list() missing 3 required positional arguments: 'host', 'api', and 'project'` – A. Man Jan 12 '21 at 19:43
  • Works like charm :) just edit the last params instead the empty value: `params["project"]` – A. Man Jan 12 '21 at 20:05
  • @MrBeanBremen regarding the success results: currently, after running with JUnit output I'm getting these results in Jenkins for all the scenarios which passed: `tests_project.run_test_CI.test_scenario_run[scenario0] (from pytest)`. Any idea how I can show `scenario["name"]` instead, or in addition? Maybe with that: https://docs.pytest.org/en/stable/usage.html#record-property – rel.foo.fighters Jan 25 '21 at 15:28
  • I'm afraid I don't understand your question. Maybe you write a separate question that makes this more clear? This will also improve the chance of getting an answer... – MrBean Bremen Jan 25 '21 at 17:52