
We are using Quality Center to manage our test cases and manual test execution. QuickTest Professional isn't quite lining up with our needs, so we have begun implementing WebDriver + Java + TestNG + Grid 2. TestNG approaches things with itself at the center: it organizes the tests into suites, allows parameters for data-driven testing, and produces reports, logs and potentially screenshots.
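For illustration, a minimal test class of the kind we are writing might look like the sketch below; the hub URL is a placeholder, and the `browser` parameter would be supplied from the suite definition:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class LoginTest {

    private WebDriver driver;

    // "browser" comes from a <parameter> element in the suite,
    // so the same test can be data-driven across browsers.
    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browser);
        // Placeholder Grid 2 hub address; adjust to your environment.
        driver = new RemoteWebDriver(
                new URL("http://hub.example.com:4444/wd/hub"), caps);
    }

    @Test
    public void titleIsShown() {
        driver.get("http://www.example.com");
        Assert.assertFalse(driver.getTitle().isEmpty());
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```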

Quality Center likewise assumes that it is the center of the test "universe": that it is used to trigger all tests, both automated and manual, and that it takes care of processing and storing results.

My question is, how can we use Quality Center to:

a) Act as a central repository for requirements and test cases
b) Act as a central repository for test execution results

while utilizing TestNG + Selenium + Java + Grid 2 to:

a) Test on different platforms and browsers
b) Utilize parallel test execution
c) Utilize distributed test execution in the cloud (a sketch of such a setup follows this list)
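For example, a suite could be assembled programmatically to run the hypothetical `LoginTest` class from the earlier sketch in parallel against different browsers through the Grid hub (which could just as well be hosted in the cloud); the thread count and browser list below are illustrative, and the same can be expressed in a testng.xml file:

```java
import java.util.Arrays;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ParallelRunner {

    public static void main(String[] args) {
        XmlSuite suite = new XmlSuite();
        suite.setName("CrossBrowserSuite");
        // Run the <test> blocks concurrently; the Grid hub then farms
        // the browser sessions out to its registered nodes. Older TestNG
        // versions use suite.setParallel("tests") instead of the enum.
        suite.setParallel(XmlSuite.ParallelMode.TESTS);
        suite.setThreadCount(2);

        for (String browser : Arrays.asList("firefox", "chrome")) {
            XmlTest test = new XmlTest(suite);
            test.setName(browser + "-test");
            test.addParameter("browser", browser); // consumed by @Parameters("browser")
            test.setXmlClasses(Arrays.asList(new XmlClass(LoginTest.class)));
        }

        TestNG testng = new TestNG();
        testng.setXmlSuites(Arrays.asList(suite));
        testng.run();
    }
}
```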

Ed Wolb

1 Answer


I think there are two possible ways: either use Quality Center as the master for the execution of the tests and somehow write the results back to it, or use TestNG (or whatever tool you use) to trigger the tests and have some scripts that mirror your tests in Quality Center and also import the test results into it. I think the former is needed to support your requirement a); the latter should be enough for b).

For starting tests from Quality Center, you can simply use a VAPI-XP test to start a script which executes your tests. In the VAPI-XP script you then have to parse the result and set the status of the test run accordingly. There may also be a better way by defining a custom test type in Quality Center, but I don't have any experience with that.
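One way to make the result easy to parse from the VAPI-XP side is a small Java launcher that exits non-zero on failure, so the VAPI-XP script only has to inspect the process exit code; this is just a sketch, and the suite file name is a placeholder:

```java
import java.util.Arrays;

import org.testng.TestListenerAdapter;
import org.testng.TestNG;

public class QcLauncher {

    public static void main(String[] args) {
        TestListenerAdapter results = new TestListenerAdapter();

        TestNG testng = new TestNG();
        testng.setTestSuites(Arrays.asList("testng.xml")); // suite file shipped with the tests
        testng.addListener(results);
        testng.run();

        // Exit code 0 = passed, 1 = failed; the VAPI-XP script can read
        // this and set the QC run status to Passed or Failed accordingly.
        System.exit(results.getFailedTests().isEmpty() ? 0 : 1);
    }
}
```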

Normally I prefer to manage automated tests outside of Quality Center and also to start them outside of it. I think that is the better way: it lets you use whatever you want for triggering the tests (e.g. Jenkins) and minimizes your dependencies on Quality Center. The hard part is mirroring your tests in Quality Center. Normally I use an OTA script to import the test cases into Quality Center, set up the coverage of the requirements and create the test sets in the Test Lab. After every execution of the test cases, I automatically add runs to the tests in the Test Lab and attach whatever is necessary to satisfy the test manager's need for traceability (using an OTA script which runs after test execution). Instead of using OTA it is also possible to use Quality Center's REST API to do the imports.
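As a rough sketch of the REST variant: you authenticate once to get a session cookie and then POST run entities. The server, domain, project, credentials and the test-instance id below are placeholders, the cookie handling is simplified, and field names such as `testcycl-id` vary between QC/ALM versions, so check your installation's REST documentation:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class QcRestImport {

    private static final String BASE = "http://qc.example.com:8080/qcbin";

    public static void main(String[] args) throws Exception {
        // Step 1: authenticate; the server answers with a session cookie.
        // Simplified: assumes a single Set-Cookie header in the response.
        HttpURLConnection auth = (HttpURLConnection)
                new URL(BASE + "/authentication-point/authenticate").openConnection();
        String credentials = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
        auth.setRequestProperty("Authorization", "Basic " + credentials);
        String cookie = auth.getHeaderField("Set-Cookie");

        // Step 2: post a run entity for an existing test instance in the Test Lab.
        String runXml =
                "<Entity Type=\"run\"><Fields>"
              + "<Field Name=\"name\"><Value>Automated run</Value></Field>"
              + "<Field Name=\"testcycl-id\"><Value>1001</Value></Field>" // test instance id
              + "<Field Name=\"status\"><Value>Passed</Value></Field>"
              + "</Fields></Entity>";

        HttpURLConnection post = (HttpURLConnection)
                new URL(BASE + "/rest/domains/DEFAULT/projects/MyProject/runs").openConnection();
        post.setRequestMethod("POST");
        post.setRequestProperty("Cookie", cookie);
        post.setRequestProperty("Content-Type", "application/xml");
        post.setDoOutput(true);
        try (OutputStream out = post.getOutputStream()) {
            out.write(runXml.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + post.getResponseCode());
    }
}
```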

As a rule of thumb I try to do as little as possible in Quality Center and only import whatever is needed to satisfy the needs of a project and its test manager...

Roland