
I am building a set of automated regression tests in Ruby using RSpec and Capybara. To give you an idea of a test, imagine logging in to a website, adding a new data item with all of its fields, saving it, validating the new row, then updating the row, changing fields, and validating that update as well.

For example:

describe "auto regression test #1", :type => :feature, js: true do
  it  "should add and update my data" do
    # login
    # go to page
    # press new button
    # fill in fields
    # etc.
  end
end

This is a simplified version, and there may be many things going on within the "it". At first I was thinking that I should separate the single test into multiple cases, but then I would have to log in and get back to the page each time (which I assume is extra time I don't need to waste in my automated tests - agree?).

Nonetheless, I'd like to log what I am doing so that it shows up in BrowserStack's Automate logging tab. Currently what's in there relates only to Selenium operations or screenshots, and I would like to add some custom logging. The reason is that when my test fails I currently get a stack trace with a line number (which is great) along with the name of the failing test. Since each test covers a lot of functionality (because I don't want RSpec to log in over and over again), when it fails and someone looks at BrowserStack to see where it went wrong, it is difficult to know where the logic failed without some additional custom logging. How can I put in custom logging so I can see the text in BrowserStack? (Or do I have this all wrong, and should I really separate my tests into small pieces despite the repeated logging in?)

Arthur Frankel

3 Answers


The bits of advice I can give you are:

1. Each test should be a single use-case scenario. If it fails, you know why.

2. If you need to perform many steps to achieve a use-case scenario, then you should abstract your elements into classes that represent a page (https://code.google.com/p/selenium/wiki/PageObjects), and maybe go even further and add flows that abstract multiple page actions. Then, if a test fails in one of the steps before your validation, you will know which element/page it failed on and have an idea of what went wrong (see the sketch after this list).

3. If you still have trouble understanding what went wrong in a test even with BrowserStack logging and screenshots, then your problem is not a lack of logging but rather the way your tests are written.
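
For example, here is a rough sketch of the page-object idea with Capybara. The class names, URL, field labels, and selectors are made up for illustration; they are not taken from the question's application:

require "capybara/dsl"

# Hypothetical page objects: each class wraps the elements and actions of one page.
class LoginPage
  include Capybara::DSL

  def visit_page
    visit "/login"
    self
  end

  def login_as(username, password)
    fill_in "Username", with: username
    fill_in "Password", with: password
    click_button "Log in"
  end
end

class ItemsPage
  include Capybara::DSL

  def add_item(name:, description:)
    click_button "New"
    fill_in "Name", with: name
    fill_in "Description", with: description
    click_button "Save"
  end

  def has_row?(name)
    has_css?("tr", text: name)
  end
end

# In the spec, a failure now points at a named page action rather than a
# long run of low-level steps.
describe "items", type: :feature, js: true do
  it "adds a new item" do
    LoginPage.new.visit_page.login_as("user", "secret")
    items = ItemsPage.new
    items.add_item(name: "Widget", description: "A test widget")
    expect(items).to have_row("Widget")
  end
end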

Pippo

First, I agree that logging in at the beginning of each and every scenario/example wastes time and is definitely redundant. See this question for keeping a cookie session between tests, and also read through the pros/cons of that approach.

Your tests need to be coded properly. This means that each line of code should align with a manually executed step that a user would perform. This should allow you to easily trace and reproduce any failures that happen.

Tests can fail absolutely anywhere, and building custom messaging/exception handling into every place you can think of is too much overhead. It is more useful to write reliable, deterministic tests first and then narrow down what should be a small number of failures later on.

What you can possibly do is wrap each scenario/example in an exception block, and take a screenshot should a failure take place. That would be much less overhead than adding custom error messages throughout your suite.
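
For instance, a minimal sketch of that idea as an RSpec hook. It assumes Capybara feature specs with a JavaScript-capable driver; the tmp/screenshots path and file-naming scheme are only illustrative:

require "fileutils"

RSpec.configure do |config|
  config.after(:each, type: :feature) do |example|
    # only capture a screenshot when the example failed
    next unless example.exception
    FileUtils.mkdir_p("tmp/screenshots")
    filename = example.full_description.gsub(/\s+/, "_") + ".png"
    # `page` is the Capybara session available inside feature specs
    page.save_screenshot("tmp/screenshots/#{filename}")
  end
end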

Phil

You can try creating your own custom message in the logs generated on BrowserStack's Automate dashboard by executing the following JavaScript from your Selenium tests:

(For example, in Ruby:)

driver.execute_script("\" <Write your custom log here> \";")
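
If you drive the browser through Capybara rather than a raw driver object, the same trick can be wrapped in a small helper. The module and method names below are only an illustration:

module BrowserstackLogging
  # Executes a bare JavaScript string literal, which (per the answer above)
  # shows up as an entry in the BrowserStack Automate session log.
  def log_to_browserstack(message)
    # assumes the message itself contains no double quotes
    Capybara.current_session.execute_script("\"#{message}\";")
  end
end

RSpec.configure do |config|
  config.include BrowserstackLogging, type: :feature
end

# usage inside a feature spec:
#   log_to_browserstack("Filling in the new item form")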

Umang Sardesai