
Any suggestions on how to get around stale element exceptions when it seems one shouldn't be raised? Fair enough, with the many JavaScript libraries in use, an update may remove the corresponding DOM nodes, and a later FindElement against a disconnected node can legitimately produce a stale element.

The problem is that on the pages I'm testing, I know the content is static once displayed. The same code works 100% of the time with Firefox, but both the Chrome and Edge WebDrivers (closely related, if not the same?) are throwing unexpected StaleElementReferenceExceptions.

My code gets a node that is the parent of a subtree holding the desired information. The parent node is found with Driver.FindElement(), and subsequent XPath queries are made relative to that parent node.

For example given this DOM tree:

    <node id='Closest node By ID'>
        <span>
            <div>text i want</div>
            <div>ignore this</div>
            <div>text to get</div>
        </span>
        <span>
        <!-- same pattern ... --> 
        </span>
    </node>
    var parentNode = WebDriver.FindElement(By.Id("ID"));
    var txt1 = parentNode.FindElement(By.XPath("./span/div[1]"));
    var txt2 = parentNode.FindElement(By.XPath("./span/div[3]"));

The problem is that at some point, performing these XPath queries results in a StaleElementReferenceException.
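One workaround I've seen suggested (not my actual code, just a sketch of the pattern): catch the exception and re-find the parent before retrying the relative query, so each attempt works from a fresh element reference. The helper name, attempt count, and back-off delay below are all hypothetical.

    using OpenQA.Selenium;

    // Sketch only: re-locate the parent and retry the relative XPath query
    // whenever the driver reports the cached element reference as stale.
    static IWebElement FindRelativeWithRetry(
        IWebDriver driver, By parentBy, By relativeBy, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var parent = driver.FindElement(parentBy);   // fresh reference each attempt
                return parent.FindElement(relativeBy);
            }
            catch (StaleElementReferenceException) when (attempt < maxAttempts)
            {
                System.Threading.Thread.Sleep(250);          // brief back-off before retrying
            }
        }
    }

Usage would then be something like `var txt1 = FindRelativeWithRetry(Driver, By.Id("ID"), By.XPath("./span/div[1]"));`.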

I have a working repro I'd be glad to share if someone could help look into this; digging into the WebDriver itself may be beyond me. Ping me and I will gladly provide a complete repro in C#, plus whatever other details may be needed.

Ah, and for vitals: the Chrome and WebDriver versions are matched up.

Google Chrome is up to date. When I checked it was 92; I updated to 93, re-ran, and saw the same behavior. Version 93.0.4577.63 (Official Build) (64-bit)

C#, .NET Framework 4.8, Windows 10 "Version 21H1 (OS Build 19043.1202)"

Thanks in advance.

Dano
  • *"The problem is that at some point, performing these XPath queries results in a StaleElementReferenceException."* > are you saying that StaleElement is thrown by `findElement` - not by accessing the element after you have found it? – Alexey R. Sep 03 '21 at 06:12
  • This needs an explicit wait to get rid of the staleness of elements – cruisepandey Sep 03 '21 at 06:18
  • No problem until a "Stale Element" is encountered. Same exact code, just changing the WebDriver from Firefox -> Chrome || Edge. 1000's of pages to process. I simplified the HTML a bit; the "#ID" node may have 200+ sub-entries, and there are 1000's of pages. Firefox: no problem, 100% of the time. Chrome normally fails a few pages in. It seems to error only after a FindElement().FindElement(). Driver.FindElement() is no problem (understandable, as the DOM would never know about unconnected nodes). The Find that fails is random, not first/last. Why random? A garbage collection error, or use of a deleted node that was returned? – Dano Sep 03 '21 at 06:57
  • @cruisepandey Not sure why a wait would be necessary. And if so, my hack to try fixing it was to sleep and retry, which never worked. var x = Driver.Find(); // works -- off driver. var y = x.Find(); // failing from element. In any API I've ever used I would expect 'y' to be valid and good to go... If what you're saying is true, not saying you're wrong, I would expect the following: var y = await x.FindElementAsync(); so one explicitly knows FindElement is only a promise and must be awaited. Otherwise the node returned should be valid to use without an exception? – Dano Sep 03 '21 at 07:09
  • Was thinking a little more about this. I was wondering, and might assert, that simply reading a node shouldn't be causing any change. My gut feeling is something like a "visited" property akin to an href on a web page, possibly a reference count, but in the DOM node itself? Something seems to be changing a property in the queried nodes, causing the internal DOM to remove/replace the stale node because of that change, or some other bug is causing unnecessary DOM updates...? I would assert any read/query should be idempotent... – Dano Sep 03 '21 at 16:58
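For reference, the explicit-wait approach suggested in the comments would look roughly like this (a sketch only; the timeout and locators are placeholders). The key point is that the lambda re-queries from the driver on every poll, so no cached element reference is reused:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
    // Keep polling through transient staleness instead of failing immediately.
    wait.IgnoreExceptionTypes(typeof(StaleElementReferenceException));
    var txt1 = wait.Until(d =>
        d.FindElement(By.Id("ID")).FindElement(By.XPath("./span/div[1]")));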

0 Answers