I use Python Selenium for scraping a website, but my crawler stopped because of an exception:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
How can I continue to crawl even if the element is not attached?
UPDATE
I changed my code to:
try:
    libelle1 = prod.find_element_by_css_selector('.em11')
    libelle1produit = libelle1.text  # libelle1 OK
    libelle1produit = libelle1produit.decode('utf-8', 'strict')
except StaleElementReferenceException:
    pass
but now I get this exception:
NoSuchElementException: Message: no such element
I also tried this one:
try:
    libelle1 = prod.find_element_by_css_selector('.em11')
    libelle1produit = libelle1.text  # libelle1 OK
    libelle1produit = libelle1produit.decode('utf-8', 'strict')
except:
    pass
Put a try-except block around the piece of code that produced that error.
To be more specific about what John Gordon is suggesting: catch StaleElementReferenceException, a common Selenium exception, and ignore it:
from selenium.common.exceptions import StaleElementReferenceException

try:
    element.click()
except StaleElementReferenceException:  # ignore this error
    pass  # TODO: consider logging the exception
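Regarding the follow-up NoSuchElementException in the update: that error means the '.em11' element is genuinely absent for some products, not just stale, so it needs to be caught as well if the crawl should continue. Below is a minimal sketch of a crawl loop that skips such products and keeps going; the driver setup, the URL, and the '.prod' container selector are placeholders for illustration, and the find_element_by_css_selector calls follow the Selenium 3-style API used in the question.

from selenium import webdriver
from selenium.common.exceptions import (
    NoSuchElementException,
    StaleElementReferenceException,
)

# Placeholder driver and URL -- substitute your own setup.
driver = webdriver.Firefox()
driver.get('http://example.com')

labels = []
# '.prod' is an assumed selector for the product containers.
for prod in driver.find_elements_by_css_selector('.prod'):
    try:
        libelle1 = prod.find_element_by_css_selector('.em11')
        labels.append(libelle1.text)
    except (StaleElementReferenceException, NoSuchElementException):
        # The element went stale or is simply missing for this product:
        # skip it and move on to the next one.
        continue

Silently swallowing exceptions makes debugging harder, so logging which products were skipped (as the TODO above suggests) is usually worth keeping.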