Using the Selenium Python bindings to follow a javascript: link


I'm trying to use Scrapy to parse a relatively simple set of webpages. The main page has a bunch of links that look like:

<a name='LINK1$17' id='LINK1$17' tabindex='145' href="javascript:hAction_win0(document.win0,'LINK1$17', 0, 0, 'International Relations', false, true);"  class='SSSAZLINK'>International Relations</a>

Clicking that link loads the second page, where some of the details I'm scraping appear. I do need to start on the first page because it serves as an index of all the items I'm scraping. How do I use Selenium to run that JavaScript action? I've tried:

from selenium import webdriver

driver = webdriver.Firefox()
driver.execute_script("javascript:hAction_win0(document.win0,'LINK1$17', 0, 0, 'International Relations', false, true);")

That did not work. Is there an easy way to "click" the link and get what appears?


It turns out I was using the right function. The following call works:

driver.execute_script("hAction_win0(document.win0,'LINK1$17', 0, 0, 'International Relations', false, true);")

I just had to remove the "javascript:" prefix, since execute_script() expects raw JavaScript, not a javascript: URL.
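If you are pulling these hrefs out of the page rather than typing them by hand, you can strip the javascript: scheme programmatically before passing the result to execute_script(). A minimal sketch; href_to_script is a hypothetical helper, not part of Selenium:

```python
def href_to_script(href):
    """Strip a leading 'javascript:' scheme from a link's href so the
    remainder can be passed to driver.execute_script()."""
    prefix = "javascript:"
    if href.startswith(prefix):
        return href[len(prefix):]
    return href

# Usage with a live driver (requires Firefox and geckodriver installed):
#
#   from selenium import webdriver
#   driver = webdriver.Firefox()
#   href = "javascript:hAction_win0(document.win0,'LINK1$17', 0, 0, 'International Relations', false, true);"
#   driver.execute_script(href_to_script(href))
#
# Alternatively, since the anchor has an id, you could skip the script
# entirely and click the element itself:
#
#   driver.find_element("id", "LINK1$17").click()
```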
