I am learning to scrape with Selenium and Scrapy. I have a page with a list of links. I want to click the first link, visit that page, crawl its items, come back to the main page (the previous page with the list of links), then click the second link, crawl it, and repeat the process until all the desired links are done. All I could do was click the first link; after that my crawler stops. What can I do to crawl the second link and the remaining ones?
My spider looks like this:
    class test(InitSpider):
        name = "test"
        start_urls = ["http://www.somepage.com"]

        def __init__(self):
            InitSpider.__init__(self)
            self.browser = webdriver.Firefox()

        def parse(self, response):
            self.browser.get(response.url)
            time.sleep(2)
            sel = Selector(text=self.browser.page_source)
            links = self.browser.find_elements_by_xpath('//ol[@class="listing"]/li/h4/a')
            for link in links:
                link.click()
                time.sleep(10)
                # do some crawling, then go back and repeat the process
                self.browser.back()
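For reference, here is a minimal, self-contained sketch of the pattern I think I need: collect all the target URLs up front, then `browser.get()` each one directly instead of clicking and calling `back()` (clicking away seems to invalidate the element references from the first `find_elements` call). The `FakeBrowser` class, `crawl_links` function, and the item URLs are placeholders of mine, and the browser is faked so the sketch runs without Firefox; with a real driver you would build `hrefs` from `link.get_attribute("href")` on each found element.

```python
class FakeBrowser:
    """Stands in for webdriver.Firefox(); records every URL visited."""
    def __init__(self):
        self.visited = []

    def get(self, url):
        # A real driver would load the page here.
        self.visited.append(url)


def crawl_links(browser, hrefs):
    # Instead of clicking elements and navigating back (which leaves the
    # previously found elements stale), grab all target URLs first and
    # navigate to each one directly.
    crawled = []
    for href in hrefs:
        browser.get(href)      # visit the detail page
        crawled.append(href)   # ...parse items from browser.page_source here
    return crawled


browser = FakeBrowser()
hrefs = [
    "http://www.somepage.com/item/1",  # placeholder URLs
    "http://www.somepage.com/item/2",
]
result = crawl_links(browser, hrefs)
print(result)
```

This way there is no dependence on the state of the listing page between iterations, so every link gets visited, not just the first.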