I have about 200 domains that I need to crawl, but I am certain that none of the information valuable to me is contained in their subdomains, so I would like to exclude the subdomains from crawling.
For the domain example.com I could use a deny rule such as
(www.)*\w+\.example
but that approach would force me to write a separate deny rule for each of the 200 domains. My question is whether it is possible to create a single deny rule for the subdomains of every domain.
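For completeness, those per-domain rules would not have to be written by hand; here is a rough sketch, assuming the domains are available in a Python list (the list below is only a placeholder for the real 200):

import re

# Placeholder for the real list of ~200 domains.
domains = ['example.com', 'eb-zuerich.ch']

# One deny pattern per domain: reject any URL whose host has at least one
# extra label in front of the registered domain, i.e. a subdomain.
per_domain_deny = [r'^https?://([^./]+\.)+{}(/|$)'.format(re.escape(d))
                   for d in domains]

print(any(re.search(p, 'https://blog.example.com/post') for p in per_domain_deny))  # True
print(any(re.search(p, 'https://example.com/post') for p in per_domain_deny))       # False

Still, a single generic rule would be much cleaner than carrying 200 generated patterns around.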
Snippet from the spider:
from bs4 import BeautifulSoup as bs
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
# Extractor is a project-specific helper class; its import is omitted in this snippet.

class Spider(CrawlSpider):
    name = "courses"
    start_urls = [
        'https://www.eb-zuerich.ch',
    ]
    allowed_domains = ['eb-zuerich.ch']
    rules = [
        Rule(LinkExtractor(allow=(),
                           deny=(r'.+[sS]itemap', r'.+[uU]eber', r'.+[kK]ontakt', r'.+[iI]mpressum',
                                 r'.+[lL]ogin', r'.+[dD]ownload[s]?', r'.+[dD]isclaimer',
                                 r'.+[nN]ews', r'.+[tT]erm', r'.+[aA]nmeldung.+',
                                 r'.+[Aa][Gg][Bb]', r'/en/*', r'\.pdf$')),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # get soup of the current page
        soup = bs(response.body, 'html.parser')
        page_soup = bs(response.body, 'html.parser')
        # check if it is a course description page
        ex = Extractor(response.url, soup, page_soup)
        is_course = ex.is_course_page()
        if is_course:
            ex.save_course_info()
I am using Scrapy 1.4.0 and Python 3.6.1.
My question is whether it is possible to create a single deny rule for the subdomains of every domain.
With a simplistic approach (one that ignores multi-part suffixes such as .co.uk), you can deny any URL whose host contains three or more dot-separated labels:
r'^(https?:)?//([^./]+\.){2,}[^./]+(/|$)'
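A minimal sketch of how this single pattern could be plugged into the spider from the question; the SUBDOMAIN_DENY name is mine, and the quick checks rely on the fact that LinkExtractor tests deny patterns with re.search against each absolute link URL:

import re

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

# Deny any URL whose host has three or more dot-separated labels,
# i.e. (simplistically) any subdomain.
SUBDOMAIN_DENY = r'^(https?:)?//([^./]+\.){2,}[^./]+(/|$)'

# Quick check of the pattern outside Scrapy:
print(bool(re.search(SUBDOMAIN_DENY, 'https://sub.example.com/page')))  # True  -> link denied
print(bool(re.search(SUBDOMAIN_DENY, 'https://example.com/page')))      # False -> link followed


class Spider(CrawlSpider):
    name = 'courses'
    start_urls = ['https://www.eb-zuerich.ch']
    allowed_domains = ['eb-zuerich.ch']

    rules = [
        Rule(LinkExtractor(allow=(),
                           deny=(SUBDOMAIN_DENY,
                                 # ... the other deny patterns from the
                                 # question stay here unchanged ...
                                 )),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        pass  # body unchanged from the question

Note that this pattern also rejects hosts such as www.example.com, since www counts as a third label, and (per the caveat above) a bare example.co.uk would be rejected as well.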