Scrapy stops after crawling only the first page


A question: why does my Scrapy spider stop after crawling only the first page, even though parse returns a Request for each follow-up URL?
I'm not handling Items yet -- I planned to write straight to the database inside parse later. Could that be related?

from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector

def parse(self, response):
    hxs = HtmlXPathSelector(response)

    # Yield one item per POI entry. parse is a generator, so a list
    # built here but never returned is simply lost -- yield the items
    # (or write them to the database right here) instead.
    for div in hxs.select("//div[@class='box']//li//div[@class='bassex']"):
        item = PoiItem()
        item['name'] = div.select('.//a/text()')[0].extract()
        item['url'] = div.select('.//a/@href')[0].extract()
        item['tag'] = div.select('.//span[@class="ic"]/a/@title').extract()
        item['sence'] = div.select('.//p[last()]/a/text()').extract()
        print item
        yield item

    # Follow every discovered page with the same callback.
    for url in self.getUrls(hxs):
        print 'push to Queue: ' + url
        self.doneSet[url] = True
        yield Request(url, callback=self.parse)
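For context on the side question about Items: Scrapy consumes parse() as a generator and dispatches each yielded object by type, so mixing items and Requests in one callback is fine. A minimal pure-Python sketch of that dispatch loop (plain dicts stand in for PoiItem and scrapy.Request; `parse`, `crawl`, and the dict keys are hypothetical stand-ins, not Scrapy's API):

```python
def parse(response):
    # Yield "items" first, then follow-up "requests", mimicking the
    # spider above. Plain dicts with a "type" key stand in for
    # scrapy Items and Requests.
    for name in response["names"]:
        yield {"type": "item", "name": name}
    for url in response["links"]:
        yield {"type": "request", "url": url}

def crawl(start_response):
    # Simplified engine: iterate the generator and dispatch each
    # yielded object by type, as Scrapy does with Items vs Requests.
    items, queue = [], [start_response]
    while queue:
        response = queue.pop(0)
        for obj in parse(response):
            if obj["type"] == "item":
                items.append(obj)
            else:
                # A real engine would download obj["url"] and enqueue
                # the new response; this sketch stops after one hop.
                pass
    return items
```

So the lack of an Item pipeline is not what stops the crawl; a spider may yield both kinds of objects from the same parse method.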

python scrapy web-crawler

淋淋查水表 asked 11 years ago

It turned out allowed_domains was written wrong... I had prefixed the domain with an extra http://... spent an hour finding it.

ifoaf answered 11 years ago
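To see why that typo kills the crawl: Scrapy's offsite filtering compares the hostname of each new request URL against the entries in allowed_domains, so an entry that includes the scheme ("http://example.com") can never match a bare hostname, and every follow-up Request is silently dropped while the start URL still loads. A rough sketch of that check (simplified illustration, not Scrapy's actual implementation; Python 3 shown):

```python
from urllib.parse import urlparse

def is_offsite(url, allowed_domains):
    """Allow a request only if its hostname equals, or is a
    subdomain of, one of the allowed_domains entries."""
    host = urlparse(url).hostname or ""
    return not any(
        host == d or host.endswith("." + d) for d in allowed_domains
    )

# Correct: bare domain, follow-up requests pass the filter.
assert not is_offsite("http://example.com/page2", ["example.com"])

# Wrong: with the scheme included, the hostname "example.com" never
# equals "http://example.com", so every follow-up Request is dropped.
assert is_offsite("http://example.com/page2", ["http://example.com"])
```

In short: allowed_domains should contain bare domains like "example.com", never full URLs.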
