Scrapy: Recursively Crawling Data and Storing It in a Database (Example 2)


References:
http://www.hulufei.com/post/Some-Experiences-Of-Using-Scrapy
http://www.shahuwang.com/?p=1620

After Scrapy has crawled a set of links, how do you go on to crawl the content behind each link?

parse() can return a list of Requests or a list of items. A returned Request is put on the queue of pages to be fetched next; only returned items are passed on to the pipelines for processing (or saved directly, if the default feed exporter is used). So if parse() is the method that returns the follow-up links, how do the items get returned and saved? A Request accepts a callback argument naming the function that will parse the page this Request fetches (the callback for start_urls is in fact the parse method by default), so parse() can return Requests and point their callback at a second method (called parse2 in the example below) that returns the items:
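Schematically, the pattern looks like this. This is only a minimal sketch using the same pre-1.0 Scrapy API as the full example below; the spider name, URL and item fields are placeholders rather than anything from the original article (the second-level callback is named parse_item here, while the full example calls it parse2):

# -*- coding: utf-8 -*-
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.item import Item, Field

class PageItem(Item):
    link = Field()
    title = Field()

class SketchSpider(BaseSpider):
    name = "sketch"
    start_urls = ["http://example.com/list"]

    def parse(self, response):
        # Level 1: yield a Request per discovered link; the half-filled
        # item rides along to the callback in the meta dict.
        for href in HtmlXPathSelector(response).select('//a/@href').extract():
            item = PageItem()
            item['link'] = href
            yield Request(href, meta={'item': item}, callback=self.parse_item)

    def parse_item(self, response):
        # Level 2: finish the item and return it so it reaches the pipelines.
        item = response.meta['item']
        item['title'] = HtmlXPathSelector(response).select('//title/text()').extract()
        return item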

The full example below crawls the Nanjing University BBS:

1. The spider file:

# -*- coding: utf-8 -*-
# Python 2 code using pre-1.0 Scrapy APIs, kept as in the original post
# (BaseSpider, HtmlXPathSelector and urljoin_rfc were later replaced by
# scrapy.Spider, response.xpath() and urlparse.urljoin).
import chardet  # only needed by the commented-out encoding-debug lines below
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.utils.url import urljoin_rfc
from scrapy.http import Request
from tutorial.items import bbsItem

class bbsSpider(BaseSpider):
    name = "boat"
    allowed_domains = ["bbs.nju.edu.cn"]
    start_urls = ["http://bbs.nju.edu.cn/bbstop10"]
    def parseContent(self, content):
        # content is the list returned by extract(); take the first node and
        # work on its UTF-8 byte string, so index() below counts byte offsets.
        content = content[0].encode('utf-8')
        #print chardet.detect(content)
        #print content
        # Slice the post header apart by locating its fixed markers.
        authorIndex = content.index('信区')
        author = content[11:authorIndex-2]
        boardIndex = content.index('标  题')
        board = content[authorIndex+8:boardIndex-2]
        timeIndex = content.index('南京大学小百合站 (')
        time = content[timeIndex+26:timeIndex+50]
        return (author, board, time)
        #content = content[timeIndex+58:]
        #return (author, board, time, content)
    def parse2(self, response):
        # Second-level callback: parse the post page and finish the item
        # that parse() started and passed along in response.meta.
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        items = []
        content = hxs.select('/html/body/center/table[1]/tr[2]/td/textarea/text()').extract()
        parseTuple = self.parseContent(content)
        item['author'] = parseTuple[0].decode('utf-8')
        item['board'] = parseTuple[1].decode('utf-8')
        item['time'] = parseTuple[2]
        #item['content'] = parseTuple[3]
        items.append(item)
        return items
    def parse(self, response):
        # First-level callback: collect the title and link of each top-10
        # post, then yield one Request per link, with the half-filled item
        # riding along in meta so parse2 can complete it.
        hxs = HtmlXPathSelector(response)
        items = []
        title = hxs.select('/html/body/center/table/tr[position()>1]/td[3]/a/text()').extract()
        url = hxs.select('/html/body/center/table/tr[position()>1]/td[3]/a/@href').extract()
        for i in range(0, 10):
            item = bbsItem()
            item['link'] = urljoin_rfc('http://bbs.nju.edu.cn/', url[i])
            item['title'] = title[i]
            items.append(item)
        for item in items:
            yield Request(item['link'], meta={'item': item}, callback=self.parse2)
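Note that tutorial/items.py is never shown in the original post. A minimal definition consistent with the fields the spider assigns (link, title, author, board, time, plus the commented-out content) would be:

# -*- coding: utf-8 -*-
# tutorial/items.py -- reconstructed sketch; the original article omits this file.
from scrapy.item import Item, Field

class bbsItem(Item):
    link = Field()
    title = Field()
    author = Field()
    board = Field()
    time = Field()
    content = Field()  # only needed if the commented-out lines are enabled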

2. The pipelines file:

# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/topics/item-pipeline.html

from scrapy import log
from twisted.enterprise import adbapi
import MySQLdb
import MySQLdb.cursors

class MySQLStorePipeline(object):
    def __init__(self):
        # Pool of connections to the local MySQL 'test' database; queries
        # submitted through the pool run on twisted's worker threads.
        self.dbpool = adbapi.ConnectionPool('MySQLdb',
                db = 'test',
                user = 'root',
                passwd = 'root',
                cursorclass = MySQLdb.cursors.DictCursor,
                charset = 'utf8')
         
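The original post is cut off at this point. What follows is a sketch of how such an adbapi pipeline is usually finished, not the author's lost code; the table name bbs and its columns are assumptions, so adapt the insert statement to your own schema:

    def process_item(self, item, spider):
        # Run the insert on one of the pool's threads so database I/O
        # never blocks the crawl.
        query = self.dbpool.runInteraction(self._conditional_insert, item)
        query.addErrback(self.handle_error)
        return item

    def _conditional_insert(self, tx, item):
        # Assumed table/column names -- the original insert statement is lost.
        tx.execute(
            "insert into bbs (link, title, author, board, time) "
            "values (%s, %s, %s, %s, %s)",
            (item['link'], item['title'], item['author'],
             item['board'], item['time']))

    def handle_error(self, e):
        log.err(e)

Finally, as the comment at the top of the file reminds, the pipeline has to be registered in settings.py (list form, as used by Scrapy versions of this era; newer releases expect a dict mapping the class path to an order number):

ITEM_PIPELINES = ['tutorial.pipelines.MySQLStorePipeline']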