Scrapy Installation and a Getting-Started Demo


1. Installing Scrapy

On Windows, you can install Scrapy from within PyCharm, but the scrapy command may then fail to run from cmd.
If that happens, uninstall Scrapy and reinstall it from the command line, or simply install it again directly (you may end up with two copies of Scrapy); all that matters is that the scrapy command works.

If the scrapy command is still not recognized in the Windows cmd window after installation, see:
http://blog.csdn.net/u012263493/article/details/38071143
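A minimal command-line reinstall, assuming pip is on the PATH, looks like this:

pip uninstall scrapy
pip install scrapy
scrapy version    # verify that the scrapy command is now recognized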

2. Scrapy Getting-Started Demo

Step 1: Create the default Scrapy project structure

scrapy startproject tutorial

This generates a project structure similar to the following:

tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            ...

Step 2: Define the data to scrape

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

Step 3: Create a Spider with the genspider project command

scrapy genspider xxt xxt.cn

$ scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

$ scrapy genspider -d basic
import scrapy

class $classname(scrapy.Spider):
    name = "$name"
    allowed_domains = ["$domain"]
    start_urls = (
        'http://www.$domain/',
    )

    def parse(self, response):
        pass

$ scrapy genspider -t basic example example.com
Created spider 'example' using template 'basic' in module:
  mybot.spiders.example

Step 4: Write a Spider that extracts item data

Refer to the code below:

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"    # unique identifier; this name is used when launching the spider
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)

Step 5: Start the crawl

scrapy crawl dmoz

The Scrapy process log looks something like this:

E:\python\tutorial>scrapy crawl dmoz -o items.json
2017-06-29 21:18:30 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: tutorial)
2017-06-29 21:18:30 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_URI': 'items.json', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}
2017-06-29 21:18:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-06-29 21:18:31 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-06-29 21:18:31 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-06-29 21:18:31 [scrapy.middleware] INFO: Enabled item pipelines:
['tutorial.pipelines.TutorialPipeline',
 'tutorial.pipelines.TutorialPipeline1']
2017-06-29 21:18:31 [scrapy.core.engine] INFO: Spider opened
2017-06-29 21:18:31 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-06-29 21:18:31 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-06-29 21:18:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://data.caida.org/robots.txt> (referer: None)
2017-06-29 21:18:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://data.caida.org/datasets/dns/> (referer: None)
...

3. Additional Notes and Enhancements

Extracting data with selectors; refer to the code below:

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
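Note that this version of parse builds DmozItem objects, so the spider module also needs to import the item class defined in Step 2. A minimal sketch, assuming the project created above is named tutorial:

import scrapy
from tutorial.items import DmozItem   # the item class defined in items.py (Step 2)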

Saving the data

The simplest way to store the scraped data is to use Feed exports:

scrapy crawl dmoz -o items.json

This command serializes the scraped data as JSON and writes it to an items.json file.
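Feed exports support other formats as well; the output format is inferred from the file extension (the file names below are only examples):

scrapy crawl dmoz -o items.csv   # CSV
scrapy crawl dmoz -o items.xml   # XML
scrapy crawl dmoz -o items.jl    # JSON lines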

If you need to perform more complex operations on the scraped items, you can write an Item Pipeline. Just as items.py was created with the project, a tutorial/pipelines.py file was also generated for you to write your own pipelines. If you only want to save the items, you do not need to implement any pipeline.

In pipelines.py you can write items to a database, save them to files, and so on.

First, enable the pipelines in the settings.py configuration:

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'tutorial.pipelines.TutorialPipeline': 300,
    'tutorial.pipelines.TutorialPipeline1': 500,
}

Then, write pipelines.py:

class TutorialPipeline(object):
    def process_item(self, item, spider):
        print "TutorialPipeline00000000000", item
        return item

class TutorialPipeline1(object):
    def process_item(self, item, spider):
        print "TutorialPipeline11111111111", item
        return item

Conclusion: based on the ITEM_PIPELINES configuration in settings.py, Scrapy hands each item to the higher-priority pipeline first (the smaller the number after the class, the higher the priority). In the example above, TutorialPipeline runs before TutorialPipeline1.
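As a sketch of the file-writing case mentioned above (the class name JsonWriterPipeline and the items.jl file name are only illustrative assumptions, and the class would also need its own entry in ITEM_PIPELINES), a pipeline that appends each item to a JSON-lines file could look like this:

import json

class JsonWriterPipeline(object):
    # Open the output file once when the spider starts.
    def open_spider(self, spider):
        self.file = open('items.jl', 'w')

    # Close the file when the spider finishes.
    def close_spider(self, spider):
        self.file.close()

    # Write each item as one JSON line, then pass the item on to the next pipeline.
    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item)) + "\n")
        return item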

Recursively crawling a site

First, set the spider's class-level attributes and make sure allowed_domains covers the domains in start_urls; otherwise the follow-up requests are filtered out and the recursive crawl will not proceed.

allowed_domains = ["caida.org"]
start_urls = [
    # "http://data.caida.org/datasets/2013-asrank-data-supplement/",
    "http://data.caida.org/datasets/dns/",
    # "http://data.caida.org/datasets/2013-asrank-data-supplement/extra/"
]

Then, implement the spider's parse method; whenever a link should be followed, yield a new request with scrapy.Request(response.url + next_url, callback=self.parse).

# Recursively crawl the directory listing
def parse(self, response):
    print '2222222222222222222222222222222', response, response.url
    self.log('A response from %s just arrived!' % response.url)
    for sel in response.xpath('/html/body/pre/a'):
        # extract the relative href of each link in the listing and follow it
        next_url = sel.xpath('@href').extract_first()
        yield scrapy.Request(response.url + next_url, callback=self.parse)

Finally, in testing the crawl here appeared to run breadth-first; if a different order is needed, it can be configured through Scrapy's settings, as sketched below.
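For reference, the Scrapy documentation describes the default crawl order as depth-first (LIFO scheduler queues) and documents the following settings for switching to breadth-first; a minimal settings.py sketch of that switch:

# settings.py -- use FIFO scheduler queues for breadth-first crawl order
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'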

References:
[scrapy] Learning Scrapy: Getting Started
http://www.jianshu.com/p/a8aad3bf4dc4
Spiders
http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/spiders.html
Search Engines (5): Scraping Data into a Database with Scrapy
http://blog.csdn.net/ns2250225/article/details/43966671
A Brief Look at Python yield
https://www.ibm.com/developerworks/cn/opensource/os-cn-python-yield/
