distribute_crawler project walkthrough


Install Scrapy

https://scrapy-chs.readthedocs.org/zh_CN/latest/intro/install.html

Install with pip:

pip install Scrapy

Ubuntu packages

  • Add the GPG key used to sign Scrapy packages to the APT keyring:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 627220E7
  • Run the following command to create the /etc/apt/sources.list.d/scrapy.list file:
echo 'deb http://archive.scrapy.org/ubuntu scrapy main' | sudo tee /etc/apt/sources.list.d/scrapy.list
  • Update the package list and install scrapy-0.25:
sudo apt-get update && sudo apt-get install scrapy-0.25
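
Whichever route you choose, a quick sanity check from a Python shell confirms the install; this minimal sketch only prints the installed version string:

    import scrapy
    print(scrapy.__version__)   # e.g. the 0.25.x series this guide targets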

Install redis-py

sudo easy_install redis  

Install pymongo

sudo pip install pymongo==2.7.2

Install Graphite

http://blog.csdn.net/u011008734/article/details/47166469
Note: afterwards, carbon must be started manually each time:

$ cd /opt/graphite/
$ sudo ./bin/carbon-cache.py start

(For configuration details, see statscol/graphite.py.)

Configuration

  • In /opt/graphite/webapp/content/js/composer_widgets.js, change the value of
    'interval' in the toggleAutoRefresh function from 60 to 1.
  • Add a storage-aggregation.conf file under the /opt/graphite/conf directory:
[scrapy_min]
pattern = ^scrapy\..*_min$
xFilesFactor = 0.1
aggregationMethod = min

[scrapy_max]
pattern = ^scrapy\..*_max$
xFilesFactor = 0.1
aggregationMethod = max

[scrapy_sum]
pattern = ^scrapy\..*_count$
xFilesFactor = 0.1
aggregationMethod = sum
  • In settings.py, set:
# For distributed crawling:
STATS_CLASS = 'scrapygraphite.RedisGraphiteStatsCollector'
GRAPHITE_HOST = '127.0.0.1'
GRAPHITE_PORT = 2003

# For a single machine:
STATS_CLASS = 'scrapygraphite.GraphiteStatsCollector'
GRAPHITE_HOST = '127.0.0.1'
GRAPHITE_PORT = 2003
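
To verify that carbon is actually accepting metrics on the GRAPHITE_HOST/GRAPHITE_PORT configured above, a small sketch using Graphite's plaintext protocol can help; the metric name test.graphite_setup is purely illustrative and is not one the project emits:

    import socket
    import time

    # Carbon's plaintext protocol is one line per metric: "<path> <value> <timestamp>\n"
    sock = socket.create_connection(('127.0.0.1', 2003))
    sock.sendall(('test.graphite_setup 1 %d\n' % int(time.time())).encode())
    sock.close()

If carbon is running, the metric should show up under a test.* tree in graphite-web shortly afterwards.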

Install MongoDB

  • Import the public key used by the package management system.
    Ubuntu's package management tools verify packages with GPG keys to ensure their consistency and authenticity. Import the MongoDB public GPG key (http://docs.mongodb.org/10gen-gpg-key.asc) with the following command:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
  • Create a list file for MongoDB.
    Use the following command to create the /etc/apt/sources.list.d/mongodb.list file:
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
  • Reload the local package database:
sudo apt-get update
  • Install the latest stable version of MongoDB:
sudo apt-get install mongodb-org
  • Start the mongod process with the following command:
sudo service mongod start
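
With pymongo 2.7.2 installed earlier, a minimal connectivity check looks like this; it assumes mongod is listening on the default localhost:27017:

    from pymongo import MongoClient

    client = MongoClient('localhost', 27017)
    print(client.server_info()['version'])   # prints the MongoDB server version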

Install Redis

wget http://download.redis.io/releases/redis-2.8.12.tar.gz
tar xzf redis-2.8.12.tar.gz
cd redis-2.8.12
make

Once the build succeeds, start and run Redis:

src/redis-server
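
With the redis client installed earlier, a minimal connectivity check; it assumes redis-server is listening on the default localhost:6379:

    import redis

    r = redis.StrictRedis(host='localhost', port=6379, db=0)
    print(r.ping())   # True if the server is reachable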

Other modules you may need to install

IPy
zope.interface

pip install IPy
pip install zope.interface

Modify the project

  • Use this fork: https://github.com/aware-why/distribute_crawler

  • Change the parent class of WoaiduSpider in woaidu_detail_spider.py (see the sketch after this list).
    See https://scrapy-chs.readthedocs.org/zh_CN/latest/intro/tutorial.html#spider
    The BaseSpider class (from scrapy.spider import BaseSpider) is deprecated; import scrapy and inherit from scrapy.spider.Spider instead.

  • In distribute_crawler/woaidu_crawler/woaidu_crawler/pipelines/file.py, line 18, change:
    from scrapy.contrib.pipeline.images import MediaPipeline
    to:
    from scrapy.contrib.pipeline.media import MediaPipeline
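
A minimal sketch of the base-class change described in the spider item above; the name shown here is a placeholder, and the real attributes and callbacks stay as they are in woaidu_detail_spider.py:

    # Before (deprecated):
    # from scrapy.spider import BaseSpider
    # class WoaiduSpider(BaseSpider):
    #     ...

    # After (equivalently: from scrapy.spider import Spider):
    import scrapy

    class WoaiduSpider(scrapy.spider.Spider):
        name = "woaidu"   # placeholder; keep the project's existing attributes
        # start_urls, parse(), etc. remain unchanged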

Set up a sharded MongoDB cluster

cd woaidu_crawler/commands/
sudo python init_sharding_mongodb.py --path=/usr/bin
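
Once the script finishes, one way to confirm the shards registered is to query the mongos router directly. This sketch assumes the router started by init_sharding_mongodb.py is reachable on localhost:27017, which may differ in your setup:

    from pymongo import MongoClient

    mongos = MongoClient('localhost', 27017)    # mongos router; host/port assumed
    print(mongos.admin.command('listShards'))   # lists the registered shards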

Change ITEM_PIPELINES in settings.py.
See https://scrapy-chs.readthedocs.org/zh_CN/latest/topics/item-pipeline.html#id4

ITEM_PIPELINES = {
    'woaidu_crawler.pipelines.cover_image.WoaiduCoverImage': 300,
    'woaidu_crawler.pipelines.mongodb_book_file.MongodbWoaiduBookFile': 400,
    'woaidu_crawler.pipelines.drop_none_download.DropNoneBookFile': 500,
    'woaidu_crawler.pipelines.mongodb.ShardMongodbPipeline': 600,
    'woaidu_crawler.pipelines.final_test.FinalTestPipeline': 700,
}

In woaidu_crawler/pipelines/mongodb_book_file.py, change line 130 from:
info = self.spiderinfo.spider
to:
info = self.spiderinfo

In the directory that contains the log folder, run the crawler (switch to root first with sudo su):

scrapy crawl woaidu

Open http://127.0.0.1/ to watch the spider's real-time status in graphs.
To try distributed crawling, run this project from another directory as well.

Set up a standalone MongoDB server

    cd woaidu_crawler/commands/
    python init_single_mongodb.py

Configure settings.py:

    ITEM_PIPELINES = {
        'woaidu_crawler.pipelines.cover_image.WoaiduCoverImage': 300,
        'woaidu_crawler.pipelines.bookfile.WoaiduBookFile': 400,
        'woaidu_crawler.pipelines.drop_none_download.DropNoneBookFile': 500,
        'woaidu_crawler.pipelines.mongodb.SingleMongodbPipeline': 600,
        'woaidu_crawler.pipelines.final_test.FinalTestPipeline': 700,
    }

In the directory that contains the log folder, run the crawler (switch to root first with sudo su):

scrapy crawl woaidu

Open http://127.0.0.1/ (i.e. the URL of your running graphite-web) to watch the spider's real-time status in graphs.
To try distributed crawling, run this project from another directory as well.

Note

After every run, execute the commands/clear_stats.py script to clear the stats information stored in Redis:

python clear_stats.py
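
To confirm the stats really were removed, a quick inspection sketch; the '*stat*' pattern is only an illustrative filter, since the actual key names are whatever the project's Redis stats collector writes:

    import redis

    r = redis.StrictRedis(host='localhost', port=6379, db=0)
    print(r.keys('*stat*'))   # expect an empty list after clear_stats.py has run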