Crawler Notes (10/6) ------- Running Multiple Spiders at Once

How to run a batch of spider files; two methods:

1. Use CrawlerProcess

Official docs: http://doc.scrapy.org/en/latest/topics/practices.html
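
A minimal sketch of this approach, assuming the script is run from inside the Scrapy project and that spiders named myspd1/myspd2/myspd3 (as created in method 2 below) exist:

# run_all.py -- run several spiders in one process (file name is just an example)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Load the project settings so spiders can be referenced by name
process = CrawlerProcess(get_project_settings())

# Schedule each spider; nothing runs until start() is called
process.crawl('myspd1')
process.crawl('myspd2')
process.crawl('myspd3')

# Start the Twisted reactor; this blocks until all crawls have finished
process.start()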

2. Modify the crawl command's source code and register it as a custom command

(1) Create a project: scrapy startproject mymultispd

(2) Enter the project directory: cd mymultispd

(3) Create three spider files (the generated template is sketched after the commands): scrapy genspider -t basic myspd1 sina.com.cn

scrapy genspider -t basic myspd2 sina.com.cn

scrapy genspider -t basic myspd3 sina.com.cn
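
For reference, each generated file follows the stock basic template; myspd1.py, for example, looks roughly like this (the parse body is left for you to fill in):

import scrapy


class Myspd1Spider(scrapy.Spider):
    name = 'myspd1'
    allowed_domains = ['sina.com.cn']
    start_urls = ['http://sina.com.cn/']

    def parse(self, response):
        pass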

(4) Get the stock crawl command source from GitHub: https://github.com/scrapy/scrapy/blob/master/scrapy/commands/crawl.py

(5) In cmd, run:

cd .\mymultispd\

mkdir mycmd                       # create a new folder named mycmd

cd .\mycmd\

echo #>mycrawl.py        # create the Python file that will hold the custom command

(6) Copy the contents of crawl.py into mycrawl.py and modify it as follows:

import os
from scrapy.commands import ScrapyCommand
from scrapy.utils.conf import arglist_to_dict
from scrapy.utils.python import without_none_values
from scrapy.exceptions import UsageError


# Define a command class that inherits from ScrapyCommand
class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return "[options] <spider>"

    def short_desc(self):
        return "Run all spiders"

    def add_options(self, parser):
        ScrapyCommand.add_options(self, parser)
        parser.add_option("-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE",
                          help="set spider argument (may be repeated)")
        parser.add_option("-o", "--output", metavar="FILE",
                          help="dump scraped items into FILE (use - for stdout)")
        parser.add_option("-t", "--output-format", metavar="FORMAT",
                          help="format to use for dumping items with -o")

    def process_options(self, args, opts):
        ScrapyCommand.process_options(self, args, opts)
        try:
            opts.spargs = arglist_to_dict(opts.spargs)
        except ValueError:
            raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False)
        if opts.output:
            if opts.output == '-':
                self.settings.set('FEED_URI', 'stdout:', priority='cmdline')
            else:
                self.settings.set('FEED_URI', opts.output, priority='cmdline')
            feed_exporters = without_none_values(
                self.settings.getwithbase('FEED_EXPORTERS'))
            valid_output_formats = feed_exporters.keys()
            if not opts.output_format:
                opts.output_format = os.path.splitext(opts.output)[1].replace(".", "")
            if opts.output_format not in valid_output_formats:
                raise UsageError("Unrecognized output format '%s', set one"
                                 " using the '-t' switch or as a file extension"
                                 " from the supported list %s" % (opts.output_format,
                                                                  tuple(valid_output_formats)))
            self.settings.set('FEED_FORMAT', opts.output_format, priority='cmdline')

    def run(self, args, opts):
        # Get the list of spiders registered in the project
        spd_loader_list = self.crawler_process.spider_loader.list()
        # Schedule each spider in the same crawler process
        for spname in spd_loader_list or args:
            self.crawler_process.crawl(spname, **opts.spargs)
            print("Starting spider: " + spname)
        self.crawler_process.start()
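
The only substantive change from the stock crawl command is in run(): instead of crawling the single spider named on the command line, it asks the spider loader for every spider in the project, schedules each one with crawler_process.crawl(), and calls start() once at the end, so all of them run in the same process.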

(7) Add an __init__.py file so that mycmd is importable as a package:

echo #>__init__.py
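
After steps (5)-(7) the relevant part of the project tree should look roughly like this:

mymultispd/
    scrapy.cfg
    mymultispd/
        settings.py
        mycmd/
            __init__.py
            mycrawl.py
        spiders/
            myspd1.py
            myspd2.py
            myspd3.py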

(8) In settings.py, register the new command package by adding the line:

COMMANDS_MODULE = 'mymultispd.mycmd'

(9) Run scrapy -h in cmd; the custom mycrawl command should now appear in the list of available commands.

(10) Start all the spiders with the custom command: scrapy mycrawl --nolog
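
Since the command keeps the stock -o/-t options, the items yielded by all spiders can also be dumped into a single feed, for example (the file name here is just an example):

scrapy mycrawl -o items.json --nolog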
