Fetching AJAX content with scrapy + spynner (WeChat Official Account articles as an example)


More and more websites load their data dynamically with AJAX. Scrapy by itself only sees the static HTML, so dynamically loaded content is out of its reach.

spynner is a browser-emulation tool (built on PyQt/WebKit) that renders a page in the background, including whatever AJAX loads after the initial HTML; the rendered result can then be handed to Scrapy for parsing.
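As a quick sanity check before wiring it into Scrapy, spynner can be used on its own. A minimal sketch (the URL here is just a placeholder):

import spynner
import pyquery

browser = spynner.Browser()
browser.set_html_parser(pyquery.PyQuery)
browser.load("https://mp.weixin.qq.com/", 20)  # placeholder URL, 20 s load timeout
print browser.html.encode('utf-8')             # the HTML after JavaScript has run
browser.close()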

The idea is to register spynner as a Scrapy downloader middleware, so every request is rendered by spynner before the response reaches the spider.

In a WeChat Official Account article, the text is present in the initial HTML, but the images are filled in by AJAX (lazy loading). If we manage to read an image's src attribute, we know we have captured the dynamically rendered page.


Environment: Ubuntu 16, Python 2.7

1 Install spynner

pip install spynner

The install pulls in a number of native dependencies; if one is missing, the build fails with an error. apt-file can be used to locate the package that provides a missing file: http://blog.csdn.net/lcyong_/article/details/72904275
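For example, if compilation stops on a missing Qt header, a typical apt-file session looks like this (the file name is illustrative):

sudo apt-get install apt-file
sudo apt-file update
apt-file search qwebview.h   # prints the package that ships the missing file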


2 Create the Scrapy project and write a middleware that adds spynner

scrapy startproject testSpynner

cd testSpynner

scrapy genspider weixin qq.com

A Scrapy spider named weixin has now been created.
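The project tree should now look roughly like this (the standard Scrapy skeleton; weixin.py is the spider genspider just generated):

testSpynner/
    scrapy.cfg
    testSpynner/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            weixin.py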

Edit middlewares.py. Here I also added a middleware that rotates the User-Agent header; the code is as follows:


# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html
import random

import pyquery
import spynner
from scrapy.http import HtmlResponse


class WebkitDownloaderTest(object):
    def process_request(self, request, spider):
        print "creating spynner browser"
        browser = spynner.Browser()
        browser.create_webview()
        browser.set_html_parser(pyquery.PyQuery)
        browser.load(request.url, 20)
        print "page loaded"
        try:
            browser.wait_load(10)
        except:
            pass
        string = browser.html.encode('utf-8')
        renderedBody = str(string)
        print "rendered html: " + string
        # Returning a response here short-circuits Scrapy's own download.
        return HtmlResponse(request.url, body=renderedBody)


class UserAgentMiddleware(object):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)
            print "********Current UserAgent:%s************" % ua

    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
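Because process_request returns an HtmlResponse, Scrapy skips its own downloader entirely and hands the rendered HTML straight to the spider. One caveat: the middleware above starts a fresh spynner.Browser for every request, which is expensive. A sketch of a variant that reuses a single browser (my own variation, not from the original post; same imports as above) could look like:

class WebkitDownloaderReuse(object):
    # Variation on WebkitDownloaderTest: one shared browser instance
    # instead of a new one per request.
    def __init__(self):
        self.browser = spynner.Browser()
        self.browser.create_webview()
        self.browser.set_html_parser(pyquery.PyQuery)

    def process_request(self, request, spider):
        self.browser.load(request.url, 20)
        try:
            self.browser.wait_load(10)
        except Exception:
            pass  # wait_load raises on timeout; the DOM may still be usable
        body = self.browser.html.encode('utf-8')
        return HtmlResponse(request.url, body=body, encoding='utf-8')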


3 Enable the middlewares in settings.py

DOWNLOADER_MIDDLEWARES = {
    'testSpynner.middlewares.UserAgentMiddleware': 400,
    'testSpynner.middlewares.WebkitDownloaderTest': 401,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}


The spynner middleware's order value (401) must be larger than the User-Agent middleware's (400): Scrapy calls each middleware's process_request in ascending order of these values, so the random User-Agent header is set before spynner loads the page. Scrapy's built-in UserAgentMiddleware is set to None to disable it in favor of our custom one.


4 Write the spider

# -*- coding: utf-8 -*-
import scrapy


class WeixinSpider(scrapy.Spider):
    name = "weixin"
    allowed_domains = ["qq.com"]
    start_urls = [
        'https://mp.weixin.qq.com/s?src=3&timestamp=1496839659&ver=1&signature=IrQ2oi0qMCOa0-*lbf7OCdgKjBnbKqAYOumviodVwtgeWWkt-fvA1kcd63*u0Z4uQY4kJVn*jS8rbRwd9Hg4FLj9hxw*sKA7rVYTMpWKXaemALgabVrrAeOBCPBFmtLUQx3zSoapN7i1ZBhPw*2eQ2*gbTwQVTUvTDaBRhCKePg='
    ]

    def parse(self, response):
        img = response.xpath('//*[@id="js_content"]/p[2]')
        data = img[0].xpath('img/@data-s')
        src = img[0].xpath('img/@src')
        self.log("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^")
        self.log(img.extract()[0])
        self.log("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^")
        self.log(data.extract()[0])
        self.log("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^")
        self.log(src.extract()[0])
        self.log("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^")
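The parse callback above only logs what it finds. To collect every image in the article body instead, it could yield dicts (a hypothetical extension; the field names are mine):

    def parse(self, response):
        # Walk all <img> tags in the article body, not just the one in p[2].
        for img in response.xpath('//*[@id="js_content"]//img'):
            yield {
                'src': img.xpath('@src').extract_first(),
                'data_src': img.xpath('@data-src').extract_first(),
            }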




5 Run and test

From the command line, run: scrapy crawl weixin

In the log output you should see:

2017-06-07 22:16:49 [scrapy] DEBUG: Crawled (200) <GET https://mp.weixin.qq.com/s?src=3&timestamp=1496839659&ver=1&signature=IrQ2oi0qMCOa0-*lbf7OCdgKjBnbKqAYOumviodVwtgeWWkt-fvA1kcd63*u0Z4uQY4kJVn*jS8rbRwd9Hg4FLj9hxw*sKA7rVYTMpWKXaemALgabVrrAeOBCPBFmtLUQx3zSoapN7i1ZBhPw*2eQ2*gbTwQVTUvTDaBRhCKePg=> (referer: None)
2017-06-07 22:16:49 [weixin] DEBUG: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2017-06-07 22:16:49 [weixin] DEBUG: <p><img data-s="300,640" data-type="jpeg" data-src="http://mmbiz.qpic.cn/mmbiz_jpg/xZe7vUPPTJSvnfLFLhuCyib4ZiclZleCA56AXdAunYSPWR3CLEUciatU40n2lhicycpw8IiadvicKkdSaBYl0BVzoT9Q/0?wx_fmt=jpeg" style="width: auto !important; height: auto !important; visibility: visible !important;" data-ratio="0.625" data-w="1000" class=" " src="http://mmbiz.qpic.cn/mmbiz_jpg/xZe7vUPPTJSvnfLFLhuCyib4ZiclZleCA56AXdAunYSPWR3CLEUciatU40n2lhicycpw8IiadvicKkdSaBYl0BVzoT9Q/640?wx_fmt=jpeg&wxfrom=5&wx_lazy=1" data-fail="0"></p>
2017-06-07 22:16:49 [weixin] DEBUG: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2017-06-07 22:16:49 [weixin] DEBUG: 300,640
2017-06-07 22:16:49 [weixin] DEBUG: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2017-06-07 22:16:49 [weixin] DEBUG: http://mmbiz.qpic.cn/mmbiz_jpg/xZe7vUPPTJSvnfLFLhuCyib4ZiclZleCA56AXdAunYSPWR3CLEUciatU40n2lhicycpw8IiadvicKkdSaBYl0BVzoT9Q/640?wx_fmt=jpeg&wxfrom=5&wx_lazy=1
2017-06-07 22:16:49 [weixin] DEBUG: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


The image's data-s value and its final src were both extracted, which shows the AJAX-rendered content was crawled successfully.

