Scrapy Crawler in Practice, Part 4: Qiushibaike


This project is written in Python 3.6 and uses the Scrapy framework for crawling.

Compared with the previous examples, the new topic in this one is adding middleware. The goal is to crawl posts from Qiushibaike (http://www.qiushibaike.com).

Below is the directory structure for this project:


----qiushi
    ----qiushi
        ----middlewares
            __init__.py
            customMiddlewares.py
            userAgents.py
        ----spiders
            __init__.py
            QiushiSpider.py
        __init__.py
        items.py
        pipelines.py
        settings.py
    scrapy.cfg

In the structure above, entries without an extension are folders and entries with an extension are files.
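If you want to recreate this layout, the standard skeleton (the spiders folder, items.py, pipelines.py, settings.py, and scrapy.cfg) can be generated with the command scrapy startproject qiushi; only the middlewares folder and the two files inside it are created by hand, along with an empty __init__.py so that Python treats the folder as an importable package.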


1. Deciding what to crawl: items.py

# Decide which fields to crawl
import scrapy

class QiushiItem(scrapy.Item):
    author = scrapy.Field()    # poster's name
    content = scrapy.Field()   # text of the post
    img = scrapy.Field()       # image URLs (avatar first, then any post image)
    funNum = scrapy.Field()    # "funny" (laugh) count
    talkNum = scrapy.Field()   # comment count
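As a side note, a scrapy.Item behaves like a dict but rejects keys that were not declared as Fields, which catches typos early. A quick illustration (the values are placeholders):

item = QiushiItem()
item['author'] = 'someone'   # fine: author is a declared Field
item['autor'] = 'someone'    # raises KeyError (undeclared field)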

2. Defining how to crawl: QiushiSpider.py

# Define how to crawl
import scrapy
from qiushi.items import QiushiItem

class QiushiSpider(scrapy.Spider):
    name = "qiushiSpider"
    allowed_domains = ['qiushibaike.com']

    # Build the list of "hot" pages to crawl; widen the range for more pages
    start_urls = []
    for i in range(1, 2):
        url = 'http://www.qiushibaike.com/hot/page/' + str(i) + '/'
        start_urls.append(url)

    def parse(self, response):
        # Each post lives in a div with this class and a non-empty id attribute
        subSelector = response.xpath('//div[@class="article block untagged mb15" and @id]')
        items = []
        for sub in subSelector:
            item = QiushiItem()
            item['author'] = sub.xpath('.//h2/text()').extract()[0]
            item['content'] = sub.xpath('.//div[@class="content"]/span/text()').extract()[0]
            item['img'] = sub.xpath('.//img/@src').extract()
            item['funNum'] = sub.xpath('.//i[@class="number"]/text()').extract()[0]
            try:
                item['talkNum'] = sub.xpath('.//i[@class="number"]/text()').extract()[1]
            except IndexError:
                # Posts with no comments have only one <i class="number">
                item['talkNum'] = '0'
            items.append(item)
        return items
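The for loop in the class body builds start_urls once at import time. If you prefer, Scrapy also lets a spider generate its requests lazily by overriding start_requests(); a minimal sketch of the equivalent:

    def start_requests(self):
        # Equivalent to the start_urls loop above, generated on demand
        for i in range(1, 2):
            yield scrapy.Request('http://www.qiushibaike.com/hot/page/%d/' % i,
                                 callback=self.parse)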

If you are not yet comfortable with XPath selectors, see the earlier posts in this series; I won't repeat here how each selector was derived.
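If you do want to experiment with the selectors yourself, Scrapy's interactive shell is the quickest way, assuming the page still uses the same class names:

scrapy shell 'http://www.qiushibaike.com/hot/page/1/'
>>> subs = response.xpath('//div[@class="article block untagged mb15" and @id]')
>>> subs[0].xpath('.//h2/text()').extract()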

3. Saving the results: pipelines.py

# Save the crawled results
import os
import time
from urllib import request

class QiushiPipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%Y-%m-%d', time.localtime())
        fileName = today + '.txt'
        imgDir = 'IMG'
        if not os.path.isdir(imgDir):
            os.mkdir(imgDir)
        with open(fileName, 'a') as fp:
            fp.write('-' * 50 + '\n' + '*' * 50 + '\n')
            fp.write("author:%s\n" % item['author'])
            fp.write("content:%s\n" % item['content'])
            try:
                # img[0] is the poster's avatar; img[1] is the image inside the post
                imgUrl = "http:" + item['img'][1]
            except IndexError:
                # This post has no image
                pass
            else:
                imgName = os.path.basename(imgUrl)
                fp.write("img:\t %s\n" % imgName)
                # os.sep is the platform's path separator
                imgPathName = imgDir + os.sep + imgName
                with open(imgPathName, 'wb') as fp1:
                    response = request.urlopen(imgUrl)
                    fp1.write(response.read())
            fp.write("fun:%s\t  talk:%s\n" % (item['funNum'], item['talkNum']))
            fp.write('*' * 50 + '\n' + '-' * 50 + '\n' * 5)
            time.sleep(1)
        return item
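Note that time.sleep(1) pauses the whole crawl for a second per item, because pipelines run in Scrapy's main thread. A sketch of the more idiomatic alternative, if the point is simply to throttle requests, is to drop the sleep and set a delay in settings.py:

# settings.py: pause roughly one second between consecutive requests
DOWNLOAD_DELAY = 1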
4. Dispatching the work: settings.py


BOT_NAME = 'qiushi'

SPIDER_MODULES = ['qiushi.spiders']
NEWSPIDER_MODULE = 'qiushi.spiders'

DOWNLOADER_MIDDLEWARES = {
    # Lower numbers run process_request() first
    'qiushi.middlewares.customMiddlewares.CustomProxy': 10,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 20,
    'qiushi.middlewares.customMiddlewares.CustomUserAgent': 30,
    # Disable the built-in middleware so it cannot overwrite our User-Agent
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

ITEM_PIPELINES = {
    'qiushi.pipelines.QiushiPipeline': 1,
}
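The numbers are priorities: middlewares with lower values run process_request() earlier. So CustomProxy (10) fills in request.meta['proxy'] before the built-in HttpProxyMiddleware (20) reads it and routes the request through the proxy, while mapping the stock UserAgentMiddleware to None disables it so it cannot overwrite the header that CustomUserAgent (30) sets.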
5. The focus of this post, the middleware: customMiddlewares.py


from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware
from qiushi.middlewares import userAgents

class CustomUserAgent(UserAgentMiddleware):
    def process_request(self, request, spider):
        ua = userAgents.pcUserAgent.get('safari 5.1 – Windows')
        # Or hard-code one: ua = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50"
        # The dict values carry a leading "User-Agent:" label; strip it off,
        # otherwise the label ends up inside the header value
        ua = ua.split(':', 1)[1].strip()
        request.headers.setdefault('User-Agent', ua)

class CustomProxy(object):
    def process_request(self, request, spider):
        # Free proxies go stale quickly; replace this address with a live one
        # from a proxy list site such as xicidaili before running
        request.meta['proxy'] = 'https://101.94.134.225:8123'
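Pinning one user agent is fine for a small crawl, but the userAgents module makes it easy to rotate. A minimal sketch, assuming you register it in DOWNLOADER_MIDDLEWARES the same way (the class name RandomUserAgent is my own, not from the original project):

import random

from qiushi.middlewares import userAgents

class RandomUserAgent(object):
    def process_request(self, request, spider):
        # Pick a different browser identity for each request
        ua = random.choice(list(userAgents.pcUserAgent.values()))
        # Strip the leading "User-Agent:" label stored in the dict values
        request.headers['User-Agent'] = ua.split(':', 1)[1].strip()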

6. User-agent strings for dodging anti-crawler checks: userAgents.py

pcUserAgent = {
    "safari 5.1 – MAC": "User-Agent:Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "safari 5.1 – Windows": "User-Agent:Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "IE 9.0": "User-Agent:Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0);",
    "IE 8.0": "User-Agent:Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
    "IE 7.0": "User-Agent:Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
    "IE 6.0": "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Firefox 4.0.1 – MAC": "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Firefox 4.0.1 – Windows": "User-Agent:Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera 11.11 – MAC": "User-Agent:Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera 11.11 – Windows": "User-Agent:Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Chrome 17.0 – MAC": "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Maxthon": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
    "Tencent TT": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
    "The World 2.x": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "The World 3.x": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
    "sogou 1.x": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "360": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Avant": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)",
    "Green Browser": "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
}

mobileUserAgent = {
    "iOS 4.33 – iPhone": "User-Agent:Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
    "iOS 4.33 – iPod Touch": "User-Agent:Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
    "iOS 4.33 – iPad": "User-Agent:Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
    "Android N1": "User-Agent: Mozilla/5.0 (Linux; U; Android 2.3.7; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Android QQ": "User-Agent: MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Android Opera": "User-Agent: Opera/9.80 (Android 2.3.4; Linux; Opera Mobi/build-1107180945; U; en-GB) Presto/2.8.149 Version/11.10",
    "Android Pad Moto Xoom": "User-Agent: Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
    "BlackBerry": "User-Agent: Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.337 Mobile Safari/534.1+",
    "WebOS HP Touchpad": "User-Agent: Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.0; U; en-US) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/233.70 Safari/534.6 TouchPad/1.0",
    "Nokia N97": "User-Agent: Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/20.0.019; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.18124",
    "Windows Phone Mango": "User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)",
    "UC": "User-Agent: UCWEB7.0.2.37/28/999",
    "UC standard": "User-Agent: NOKIA5700/ UCWEB7.0.2.37/28/999",
    "UCOpenwave": "User-Agent: Openwave/ UCWEB7.0.2.37/28/999",
    "UC Opera": "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999",
}

7. The configuration file: scrapy.cfg


[settings]
default = qiushi.settings

[deploy]
project = qiushi

8. How to run it

Open a terminal (cmd) and cd into the folder that contains scrapy.cfg, i.e. the top-level qiushi folder in the directory structure above, then run:

scrapy crawl qiushiSpider

Here qiushiSpider is the value of name="qiushiSpider" in the QiushiSpider class.


When the crawl finishes you will find a .txt file named after the current date containing the scraped text, and a newly created IMG folder in the project root holding the downloaded images.
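Judging from the pipeline's write calls, each record in the .txt file looks roughly like this (angle brackets are placeholders, and the img line appears only when the post contains an image):

--------------------------------------------------
**************************************************
author:<author>
content:<post text>
img:     <image file name>
fun:<laugh count>      talk:<comment count>
**************************************************
--------------------------------------------------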
