Practical Python Crawler Notes --- Crawling Zhihu User Info with Scrapy, Starting from vczh (轮子哥)


Development environment: Python 3.5 + Scrapy + PyCharm + MongoDB

Approach:

1. Pick a starting user: a well-known account with a large number of followees or followers
2. Fetch that user's followee and follower lists
3. Fetch the profile info of each user in those lists
4. Fetch every one of those users' followees and followers, and repeat

Site analysis:
Starting from vczh (轮子哥), inspect the page's network requests in the browser's developer tools.
You will find a request whose URL contains "followees"; its Preview pane shows a "data" array, i.e. the followees' profile information.


5. Create a project
Open cmd and run scrapy startproject zhihuuser

6. Open the project in PyCharm
The directory layout looks like this:
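Roughly, scrapy startproject zhihuuser generates the following layout (the exact files vary slightly by Scrapy version):

zhihuuser/
    scrapy.cfg
    zhihuuser/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py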


7. Open the project's settings.py and set ROBOTSTXT_OBEY to False (robots.txt is the site's crawler-exclusion policy, which would otherwise block the crawl).
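That is, the relevant line in settings.py becomes:

ROBOTSTXT_OBEY = False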


8. From the project's root directory, run scrapy genspider zhihu www.zhihu.com to create a spider.


9. Run scrapy crawl zhihu. The terminal reports a 500 server error, because Zhihu checks the user-agent of incoming requests. So we need to set the user-agent in the request headers in settings.py: copy a user-agent from a request made by a real browser and put it into DEFAULT_REQUEST_HEADERS.
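A sketch of what that looks like in settings.py (the User-Agent string below is only illustrative; copy the one from your own browser):

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
}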



After running the test again, results come back normally.

Next comes the important part: the code!

1. Get vczh's followee list
With the browser's developer tools open, hover over a user's name in the followee list and a request appears:



Take this request URL, define a start_requests method in the ZhihuSpider class, and pass the URL in.
Let's test it.
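A minimal sketch of that first attempt (assuming the spider skeleton generated by scrapy genspider; the URL is the followees request copied from the developer tools):

# -*- coding: utf-8 -*-
from scrapy import Request, Spider


class ZhihuSpider(Spider):
    name = 'zhihu'
    allowed_domains = ['www.zhihu.com']

    def start_requests(self):
        # the followees request URL copied from the browser's developer tools
        url = 'https://www.zhihu.com/api/v4/members/excited-vczh/followees?include=data%5B*%5D.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics&offset=0&limit=20'
        yield Request(url, callback=self.parse)

    def parse(self, response):
        print(response.text)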


The run then fails with a 401 error, i.e. the crawler is denied access.
That is because this API also expects another request header:

Copy it from the browser and paste it into the settings file.

Run it again and the response comes back normally.

Next, test whether the followee-list (follow) request can be crawled as well.

The test works.

Next, we can refine the crawl. The followee-list request URL is:
https://www.zhihu.com/api/v4/members/excited-vczh/followees?include=data%5B*%5D.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics&offset=0&limit=20


offset is the list offset, i.e. how many users have already been shown: offset=0 is page 1, offset=20 is page 2, and so on.
limit caps how many users are returned per page.
So to fetch many users' info, we only need to vary the URL dynamically, as the small sketch below shows.
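A quick sketch of how changing offset walks through the pages (follows_url anticipates the spider code further down; '...' is a placeholder for the long include value):

follows_url = "https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}"

# offset=0 -> users 1-20 (page 1), offset=20 -> users 21-40 (page 2), ...
for offset in (0, 20, 40):
    print(follows_url.format(user='excited-vczh', include='...', offset=offset, limit=20))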
Opening the request for a specific user shows a request URL of:


https://www.zhihu.com/api/v4/members/mai-tuo-shen?include=allow_message%2Cis_followed%2Cis_following%2Cis_org%2Cis_blocking%2Cemployments%2Canswer_count%2Cfollower_count%2Carticles_count%2Cgender%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics

The part highlighted in red (the user's identifier, here mai-tuo-shen) is the url_token; include is a fixed query parameter.


So, by extracting each user's url_token from the data list in the followee response, we can construct that user's request URL!

Let's get started.
Modify the spider: define a user_url variable holding a value like "https://www.zhihu.com/api/v4/members/mai-tuo-shen?include=......".

Define a query parameter user_query holding that long include string.
In the same way, build a follows_url variable and a follows_query.
Then, with vczh as the starting point, use format() to fill in the URL templates and write the start_requests function along with the corresponding callbacks:
# -*- coding: utf-8 -*-
from scrapy import Request, Spider


class ZhihuSpider(Spider):
    name = 'zhihu'
    allowed_domains = ['www.zhihu.com']
    start_urls = ['http://www.zhihu.com/']
    start_user = 'excited-vczh'

    user_url = "https://www.zhihu.com/api/v4/members/{user}?include={include}"
    user_query = "allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics"

    follows_url = "https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}"
    follows_query = "data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics"

    def start_requests(self):
        yield Request(self.user_url.format(user=self.start_user, include=self.user_query), self.parse_user)
        yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, offset=0, limit=20), callback=self.parse_follows)

    def parse_user(self, response):
        print(response.text)

    def parse_follows(self, response):
        print(response.text)

This gives us vczh's basic info and his followee info. Next, we will store each user's info as an item.
First, declare each field of the user info's data as a Field() in items.py.
Then rewrite the parse_user function in zhihu.py (this also needs import json and from zhihuuser.items import UserItem at the top of the file):
    def parse_user(self, response):
        result = json.loads(response.text)
        item = UserItem()
        # copy every declared field that appears in the response JSON
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item
Then rewrite the parse_follows function:
    def parse_follows(self, response):
        results = json.loads(response.text)
        # each entry in 'data' is one followee; request that user's profile
        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), callback=self.parse_user)
        # 'paging' carries pagination info: 'is_end' marks the last page and
        # 'next' is the ready-made URL of the next page, so it can be yielded directly
        if 'paging' in results.keys() and results.get('paging').get('is_end') == False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, self.parse_follows)

With that, running the spider gets the info of vczh's followees. To keep crawling level by level, add one more line to the parse_user() function:
        yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, limit=20, offset=0), self.parse_follows)
Next, the users' follower lists can be added as well:
followers_url = "https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset={offset}&limit={limit}"
followers_query = "data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics"
The logic mirrors the followee crawl, so it is not spelled out in detail; a sketch follows.
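For reference, a sketch of what the follower side might look like, simply mirroring parse_follows above (an extra Request in start_requests and in parse_user, built from followers_url, would kick it off in the same way):

    def parse_followers(self, response):
        results = json.loads(response.text)
        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), callback=self.parse_user)
        if 'paging' in results.keys() and results.get('paging').get('is_end') == False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, self.parse_followers)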

Then the scraped data can be stored in a MongoDB database.

Copy the MongoDB pipeline example from the official Scrapy documentation into pipelines.py, then rewrite the process_item function to handle deduplication.

And add the following configuration in settings.py (MONGO_URI must match the key the pipeline reads):

MONGO_URI = 'localhost'
MONGO_DATABASE = 'zhihu'

ITEM_PIPELINES = {
    'zhihuuser.pipelines.MongoPipeline': 300,
}

Open a MongoDB client and you can see the user data has been crawled successfully.





The project's key code is listed below:

# items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy import Item, Field


class UserItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    allow_message = Field()
    answer_count = Field()
    articles_count = Field()
    avatar_hue = Field()
    avatar_url = Field()
    avatar_url_template = Field()
    badge = Field()
    business = Field()
    columns_count = Field()
    commercial_question_count = Field()
    cover_url = Field()
    description = Field()
    educations = Field()
    employments = Field()
    favorite_count = Field()
    favorited_count = Field()
    follower_count = Field()
    following_columns_count = Field()
    following_count = Field()
    following_favlists_count = Field()
    following_question_count = Field()
    following_topic_count = Field()
    gender = Field()
    headline = Field()
    hosted_live_count = Field()
    id = Field()
    is_active = Field()
    is_advertiser = Field()
    is_bind_sina = Field()
    is_blocked = Field()
    is_blocking = Field()
    is_followed = Field()
    is_following = Field()
    is_force_renamed = Field()
    is_privacy_protected = Field()
    locations = Field()
    logs_count = Field()
    marked_answers_count = Field()
    marked_answers_text = Field()
    message_thread_token = Field()
    mutual_followees_count = Field()
    name = Field()
    participated_live_count = Field()
    pins_count = Field()
    question_count = Field()
    show_sina_weibo = Field()
    thank_from_count = Field()
    thank_to_count = Field()
    thanked_count = Field()
    type = Field()
    url = Field()
    url_token = Field()
    user_type = Field()
    vote_from_count = Field()
    vote_to_count = Field()
    voteup_count = Field()

# pipelines.py
import pymongo


class MongoPipeline(object):
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # upsert keyed on url_token so re-crawled users overwrite the old record
        self.db['user'].update({'url_token': item['url_token']}, {'$set': dict(item)}, True)
        return item


# settings.py
# -*- coding: utf-8 -*-

BOT_NAME = 'zhihuuser'

SPIDER_MODULES = ['zhihuuser.spiders']
NEWSPIDER_MODULE = 'zhihuuser.spiders'

ROBOTSTXT_OBEY = False

# DEFAULT_REQUEST_HEADERS should also carry the User-Agent (and the extra
# header) copied from the browser, as described in the setup steps above.

ITEM_PIPELINES = {
    'zhihuuser.pipelines.MongoPipeline': 300,
}

MONGO_URI = 'localhost'
MONGO_DATABASE = 'zhihu'