Crawling 18 Million Zhihu Users


I recently crawled 18,037,764 Zhihu users, 1,627,302 articles, 7,309,906 questions and 42,825,840 answers; this post records the main steps of the process.


Crawling stack: python3 + scrapy + redis + mongo
Key topics: Python 3, the scrapy-redis framework, the Redis database, the MongoDB database, HTTP requests, regular expressions, XPath, HTTPS proxies.
Crawling strategy: start from a few Zhihu big Vs with over a million followers each (张佳玮 zhang-jia-wei, 李开复 Kai-Fu Lee, etc.) and recursively crawl their followee and follower lists.


Recommended Python tutorials:
1: Python basics tutorial | 菜鸟教程 (runoob) [1]
2: Liao Xuefeng's official site [2]
Recommended for scrapy-redis: the Scrapy 1.0 documentation http://scrapy-chs.readthedocs.io/zh_CN/1.0/index.html
Recommended for redis + mongo: 菜鸟教程 (runoob) http://www.runoob.com/
Recommended for HTTP: "HTTP: The Definitive Guide"
Recommended for regular expressions: "Mastering Regular Expressions"
Recommended for XPath: w3cschool https://www.w3cschool.cn/xpath/
HTTPS proxies: 快代理 (kuaidaili) / 西刺代理 (xicidaili) / goubanjia
phantomjs: used only to refresh the Zhihu token.
Captcha: not many captchas are needed, so instead of training a deep-learning model I used the 云速打码 manual captcha-solving service (about 90% accuracy).
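Services of this kind are usually called over HTTP: the captcha image is uploaded and the recognized text comes back in the response. The endpoint, parameter names and response shape below are placeholders for illustration, not the real 云速打码 API:

import base64

import requests

CAPTCHA_API = 'https://captcha-service.example.com/solve'  # placeholder endpoint, not the real service

def solve_captcha(image_bytes, username, password, timeout=60):
    # upload the captcha image as base64 and return the text recognized by the service
    payload = {
        'user': username,
        'pass': password,
        'img': base64.b64encode(image_bytes).decode('ascii'),
    }
    resp = requests.post(CAPTCHA_API, data=payload, timeout=timeout)
    resp.raise_for_status()
    return resp.json().get('text')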


scrapy is an excellent, fast and efficient crawler framework. Its architecture diagram is not reproduced here (there are plenty online); in short, the flow is request -> dedup -> enqueue -> schedule -> dequeue -> download. The first piece of code to write is the item, i.e. the content to be downloaded. It is simple: just fields/attributes.

from scrapy.item import Item, Field

class UserItem(Item):
    locations = Field()                  # location
    educations = Field()                 # education background
    employments = Field()                # employment info
    badge = Field()
    business = Field()
    id = Field()
    name = Field()                       # user nickname
    avatar_url = Field()
    headline = Field()
    description = Field()                # personal description
    # url = Field()
    url_token = Field()                  # unique profile-page ID assigned by Zhihu
    gender = Field()
    # cover_url = Field()
    type = Field()
    is_active = Field()
    is_advertiser = Field()              # whether the account is an advertiser
    is_org = Field()                     # whether the account is an organization
    answer_count = Field()               # number of answers
    articles_count = Field()             # number of articles written
    commercial_question_count = Field()
    favorite_count = Field()             # number of favorites
    favorited_count = Field()            # times favorited by others
    follower_count = Field()             # number of followers
    following_columns_count = Field()    # columns followed
    following_count = Field()            # users followed
    following_favlists_count = Field()   # favorite lists followed
    following_question_count = Field()   # questions followed
    following_topic_count = Field()      # topics followed
    hosted_live_count = Field()          # Lives hosted
    logs_count = Field()                 # public edits participated in
    marked_answers_count = Field()       # answers featured by Zhihu
    participated_live_count = Field()
    pins_count = Field()                 # total shares (pins)
    question_count = Field()             # questions asked
    thank_from_count = Field()
    thank_to_count = Field()
    thanked_count = Field()              # thanks received
    vote_from_count = Field()
    vote_to_count = Field()
    voteup_count = Field()               # upvotes received

Next comes the spider code:

import json

from scrapy.http import Request
from scrapy_redis.spiders import RedisSpider

from zhihu.items import UserItem

class ZhuanlanSpider(RedisSpider):
    name = "zhihu"                        # string that names this spider
    allowed_domains = ["www.zhihu.com"]   # optional: domains the spider is allowed to crawl
    user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
    follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&limit={limit}&offset={offset}'
    followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&limit={limit}&offset={offset}'

    start_user = 'zhang-jia-wei'

    user_query = ('locations,employments,gender,educations,business,voteup_count,thanked_Count,'
                  'following_favlists_count,following_columns_count,answer_count,articles_count,'
                  'pins_count,question_count,commercial_question_count,favorite_count,favorited_count,'
                  'logs_count,marked_answers_count,marked_answers_text,message_thread_token,'
                  'account_status,is_active,is_force_renamed,is_bind_sina,sina_weibo_url,'
                  'sina_weibo_name,show_sina_weibo,is_blocking,is_blocked,is_following,is_followed,'
                  'thank_from_count,thanked_count,description,hosted_live_count,'
                  'participated_live_count,allow_message,industry_category,org_name,'
                  'org_homepage,badge[?(type=best_answerer)].topics')
    follows_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    followers_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    start_urls = []

    def start_requests(self):
        # Called when the spider starts crawling and no start_urls are specified.
        user = self.start_user
        url = self.user_url.format(user=user, include=self.user_query)
        yield Request(url, self.parse_user)
        yield Request(self.follows_url.format(user=user, include=self.follows_query, limit=20, offset=0), self.parse_follows)
        yield Request(self.followers_url.format(user=user, include=self.followers_query, limit=20, offset=0), self.parse_followers)

    def parse_user(self, response):
        item = UserItem()
        result = json.loads(response.text)
        self.DealResult(result)  # clean up educations, employments, business, locations, badge
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item
        if item['following_count'] > 0:
            yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, limit=20, offset=0), self.parse_follows)
        if item['follower_count'] > 0:
            yield Request(self.followers_url.format(user=result.get('url_token'), include=self.followers_query, limit=20, offset=0), self.parse_followers)

    # parse_follows() and parse_followers() are even simpler, so they are omitted here.

I had meant to include everything, but it turned out far too long, so here are just the key points.
  1. For enqueue/dequeue, to cut memory use, each link is compressed to the form user%s0 when it is pushed onto the queue and expanded back when it is popped. Dequeue priority is user_url > follows_url | followers_url. (A minimal sketch follows this list.)
  2. In the header middleware, the Zhihu token and the HTTPS proxy are rotated at random. When the response comes back: a 401 or 403 error triggers a refresh of the Zhihu token, a 404 is logged and the URL ignored, and any other error re-enqueues the link, logs the error, and drops the request. There is no fixed retry count. (A sketch follows this list.)
  3. Deduplication uses a Bloom filter, following 九茶's code. (A minimal Redis-backed sketch follows this list.)
  4. User data is saved to MongoDB. The educations, employments, business, locations and badge fields must be cleaned first (the DealResult() in the code above, omitted here for space; message me if you need it), otherwise quotes inside them can break the save. A small gripe about Zhihu: these fields carry far too much long wiki text.
  5. The code will be uploaded to GitHub later. For analysis of the crawled user data, see the series "Big-data report: a simple analysis of 18 million Zhihu users (1)" [3], "Big-data report: superlatives among 18 million Zhihu users (2)" [4] and "Big-data report: organization and advertiser accounts among 18 million Zhihu users" [5].
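To make point 1 concrete, here is a minimal sketch of the compression idea, assuming a key of the form user%s0 whose trailing digit marks the URL type; the helper names are made up for illustration, not the code actually used:

USER_URL = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'

def compress(url_token, kind=0):
    # kind 0 marks a user_url; other digits could mark follows/followers pages
    return 'user%s%d' % (url_token, kind)

def expand(key, include):
    # strip the 'user' prefix and the trailing kind digit, then rebuild the full URL
    url_token, kind = key[4:-1], int(key[-1])
    if kind == 0:
        return USER_URL.format(user=url_token, include=include)
    raise ValueError('unknown queue key: %s' % key)

The priority rule (user_url before follows_url/followers_url) can then be expressed by polling separate Redis lists in order, or by using a sorted set keyed by priority.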
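For point 2, this is a rough sketch of what such a downloader middleware can look like in Scrapy. The class name and the TOKENS/PROXIES pools are assumptions, and the token-refresh step is left as a comment; only the 401/403/404 handling mirrors the description above.

import random

from scrapy.exceptions import IgnoreRequest

TOKENS = []   # Zhihu authorization header values, refreshed elsewhere (e.g. via phantomjs)
PROXIES = []  # https proxy addresses, e.g. 'https://host:port'

class RandomHeaderProxyMiddleware(object):
    """Rotate the Zhihu token and https proxy, and map HTTP errors to actions."""

    def process_request(self, request, spider):
        if TOKENS:
            request.headers['authorization'] = random.choice(TOKENS)
        if PROXIES:
            request.meta['proxy'] = random.choice(PROXIES)

    def process_response(self, request, response, spider):
        if response.status in (401, 403):
            # token rejected: refresh the token pool (omitted), then reschedule this request
            spider.logger.warning('token rejected for %s', request.url)
            return request.replace(dont_filter=True)
        if response.status == 404:
            spider.logger.info('404 recorded, skipping %s', request.url)
            raise IgnoreRequest()
        if response.status >= 400:
            # other errors: push the link back onto the redis queue, log it, drop the request
            spider.server.lpush('%s:start_urls' % spider.name, request.url)
            spider.logger.error('HTTP %d for %s', response.status, request.url)
            raise IgnoreRequest()
        return response

Here spider.server is the Redis connection that scrapy-redis attaches to the spider, and the queue key assumes its default '%s:start_urls' naming.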
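For point 3, here is a minimal Redis-backed Bloom filter in the same spirit; it is not a copy of 九茶's implementation, and the key name, bit size and number of hashes are illustrative:

import hashlib

import redis

class BloomFilter(object):
    def __init__(self, host='localhost', port=6379, db=0,
                 key='zhihu:bloomfilter', bit_size=1 << 31, hash_count=6):
        self.server = redis.StrictRedis(host=host, port=port, db=db)
        self.key = key
        self.bit_size = bit_size      # number of bits in the underlying Redis string
        self.hash_count = hash_count  # more hashes -> fewer false positives, more Redis calls

    def _offsets(self, value):
        # derive hash_count bit offsets from salted md5 digests of the value
        for seed in range(self.hash_count):
            digest = hashlib.md5(('%d%s' % (seed, value)).encode('utf-8')).hexdigest()
            yield int(digest, 16) % self.bit_size

    def exists(self, value):
        return all(self.server.getbit(self.key, off) for off in self._offsets(value))

    def add(self, value):
        for off in self._offsets(value):
            self.server.setbit(self.key, off, 1)

Wired into scrapy-redis as a custom dupefilter, request fingerprints would be checked with exists() and recorded with add() instead of going into a plain Redis set.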
    Thanks.

[1] http://www.runoob.com/python/python-tutorial.html
[2] https://www.liaoxuefeng.com/wiki/0014316089557264a6b348958f449949df42a6d3a2e542c000
[3] https://zhuanlan.zhihu.com/p/30051491
[4] https://zhuanlan.zhihu.com/p/30106090
[5] https://zhuanlan.zhihu.com/p/30144848
