Writing a Simple Focused Crawler in Python 3.4


This is a crawler for the Baidu Baike Python entry and about 1,000 related entries. Here I mainly record the problems I ran into while writing it, hoping it is of some help to Python novices like me.
I followed the video tutorial on 慕课网 (imooc). The tutorial uses Python 2 while I use Python 3, so many of the problems I hit come from the restructuring of Python's standard library:

  1. Installing BeautifulSoup for Python 3
    Installing online straight from the Python command line did not work for me
    (reference: http://blog.csdn.net/u012387575/article/details/51024455), so I installed from source:
    a. Download the package from
    https://www.crummy.com/software/BeautifulSoup/bs4/download/
    b. Extract it to D:\python34, i.e. the Python installation directory
    c. Open cmd and change into D:\python34\beautifulsoup4-4.4.1 (my extraction path; it contains the setup.py file)
    d. In cmd, run:
    python setup.py install
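    A quick way to confirm the install succeeded is to import the package from a Python shell (a minimal sanity check; note that the import name of the beautifulsoup4 package is bs4):

    # Sanity check: should print the installed version without errors
    import bs4
    print(bs4.__version__)           # e.g. 4.4.1
    from bs4 import BeautifulSoup    # the class this crawler uses throughout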

  2. Non-UTF-8 code starting with '\xc4' in file
    Eclipse reports this error, which is clearly an encoding problem. There are plenty of Eclipse settings posted online for it; whatever you make of them, I found them tedious,
    so I simply added a code template instead:
    Windows -> Preferences -> PyDev -> Editor -> Code Style -> Templates
    After that, choosing this template when creating a file automatically adds # coding:utf-8 at the top, and of course you can add your own signature as well; my template body is shown below.
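    For reference, a template body along these lines is what I mean (whether PyDev expands variables like ${date} depends on your template settings, so treat the docstring fields as placeholders to fill in):

    # coding:utf-8
    '''
    Created on ${date}

    @author: micle
    '''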

  3. urlparse.urljoin raises an error
    This one comes straight from the Python 3 restructuring:
    Python 3 reorganized urllib and urllib2 into several submodules (urllib.request, urllib.response, urllib.parse, urllib.error, and so on), which is more reasonable both logically and structurally. urljoin now lives at urllib.parse.urljoin.
    I suppose differences like these are exactly the learning obstacles we beginners face; the sketch below shows the change.
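    A minimal before/after sketch (the commented lines are the Python 2 form used in the tutorial):

    # Python 2, as in the tutorial:
    #   import urlparse
    #   full = urlparse.urljoin(page_url, href)

    # Python 3 equivalent:
    import urllib.parse

    page_url = 'http://baike.baidu.com/item/Python'  # page being parsed
    href = '/view/10812319.htm'                      # relative link found on it
    full = urllib.parse.urljoin(page_url, href)
    print(full)  # http://baike.baidu.com/view/10812319.htm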

  4. TypeError: output_html() missing 1 required positional argument: 'self'
    In the main-control code given below, I call functions from the other modules (.py files).
    self.outputer().output_html() kept raising this error until I added the first pair of parentheses, after which it suddenly worked. I did not know whether my syntax was off or what, until I saw someone with the same problem on Stack Overflow, apparently also caused by moving to Python 3: http://stackoverflow.com/questions/17534345/typeerror-missing-1-required-positional-argument-self
    The actual cause: I had written self.outputer = html_outputer.HtmlOutputer without parentheses, so self.outputer was bound to the class itself rather than to an instance, and output_html() was being called without a self. Worse, each self.outputer() call creates a brand-new HtmlOutputer, so data collected into one throwaway instance is lost before output. The code below uses the proper fix: instantiate once in __init__ and call methods on that single instance. A stripped-down demonstration follows.
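    Here is the mistake in miniature (Toy is a hypothetical stand-in for HtmlOutputer):

    class Toy(object):
        def __init__(self):
            self.items = []
        def add(self, x):
            self.items.append(x)

    ref = Toy           # the mistake: binds the class itself, not an instance
    ref().add('a')      # "works", but each ref() builds a fresh, empty Toy
    print(ref().items)  # [] -- the 'a' went into a discarded instance

    toy = Toy()         # the fix: instantiate once
    toy.add('a')
    print(toy.items)    # ['a']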

The crawler is split into five modules:
1. Main controller
2. HTML downloader
3. URL manager
4. HTML parser
5. Data collection and output
Code:
1. Main controller

# coding:utf-8
'''
Created on 2017-01-17

@author: micle
'''
from baike_spider import url_manager, html_downloader, html_parser, html_outputer
import sys


class SpiderMain(object):
    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        # Instantiate once (note the parentheses) so a single outputer
        # accumulates data across the whole crawl
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        count = 1
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print('craw %d : %s' % (count, new_url))
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 1000:
                    break
                count = count + 1
            except:
                # Keep crawling past individual failures, but report them
                print('craw failed')
                info = sys.exc_info()
                print(info[0], ":", info[1])
        self.outputer.output_html()


if __name__ == "__main__":
    root_url = "http://baike.baidu.com/item/Python"
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
2. HTML downloader
# coding:utf-8
'''
Created on 2017-01-17

@author: micle
'''
import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()
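One practical caveat, not from the tutorial: some sites serve reduced pages or reject requests carrying urllib's default User-Agent. If download() starts returning empty or odd content, a variant that sends a browser-like User-Agent header is worth trying (a sketch under that assumption; the UA string is only an example value):

# coding:utf-8
import urllib.request


class HtmlDownloader(object):
    '''Variant of the downloader that sends a browser-like User-Agent.'''

    def download(self, url):
        if url is None:
            return None
        # The header value below is just an example, not a requirement
        req = urllib.request.Request(url, headers={
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36'
        })
        response = urllib.request.urlopen(req)
        if response.getcode() != 200:
            return None
        return response.read()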
3. URL manager
# coding:utf-8
'''
Created on 2017-01-17

@author: micle
'''


class UrlManager(object):
    def __init__(self):
        self.new_urls = set()   # URLs waiting to be crawled
        self.old_urls = set()   # URLs already crawled

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
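The manager's behavior is easy to verify interactively; a small demo (assuming the class above lives in the baike_spider package as in the main module's import):

from baike_spider import url_manager

m = url_manager.UrlManager()
m.add_new_url('http://baike.baidu.com/item/Python')
m.add_new_url('http://baike.baidu.com/item/Python')  # duplicate: ignored
print(m.has_new_url())   # True: one URL is waiting
url = m.get_new_url()    # moves the URL from new_urls to old_urls
m.add_new_url(url)       # already crawled: ignored
print(m.has_new_url())   # False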
4. HTML parser
# coding:utf-8
'''
Created on 2017-01-17

@author: micle
'''
from bs4 import BeautifulSoup
import re
import urllib.parse


class HtmlParser(object):
    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # Entry links look like /view/123.htm
        links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))
        for link in links:
            new_url = link['href']
            # urljoin moved to urllib.parse in Python 3 (see problem 3 above)
            new_full_url = urllib.parse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        # url
        res_data['url'] = page_url
        # <dd class="lemmaWgt-lemmaTitle-title"> <h1>Python</h1>
        title_node = soup.find('dd', class_="lemmaWgt-lemmaTitle-title").find("h1")
        res_data['title'] = title_node.get_text()
        # <div class="lemma-summary" label-module="lemmaSummary">
        summary_node = soup.find('div', class_="lemma-summary")
        res_data['summary'] = summary_node.get_text()
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return None, None
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
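One more caveat of my own (an assumption about Baidu Baike's markup, not something from the tutorial): _get_new_urls only follows old-style /view/123.htm links, while the root URL uses the newer /item/ style. If the crawl stalls after the first page, a broader pattern like the hypothetical one below may be needed:

import re

# Hypothetical pattern covering both old- and new-style entry links:
# /view/123.htm as well as /item/<name> (no .htm suffix)
ENTRY_LINK = re.compile(r'/(view/\d+\.htm|item/[^\s"#]+)')

print(bool(ENTRY_LINK.search('/view/10812319.htm')))  # True
print(bool(ENTRY_LINK.search('/item/Python')))        # True
print(bool(ENTRY_LINK.search('/fenlei/other')))       # False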
5. Data collection and output
# coding:utf-8
'''
Created on 2017-01-17

@author: micle
'''


class HtmlOutputer(object):
    def __init__(self):
        self.datas = []

    def output_html(self):
        # Open in text mode with an explicit encoding; the Windows default
        # is a local codepage, which mangles the Chinese summaries
        fout = open('output.html', 'w', encoding='utf-8')
        fout.write("<html>")
        fout.write('<head><meta charset="utf-8"></head>')
        fout.write("<body>")
        fout.write("<table>")
        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data['url'])
            fout.write("<td>%s</td>" % data['title'])
            fout.write("<td>%s</td>" % data['summary'])
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        fout.close()

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)