Scraping Qiushibaike


This is a simple crawler.
Open the Qiushibaike front page at https://www.qiushibaike.com/, scroll to the bottom, click "next page", and watch how the URL changes.

The URL for page N of the text section turns out to be https://www.qiushibaike.com/text/page/N, so we can write a function that generates the URLs:

def getUrls(self, pages):
    # Page i of the text section is at .../text/page/i.
    # range(1, pages + 1) so that all `pages` pages are generated
    # (the original range(1, pages) would stop one page short).
    url1 = 'https://www.qiushibaike.com/text/page/'
    for i in range(1, pages + 1):
        url = url1 + str(i)
        self.urls.append(url)
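To sanity-check the pattern, the same generator can be run as a standalone function (a hypothetical helper for illustration, not part of the final class):

def get_urls(pages):
    # Page i of the text section is at .../text/page/i
    return ['https://www.qiushibaike.com/text/page/' + str(i)
            for i in range(1, pages + 1)]

print(get_urls(3))
# ['https://www.qiushibaike.com/text/page/1',
#  'https://www.qiushibaike.com/text/page/2',
#  'https://www.qiushibaike.com/text/page/3']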

Pick one of the jokes on the page, right-click it, and choose "Inspect".


In the inspector you can see that all of the joke content sits inside 'div', attrs={'class':'col1'}.
Under that tag we then find every 'div', attrs={'class':'content'},
and finally loop over those tags and take the text inside each one's <span></span>.
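As a minimal, self-contained sketch of that two-step extraction (the HTML string here is a simplified stand-in for the real page markup, not the actual Qiushibaike source):

from bs4 import BeautifulSoup

# Simplified stand-in for the real markup, for illustration only
html = '''
<div class="col1">
  <div class="article"><div class="content"><span>Joke one</span></div></div>
  <div class="article"><div class="content"><span>Joke two</span></div></div>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
col1 = soup.find('div', attrs={'class': 'col1'})        # outer container
for tag in col1.find_all('div', attrs={'class': 'content'}):
    print(tag.find('span').getText())                   # prints: Joke one / Joke two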
The full code follows (only 5 pages are scraped here, for testing):

from bs4 import BeautifulSoup
import requests


class JokeItem(object):
    # Simple container for one joke
    author = None
    content = None


class GetJoke(object):
    def __init__(self):
        self.urlBase = 'https://www.qiushibaike.com/text/'
        self.urls = []
        self.item = []
        self.getUrls(5)
        self.spider(self.urls)

    def getHTMLText(self, url):
        # Fetch a page, fail loudly on HTTP errors, and guess the encoding
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text

    def getUrls(self, pages):
        # Generate the URL for each of the `pages` list pages
        url1 = 'https://www.qiushibaike.com/text/page/'
        for i in range(1, pages + 1):
            url = url1 + str(i)
            self.urls.append(url)

    def spider(self, urls):
        for url in urls:
            htmlContent = self.getHTMLText(url)
            soup = BeautifulSoup(htmlContent, 'lxml')
            # Outer container holding all jokes on the page
            anchorTag = soup.find('div', attrs={'class': 'col1'})
            # One div.content per joke
            tags = anchorTag.find_all('div', attrs={'class': 'content'})
            for tag in tags:
                item = JokeItem()
                # The <span> inside div.content holds the joke text
                # (the original assigned it to item.author by mistake)
                item.content = tag.find('span').getText()
                self.item.append(item)
                print(item.content)


if __name__ == '__main__':
    g = GetJoke()
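One caveat: getHTMLText calls raise_for_status(), and many sites reject the default requests User-Agent, so the script may fail with an HTTPError. If that happens, sending a browser-like header usually helps. A minimal sketch of a drop-in replacement for GetJoke.getHTMLText (the header value is an assumption; any common browser UA string works):

import requests

# Drop-in replacement for GetJoke.getHTMLText: same behavior, plus a
# browser-like User-Agent and a request timeout
def getHTMLText(self, url):
    headers = {'User-Agent': 'Mozilla/5.0'}  # assumed UA string; adjust as needed
    r = requests.get(url, headers=headers, timeout=10)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    return r.text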