Web-scraping study notes: scraping a Baidu Tieba thread

Source: Internet | Editor: 程序博客网 | Date: 2024/05/18 10:47

At the invitation of some friends from the Song-Ai club, I collected the poems posted in a Tieba thread and scraped them with a crawler.

Since the pages are static HTML, Selenium is not needed; `requests` plus BeautifulSoup is enough.

Here is the code:


# coding=utf-8
import requests
from bs4 import BeautifulSoup

# The thread spans two pages; "pn" is Tieba's page-number query parameter.
link = "https://tieba.baidu.com/p/4877675324"
link2 = "https://tieba.baidu.com/p/4877675324?pn=2"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

r = requests.get(link, headers=headers)
r2 = requests.get(link2, headers=headers)
soup = BeautifulSoup(r.text, "html.parser")
soup2 = BeautifulSoup(r2.text, "html.parser")

# Each post body sits in a div with this class (note the trailing space).
content_list = soup.find_all("div", class_="d_post_content j_d_post_content ")
content_list2 = soup2.find_all("div", class_="d_post_content j_d_post_content ")

for i in range(len(content_list)):
    content = content_list[i].text.strip()
    print("诗集" + str(i + 1) + ":")   # "诗集" = "poem collection"
    print(content)

# Continue numbering from where page 1 left off.
for j in range(len(content_list2)):
    content2 = content_list2[j].text.strip()
    print("诗集" + str(len(content_list) + j + 1) + ":")
    print(content2)
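Because the thread pages are static HTML, the same extraction can even be done with nothing but the standard library. Below is a minimal sketch (the helper names `page_url` and `PostExtractor` are my own, not part of the original script) that builds the `pn` pagination URLs and pulls the post-body `div`s out with `html.parser` instead of BeautifulSoup:

```python
from html.parser import HTMLParser


def page_url(base, page):
    """Build the URL for a given page of a Tieba thread via the `pn` parameter."""
    return base if page == 1 else f"{base}?pn={page}"


class PostExtractor(HTMLParser):
    """Collect the text of every <div> carrying the d_post_content class."""

    def __init__(self):
        super().__init__()
        self.posts = []
        self._depth = 0  # div-nesting depth while inside a matching post div

    def handle_starttag(self, tag, attrs):
        if self._depth:
            if tag == "div":          # track nested divs inside a post body
                self._depth += 1
            return
        if tag == "div" and "d_post_content" in (dict(attrs).get("class") or "").split():
            self._depth = 1
            self.posts.append("")     # start collecting a new post

    def handle_endtag(self, tag):
        if self._depth and tag == "div":
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.posts[-1] += data


if __name__ == "__main__":
    sample = '<div class="d_post_content j_d_post_content"> 静夜思 </div>'
    extractor = PostExtractor()
    extractor.feed(sample)
    print([t.strip() for t in extractor.posts])  # prints ['静夜思']
```

This drops the dependency on `bs4` at the cost of doing the class matching by hand; for anything beyond a one-off script, BeautifulSoup's `find_all` remains the more robust choice.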

