A simple crawler in Python 3.6.2 that scrapes Baidu Baike
Without further ado, straight to the code.
1. Main program
from baike_spider import url_manager, html_downloader, html_parser, html_outputer


class SpiderMain(object):
    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        count = 1
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print('craw %d : %s' % (count, new_url))
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 1000:  # stop after 1000 pages
                    break
                count = count + 1
            except Exception as e:  # a bare except would also swallow KeyboardInterrupt
                print('failed: %s' % e)
        self.outputer.output_html()


if __name__ == '__main__':
    root_url = 'https://baike.baidu.com/item/Python'
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
    print('end')
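The from baike_spider import ... line assumes the five modules below live in a package named baike_spider. A layout like the following would satisfy the imports; the file name spider_main.py for the main program is a guess, since the post does not name its files:

baike_spider/
    __init__.py
    spider_main.py      # the SpiderMain program above (hypothetical file name)
    url_manager.py
    html_downloader.py
    html_parser.py
    html_outputer.py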
2. URL manager module
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
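A quick interactive check (my addition, not part of the original code) shows the deduplication behaviour: a URL handed out once moves to old_urls and is never queued again.

manager = UrlManager()
manager.add_new_url('https://baike.baidu.com/item/Python')
manager.add_new_url('https://baike.baidu.com/item/Python')  # duplicate, ignored
print(manager.has_new_url())   # True
url = manager.get_new_url()    # moves the URL into old_urls
manager.add_new_url(url)       # already crawled, ignored
print(manager.has_new_url())   # False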
3. HTML downloader
import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        if response.getcode() != 200:
            print('access error')
            return None
        return response.read().decode('utf-8')
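Note that urllib.request.urlopen sends a default Python User-Agent, which some sites reject. If the download fails or returns an error page, a variant that sends a browser-like User-Agent header may help; this is a minimal sketch of that change, not part of the original post:

import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        # Some sites block the default Python user agent, so pretend to be a browser.
        req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        response = urllib.request.urlopen(req)
        if response.getcode() != 200:
            print('access error')
            return None
        return response.read().decode('utf-8')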
4. HTML parser
import re
import urllib.parse

from bs4 import BeautifulSoup


class HtmlParser(object):
    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data

    def _get_new_urls(self, page_url, soup):
        # Collect links to other Baike entries, which look like /item/...
        new_urls = set()
        links = soup.find_all('a', href=re.compile(r'/item/(.*)'))
        for link in links:
            new_url = link['href']
            # Resolve the relative link against the current page's URL
            new_full_url = urllib.parse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        res_data['url'] = page_url
        # <dd class="lemmaWgt-lemmaTitle-title"><h1>Python</h1>
        title_node = soup.find('dd', class_='lemmaWgt-lemmaTitle-title').find('h1')
        res_data['title'] = title_node.get_text()
        summary_node = soup.find('div', class_='lemma-summary')
        res_data['summary'] = summary_node.get_text()
        return res_data
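The two CSS classes (lemmaWgt-lemmaTitle-title and lemma-summary) match the Baike page structure at the time of writing and may change. A small offline test like the following (my addition, with a hand-written snippet mimicking that structure) verifies the selectors without hitting the network:

snippet = '''
<html><body>
<dd class="lemmaWgt-lemmaTitle-title"><h1>Python</h1></dd>
<div class="lemma-summary">Python is a programming language.</div>
<a href="/item/Guido">Guido</a>
</body></html>
'''
parser = HtmlParser()
new_urls, new_data = parser.parse('https://baike.baidu.com/item/Python', snippet)
print(new_urls)             # {'https://baike.baidu.com/item/Guido'}
print(new_data['title'])    # Python
print(new_data['summary'])  # Python is a programming language.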
5. Output of the crawled data
class HtmlOutputer(object):
    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        fout = open('output.html', 'w', encoding='utf-8')
        fout.write('<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">')
        fout.write('<html>')
        fout.write('<body>')
        fout.write('<table>')
        for data in self.datas:
            fout.write('<tr>')
            # fout.write('<td>%s</td>' % data['url'])
            fout.write('<td>%s</td>' % data['title'])
            fout.write('<td>%s</td>' % data['summary'])
            fout.write('</tr>')
        fout.write('</table>')
        fout.write('</body>')
        fout.write('</html>')
        fout.close()
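One robustness note, not from the original post: the title and summary are written into the HTML table verbatim, so any '<' or '&' in the scraped text would break the markup. Escaping with the standard library's html.escape avoids that, as this small sketch shows:

import html

data = {'title': 'C&C++', 'summary': 'a < b means "less than"'}
row = '<tr><td>%s</td><td>%s</td></tr>' % (
    html.escape(data['title']),
    html.escape(data['summary']),
)
print(row)  # <tr><td>C&amp;C++</td><td>a &lt; b means &quot;less than&quot;</td></tr>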
That's it. Once these five parts are written, you can run the spider and crawl some simple data.