Multithreaded web scraping and parsing in Python with BeautifulSoup
Source: Internet | Editor: 程序博客网 | Date: 2024/05/22 15:03
I've been doing some web page parsing work in Python lately and haven't updated the blog in a while, so here's a catch-up post. The code below uses:
1. Python multithreading
2. The BeautifulSoup HTML parsing library, which is far more powerful than the Python SGMLParser library I shared earlier; it's worth looking into if you're interested.
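To see what BeautifulSoup itself does before adding threads, here is a minimal parsing sketch. It uses the modern `bs4` package and Python 3 (the article's code targets the older `BeautifulSoup` 3 on Python 2), and the HTML string is a made-up example:

```python
from bs4 import BeautifulSoup  # modern successor to the BeautifulSoup 3 used below

# A tiny hypothetical page, standing in for a fetched document
html = "<html><head><title>Example</title></head><body><p>hi</p></body></html>"

soup = BeautifulSoup(html, "html.parser")  # stdlib parser; no extra dependency
titles = soup.find_all("title")            # bs4 spelling of findAll
print(titles)              # [<title>Example</title>]
print(soup.title.string)   # Example
```

`find_all` returns a list of matching tags, which is exactly what the `DatamineThread` below prints for each fetched page.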
#encoding=utf-8
#@description: spider that fetches pages and parses their content
import Queue
import threading
import urllib2
import time
from BeautifulSoup import BeautifulSoup

hosts = ["http://www.baidu.com", "http://www.163.com"]  # pages to fetch

queue = Queue.Queue()
out_queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded URL grab: fetch a page and hand the HTML to the parser queue."""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def run(self):
        while True:
            # grab a host from the input queue
            host = self.queue.get()
            proxy_support = urllib2.ProxyHandler({'http': 'http://xxx.xxx.xxx.xxxx'})  # proxy IP
            opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)
            urllib2.install_opener(opener)
            # fetch the page through the installed opener
            # (urllib2.urlopen, not urllib.urlopen, so the proxy is actually used)
            url = urllib2.urlopen(host)
            chunk = url.read()
            # hand the HTML chunk to the parsing queue
            self.out_queue.put(chunk)
            # signal that this queue item is done
            self.queue.task_done()

class DatamineThread(threading.Thread):
    """Threaded parser: pull HTML off the queue and extract the <title> tags."""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            # grab a chunk of HTML from the queue
            chunk = self.out_queue.get()
            # parse the chunk
            soup = BeautifulSoup(chunk)
            print soup.findAll(['title'])
            # signal that this queue item is done
            self.out_queue.task_done()

start = time.time()

def main():
    # spawn the fetcher thread and pass it both queue instances
    t = ThreadUrl(queue, out_queue)
    t.setDaemon(True)
    t.start()

    # populate the input queue with hosts
    for host in hosts:
        queue.put(host)

    # spawn the parser thread
    dt = DatamineThread(out_queue)
    dt.setDaemon(True)
    dt.start()

    # wait on both queues until everything has been processed
    queue.join()
    out_queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)
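The producer/consumer structure above (input queue of hosts, output queue of HTML, daemon worker threads, `task_done`/`join` for shutdown) can be sketched in Python 3 without network access; the fake "fetch" below just wraps the host name in a `<title>` tag so the flow is easy to follow:

```python
import queue
import threading

in_q = queue.Queue()    # hosts waiting to be "fetched"
out_q = queue.Queue()   # "HTML" waiting to be parsed
results = []            # collected by the consumer thread

def fetcher():
    while True:
        host = in_q.get()
        chunk = "<title>%s</title>" % host  # stand-in for url.read()
        out_q.put(chunk)
        in_q.task_done()    # mark this host as processed

def miner():
    while True:
        chunk = out_q.get()
        results.append(chunk)  # stand-in for BeautifulSoup parsing
        out_q.task_done()

# daemon threads die with the main thread, just like setDaemon(True) above
for target in (fetcher, miner):
    threading.Thread(target=target, daemon=True).start()

for host in ["a.example", "b.example"]:
    in_q.put(host)

in_q.join()    # all hosts fetched
out_q.join()   # all chunks parsed
print(sorted(results))  # ['<title>a.example</title>', '<title>b.example</title>']
```

The two `join()` calls are what let the main thread exit cleanly even though the workers loop forever: each queue's counter only reaches zero once every `put` has a matching `task_done`.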
To run the program above you need to install BeautifulSoup; the BeautifulSoup documentation is worth a read.
That's it for multithreaded web scraping and parsing in Python with BeautifulSoup. If you run into any problems, post them in the comments below and we can discuss them.