Python: Installing the Twisted and Scrapy Crawler Frameworks on Ubuntu 12.04

Scrapy is a fast, high-level screen scraping and web crawling framework written in Python, used to crawl websites and extract structured data from their pages. It has a wide range of applications, including data mining, monitoring, and automated testing. What makes Scrapy attractive is that it is a framework, so anyone can adapt it to their own needs. It also provides base classes for several kinds of spiders, such as BaseSpider and sitemap spiders, and the latest version adds support for crawling web 2.0 sites.

Preparation
Requirements
Python 2.5, 2.6, or 2.7 (3.x is not yet supported)
Twisted 2.5.0, 8.0 or above (Windows users: you'll need to install Zope.Interface and maybe pywin32 because of this Twisted bug)
w3lib
lxml or libxml2 (if using libxml2, version 2.6.28 or above is highly recommended)
simplejson (not required if using Python 2.6 or above)
pyopenssl (for HTTPS support; optional, but highly recommended)
---------------------------------------------
Installing Twisted
sudo apt-get install python-twisted python-libxml2 python-simplejson
After the installation completes, start the Python interpreter and verify that Twisted can be imported.
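For example, a minimal check (the version printed below is illustrative and will vary with the packaged release):

$ python
>>> import twisted
>>> print twisted.version
[Twisted, version 11.1.0]
>>> import libxml2
>>> import simplejson
If these imports raise no errors, the apt packages are in place.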

pyOpenSSL
wget http://pypi.python.org/packages/source/p/pyOpenSSL/pyOpenSSL-0.13.tar.gz#md5=767bca18a71178ca353dff9e10941929
tar -zxvf pyOpenSSL-0.13.tar.gz
cd pyOpenSSL-0.13
sudo python setup.py install

pycrypto
wget http://pypi.python.org/packages/source/p/pycrypto/pycrypto-2.5.tar.gz#md5=783e45d4a1a309e03ab378b00f97b291
tar -zxvf pycrypto-2.5.tar.gz
cd pycrypto-2.5
sudo python setup.py install

Test whether the installation succeeded:
$ python
>>> import Crypto
>>> import twisted.conch.ssh.transport
>>> print Crypto.PublicKey.RSA
<module 'Crypto.PublicKey.RSA' from '/usr/python/lib/python2.5/site-packages/Crypto/PublicKey/RSA.pyc'>
>>> import OpenSSL
>>> import twisted.internet.ssl
>>> twisted.internet.ssl
<module 'twisted.internet.ssl' from '/usr/python/lib/python2.5/site-packages/Twisted-10.1.0-py2.5-linux-i686.egg/twisted/internet/ssl.pyc'>
If you see output similar to the above, the pyOpenSSL module has been installed successfully; otherwise, recheck the installation steps above (note that the twisted.conch.ssh import in this test requires pycrypto).

w3lib
sudo easy_install -U w3lib
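A quick import test confirms w3lib is on the path; remove_tags is one of w3lib's HTML helpers, used here purely as an illustration:

$ python
>>> from w3lib.html import remove_tags
>>> remove_tags('<p>hello</p>')
u'hello'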

Scrapy
wget http://pypi.python.org/packages/source/S/Scrapy/Scrapy-0.14.3.tar.gz#md5=59f1225f7692f28fa0f78db3d34b3850
tar -zxvf Scrapy-0.14.3.tar.gz
cd Scrapy-0.14.3
sudo python setup.py install

Verifying the Scrapy installation
Having completed the installation and configuration above, Scrapy is now installed. We can verify it from the command line as follows:
$ scrapy
Scrapy 0.14.3 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  fetch         Fetch a URL using the Scrapy downloader
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

Use "scrapy <command> -h" to see more info about a command
The command listing also includes a fetch command, which downloads a specified web page. First, take a look at the fetch command's help text, shown below:
$ scrapy fetch --help
Usage
=====
  scrapy fetch [options] <url>

Fetch a URL using the Scrapy downloader and print its content to stdout. You
may want to use --nolog to disable logging

Options
=======
--help, -h              show this help message and exit
--spider=SPIDER         use this spider
--headers               print response HTTP headers instead of body

Global Options
--------------
--logfile=FILE          log file. if omitted stderr will be used
--loglevel=LEVEL, -L LEVEL
                        log level (default: DEBUG)
--nolog                 disable logging completely
--profile=FILE          write python cProfile stats to FILE
--lsprof=FILE           write lsprof profiling stats to FILE
--pidfile=FILE          write process ID to FILE
--set=NAME=VALUE, -s NAME=VALUE
                        set/override setting (may be repeated)
Following the command help, specify a URL and run it to fetch a page's data, as shown below:
scrapy fetch http://doc.scrapy.org/en/latest/intro/install.html > install.html
2012-04-28 14:34:35+0800 [scrapy] INFO: Scrapy 0.14.3 started (bot: scrapybot)
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled item pipelines:
2012-04-28 14:34:36+0800 [default] INFO: Spider opened
2012-04-28 14:34:36+0800 [default] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-04-28 14:34:37+0800 [default] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/intro/install.html> (referer: None)
2012-04-28 14:34:37+0800 [default] INFO: Closing spider (finished)
2012-04-28 14:34:37+0800 [default] INFO: Dumping spider stats:
{'downloader/request_bytes': 227,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 21732,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2012, 4, 28, 6, 34, 37, 567161),
'scheduler/memory_enqueued': 1,
'start_time': datetime.datetime(2012, 4, 28, 6, 34, 36, 433983)}
2012-04-28 14:34:37+0800 [default] INFO: Spider closed (finished)
2012-04-28 14:34:37+0800 [scrapy] INFO: Dumping global stats:
{'memusage/max': 26214400, 'memusage/startup': 26214400}
$ ls -l install.html
-rw-rw-r-- 1 zhuoguoqing zhuoguoqing 21462 2012-04-28 14:34 install.html
As you can see, we have successfully fetched a web page.
To go further with the Scrapy framework, follow the guide on the official Scrapy site.
The tutorial page is at http://doc.scrapy.org/en/latest/intro/tutorial.html
http://media.readthedocs.org/pdf/scrapy/0.14/scrapy.pdf
http://baike.baidu.com/view/6687996.htm
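To give a taste of what the tutorial covers, here is a minimal spider sketch in the style of the Scrapy 0.14 API used throughout this post; the spider name, domain, and XPath follow the official tutorial's dmoz example and are illustrative:

# dmoz_spider.py -- save under the spiders/ directory of a project
# created with: scrapy startproject <projectname>
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class DmozSpider(BaseSpider):
    name = "dmoz"                  # run with: scrapy crawl dmoz
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
    ]

    def parse(self, response):
        # Select every link text on the page via XPath and log it
        hxs = HtmlXPathSelector(response)
        for title in hxs.select('//a/text()').extract():
            self.log(title)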