PY Crawler: urllib2
This article targets Python 2.7.
In Python 3.x, urllib and urllib2 have been merged into a single urllib package, which is cleaner to learn; in 2.7, urllib and urllib2 each play their own role.
urllib2 can accept a Request object, which lets you set the headers for a URL request.
1. urlopen
The urlopen function can also take a Request object to specify exactly which URL to fetch. Calling urlopen on the requested URL returns a response object. This response behaves like a file object, so you can use .read() to operate on it.
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
2. Request parameters
url — a string containing a valid URL.
data — a string of extra data to send to the server; pass None if there is nothing to send. At present, data is only used for HTTP requests. When a request carries a data argument, the HTTP request becomes a POST instead of a GET. The data should be encoded in the standard application/x-www-form-urlencoded format. The urllib.urlencode() function takes a mapping or a sequence of 2-tuples and returns a string in this format. Put simply: when you want to send data to a URL (typically to a CGI script or some other web application), for HTTP this action is called POST. For example, when you fill in a form online, the browser POSTs the form contents; that data must be encoded in the standard format and then passed to the Request object as the data argument. Note that the encoding is done in the urllib module, not in urllib2.
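A minimal sketch of just the encoding step. The try/except import makes the snippet run under both Python 2 and Python 3 (where, as noted above, the function moved into the merged urllib package, under urllib.parse):

```python
try:
    from urllib import urlencode          # Python 2
except ImportError:
    from urllib.parse import urlencode    # Python 3

# urlencode accepts a mapping or a sequence of 2-tuples and returns a
# string in application/x-www-form-urlencoded format (spaces become '+').
pairs = [('name', 'Michael Foord'), ('location', 'Northampton')]
encoded = urlencode(pairs)
print(encoded)  # name=Michael+Foord&location=Northampton
```

Passing a sequence of 2-tuples instead of a dict keeps the parameter order deterministic.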
headers — a dictionary. The header dict can be passed directly as an argument when building the Request, or each key and value can be added by calling the add_header() method. The User-Agent header, which identifies the browser, is frequently spoofed, because some HTTP services only allow requests that come from common browsers rather than from scripts, or return different versions to different browsers. For example, Mozilla Firefox identifies itself as "Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11". By default, urllib2 identifies itself as Python-urllib/x.y, where x and y are the major and minor Python version numbers; in Python 2.6, urllib2's default user agent string is "Python-urllib/2.6".
import urllib
import urllib2
url = 'http://www.someserver.com/cgi-bin/register.cgi'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {'name' :'Michael Foord','location' :'Northampton','language' :'Python' }
headers = { 'User-Agent' : user_agent }
data = urllib.urlencode(values)
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
the_page = response.read()
import urllib2
# uri is the target URL string
request = urllib2.Request(uri)
request.add_header('User-Agent', 'fake-client')
response = urllib2.urlopen(request)
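One detail worth knowing about add_header(): urllib2 normalises header names with str.capitalize() when storing them, so lookups via get_header() must use the normalised form. A small sketch (the import fallback covers Python 3, where Request lives in urllib.request):

```python
try:
    from urllib2 import Request           # Python 2
except ImportError:
    from urllib.request import Request    # Python 3

req = Request('http://www.example.com/', headers={'User-Agent': 'fake-client'})
req.add_header('Referer', 'http://www.example.com/')

# Header names are stored capitalised ('User-agent', 'Referer'),
# so get_header must be called with that normalised form.
print(req.get_header('User-agent'))   # fake-client
print(req.has_header('Referer'))      # True
```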
import urllib2
req = urllib2.Request('http://python.org')
response = urllib2.urlopen(req)
the_page = response.read()
print the_page
3.Headers
This section looks at one particular HTTP header and shows how to add headers to your HTTP request. Some sites (e.g. Google) dislike being visited by programs, or send different versions to different browsers. By default, urllib2 identifies itself as Python-urllib/x.y (where x.y is the Python version number, e.g. 2.7); this may confuse the site, or simply not work. The way a browser identifies itself is through the User-Agent header. When you create a Request object, you can pass in a dictionary of headers. The following example makes the same request as above, but identifies itself as a version of Internet Explorer.

import urllib
import urllib2

url = 'http://www.someserver.com/cgi-bin/register.cgi'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {}
values['name'] = 'Michael Foord'
values['location'] = 'Northampton'
values['language'] = 'Python'
headers = { 'User-Agent' : user_agent }
data = urllib.urlencode(values)
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
the_page = response.read()
print the_page

The response object also has two other commonly used methods: info() and geturl().
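To see read(), geturl() and info() in action without depending on an external site, the sketch below spins up a throwaway local HTTP server (the Handler class and the 'hello' payload are ours; the compat imports cover both Python 2 and 3):

```python
import threading

try:                                             # Python 2
    from urllib2 import urlopen
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
except ImportError:                              # Python 3
    from urllib.request import urlopen
    from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = b'hello'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)
    def log_message(self, fmt, *args):           # silence request logging
        pass

server = HTTPServer(('127.0.0.1', 0), Handler)   # port 0: pick any free port
thread = threading.Thread(target=server.serve_forever)
thread.daemon = True
thread.start()

url = 'http://127.0.0.1:%d/' % server.server_port
response = urlopen(url)
body = response.read()
print(body)                                 # the document, read like a file
print(response.geturl())                    # the URL actually fetched
print(response.info().get('Content-Type')) # access to the response headers
server.shutdown()
```

geturl() is useful after redirects, when the URL actually fetched may differ from the one requested.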
4.Handling Exceptions
URLError is raised when urlopen cannot handle a response (although, as usual with Python APIs, built-in exceptions such as ValueError and TypeError can also be raised). HTTPError is the subclass of URLError raised in the specific case of HTTP URLs.

URLError
Typically, URLError is raised because there is no network connection, no route to the specified server, or the specified server does not exist. In this case, the exception raised has a reason attribute: a tuple containing an error code and a text error message.

import urllib2
req = urllib2.Request('http://www.pretend_server.org')
try:
    urllib2.urlopen(req)
except urllib2.URLError, e:
    print e.reason
# [Errno 11004] getaddrinfo failed

HTTPError
Every HTTP response from the server contains a numeric status code. Sometimes that status code indicates that the server was unable to fulfil the request. The default handlers will deal with some of these responses for you (for example, if the response is a "redirect" asking the client to fetch the document from a different URL, urllib2 handles that itself); for those it cannot handle, urlopen raises an HTTPError. Typical errors include 404 (page not found), 403 (request forbidden) and 401 (authentication required).

An HTTPError instance carries an integer 'code' attribute, corresponding to the error code sent by the server. Because the default handlers deal with redirects (codes in the 300 range), and codes in the 100-299 range indicate success, you will usually only see error codes in the 400-599 range.

BaseHTTPServer.BaseHTTPRequestHandler.responses is a useful dictionary of response codes, showing all the response codes used by RFC 2616:

# Table mapping response codes to messages; entries have the
# form {code: (shortmessage, longmessage)}.
responses = {
    100: ('Continue', 'Request received, please continue'),
    101: ('Switching Protocols',
          'Switching to new protocol; obey Upgrade header'),
    200: ('OK', 'Request fulfilled, document follows'),
    201: ('Created', 'Document created, URL follows'),
    202: ('Accepted', 'Request accepted, processing continues off-line'),
    203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
    204: ('No Content', 'Request fulfilled, nothing follows'),
    205: ('Reset Content', 'Clear input form for further input.'),
    206: ('Partial Content', 'Partial content follows.'),
    300: ('Multiple Choices', 'Object has several resources -- see URI list'),
    301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
    302: ('Found', 'Object moved temporarily -- see URI list'),
    303: ('See Other', 'Object moved -- see Method and URL list'),
    304: ('Not Modified', 'Document has not changed since given time'),
    305: ('Use Proxy',
          'You must use proxy specified in Location to access this resource.'),
    307: ('Temporary Redirect', 'Object moved temporarily -- see URI list'),
    400: ('Bad Request', 'Bad request syntax or unsupported method'),
    401: ('Unauthorized', 'No permission -- see authorization schemes'),
    402: ('Payment Required', 'No payment -- see charging schemes'),
    403: ('Forbidden', 'Request forbidden -- authorization will not help'),
    404: ('Not Found', 'Nothing matches the given URI'),
    405: ('Method Not Allowed', 'Specified method is invalid for this server.'),
    406: ('Not Acceptable', 'URI not available in preferred format.'),
    407: ('Proxy Authentication Required',
          'You must authenticate with this proxy before proceeding.'),
    408: ('Request Timeout', 'Request timed out; try again later.'),
    409: ('Conflict', 'Request conflict.'),
    410: ('Gone', 'URI no longer exists and has been permanently removed.'),
    411: ('Length Required', 'Client must specify Content-Length.'),
    412: ('Precondition Failed', 'Precondition in headers is false.'),
    413: ('Request Entity Too Large', 'Entity is too large.'),
    414: ('Request-URI Too Long', 'URI is too long.'),
    415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
    416: ('Requested Range Not Satisfiable', 'Cannot satisfy request range.'),
    417: ('Expectation Failed', 'Expect condition could not be satisfied.'),
    500: ('Internal Server Error', 'Server got itself in trouble'),
    501: ('Not Implemented', 'Server does not support this operation'),
    502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
    503: ('Service Unavailable',
          'The server cannot process the request due to a high load'),
    504: ('Gateway Timeout',
          'The gateway server did not receive a timely response'),
    505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
}

When an error is raised, the server responds by returning an HTTP error code and an error page. You can use the HTTPError instance as a response object for the page returned: it has a code attribute, and also read, geturl and info methods.

import urllib2
req = urllib2.Request('http://www.python.org/fish.html')
try:
    urllib2.urlopen(req)
except urllib2.URLError, e:
    print e.code
    print e.read()
    print e.geturl()
    print e.info()
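A sketch tying the two pieces together: HTTPError must be caught before URLError (it is the subclass), and the responses table gives a human-readable description for any code. The describe and fetch helper names are ours; the compat imports cover Python 3, where these names live in urllib.request, urllib.error and http.server:

```python
try:                                              # Python 2
    from urllib2 import urlopen, URLError, HTTPError
    from BaseHTTPServer import BaseHTTPRequestHandler
except ImportError:                               # Python 3
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError
    from http.server import BaseHTTPRequestHandler

def describe(code):
    # responses maps code -> (shortmessage, longmessage), per RFC 2616
    return BaseHTTPRequestHandler.responses[code]

def fetch(url):
    try:
        return urlopen(url).read()
    except HTTPError as e:       # subclass first: an HTTPError is also a URLError
        print('server could not fulfil the request: %d %s'
              % (e.code, describe(e.code)[0]))
    except URLError as e:
        print('failed to reach the server: %s' % e.reason)

print(describe(404))
print(issubclass(HTTPError, URLError))   # True
```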