[Python 3.x] Web Crawlers (3): A Summary of Ways to Fetch Resources with urllib.request
Reprinted from: http://blog.csdn.net/reymix/article/details/46869529
In Python 3.x, use urllib.request to fetch network resources.
import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
buff = response.read()
html = buff.decode('utf8')
response.close()
print(html)
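The same fetch can be written with a `with` statement so the connection is closed automatically, with no explicit `close()` call; a minimal sketch of the same request (the URL is the one from the example above):

```python
import urllib.request

# urlopen() returns a response object that works as a context manager;
# the connection is closed automatically when the block exits.
with urllib.request.urlopen('http://www.baidu.com') as response:
    html = response.read().decode('utf8')
print(html)
```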
Using a Request object:
import urllib.request

req = urllib.request.Request('http://www.lovejing.com')
response = urllib.request.urlopen(req)
buff = response.read()
html = buff.decode('utf8')
response.close()
print(html)
This approach also works for other URL schemes, such as FTP:
import urllib.request

req = urllib.request.Request('ftp://ftp.lovejing.com')
response = urllib.request.urlopen(req)
buff = response.read()
html = buff.decode('utf8')
response.close()
print(html)
Sending a POST request:
import urllib.parse
import urllib.request

url = 'http://www.somebody.com/cgi-bin/register.cgi'
values = {'name': 'Michael Foord',
          'location': 'Northampton',
          'language': 'Python'}
# In Python 3 the POST body must be bytes, so encode the urlencoded string.
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data)
response = urllib.request.urlopen(req)
the_page = response.read()
Sending a GET request:
import urllib.request
import urllib.parse

data = {}
data['name'] = 'Somebody Here'
data['location'] = 'Northampton'
data['language'] = 'Python'
url_values = urllib.parse.urlencode(data)
print(url_values)  # e.g. name=Somebody+Here&location=Northampton&language=Python
url = 'http://www.example.com/example.cgi'
full_url = url + '?' + url_values
response = urllib.request.urlopen(full_url)
Adding headers:
import urllib.parse
import urllib.request

url = 'http://www.somebody.com/cgi-bin/register.cgi'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {'name': 'Michael Foord',
          'location': 'Northampton',
          'language': 'Python'}
headers = {'User-Agent': user_agent}
# As with the POST example, the body must be bytes in Python 3.
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(req)
the_page = response.read()
Error handling:
import urllib.request
import urllib.error

req = urllib.request.Request('http://www.pretend_server.org')
try:
    urllib.request.urlopen(req)
except urllib.error.URLError as e:
    print(e.reason)
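URLError covers connection-level failures (DNS lookup errors, refused connections). When the server does respond but with an error status, urlopen raises the subclass HTTPError, which carries the numeric status code. A sketch of handling both, reusing the illustrative URL from above:

```python
import urllib.request
import urllib.error

req = urllib.request.Request('http://www.pretend_server.org')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    # The server answered, but with an error status such as 404 or 500.
    print(e.code, e.reason)
except urllib.error.URLError as e:
    # The request never reached a server at all.
    print(e.reason)
```

HTTPError must be caught first, since it is a subclass of URLError.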
The response codes that may be returned:
# Table mapping response codes to messages; entries have the
# form {code: (shortmessage, longmessage)}.
responses = {
    100: ('Continue', 'Request received, please continue'),
    101: ('Switching Protocols', 'Switching to new protocol; obey Upgrade header'),
    200: ('OK', 'Request fulfilled, document follows'),
    201: ('Created', 'Document created, URL follows'),
    202: ('Accepted', 'Request accepted, processing continues off-line'),
    203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
    204: ('No Content', 'Request fulfilled, nothing follows'),
    205: ('Reset Content', 'Clear input form for further input.'),
    206: ('Partial Content', 'Partial content follows.'),
    300: ('Multiple Choices', 'Object has several resources -- see URI list'),
    301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
    302: ('Found', 'Object moved temporarily -- see URI list'),
    303: ('See Other', 'Object moved -- see Method and URL list'),
    304: ('Not Modified', 'Document has not changed since given time'),
    305: ('Use Proxy', 'You must use proxy specified in Location to access this resource.'),
    307: ('Temporary Redirect', 'Object moved temporarily -- see URI list'),
    400: ('Bad Request', 'Bad request syntax or unsupported method'),
    401: ('Unauthorized', 'No permission -- see authorization schemes'),
    402: ('Payment Required', 'No payment -- see charging schemes'),
    403: ('Forbidden', 'Request forbidden -- authorization will not help'),
    404: ('Not Found', 'Nothing matches the given URI'),
    405: ('Method Not Allowed', 'Specified method is invalid for this server.'),
    406: ('Not Acceptable', 'URI not available in preferred format.'),
    407: ('Proxy Authentication Required', 'You must authenticate with this proxy before proceeding.'),
    408: ('Request Timeout', 'Request timed out; try again later.'),
    409: ('Conflict', 'Request conflict.'),
    410: ('Gone', 'URI no longer exists and has been permanently removed.'),
    411: ('Length Required', 'Client must specify Content-Length.'),
    412: ('Precondition Failed', 'Precondition in headers is false.'),
    413: ('Request Entity Too Large', 'Entity is too large.'),
    414: ('Request-URI Too Long', 'URI is too long.'),
    415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
    416: ('Requested Range Not Satisfiable', 'Cannot satisfy request range.'),
    417: ('Expectation Failed', 'Expect condition could not be satisfied.'),
    500: ('Internal Server Error', 'Server got itself in trouble'),
    501: ('Not Implemented', 'Server does not support this operation'),
    502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
    503: ('Service Unavailable', 'The server cannot process the request due to a high load'),
    504: ('Gateway Timeout', 'The gateway server did not receive a timely response'),
    505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
}
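There is no need to maintain such a table by hand: the standard library ships a code-to-short-message mapping as http.client.responses, which can be used to label the `code` of a caught HTTPError:

```python
import http.client

# http.client.responses maps each HTTP status code to its short message.
print(http.client.responses[404])  # Not Found
print(http.client.responses[503])  # Service Unavailable
```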