An alternative way to use the requests library (stream)

Source: Internet  Editor: 程序博客网  Date: 2024/05/02 01:17

Background: a colleague asked me to help crawl a batch of URLs and extract the text of the <title> tag from each page (setting aside the issue of getting banned by the target sites). Not a particularly tricky task. But once the script was actually running, I noticed the bandwidth usage was surprisingly high, exceeding 10 Mb/s.

Some digging revealed the cause: many of the URLs were actually download links that trigger a file download, and their responses contain no HTML <title> tag at all. The fix is then clear: fetch the headers first, read the Content-Type, and check whether it is text/html; if not, there is no need to read the response body at all.
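The Content-Type check above can be sketched as a small helper (the function name is mine, not from the original script). Note that Content-Type values often carry a charset parameter such as text/html; charset=utf-8, so comparing only the media type is safer than an exact string match:

```python
def is_html(content_type):
    """Return True if a Content-Type header value denotes an HTML body.

    Handles values like 'text/html; charset=utf-8' by comparing only
    the media type before any parameters.
    """
    if not content_type:
        return False
    media_type = content_type.split(';', 1)[0].strip().lower()
    return media_type == 'text/html'
```

For example, is_html('text/html; charset=UTF-8') returns True, while is_html('application/octet-stream') returns False.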

Looking up the relevant part of the requests documentation:

By default, when you make a request, the body of the response is
downloaded immediately. You can override this behaviour and defer
downloading the response body until you access the Response.content
attribute with the stream parameter:

tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, stream=True)

At this point only the response headers have been downloaded and the
connection remains open, hence allowing us to make content retrieval
conditional:

if int(r.headers['content-length']) < TOO_LONG:
    content = r.content
    ...

That is, only the headers have been downloaded; the body has not been fetched yet, which avoids unnecessary bandwidth. The full body is only downloaded once you access r.content.

You can further control the workflow by use of the
Response.iter_content() and Response.iter_lines() methods.
Alternatively, you can read the undecoded body from the underlying
urllib3 urllib3.HTTPResponse at Response.raw.

In fact, you can also use Response.iter_content(), Response.iter_lines(), or Response.raw to control exactly how much data you read.
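As a sketch of capping the read with iter_content() — the helper name and the FakeResponse class used for the demonstration are mine, not part of requests:

```python
def read_capped(resp, max_bytes, chunk_size=1024):
    """Read at most max_bytes from a streamed response body, then stop.

    Works with any object exposing requests' iter_content() interface;
    iteration stops as soon as the cap is reached.
    """
    chunks = []
    total = 0
    for chunk in resp.iter_content(chunk_size=chunk_size):
        chunks.append(chunk)
        total += len(chunk)
        if total >= max_bytes:
            break
    return b''.join(chunks)[:max_bytes]


class FakeResponse(object):
    """Stand-in for requests.Response, just for the demo below."""
    def __init__(self, body):
        self.body = body

    def iter_content(self, chunk_size=1024):
        for i in range(0, len(self.body), chunk_size):
            yield self.body[i:i + chunk_size]


capped = read_capped(FakeResponse(b'x' * 10000), max_bytes=2048, chunk_size=512)
# capped holds only the first 2048 bytes of the 10000-byte body
```

With a real streamed response, the same function would stop consuming the connection once the cap is hit.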

One last caveat: with stream=True you are responsible for closing the Response yourself, either by calling close() explicitly or by using the response as a context manager.

OK, here is my improved program:

import logging
import threading

import redis
import requests
from lxml.html import fromstring

r = redis.Redis(host='127.0.0.1', port=6379, db=10)
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    filename='all.log',
                    filemode='w')


def extract(url):
    logger = logging.getLogger()
    try:
        res = requests.get(url, stream=True, timeout=0.5)
        ctype = res.headers['Content-Type'].lower()
        # Only the headers have been fetched so far; bail out early
        # on non-HTML responses without ever downloading the body.
        if ctype.find('text/html') == -1:
            res.close()
            return None
        doc = fromstring(res.content)  # accessing .content downloads the body
        res.close()
        item_list = doc.xpath('//head/title')
        if item_list:
            title = item_list[0].text_content()
            title = unicode(title)
            logger.info('title = %s', title)
            return title
    except Exception:
        return None
    return None

Testing confirms that the bandwidth usage drops dramatically. Kudos to requests — the design is remarkably thoughtful, letting you use it at a high level while still being able to drop down to a low level for fine-grained control over connections.

By default, requests blocks on each URL access. Non-blocking access can be achieved with:
1) grequests
2) requests-futures
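grequests and requests-futures are third-party packages. As a rough stdlib-only sketch of the same idea (the helper name is my own), blocking calls can also be fanned out over a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_all(urls, fetch, max_workers=8):
    """Apply a blocking fetch function to each URL concurrently.

    Results come back in the same order as the input URLs. In real use,
    fetch would wrap requests.get(url, stream=True) plus the Content-Type
    check; here it can be any callable taking a URL.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```

For instance, fetch_all(url_list, extract) would run the earlier extract() function concurrently across the URL batch.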

References
1. http://docs.python-requests.org/en/master/user/advanced/#body-content-workflow
2. http://docs.python-requests.org/en/master/user/advanced/#blocking-or-non-blocking
