CherryPy and Tornado Performance Analysis
Source: Internet  Editor: 程序博客网  Time: 2024/06/05 16:09
How does CherryPy work? It handles requests well compared with Tornado when concurrency is low.
Tornado is single-threaded; its hallmark is fast asynchronous service in a small, efficient package.
web.py uses CherryPy as its HTTP server.
I ran three test cases using siege to send requests to the server (-c is the number of concurrent users; -t is the test duration). The code is below the test results.
1. web.py (CherryPy)
siege ip -c20 -t100s: the server handled 2747 requests
siege ip -c200 -t30s: the server handled 1361 requests
siege ip -c500 -t30s: the server handled 170 requests
2. tornado synchronous
siege ip -c20 -t100s: the server handled 600 requests
siege ip -c200 -t30s: the server handled 200 requests
siege ip -c500 -t30s: the server handled 116 requests
3. tornado asynchronous
siege ip -c20 -t100s: the server handled 3022 requests
siege ip -c200 -t30s: the server handled 2259 requests
siege ip -c500 -t30s: the server handled 471 requests
performance analysis:
tornado synchronous < web.py (cherrypy) < tornado asynchronous
Question 1:
I know that an asynchronous architecture can improve a web server's performance dramatically.
I'm curious about the difference between Tornado's asynchronous architecture and web.py (CherryPy).
I think Tornado's synchronous mode handles requests one by one, but how does CherryPy work? Does it use multiple threads? I didn't see a large increase in memory, yet CherryPy seems to handle multiple requests concurrently. How does it avoid being blocked by a single slow request?
Question 2:
Can I improve the performance of Tornado's synchronous mode without using asynchronous techniques? I think Tornado can do better.
Web.py code:
import web
import tornado.httpclient

urls = ('/(.*)', 'hello')
app = web.application(urls, globals())

class hello:
    def GET(self, name):
        # Blocking fetch: the thread serving this request waits here,
        # but CherryPy's other worker threads keep serving.
        client = tornado.httpclient.HTTPClient()
        response = client.fetch("http://www.baidu.com/")
        return response.body

if __name__ == "__main__":
    app.run()
Tornado synchronous:
import tornado.httpserver  # missing from the original imports, but used below
import tornado.httpclient
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options

define("port", default=8000, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        # Synchronous fetch: the single IOLoop thread blocks here,
        # so only one request can be serviced at a time.
        client = tornado.httpclient.HTTPClient()
        response = client.fetch("http://www.baidu.com/")
        self.write(response.body)

if __name__ == '__main__':
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r'/', IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
Tornado asynchronous:
import tornado.httpserver
import tornado.httpclient
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options

define("port", default=8001, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # Non-blocking fetch: the IOLoop stays free to serve other
        # requests; on_response runs when the reply arrives.
        client = tornado.httpclient.AsyncHTTPClient()
        client.fetch("http://www.baidu.com/", callback=self.on_response)

    def on_response(self, response):
        self.write(response.body)
        self.finish()

if __name__ == '__main__':
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r'/', IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
1 Answer
To answer question 1...
Tornado is single threaded. If you block the main thread, as you do in your synchronous example, then that single thread cannot do anything until the blocking call returns. This limits the synchronous example to one request at a time.
I am not particularly familiar with web.py, but looking at the source for its HTTP server it appears to be using a threading mixin, which suggests that it is not limited to handling one request at a time. When the first request comes in, it is handled by a single thread. That thread will block until the HTTP client call returns, but other threads are free to handle further incoming requests. This allows for more requests to be processed at once.
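The effect of thread-per-request handling can be sketched with the standard library alone. This is not web.py's or CherryPy's actual server code; the 0.1-second sleep is a stand-in for the blocking client.fetch() call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_handler(request_id):
    # Stand-in for a handler that blocks on an HTTP client call.
    time.sleep(0.1)
    return request_id

# Tornado-synchronous style: one thread handles requests one by one.
start = time.monotonic()
sequential = [blocking_handler(i) for i in range(10)]
sequential_time = time.monotonic() - start

# CherryPy style: a pool of threads, each free to block independently,
# so the waits overlap and wall-clock time shrinks.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:
    threaded = list(pool.map(blocking_handler, range(10)))
threaded_time = time.monotonic() - start
```

Ten overlapping 0.1-second waits finish in roughly 0.1 seconds instead of 1 second, with little extra memory, which matches what the questioner observed.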
I suspect that if you emulated this with Tornado, e.g. by handing off HTTP client requests to a thread pool, you'd see similar throughput.
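That hand-off can be sketched with only the standard library's asyncio event loop (Tornado's IOLoop wraps asyncio in modern releases). The URL and the 0.1-second sleep are stand-ins for a real blocking fetch:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_fetch(url):
    # Stand-in for tornado.httpclient.HTTPClient().fetch(): it blocks
    # the calling thread, but that thread comes from the pool.
    time.sleep(0.1)
    return "body of " + url

async def handle(loop, pool, url):
    # Hand the blocking call to the pool; the event loop thread stays
    # free to accept and dispatch other requests meanwhile.
    return await loop.run_in_executor(pool, blocking_fetch, url)

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=10) as pool:
        urls = ["http://example.invalid/%d" % i for i in range(10)]
        return await asyncio.gather(*(handle(loop, pool, u) for u in urls))

start = time.monotonic()
bodies = asyncio.run(main())
elapsed = time.monotonic() - start
```

Ten simulated fetches complete in roughly one fetch's worth of wall time, which is the throughput gain the answer predicts for a thread-pool version of the synchronous handler.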
Comment: Your results are defined by the performance of fetch('http://www.baidu.com/') much more than by the web frameworks you use. Try comparing things when you serve static content, or, at least, the same locally-generated content. – 9000 Nov 23 '12 at 2:53