The relationship between CONCURRENT_REQUESTS and DOWNLOAD_DELAY in Scrapy


First, build a small project to explore how CONCURRENT_REQUESTS and DOWNLOAD_DELAY interact.

A rough sketch of the code:

jianshuspider.py:

import scrapy
from scrapy.selector import Selector

from JianshuSpider_author_1.items import JianshuspiderAuthor1Item


class JianshuSpider(scrapy.Spider):
    name = "jianshu"

    def start_requests(self):
        urls = ['http://www.jianshu.com/users/958f740aed52/followers']
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse_author)

    def parse_author(self, response):
        item = JianshuspiderAuthor1Item()
        selector = Selector(response)

        fans_href = selector.xpath("//div[@class='info']/a/@href").extract()
        for fan_href in fans_href:
            fan_href = 'http://www.jianshu.com/users/' + fan_href.split('/')[-1] + '/followers'
            # fan_href = 'http://www.google.com.hk/' + fan_href.split('/')[-1] + '/followers'  # enable this line when a timeout is needed
            yield scrapy.Request(fan_href, callback=self.parse_author)

        item['author'] = selector.xpath("//div[@class='title']/a/text()").extract_first()
        yield item


requestlimit.py (downloader middleware):

class RequestLimitMiddleware(object):
    count = 0

    def process_request(self, request, spider):
        self.count += 1
        print(self.count)

The code in these two files is the core of the experiment.
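For the counting middleware to run at all, it has to be registered in settings.py via DOWNLOADER_MIDDLEWARES. A sketch, assuming the module lives under the project package shown above (the exact dotted path is a guess based on the file names; adjust it to your layout):

```python
# settings.py -- register the counting middleware so Scrapy calls its
# process_request() for every outgoing request. The dotted module path
# below is an assumption based on the project name; the number 543 is
# the middleware's ordering priority (lower runs earlier).
DOWNLOADER_MIDDLEWARES = {
    'JianshuSpider_author_1.middlewares.requestlimit.RequestLimitMiddleware': 543,
}
```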


Test results:

settings.py

CONCURRENT_REQUESTS = 8
DOWNLOAD_DELAY = 0

In jianshuspider.py, comment out the recursive Jianshu link and enable the Google link (which times out).

Effect: 8 requests go out at once and time out together; then another 8 go out and time out, and so on in a cycle.
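Each timeout round in this experiment is slow because Scrapy's default per-request timeout (DOWNLOAD_TIMEOUT) is 180 seconds. Lowering it makes the "8 at once, all time out" cycle visibly faster. A sketch of the adjusted settings (the 10-second value is illustrative):

```python
# settings.py -- same concurrency setup as above, but with a shorter
# per-request timeout so each round of the experiment completes faster.
# DOWNLOAD_TIMEOUT defaults to 180 seconds; 10 is an illustrative value.
CONCURRENT_REQUESTS = 8
DOWNLOAD_DELAY = 0
DOWNLOAD_TIMEOUT = 10  # seconds before a hanging request is abandoned
```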


settings.py

CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 5

In jianshuspider.py, enable the recursive Jianshu link and comment out the Google link.

Effect: roughly one request every 5 seconds.


settings.py

CONCURRENT_REQUESTS = 2
DOWNLOAD_DELAY = 5

In jianshuspider.py, enable the recursive Jianshu link and comment out the Google link.

Effect: 2 requests (A, B) go out at the start, but after 5 seconds only one (A) has been processed and a new request (C) is dispatched; after another 5 seconds another one (B) is processed and one more (D) queues up, and so on.
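The pattern above can be sketched with a toy model of a downloader slot: a request is dispatched only when a concurrency slot is free and at least `delay` seconds have passed since the previous dispatch. This is a deliberate simplification of Scrapy's actual downloader (which also randomizes the delay by default); the function and its parameters are illustrative, not Scrapy API:

```python
# Toy model (not Scrapy code): simulate when requests get dispatched
# given a concurrency cap and a fixed inter-dispatch delay.
def dispatch_times(n_requests, concurrency, delay, download_time):
    in_flight = []        # finish times of requests currently downloading
    last_dispatch = None  # simulated clock time of the previous dispatch
    times = []            # dispatch time of each request
    t = 0.0
    while len(times) < n_requests:
        # retire downloads that have finished by time t
        in_flight = [f for f in in_flight if f > t]
        # dispatch as many requests as both gates currently allow
        while (len(times) < n_requests
               and len(in_flight) < concurrency
               and (last_dispatch is None or t - last_dispatch >= delay)):
            times.append(t)
            in_flight.append(t + download_time)
            last_dispatch = t
        t += 0.5  # advance the simulated clock in half-second steps

    return times

# With delay=5 and concurrency=2, dispatches are spaced 5 s apart,
# matching the "one request processed every 5 seconds" observation:
print(dispatch_times(4, 2, 5, 1))  # [0.0, 5.0, 10.0, 15.0]
# With delay=0, pairs of requests go out back to back:
print(dispatch_times(4, 2, 0, 1))  # [0.0, 0.0, 1.0, 1.0]
```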


Summary:

DOWNLOAD_DELAY constrains CONCURRENT_REQUESTS: a nonzero delay spaces dispatches out, so the configured concurrency never fully materializes.


Thoughts:

With CONCURRENT_REQUESTS set but no DOWNLOAD_DELAY, the server receives a large burst of simultaneous requests.

With both CONCURRENT_REQUESTS and DOWNLOAD_DELAY set, the requests are spread out in time, so the server never sees a large simultaneous burst.
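Two related Scrapy settings are worth knowing when smoothing out bursts like this: RANDOMIZE_DOWNLOAD_DELAY (enabled by default) makes Scrapy wait between 0.5x and 1.5x DOWNLOAD_DELAY rather than a fixed interval, and CONCURRENT_REQUESTS_PER_DOMAIN caps concurrency per site rather than globally. A polite-crawl sketch (the specific values are illustrative):

```python
# settings.py -- a polite-crawl configuration (values are illustrative).
CONCURRENT_REQUESTS = 16            # global cap across all domains
CONCURRENT_REQUESTS_PER_DOMAIN = 2  # per-site cap, the one servers feel
DOWNLOAD_DELAY = 3                  # base delay between dispatches
RANDOMIZE_DOWNLOAD_DELAY = True     # default: actual wait is 1.5-4.5 s
```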
