Scrapy Middleware
- Scrapy's default middleware template
- Referer middleware
- RandomUA middleware
- Proxy middleware
Scrapy's default middleware template
```python
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class ItcastSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
```
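The generated template above is a *spider* middleware. The three custom classes that follow are *downloader* middlewares: their `process_request()` hook runs for every request before it reaches the downloader. For contrast, a minimal downloader middleware skeleton (hook names as documented by Scrapy; the class name is illustrative):

```python
class ExampleDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # Return None to continue processing this request normally,
        # or return a Response/Request to short-circuit the chain.
        return None

    def process_response(self, request, response, spider):
        # Must return a Response or Request object, or raise IgnoreRequest.
        return response

    def process_exception(self, request, exception, spider):
        # Called when the downloader or a process_request() raises an exception.
        pass
```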
Referer middleware
```python
class Referer(object):
    def process_request(self, request, spider):
        '''Copy the referer value from request.meta into the request headers.
        :param request: the outgoing request
        :param spider: the spider that issued the request
        :return: None
        '''
        referer = request.meta.get('referer', None)
        if referer:
            request.headers['referer'] = referer
```
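A spider opts in by putting the desired value in `request.meta` when it builds the request. A minimal usage sketch (`ExampleSpider` and `parse_detail` are illustrative names, not from the original):

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            # The Referer middleware will copy meta['referer'] into the header
            yield response.follow(href, callback=self.parse_detail,
                                  meta={'referer': response.url})

    def parse_detail(self, response):
        pass
```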
RandomUserAgent
```python
import random

from itcast.settings import PY3_UA_LIST  # adjust to your project's settings module


class RandomUserAgent(object):
    def process_request(self, request, spider):
        # Pick a random user agent from the pool
        ua = random.choice(PY3_UA_LIST)
        # Overwrite the User-Agent header for this request
        request.headers['User-Agent'] = ua
```
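As a variant, the pool can be read through `crawler.settings` instead of a module import, which keeps the middleware decoupled from the project layout; a minimal sketch of that pattern:

```python
import random


class RandomUserAgent(object):
    def __init__(self, ua_list):
        self.ua_list = ua_list

    @classmethod
    def from_crawler(cls, crawler):
        # getlist() returns [] if PY3_UA_LIST is missing from settings
        return cls(crawler.settings.getlist('PY3_UA_LIST'))

    def process_request(self, request, spider):
        if self.ua_list:
            request.headers['User-Agent'] = random.choice(self.ua_list)
```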
```python
# settings.py
PY3_UA_LIST = [
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/0.2.151.0 Safari/525.19",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.29 Safari/525.13",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13(KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13",
    "Opera/7.51 (Windows NT 5.1; U) [en]",
    "Opera/8.0 (Windows NT 5.1; U; en)",
    "Opera/8.0 (Windows NT 5.1; U; zh-cn)",
    "Opera/9.80 (Windows NT 5.1; U; ru) Presto/2.7.39 Version/11.00",
    "Opera/9.80 (Windows NT 5.1; U; zh-tw) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 5.1; U; Opera/9.80 (J2ME/MIDP; Opera Mini/5.0.18635/1030; U; en) Presto/2.4.15; ru) Presto/2.8.99 Version/11.10",
    "Opera/9.80 (J2ME/MIDP; Opera Mini/5.0(Windows; U; Windows NT 5.1; en-US)/23.390; U; en) Presto/2.5.25 Version/10.54",
    "Opera/9.80 (J2ME/MIDP; Opera Mini/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/23.377; U; en) Presto/2.5.25 Version/10.54",
    "Opera/9.80 (J2ME/MIDP; Opera Mini/(Windows; U; Windows NT 5.1; en-US) AppleWebKit/23.411; U; en) Presto/2.5.25 Version/10.54",
    "Opera/9.80 (Windows NT 5.1; Opera Mobi/49; U; en) Presto/2.4.18 Version/10.00",
]
```
RandomProxy
```python
import base64
import random

from itcast.settings import PY3_PROXY_LIST  # adjust to your project's settings module


class RandomProxy(object):
    def process_request(self, request, spider):
        # Pick a random proxy from the pool
        proxy = random.choice(PY3_PROXY_LIST)
        if 'user_passwd' in proxy:
            # Authenticated proxy: encode "user:password" for HTTP Basic auth
            b64_user_pwd = base64.b64encode(proxy['user_passwd'].encode())
            request.headers['Proxy-Authorization'] = 'Basic ' + b64_user_pwd.decode()
        # Scrapy expects a full proxy URL, including the scheme
        request.meta['proxy'] = 'http://' + proxy['ip_port']
```
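The `Proxy-Authorization` value follows the HTTP Basic scheme: the literal word `Basic`, a space, then base64 of the `user:password` pair. `b64encode()` returns bytes, hence the `.decode()` before concatenating with the `'Basic '` prefix.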
```python
# settings.py
PY3_PROXY_LIST = [
    {"ip_port": "116.62.112.142:16816", "user_passwd": "morganna_mode_g:ggc22qxp"},
    {"ip_port": "119.23.63.152:8118"},
]
```
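None of these classes take effect until they are registered in the project's settings. A minimal sketch, assuming the classes live in `itcast/middlewares.py` (the module path and order values are assumptions; adjust them for your project):

```python
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'itcast.middlewares.RandomUserAgent': 543,
    'itcast.middlewares.RandomProxy': 544,
    'itcast.middlewares.Referer': 545,
}
```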