Crawling links with Scrapy


I recently started learning about and writing crawlers for work. I picked up quite a bit along the way, so I organized my notes and posted them here.

Requirements

The purpose of this crawler is to scan websites for vulnerabilities, so the goal is something similar to Burp Suite's proxy history.

The initial requirement is simply to crawl a site's links, deduplicate them, and try to get around anti-crawling measures.

A further goal is to record every request the site makes, the way Burp Suite does, in order to collect more complete information.


URL crawler

A simple crawler could also be written with something like urllib, but to make later steps easier I learned the Scrapy framework.
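For comparison, here is a rough sketch of that plain-urllib approach (Python 2; the regex-based link extraction and the target URL are only illustrative, not part of this project):

# -*- coding: utf-8 -*-
# Minimal "fetch a page and pull out hrefs" sketch with the standard library.
import re
import urllib2

html = urllib2.urlopen('http://opencv.org/').read()
# Crude extraction: grab whatever appears inside href="..." attributes.
links = re.findall(r'href=["\'](.*?)["\']', html)
for link in links:
    print link

This works for a one-off page, but handling deduplication, relative URLs, recursion and scheduling by hand quickly gets tedious, which is why the rest of the post uses Scrapy.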

For an introduction to the framework, see http://www.jianshu.com/p/a8aad3bf4dc4.

The Scrapy documentation is also worth reading: http://scrapy-chs.readthedocs.io/zh_CN/0.24/intro/tutorial.html


First, install Scrapy:

 sudo pip install scrapy

This produced an error:

'module' object has no attribute 'OP_NO_TLSv1_1'

It can be fixed by upgrading the related packages:

sudo pip install --upgrade scrapy
sudo pip install --upgrade twisted
sudo pip install --upgrade pyopenssl


Check the available Scrapy commands:

Scrapy 1.4.0 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

  [ more ]      More commands available when run from project directory

Use "scrapy <command> -h" to see more info about a command


Create a new project:

scrapy startproject urlspider
New Scrapy project 'urlspider', using template directory '/usr/local/lib/python2.7/dist-packages/scrapy/templates/project', created in:
    /home/qiqi/spider/urlspider

You can start your first spider with:
    cd urlspider
    scrapy genspider example example.com


The project directory looks like this:

ll
total 28
drwxr-sr-x 3 qiqi qiqi 4096 Nov 14 07:10 ./
drwxr-sr-x 3 qiqi qiqi 4096 Nov 14 07:10 ../
-rw-r--r-- 1 qiqi qiqi    0 Nov 14 06:58 __init__.py
-rw-rw-r-- 1 qiqi qiqi  288 Nov 14 07:10 items.py
-rw-rw-r-- 1 qiqi qiqi 1907 Nov 14 07:10 middlewares.py
-rw-rw-r-- 1 qiqi qiqi  289 Nov 14 07:10 pipelines.py
-rw-rw-r-- 1 qiqi qiqi 3158 Nov 14 07:10 settings.py
drwxr-sr-x 2 qiqi qiqi 4096 Nov 14 06:58 spiders/

items.py defines the format in which scraped data is stored.

settings.py is the project's configuration file.

spiders/ holds the spider files; all of our crawlers go in there.
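As a quick illustration of what settings.py controls, here is a small sketch of options one might adjust for a crawler like this; the first three lines match what startproject generates, while the delay value and the choice of keeping robots.txt enabled are only illustrative assumptions:

# settings.py (excerpt) -- illustrative sketch, not the full generated file.
BOT_NAME = 'urlspider'
SPIDER_MODULES = ['urlspider.spiders']
NEWSPIDER_MODULE = 'urlspider.spiders'

# Slow requests down slightly to be polite (value chosen for illustration).
DOWNLOAD_DELAY = 0.5
# Whether to respect robots.txt; relevant later when looking at anti-crawling.
ROBOTSTXT_OBEY = True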

Go into the spiders/ directory and create a spider file:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']

    def parse(self, response):
        se = Selector(response)
        site = se.xpath('//a/@href').extract()
        print site


Start the spider:

scrapy crawl url

This is enough to collect the links on a single page. It uses Scrapy's own parsing tool, selectors, which "select" parts of an HTML document via XPath or CSS expressions. XPath is a language for selecting nodes in XML documents and can also be used with HTML; CSS is the language for styling HTML documents, and its selectors tie styles to specific HTML elements. Scrapy's selectors are built on top of the lxml library.
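To get a feel for selectors without running a full spider, here is a small standalone sketch (the HTML snippet is made up for illustration):

# -*- coding: utf-8 -*-
# Standalone selector demo; assumes Scrapy is installed.
from scrapy.selector import Selector

html = '<html><body><a href="/docs">Docs</a> <a href="/about">About</a></body></html>'
sel = Selector(text=html)

# The same nodes can be selected with XPath or CSS expressions.
print sel.xpath('//a/@href').extract()    # ['/docs', '/about']
print sel.css('a::attr(href)').extract()  # ['/docs', '/about']
print sel.xpath('//a/text()').extract()   # ['Docs', 'About']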


The code above may find two tags pointing at the same link, or pick up relative URLs. Add deduplication and resolve relative URLs into absolute ones:

# -*- coding: utf-8 -*-
import scrapy
import urlparse
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']
    result_urls = []

    def parse(self, response):
        se = Selector(response)
        result = set()
        site = se.xpath('//a/@href').extract()
        for s in site:
            tmpurl = urlparse.urljoin(response.url, s)
            if tmpurl not in result:
                result.add(tmpurl)
        for r in result:
            print r

Now we get the complete, absolute URLs found on this page.


Next, the pages need to be crawled recursively:

# -*- coding: utf-8 -*-
import scrapy
import urlparse
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']
    result_urls = []

    def parse(self, response):
        print response.url
        se = Selector(response)
        result = set()
        site = se.xpath('//a/@href').extract()
        for s in site:
            tmpurl = urlparse.urljoin(response.url, s)
            if tmpurl not in result:
                result.add(tmpurl)
        for r in result:
            if r not in UrlSpider.result_urls:
                UrlSpider.result_urls.append(r)
                yield scrapy.Request(url=r, callback=self.parse)


To store the data in items, edit items.py:

import scrapy


class UrlspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()

Then adjust the spider so that the data is stored in the item:

# -*- coding: utf-8 -*-
import scrapy
import urlparse
from urlspider.items import UrlspiderItem
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']
    result_urls = []

    def parse(self, response):
        item = UrlspiderItem()
        item['url'] = response.url
        se = Selector(response)
        result = set()
        site = se.xpath('//a/@href').extract()
        for s in site:
            tmpurl = urlparse.urljoin(response.url, s)
            if tmpurl not in result:
                result.add(tmpurl)
        for r in result:
            if r not in UrlSpider.result_urls:
                UrlSpider.result_urls.append(r)
                yield scrapy.Request(url=r, callback=self.parse)
        yield item
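With the URLs going into items, Scrapy's built-in feed export can write them to a file when the spider is run; the output file name below is just an example:

scrapy crawl url -o urls.json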


That basically completes link extraction for static pages; anti-crawling measures will be covered in the next post.



