Crawling Bloomberg news with a Python spider


I've recently been crawling news from Bloomberg, so I'm recording the process here.

Approach

Get the article links from the site's sitemaps, then crawl those links with the Scrapy framework.


Getting the article links:

https://www.bloomberg.com/robots.txt is the site's robots.txt; its contents are as follows:
# Bot rules:
# 1. A bot may not injure a human being or, through inaction, allow a human being to come to harm.
# 2. A bot must obey orders given it by human beings except where such orders would conflict with the First Law.
# 3. A bot must protect its own existence as long as such protection does not conflict with the First or Second Law.
# If you can read this then you should apply here https://www.bloomberg.com/careers/
User-agent: *
Disallow: /news/live-blog/2016-03-11/bank-of-japan-monetary-policy-decision-and-kuroda-s-briefing
Disallow: /polska
User-agent: Mediapartners-Google*
Disallow: /about/careers
Disallow: /about/careers/
Disallow: /offlinemessage/
Disallow: /apps/fbk
Disallow: /bb/newsarchive/
Disallow: /apps/news
Sitemap: https://www.bloomberg.com/feeds/bbiz/sitemap_index.xml
Sitemap: https://www.bloomberg.com/feeds/bpol/sitemap_index.xml
Sitemap: https://www.bloomberg.com/feeds/bview/sitemap_index.xml
Sitemap: https://www.bloomberg.com/feeds/gadfly/sitemap_index.xml
Sitemap: https://www.bloomberg.com/feeds/quicktake/sitemap_index.xml
Sitemap: https://www.bloomberg.com/bcom/sitemaps/people-index.xml
Sitemap: https://www.bloomberg.com/bcom/sitemaps/private-companies-index.xml
Sitemap: https://www.bloomberg.com/feeds/bbiz/sitemap_securities_index.xml
User-agent: Spinn3r
Disallow: /podcasts/
Disallow: /feed/podcast/
Disallow: /bb/avfile/
User-agent: Googlebot-News
Disallow: /sponsor/
Disallow: /news/sponsors/*

The Sitemap: entries for the news feeds (bbiz, bpol, bview, gadfly) are the sitemaps we want to crawl. Opening one of them returns an XML file like this:
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.bloomberg.com/feeds/gadfly/sitemap_recent.xml</loc>
    <lastmod>2017-02-17T07:46:07-05:00</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://www.bloomberg.com/feeds/gadfly/sitemap_news.xml</loc>
    <lastmod>2017-02-17T07:46:07-05:00</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://www.bloomberg.com/feeds/gadfly/sitemap_2017_2.xml</loc>
    <lastmod>2017-02-17T07:46:07-05:00</lastmod>
  </sitemap>



We need to extract the contents of each <loc>…</loc> element. These are themselves sitemaps; opening one looks like this:

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:news="http://www.google.com/schemas/sitemap-news/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.bloomberg.com/gadfly/articles/2017-02-17/luxury-tax</loc>
    <news:news>
      <news:publication>
        <news:name>Bloomberg</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:title>Giving U.S. Border Tax a European Luxury Snub</news:title>
      <news:publication_date>2017-02-17T10:43:15.284Z</news:publication_date>
      <news:keywords>Sales Tax, Jobs, China, Europe, Ralph Lauren, Michael David Kors, Bernard Arnault, Donald John Trump, Miuccia Prada Bianchi</news:keywords>
      <news:stock_tickers>LON:BRBY, EPA:MC, LON:BARC, EPA:KER</news:stock_tickers>
    </news:news>
    <image:image>
      <image:loc>https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iurmUVIRXTqY/v0/1200x-1.jpg</image:loc>
      <image:license>https://www.bloomberg.com/tos</image:license>
    </image:image>
  </url>


The content of each <loc> tag here is an article URL that we need to crawl.

Step one is to collect all of the article URLs; the code is as follows:

# -*- coding: utf-8 -*-
# Download the site's sitemaps
import re
from downloader import Downloader  # Downloader fetches the raw content of a page

D = Downloader()
file1 = open('sitemaps.txt', 'a')
file2 = open('htmls.txt', 'a')


def crawl_sitemap(url):
    # download the sitemap index
    sitemap = D(url)
    # extract the sitemap links
    links = re.findall('<loc>(.*?)</loc>', sitemap)
    # download each link
    for link in links:
        file1.write(link)
        file1.write('\n')
        html = D(link)
        file2.write(html)
        file2.write('\n')


# Only one of the four news sitemap indexes from robots.txt is used here;
# the others are run separately rather than all in one script.
crawl_sitemap('https://www.bloomberg.com/feeds/gadfly/sitemap_index.xml')

file1.close()
file2.close()

The downloader module (downloader.py):
import urlparse
import urllib2
import random
import time
from datetime import datetime, timedelta
import socket

DEFAULT_AGENT = 'wswp'
DEFAULT_DELAY = 5
DEFAULT_RETRIES = 1
DEFAULT_TIMEOUT = 60


class Downloader:
    def __init__(self, delay=DEFAULT_DELAY, user_agent=DEFAULT_AGENT, proxies=None, num_retries=DEFAULT_RETRIES,
                 timeout=DEFAULT_TIMEOUT, opener=None, cache=None):
        socket.setdefaulttimeout(timeout)
        self.throttle = Throttle(delay)
        self.user_agent = user_agent
        self.proxies = proxies
        self.num_retries = num_retries
        self.opener = opener
        self.cache = cache

    def __call__(self, url):
        result = None
        if self.cache:
            try:
                result = self.cache[url]
            except KeyError:
                # url is not available in cache
                pass
            else:
                if self.num_retries > 0 and 500 <= result['code'] < 600:
                    # server error so ignore result from cache and re-download
                    result = None
        if result is None:
            # result was not loaded from cache so still need to download
            self.throttle.wait(url)
            proxy = random.choice(self.proxies) if self.proxies else None
            headers = {'User-agent': self.user_agent}
            result = self.download(url, headers, proxy=proxy, num_retries=self.num_retries)
            if self.cache:
                # save result to cache
                self.cache[url] = result
        return result['html']

    def download(self, url, headers, proxy, num_retries, data=None):
        print 'Downloading:', url
        request = urllib2.Request(url, data, headers or {})
        opener = self.opener or urllib2.build_opener()
        if proxy:
            proxy_params = {urlparse.urlparse(url).scheme: proxy}
            opener.add_handler(urllib2.ProxyHandler(proxy_params))
        try:
            response = opener.open(request)
            html = response.read()
            code = response.code
        except Exception as e:
            print 'Download error:', str(e)
            html = ''
            if hasattr(e, 'code'):
                code = e.code
                if num_retries > 0 and 500 <= code < 600:
                    # retry 5XX HTTP errors by calling download again with one fewer retry
                    return self.download(url, headers, proxy, num_retries - 1, data)
            else:
                code = None
        return {'html': html, 'code': code}


class Throttle:
    """Throttle downloading by sleeping between requests to the same domain
    """
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if this domain was accessed recently
        """
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)
        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()
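As a quick sanity check, the Downloader can also be used on its own. A minimal sketch (Python 2, like the code above), assuming downloader.py sits in the current directory:

from downloader import Downloader

D = Downloader(delay=5, user_agent='wswp')  # waits 5 seconds between requests to the same domain
html = D('https://www.bloomberg.com/feeds/gadfly/sitemap_index.xml')
print html[:200]  # print the first 200 characters of the sitemap index XML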
With the code above, we obtain the XML files under the corresponding sitemap index (saved to htmls.txt).


Next, extract the contents of the <loc>…</loc> tags from those files:
import re

file = open('./sitemap_and_html_waiting_to_be_crawled/htmls_gadfly.txt')
file2 = open('htmls.txt', 'a')

for temp in file.readlines():
    if re.match('<loc>', temp.strip()) is None:
        pass
    else:
        print temp
        #file2.write(temp)
        #file2.write('\n')

file.close()
file2.close()

The resulting file still contains the <loc> tags; they can simply be removed with a text editor, leaving one article URL per line.
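Instead of cleaning the tags by hand, the same loop could strip them while writing. A minimal sketch of that variant (the input file name is the one used above; the output name urls_gadfly.txt is just an example):

import re

file = open('./sitemap_and_html_waiting_to_be_crawled/htmls_gadfly.txt')
file2 = open('urls_gadfly.txt', 'a')  # example output name: one bare article URL per line

for temp in file.readlines():
    # pull out just the URL between <loc> and </loc>, dropping the tags
    for url in re.findall('<loc>(.*?)</loc>', temp):
        file2.write(url + '\n')

file.close()
file2.close()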


The other sitemap indexes only require changing the URL in the first script above; a sketch of looping over all four follows. That gives us the full set of article URLs, and the next step is crawling the article content.
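A minimal sketch of driving all the feeds with the same crawl_sitemap function, assuming the four feeds meant above are bbiz, bpol, bview and gadfly from the robots.txt:

# run the same sitemap crawl over each news feed listed in robots.txt
sitemap_indexes = [
    'https://www.bloomberg.com/feeds/bbiz/sitemap_index.xml',
    'https://www.bloomberg.com/feeds/bpol/sitemap_index.xml',
    'https://www.bloomberg.com/feeds/bview/sitemap_index.xml',
    'https://www.bloomberg.com/feeds/gadfly/sitemap_index.xml',
]

for index_url in sitemap_indexes:
    crawl_sitemap(index_url)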

--------------------------------------------------------------------------------

The Scrapy spider

Scrapy tutorial: https://doc.scrapy.org/en/1.3/intro/tutorial.html
Some installation is needed here, but it's all straightforward. Open cmd in a directory and run:
scrapy startproject linkcrawler
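startproject generates roughly the following layout (per the Scrapy 1.3 tutorial linked above); the spider file goes under linkcrawler/spiders/:

linkcrawler/
    scrapy.cfg            # deploy configuration
    linkcrawler/          # the project's Python module
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/          # spider files go here
            __init__.py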
With the project created, add a spider file under the spiders directory with the following code:
import scrapy

file = open('G:\onedrive\workspace\crawler\htmls\htmls_gadfly.txt')  # path to the file of article URLs produced earlier
data = file.readlines()


def list_to_string(list):
    string = ""
    for i in list:
        string += i.strip()
    return string


class LinkCrawler(scrapy.Spider):
    name = "link"

    def start_requests(self):
        """
        urls = [
            'https://www.bloomberg.com/gadfly/articles/2017-02-16/baidu-failing-fast-is-a-smart-move-to-build-a-future',
            'https://www.bloomberg.com/gadfly/articles/2017-02-13/gaslog-partners-poised-for-lng-market-recovery',
            'https://www.bloomberg.com/gadfly/articles/2017-02-07/bp-earnings-today-doesn-t-match-tomorrow'
        ]
        :return:
        """
        for url in data:
            # strip the trailing newline left by readlines()
            yield scrapy.Request(url=url.strip(), callback=self.parse)

    def parse(self, response):
        # This is the key part: CSS selectors pick out the pieces we need (explained in detail below).
        yield {
            'title': response.css('h1.headline_4rK3h>a::text').extract_first(),
            'time_1': response.css("time::text").extract_first().strip(),
            'time_2': response.css('time').re(r'datetime="\s*(.*)">')[0],
            'content': list_to_string(response.css('div.container_1KxJx>p::text').extract())
        }

file.close()

That's all the code, pretty easy.
In the same cmd window, run:
scrapy crawl link -o news.json
After running for a while, all of the news articles are saved into a single news.json file.
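If the crawl is too slow or starts getting blocked, a few settings in linkcrawler/settings.py are worth adjusting. This is only a sketch with illustrative values, not something the spider above requires:

# linkcrawler/settings.py (excerpt) -- optional tuning, values are illustrative
DOWNLOAD_DELAY = 2           # pause between requests to the same site
CONCURRENT_REQUESTS = 8      # keep the request rate modest
RETRY_TIMES = 2              # retry failed pages a couple of times
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'  # a browser-like user agent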


CSS selectors:

Once we have the page URLs, the key step is parsing the page content and selecting what we want. There are many ways to do this, including regular expressions, BeautifulSoup and lxml; here we simply use Scrapy's built-in CSS selectors.
Open any article page and start with the headline. Inspecting the page source, the part we care about is the h1 element with class headline_4rK3h, whose child a tag holds the headline text.

On the command line, open a scrapy shell by running scrapy shell followed by the article URL.
In the scrapy shell, enter
response.css('h1.headline_4rK3h>a::text').extract_first()
and you can see that the headline is extracted.
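The other fields used in parse() can be checked the same way, assuming the class names (headline_4rK3h, container_1KxJx) still match the article you opened in the shell:

# in the same scrapy shell session
response.css('h1.headline_4rK3h>a::text').extract_first()   # headline text
response.css('time::text').extract_first().strip()          # human-readable publication time
response.css('time').re(r'datetime="\s*(.*)">')[0]          # the datetime attribute of the <time> tag
response.css('div.container_1KxJx>p::text').extract()       # list of body paragraphs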
One more thing: when crawling Bloomberg you need a connection that can reach Google, ideally through a system-wide proxy; otherwise the site is unreachable.
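If a system-wide proxy is not an option, Scrapy can also be pointed at a local proxy per request via its built-in HttpProxyMiddleware. A sketch, with the proxy address being only an example:

# inside start_requests() in the spider -- route each request through a local proxy
# (http://127.0.0.1:1080 is an example address; use whatever your proxy listens on)
for url in data:
    yield scrapy.Request(url=url.strip(), callback=self.parse,
                         meta={'proxy': 'http://127.0.0.1:1080'})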

Summary

The idea is simple: crawl through the site's sitemaps. The code is a bit verbose, but it works and can crawl Bloomberg news fairly quickly and reliably. This is my first blog post; if any experts read it, I'd appreciate a few pointers.
