Problems encountered when crawling Sina search results, and how we solved them


While crawling Sina with the Nutch crawler, the overall crawl rate was low. Spot-checking the relevant seeds showed that posts linked from Sina search result pages were crawled at a particularly low rate, so I started digging in.

Characteristics of the search result pages

Like any full-fledged search engine, Sina's search results aggregate a large number of pages: an aggregation of many kinds of content, and of many page formats.

This means that parsing all of the indexed pages would require writing many parsing plugins. Fortunately, most types of Sina pages share a similar structure and use the same page template even though they belong to different sections. For these pages we only need to update Nutch's regex URL filter configuration, regex-urlfilter.txt, so that their URLs are accepted, and then support that URL pattern in our own parsing plugin. Pages from other sites, and Sina pages built on different templates, are ignored for now; keeping the crawl rate above 90% is good enough. Our task is to use Sina's news search to find phone-related news, sorted by relevance. Taking Huawei as an example, the search URL is:
http://search.sina.com.cn/?range=all&c=news&q=%BB%AA%CE%AA&from=top&col=&source=&country=&size=&time=&a=&sort=rel (the page's links, captured live with casperjs, are listed below).
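For concreteness, here is a small Python sketch of the filtering rule described above. It is not the actual Nutch plugin (that lives in Java inside Nutch's plugin system); the regex, and the line one would add to regex-urlfilter.txt, are my assumptions for illustration, with the two sample URLs taken from the lists further down.

import re

# Illustrative only: roughly the kind of rule added to conf/regex-urlfilter.txt,
#   +^https?://([a-z0-9-]+\.)*sina\.com\.cn/.+\.shtml
# The exact production pattern is an assumption, not quoted from our plugin.
SINA_ARTICLE = re.compile(r'^https?://([a-z0-9-]+\.)*sina\.com\.cn/.+\.shtml')

def should_crawl(url):
    # keep Sina article pages ending in .shtml, skip everything else
    return SINA_ARTICLE.match(url) is not None

if __name__ == '__main__':
    samples = [
        'http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897708.shtml',             # keep
        'http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303562',  # skip
    ]
    for u in samples:
        print(u, '->', 'crawl' if should_crawl(u) else 'skip')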

Two issues showed up while verifying the crawl rate: extra links being crawled, and links being missed.

Extra crawled links

Huawei
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303562",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303562",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303561",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303563",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303564",
"http://tech.sina.com.cn/t/2015-12-04/doc-ifxmisxu6250009.shtml",
"http://tech.sina.com.cn/n/m/2015-12-04/doc-ifxmifzc0801858.shtml",
"http://tech.sina.com.cn/mobile/n/n/2015-12-03/doc-ifxmifze7557039.shtml",
"http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897708.shtml",
"http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897702.shtml",
"http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897441.shtml",
"http://tech.sina.com.cn/notebook/pad/2015-12-03/doc-ifxmcnkr7786923.shtml",
"http://vic.sina.com.cn/news/27/2015/1203/64664.html",
"http://finance.sina.com.cn/roll/20151204/051923928311.shtml",
"http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9876368.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9856572.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9857492.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmihae8911723.shtml",
"http://finance.sina.com.cn/roll/20151203/162923924081.shtml",
"http://finance.sina.com.cn/stock/t/20151203/160123923837.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmihae8894721.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9840543.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9837605.shtml",
"http://news.sina.com.cn/o/2015-12-03/doc-ifxmihae8890319.shtml"
That is 24 links in total: the first 5 lines are links for the first post (a slideshow-style page), and the remaining 19 lines are the links for posts 2 through 20. Crawling with Nutch yielded 19 links in total. The missed links are analyzed later; here, one extra link turned up:
http://news.sina.com.cn/c/2013-01-29/171826152112.shtml
This link is not on the page. Testing showed that it frequently shows up when searching with other keywords as well, and occasionally other extra links appear too. Since this does not hurt the crawl rate, it is left alone for now.
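As a side note, the raw casperjs dump above still carries quotes, trailing commas, and a duplicated slide link, so before comparing it against the crawl it helps to normalize it. A minimal sketch, assuming the dump is saved as original.txt with one quoted, comma-terminated URL per line (the file name is an assumption):

def load_casperjs_urls(path='original.txt'):
    urls = []
    with open(path) as f:
        for line in f:
            # strip whitespace, trailing commas, and straight or curly quotes
            url = line.strip().strip(',').strip('"\u201c\u201d')
            if url and url not in urls:
                urls.append(url)   # drop duplicates such as the repeated slide link
    return urls

if __name__ == '__main__':
    print(len(load_casperjs_urls()), 'unique links on the result page')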

Missed links

The links that were actually crawled were found by inspecting the logs (a small extraction sketch follows the list):
http://news.sina.com.cn/o/2015-12-03/doc-ifxmihae8911723.shtml
http://finance.sina.com.cn/stock/t/20151204/105623933366.shtml
http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9857492.shtml
http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9840543.shtml
http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9856572.shtml
http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897441.shtml
http://tech.sina.com.cn/t/2015-12-04/doc-ifxmisxu6250009.shtml
http://news.sina.com.cn/o/2015-12-03/doc-ifxmihae8894721.shtml
http://finance.sina.com.cn/stock/t/20151203/160123923837.shtml
http://finance.sina.com.cn/roll/20151203/162923924081.shtml
http://news.sina.com.cn/c/2013-01-29/171826152112.shtml
http://finance.sina.com.cn/roll/20151204/051923928311.shtml
http://tech.sina.com.cn/n/m/2015-12-04/doc-ifxmifzc0801858.shtml
http://tech.sina.com.cn/mobile/n/n/2015-12-03/doc-ifxmifze7557039.shtml
http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897708.shtml
http://news.sina.com.cn/o/2015-12-03/doc-ifxmhqaa9837605.shtml
http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9897702.shtml
http://news.sina.com.cn/o/2015-12-04/doc-ifxmhqaa9876368.shtml
http://tech.sina.com.cn/notebook/pad/2015-12-03/doc-ifxmcnkr7786923.shtml
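For reference, a minimal sketch of how such a list can be pulled out of the log. It assumes the Nutch fetcher writes lines containing "fetching <url>" to logs/hadoop.log; both the path and the pattern are assumptions that may need adjusting for a different Nutch setup.

import re

FETCH_LINE = re.compile(r'fetching\s+(http\S+)')

def fetched_urls(log_path='logs/hadoop.log'):
    seen = []
    with open(log_path) as f:
        for line in f:
            m = FETCH_LINE.search(line)
            if m and m.group(1) not in seen:   # keep first occurrence only
                seen.append(m.group(1))
    return seen

if __name__ == '__main__':
    for url in fetched_urls():
        print(url)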
Comparing the crawled links above with the URLs captured earlier by casperjs shows that some links were never crawled:
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303562",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303562",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303561",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303563",
"http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303564",
"http://vic.sina.com.cn/news/27/2015/1203/64664.html"
Looking at these URLs, none of them ends in .shtml, which matches the rule our plugin uses to decide whether a URL should be crawled. So the approach works as intended.
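As a quick sanity check of that observation, a couple of URLs from each list can be tested against the suffix rule (lists abbreviated; this is only an illustration, not the plugin logic):

skipped = [
    'http://slide.tech.sina.com.cn/mobile/slide_5_22298_65513.html?img=1303562',
    'http://vic.sina.com.cn/news/27/2015/1203/64664.html',
]
fetched = [
    'http://news.sina.com.cn/o/2015-12-03/doc-ifxmihae8911723.shtml',
    'http://tech.sina.com.cn/t/2015-12-04/doc-ifxmisxu6250009.shtml',
]
assert not any(url.endswith('.shtml') for url in skipped)
assert all(url.endswith('.shtml') for url in fetched)
print('the skipped/fetched split matches the .shtml rule')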

A more complicated case

This time we ran separate searches with Huawei, ZTE, and Xiaomi as keywords, and found that both the number of missed links and the number of extra links had grown. I started by comparing three numbers:
links actually on the page (a) -> links Nutch extracted as outlinks (b) -> links actually crawled (c)
The result was b > a > c, which at first made no sense: b inexplicably contained duplicate entries.
It then turned out that outlink extraction itself picks up duplicate links. After deduplicating a, b, and c, the comparisons b vs c, a vs c, and a vs b all became reasonable. At that point the situation is the same as the simple case: comparing a and c is enough.

After verification, the number of URLs we filtered out, d, satisfies a - b + 1 = d.

For the record, here is the Python deduplication and comparison script I wrote:

before = []        # outlinks Nutch extracted (one log line per link)
after = []         # links that were actually crawled
total = []         # raw links captured from the page (casperjs output)
irregular = []     # links on the page that do not end in .shtml


def remove_same_item(before_file, after_file):
    """Deduplicate both link lists and print the links that were not crawled."""
    try:
        with open(before_file) as f:
            for each_line in f:
                # the URL is the last space-separated field of the log line
                before.append(each_line.split(' ')[-1])
        before_deduplicated = list(set(before))
        print("after deduplication, size of outlinks: " + str(len(before_deduplicated)))

        with open(after_file) as f:
            for each_line in f:
                after.append(each_line.split(' ')[-1])
        after_deduplicated = list(set(after))
        print("actual size of links to crawl: " + str(len(after_deduplicated)))

        # remove every crawled link from the outlink list; what remains was filtered out
        for each_line in after_deduplicated:
            if each_line in before_deduplicated:
                before_deduplicated.remove(each_line)
        print(len(before_deduplicated))
        for each_line in before_deduplicated:
            print(each_line)
    except ValueError:
        pass


def count_irregular(file_name):
    """Count the links on the page and how many of them lack the .shtml suffix."""
    try:
        with open(file_name) as f:
            for each_line in f:
                total.append(each_line)
                if ".shtml" not in each_line:
                    irregular.append(each_line)
        print("number of links on webpages: " + str(len(total)))
        total_no_repeat = list(set(total))
        print("number of links on webpages after deduplication: " + str(len(total_no_repeat)))
        print("number of irregular links on webpages: " + str(len(irregular)))
    except ValueError:
        pass


count_irregular('original.txt')
remove_same_item('before.txt', 'after.txt')