Crawling news with Nutch: how to do scheduled, incremental updates

Source: Internet · Editor: 程序博客网 · Time: 2024/05/17 06:55

Applies to Nutch 1.7, running on Linux.

When crawling news, there are three things to take care of:

1. The seed URL list must be kept up to date.
2. News pages that have already been crawled should not be crawled again.
3. You need to control how Nutch re-checks URLs it has already crawled.


Edit nutch-site.xml and add the following configuration:

<!-- How long (in seconds) before a previously fetched page is re-fetched.
     The stock default is 30 days; here it is set far larger so crawled
     news pages are effectively never re-fetched. -->
<property>
  <name>db.fetch.interval.default</name>
  <value>420480000</value>
  <description>The default number of seconds between re-fetches of a page.</description>
</property>
<!-- After this many seconds, every page in the CrawlDB is forcibly refreshed. -->
<property>
  <name>db.fetch.interval.max</name>
  <value>630720000</value>
  <description>The maximum number of seconds between re-fetches of a page. After this period every page in the db will be re-tried, no matter what its status is.</description>
</property>
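Note that these values are much larger than the stock 30-day/90-day defaults. A quick sanity check converts the configured intervals from seconds to days:

```shell
# Convert the configured fetch intervals from seconds to days.
echo $((420480000 / 86400))   # db.fetch.interval.default: 4866 days (~13 years)
echo $((630720000 / 86400))   # db.fetch.interval.max: 7300 days (20 years)
```

In practice this means a page, once fetched, will essentially never be scheduled for re-fetching, which is exactly what requirement 2 above asks for.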


Then add an entry to crontab so that the following script runs on a schedule.

The shell script below is the key to the whole process:

#!/bin/sh
export JAVA_HOME=/usr/java/jdk1.6.0_45
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/jre/lib/dt.jar:$JAVA_HOME/jre/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

# Set workspace
nutch_work=/home/nutch/SearchEngine/nutch-Test
tmp_dir=$nutch_work/out_tmp
save_dir=$nutch_work/out
solrurl=http://192.168.123.205:8080/solr-4.6.1/core-nutch

# Set parameters
depth=2
threads=200

# ------- Start --------
$nutch_work/bin/nutch inject $tmp_dir/crawldb $nutch_work/urls

# ----- Loop; the number of iterations is determined by the depth -----
for ((i=0; i<$depth; i++))
do
    # ----- step 1: generate a fetch list; the first round reads the
    #       freshly injected temporary crawldb, later rounds the saved one -----
    if ((i==0))
    then
        $nutch_work/bin/nutch generate $tmp_dir/crawldb $tmp_dir/segments
        segment=`ls -d $tmp_dir/segments/* | tail -1`
    else
        $nutch_work/bin/nutch generate $save_dir/crawldb $save_dir/segments
        segment=`ls -d $save_dir/segments/* | tail -1`
    fi
    # ----- step 2: fetch the segment -----
    $nutch_work/bin/nutch fetch $segment -threads $threads
    # ----- step 3: parse the fetched content -----
    $nutch_work/bin/nutch parse $segment
    # ----- step 4: update the saved crawldb; after the first round,
    #       -noAdditions keeps newly discovered URLs out of the db -----
    if ((i==0))
    then
        $nutch_work/bin/nutch updatedb $save_dir/crawldb $segment
    else
        $nutch_work/bin/nutch updatedb $save_dir/crawldb $segment -noAdditions
    fi
    # ----- step 5: build the link database -----
    $nutch_work/bin/nutch invertlinks $save_dir/linkdb $segment
done
# ----- step 6: index the last segment into Solr -----
$nutch_work/bin/nutch solrindex $solrurl $save_dir/crawldb -linkdb $save_dir/linkdb $segment
# ----- step 7: deduplicate the Solr index -----
$nutch_work/bin/nutch solrdedup $solrurl
# ----- step 8: clean up the temporary directory -----
rm -rf $tmp_dir/*
# ----- Finished -----
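To schedule the script, save it somewhere (the path and time below are assumptions for illustration, not from the original post), make it executable with chmod +x, and add a crontab line, for example to run it every night at 02:00:

```shell
# Hypothetical crontab entry: run the crawl script daily at 02:00,
# appending stdout and stderr to a log file for troubleshooting.
0 2 * * * /home/nutch/SearchEngine/nutch-Test/crawl_news.sh >> /home/nutch/crawl.log 2>&1
```

Redirecting output to a log file matters here, because cron otherwise discards it and fetch failures go unnoticed.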

