Dynamic web page crawling example (WebCollector + Selenium + PhantomJS)

Goal: crawl dynamic web pages.

Note: "dynamic web page" here covers two cases: 1) pages that require user interaction, such as the common login flow; 2) pages whose HTML is generated dynamically by JS / AJAX, e.g. the HTML contains <div id="test"></div> and JS turns it into <div id="test"><span>aaa</span></div>.

The crawling here is done with WebCollector 2, which is quite convenient, but the key to handling dynamic pages is another API: Selenium 2 (which integrates HtmlUnit and PhantomJS).


1) Crawling pages that require login, e.g. Sina Weibo

import java.util.Set;

import cn.edu.hfut.dmic.webcollector.crawler.DeepCrawler;
import cn.edu.hfut.dmic.webcollector.model.Links;
import cn.edu.hfut.dmic.webcollector.model.Page;
import cn.edu.hfut.dmic.webcollector.net.HttpRequesterImpl;

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

/*
 * Crawling after login
 * Refer: http://nutcher.org/topics/33
 *        https://github.com/CrawlScript/WebCollector/blob/master/README.zh-cn.md
 * Lib required: webcollector-2.07-bin, selenium-java-2.44.0 & its lib
 */
public class WebCollector1 extends DeepCrawler {

    public WebCollector1(String crawlPath) {
        super(crawlPath);
        /* Get the Sina Weibo cookie. The account and password are transmitted in plain text, so use a throwaway account. */
        try {
            String cookie = WebCollector1.WeiboCN.getSinaCookie("yourAccount", "yourPwd");
            HttpRequesterImpl myRequester = (HttpRequesterImpl) this.getHttpRequester();
            myRequester.setCookie(cookie);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public Links visitAndGetNextLinks(Page page) {
        /* Extract the weibo posts */
        Elements weibos = page.getDoc().select("div.c");
        for (Element weibo : weibos) {
            System.out.println(weibo.text());
        }
        /* To crawl the comments as well, extract the comment page URLs here and return them */
        return null;
    }

    public static void main(String[] args) {
        WebCollector1 crawler = new WebCollector1("/home/hu/data/weibo");
        crawler.setThreads(3);
        /* Crawl the first 5 pages of a user's weibo */
        for (int i = 0; i < 5; i++) {
            crawler.addSeed("http://weibo.cn/zhouhongyi?vt=4&page=" + i);
        }
        try {
            crawler.start(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static class WeiboCN {

        /**
         * Get the Sina Weibo cookie. This method works for weibo.cn, not for weibo.com.
         * weibo.cn transmits data in plain text, so use a throwaway account.
         * @param username Sina Weibo username
         * @param password Sina Weibo password
         * @return the cookie string
         * @throws Exception
         */
        public static String getSinaCookie(String username, String password) throws Exception {
            StringBuilder sb = new StringBuilder();
            HtmlUnitDriver driver = new HtmlUnitDriver();
            driver.setJavascriptEnabled(true);
            driver.get("http://login.weibo.cn/login/");

            WebElement mobile = driver.findElementByCssSelector("input[name=mobile]");
            mobile.sendKeys(username);
            WebElement pass = driver.findElementByCssSelector("input[name^=password]");
            pass.sendKeys(password);
            WebElement rem = driver.findElementByCssSelector("input[name=remember]");
            rem.click();
            WebElement submit = driver.findElementByCssSelector("input[name=submit]");
            submit.click();

            Set<Cookie> cookieSet = driver.manage().getCookies();
            driver.close();
            for (Cookie cookie : cookieSet) {
                sb.append(cookie.getName() + "=" + cookie.getValue() + ";");
            }
            String result = sb.toString();
            if (result.contains("gsid_CTandWM")) {
                return result;
            } else {
                throw new Exception("weibo login failed");
            }
        }
    }
}

* The custom path /home/hu/data/weibo (WebCollector1 crawler = new WebCollector1("/home/hu/data/weibo");) is where the crawl state is persisted in the embedded Berkeley DB database.

* Overall this is based on the sample from the WebCollector author.



2) Crawling HTML elements generated dynamically by JS

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

import cn.edu.hfut.dmic.webcollector.crawler.DeepCrawler;
import cn.edu.hfut.dmic.webcollector.model.Links;
import cn.edu.hfut.dmic.webcollector.model.Page;

/*
 * Crawling JS-generated content
 * Refer: http://blog.csdn.net/smilings/article/details/7395509
 */
public class WebCollector3 extends DeepCrawler {

    public WebCollector3(String crawlPath) {
        super(crawlPath);
    }

    @Override
    public Links visitAndGetNextLinks(Page page) {
        /* HtmlUnitDriver can extract JS-generated data */
        // HtmlUnitDriver driver = PageUtils.getDriver(page, BrowserVersion.CHROME);
        // String content = PageUtils.getPhantomJSDriver(page);
        WebDriver driver = PageUtils.getWebDriver(page);
        // List<WebElement> divInfos = driver.findElementsByCssSelector("#feed_content");
        List<WebElement> divInfos = driver.findElements(By.cssSelector("#feed_content span"));
        for (WebElement divInfo : divInfos) {
            System.out.println("Text: " + divInfo.getText());
        }
        return null;
    }

    public static void main(String[] args) {
        WebCollector3 crawler = new WebCollector3("/home/hu/data/wb");
        // for (int page = 1; page <= 5; page++)
        //     crawler.addSeed("http://www.sogou.com/web?query=" + URLEncoder.encode("编程") + "&page=" + page);
        crawler.addSeed("http://cq.qq.com/baoliao/detail.htm?294064");
        try {
            crawler.start(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

PageUtils.java

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;

import com.gargoylesoftware.htmlunit.BrowserVersion;

import cn.edu.hfut.dmic.webcollector.model.Page;

public class PageUtils {

    /* Selenium 1.x-style HtmlUnitDriver helper (see note 2.1 below) */
    public static HtmlUnitDriver getDriver(Page page) {
        HtmlUnitDriver driver = new HtmlUnitDriver();
        driver.setJavascriptEnabled(true);
        driver.get(page.getUrl());
        return driver;
    }

    public static HtmlUnitDriver getDriver(Page page, BrowserVersion browserVersion) {
        HtmlUnitDriver driver = new HtmlUnitDriver(browserVersion);
        driver.setJavascriptEnabled(true);
        driver.get(page.getUrl());
        return driver;
    }

    /* Selenium 2 (WebDriver) style: HtmlUnit, Chrome and PhantomJS are interchangeable here */
    public static WebDriver getWebDriver(Page page) {
        // WebDriver driver = new HtmlUnitDriver(true);

        // System.setProperty("webdriver.chrome.driver", "D:\\Installs\\Develop\\crawling\\chromedriver.exe");
        // WebDriver driver = new ChromeDriver();

        System.setProperty("phantomjs.binary.path", "D:\\Installs\\Develop\\crawling\\phantomjs-2.0.0-windows\\bin\\phantomjs.exe");
        WebDriver driver = new PhantomJSDriver();
        driver.get(page.getUrl());

        // JavascriptExecutor js = (JavascriptExecutor) driver;
        // js.executeScript("function(){}");
        return driver;
    }

    /* Native PhantomJS call: run phantomjs + parser.js as an external process and read its stdout */
    public static String getPhantomJSDriver(Page page) {
        Runtime rt = Runtime.getRuntime();
        Process process = null;
        try {
            process = rt.exec("D:\\Installs\\Develop\\crawling\\phantomjs-2.0.0-windows\\bin\\phantomjs.exe "
                    + "D:\\workspace\\crawlTest1\\src\\crawlTest1\\parser.js " + page.getUrl().trim());
            InputStream in = process.getInputStream();
            InputStreamReader reader = new InputStreamReader(in, "UTF-8");
            BufferedReader br = new BufferedReader(reader);
            StringBuffer sbf = new StringBuffer();
            String tmp = "";
            while ((tmp = br.readLine()) != null) {
                sbf.append(tmp);
            }
            return sbf.toString();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}


2.1) The HtmlUnitDriver getDriver methods are the Selenium 1.x style and are now outdated; the current approach is the WebDriver getWebDriver one, for example:
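
A minimal sketch of the Selenium 2 (WebDriver) style, reusing the #feed_content selector from WebCollector3 above; the class name DriverStyleDemo and the seed URL are only for illustration:

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class DriverStyleDemo {
    public static void main(String[] args) {
        // Selenium 2 style: program against the WebDriver interface and locate
        // elements with the By API instead of the driver-specific
        // findElementsByCssSelector(...) helpers of the 1.x-era code above.
        WebDriver driver = new HtmlUnitDriver(true);   // true = enable JavaScript
        driver.get("http://cq.qq.com/baoliao/detail.htm?294064");
        List<WebElement> spans = driver.findElements(By.cssSelector("#feed_content span"));
        for (WebElement span : spans) {
            System.out.println(span.getText());
        }
        driver.quit();
    }
}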

2.2) Several approaches are used here: HtmlUnitDriver, ChromeDriver, PhantomJSDriver, and native PhantomJS. See http://blog.csdn.net/five3/article/details/19085303; their respective strengths and weaknesses are as follows:

| Driver type         | Pros                                           | Cons                                                      | Typical use                     |
| ------------------- | ---------------------------------------------- | --------------------------------------------------------- | ------------------------------- |
| Real browser driver | Truly simulates user behavior                  | Low efficiency and stability                               | Compatibility testing           |
| HtmlUnit            | Fast                                           | Its JS engine is not one used by mainstream browsers       | Testing pages with little JS    |
| PhantomJS           | Medium speed; behavior close to a real browser | Cannot mimic the behavior of different/specific browsers   | Non-GUI functional testing      |

* "Real browser driver" covers Firefox, Chrome and IE
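
For the "real browser driver" row, a minimal sketch (my own illustration, not from the original post) of driving Firefox and Chrome through the same WebDriver interface; the chromedriver path is the one used elsewhere in this post and may need adjusting:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RealBrowserDemo {
    public static void main(String[] args) {
        // Firefox: selenium-java 2.x drives a locally installed Firefox directly
        WebDriver firefox = new FirefoxDriver();
        firefox.get("http://cq.qq.com/baoliao/detail.htm?294064");
        System.out.println(firefox.getTitle());
        firefox.quit();

        // Chrome: requires the separate chromedriver executable
        System.setProperty("webdriver.chrome.driver", "D:\\Installs\\Develop\\crawling\\chromedriver.exe");
        WebDriver chrome = new ChromeDriver();
        chrome.get("http://cq.qq.com/baoliao/detail.htm?294064");
        System.out.println(chrome.getTitle());
        chrome.quit();
    }
}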


2.3) When using PhantomJSDriver I ran into the error ClassNotFoundException: org.openqa.selenium.browserlaunchers.Proxies. Surprisingly, the cause lies with Selenium 2.44 (that release dropped the Proxies class, which older phantomjsdriver builds still reference); it was finally resolved by pulling phantomjsdriver-1.2.1.jar via Maven.


2.4) I also tried a native PhantomJS call (i.e. invoking PhantomJS directly without Selenium; see the getPhantomJSDriver method above). The native call drives the page through a JS script; the parser.js used here is as follows:

var system = require('system');
var address = system.args[1];  // the second command-line argument (the target URL), used below
// console.log('Loading a web page');
var page = require('webpage').create();
var url = address;
// console.log(url);
page.open(url, function (status) {
    // Page is loaded!
    if (status !== 'success') {
        console.log('Unable to post!');
    } else {
        // This print streams the rendered result back to Java;
        // the Java side reads it from the process InputStream.
        console.log(page.content);
    }
    phantom.exit();
});
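
The string that parser.js prints (and that PageUtils.getPhantomJSDriver returns) is the fully rendered HTML, so it can be handed to Jsoup, which WebCollector already depends on, for extraction. A minimal sketch, reusing the #feed_content selector from the earlier example:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class RenderedHtmlDemo {
    // Parse the HTML produced by the native PhantomJS call and print
    // the text of the JS-generated spans.
    public static void printFeedContent(String renderedHtml) {
        Document doc = Jsoup.parse(renderedHtml);
        for (Element span : doc.select("#feed_content span")) {
            System.out.println(span.text());
        }
    }
}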

3) Closing notes

3.1) HtmlUnitDriver + PhantomJSDriver is currently the most reliable combination for dynamic crawling; one way to combine them is sketched below.
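
A minimal sketch of one way to combine the two (my own arrangement, not necessarily the author's setup): render with the fast HtmlUnitDriver first and fall back to PhantomJSDriver when an expected element does not appear. The phantomjs path below is a placeholder.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;

public class FallbackFetcher {
    /**
     * Render url with HtmlUnitDriver first; if the element located by
     * mustContainSelector is missing, retry with PhantomJSDriver.
     */
    public static String fetchRenderedHtml(String url, String mustContainSelector) {
        WebDriver driver = new HtmlUnitDriver(true);   // fast, JavaScript enabled
        driver.get(url);
        boolean rendered = !driver.findElements(By.cssSelector(mustContainSelector)).isEmpty();
        String html = driver.getPageSource();
        driver.quit();
        if (!rendered) {
            // PhantomJS handles JS that HtmlUnit's engine cannot execute
            System.setProperty("phantomjs.binary.path", "/path/to/phantomjs");
            WebDriver phantom = new PhantomJSDriver();
            phantom.get(url);
            html = phantom.getPageSource();
            phantom.quit();
        }
        return html;
    }
}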

3.2) This process involves quite a few jars and executables, and I hit plenty of walls along the way; if you need them, feel free to ask me.

References

http://www.ibm.com/developerworks/cn/web/1309_fengyq_seleniumvswebdriver/
http://blog.csdn.net/smilings/article/details/7395509
http://phantomjs.org/download.html
http://blog.csdn.net/five3/article/details/19085303
http://phantomjs.org/quick-start.html
