Hadoop 2.7.3 + Hive 2.1.0: Implementing a WordCount Program


First, set up a Hadoop environment locally. See: http://blog.csdn.net/kunshan_shenbin/article/details/52933499

Download Hive 2.1.0, extract it, and configure the Hive environment variables. Install MySQL locally and create a database named hive_db. Download the MySQL JDBC driver and place it in the lib directory of the Hive installation.
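A minimal sketch of the MySQL side of this setup. The hive/hive username and password here are an assumption chosen to match the metastore credentials configured in hive-site.xml later in this article:

```sql
-- Run in the MySQL client as an administrative user.
-- Database name and hive/hive credentials match the hive-site.xml below.
CREATE DATABASE hive_db;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive_db.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
```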

Modify the Hive configuration under hive/conf/:

1) Rename hive-default.xml.template to hive-default.xml

2) Create a new hive-site.xml with the following content:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <!-- <value>/Users/bin.shen/BigData/apache-hive-2.1.0/warehouse</value> -->
        <value>hdfs://localhost:8081/user/hive_local/warehouse</value>
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
        <description>password to use against metastore database</description>
    </property>
</configuration>

Initialize the metastore database:

schematool -initSchema -dbType mysql


Start Hive

Run hive from the command line.


For details, see: http://blog.itpub.net/30089851/viewspace-2074761/


OK, the preparation is done. Now on to the main task:

./hive

CREATE TABLE wordcount(name string, id int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

LOAD DATA INPATH 'output/part-r-00000' INTO TABLE wordcount;  
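The LOAD DATA statement above assumes that 'output/part-r-00000' is the output of a Hadoop wordcount job. Such a file contains one "word<TAB>count" pair per line, which is why the table was declared with FIELDS TERMINATED BY '\t'. A small illustration of that format, using a hypothetical sample file (the words and counts are made up):

```shell
# Create a hypothetical sample in the same format as a wordcount
# job's part-r-00000: "<word>\t<count>", one pair per line.
printf 'hadoop\t3\nhive\t2\nwordcount\t1\n' > part-r-00000.sample

# First tab-separated field: the word (maps to the "name" column)
cut -f1 part-r-00000.sample
# Second tab-separated field: the count (maps to the "id" column)
cut -f2 part-r-00000.sample
```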


Query the wordcount table:
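A sketch of such a query, using the columns from the CREATE TABLE above:

```sql
-- Full-table scan: returns each word and its count
SELECT * FROM wordcount;
```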



Count the distinct words in the wordcount table, and their counts:
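A plausible form of these queries, assuming (per the table definition above) that name holds the word and id holds its count from the MapReduce output:

```sql
-- Number of distinct words in the table
SELECT COUNT(DISTINCT name) FROM wordcount;

-- Count per word, aggregating any duplicate rows
SELECT name, SUM(id) AS total FROM wordcount GROUP BY name;
```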



From this result, we can actually see the conclusion mentioned earlier:

Queries are executed via MapReduce (though not every query requires MapReduce; for example, select * from XXX does not).



Reference: http://blog.csdn.net/wangmuming/article/details/25226951
