Implementing Hive's JDBC Interface (Eclipse Environment Setup)


Experiment environment:

3 nodes (OS: Ubuntu 12.04): master, node1, node2

Hadoop version: 1.0.3

Hive version: 0.11.0


1. First, install Hive on the cluster:

(1) Download the Hive package to a node (I downloaded it to the master node; to reduce the load on master, you can do this on any other node) and extract it (where you extract it does not matter). Hive is essentially built on top of Hadoop, and every node already has the Hadoop configuration files ($HADOOP_HOME/conf), so all we need to do is configure Hive so that it can find Hadoop.
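For example, assuming the Hive 0.11.0 tarball has already been downloaded to the current directory and we extract it under /home/hadoop (both paths are illustrative):

tar -xzf hive-0.11.0.tar.gz -C /home/hadoop/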

(2) Configure the system environment variables:

Edit /etc/profile (for example with sudo gedit /etc/profile) and append:

export HIVE_HOME=<your extraction path>

export PATH=$HIVE_HOME/bin:$PATH

Then run source /etc/profile in the terminal so the variables take effect in the current shell.

(3) Modify the Hive configuration files:

Hive's configuration files are provided in $HIVE_HOME/conf as templates (.template); we just need to create the corresponding files by copying them:

cp hive-env.sh.template hive-env.sh

cp hive-default.xml.template hive-site.xml

cp hive-exec-log4j.properties.template hive-exec-log4j.properties

cp hive-log4j.properties.template hive-log4j.properties
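In hive-env.sh it is also worth pointing Hive explicitly at your Hadoop installation. A minimal sketch, assuming Hadoop is installed under /home/hadoop/hadoop-1.0.3 (adjust the path to your cluster):

# appended to $HIVE_HOME/conf/hive-env.sh
export HADOOP_HOME=/home/hadoop/hadoop-1.0.3
export HIVE_CONF_DIR=$HIVE_HOME/conf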

(4) Hive 0.11.0 uses Derby by default as the database that stores its metadata; we can switch this default to MySQL, which requires modifying the hive-site.xml file:

Install MySQL: sudo apt-get install mysql-server

Log in as the root user: mysql -u root -p

Then create a user hive and grant it full privileges:

mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql> flush privileges;
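As an optional sanity check, confirm that the account exists before wiring it into Hive:

mysql> SELECT user, host FROM mysql.user WHERE user = 'hive';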

 

Next, set the metastore connection properties in hive-site.xml:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

Note that the username and password above are both hive, matching the hive user created in MySQL earlier.

Since MySQL only accepts local connections by default, we also need to edit the MySQL configuration file and comment out the local address binding:

Command: sudo gedit /etc/mysql/my.cnf

Comment out the line "bind-address = 127.0.0.1" (prefix it with #).

Restart the MySQL service: sudo service mysql restart
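You can confirm that MySQL is now reachable from other hosts by checking its listening socket (netstat output varies slightly between systems):

sudo netstat -tlnp | grep 3306

It should show mysqld bound to 0.0.0.0:3306 instead of 127.0.0.1:3306.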

Copy the MySQL JDBC driver jar into Hive's lib directory. (The driver can be downloaded from the web.)
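For example, assuming you downloaded Connector/J 5.1 (the exact file name depends on the version you fetched):

cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/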

(5) Start Hive:

In a terminal, type: hive

This opens Hive's command-line interface; type show tables; (note that every HiveQL command must end with a semicolon).

If the result is printed without errors, Hive is configured correctly.

2. Create a project that uses Hive's JDBC interface

(6) Create a new Java project in Eclipse; I named mine HiveJdbcClient.

Then right-click the project and choose Build Path -> Configure Build Path -> Libraries.

Add all the jars under $HIVE_HOME/lib, together with hadoop-core-1.0.3.jar, to the project.
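If you prefer a leaner classpath, a subset that is commonly sufficient for the old Hive JDBC driver (this list comes from general experience with Hive 0.11, not from this article; fall back to adding all jars if something is missing) is roughly:

hive-jdbc-*.jar
hive-exec-*.jar
hive-metastore-*.jar
hive-service-*.jar
libfb303-*.jar
commons-logging-*.jar
log4j-*.jar
slf4j-api-*.jar
slf4j-log4j12-*.jar
hadoop-core-1.0.3.jar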

Also keep in mind:

To operate Hive from a program in Eclipse, Hive must be started as a service that listens on a port for client connections:
hive --service hiveserver
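HiveServer listens on port 10000 by default; a quick way to confirm it is up (the port is the stock default, adjust if you started the service with a different one):

netstat -an | grep 10000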

Project source code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.log4j.Logger;

/**
 * Handle data through hive on eclipse
 * @author urey
 * @time 2013\12\26 19:14
 */
public class HiveJdbcClient {
    private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";
    private static String url = "jdbc:hive://192.168.181.128:10000/default";
    private static String user = "";
    private static String password = "";
    private static String sql = "";
    private static ResultSet res;
    private static final Logger log = Logger.getLogger(HiveJdbcClient.class);

    public static void main(String[] args) {
        try {
            Class.forName(driverName);
            // Uses the default port 10000, the default database,
            // and an empty user name and password.
            Connection conn = DriverManager.getConnection(url, user, password);
            Statement stmt = conn.createStatement();

            // Name of the table to create
            String tableName = "testHiveDriverTable";

            // Step 1: drop the table if it already exists
            // (DDL statements go through execute(), not executeQuery(),
            // since they return no result set)
            sql = "drop table if exists " + tableName;
            stmt.execute(sql);

            // Step 2: create the table
            sql = "create table " + tableName + " (key int, value string) row format delimited fields terminated by '\t'";
            stmt.execute(sql);

            // Run "show tables"
            sql = "show tables '" + tableName + "'";
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"show tables\":");
            if (res.next()) {
                System.out.println(res.getString(1));
            }

            // Run "describe table"
            sql = "describe " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"describe table\":");
            while (res.next()) {
                System.out.println(res.getString(1) + "\t" + res.getString(2));
            }

            // Run "load data into table"
            String filepath = "/home/hadoop/file/test2_hive.txt";
            sql = "load data local inpath '" + filepath + "' into table " + tableName;
            System.out.println("Running: " + sql);
            stmt.execute(sql);

            // Run "select * query"
            sql = "select * from " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"select * query\":");
            while (res.next()) {
                System.out.println(res.getInt(1) + "\t" + res.getString(2));
            }

            // Run "regular hive query"
            sql = "select count(1) from " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"regular hive query\":");
            while (res.next()) {
                System.out.println(res.getString(1));
            }

            conn.close();
            conn = null;
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            log.error(driverName + " not found!", e);
            System.exit(1);
        } catch (SQLException e) {
            e.printStackTrace();
            log.error("Connection error!", e);
            System.exit(1);
        }
    }
}
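The load statement above expects a tab-separated local file. A hypothetical test2_hive.txt could look like this (the contents are illustrative; the real file just has to match the key int / value string schema):

1	hello
2	world
3	hive

Note that with the old HiveServer, load data local inpath reads from the local filesystem of the machine running HiveServer, not of the JDBC client.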

Running the project against Hadoop succeeds!


And we can find the file we just uploaded on HDFS:
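By default the table data lives under Hive's warehouse directory, so it can be listed with (the path /user/hive/warehouse is the stock default of hive.metastore.warehouse.dir; note that Hive lowercases table names):

hadoop fs -ls /user/hive/warehouse/testhivedrivertable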


