Hive Installation and Configuration on a Hadoop Cluster

Source: Internet · Editor: 程序博客网 · 2024/05/21 13:55

Reposted from: "Hive Installation and Configuration on a Hadoop Cluster"

Hive is a data-warehouse analysis system built on top of Hadoop. It provides a rich SQL dialect for analyzing data stored in the Hadoop distributed file system. Within the Hadoop stack it acts as the SQL layer: it accepts a user's statement through its external interface, parses and analyzes it, compiles it into an executable plan made up of MapReduce stages, submits the corresponding MapReduce jobs to the Hadoop cluster according to that plan, and returns the final result. Metadata, such as table schemas, is stored in a database called the metastore.
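As a loose illustration of that compilation step (not Hive's actual execution path; the query and key values below are made up), a HiveQL aggregate such as `SELECT key, count(*) FROM src GROUP BY key` compiles into a map, shuffle/sort, and reduce pipeline, whose shape can be mimicked with plain shell tools:

```shell
# Illustration only: emulate the map -> shuffle/sort -> reduce shape that a
# GROUP BY ... count(*) query compiles into. 'printf' stands in for the map
# phase emitting keys, 'sort' plays the shuffle that groups identical keys,
# and 'uniq -c' plays the reduce phase that counts each group.
printf 'a\nb\na\n' | sort | uniq -c
```

The output counts each key (2 for `a`, 1 for `b`), which is exactly the per-key aggregation the reducers produce in the real MapReduce job.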

System Environment

192.168.186.128 hadoop-master
192.168.186.129 hadoop-slave

MySQL is installed on the master machine; the Hive server is also installed on the master.
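The later steps refer to the machines by hostname, so both nodes must be able to resolve hadoop-master and hadoop-slave. If DNS is not set up, an /etc/hosts fragment matching the addresses above does the job:

```
192.168.186.128 hadoop-master
192.168.186.129 hadoop-slave
```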

Downloading Hive

Download the binary package; the latest release can be fetched from the official Apache site.

[hadoop@hadoop-master ~]$ wget http://mirrors.cnnic.cn/apache/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
[hadoop@hadoop-master ~]$ tar -zxf apache-hive-1.2.1-bin.tar.gz
[hadoop@hadoop-master ~]$ ls
apache-hive-1.2.1-bin  apache-hive-1.2.1-bin.tar.gz  dfs  hadoop-2.7.1  Hsource  tmp

Configuring Environment Variables

[root@hadoop-master hadoop]# vi /etc/profile
HIVE_HOME=/home/hadoop/apache-hive-1.2.1-bin
PATH=$PATH:$HIVE_HOME/bin
export HIVE_HOME PATH
[root@hadoop-master hadoop]# source /etc/profile

Metastore

The metastore is the central store for Hive's metadata. It consists of two parts: a service and the backing data storage. There are three ways to configure it: embedded metastore, local metastore, and remote metastore.
This walkthrough uses MySQL as a remote metastore database, deployed on the hadoop-master node. The Hive server is also installed on hadoop-master, and the Hive client on hadoop-slave connects to that server.
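For reference, the mode is selected purely by configuration. The property names below are standard Hive settings (values are illustrative): an embedded metastore uses a local Derby database created in the working directory, while a remote metastore is reached through a thrift URI, as configured for the client later in this guide.

```xml
<!-- Embedded metastore (default): single-user Derby database in the working dir -->
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>

<!-- Remote metastore: clients talk to a thrift service instead of the database -->
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop-master:9083</value>
</property>
```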

Installing MySQL

Install the build dependencies:

# yum install gcc gcc-c++ ncurses-devel  -y

Install cmake:

# wget http://www.cmake.org/files/v2.8/cmake-2.8.12.tar.gz
# tar zxvf cmake-2.8.12.tar.gz
# cd cmake-2.8.12
# ./bootstrap
# make && make install

Create the mysql user and its directories:

# groupadd mysql
# useradd -g mysql mysql
# mkdir -p /data/mysql/
# mkdir -p /data/mysql/data/
# mkdir -p /data/mysql/log/

Download and build MySQL:

# wget http://dev.mysql.com/get/downloads/mysql/mysql-5.6.25.tar.gz
# tar zxvf mysql-5.6.25.tar.gz
# cd mysql-5.6.25
# cmake \
  -DCMAKE_INSTALL_PREFIX=/data/mysql \
  -DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock \
  -DDEFAULT_CHARSET=utf8 \
  -DDEFAULT_COLLATION=utf8_general_ci \
  -DWITH_INNOBASE_STORAGE_ENGINE=1 \
  -DWITH_ARCHIVE_STORAGE_ENGINE=1 \
  -DWITH_BLACKHOLE_STORAGE_ENGINE=1 \
  -DMYSQL_DATADIR=/data/mysql/data \
  -DMYSQL_TCP_PORT=3306 \
  -DENABLE_DOWNLOADS=1
# make && make install

If cmake fails with an error about a missing CMakeCache.txt, ncurses-devel has not been installed.

Set directory permissions:

# chmod +w /data/mysql/
# chown -R mysql:mysql /data/mysql/
# ln -s /data/mysql/lib/libmysqlclient.so.18 /usr/lib/libmysqlclient.so.18
# ln -s /data/mysql/mysql.sock /tmp/mysql.sock

Initialize the database:

# cp /data/mysql/support-files/my-default.cnf /etc/my.cnf
# cp /data/mysql/support-files/mysql.server /etc/init.d/mysqld
# /data/mysql/scripts/mysql_install_db --user=mysql --defaults-file=/etc/my.cnf --basedir=/data/mysql --datadir=/data/mysql/data

Start the MySQL service:

# chmod +x /etc/init.d/mysqld
# service mysqld start
# ln -s /data/mysql/bin/mysql /usr/bin/

Set the root password:

# mysql -uroot -h127.0.0.1 -p
mysql> SET PASSWORD = PASSWORD('123456');

Create the Hive user:

mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop-master' WITH GRANT OPTION;
mysql> flush privileges;

Log in as the Hive user:

[hadoop@hadoop-master ~]$ mysql -h hadoop-master -uhive
mysql> set password = password('hive');

Create the Hive database:

mysql> create database hive;

Configuring Hive

Edit the configuration files. In Hive's conf directory, copy hive-default.xml.template to hive-default.xml, then create a new hive-site.xml and add the following properties:

[hadoop@hadoop-master conf]$ pwd
/home/hadoop/apache-hive-1.2.1-bin/conf
[hadoop@hadoop-master conf]$ vi hive-site.xml
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop-master:3306/hive?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
        <description>password to use against metastore database</description>
    </property>
</configuration>

Download the MySQL JDBC driver, extract it, and copy the jar into Hive's lib directory:

[hadoop@hadoop-master ~]$ wget http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.36.tar.gz
[hadoop@hadoop-master ~]$ tar -zxf mysql-connector-java-5.1.36.tar.gz
[hadoop@hadoop-master ~]$ cp mysql-connector-java-5.1.36/mysql-connector-java-5.1.36-bin.jar apache-hive-1.2.1-bin/lib/

Configuring the Hive Client

[hadoop@hadoop-master ~]$ scp -r apache-hive-1.2.1-bin/ hadoop@hadoop-slave:/home/hadoop
[hadoop@hadoop-slave conf]$ vi hive-site.xml
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://hadoop-master:9083</value>
    </property>
</configuration>

Starting Hive

Start the metastore service first:

[hadoop@hadoop-master ~]$ hive --service metastore &
[hadoop@hadoop-master ~]$ jps
10288 RunJar   # the new metastore process
9365 NameNode
9670 SecondaryNameNode
11096 Jps
9944 NodeManager
9838 ResourceManager
9471 DataNode

Accessing Hive on the Server

[hadoop@hadoop-master ~]$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.332 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.037 seconds
hive> create table test1(id int);
OK
Time taken: 0.572 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.057 seconds, Fetched: 3 row(s)
hive>

Accessing Hive from the Client

[hadoop@hadoop-slave conf]$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.022 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.057 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.218 seconds, Fetched: 3 row(s)
hive> create table test2(id int ,name string);
OK
Time taken: 5.518 seconds
hive> show tables;
OK
abc
test
test1
test2
Time taken: 0.102 seconds, Fetched: 4 row(s)
hive>

That completes the tests; the installation is working.

Troubleshooting

Hive database character-set problem

Symptom: after entering the Hive shell you can create databases, but creating a table fails:

hive> create table table_test(id string,name string);
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes

Fix: log in to MySQL and change the character set of the hive database:

mysql> alter database hive character set latin1;
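The arithmetic behind that error message: InnoDB limits index keys to 767 bytes, MySQL's utf8 charset reserves up to 3 bytes per character, and the metastore schema indexes string columns around 256 characters wide (the exact column width here is illustrative). Under latin1 each character is a single byte, so the same index fits:

```shell
# Index bytes needed for a 256-character VARCHAR column under each charset.
# 767 bytes is InnoDB's single-column index key limit in this MySQL era.
echo "utf8:   $((256 * 3)) bytes"   # exceeds 767, so index creation fails
echo "latin1: $((256 * 1)) bytes"   # well under 767, so the DDL succeeds
```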

Firewall problem

The iptables service was enabled on the Hive server. Hive worked when accessed locally on the server, but the client on hadoop-slave failed with:

[hadoop@hadoop-slave conf]$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
        ... 8 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
        ... 14 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.NoRouteToHostException: No route to host
        at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
        ... 22 more)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:466)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        ... 19 more

Fix (blunt but effective): simply stop the firewall on the server.

[root@hadoop-master hadoop]# service iptables stop
iptables: Flushing firewall rules: [  OK  ]
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Unloading modules: [  OK  ]
[root@hadoop-master hadoop]#
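A less drastic alternative (a sketch; the port numbers follow this setup, where the metastore listens on 9083 and MySQL on 3306) is to keep iptables running and open only the ports this deployment needs:

```
# Firewall configuration: open only the metastore and MySQL ports (run as root)
iptables -I INPUT -p tcp --dport 9083 -j ACCEPT
iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
service iptables save
```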
