Hive 2.3.0 Configuration and Deployment


Configure MySQL

Install

yum -y install mysql mysql-server mysql-devel

Start the service

service mysqld start

Enable start on boot

chkconfig mysqld on

Log in

mysql -u root

Initialize the root password
In the MySQL shell, enter:

use mysql;
update user set password=password('root') where user='root';
exit;
service mysqld restart

Enable remote connections
In the MySQL shell, enter:

grant all PRIVILEGES on *.* to root@'%' identified by 'root';
service mysqld restart
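
To confirm that remote access works, you can log in from another machine on the network. The hostname master below matches the one used later in the JDBC URL; adjust it to your environment:

mysql -h master -u root -proot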

Store Hive metadata in MySQL

Create a Linux user

useradd hive
passwd hive

Create the database in MySQL

CREATE DATABASE hive;

Create the MySQL user and grant privileges

CREATE USER 'hive' IDENTIFIED BY 'hive';
grant all privileges on *.* to 'hive' identified by 'hive';
flush privileges;
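
As an optional sanity check, the new account should be able to see the hive database created above:

mysql -u hive -phive -e "show databases;"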

Hive deployment

Environment variables

export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_HOME=$HADOOP_HOME/etc/hadoop/
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=/usr/local/hive/conf
export PATH=$PATH:$HIVE_HOME/bin:$HADOOP_HOME/bin
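
These exports typically go into a shell profile so they persist across sessions; assuming /etc/profile is used, apply them to the current shell with:

source /etc/profile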

Create local temporary directories

cd /usr/local/hive/
mkdir -p tmp/resources

Create HDFS directories

hadoop fs -mkdir -p /tmp/hive
hadoop fs -mkdir -p /user/hive/warehouse

Create a new group, add the hive user to it, and set the owner, group, and permissions on the /user/hive/ and /tmp/hive directories in HDFS

groupadd hadoop
usermod -G hadoop hive
hadoop fs -chown -R hive:hadoop /user/hive/warehouse
hadoop fs -chown -R hive:hadoop /tmp/hive/
hadoop fs -chmod 755 /user/hive/warehouse
hadoop fs -chmod 777 /tmp/hive/
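
The ownership and permission changes can be verified with a listing (the exact output will vary by environment):

hadoop fs -ls /user/hive
hadoop fs -ls /tmp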

Hive configuration

Rename the configuration file templates in the conf directory (for example, hive-env.sh.template to hive-env.sh and hive-default.xml.template to hive-site.xml).

Edit hive-env.sh

export HADOOP_HOME=/usr/local/hadoop
export HIVE_CONF_DIR=/usr/local/hive/conf

Replace the values of the following properties in hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/usr/local/hive/tmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/usr/local/hive/tmp/resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission.</description>
</property>
<property>
  <name>hive.hbase.snapshot.restoredir</name>
  <value>/tmp</value>
  <description>The directory in which to restore the HBase table snapshot.</description>
</property>
<property>
  <name>hive.scratch.dir.permission</name>
  <value>700</value>
  <description>The permission for the user specific scratch directories that get created.</description>
</property>
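
Because the metastore connection above uses com.mysql.jdbc.Driver, the MySQL JDBC connector jar must be on Hive's classpath. Assuming the jar has already been downloaded to the current directory (the file name varies by version):

cp mysql-connector-java-*.jar /usr/local/hive/lib/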

Using Hive

Start Hive with the hive script in the bin directory, then run:

show databases;


Check the metastore tables in MySQL.
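
The metastore tables can be listed directly from the MySQL shell:

mysql -u hive -phive
use hive;
show tables;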

Start HiveServer2

hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000
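
Once HiveServer2 is listening, a simple way to verify it is to connect with Beeline; the user name here is the hive account created earlier, and the host and port assume the defaults used above:

beeline -u jdbc:hive2://localhost:10000 -n hive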

Troubleshooting


If MySQL has no Hive metadata tables at this point, even though Hive has been started before (including runs that failed after startup),
run the following command in the bin directory to initialize the metastore schema:

 ./schematool -dbType mysql -initSchema
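
After initialization, the schema version recorded in MySQL can be checked with schematool from the same bin directory:

./schematool -dbType mysql -info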

Error: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStore

This usually means there is an error in the configuration files; check them carefully.
