A few small problems encountered with CDH, Azkaban, Redis, etc.


Recording these so they can be solved faster the next time they come up:

1. Hive problem:

Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1453)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:63)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:73)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2664)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2683)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:425)
    ... 7 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1451)
    ... 12 more

Solution:

Log in to MySQL and run: set global binlog_format='MIXED';
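Note that set global does not survive a MySQL restart. A minimal sketch that applies the change at runtime and also persists it (the my.cnf path is the common default; adjust if yours differs):

# Apply immediately without restarting MySQL:
mysql -uroot -p -e "SET GLOBAL binlog_format='MIXED';"
# Persist across restarts by adding to /etc/my.cnf:
#   [mysqld]
#   binlog_format=MIXED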

2. Azkaban reports an error when running Sqoop and Hive:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.D

Solution:

Swap in a newer MySQL driver JAR; here I switched to mysql-connector-java-5.1.32-bin.jar.
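For reference, a sketch of dropping the JAR into place; the lib paths are assumptions, so adjust them to wherever Hive and Azkaban actually load JDBC drivers on your cluster, and remove the old connector JAR first:

# Assumed locations; verify against your own install layout.
cp mysql-connector-java-5.1.32-bin.jar /usr/lib/hive/lib/
cp mysql-connector-java-5.1.32-bin.jar /opt/azkaban/lib/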

3. HDFS permission errors when running jobs

HDFS permission checking is usually not needed here, so it can simply be turned off.

Solution:

On the HDFS configuration page, find the property dfs.permissions and uncheck it.
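Outside Cloudera Manager, the equivalent switch is, as far as I know, the dfs.permissions property in hdfs-site.xml (named dfs.permissions.enabled on newer Hadoop releases). A sketch of the entry, placed inside the <configuration> block and followed by a NameNode restart:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>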

4. Uploading a zip package in Azkaban fails

installation Failed.Error chunking

Solution:

Configure this in /etc/my.cnf under the [mysqld] section:

[mysqld]
max_allowed_packet=1024M

or set it on the running instance:

mysql> set global max_allowed_packet=1073741824;
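To confirm the new value, reconnect and check it; the session copy of max_allowed_packet is read once, at connect time, so an existing session will still show the old value:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
# Expect 1073741824 (i.e. 1024M).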

5. Garbled Hive comments and other encoding issues

When creating the Hive metastore database in MySQL, a utf8 database character set makes Hive fail, while latin1 works; but with latin1, Chinese text in table and column comments cannot be displayed. The fix is to switch just the handful of metadata columns that store comments to utf8.

The solution in detail:

When Hive uses MySQL as its metastore, the MySQL character set has to be set to latin1 default:

use hive;
alter database hive character set latin1;

To store the UTF-8 Chinese text, change the character set of only those MySQL columns that hold comments to utf8.

Change the character set for column comments:

alter table COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;

Change the character set for table comments:

alter table TABLE_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;

Change the partition parameters, so that partition keys can be written in Chinese:

alter table PARTITION_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table PARTITION_KEYS modify column PKEY_COMMENT varchar(4000) character set utf8;
flush privileges;

Then change the Hive connection configuration:

javax.jdo.option.ConnectionURL  jdbc:mysql://192.168.0.123:3306/hive?characterEncoding=UTF-8

The output was still garbled after this. It turned out to be the underlying Linux locale. Check it:

[root@testslave01 ~]# locale
LANG=C
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=

It is not UTF-8, so change the encoding by editing /etc/sysconfig/i18n to read:

LANG="en_US.UTF-8"
SYSFONT="latarcyrheb-sun16"

Then run source /etc/sysconfig/i18n and test again; the garbling is gone.
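A quick way to confirm the column change took effect (a sketch; credentials are placeholders):

mysql -uroot -p -e "USE hive; SHOW FULL COLUMNS FROM COLUMNS_V2 LIKE 'COMMENT';"
# The Collation column should now read utf8_general_ci for COMMENT.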

6. Error when deploying the client configuration for Spark on CDH:

Failed to execute command Deploy Client Configuration on service Spark

Check the log in the following directory:

/opt/cloudera-manager/cm-5.11.0/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_8479360117146208094/logs

It shows the error:

Error: JAVA_HOME is not set and could not be found.

The JDK was installed correctly, yet the error still appears. The reason is that Spark here does not look for the JDK via the environment variables; it goes straight to /usr/java/default, so it reports that JAVA_HOME cannot be found. Just create the /usr/java directory on the cluster machines and then create the symlink:

ln -s /opt/jdk/jdk1.7.0_79 /usr/java/default
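A minimal sketch for doing this on every cluster machine in one go; the hostnames are placeholders, so substitute your own node list:

for host in node01 node02 node03; do
  ssh "$host" 'mkdir -p /usr/java && ln -snf /opt/jdk/jdk1.7.0_79 /usr/java/default'
done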

Run Deploy Client Configuration again, and it succeeds!

7. Redis installation fails:

First, install gcc and the other build dependencies:

yum -y install gcc gcc-c++ libstdc++-devel

Error 1

jemalloc/jemalloc.h: No such file or directory

The cause is that jemalloc overrides the ANSI C malloc and free functions on Linux. The fix is to add a parameter when running make:

make test MALLOC=libc
make MALLOC=libc
make install MALLOC=libc

Error 2

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK
make[1]: *** [test] Error 1
make[1]: Leaving directory `/mydata/redis/redis-3.0.7/src'

Edit the test script:

vi tests/integration/replication-psync.tcl

Change the after 100 waits in that file to after 500.
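Instead of editing by hand, the same change can be made with a one-liner (assuming every after 100 in this file should become after 500):

sed -i 's/after 100/after 500/g' tests/integration/replication-psync.tcl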


With make test MALLOC=libc everything now passes, but a plain make immediately errors out again; the artifacts from the earlier build need to be cleaned first:

make clean


8. Garbled Chinese when connecting through HiveServer2

Add the following environment variable in the HiveServer2 startup script (HiveServer2 does not pick up the system's underlying locale at startup):

export LANG=zh_CN.UTF-8
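A sketch of wiring this in, assuming a hive-env.sh based setup; the file path is an assumption, and on CDH the same export can instead go into the HiveServer2 environment safety valve in Cloudera Manager:

# Assumed config path; adjust to your installation.
echo 'export LANG=zh_CN.UTF-8' >> /etc/hive/conf/hive-env.sh
# Restart HiveServer2 afterwards for the new locale to take effect.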

9. Error when entering the Hive shell client

log4j:ERROR Could not find value for key log4j.appender.NullAppender
log4j:ERROR Could not instantiate appender named "NullAppender".

Fix Hadoop's log4j configuration. On my CDH cluster, the log4j configuration files live in:

/etc/hadoop/conf
/etc/hadoop/conf.cloudera.hdfs
/etc/hadoop/conf.cloudera.yarn

Add the following setting:

vi log4j.properties

log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
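To apply it to all three directories at once, a small sketch (the directory list matches the CDH paths above):

for dir in /etc/hadoop/conf /etc/hadoop/conf.cloudera.hdfs /etc/hadoop/conf.cloudera.yarn; do
  echo 'log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender' >> "$dir/log4j.properties"
done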