Hive 2.1 fails to run MapReduce after installation: a full record of switching to Hadoop 2.6.4!
After extracting Hadoop 2.6.4 to the /opt directory, edit hadoop-env.sh:
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk
Next, edit ~/.bashrc:
#HADOOP VARIABLES START
export JAVA_HOME=/usr/java/jdk
export HADOOP_INSTALL=/opt/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$JAVA_HOME/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export HIVE_HOME=/opt/hive
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
export CLASS_PATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
#HADOOP VARIABLES END
export PATH=$PATH:$JAVA_HOME/bin:$HIVE_HOME/bin:$HADOOP_INSTALL/bin:$SPARK_HOME/bin
If HADOOP_OPTS is instead set to:
HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
then the following warning appears:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
From the Hadoop root directory, use the jar command to run the bundled wordcount example:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount input output
The job completes and prints its counters:
16/11/23 12:46:36 INFO mapreduce.Job: Counters: 33
	File System Counters
		FILE: Number of bytes read=547468
		FILE: Number of bytes written=1054406
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
This confirms that Hadoop's standalone (local) mode is configured correctly. Next comes the pseudo-distributed configuration.
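For reference, the wordcount example simply tokenizes its input and counts occurrences of each word; the same logic fits in a few lines of plain Python (a local sketch, not the Hadoop implementation):

```python
from collections import Counter

def wordcount(text):
    """Count whitespace-separated tokens, like the Hadoop wordcount example."""
    return Counter(text.split())

counts = wordcount("hadoop hive hadoop yarn hive hadoop")
print(counts["hadoop"])  # 3
```

Hadoop distributes the same map (tokenize) and reduce (sum per key) steps across the cluster.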
In the Hadoop root directory, create the tmp, hdfs, hdfs/data, and hdfs/name directories:
dyq@ubuntu:/opt/hadoop-2.6.4$ mkdir tmp
dyq@ubuntu:/opt/hadoop-2.6.4$ mkdir hdfs
dyq@ubuntu:/opt/hadoop-2.6.4$ mkdir hdfs/data
dyq@ubuntu:/opt/hadoop-2.6.4$ mkdir hdfs/name
dyq@ubuntu:/opt/hadoop-2.6.4$ ls
bin  hdfs  input  libexec  NOTICE.txt  README.txt  share
etc  include  lib  LICENSE.txt  output  sbin  tmp
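The same directories can be created in one step from a script; a small Python sketch (it uses a temporary base directory here for safety, where the real base would be /opt/hadoop-2.6.4):

```python
import os
import tempfile

# On the cluster this would be "/opt/hadoop-2.6.4"; a temp dir stands in here
base = tempfile.mkdtemp()
for sub in ("tmp", "hdfs/data", "hdfs/name"):
    # makedirs creates the intermediate hdfs/ directory as needed
    os.makedirs(os.path.join(base, sub), exist_ok=True)
print(sorted(os.listdir(base)))  # ['hdfs', 'tmp']
```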
Configure core-site.xml as follows:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.10:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.6.4/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>
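A quick way to sanity-check which values Hadoop will read is to parse the file with Python's standard library (a sketch; here the XML is inlined where you would normally read etc/hadoop/core-site.xml):

```python
import xml.etree.ElementTree as ET

# In practice: root = ET.parse("/opt/hadoop-2.6.4/etc/hadoop/core-site.xml").getroot()
xml_text = """
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://192.168.0.10:9000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/opt/hadoop-2.6.4/tmp</value></property>
</configuration>
"""
root = ET.fromstring(xml_text)
# Build a {name: value} dict from every <property> element
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}
print(props["fs.defaultFS"])  # hdfs://192.168.0.10:9000
```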
hdfs-site.xml: here /opt/hadoop-2.6.4/hdfs/name and /opt/hadoop-2.6.4/hdfs/data are the directories we just created.
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.6.4/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.6.4/hdfs/data</value>
    </property>
    <!-- This property avoids permission-denied errors when Eclipse reads/writes HDFS later -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
yarn-env.sh
# some Java parameters
export JAVA_HOME=/usr/java/jdk
mapred-env.sh
# some Java parameters
export JAVA_HOME=/usr/java/jdk
mapred-site.xml (create it from mapred-site.xml.template if it does not exist):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobTracker host and port (a legacy Hadoop 1.x property) -->
    <property>
        <name>mapred.job.tracker</name>
        <value>ubuntu:9001</value>
    </property>
</configuration>
Edit yarn-site.xml:
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>ubuntu</value>
    </property>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Format the NameNode:
hadoop namenode -format
Success is reported:
16/11/23 13:29:19 INFO namenode.FSImage: Allocated new BlockPoolId: BP-302189630-192.168.0.10-1479878959376
16/11/23 13:29:19 INFO common.Storage: Storage directory /opt/hadoop/hdfs/name has been successfully formatted.
16/11/23 13:29:19 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/11/23 13:29:19 INFO util.ExitUtil: Exiting with status 0
16/11/23 13:29:19 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/192.168.0.10
************************************************************/
Start HDFS:
dyq@ubuntu:/opt/hadoop$ sbin/start-dfs.sh
16/11/23 13:30:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [ubuntu]
ubuntu: starting namenode, logging to /opt/hadoop-2.6.4/logs/hadoop-dyq-namenode-ubuntu.out
localhost: starting datanode, logging to /opt/hadoop-2.6.4/logs/hadoop-dyq-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.6.4/logs/hadoop-dyq-secondarynamenode-ubuntu.out
16/11/23 13:30:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
dyq@ubuntu:/opt/hadoop$ jps
3016 SecondaryNameNode
3123 Jps
2826 DataNode
2721 NameNode
Now start YARN:
dyq@ubuntu:/opt/hadoop$ sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.6.4/logs/yarn-dyq-resourcemanager-ubuntu.out
localhost: starting nodemanager, logging to /opt/hadoop-2.6.4/logs/yarn-dyq-nodemanager-ubuntu.out
dyq@ubuntu:/opt/hadoop$ jps
3167 ResourceManager
3016 SecondaryNameNode
3497 Jps
2826 DataNode
2721 NameNode
3274 NodeManager
Hadoop web console ports, for reference:
50070: HDFS file browser (NameNode web UI)
8088: ResourceManager
8042: NodeManager
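To verify that the daemons are actually listening, you can probe the ports with a plain TCP connect; a minimal Python sketch (the host is this walkthrough's IP, adjust as needed):

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from the list above
for port in (50070, 8088, 8042):
    print(port, is_port_open("192.168.0.10", port))
```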
Run the wordcount program:
Create the input directory and upload a file (in 2.x, hdfs dfs is the preferred form of hadoop dfs):
hadoop dfs -mkdir /input
hadoop dfs -put README.txt /input/
Then run from the Hadoop home directory:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /input/README.txt /output/1124
If you hit the error:
1/1 local-dirs are bad: /opt/hadoop/tmp/nm-local-dir; 1/1 log-dirs are bad: /opt/hadoop/logs/userlogs
it means the temporary/local directories are misconfigured or not writable; point them at valid, writable paths and the job will run.
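The "local-dirs are bad" message generally means the NodeManager cannot create or write those directories; a quick writability check (a sketch, with the paths from the error above inlined as examples):

```python
import os

def dir_usable(path):
    """A directory is usable if it exists (or can be created) and is writable."""
    try:
        os.makedirs(path, exist_ok=True)
    except OSError:
        return False
    return os.access(path, os.W_OK | os.X_OK)

# Paths reported in the error above (illustrative; adjust to your install)
for d in ("/opt/hadoop/tmp/nm-local-dir", "/opt/hadoop/logs/userlogs"):
    print(d, dir_usable(d))
```

If either check prints False, fix the ownership/permissions of the directory (or point yarn.nodemanager.local-dirs and hadoop.tmp.dir at a directory the Hadoop user owns) and restart YARN.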