Installing Hadoop in Docker: Process and Troubleshooting


This post records the problems I ran into while building a Docker image with HDFS support, and how I solved them.
Here is the Dockerfile I wrote; it may be useful as a reference:

# Creates pseudo distributed hadoop 2.7.2
#
# docker build -t sequenceiq/hadoop .

FROM localhost:5000/my-centos
MAINTAINER xzp

USER root

# install dev tools
RUN yum clean all && yum update -y
RUN rpm --rebuilddb
RUN yum install -y curl which tar sudo openssh-server openssh-clients rsync wget

# update libselinux. see https://github.com/sequenceiq/hadoop-docker/issues/14
RUN yum update -y libselinux

# passwordless ssh
RUN ssh-keygen -q -N "" -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -q -N "" -t rsa -f /etc/ssh/ssh_host_rsa_key
RUN ssh-keygen -q -N "" -t rsa -f /root/.ssh/id_rsa
RUN cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

# java
# Downloading at build time is too slow, so as a shortcut add the local file instead
#RUN curl -LO 'http://download.oracle.com/otn-pub/java/jdk/7u71-b14/jdk-7u71-linux-x64.rpm' -H 'Cookie: oraclelicense=accept-securebackup-cookie'
ADD jdk-7u71-linux-x64.rpm /root/
RUN chmod +x /root/jdk-7u71-linux-x64.rpm
RUN rpm -i /root/jdk-7u71-linux-x64.rpm
RUN rm /root/jdk-7u71-linux-x64.rpm
ENV JAVA_HOME /usr/java/default
ENV PATH $PATH:$JAVA_HOME/bin
RUN rm /usr/bin/java && ln -s $JAVA_HOME/bin/java /usr/bin/java

# hadoop
# Same as above: use the local tarball instead of downloading
#RUN curl -s http://www.eu.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz | tar -xz -C /usr/local/
COPY hadoop-2.7.2.tar.gz /root/
RUN tar -xzvf /root/hadoop-2.7.2.tar.gz -C /usr/local
RUN cd /usr/local && ln -s ./hadoop-2.7.2 hadoop

ENV HADOOP_PREFIX /usr/local/hadoop
ENV HADOOP_COMMON_HOME /usr/local/hadoop
ENV HADOOP_HDFS_HOME /usr/local/hadoop
ENV HADOOP_MAPRED_HOME /usr/local/hadoop
ENV HADOOP_YARN_HOME /usr/local/hadoop
ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
ENV YARN_CONF_DIR $HADOOP_PREFIX/etc/hadoop

RUN mkdir $HADOOP_PREFIX/input
RUN cp $HADOOP_PREFIX/etc/hadoop/*.xml $HADOOP_PREFIX/input

RUN sed -i '/^export JAVA_HOME/ s:.*:export JAVA_HOME=/usr/java/default\nexport HADOOP_PREFIX=/usr/local/hadoop\nexport HADOOP_HOME=/usr/local/hadoop\n:' $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
RUN sed -i '/^export HADOOP_CONF_DIR/ s:.*:export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/:' $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh

# pseudo distributed
ADD core-site.xml.template $HADOOP_PREFIX/etc/hadoop/core-site.xml.template
RUN sed s/HOSTNAME/localhost/ /usr/local/hadoop/etc/hadoop/core-site.xml.template > /usr/local/hadoop/etc/hadoop/core-site.xml
ADD hdfs-site.xml $HADOOP_PREFIX/etc/hadoop/hdfs-site.xml
ADD mapred-site.xml $HADOOP_PREFIX/etc/hadoop/mapred-site.xml
ADD yarn-site.xml $HADOOP_PREFIX/etc/hadoop/yarn-site.xml

RUN $HADOOP_PREFIX/bin/hdfs namenode -format

# fixing the libhadoop.so like a boss
#RUN rm -rf /usr/local/hadoop/lib/native
#RUN mv /tmp/native /usr/local/hadoop/lib

ADD ssh_config /root/.ssh/config
RUN chmod 600 /root/.ssh/config
RUN chown root:root /root/.ssh/config

# ADD supervisord.conf /etc/supervisord.conf

ADD bootstrap.sh /etc/bootstrap.sh
RUN chown root:root /etc/bootstrap.sh
RUN chmod 700 /etc/bootstrap.sh

ENV BOOTSTRAP /etc/bootstrap.sh

# working around docker.io build error
RUN ls -la /usr/local/hadoop/etc/hadoop/*-env.sh
RUN chmod +x /usr/local/hadoop/etc/hadoop/*-env.sh
RUN ls -la /usr/local/hadoop/etc/hadoop/*-env.sh

# fix the 254 error code
RUN sed -i "/^[^#]*UsePAM/ s/.*/#&/" /etc/ssh/sshd_config
RUN echo "UsePAM no" >> /etc/ssh/sshd_config
RUN echo "Port 2122" >> /etc/ssh/sshd_config

RUN mkdir -p /root/data
RUN mkdir -p /root/name

#RUN . $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
#RUN $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh && $HADOOP_PREFIX/sbin/start-dfs.sh && $HADOOP_PREFIX/bin/hdfs dfs -mkdir -p /user/root
#RUN $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh && $HADOOP_PREFIX/sbin/start-dfs.sh && $HADOOP_PREFIX/bin/hdfs dfs -put $HADOOP_PREFIX/etc/hadoop/ input

#CMD ["/etc/bootstrap.sh", "-d"]
#CMD ["/usr/sbin/sshd","-d"]

# export ports
# Hdfs ports
EXPOSE 50010 50020 50070 50075 50090 8020 9000
# Mapred ports
EXPOSE 10020 19888
# Yarn ports
EXPOSE 8030 8031 8032 8033 8040 8042 8088
# Other ports
EXPOSE 49707 2122
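The Dockerfile ADDs several files from the build context that the post never shows (core-site.xml.template, hdfs-site.xml, mapred-site.xml, yarn-site.xml, ssh_config). As a rough sketch of the two HDFS-related ones, assuming the HOSTNAME placeholder that the sed commands replace and the /root/name and /root/data directories created at the end of the Dockerfile (these exact property mappings are my assumption, not taken from the post), they could be generated like this:

# Hypothetical sketch: create the config files in the build context.
# HOSTNAME is the literal placeholder token replaced later by sed.
cat > core-site.xml.template <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://HOSTNAME:9000</value>
  </property>
</configuration>
EOF

# Single-node setup, hence replication factor 1; the storage dirs match the
# "mkdir -p /root/name" and "mkdir -p /root/data" lines in the Dockerfile
# (mapping them to these properties is an assumption).
cat > hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///root/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///root/data</value>
  </property>
</configuration>
EOF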

After running docker build -t hadoop-docker:2.7.0 ., check the images and start a container:

[root@nsfocus hadoop-docker-test]# docker images
REPOSITORY                   TAG      IMAGE ID       CREATED             SIZE
hadoop-docker                2.7.0    c1f7c8db54bf   43 minutes ago      1.42 GB
test                         latest   9fb729a6501a   About an hour ago   761.2 MB
localhost:5000/dpdk-centos   1.0.1    d8dffc0f4791   4 hours ago         312.7 MB
[root@nsfocus hadoop-docker-test]# docker run -it hadoop-docker:2.7.0 /etc/bootstrap.sh -bash
[root@352cf3c2d73f /]# jps
666 Jps
260 DataNode
447 SecondaryNameNode
126 NameNode
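As a quick sanity check (my addition, not part of the original log), HDFS can be exercised from inside the container:

# jps already shows NameNode, DataNode and SecondaryNameNode; now confirm
# that HDFS actually serves requests.
/usr/local/hadoop/bin/hdfs dfsadmin -report          # should report 1 live datanode
/usr/local/hadoop/bin/hdfs dfs -mkdir -p /user/root
/usr/local/hadoop/bin/hdfs dfs -ls /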

OK, mission accomplished.

Notes on the problems encountered and their solutions:

  • The service command is not found inside the container.
    Solution: RUN yum -y install initscripts
  • Call From e45cc3c2b295/172.17.0.6 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused;
    Solution:
    sshd was not running and needs to be started (e.g. RUN /usr/sbin/sshd -D &).
    Since start-dfs.sh connects over ssh to launch the daemons, sshd must be up before the HDFS nodes are started. However, newer centos Docker images do not support starting sshd as a systemd service, so it is best to simply start it as the last step (see the bootstrap script below).
  • WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null. Check your hdfs-site.xml file to ensure namenodes are configured properly.
    Solution: make the configuration in Hadoop's core-site.xml match the actual hostname (this is what the HOSTNAME placeholder in core-site.xml.template is for).
  • How to start sshd
    Write a startup script and pass it to docker run:
-----/etc/bootstrap.sh--------------
#!/bin/bash

: ${HADOOP_PREFIX:=/usr/local/hadoop}

$HADOOP_PREFIX/etc/hadoop/hadoop-env.sh

rm /tmp/*.pid

# altering the core-site configuration
sed s/HOSTNAME/$HOSTNAME/ /usr/local/hadoop/etc/hadoop/core-site.xml.template > /usr/local/hadoop/etc/hadoop/core-site.xml

#service sshd start
/usr/sbin/sshd -D &

$HADOOP_PREFIX/sbin/start-dfs.sh
#$HADOOP_PREFIX/sbin/start-yarn.sh
#$HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver

if [[ $1 == "-d" ]]; then
  while true; do sleep 1000; done
fi

if [[ $1 == "-bash" ]]; then
  /bin/bash
fi

Then specify it at run time:
docker run -it XXX /etc/bootstrap.sh -bash
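To reach the NameNode web UI or the HDFS RPC port from the host (not covered in the original post), the EXPOSEd ports still have to be published; a sketch, keeping the container alive via the script's -d sleep loop:

# Host-side port choices here are arbitrary; any free ports work.
docker run -d -p 50070:50070 -p 9000:9000 hadoop-docker:2.7.0 /etc/bootstrap.sh -d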
