Installing hadoop-2.2.0 on CentOS

Source: Internet · Editor: 程序博客网 · Date: 2024/04/30 13:35
Prerequisite: install the JDK.

1. Download hadoop-2.2.0.tar.gz from http://mirrors.cnnic.cn/apache/hadoop/common/

2. Extract hadoop into /usr:

[root@localhost usr]# tar -xzvf hadoop-2.2.0.tar.gz

3. Create a hadoop user:

[root@localhost usr]# adduser hadoop

4. Configure ssh. When prompted for a passphrase, just press Enter:

[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a8:7a:3e:f6:92:85:b8:c7:be:d9:0e:45:9c:d1:36:3b root@localhost.localdomain
[root@localhost ~]# cd /root
[root@localhost ~]# ls
anaconda-ks.cfg  Desktop  install.log  install.log.syslog
[root@localhost ~]# cd .ssh
[root@localhost .ssh]# cat id_rsa.pub > authorized_keys
[root@localhost .ssh]# ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 41:c8:d4:e4:60:71:6f:6a:33:6a:25:27:62:9b:e3:90.
Are you sure you want to continue connecting (yes/no)?
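If `ssh localhost` still asks for a password after the steps above, the usual cause is over-permissive key files: sshd silently ignores an authorized_keys file that is group- or world-writable. A minimal fix, written against $HOME so it applies to whichever user you run Hadoop as (the session above uses root):

```shell
# Tighten permissions on the files created by ssh-keygen / cat above.
# sshd only honors authorized_keys when ~/.ssh and the file itself are
# private to the owning user.
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```

After this, `ssh localhost` should log in without a password prompt.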
yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Tue Jun 21 22:40:31 2011
[root@localhost ~]#

5. Create a script, exportENV.sh, that exports the environment variables, and load it with `source exportENV.sh`:

export HADOOP_PREFIX="/usr/hadoop-2.2.0"
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}

6. Add the Java path to /root/.bashrc:

vim /root/.bashrc
export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk/
export PATH=${JAVA_HOME}/bin:${PATH}

When you are done editing, load it into the environment:

source /root/.bashrc

7. Edit the configuration files, located in /usr/hadoop-2.2.0/etc/hadoop. If one of the files mentioned below is missing, there will be an xxx.template file for it in the same directory; just copy the template and drop the .template suffix.

7.1. Edit core-site.xml (in 2.x the preferred property name is fs.defaultFS, but the older fs.default.name is still accepted):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
    <final>true</final>
  </property>
</configuration>

7.2. Edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/workspace/hadoop_space/hadoop23/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/workspace/hadoop_space/hadoop23/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

7.3. Edit mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>file:/home/hadoop/workspace/hadoop_space/hadoop23/mapred/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>file:/home/hadoop/workspace/hadoop_space/hadoop23/mapred/local</value>
    <final>true</final>
  </property>
</configuration>

7.4. Edit yarn-site.xml. Note that as of 2.2.0 the aux-service must be named mapreduce_shuffle (underscore); the pre-2.2 value mapreduce.shuffle is rejected and the NodeManager will fail to start with it:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

8. Format the namenode, from the directory /usr/hadoop-2.2.0/:

# ./bin/hdfs namenode -format

9. Start the HDFS daemons:

# ./sbin/hadoop-daemon.ssh start namenode
# ./sbin/hadoop-daemon.sh start datanode

or start them together:

# ./sbin/start-dfs.sh

10. Start the YARN daemons:

# ./sbin/yarn-daemon.sh start resourcemanager
# ./sbin/yarn-daemon.sh start nodemanager

or together:

# ./sbin/start-yarn.sh

11. Check that the daemons are running:

# jps
2539 NameNode
2744 NodeManager
3075 Jps
3030 DataNode
2691 ResourceManager

12. Browse the UIs: open localhost:8088 for the ResourceManager page (the NameNode web UI is at localhost:50070).
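Before formatting the namenode, it is worth double-checking the address configured in core-site.xml, since HDFS clients connect to exactly that value. A small sketch: it writes a copy of the step 7.1 file to a temporary directory so it runs anywhere, but you would point CONF at your real /usr/hadoop-2.2.0/etc/hadoop instead:

```shell
# Extract the fs.default.name value with sed; this is the address the
# NameNode binds to and the one 'hdfs dfs' commands will use.
CONF=$(mktemp -d)   # stand-in for /usr/hadoop-2.2.0/etc/hadoop
cat > "$CONF/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
    <final>true</final>
  </property>
</configuration>
EOF
sed -n 's:.*<value>\(.*\)</value>.*:\1:p' "$CONF/core-site.xml"
# prints hdfs://localhost:8020
```

If the printed value does not match the URL you plan to use, fix core-site.xml before step 8, because the formatted namenode metadata records the filesystem identity.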
