Ubuntu 12.04 + Hadoop 2.7.3 Environment Setup


Ubuntu 12.04 Hadoop setup
1. Prepare two machines.
2. Enable root login at the login screen:
sudo gedit /etc/lightdm/lightdm.conf

[SeatDefaults]
greeter-session=unity-greeter
user-session=ubuntu
greeter-show-manual-login=true
allow-guest=false

3. Run sudo passwd root and enter a password; you can then log in as root.
4. Prepare the downloads:
hadoop: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
jdk: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
5. Give each machine a static IP address.
6. Set the hostname in /etc/hostname: master on one machine, slave on the other.
7. Add both machines to the /etc/hosts file on each host.
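For example, /etc/hosts on both machines might look like the following (the addresses are placeholders for the static IPs chosen in step 5; substitute your own):

```
127.0.0.1      localhost
192.168.1.10   master
192.168.1.11   slave
```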
8. Install JDK 1.8 under /usr/local/java (pick your own path if you prefer).
Unpack it: tar -zxf <jdk archive name>
Configure the environment variables in /etc/profile:

export JAVA_HOME=/usr/local/java/jdk1.8.0_121
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Then run source /etc/profile to apply the changes.
Test the JDK with java -version; it should print the JDK version number.
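A quick way to check the banner is to grep the version string. This is a sketch: sample_out stands in for the real stderr of `java -version` (a hypothetical capture matching the JDK used in this guide), so the snippet runs anywhere.

```shell
# sample_out mimics the first line printed by `java -version`
# for the JDK installed above (hypothetical capture).
sample_out='java version "1.8.0_121"'
if echo "$sample_out" | grep -q '"1\.8'; then
  result="JDK 8 detected"
else
  result="unexpected JDK version"
fi
echo "$result"
```

Against a live install you would pipe the real output instead: `java -version 2>&1 | grep '"1\.8'`.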
9. Install SSH
(1) Install the client: sudo apt-get install ssh
(2) Test it: ssh localhost and enter the current user's password. No errors means the install succeeded.
(3) Install the SSHD service: sudo apt-get install openssh-server
(4) Confirm it is running: ps -e | grep ssh — if sshd appears, the service is up.
10. Passwordless SSH setup
1) On the master, run ssh-keygen.
2) Copy the generated id_rsa.pub to the slave: scp ~/.ssh/id_rsa.pub root@<slave-ip>:~/.ssh
3) On both master and slave, cd into ~/.ssh and run cat id_rsa.pub >> authorized_keys
4) Finally, run chmod 600 authorized_keys — the permissions must be 600 so that only the owner can write the file.
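The append-and-chmod steps above can be sketched as follows, run against a throwaway directory so it is safe to execute anywhere (no real ~/.ssh is touched; the key content is a fake placeholder):

```shell
# workdir stands in for ~/.ssh on the slave node.
workdir=$(mktemp -d)
# Hypothetical public key copied over from the master in step 2).
echo "ssh-rsa AAAAfakekey master" > "$workdir/id_rsa.pub"
# Step 3): append the master's key to authorized_keys.
cat "$workdir/id_rsa.pub" >> "$workdir/authorized_keys"
# Step 4): restrict permissions, or sshd will refuse the key.
chmod 600 "$workdir/authorized_keys"
perms=$(stat -c %a "$workdir/authorized_keys")
echo "authorized_keys mode: $perms"
```

After this, `ssh slave` from the master should log in without a password prompt.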

11. Hadoop installation
1) Unpack the Hadoop archive: tar -zxvf <hadoop archive name>
2) Create the working directories (under the Hadoop install directory, /usr/local/java/hadoop-2.7.3, to match the paths configured below):
mkdir dfs
mkdir tmp
mkdir dfs/name
mkdir dfs/data
3) Edit core-site.xml under /usr/local/java/hadoop-2.7.3/etc/hadoop and add the following:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/java/hadoop-2.7.3/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>

4) Edit hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/java/hadoop-2.7.3/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/java/hadoop-2.7.3/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

5) Edit mapred-site.xml (in 2.7.3 this file ships as mapred-site.xml.template; copy it to mapred-site.xml first):

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>

6) Edit yarn-site.xml:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>

7) Add the JDK path (JAVA_HOME) to hadoop-env.sh and yarn-env.sh.
8) Edit the slaves file and add the worker node: slave.
9) Copy the Hadoop directory to the slave node: scp -r /usr/local/java/hadoop-2.7.3 root@slave:/usr/local/java/
10) Format the NameNode: from the Hadoop directory, run bin/hdfs namenode -format
11) Start the cluster: from the sbin directory, run ./start-all.sh
12) Stop it with ./stop-all.sh
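After start-all.sh, running `jps` on the master should list the NameNode, SecondaryNameNode, and ResourceManager daemons. The sketch below checks a jps-style listing for them; sample_jps is a hypothetical capture (PIDs invented), not live output, so the snippet runs anywhere.

```shell
# sample_jps mimics `jps` output on a healthy master (hypothetical PIDs).
sample_jps="2101 NameNode
2302 SecondaryNameNode
2488 ResourceManager
2790 Jps"
missing=0
for d in NameNode SecondaryNameNode ResourceManager; do
  # Count any expected daemon absent from the listing.
  echo "$sample_jps" | grep -q "$d" || missing=$((missing+1))
done
echo "missing daemons: $missing"
```

On the slave, `jps` should show DataNode and NodeManager instead; you can also browse the ResourceManager web UI at master:8088 (the port configured above).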
