Hadoop Fully distributed mode


Before Clone

[centos@localhost yum.repos.d]$ sudo nano /etc/hosts
127.0.0.1       localhost
192.168.168.201 s201
192.168.168.202 s202
192.168.168.203 s203
192.168.168.204 s204
[centos@localhost ~]$ sudo nano /etc/hostname
s201
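A quick sanity check (a minimal sketch; the names and addresses are the ones configured above): confirm the hostname is in effect after a reboot and that the static names resolve through /etc/hosts.

[centos@localhost ~]$ hostname              # should print s201 after a reboot (or hostnamectl set-hostname s201)
[centos@localhost ~]$ getent hosts s202     # should return 192.168.168.202 from /etc/hosts
[centos@localhost ~]$ ping -c 1 s201        # resolves and answers locally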

Clone

Edit /etc/sysconfig/network-scripts/ifcfg-eno16777736 (give each clone its own static IP: 192.168.168.202–204)

Edit /etc/hostname (set it to the clone's hostname: s202, s203, or s204)

Restart the network service:
sudo service network restart

Edit /etc/resolv.conf and point it at the DNS server:
nameserver 192.168.168.2

[centos@s201 network-scripts]$ sudo nano /etc/resolv.conf 
nameserver 192.168.168.2
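On each clone (s202 shown here as an example), a minimal check that the static IP, hostname, and name resolution took effect:

[centos@s202 ~]$ hostname                       # should print s202
[centos@s202 ~]$ ip addr show eno16777736       # should list the static IP, e.g. 192.168.168.202
[centos@s202 ~]$ ping -c 1 s201                 # reaches the master by name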

Port

50070  NameNode web UI (HTTP)
50075  DataNode web UI (HTTP)
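Once the daemons are running (see the start-up sketch at the end of this post), these ports can be probed from the shell; a minimal example assuming the hostnames configured above:

[centos@s201 ~]$ curl -s http://s201:50070 | head    # NameNode web UI
[centos@s201 ~]$ curl -s http://s202:50075 | head    # DataNode web UI on a worker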

ssh

Delete all the files under .ssh/

[centos@s201 .ssh]$ rm -rf *
[centos@s201 .ssh]$ ssh-keygen
[centos@s201 .ssh]$ cp id_rsa.pub authorized_keys
[centos@s201 .ssh]$ scp authorized_keys centos@s202:~/.ssh/
[centos@s201 .ssh]$ scp authorized_keys centos@s203:~/.ssh/
[centos@s201 .ssh]$ scp authorized_keys centos@s204:~/.ssh/

Switch to the s202 window:

[centos@s202 .ssh]$ ssh-keygen
[centos@s202 .ssh]$ cat id_rsa.pub >> authorized_keys
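A minimal check from the master that passwordless login now works (hostnames as configured above):

[centos@s201 ~]$ ssh s202 hostname    # should print s202 with no password prompt
[centos@s201 ~]$ ssh s203 hostname
[centos@s201 ~]$ ssh s204 hostname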

Fully distributed mode

**Files to edit: core-site.xml, hdfs-site.xml, mapred-site.xml (copy it from mapred-site.xml.template first), yarn-site.xml, slaves**
[centos@s201 hadoop]$ sudo nano core-site.xml
<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration>                           <property>                        <name>fs.defaultFS</name>                                                <value>hdfs://s201/</value>                                        </property>                        </configuration>   
[centos@s201 hadoop]$ sudo nano hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration>        <property>                                        <name>dfs.replication</name>                                                <value>3</value>                                        </property>                        </configuration>                
[centos@s201 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[centos@s201 hadoop]$ sudo nano mapred-site.xml
<?xml version="1.0"?><configuration>        <property>                                        <name>mapreduce.framework.name</name>                                        <value>yarn</value>                                </property>                        </configuration>                        
[centos@s201 hadoop]$ sudo nano yarn-site.xml 
<?xml version="1.0"?><configuration>        <property>                        <name>yarn.resourcemanager.hostname</name>                                      <value>s201</value>                                </property>                                <property>                                                <name>yarn.nodemanager.aux-services</name>                                      <value>mapreduce_shuffle</value>                                </property>                        </configuration>    

Change slaves

[centos@s201 hadoop]$ nano slaves
s202
s203
s204

JAVA_HOME

[centos@s201 hadoop]$ sudo nano hadoop-env.sh
export JAVA_HOME=/soft/jdk
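To confirm that path is valid (the /soft/jdk location comes from the line above):

[centos@s201 hadoop]$ /soft/jdk/bin/java -version    # should print the JDK version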

Distribute the configuration

[centos@s201 etc]$ cd /soft/hadoop/etc/
[centos@s201 etc]$ scp -r full centos@s202:/soft/hadoop/etc/
[centos@s201 etc]$ scp -r full centos@s203:/soft/hadoop/etc/
[centos@s201 etc]$ scp -r full centos@s204:/soft/hadoop/etc/
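A minimal check that the directory arrived on every node, using the passwordless ssh set up earlier:

[centos@s201 etc]$ for h in s202 s203 s204; do ssh $h ls /soft/hadoop/etc/full; done    # each should list the copied config files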

On s204 (and likewise on the other clones, s202 and s203), point the hadoop config directory at full:

[centos@s204 etc]$ ln -s full hadoop
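To verify the link on every worker from the master (a minimal sketch):

[centos@s201 etc]$ for h in s202 s203 s204; do ssh $h readlink /soft/hadoop/etc/hadoop; done    # each should print "full"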

Delete the Hadoop data under /tmp and the Hadoop logs

[centos@s201 etc]$ cd /tmp
[centos@s201 tmp]$ rm -rf hadoop-centos
[centos@s201 tmp]$ ssh s202 rm -rf /tmp/hadoop-centos
[centos@s201 tmp]$ ssh s203 rm -rf /tmp/hadoop-centos
[centos@s201 tmp]$ ssh s204 rm -rf /tmp/hadoop-centos
[centos@s201 hadoop]$ cd /soft/hadoop/logs
[centos@s201 hadoop]$ rm -rf *
[centos@s201 hadoop]$ ssh s202 rm -rf /soft/hadoop/logs/*
[centos@s201 hadoop]$ ssh s203 rm -rf /soft/hadoop/logs/*
[centos@s201 hadoop]$ ssh s204 rm -rf /soft/hadoop/logs/*
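As a hedged sketch of the typical next step (not shown in the steps above): format the NameNode once on the master, start the daemons, and check the roles with jps, assuming Hadoop's bin and sbin directories are on the PATH.

[centos@s201 ~]$ hdfs namenode -format    # only once, on the master
[centos@s201 ~]$ start-dfs.sh             # NameNode on s201, DataNodes on s202-s204
[centos@s201 ~]$ start-yarn.sh            # ResourceManager on s201, NodeManagers on the workers
[centos@s201 ~]$ jps                      # daemons on the master
[centos@s201 ~]$ ssh s202 jps             # daemons on a worker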