Hadoop 2.6 Fully Distributed Installation


Add all three nodes to /etc/hosts on every machine:

192.168.0.110 master
192.168.0.111 slave1
192.168.0.112 slave2

1. Configure the JDK

Covered in another post; the JDK must be configured on all three machines.
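
As a reminder, the environment variables usually look like the following (a sketch only; the exact install paths are assumptions, adjust them to your setup). Append to ~/.bashrc on all three nodes:

export JAVA_HOME=/usr/java/jdk1.8.0      # assumed JDK install path
export HADOOP_HOME=/usr/hadoop           # matches the paths used in the configs below
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin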

2. Add the hadoop user

groupadd hadoop
useradd -g hadoop hadoop
passwd hadoop
vi /etc/sudoers

In /etc/sudoers, add the line:

hadoop  ALL=(ALL)        NOPASSWD: ALL
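
A quick sanity check (not in the original steps) that the account and the sudoers entry work:

su - hadoop
sudo whoami    # should print "root" without prompting for a password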
3. Passwordless SSH login (as the hadoop user, on master)

ssh localhost
cd ~
ssh-keygen -t rsa -P '' -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
Copy master's id_rsa.pub to each slave (shown for slave1; repeat for slave2):

scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/.ssh

Then, on each slave, append it:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
On the slaves, the .ssh directory must have permissions 700 and authorized_keys 744.
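
Before moving on, it is worth verifying from master that the key setup actually works (a check added here, not in the original):

ssh hadoop@slave1 hostname    # should print "slave1" with no password prompt
ssh hadoop@slave2 hostname    # should print "slave2" with no password prompt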

4. Configure the Hadoop cluster

Edit the slaves file (under $HADOOP_HOME/etc/hadoop):

slave1
slave2

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/hadoop/dfs/data</value>
    </property>
</configuration>
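
The paths referenced above (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) are not created automatically in every environment, so it is safest to create them on all nodes up front; a sketch, assuming the same /usr/hadoop layout as in the configs:

sudo mkdir -p /usr/hadoop/tmp /usr/hadoop/dfs/name /usr/hadoop/dfs/data
sudo chown -R hadoop:hadoop /usr/hadoop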

mapred-site.xml (if it does not exist, rename mapred-site.xml.template to mapred-site.xml)

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Archive the hadoop folder and copy it to slave1 and slave2.
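
A minimal sketch of that copy step, assuming Hadoop is installed under /usr/hadoop as in the configs above:

tar -czf hadoop.tar.gz -C /usr hadoop
scp hadoop.tar.gz hadoop@slave1:/tmp/
scp hadoop.tar.gz hadoop@slave2:/tmp/

Then on each slave:

sudo tar -xzf /tmp/hadoop.tar.gz -C /usr
sudo chown -R hadoop:hadoop /usr/hadoop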


5. Start the cluster

Format the NameNode (run once, on master):

hdfs namenode -format

Then start the daemons on master:

start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
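To shut the cluster down later, the matching stop scripts from Hadoop's sbin directory are run in reverse order:

mr-jobhistory-daemon.sh stop historyserver
stop-yarn.sh
stop-dfs.sh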


Check the daemons with jps:

[hadoop@master sbin]$ jps
6208 JobHistoryServer
5617 NameNode
6242 Jps
5930 ResourceManager
5791 SecondaryNameNode

[hadoop@slave1 hadoop]$ jps
3616 DataNode
3715 NodeManager
3811 Jps

[hadoop@slave2 hadoop]$ jps
5216 DataNode
5315 NodeManager
5412 Jps


Check the DataNodes with hdfs dfsadmin -report:

[hadoop@master sbin]$ hdfs dfsadmin -report
Configured Capacity: 37492883456 (34.92 GB)
Present Capacity: 32955469824 (30.69 GB)
DFS Remaining: 32955461632 (30.69 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.0.111:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2447355904 (2.28 GB)
DFS Remaining: 16299081728 (15.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 86.94%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 16 21:58:20 CST 2017

Name: 192.168.0.112:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2090057728 (1.95 GB)
DFS Remaining: 16656379904 (15.51 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 16 21:58:22 CST 2017
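
The web UIs give the same picture; with the default Hadoop 2.6 ports and the jobhistory address configured above, they are:

http://master:50070     (NameNode / HDFS status)
http://master:8088      (ResourceManager / YARN applications)
http://master:19888     (JobHistory server)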
With that, the fully distributed cluster is up.
