Big Data IMF Legendary Action: How to Set Up an 8-Node Hadoop Distributed Cluster


Hardware: Huawei RH2285 server

2 CPUs, 8 cores / 16 threads

48 GB RAM

380 GB disk





1. Configure Hadoop's global environment variables
Run vi /etc/profile to open the profile file, press i to enter insert mode, append HADOOP_HOME at the end of the file and extend PATH as shown below, then type :wq! to save and exit.
export HADOOP_HOME=/usr/local/hadoop-2.6.0 
export PATH=.:$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin




2. Run source /etc/profile so the new HADOOP_HOME and the updated PATH take effect in the current shell.
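To confirm the variables took effect, a quick sanity check (assuming the paths above):

source /etc/profile
echo $HADOOP_HOME    # should print /usr/local/hadoop-2.6.0
hadoop version       # should report Hadoop 2.6.0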




3. Edit hadoop-env.sh
In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, point JAVA_HOME at the local JDK:

export JAVA_HOME=/usr/local/jdk1.8.0_60
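The same edit can be made non-interactively, which is handy when preparing many machines (a sketch, assuming the JDK path above):

# replace the stock JAVA_HOME line in hadoop-env.sh with the local JDK path
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.8.0_60|' \
    /usr/local/hadoop-2.6.0/etc/hadoop/hadoop-env.sh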


4. Edit core-site.xml
hadoop.tmp.dir is the base directory for Hadoop's working files, and fs.defaultFS points all clients at the NameNode on master, port 9000:


<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp</value>
        <description>Base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
   
</configuration>
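The tmp directory named above can be created up front (a sketch using the path from core-site.xml; Hadoop will also create it on demand):

mkdir -p /usr/local/hadoop-2.6.0/tmp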




5. Edit hdfs-site.xml
dfs.replication of 3 means each HDFS block is stored on three of the eight DataNodes; the NameNode and DataNode directories are placed under hadoop.tmp.dir:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
    </property>
</configuration>
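The name and data directories can likewise be pre-created (a sketch using the paths from hdfs-site.xml; the format and startup steps below create them on demand as well):

mkdir -p /usr/local/hadoop-2.6.0/tmp/dfs/name   # NameNode metadata (master)
mkdir -p /usr/local/hadoop-2.6.0/tmp/dfs/data   # block storage (workers)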


6. Edit the slaves file
The slaves file lists the hosts that will run a DataNode and a NodeManager:
root@master:/usr/local/hadoop-2.6.0/etc/hadoop# cat slaves
worker1
worker2
worker3
worker4
worker5
worker6
worker7
worker8
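The same file can be generated in one line (a sketch, assuming the worker1..worker8 hostnames used throughout):

for i in $(seq 1 8); do echo "worker$i"; done > /usr/local/hadoop-2.6.0/etc/hadoop/slaves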


7. Distribute Hadoop to the workers
root@master:/usr/local# cd  setup_scripts
root@master:/usr/local/setup_scripts# ls
host_scp.sh  ssh_config.sh  ssh_scp.sh
root@master:/usr/local/setup_scripts# 


root@master:/usr/local/setup_scripts# cat hadoop_scp.sh


#!/bin/sh
# Push /etc/profile and the Hadoop installation to workers 1-8
# (192.168.189.2 through 192.168.189.9).
for i in 2 3 4 5 6 7 8 9
do
  scp -rq /etc/profile root@192.168.189.$i:/etc/profile
  # No need to ssh in and source /etc/profile here: sourcing over a
  # non-interactive ssh session only affects that throwaway shell, and
  # login shells on the workers read /etc/profile automatically.
  scp -rq /usr/local/hadoop-2.6.0 root@192.168.189.$i:/usr/local/hadoop-2.6.0
done




root@master:/usr/local/setup_scripts#
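With the script in place it can be run from the same directory (a hypothetical invocation; the filename comes from the listing above):

root@master:/usr/local/setup_scripts# sh hadoop_scp.sh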


8. Execution complete
Once the script finishes, every worker has the same /etc/profile and the same Hadoop tree under /usr/local.


9. Format HDFS with hdfs namenode -format
This initializes the NameNode metadata directory. Do it only once, on first setup: reformatting generates a new cluster ID, after which the existing DataNodes refuse to register.


root@master:/usr/local/setup_scripts#  cd /usr/local/hadoop-2.6.0/bin
root@master:/usr/local/hadoop-2.6.0/bin# hdfs namenode -format
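A successful format leaves a fresh metadata directory behind (a quick check, using the name dir from hdfs-site.xml):

ls /usr/local/hadoop-2.6.0/tmp/dfs/name/current
# expect VERSION, seen_txid and an initial fsimage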


10. Start the cluster
root@master:/usr/local/hadoop-2.6.0/sbin# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: Warning: Permanently added 'master,192.168.189.1' (ECDSA) to the list of known hosts.
master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
worker6: Warning: Permanently added 'worker6' (ECDSA) to the list of known hosts.
worker7: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker7.out
worker6: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker6.out
worker5: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker5.out
worker4: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker4.out
worker3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out
worker8: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker8.out
worker2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out
worker1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-master.out
worker1: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker1.out
worker2: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker2.out
worker5: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker5.out
worker7: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker7.out
worker6: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker6.out
worker4: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker4.out
worker3: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker3.out
worker8: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker8.out
root@master:/usr/local/hadoop-2.6.0/sbin# 




root@master:/usr/local/hadoop-2.6.0/sbin# jps
5378 NameNode
5608 SecondaryNameNode
6009 Jps
5742 ResourceManager




root@worker1:/usr/local# jps
3866 DataNode
4077 Jps
3950 NodeManager
root@worker1:/usr/local# 


root@worker7:/usr/local# jps
3750 NodeManager
3656 DataNode
3865 Jps
root@worker7:/usr/local# 
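Rather than logging in to each of the eight workers, the same check can be looped over ssh (a sketch, assuming the passwordless ssh set up earlier and jps on the remote PATH):

for i in $(seq 1 8); do echo "== worker$i =="; ssh worker$i jps; done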




11. Open the HDFS web UI
Browse to http://192.168.189.1:50070; the DataNode tab at http://192.168.189.1:50070/dfshealth.html#tab-datanode should list all eight workers:


Datanode Information (In operation)

Node                            Last contact  Admin State  Capacity  Used   Non DFS Used  Remaining  Blocks  Block pool used  Failed Volumes  Version
worker6 (192.168.189.7:50010)   2             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker7 (192.168.189.8:50010)   1             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker8 (192.168.189.9:50010)   1             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker1 (192.168.189.2:50010)   2             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker2 (192.168.189.3:50010)   1             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker3 (192.168.189.4:50010)   2             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker4 (192.168.189.5:50010)   0             In Service   17.45 GB  24 KB  6.19 GB       11.26 GB   0       24 KB (0%)       0               2.6.0
worker5 (192.168.189.6:50010)   1             In Service   17.45 GB  24 KB  6.19 GB       11.25 GB   0       24 KB (0%)       0               2.6.0

Decommissioning: none




Overview 'master:9000' (active)
Started: Sun Feb 07 14:17:41 CST 2016
Version: 2.6.0, re3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled: 2014-11-13T21:10Z by jenkins from (detached from e349649)
Cluster ID: CID-f4efbd54-7685-450e-b119-5932052252ff

Block Pool ID: BP-367257699-192.168.189.1-1454825792055
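The same health information is available without a browser via the standard HDFS admin report:

hdfs dfsadmin -report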




