Big Data IMF传奇行动: How to Set Up an 8-Node Hadoop Distributed Cluster
Source: Internet · Editor: 程序博客网 · Date: 2024/04/30 14:20
Hardware: Huawei RH2285 servers
2 CPUs, 8 cores / 16 threads
48 GB RAM
380 GB disk
1. Configure Hadoop's global environment variables
Run # vi /etc/profile to open the profile file and press i to enter insert mode. At the end of the file add HADOOP_HOME and update PATH as follows, then type :wq! to save and exit.
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=.:$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin
2. Run source /etc/profile on the command line so the new HADOOP_HOME and PATH settings take effect in the current shell.
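As a quick sanity check, the two exports can be verified in the current shell. This is an illustrative sketch only; the export lines simply mirror what was added to /etc/profile above (a real login shell picks them up automatically):

```shell
# Re-create the profile exports and verify them.
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=.:$PATH:$HADOOP_HOME/bin
echo "$HADOOP_HOME"
# Confirm the Hadoop bin directory is on PATH.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
esac
```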
3. Edit the hadoop-env.sh configuration file to set JAVA_HOME:
export JAVA_HOME=/usr/local/jdk1.8.0_60
4. Edit the core-site.xml core configuration file:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.6.0/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
5. Edit the hdfs-site.xml configuration file:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
  </property>
</configuration>
6. Edit the slaves file
root@master:/usr/local/hadoop-2.6.0/etc/hadoop# cat slaves
worker1
worker2
worker3
worker4
worker5
worker6
worker7
worker8
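The distribution script in step 7 addresses the workers by IP (192.168.189.2-9), so each workerN name listed in slaves must resolve to the matching address on every node. A hedged sketch that prints the /etc/hosts entries this layout assumes (the IP plan is inferred from the script; append the output to /etc/hosts on all machines):

```shell
# Print /etc/hosts entries for the assumed IP plan:
# master is .1, worker1 -> 192.168.189.2 ... worker8 -> 192.168.189.9.
echo "192.168.189.1 master"
for i in 1 2 3 4 5 6 7 8
do
  echo "192.168.189.$((i + 1)) worker$i"
done
```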
7. Distribute Hadoop to the workers
root@master:/usr/local# cd setup_scripts
root@master:/usr/local/setup_scripts# ls
host_scp.sh ssh_config.sh ssh_scp.sh
root@master:/usr/local/setup_scripts#
root@master:/usr/local/setup_scripts# cat hadoop_scp.sh
#!/bin/sh
# Copy the environment settings and the Hadoop install to every worker
# (192.168.189.2-9 are worker1-worker8).
for i in 2 3 4 5 6 7 8 9
do
  scp -rq /etc/profile root@192.168.189.$i:/etc/profile
  # Note: sourcing /etc/profile over ssh only affects that one remote
  # session; new login shells on the worker read /etc/profile anyway.
  ssh root@192.168.189.$i source /etc/profile
  scp -rq /usr/local/hadoop-2.6.0 root@192.168.189.$i:/usr/local/hadoop-2.6.0
done
root@master:/usr/local/setup_scripts#
8. Run the script; when it finishes, distribution is complete.
9. Format HDFS with hdfs namenode -format
root@master:/usr/local/setup_scripts# cd /usr/local/hadoop-2.6.0/bin
root@master:/usr/local/hadoop-2.6.0/bin# hdfs namenode -format
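One caution worth adding here: hdfs namenode -format should be run exactly once. If it is re-run on an existing cluster, the NameNode gets a new clusterID and DataNodes fail to start with an "Incompatible clusterIDs" error until their old storage directories are wiped. A hedged sketch (DATA_DIR defaults to the dfs.datanode.data.dir value configured above; run on each worker only when deliberately reformatting):

```shell
# Clear stale DataNode storage so the node re-registers with the new
# clusterID. DATA_DIR is overridable; the default matches hdfs-site.xml.
DATA_DIR=${DATA_DIR:-/usr/local/hadoop-2.6.0/tmp/dfs/data}
rm -rf "$DATA_DIR/current"
```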
10. Start the cluster
root@master:/usr/local/hadoop-2.6.0/sbin# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: Warning: Permanently added 'master,192.168.189.1' (ECDSA) to the list of known hosts.
master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
worker6: Warning: Permanently added 'worker6' (ECDSA) to the list of known hosts.
worker7: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker7.out
worker6: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker6.out
worker5: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker5.out
worker4: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker4.out
worker3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out
worker8: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker8.out
worker2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out
worker1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-master.out
worker1: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker1.out
worker2: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker2.out
worker5: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker5.out
worker7: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker7.out
worker6: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker6.out
worker4: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker4.out
worker3: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker3.out
worker8: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker8.out
root@master:/usr/local/hadoop-2.6.0/sbin#
root@master:/usr/local/hadoop-2.6.0/sbin# jps
5378 NameNode
5608 SecondaryNameNode
6009 Jps
5742 ResourceManager
root@worker1:/usr/local# jps
3866 DataNode
4077 Jps
3950 NodeManager
root@worker1:/usr/local#
root@worker7:/usr/local# jps
3750 NodeManager
3656 DataNode
3865 Jps
root@worker7:/usr/local#
11. Open http://192.168.189.1:50070 in a browser
http://192.168.189.1:50070/dfshealth.html#tab-datanode
Datanode Information
In operation
Node                          | Last contact | Admin State | Capacity | Used  | Non DFS Used | Remaining | Blocks | Block pool used | Failed Volumes | Version
worker6 (192.168.189.7:50010) | 2            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker7 (192.168.189.8:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker8 (192.168.189.9:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker1 (192.168.189.2:50010) | 2            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker2 (192.168.189.3:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker3 (192.168.189.4:50010) | 2            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker4 (192.168.189.5:50010) | 0            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker5 (192.168.189.6:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.25 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
Decommissioning
Overview 'master:9000' (active)
Started: Sun Feb 07 14:17:41 CST 2016
Version: 2.6.0, re3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled: 2014-11-13T21:10Z by jenkins from (detached from e349649)
Cluster ID: CID-f4efbd54-7685-450e-b119-5932052252ff
Block Pool ID: BP-367257699-192.168.189.1-1454825792055