Hadoop Fully Distributed Setup
Source: Internet · Editor: 程序博客网 · Time: 2024/06/08 11:18
- Modify the hostname

```
[root@localhost:/soft/hadoop2.7/etc/hadoop]nano /etc/hostname
[root@localhost:/soft/hadoop2.7/etc/hadoop]hostname
s130
```
- Modify the host-to-IP mappings

```
[root@localhost:/soft/hadoop2.7/etc/hadoop]nano /etc/hosts
[root@localhost:/soft/hadoop2.7/etc/hadoop]cat /etc/hosts
127.0.0.1 localhost
192.168.109.130 s130
192.168.109.131 s131
192.168.109.132 s132
192.168.109.133 s133
```
- Verify the change

```
[root@localhost:/soft/hadoop2.7/etc/hadoop]ping s130
PING s130 (192.168.109.130) 56(84) bytes of data.
64 bytes from s130 (192.168.109.130): icmp_seq=1 ttl=64 time=0.013 ms
64 bytes from s130 (192.168.109.130): icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from s130 (192.168.109.130): icmp_seq=3 ttl=64 time=0.020 ms
^C
--- s130 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.013/0.023/0.036/0.009 ms
```
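Rather than hand-editing /etc/hosts on every machine, the mapping above can be generated from one list. A minimal sketch using this article's IPs and hostnames; review the output before appending it to /etc/hosts:

```shell
#!/bin/sh
# Sketch: emit the cluster's /etc/hosts entries from a single host list.
# The 192.168.109.130-133 addresses and s130-s133 names come from this article.
gen_hosts() {
  echo "127.0.0.1 localhost"
  i=130
  for name in s130 s131 s132 s133; do
    echo "192.168.109.$i $name"
    i=$((i + 1))          # IPs are consecutive, so just count up
  done
}

# Review first, then append with: gen_hosts >> /etc/hosts
gen_hosts
```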
- Power off the machine, then clone it
- Right-click the VM, choose Manage, then Clone
- Step 2: create a full clone
- Step 3: rename the cloned VM; put the clone in the same directory as the original VM
- Step 4: start cloning

Clone three VMs this way and change each one's IP address accordingly.
- Modify the hostname and IP address of each cloned machine

Start the s131 VM, go to the network configuration directory, and set the IP address, gateway, and DNS in the NIC config file.
- Modify the hostname

```
[root@s130:/usr/bin]nano /etc/hostname
```
- Restart the network service

```
[root@s130:/usr/bin]service network restart
```
- Check that the change took effect

```
[root@s130:/usr/bin]ping s131
PING s131 (192.168.109.131) 56(84) bytes of data.
64 bytes from s131 (192.168.109.131): icmp_seq=1 ttl=64 time=0.014 ms
64 bytes from s131 (192.168.109.131): icmp_seq=2 ttl=64 time=0.060 ms
^C
[root@s130:/usr/bin]ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.109.131 netmask 255.255.255.0 broadcast 192.168.109.255
        inet6 fe80::20c:29ff:fe9d:bddf prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:9d:bd:df txqueuelen 1000 (Ethernet)
        RX packets 1340 bytes 125703 (122.7 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 906 bytes 232938 (227.4 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
- Configure DNS resolution in resolv.conf
- Check that s130 and s131 can reach each other

```
[root@s130:/usr/bin]ping s130
PING s130 (192.168.109.130) 56(84) bytes of data.
64 bytes from s130 (192.168.109.130): icmp_seq=1 ttl=64 time=0.223 ms
64 bytes from s130 (192.168.109.130): icmp_seq=2 ttl=64 time=0.141 ms
64 bytes from s130 (192.168.109.130): icmp_seq=3 ttl=64 time=0.459 ms
^C
```

Configure the other two machines the same way.
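Once all four machines are up, the pairwise ping checks can be done in one loop. A sketch, not from the original article: `ping -c 1` sends a single packet, and DRY_RUN=1 (the default here) prints the commands instead of running them so the loop can be inspected outside the cluster:

```shell
#!/bin/sh
# Sketch: check every node resolves and answers ping, from one host.
# Hostnames are the ones mapped in /etc/hosts above.
DRY_RUN=${DRY_RUN:-1}

check_host() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "ping -c 1 $1"                    # dry run: show the command only
  elif ping -c 1 "$1" > /dev/null 2>&1; then
    echo "$1 ok"
  else
    echo "$1 UNREACHABLE"
  fi
}

for h in s130 s131 s132 s133; do
  check_host "$h"
done
```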
- Prepare key pairs for the fully distributed hosts
- Delete the contents of .ssh on all hosts

```
[root@s131:/root/.ssh]ssh s132 rm -rf /root/.ssh/*
[root@s131:/root/.ssh]ssh s133 rm -rf /root/.ssh/*
[root@s131:/root/.ssh]ssh s132
[root@s132:/root]cd .ssh
[root@s132:/root/.ssh]ls
[root@s130:/root]cd .ssh
[root@s130:/root/.ssh]ls
known_hosts
[root@s130:/root/.ssh]rm -rf *
[root@s130:/root/.ssh]ls
[root@s130:/root/.ssh]
```
- Use s130 as the master and generate a key pair on it

```
[root@s130:/root/.ssh]ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
db:ac:a6:63:71:eb:7c:ba:00:a1:71:61:7c:71:2c:f2 root@s130
The key's randomart image is:
+--[ RSA 2048]----+
|      .o .o.     |
|      .o.o..     |
|      . o+ .     |
|       + .E      |
|      . . S      |
|       .. .+     |
|       .o..o     |
|       ooo..     |
|       ..=*+     |
+-----------------+
[root@s130:/root/.ssh]ls
id_rsa id_rsa.pub
```
- Copy the master's public key to hosts s131-s133

```
[root@s130:/root/.ssh]scp id_rsa.pub root@s130:/root/.ssh/authorized_keys
[root@s130:/root/.ssh]scp id_rsa.pub root@s131:/root/.ssh/authorized_keys
root@s131's password:
id_rsa.pub 100% 391 0.4KB/s 00:00
[root@s130:/root/.ssh]scp id_rsa.pub root@s132:/root/.ssh/authorized_keys
The authenticity of host 's132 (192.168.109.132)' can't be established.
ECDSA key fingerprint is a7:5b:2c:55:73:e9:9a:2e:8d:48:a5:8b:98:dd:f8:05.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's132,192.168.109.132' (ECDSA) to the list of known hosts.
root@s132's password:
id_rsa.pub 100% 391 0.4KB/s 00:00
[root@s130:/root/.ssh]scp id_rsa.pub root@s133:/root/.ssh/authorized_keys
The authenticity of host 's133 (192.168.109.133)' can't be established.
ECDSA key fingerprint is a7:5b:2c:55:73:e9:9a:2e:8d:48:a5:8b:98:dd:f8:05.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's133,192.168.109.133' (ECDSA) to the list of known hosts.
root@s133's password:
id_rsa.pub 100% 391 0.4KB/s 00:00
```
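The four scp commands above can be collapsed into one loop. A sketch under the article's paths and hostnames; note that, like the original commands, it *overwrites* each host's authorized_keys (ssh-copy-id would append instead). DRY_RUN=1 prints the commands for inspection:

```shell
#!/bin/sh
# Sketch: push the master's public key to every cluster host in one loop.
DRY_RUN=${DRY_RUN:-1}

push_key() {
  cmd="scp /root/.ssh/id_rsa.pub root@$1:/root/.ssh/authorized_keys"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"      # dry run: show what would be copied
  else
    $cmd
  fi
}

for h in s130 s131 s132 s133; do
  push_key "$h"
done
```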
- Test passwordless login

```
[root@s130:/root/.ssh]ssh s131
Last login: Fri Dec 22 01:38:12 2017 from s133
[root@s131:/root]exit
logout
Connection to s131 closed.
[root@s130:/root/.ssh]ssh s132
Last login: Fri Dec 22 01:35:47 2017 from s131
[root@s132:/root]exit
logout
Connection to s132 closed.
[root@s130:/root/.ssh]ssh s133
Last login: Fri Dec 22 01:36:01 2017 from s132
[root@s133:/root]exit
logout
Connection to s133 closed.
```
- The tests pass, so hosts s131-s133 can now be operated remotely from the master host s130

```
[root@s130:/root/.ssh]ssh s132 ls -al /root/.ssh
total 12
drwx------. 2 root root   46 Dec 22 01:55 .
dr-xr-x---. 8 root root 4096 Dec 21 09:14 ..
-rw-r--r--. 1 root root  391 Dec 22 01:55 authorized_keys
-rw-r--r--. 1 root root  182 Dec 22 01:35 known_hosts
[root@s130:/root/.ssh]ssh s131 hostname
s131
[root@s130:/root/.ssh]ssh s133 ps -Af
```
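With passwordless ssh in place, any command can be fanned out to all workers from the master. A sketch of this "run everywhere" helper (often called xcall in Hadoop tutorials; it is not part of Hadoop itself, and the worker list is this article's). DRY_RUN=1 prints the commands instead of connecting:

```shell
#!/bin/sh
# Sketch: run one command on every worker over ssh.
DRY_RUN=${DRY_RUN:-1}

xcall() {
  for h in s131 s132 s133; do
    if [ "$DRY_RUN" = "1" ]; then
      echo "ssh $h $*"        # dry run: show each remote invocation
    else
      echo "----- $h -----"
      ssh "$h" "$@"
    fi
  done
}

xcall hostname
```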
- Start the fully distributed configuration

Switch to full mode:

```
[root@s130:/soft/hadoop2.7/etc]ll
total 12
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 21 04:28 full
lrwxrwxrwx. 1 hadoop hadoop    6 Dec 21 04:30 hadoop -> pseudo
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 21 04:10 local
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 21 04:28 pseudo
[root@s130:/soft/hadoop2.7/etc]cd full
[root@s130:/soft/hadoop2.7/etc/full]l
```
Configure core-site.xml:

```
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://s130/</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/soft/hadoop2.7/tmp</value>
</property>
```
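The two properties above go inside core-site.xml's root `<configuration>` element; the fragment alone is not a valid file. A sketch of the complete file, using the hostname and path from this article:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://s130/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/soft/hadoop2.7/tmp</value>
  </property>
</configuration>
```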
Configure the JAVA_HOME environment variable in hadoop-env.sh:

```
export JAVA_HOME=/soft/jdk1.8
```

```
[root@s130:/soft/hadoop2.7/etc/hadoop]scp hadoop-env.sh root@s131:/soft/hadoop2.7/etc/full
hadoop-env.sh 100% 4224 4.1KB/s 00:00
[root@s130:/soft/hadoop2.7/etc/hadoop]scp hadoop-env.sh root@s132:/soft/hadoop2.7/etc/full
hadoop-env.sh 100% 4224 4.1KB/s 00:00
[root@s130:/soft/hadoop2.7/etc/hadoop]scp hadoop-env.sh root@s133:/soft/hadoop2.7/etc/full
hadoop-env.sh
```
Configure hdfs-site.xml:

```
<configuration>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
</configuration>
```

Configure yarn-site.xml:

```
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>s130</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
```

Configure the slaves file (data nodes):
```
[root@s130:/soft/hadoop2.7/etc/hadoop]nano slaves
[root@s130:/soft/hadoop2.7/etc/hadoop]cat slaves
s131
s132
s133
```

```
[root@s130:/soft/hadoop2.7/etc/hadoop]scp slaves root@s131:/soft/hadoop2.7/etc/full
slaves 100% 15 0.0KB/s 00:00
[root@s130:/soft/hadoop2.7/etc/hadoop]scp slaves root@s132:/soft/hadoop2.7/etc/full
slaves 100% 15 0.0KB/s 00:00
[root@s130:/soft/hadoop2.7/etc/hadoop]scp slaves root@s133:/soft/hadoop2.7/etc/full
slaves 100% 15 0.0KB/s 00:00
```
- After the changes, recursively copy the whole full directory to the corresponding directory on hosts s131-s133

```
[root@s130:/soft/hadoop2.7/etc]scp -r full root@s131:/soft/hadoop2.7/etc/
[root@s130:/soft/hadoop2.7/etc]scp -r full root@s132:/soft/hadoop2.7/etc/
[root@s130:/soft/hadoop2.7/etc]scp -r full root@s133:/soft/hadoop2.7/etc/
```
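Distributing a directory to every worker is another job for a loop (the "xsync" pattern seen in many Hadoop tutorials; not part of Hadoop itself). rsync would transfer only changed files, but `scp -r`, as used here, works too. A sketch with this article's paths; DRY_RUN=1 prints the commands:

```shell
#!/bin/sh
# Sketch: copy a local directory to the same parent path on every worker.
DRY_RUN=${DRY_RUN:-1}

xsync() {
  # $1 = local directory, $2 = remote destination directory
  for h in s131 s132 s133; do
    cmd="scp -r $1 root@$h:$2"
    if [ "$DRY_RUN" = "1" ]; then
      echo "$cmd"      # dry run: show each transfer
    else
      $cmd
    fi
  done
}

xsync full /soft/hadoop2.7/etc/
```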
- Delete the old symlink and create a new hadoop link pointing to full mode

```
[root@s130:/soft/hadoop2.7/etc]rm hadoop
[root@s130:/soft/hadoop2.7/etc]ln -s full hadoop
[root@s130:/soft/hadoop2.7/etc]ll
total 12
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 22 02:21 full
lrwxrwxrwx. 1 root   root      4 Dec 22 02:21 hadoop -> full
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 21 04:10 local
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 21 04:28 pseudo
```
- Switch the symlinks on hosts s131-s133 as well

```
[root@s130:/soft/hadoop2.7/etc]ssh s131 rm /soft/hadoop2.7/etc/hadoop
[root@s130:/soft/hadoop2.7/etc]ssh s132 rm /soft/hadoop2.7/etc/hadoop
[root@s130:/soft/hadoop2.7/etc]ssh s133 rm /soft/hadoop2.7/etc/hadoop
[root@s130:/soft/hadoop2.7/etc]ssh s131 ln -s /soft/hadoop2.7/etc/full /soft/hadoop2.7/etc/hadoop
[root@s130:/soft/hadoop2.7/etc]ssh s132 ln -s /soft/hadoop2.7/etc/full /soft/hadoop2.7/etc/hadoop
[root@s130:/soft/hadoop2.7/etc]ssh s133 ln -s /soft/hadoop2.7/etc/full /soft/hadoop2.7/etc/hadoop
```
- Delete the temporary directory files

I changed Hadoop's temporary directory earlier, so there is nothing to delete here.
- Delete the Hadoop run logs

```
[root@s130:/soft/hadoop2.7/logs]rm -rf *
[root@s130:/soft/hadoop2.7/logs]ls
[root@s130:/soft/hadoop2.7/logs]ssh s131 rm -rf /soft/hadoop2.7/logs
[root@s130:/soft/hadoop2.7/logs]ssh s132 rm -rf /soft/hadoop2.7/logs
[root@s130:/soft/hadoop2.7/logs]ssh s133 rm -rf /soft/hadoop2.7/logs
```
- Format the namenode (`hadoop namenode -format` is deprecated in Hadoop 2.x in favor of `hdfs namenode -format`, but both work)

```
[root@s130:/soft/hadoop2.7/etc/hadoop]hadoop namenode -format
```
- Start Hadoop

Check the processes on the master with jps:

```
[root@s130:/soft/hadoop2.7/etc/hadoop]jps
3841 SecondaryNameNode
4273 Jps
4012 ResourceManager
3599 NameNode
```

Check the datanodes:
```
[root@s131:/root]jps
3723 NodeManager
4139 Jps
3565 DataNode
[root@s131:/root]ssh s132
The authenticity of host 's132 (192.168.109.132)' can't be established.
ECDSA key fingerprint is a7:5b:2c:55:73:e9:9a:2e:8d:48:a5:8b:98:dd:f8:05.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's132,192.168.109.132' (ECDSA) to the list of known hosts.
root@s132's password:
Last login: Fri Dec 22 03:38:14 2017 from 192.168.109.1
[root@s132:/root]jps
3444 DataNode
3609 NodeManager
3945 Jps
[root@s132:/root]ssh s133
root@s133's password:
Last login: Fri Dec 22 03:38:12 2017 from 192.168.109.1
[root@s133:/root]jps
3621 NodeManager
3463 DataNode
3979 Jps
```

Open the web UI and you can see that all three data nodes are up.
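Logging in to each worker to run jps, as above, can also be done from the master using the passwordless ssh set up earlier. A sketch (not from the original article) that checks each worker for the expected DataNode and NodeManager processes; DRY_RUN=1 prints the checks instead of connecting:

```shell
#!/bin/sh
# Sketch: verify from the master that every worker runs its daemons.
DRY_RUN=${DRY_RUN:-1}

check_worker() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "ssh $1 jps"                      # dry run: show the remote check
  else
    procs=$(ssh "$1" jps)
    for want in DataNode NodeManager; do
      if echo "$procs" | grep -q "$want"; then
        echo "$1: $want ok"
      else
        echo "$1: $want MISSING"
      fi
    done
  fi
}

for h in s131 s132 s133; do
  check_worker "$h"
done
```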