Hadoop 2.X Distributed Installation

Source: Internet | Editor: 程序博客网 | Date: 2024/05/06

After many days of fiddling, I finally got a Hadoop 2.2.0 distributed setup configured across several machines. Here is a summary of how to do it.

  Prerequisites:
  (1) Install JDK 6 or later on every Linux machine, set JAVA_HOME and so on, and make sure the java, javac, and jps commands work in a terminal. Configuring the JDK is not covered here.
  (2) Install SSH on every Linux machine; for instructions see 《Linux平台下安装SSH》. Configuring passwordless SSH login is covered later in this article.
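Prerequisite (1) can be sanity-checked with a short loop on each machine; this is just a convenience sketch in plain POSIX shell, not part of the original setup:

```shell
# Check that the JDK tools are reachable on PATH; report anything missing.
missing=""
for cmd in java javac jps; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found ($(command -v "$cmd"))"
  else
    echo "$cmd: MISSING - check JAVA_HOME and PATH"
    missing="$missing $cmd"
  fi
done
if [ -z "$missing" ]; then echo "JDK tools look ready"; else echo "fix:$missing"; fi
```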

  With the prerequisites in place, we can install the Hadoop distributed platform. The steps are as follows:

  1. Set a static IP address on each machine

  Since static IP configuration differs across Linux distributions, the steps below cover CentOS, Ubuntu, and Fedora 19:
  (1) On CentOS, set a static IP address as follows:

[wyp@wyp hadoop]$ sudo vim /etc/sysconfig/network-scripts/ifcfg-eth0

Add the following lines to the file (BOOTPROTO and ONBOOT are usually needed as well for the static address to come up at boot):

BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.142.139
NETMASK=255.255.255.0
NETWORK=192.168.142.0

Set IPADDR to whatever address you want; here it is 192.168.142.139.
Once that is saved, restart the network service so the new IP address takes effect:

[wyp@wyp hadoop]$ sudo service network restart
Shutting down interface eth0:  Device state: 3 (disconnected)
                                                           [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/7
                                                           [  OK  ]
[wyp@wyp hadoop]$

Then run ifconfig to verify that the setting took effect:

[wyp@wyp hadoop]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:9F:FB:C0
          inet addr:192.168.142.139  Bcast:192.168.142.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe9f:fbc0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:389330 errors:0 dropped:0 overruns:0 frame:0
          TX packets:171679 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:473612019 (451.6 MiB)  TX bytes:30110196 (28.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:80221 errors:0 dropped:0 overruns:0 frame:0
          TX packets:80221 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1051174395 (1002.4 MiB)  TX bytes:1051174395 (1002.4 MiB)

[wyp@wyp hadoop]$

As you can see, the IP address is now 192.168.142.139!
  (2) On Ubuntu, set a static IP address as follows:

wyp@node1:~$ sudo vim /etc/network/interfaces

Add the following:

auto eth0
iface eth0 inet static
address 192.168.142.140
netmask 255.255.255.0
gateway 192.168.142.1

Again, restart networking so the IP address takes effect:

wyp@node1:~$ sudo /etc/init.d/networking restart

As before, run ifconfig to check that the IP setting took effect; I won't repeat that here.
  (3) On Fedora 19, set a static IP address as follows (other Fedora versions configure static IPs differently, so they are not covered here):

[wyp@wyp network-scripts]$ sudo vim /etc/sysconfig/network-scripts/ifcfg-ens33

Add the following:

IPADDR0=192.168.142.138
NETMASK0=255.255.255.0
GATEWAY0=192.168.142.0

Once that is saved, restart the network service so the new IP address takes effect:

[wyp@wyp network-scripts]$ sudo service network restart
Restarting network (via systemctl):                        [  OK  ]

As before, run ifconfig to check that the IP setting took effect; I won't repeat that here.

  2. Set the hostname of each host

  In step 1 I configured three hosts: one CentOS, one Ubuntu, and one Fedora. They will make up the cluster, with the Fedora host as the master and the other two machines as slaves. This step shows how to change the hostname on each of the three machines:
  (1) On Fedora 19, set the hostname as follows:

[wyp@wyp network-scripts]$ sudo hostnamectl set-hostname master

To check that it took effect, run:

[wyp@wyp network-scripts]$ hostname
master

  (2) On Ubuntu, set the hostname as follows:

wyp@node1:~$ sudo vim /etc/hostname

Put the hostname you want in this file; I use node1 here.
To check that it took effect, run:

wyp@node1:~$ hostname
node1

  (3) On CentOS, set the hostname as follows:

[wyp@node network-scripts]$ sudo vim /etc/sysconfig/network

Change HOSTNAME to the hostname you want; I use node here:

HOSTNAME=node

To check that it took effect, run:

[wyp@node network-scripts]$ hostname
node

  3. On all three machines, add the following entries to /etc/hosts:

[wyp@master ~]$ sudo vim /etc/hosts

Add the following lines:

192.168.142.138 master
192.168.142.139 node
192.168.142.140 node1

These are simply the static IP addresses of the three machines mapped to their hostnames. To check that the change took effect, use ping:

[wyp@master ~]$ ping node
PING node (192.168.142.139) 56(84) bytes of data.
64 bytes from node (192.168.142.139): icmp_seq=1 ttl=64 time=0.541 ms
64 bytes from node (192.168.142.139): icmp_seq=2 ttl=64 time=0.220 ms
^C
--- node ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.220/0.380/0.541/0.161 ms
[wyp@master ~]$

If the ping succeeds, the settings are in effect.

  4. Set up passwordless SSH login

  This blog has already covered how to install SSH (《Linux平台下安装SSH》) and how to configure passwordless SSH login (《Ubuntu和CentOS如何配置SSH使得无密码登陆》), so here I mainly want to point out a few things to watch for. After setting up passwordless SSH on the master host, copy the generated id_dsa.pub file over to node and node1 with the following command:

[wyp@localhost ~]$ cat /home/wyp/.ssh/id_dsa.pub |    \
ssh wyp@192.168.142.139 'cat - >> ~/.ssh/authorized_keys'

  Make sure the SSH service on host 192.168.142.139 is running. The wyp in wyp@192.168.142.139 is the user name you log in to that host as. You can use a similar command to copy id_dsa.pub to host 192.168.142.140 as well.
  Of course, you can also copy the file to the target host with scp (note that unlike the append above, this overwrites the remote authorized_keys):

[wyp@master Documents]$ scp /home/wyp/.ssh/id_dsa.pub     \
wyp@192.168.142.139:~/.ssh/authorized_keys

To check that master can log in to node and node1 without a password, run:

[wyp@master Documents]$ ssh node
The authenticity of host 'node (192.168.142.139)' can't be established.
RSA key fingerprint is ae:99:43:f0:cf:c6:a9:82:6c:93:a1:65:54:70:a6:97.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node,192.168.142.139' (RSA)
to the list of known hosts.
Last login: Wed Nov  6 14:54:55 2013 from master
[wyp@node ~]$

  The first time you run the command you will see the messages above. The [wyp@node ~]$ prompt shows that we logged in to node from master without a password. If you are still asked for a password when logging in to node, the passwordless setup did not succeed; this is usually a file-permission problem, and the fix is described in 《Ubuntu和CentOS如何配置SSH使得无密码登陆》.
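Since permissions are the usual culprit, here is a minimal fix sketch (assuming OpenSSH defaults; run it as the login user on the node you are trying to reach):

```shell
# sshd refuses key-based login when ~/.ssh or authorized_keys is
# group/world-writable; tighten them to the expected modes.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ls -ld ~/.ssh ~/.ssh/authorized_keys
```

If the home directory itself is group-writable, sshd's StrictModes check can also reject the key; `chmod go-w ~` fixes that case.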

  5. Download Hadoop. This article uses hadoop-2.2.0.tar.gz, which you can fetch with the commands below.
  The following operations are all performed on the master machine.

[wyp@wyp /home]$ mkdir -p /home/wyp/Downloads/hadoop
[wyp@wyp /home]$ cd /home/wyp/Downloads/hadoop
[wyp@wyp hadoop]$ wget \
http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz

After these commands finish, hadoop-2.2.0.tar.gz is saved in /home/wyp/Downloads/hadoop. Extract it:

[wyp@wyp hadoop]$ tar -zxvf hadoop-2.2.0.tar.gz

This creates a hadoop-2.2.0 directory inside the hadoop folder. Run the following:

[wyp@wyp hadoop]$ cd hadoop-2.2.0
[wyp@wyp hadoop-2.2.0]$ ls -l
total 56
drwxr-xr-x. 2 wyp wyp  4096 Oct  7 14:38 bin
drwxr-xr-x. 3 wyp wyp  4096 Oct  7 14:38 etc
drwxr-xr-x. 2 wyp wyp  4096 Oct  7 14:38 include
drwxr-xr-x. 3 wyp wyp  4096 Oct  7 14:38 lib
drwxr-xr-x. 2 wyp wyp  4096 Oct  7 14:38 libexec
-rw-r--r--. 1 wyp wyp 15164 Oct  7 14:46 LICENSE.txt
drwxrwxr-x. 3 wyp wyp  4096 Oct 28 14:38 logs
-rw-r--r--. 1 wyp wyp   101 Oct  7 14:46 NOTICE.txt
-rw-r--r--. 1 wyp wyp  1366 Oct  7 14:46 README.txt
drwxr-xr-x. 2 wyp wyp  4096 Oct 28 12:37 sbin
drwxr-xr-x. 4 wyp wyp  4096 Oct  7 14:38 share

This lists the contents of the directory you just extracted.

  6. Configure Hadoop environment variables

[wyp@wyp hadoop]$ sudo vim /etc/profile

Append the following to the end of /etc/profile:

export HADOOP_DEV_HOME=/home/wyp/Downloads/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_DEV_HOME/bin
export PATH=$PATH:$HADOOP_DEV_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop

Save with :wq. To make the settings take effect in the current shell, run the following (source is a shell builtin, so it is run without sudo):

[wyp@wyp hadoop]$ source /etc/profile

Type hadoop in the terminal to check whether the Hadoop environment variables are in effect:

[wyp@node ~]$ hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries
                                                availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create
                                             a hadoop archive
  classpath            prints the classpath needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
[wyp@node ~]$

If you see the usage message above, the environment variables took effect. If not, open a new terminal or log out and back in so that /etc/profile is re-read, then try again (rebooting also works).

  7. Edit the Hadoop configuration files

Edit Hadoop's hadoop-env.sh configuration file to set the JDK path:

[wyp@wyp hadoop]$ vim etc/hadoop/hadoop-env.sh

Find JAVA_HOME and set it to the absolute path of the JDK on your machine:

# The java implementation to use.
export JAVA_HOME=/home/wyp/Downloads/jdk1.7.0_45

Next, edit the core-site.xml, yarn-site.xml, mapred-site.xml, and hdfs-site.xml configuration files in turn:
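One caveat worth noting here (an assumption based on the stock hadoop-2.2.0 tarball layout, not stated in the original): mapred-site.xml does not exist by default and must be created from the bundled template before editing. A defensive sketch:

```shell
# Create mapred-site.xml from the template if it is missing.
# HADOOP_CONF defaults to the install path used earlier in this article.
HADOOP_CONF=${HADOOP_CONF:-/home/wyp/Downloads/hadoop/hadoop-2.2.0/etc/hadoop}
if [ -f "$HADOOP_CONF/mapred-site.xml.template" ]; then
  cp -n "$HADOOP_CONF/mapred-site.xml.template" "$HADOOP_CONF/mapred-site.xml"
  echo "mapred-site.xml ready under $HADOOP_CONF"
else
  echo "template not found under $HADOOP_CONF - adjust HADOOP_CONF to your install"
fi
```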

----------------core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:8020</value>
  <final>true</final>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/wyp/cloud/tmp/hadoop2.0</value>
</property>
  
------------------------- yarn-site.xml
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
 
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
 
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
 
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
</property>
 
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
</property>
 
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
  
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
  
------------------------ mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
  
<property>
    <name>mapred.system.dir</name>
    <value>file:/opt/cloud/hadoop_space/mapred/system</value>
    <final>true</final>
</property>
  
<property>
    <name>mapred.local.dir</name>
    <value>file:/opt/cloud/hadoop_space/mapred/local</value>
    <final>true</final>
</property>
  
----------- hdfs-site.xml 
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/cloud/hadoop_space/dfs/name</value>
    <final>true</final>
</property>
  
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/cloud/hadoop_space/dfs/data</value>
    <description>Determines where on the local
      filesystem a DFS data node should store its blocks.
      If this is a comma-delimited list of directories,
      then data will be stored in all named
      directories, typically on different devices.
      Directories that do not exist are ignored.
    </description>
    <final>true</final>
</property>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
  
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

After Hadoop is configured, copy the entire hadoop-2.2.0 directory to the node and node1 hosts; none of the settings need to change!
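The copy itself can be done with scp -r. The sketch below prints the commands as a dry run so you can review them first; paths and hostnames follow this article's layout (adjust to yours), and dropping the leading echo actually runs them:

```shell
# Dry run: print the scp commands that would push the configured tree
# to each slave (hostnames from the /etc/hosts entries in step 3).
HADOOP_DIR=/home/wyp/Downloads/hadoop/hadoop-2.2.0
for host in node node1; do
  echo scp -r "$HADOOP_DIR" "wyp@$host:${HADOOP_DIR%/*}/"
done
```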

  8. Turn off the firewalls on master, node, and node1

If, when starting the nodemanager on node, you hit a java.net.NoRouteToHostException:

java.net.NoRouteToHostException: No Route to Host from
localhost.localdomain/192.168.142.139 to 192.168.142.138:8031
failed on socket timeout exception: java.net.NoRouteToHostException:
No route to host; For more details see:

http://wiki.apache.org/hadoop/NoRouteToHost

        .................. (much output omitted)

Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

        .................. (much output omitted)

        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
        at org.apache.hadoop.ipc.Client.call(Client.java:1318)
        ... 19 more

then the firewall has not been turned off. Each Linux distribution disables its firewall differently; here is how:
  (1) To turn off the firewall on Ubuntu:

Run: sudo ufw disable
To remove the firewall package entirely, run: sudo apt-get remove iptables

  (2) To turn off the firewall on Fedora, run:

[wyp@wyp hadoop]$ sudo systemctl stop firewalld.service
[wyp@wyp hadoop]$ sudo systemctl disable firewalld.service

  9. Check whether Hadoop runs successfully

  First, format HDFS on master with the following commands:

[wyp@wyp hadoop]$ cd $HADOOP_DEV_HOME
[wyp@wyp hadoop-2.2.0]$ hdfs namenode -format
13/10/28 16:47:33 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************

.............. (much output omitted) ..............

************************************************************/
13/10/28 16:47:33 INFO namenode.NameNode: registered UNIX signal
handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-9931f367-92d3-4693-a706-d83e120cacd6
13/10/28 16:47:34 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/10/28 16:47:34 INFO namenode.HostFileManager: read excludes:
HostSet(
)

.............. (much output omitted) ..............

13/10/28 16:47:38 INFO util.ExitUtil: Exiting with status 0
13/10/28 16:47:38 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wyp/192.168.142.138
************************************************************/
[wyp@wyp hadoop-2.2.0]$

Start the namenode and resourcemanager on master:

[wyp@wyp hadoop-2.2.0]$ sbin/hadoop-daemon.sh start namenode
[wyp@wyp hadoop-2.2.0]$ sbin/yarn-daemon.sh start resourcemanager

Start the datanode and nodemanager on node and node1:

[wyp@wyp hadoop-2.2.0]$ sbin/hadoop-daemon.sh start datanode
[wyp@wyp hadoop-2.2.0]$ sbin/yarn-daemon.sh start nodemanager

To check that the Hadoop cluster is installed correctly, run jps on master; if the NameNode and ResourceManager processes are both present, master is set up correctly.

[wyp@master hadoop]$ jps
2016 NameNode
2602 ResourceManager

Run jps on node (and node1); if the DataNode and NodeManager processes are both present, node (node1) is set up correctly.

[wyp@node network-scripts]$ jps
7889 DataNode
7979 NodeManager
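The jps checks in this step can be wrapped in a small helper; the sketch below is an illustration only (the function name check_daemons is mine, and it degrades gracefully when jps is not on PATH). Pass "NameNode ResourceManager" on master and "DataNode NodeManager" on the slaves:

```shell
# Report whether each expected Hadoop daemon shows up in jps output.
# Returns non-zero if any daemon is missing.
check_daemons() {
  out=$(jps 2>/dev/null || true)
  status=0
  for d in "$@"; do
    if printf '%s\n' "$out" | grep -q "$d"; then
      echo "$d: running"
    else
      echo "$d: NOT running"
      status=1
    fi
  done
  return $status
}
check_daemons NameNode ResourceManager || true
```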