Easily Set Up a hadoop-1.2.1 Cluster (3) -- Configuring the Hadoop Cluster Software


 

1. Install the JDK and Hadoop

Extract the JDK and Hadoop archives.

First give the JDK .bin installer execute permission: chmod u+x jdk-6u45-linux-x64.bin
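The extraction screenshots are not reproduced here. Assuming both archives sit in /usr/local (which matches the JAVA_HOME and HADOOP_HOME values configured below) and the Hadoop tarball is named hadoop-1.2.1.tar.gz, the steps look roughly like this:

    # make the JDK installer executable, then run it to unpack the JDK
    cd /usr/local
    chmod u+x jdk-6u45-linux-x64.bin
    ./jdk-6u45-linux-x64.bin

    # unpack the Hadoop 1.2.1 tarball (archive name assumed)
    tar -zxvf hadoop-1.2.1.tar.gz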


Extraction complete:


 

2. Rename the extracted directories:
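The screenshot is omitted; the renames simply shorten the directory names to the paths used by JAVA_HOME and HADOOP_HOME below (the original directory names are assumed defaults):

    cd /usr/local
    mv jdk1.6.0_45 jdk         # matches JAVA_HOME=/usr/local/jdk
    mv hadoop-1.2.1 hadoop     # matches HADOOP_HOME=/usr/local/hadoop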


 

3. Configure the environment on the hadoop0 host:


Configure the JDK (in /etc/profile):

export JAVA_HOME=/usr/local/jdk
export JRE_HOME=/usr/local/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH


 

Configure Hadoop:

# set hadoop environment
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin


 

Configuration screenshot:


 

Check the JDK version:
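With the screenshot omitted, the check is simply to reload /etc/profile and ask the JDK for its version; the NameNode format log later in this article reports java = 1.6.0_45:

    source /etc/profile
    java -version       # should report 1.6.0_45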


 

 

4. Configure the following files in turn:


4.1 hadoop-env.sh
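The screenshot is not shown; the essential change in conf/hadoop-env.sh is to point JAVA_HOME at the JDK installed above:

    # /usr/local/hadoop/conf/hadoop-env.sh
    export JAVA_HOME=/usr/local/jdk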


 

 

4.2 core-site.xml
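The screenshot is not shown. A minimal core-site.xml for this cluster would look like the sketch below; hadoop0 and the /usr/hadoop/tmp directory both appear in the format log later on, while port 9000 for the NameNode RPC address is an assumption (it is the conventional choice):

    <!-- /usr/local/hadoop/conf/core-site.xml -->
    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://hadoop0:9000</value>   <!-- port 9000 assumed -->
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/hadoop/tmp</value>
        </property>
    </configuration>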


 

4.3 hdfs-site.xml
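The screenshot is not shown. A plausible hdfs-site.xml sets the replication factor to 2 (the cluster has two DataNodes; this value is an assumption) and disables permission checking, which matches isPermissionEnabled=false in the format log:

    <!-- /usr/local/hadoop/conf/hdfs-site.xml -->
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
        <property>
            <name>dfs.permissions</name>
            <value>false</value>
        </property>
    </configuration>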


 

4.4 mapred-site.xml
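The screenshot is not shown. The key setting points the JobTracker at hadoop0, which is where start-all.sh launches it in step 7; port 9001 is the conventional choice and is assumed here:

    <!-- /usr/local/hadoop/conf/mapred-site.xml -->
    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>hadoop0:9001</value>   <!-- port 9001 assumed -->
        </property>
    </configuration>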


 

4.5 masters
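The masters file names the host that runs the SecondaryNameNode; since the startup output in step 7 shows it on 192.168.1.2 (hadoop0), the file presumably contains just one line:

    hadoop0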


 

4.6 slaves
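The slaves file lists the DataNode/TaskTracker hosts; given the startup output on 192.168.1.3 and 192.168.1.4, it presumably contains the two worker hostnames (IP addresses would also work):

    hadoop1
    hadoop2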


 

 

5. Copy /etc/profile, the JDK, and Hadoop from hadoop0 to hadoop1 and hadoop2:

5.1 Copy the JDK to hadoop1
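The screenshot is not reproduced; the copy is presumably done with scp as root, along these lines:

    scp -r /usr/local/jdk root@hadoop1:/usr/local/

The same pattern repeats for hadoop2 and for the /usr/local/hadoop directory in the following subsections.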


Copy succeeded:


 

5.2 Copy the JDK to hadoop2


Copy succeeded:


 

5.3 Copy Hadoop to hadoop1


Copy succeeded:


 

5.4 Copy Hadoop to hadoop2

 


Copy succeeded:


 

5.5 Copy the /etc/profile file:

hadoop1拷贝:


Copy in progress:


 

hadoop1上执行如下命令,是配置文件生效:


 

Check the Java version:


 

 

5.6 Copy to hadoop2:


 

Copy in progress:


 

Run the same command on hadoop2 to make the configuration take effect:


 

Check the Java version:


 

 

 

6. Format HDFS (the NameNode) on hadoop0


 

Full format output:

[root@hadoop0 bin]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.

15/02/20 21:02:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/192.168.1.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
15/02/20 21:02:29 INFO util.GSet: Computing capacity for map BlocksMap
15/02/20 21:02:30 INFO util.GSet: VM type       = 64-bit
15/02/20 21:02:30 INFO util.GSet: 2.0% max memory = 1013645312
15/02/20 21:02:30 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/02/20 21:02:30 INFO util.GSet: recommended=2097152, actual=2097152
15/02/20 21:02:31 INFO namenode.FSNamesystem: fsOwner=root
15/02/20 21:02:31 INFO namenode.FSNamesystem: supergroup=supergroup
15/02/20 21:02:31 INFO namenode.FSNamesystem: isPermissionEnabled=false
15/02/20 21:02:31 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
15/02/20 21:02:31 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
15/02/20 21:02:31 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
15/02/20 21:02:31 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/02/20 21:02:32 INFO common.Storage: Image file /usr/hadoop/tmp/dfs/name/current/fsimage of size 110 bytes saved in 0 seconds.
15/02/20 21:02:32 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/hadoop/tmp/dfs/name/current/edits
15/02/20 21:02:32 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/hadoop/tmp/dfs/name/current/edits
15/02/20 21:02:32 INFO common.Storage: Storage directory /usr/hadoop/tmp/dfs/name has been successfully formatted.
15/02/20 21:02:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/192.168.1.2
************************************************************/


 

 

 

7. Start Hadoop on hadoop0


 

Full startup output:

[root@hadoop0 bin]# start-all.sh
Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out
192.168.1.3: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-hadoop1.out
192.168.1.4: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-hadoop2.out
192.168.1.2: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out
192.168.1.4: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop2.out
192.168.1.3: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop1.out
[root@hadoop0 bin]#


 

 

8. Check the running processes on hadoop0; everything started normally:


 

[root@hadoop0 bin]# jps
2719 SecondaryNameNode
2906 Jps
2574 NameNode
2788 JobTracker
[root@hadoop0 bin]#

Check the running processes on hadoop1; everything started normally:


 

[root@hadoop1 ~]# jps
2509 TaskTracker
2439 DataNode
2602 Jps
[root@hadoop1 ~]#


Check the running processes on hadoop2; everything started normally:


 

[root@hadoop2 ~]# jps
2533 Jps
2449 TaskTracker
2379 DataNode
[root@hadoop2 ~]#

9. View the NameNode web UI

URL: http://192.168.1.2:50070


 

Click Live Nodes in the Cluster Summary:


 

 

View the JobTracker web UI

URL: http://192.168.1.2:50030


 

 

Further down the same page:


 

Click Nodes in the Cluster Summary:


 

 

Shut down the cluster:
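The screenshot is omitted; in Hadoop 1.x the whole cluster is stopped from hadoop0 with stop-all.sh, the counterpart of start-all.sh used above:

    [root@hadoop0 bin]# stop-all.sh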


 

 

The cluster setup is complete.

 

 

 

 
