Hadoop 1.2.1 Cluster Installation, Part 3


Configuring Hadoop

1: Download hadoop-1.2.1.tar.gz

Create a hadoop directory under /home/jifeng:   mkdir hadoop

2: Extract the archive

[jifeng@jifeng01 hadoop]$ ls
hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ tar zxf hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ ls
hadoop-1.2.1  hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$

3: Edit the hadoop-env.sh configuration file

[jifeng@jifeng01 hadoop]$ cd hadoop-1.2.1
[jifeng@jifeng01 hadoop-1.2.1]$ ls
bin          hadoop-ant-1.2.1.jar          ivy          sbin
build.xml    hadoop-client-1.2.1.jar       ivy.xml      share
c++          hadoop-core-1.2.1.jar         lib          src
CHANGES.txt  hadoop-examples-1.2.1.jar     libexec      webapps
conf         hadoop-minicluster-1.2.1.jar  LICENSE.txt
contrib      hadoop-test-1.2.1.jar         NOTICE.txt
docs         hadoop-tools-1.2.1.jar        README.txt
[jifeng@jifeng01 hadoop-1.2.1]$ cd conf
[jifeng@jifeng01 conf]$ ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml        task-log4j.properties
hadoop-metrics2.properties  masters
[jifeng@jifeng01 conf]$ vi hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/home/jifeng/jdk1.7.0_45

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
"hadoop-env.sh" 57L, 2436C written
[jifeng@jifeng01 conf]$ cat hadoop-env.sh

Change the commented line "# export JAVA_HOME" to "export JAVA_HOME=/home/jifeng/jdk1.7.0_45".
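If you prefer to script this edit, a sed one-liner can uncomment and set JAVA_HOME. This is a minimal sketch run against a scratch stand-in file so it is self-contained; on a real node, point the same sed command at hadoop-1.2.1/conf/hadoop-env.sh instead.

```shell
set -e
# Scratch stand-in for conf/hadoop-env.sh; the stock file ships with
# JAVA_HOME commented out, much like this line.
CONF=$(mktemp -d)
printf '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun\n' > "$CONF/hadoop-env.sh"

# Uncomment the line and point it at the JDK path used in this article.
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/home/jifeng/jdk1.7.0_45|' "$CONF/hadoop-env.sh"

# Confirm the change took effect.
grep '^export JAVA_HOME' "$CONF/hadoop-env.sh"
# → export JAVA_HOME=/home/jifeng/jdk1.7.0_45
```

The same edit would need to be repeated (or scripted) on every node if the configuration were not copied over wholesale in step 8.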

4: Edit core-site.xml

Create a tmp directory under the hadoop directory:

[jifeng@jifeng01 hadoop]$ mkdir tmp

[jifeng@jifeng01 conf]$ vi core-site.xml

After editing, the file looks like this:

[jifeng@jifeng01 conf]$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://jifeng01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/jifeng/hadoop/tmp</value>
  </property>
</configuration>

Here fs.default.name is the NameNode URI, and hadoop.tmp.dir must point at the tmp directory created above.
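When scripting the cluster setup, the same file can be generated with a heredoc instead of vi. A sketch writing to a scratch directory (on a real node, write to hadoop-1.2.1/conf/core-site.xml):

```shell
set -e
# Scratch stand-in for hadoop-1.2.1/conf.
CONF=$(mktemp -d)
cat > "$CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://jifeng01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/jifeng/hadoop/tmp</value>
  </property>
</configuration>
EOF
# Quick sanity check: both properties are present.
grep -c '<name>' "$CONF/core-site.xml"
# → 2
```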

5: Edit hdfs-site.xml

After editing, the file looks like this:

[jifeng@jifeng01 conf]$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description></description>
  </property>
</configuration>

dfs.replication controls how many copies of each HDFS block are kept; the default is 3, lowered here to 1.

6: Edit mapred-site.xml

After editing, the file looks like this:

[jifeng@jifeng01 conf]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jifeng01:9001</value>
    <description>NameNode</description>
  </property>
</configuration>
[jifeng@jifeng01 conf]$

mapred.job.tracker is the JobTracker address in host:port form.

7: Edit the masters and slaves files

After editing they look like this:

[jifeng@jifeng01 conf]$ cat masters
jifeng01
[jifeng@jifeng01 conf]$ cat slaves
jifeng02
jifeng03
[jifeng@jifeng01 conf]$

Note that in Hadoop 1.x the masters file names the host that runs the SecondaryNameNode, while slaves lists the DataNode/TaskTracker worker nodes.
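The start scripts read slaves one hostname per line, so the formatting of these files matters. A self-contained sketch that writes both files to a scratch directory and walks the worker list the way the start scripts do:

```shell
set -e
CONF=$(mktemp -d)   # stand-in for hadoop-1.2.1/conf
printf 'jifeng01\n' > "$CONF/masters"
printf 'jifeng02\njifeng03\n' > "$CONF/slaves"

# start-all.sh iterates over slaves roughly like this,
# launching a DataNode and TaskTracker on each listed host.
while read -r host; do
  echo "worker node: $host"
done < "$CONF/slaves"
# → worker node: jifeng02
# → worker node: jifeng03
```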

8: Copy hadoop-1.2.1 to the other two nodes

[jifeng@jifeng01 hadoop]$ scp -r ./hadoop-1.2.1 jifeng02:/home/jifeng/hadoop

[jifeng@jifeng01 hadoop]$ scp -r ./hadoop-1.2.1 jifeng03:/home/jifeng/hadoop
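With more worker nodes, the two scp commands generalize to a loop. This is a dry-run sketch: the function only prints each command via echo; drop the echo to perform the copies for real once SSH access to the workers is in place. The hostnames are the ones used throughout this article.

```shell
# Print the copy command for each worker; remove `echo` to run for real.
push_tree() {
  for node in jifeng02 jifeng03; do
    echo scp -r ./hadoop-1.2.1 "$node:/home/jifeng/hadoop"
  done
}
push_tree
# → scp -r ./hadoop-1.2.1 jifeng02:/home/jifeng/hadoop
# → scp -r ./hadoop-1.2.1 jifeng03:/home/jifeng/hadoop
```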


9: Format the distributed file system

[jifeng@jifeng01 hadoop-1.2.1]$ bin/hadoop namenode -format
14/07/24 10:29:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = jifeng01/10.3.7.214
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
14/07/24 10:29:43 INFO util.GSet: Computing capacity for map BlocksMap
14/07/24 10:29:43 INFO util.GSet: VM type       = 64-bit
14/07/24 10:29:43 INFO util.GSet: 2.0% max memory = 932184064
14/07/24 10:29:43 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/07/24 10:29:43 INFO util.GSet: recommended=2097152, actual=2097152
14/07/24 10:29:43 INFO namenode.FSNamesystem: fsOwner=jifeng
14/07/24 10:29:43 INFO namenode.FSNamesystem: supergroup=supergroup
14/07/24 10:29:43 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/07/24 10:29:43 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/07/24 10:29:43 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/07/24 10:29:43 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/07/24 10:29:43 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/24 10:29:43 INFO common.Storage: Image file /home/jifeng/hadoop/tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/07/24 10:29:44 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/jifeng/hadoop/tmp/dfs/name/current/edits
14/07/24 10:29:44 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/jifeng/hadoop/tmp/dfs/name/current/edits
14/07/24 10:29:44 INFO common.Storage: Storage directory /home/jifeng/hadoop/tmp/dfs/name has been successfully formatted.
14/07/24 10:29:44 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at jifeng01/10.3.7.214
************************************************************/
[jifeng@jifeng01 hadoop-1.2.1]$

The line "Storage directory ... has been successfully formatted" confirms that the format succeeded.

10: Start Hadoop

[jifeng@jifeng01 hadoop-1.2.1]$ bin/start-all.sh
starting namenode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-namenode-jifeng01.out
jifeng03: starting datanode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-datanode-jifeng03.out
jifeng02: starting datanode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-datanode-jifeng02.out
The authenticity of host 'jifeng01 (10.3.7.214)' can't be established.
RSA key fingerprint is a8:9d:34:63:fa:c2:47:4f:81:10:94:fa:4b:ba:08:55.
Are you sure you want to continue connecting (yes/no)? yes
jifeng01: Warning: Permanently added 'jifeng01,10.3.7.214' (RSA) to the list of known hosts.
jifeng@jifeng01's password:
jifeng@jifeng01's password: jifeng01: Permission denied, please try again.
jifeng01: starting secondarynamenode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-secondarynamenode-jifeng01.out
starting jobtracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-jobtracker-jifeng01.out
jifeng03: starting tasktracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-tasktracker-jifeng03.out
jifeng02: starting tasktracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-tasktracker-jifeng02.out
[jifeng@jifeng01 hadoop-1.2.1]$

A password has to be entered here because jifeng01 is not yet set up for passwordless SSH to itself.
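The password prompts can be eliminated: start-all.sh uses SSH even for daemons on the master itself, so jifeng01's own public key must be in its authorized_keys. A self-contained sketch using a scratch directory; on the real master the files live under ~/.ssh, and you would skip key generation if a key pair already exists from the earlier inter-node SSH setup.

```shell
set -e
DIR=$(mktemp -d)   # stand-in for ~/.ssh on jifeng01
# Generate an RSA key pair non-interactively (no passphrase).
ssh-keygen -q -t rsa -N '' -f "$DIR/id_rsa"
# Authorize the host's own key so `ssh jifeng01` from jifeng01 needs no password.
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"
# sshd refuses authorized_keys that is group/world writable.
chmod 600 "$DIR/authorized_keys"
```

After this, rerunning bin/start-all.sh should launch the SecondaryNameNode without prompting.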


11: Check the daemon processes

[jifeng@jifeng01 hadoop-1.2.1]$ jps
4539 JobTracker
4454 SecondaryNameNode
4269 NameNode
4667 Jps
[jifeng@jifeng01 hadoop-1.2.1]$

[jifeng@jifeng02 hadoop]$ jps
2734 TaskTracker
2815 Jps
2647 DataNode
[jifeng@jifeng02 hadoop]$

[jifeng@jifeng03 hadoop]$ jps
4070 Jps
3878 DataNode
3993 TaskTracker
[jifeng@jifeng03 hadoop]$

The master shows NameNode, SecondaryNameNode, and JobTracker; each worker shows DataNode and TaskTracker. The cluster is up.
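The per-node check can be done mechanically by counting the expected daemon names in the jps output. A sketch against the master's captured output above; on a live node, replace the sample string with "$(jps)".

```shell
# Captured jps output from the master in this article.
SAMPLE='4539 JobTracker
4454 SecondaryNameNode
4269 NameNode
4667 Jps'
# The master should run NameNode, SecondaryNameNode, and JobTracker,
# so three lines should match (SecondaryNameNode matches "NameNode" too).
echo "$SAMPLE" | grep -c -E 'NameNode|JobTracker'
# → 3
```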

