Installing and starting Hadoop 2.7.2 on Windows

Source: Internet · Editor: 程序博客网 · Published: 2024/06/15 16:37

On 64-bit Windows there is no need to bother with Cygwin: unpack the official Hadoop tarball locally -> apply a minimal configuration to 4 files -> run 1 startup command -> done. The one prerequisite is that the JDK is already installed on your machine and the Java environment variables are set. The steps are detailed below, using Hadoop 2.7.2 as the example.

  1. Downloading the tarball needs little explanation: go to http://hadoop.apache.org/ -> click Releases on the left -> click a mirror site -> open http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common -> download hadoop-2.7.2.tar.gz;

  2. Unpacking is equally simple: copy the archive to the root of drive D and extract it, which produces the directory D:\hadoop-2.7.2. Set the environment variable HADOOP_HOME to that directory and add %HADOOP_HOME%\bin to PATH. Then download the Windows native utilities from http://download.csdn.net/detail/wuxun1997/9841472, extract them, drop the files into D:\hadoop-2.7.2\bin, and also place a copy of hadoop.dll in C:\Windows\System32;
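If the native files end up in the wrong place, Hadoop fails much later with a cryptic error, so it is worth sanity-checking the layout first. A minimal sketch (the helper below is my own, not part of the original steps; it assumes the winutils bundle contains at least winutils.exe and hadoop.dll):

```python
import os

# Native Windows files Hadoop expects to find in %HADOOP_HOME%\bin
# (assumption based on the winutils bundle described above).
REQUIRED_NATIVE_FILES = ["winutils.exe", "hadoop.dll"]

def missing_native_files(hadoop_home):
    """Return the required native files that are absent from <hadoop_home>/bin."""
    bin_dir = os.path.join(hadoop_home, "bin")
    return [f for f in REQUIRED_NATIVE_FILES
            if not os.path.isfile(os.path.join(bin_dir, f))]
```

Running `missing_native_files(r"D:\hadoop-2.7.2")` should return an empty list once step 2 is complete.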

  3. In D:\hadoop-2.7.2\etc\hadoop, find the following 4 files and paste in the minimal configuration shown for each:

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/data/dfs/datanode</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
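All four files share the same shape: a <configuration> root holding <property> elements with a name/value pair each. As a sketch (my own helper, not part of the setup itself), the minimal configuration can also be generated programmatically, which guarantees the files stay well-formed:

```python
import xml.etree.ElementTree as ET

# The same minimal settings as the four files above.
MINIMAL_CONF = {
    "core-site.xml": {"fs.defaultFS": "hdfs://localhost:9000"},
    "hdfs-site.xml": {
        "dfs.replication": "1",
        "dfs.namenode.name.dir": "file:/hadoop/data/dfs/namenode",
        "dfs.datanode.data.dir": "file:/hadoop/data/dfs/datanode",
    },
    "mapred-site.xml": {"mapreduce.framework.name": "yarn"},
    "yarn-site.xml": {
        "yarn.nodemanager.aux-services": "mapreduce_shuffle",
        "yarn.nodemanager.aux-services.mapreduce.shuffle.class":
            "org.apache.hadoop.mapred.ShuffleHandler",
    },
}

def render_site_xml(props):
    """Render a dict of Hadoop properties as a *-site.xml document string."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")
```

Writing `render_site_xml(MINIMAL_CONF["core-site.xml"])` to D:\hadoop-2.7.2\etc\hadoop\core-site.xml would reproduce the snippet above (minus indentation).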

  4. Open a Windows command prompt, go to hadoop-2.7.2\bin, and run the 2 commands below: first format the NameNode, then start Hadoop.

D:\hadoop-2.7.2\bin>hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/05/13 07:16:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = wulinfeng/192.168.8.5
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.2
STARTUP_MSG:   classpath = D:\hadoop-2.7.2\etc\hadoop;... (jar list omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.8.0_101
************************************************************/
17/05/13 07:16:40 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf
17/05/13 07:16:42 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/13 07:16:42 INFO namenode.FSNamesystem: fsLock is fair:true
... (routine FSNamesystem, BlockManager and util.GSet INFO lines omitted)
17/05/13 07:16:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-664414510-192.168.8.5-1494631003212
17/05/13 07:16:43 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
17/05/13 07:16:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/05/13 07:16:43 INFO util.ExitUtil: Exiting with status 0
17/05/13 07:16:43 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wulinfeng/192.168.8.5
************************************************************/

D:\hadoop-2.7.2\bin>cd ..\sbin

D:\hadoop-2.7.2\sbin>start-all.cmd
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons

D:\hadoop-2.7.2\sbin>jps
4944 DataNode
5860 NodeManager
3532 Jps
7852 NameNode
7932 ResourceManager

D:\hadoop-2.7.2\sbin>

  The jps command shows that all 4 daemons are up, which means Hadoop is now installed and running. You can point a browser at localhost:8088 to watch MapReduce jobs, or at localhost:50070 -> Utilities -> Browse the file system to browse HDFS. On later restarts there is no need to format the NameNode again; just run stop-all.cmd followed by start-all.cmd.
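The jps check can be automated. This is a sketch with a made-up helper (`running_daemons` is not a Hadoop tool); it parses jps output, whose lines are "pid name", and reports which daemon names are present:

```python
# Daemons we expect after start-all.cmd succeeds.
EXPECTED_DAEMONS = {"NameNode", "DataNode", "ResourceManager", "NodeManager"}

def running_daemons(jps_output):
    """Parse `jps` output ("<pid> <name>" per line) into a set of process names."""
    names = set()
    for line in jps_output.strip().splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2:
            names.add(parts[1])
    return names

# The session captured above:
sample = """4944 DataNode
5860 NodeManager
3532 Jps
7852 NameNode
7932 ResourceManager"""
```

`EXPECTED_DAEMONS - running_daemons(sample)` being empty confirms the four daemons came up (jps also lists itself as "Jps", which can be ignored).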

  Starting those 4 daemons pops up 4 console windows; let's take a look at what each of them does during startup:
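Besides reading the logs, a quick TCP probe also tells you whether the daemons came up. A small sketch (my own helper; the port numbers are the defaults that appear in the logs below: 9000 NameNode RPC, 50070 NameNode web UI, 50010/50075 DataNode, 8088 ResourceManager web UI):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("localhost", 50070)` should be True once the NameNode window is up.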

DataNode

************************************************************/
17/05/13 07:18:25 INFO impl.MetricsSystemImpl: DataNode metrics system started
17/05/13 07:18:25 INFO datanode.DataNode: Configured hostname is wulinfeng
17/05/13 07:18:25 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50010
17/05/13 07:18:25 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
... (HttpServer2 filter and Jetty startup lines omitted)
17/05/13 07:18:41 INFO web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
17/05/13 07:18:42 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:50020
17/05/13 07:18:43 INFO common.Storage: Lock on \hadoop\data\dfs\datanode\in_use.lock acquired by nodename 4944@wulinfeng
17/05/13 07:18:43 INFO common.Storage: Storage directory \hadoop\data\dfs\datanode is not formatted for BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Formatting ...
17/05/13 07:18:43 INFO datanode.DataNode: Generated and persisted new Datanode UUID e6e53ca9-b788-4c1c-9308-29b31be28705
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added volume - \hadoop\data\dfs\datanode\current, StorageType: DISK
17/05/13 07:18:44 INFO datanode.DataNode: Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 beginning handshake with NN
17/05/13 07:18:44 INFO datanode.DataNode: Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 successfully registered with NN
17/05/13 07:18:44 INFO datanode.DataNode: Successfully sent block report 0x20e81034dafa, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 5 msec to generate and 91 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
17/05/13 07:18:44 INFO datanode.DataNode: Got finalize command for block pool BP-664414510-192.168.8.5-1494631003212

NameNode

************************************************************/
17/05/13 07:18:24 INFO namenode.NameNode: createNameNode []
17/05/13 07:18:26 INFO impl.MetricsSystemImpl: NameNode metrics system started
17/05/13 07:18:26 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
17/05/13 07:18:26 INFO namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
17/05/13 07:18:28 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
... (the same FSNamesystem, BlockManager and util.GSet INFO lines as during formatting omitted)
17/05/13 07:18:33 INFO common.Storage: Lock on \hadoop\data\dfs\namenode\in_use.lock acquired by nodename 7852@wulinfeng
17/05/13 07:18:34 INFO namenode.FSImage: Loaded image for txid 0 from \hadoop\data\dfs\namenode\current\fsimage_0000000000000000000
17/05/13 07:18:35 INFO namenode.FSNamesystem: Finished loading FSImage in 1331 msecs
17/05/13 07:18:36 INFO namenode.NameNode: RPC server is binding to localhost:9000
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Leaving safe mode after 5 secs
17/05/13 07:18:37 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
17/05/13 07:18:37 INFO namenode.FSNamesystem: Starting services required for active state
17/05/13 07:18:44 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0) storage e6e53ca9-b788-4c1c-9308-29b31be28705
17/05/13 07:18:44 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-f2b82635-0df9-484f-9d12-4364a9279b20 for DN
127.0.0.1:5001017/05/13 07:18:44 INFO BlockStateChange: BLOCK* processReport: from storage DS-f2b82635-0df9-484f-9d12-4364a9279b20 node DatanodeRegistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 msecs

NodeManager

************************************************************/
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
...
17/05/13 07:18:34 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/05/13 07:18:34 INFO impl.MetricsSystemImpl: NodeManager metrics system started
17/05/13 07:18:34 INFO localizer.ResourceLocalizationService: per directory file limit = 8192
17/05/13 07:18:44 WARN containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
17/05/13 07:18:44 INFO containermanager.AuxServices: Adding auxiliary service httpshuffle, "mapreduce_shuffle"
17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl:  Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.WindowsResourceCalculatorPlugin@4ee203eb
17/05/13 07:18:44 WARN monitor.ContainersMonitorImpl: NodeManager configured with 8 G physical memory allocated to containers, which is more than 80% of the total physical memory available (5.6 G). Thrashing might happen.
17/05/13 07:18:44 INFO nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager for null: physical-memory=8192 virtual-memory=17204 virtual-cores=8
17/05/13 07:18:44 INFO ipc.Server: Starting Socket Reader #1 for port 53137
17/05/13 07:18:44 INFO ipc.Server: IPC Server listener on 53137: starting
17/05/13 07:18:44 INFO security.NMContainerTokenSecretManager: Updating node address : wulinfeng:53137
17/05/13 07:18:44 INFO localizer.ResourceLocalizationService: Localizer started on port 8040
17/05/13 07:18:44 INFO mapred.ShuffleHandler: httpshuffle listening on port 13562
17/05/13 07:18:45 INFO containermanager.ContainerManagerImpl: ContainerManager started at wulinfeng/192.168.8.5:53137
17/05/13 07:18:45 INFO webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
...
17/05/13 07:18:48 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
17/05/13 07:18:48 INFO webapp.WebApps: Web app node started at 8042
17/05/13 07:18:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8031
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as wulinfeng:53137 with total resource of <memory:8192, vCores:8>
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
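Note the WARN line in the NodeManager log: the default of 8 GB of container memory exceeds 80% of this machine's 5.6 GB of physical RAM, so YARN warns about thrashing. This can be capped in yarn-site.xml via the standard properties `yarn.nodemanager.resource.memory-mb` and `yarn.scheduler.maximum-allocation-mb`; the 4096 MB value below is only an illustrative choice for a machine of this size, not something from the original setup:

```xml
<configuration>
    <property>
        <!-- Illustrative: cap container memory below physical RAM
             to avoid the thrashing warning above. -->
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <!-- Keep the largest single allocation within the node's cap. -->
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
</configuration>
```

These would be merged into the yarn-site.xml shown in step 3; the warning itself is harmless for a toy single-node setup.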

ResourceManager

************************************************************/
17/05/13 07:18:19 INFO conf.Configuration: found resource core-site.xml at file:/D:/hadoop-2.7.2/etc/hadoop/core-site.xml
17/05/13 07:18:21 INFO conf.Configuration: found resource yarn-site.xml at file:/D:/hadoop-2.7.2/etc/hadoop/yarn-site.xml
17/05/13 07:18:29 INFO resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
17/05/13 07:18:30 INFO impl.MetricsSystemImpl: ResourceManager metrics system started
17/05/13 07:18:30 INFO conf.Configuration: found resource capacity-scheduler.xml at file:/D:/hadoop-2.7.2/etc/hadoop/capacity-scheduler.xml
...
17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8192, vCores:32>>, asynchronousScheduling=false, asyncScheduleInterval=5ms
17/05/13 07:18:30 INFO resourcemanager.ResourceManager: Transitioning to active state
17/05/13 07:18:31 INFO ipc.Server: Starting Socket Reader #1 for port 8031
17/05/13 07:18:32 INFO ipc.Server: IPC Server listener on 8031: starting
17/05/13 07:18:33 INFO ipc.Server: IPC Server listener on 8030: starting
17/05/13 07:18:33 INFO resourcemanager.ResourceManager: Transitioned to active state
17/05/13 07:18:33 INFO ipc.Server: IPC Server listener on 8032: starting
...
17/05/13 07:18:41 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8088
17/05/13 07:18:41 INFO webapp.WebApps: Web app cluster started at 8088
17/05/13 07:18:41 INFO ipc.Server: IPC Server listener on 8033: starting
17/05/13 07:18:49 INFO util.RackResolver: Resolved wulinfeng to /default-rack
17/05/13 07:18:49 INFO resourcemanager.ResourceTrackerService: NodeManager from node wulinfeng(cmPort: 53137 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId wulinfeng:53137
17/05/13 07:18:49 INFO rmnode.RMNodeImpl: wulinfeng:53137 Node Transitioned from NEW to RUNNING
17/05/13 07:18:49 INFO capacity.CapacityScheduler: Added node wulinfeng:53137 clusterResource: <memory:8192, vCores:8>
17/05/13 07:28:30 INFO scheduler.AbstractYarnScheduler: Release request cache is cleaned up
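Taken together, the three logs show one working single-node cluster: HDFS on 9000, the NameNode web console on 50070, the NodeManager console on 8042, and the ResourceManager console on 8088. The little lookup table below is just an illustrative summary of those defaults as read off the logs above; it is not part of any Hadoop API:

```python
# Default endpoints observed in the Hadoop 2.7.x startup logs above.
# Illustrative summary only -- not a Hadoop API.
DEFAULT_UIS = {
    "hdfs-rpc":            "hdfs://localhost:9000",   # fs.defaultFS, used by `hadoop fs` commands
    "namenode-ui":         "http://localhost:50070",  # HDFS web console
    "nodemanager-ui":      "http://localhost:8042",   # per-node YARN console
    "resourcemanager-ui":  "http://localhost:8088",   # cluster/applications console
}

def ui_url(component):
    """Return the default URL for a daemon endpoint, or raise KeyError."""
    return DEFAULT_UIS[component]

if __name__ == "__main__":
    print(ui_url("resourcemanager-ui"))
```

Opening http://localhost:50070 and http://localhost:8088 in a browser is the quickest sanity check that both HDFS and YARN came up.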


Reposted from: https://www.cnblogs.com/wuxun1997/p/6847950.html

