Notes on Hadoop HA Cluster Configuration Problems



The plan is to build an HA cluster with 3 nodes, laid out as follows:

HA cluster plan:

        NameNode  DataNode  JournalNode
node1   yes       yes       yes
node2   yes       yes       yes
node3             yes       yes

hdfs-site.xml is configured as follows:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
    </property>
    <property>
        <name>dfs.ha.nameservice.cluster1</name>
        <value>node1,node2,node3</value>
    </property>
    <!-- ##########################  namenode cluster start ########################## -->
    <property>
        <name>dfs.namenode.rpc-address.cluster1.node1</name>
        <value>node1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.node1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cluster1.node2</name>
        <value>node2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.node2</name>
        <value>node2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cluster1.node3</name>
        <value>node3:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.node3</name>
        <value>node3:50070</value>
    </property>
    <!-- ########################## namenode cluster end ########################## -->
    <property>
        <name>dfs.ha.automatic-failover.enabled.cluster1</name>
        <value>false</value>
    </property>
    <!-- ##########################  journal node cluster start ########################## -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/cluster1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/data/journal_tmp_dir</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- ##########################  journal node cluster end ########################## -->
</configuration>

core-site.xml 

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/data/hadoop_tmp_dir</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>

slaves:

node1
node2
node3

Running bin/hdfs namenode -format produced the following problem:

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
17/06/30 13:16:22 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/06/30 13:16:22 INFO namenode.NameNode: createNameNode [-format]
17/06/30 13:16:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-ea34a456-6ef6-4781-86bd-ef1fea9cf067
17/06/30 13:16:24 INFO namenode.FSNamesystem: No KeyProvider found.
17/06/30 13:16:24 INFO namenode.FSNamesystem: fsLock is fair:true
17/06/30 13:16:25 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/06/30 13:16:25 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/06/30 13:16:25 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/06/30 13:16:25 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jun 30 13:16:25
17/06/30 13:16:25 INFO util.GSet: Computing capacity for map BlocksMap
17/06/30 13:16:25 INFO util.GSet: VM type       = 64-bit
17/06/30 13:16:25 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/06/30 13:16:25 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/06/30 13:16:25 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/06/30 13:16:25 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/06/30 13:16:25 INFO blockmanagement.BlockManager: maxReplication             = 512
17/06/30 13:16:25 INFO blockmanagement.BlockManager: minReplication             = 1
17/06/30 13:16:25 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/06/30 13:16:25 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/06/30 13:16:25 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/06/30 13:16:25 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/06/30 13:16:25 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/06/30 13:16:25 INFO namenode.FSNamesystem: supergroup          = supergroup
17/06/30 13:16:25 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/06/30 13:16:25 INFO namenode.FSNamesystem: Determined nameservice ID: cluster1
17/06/30 13:16:25 INFO namenode.FSNamesystem: HA Enabled: false
17/06/30 13:16:25 WARN namenode.FSNamesystem: Configured NNs:
17/06/30 13:16:25 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
17/06/30 13:16:25 INFO namenode.FSNamesystem: Stopping services started for active state
17/06/30 13:16:25 INFO namenode.FSNamesystem: Stopping services started for standby state
17/06/30 13:16:25 WARN namenode.NameNode: Encountered exception during format: 
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
17/06/30 13:16:25 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
17/06/30 13:16:25 INFO util.ExitUtil: Exiting with status 1
17/06/30 13:16:25 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/172.16.73.143
************************************************************/
[root@node1 hadoop-2.7.2]# 

Searching Baidu for "FSNamesystem initialization failed" turned up an article saying that dfs.ha.namenodes.<nameservice ID> should be added to hdfs-site.xml.

The 2.7.2 documentation explains this parameter as follows:

dfs.ha.namenodes.EXAMPLENAMESERVICE

The prefix for a given nameservice, contains a comma-separated list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).

In other words, you must supply the list of NameNode IDs belonging to the nameservice.

So I added this property to hdfs-site.xml:

    <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>node1,node2,node3</value>
    </property>
Here cluster1 is the nameservice ID (the same value set in dfs.nameservices); the plan is to make it up of these three nodes.
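A missing dfs.ha.namenodes.<nameservice> silently makes the NameNode treat HA as disabled, which is exactly what the "HA Enabled: false" line in the log above showed. A minimal grep-based sanity check can catch this before formatting; the config path and helper name below are only illustrative:

```shell
# Report whether hdfs-site.xml defines the NameNode ID list for a
# nameservice; without it the NameNode treats HA as disabled.
check_ha_namenodes() {
  local conf="$1" ns="$2"
  if grep -qF "dfs.ha.namenodes.${ns}" "$conf" 2>/dev/null; then
    echo "dfs.ha.namenodes.${ns} present"
  else
    echo "dfs.ha.namenodes.${ns} MISSING - HA will be treated as disabled"
  fi
}

check_ha_namenodes etc/hadoop/hdfs-site.xml cluster1
```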

Running bin/hdfs namenode -format again produced a new problem. The earlier "FSNamesystem initialization failed" exception was gone, but now it appeared to be failing to connect to port 8485 on node1, node2, and node3:

STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z

STARTUP_MSG:   java = 1.7.0_76
************************************************************/
17/06/30 13:05:35 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/06/30 13:05:35 INFO namenode.NameNode: createNameNode [-format]
17/06/30 13:05:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-ec2fe77c-4cb1-4248-b003-f6f9bfc1d82f
17/06/30 13:05:37 INFO namenode.FSNamesystem: No KeyProvider found.
17/06/30 13:05:37 INFO namenode.FSNamesystem: fsLock is fair:true
17/06/30 13:05:37 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/06/30 13:05:37 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/06/30 13:05:37 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/06/30 13:05:37 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jun 30 13:05:37
17/06/30 13:05:37 INFO util.GSet: Computing capacity for map BlocksMap
17/06/30 13:05:37 INFO util.GSet: VM type       = 64-bit
17/06/30 13:05:37 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/06/30 13:05:37 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/06/30 13:05:37 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/06/30 13:05:37 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/06/30 13:05:37 INFO blockmanagement.BlockManager: maxReplication             = 512
17/06/30 13:05:37 INFO blockmanagement.BlockManager: minReplication             = 1
17/06/30 13:05:37 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/06/30 13:05:37 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/06/30 13:05:37 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/06/30 13:05:37 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/06/30 13:05:37 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/06/30 13:05:37 INFO namenode.FSNamesystem: supergroup          = supergroup
17/06/30 13:05:37 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/06/30 13:05:37 INFO namenode.FSNamesystem: Determined nameservice ID: cluster1
17/06/30 13:05:37 INFO namenode.FSNamesystem: HA Enabled: true
17/06/30 13:05:37 INFO namenode.FSNamesystem: Append Enabled: true
17/06/30 13:05:38 INFO util.GSet: Computing capacity for map INodeMap
17/06/30 13:05:38 INFO util.GSet: VM type       = 64-bit
17/06/30 13:05:38 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/06/30 13:05:38 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/06/30 13:05:38 INFO namenode.FSDirectory: ACLs enabled? false
17/06/30 13:05:38 INFO namenode.FSDirectory: XAttrs enabled? true
17/06/30 13:05:38 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/06/30 13:05:38 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/06/30 13:05:38 INFO util.GSet: Computing capacity for map cachedBlocks
17/06/30 13:05:38 INFO util.GSet: VM type       = 64-bit
17/06/30 13:05:38 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/06/30 13:05:38 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/06/30 13:05:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/06/30 13:05:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/06/30 13:05:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/06/30 13:05:38 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/06/30 13:05:38 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/06/30 13:05:38 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/06/30 13:05:38 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/06/30 13:05:38 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/06/30 13:05:38 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/06/30 13:05:38 INFO util.GSet: VM type       = 64-bit
17/06/30 13:05:38 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/06/30 13:05:38 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/06/30 13:05:41 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:41 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:41 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:42 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:42 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:42 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:43 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:43 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:43 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:44 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:44 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:44 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:45 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:45 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:45 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:46 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:46 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:46 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:47 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:47 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:47 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:48 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:48 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:48 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:49 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:49 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:49 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:50 INFO ipc.Client: Retrying connect to server: node2/172.16.73.43:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:50 INFO ipc.Client: Retrying connect to server: node3/172.16.73.211:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:50 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/06/30 13:05:50 WARN namenode.NameNode: Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
172.16.73.211:8485: Call From node1/172.16.73.143 to node3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:184)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:987)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
17/06/30 13:05:50 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
172.16.73.211:8485: Call From node1/172.16.73.143 to node3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:184)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:987)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
17/06/30 13:05:50 INFO util.ExitUtil: Exiting with status 1
17/06/30 13:05:50 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/172.16.73.143
************************************************************/
[root@node1 hadoop-2.7.2]# 

A careful check of hdfs-site.xml showed that the hadoop.tmp.dir setting was missing; I added it:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/data/hadoop_tmp_dir</value>
</property>

Running ./hdfs namenode -format on node1 again gave the same error. Just as I was about to give up, I suddenly remembered a tutorial saying that in an HA cluster the JournalNodes must be started first. Then it clicked: what the format step was trying to connect to on port 8485 was the JournalNode service.
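Whether anything is actually listening on 8485 can be checked before formatting. Below is a minimal sketch using bash's built-in /dev/tcp redirection (something like nc -z node1 8485 works just as well if nc is installed); "closed" corresponds to the "Connection refused" seen in the log:

```shell
# Probe each JournalNode host:port; "closed" means the connection was
# refused or the host was unreachable - i.e. no JournalNode is up yet.
check_jn_ports() {
  local port="$1"; shift
  local h
  for h in "$@"; do
    if (exec 3<>"/dev/tcp/$h/$port") 2>/dev/null; then
      echo "$h:$port open"
    else
      echo "$h:$port closed"
    fi
  done
}

check_jn_ports 8485 node1 node2 node3
```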

So I immediately ran the following on each of node1, node2, and node3:

./hadoop-daemon.sh start journalnode

[root@node2 sbin]# ./hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-journalnode-node2.out
[root@node2 sbin]# 

[root@node2 sbin]# jps
3217 JournalNode
3268 Jps
[root@node2 sbin]# 

Running jps on all three nodes shows that the JournalNode service has started normally on each:

[root@node1 sbin]# jps
19444 JournalNode
19495 Jps
[root@node1 sbin]# 
[root@node2 sbin]# jps
3217 JournalNode
3268 Jps
[root@node2 sbin]# 
root@node3> $JAVA_HOME/bin/jps
7830 JournalNode
7901 Jps
root@node3> 
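Logging in to three machines for one command gets tedious. Since this setup already has passwordless root ssh between the nodes (sshfence requires it anyway), the JournalNodes can be started from one place. This is a sketch: the install path is the one used above, and the DRY_RUN switch is just an illustrative convenience for previewing the commands.

```shell
# Run hadoop-daemon.sh on every quorum host over ssh.
# DRY_RUN=1 prints the commands instead of executing them.
start_journalnodes() {
  local cmd='/opt/package/hadoop-2.7.2/sbin/hadoop-daemon.sh start journalnode'
  local h
  for h in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "ssh $h $cmd"
    else
      ssh "$h" "$cmd"
    fi
  done
}

DRY_RUN=1 start_journalnodes node1 node2 node3
```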

Re-running the format, it finally worked this time:

(A long run of classpath information above this point, omitted.....)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
17/06/30 14:59:17 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/06/30 14:59:17 INFO namenode.NameNode: createNameNode [-format]
17/06/30 14:59:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-cfdbd49f-1508-4d0d-81a1-b5b807c98a38
17/06/30 14:59:19 INFO namenode.FSNamesystem: No KeyProvider found.
17/06/30 14:59:19 INFO namenode.FSNamesystem: fsLock is fair:true
17/06/30 14:59:20 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/06/30 14:59:20 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/06/30 14:59:20 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/06/30 14:59:20 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jun 30 14:59:20
17/06/30 14:59:20 INFO util.GSet: Computing capacity for map BlocksMap
17/06/30 14:59:20 INFO util.GSet: VM type       = 64-bit
17/06/30 14:59:20 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/06/30 14:59:20 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/06/30 14:59:20 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/06/30 14:59:20 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/06/30 14:59:20 INFO blockmanagement.BlockManager: maxReplication             = 512
17/06/30 14:59:20 INFO blockmanagement.BlockManager: minReplication             = 1
17/06/30 14:59:20 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/06/30 14:59:20 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/06/30 14:59:20 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/06/30 14:59:20 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/06/30 14:59:20 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/06/30 14:59:20 INFO namenode.FSNamesystem: supergroup          = supergroup
17/06/30 14:59:20 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/06/30 14:59:20 INFO namenode.FSNamesystem: Determined nameservice ID: cluster1
17/06/30 14:59:20 INFO namenode.FSNamesystem: HA Enabled: true
17/06/30 14:59:20 INFO namenode.FSNamesystem: Append Enabled: true
17/06/30 14:59:21 INFO util.GSet: Computing capacity for map INodeMap
17/06/30 14:59:21 INFO util.GSet: VM type       = 64-bit
17/06/30 14:59:21 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/06/30 14:59:21 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/06/30 14:59:21 INFO namenode.FSDirectory: ACLs enabled? false
17/06/30 14:59:21 INFO namenode.FSDirectory: XAttrs enabled? true
17/06/30 14:59:21 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/06/30 14:59:21 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/06/30 14:59:21 INFO util.GSet: Computing capacity for map cachedBlocks
17/06/30 14:59:21 INFO util.GSet: VM type       = 64-bit
17/06/30 14:59:21 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/06/30 14:59:21 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/06/30 14:59:21 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/06/30 14:59:21 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/06/30 14:59:21 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/06/30 14:59:21 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/06/30 14:59:21 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/06/30 14:59:21 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/06/30 14:59:21 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/06/30 14:59:21 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/06/30 14:59:21 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/06/30 14:59:21 INFO util.GSet: VM type       = 64-bit
17/06/30 14:59:21 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/06/30 14:59:21 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/06/30 14:59:23 INFO namenode.FSImage: Allocated new BlockPoolId: BP-918698138-172.16.73.143-1498805963322
17/06/30 14:59:23 INFO common.Storage: Storage directory /opt/data/hadoop_tmp_dir/dfs/name has been successfully formatted.
17/06/30 14:59:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/06/30 14:59:24 INFO util.ExitUtil: Exiting with status 0
17/06/30 14:59:24 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/172.16.73.143
************************************************************/
[root@node1 bin]# 

On node1, run ./hadoop-daemon.sh start namenode to start the NameNode. Thank goodness, everything is still normal so far:

[root@node1 sbin]# ./hadoop-daemon.sh start namenode
starting namenode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-namenode-node1.out
[root@node1 sbin]# jps
19444 JournalNode
19676 Jps
19592 NameNode
[root@node1 sbin]# 

On node2, run ./hdfs namenode -bootstrapStandby to initialize the second NameNode as a standby; this option copies the NameNode metadata over from node1.

[root@node2 bin]# ./hdfs namenode -bootstrapStandby
17/06/30 21:28:18 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node2/172.16.73.43
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.7.2
STARTUP_MSG:   classpath = (omitted......)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
17/06/30 21:28:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/06/30 21:28:18 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
17/06/30 21:28:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
=====================================================
About to bootstrap Standby ID node2 from:
           Nameservice ID: cluster1
        Other Namenode ID: node1
  Other NN's HTTP address: http://node1:50070
  Other NN's IPC  address: node1/172.16.73.143:9000
             Namespace ID: 307407276
            Block pool ID: BP-918698138-172.16.73.143-1498805963322
               Cluster ID: CID-cfdbd49f-1508-4d0d-81a1-b5b807c98a38
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
17/06/30 21:28:22 INFO common.Storage: Storage directory /opt/data/hadoop_tmp_dir/dfs/name has been successfully formatted.
17/06/30 21:28:24 INFO namenode.TransferFsImage: Opening connection to http://node1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:307407276:0:CID-cfdbd49f-1508-4d0d-81a1-b5b807c98a38
17/06/30 21:28:24 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/06/30 21:28:24 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
17/06/30 21:28:24 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 351 bytes.
17/06/30 21:28:24 INFO util.ExitUtil: Exiting with status 0
17/06/30 21:28:24 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node2/172.16.73.43
************************************************************/
[root@node2 bin]# 

A few very important lines here:

namenode.TransferFsImage: Opening connection to http://node1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:307407276:0:CID-cfdbd49f-1508-4d0d-81a1-b5b807c98a38

Image Transfer timeout configured to 60000 milliseconds

Downloaded file fsimage.ckpt_0000000000000000000 size 351 bytes.

Here a connection is opened and TransferFsImage copies the NameNode metadata from node1 over to node2. The transfer timeout is configured at 60000 milliseconds (the transfer itself took only 0.01s), and the image is saved locally as fsimage.ckpt_0000000000000000000. Let's look at the resulting local files:
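The .md5 file sitting next to the downloaded image can be used to check that the copy is intact. As far as I know Hadoop writes the checksum in a "<hash> *<filename>" format that md5sum -c understands; the helper below is a sketch under that assumption, with the path taken from the hadoop.tmp.dir used above:

```shell
# Check an fsimage against the sidecar .md5 file Hadoop writes beside it.
verify_fsimage() {
  local img="$1"
  if ( cd "$(dirname "$img")" && md5sum --status -c "$(basename "$img").md5" ) 2>/dev/null; then
    echo "OK: $img"
  else
    echo "FAILED or missing: $img"
  fi
}

verify_fsimage /opt/data/hadoop_tmp_dir/dfs/name/current/fsimage_0000000000000000000
```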

[root@node2 sbin]# pwd
/opt/package/hadoop-2.7.2/sbin
[root@node2 sbin]# tree /opt/data
/opt/data
├── hadoop_tmp_dir
│   └── dfs
│       └── name
│           ├── current
│           │   ├── fsimage_0000000000000000000
│           │   ├── fsimage_0000000000000000000.md5
│           │   ├── seen_txid
│           │   └── VERSION
│           └── in_use.lock
└── journal_tmp_dir
    └── cluster1
        ├── current
        │   ├── paxos
        │   └── VERSION
        └── in_use.lock

8 directories, 7 files
[root@node2 sbin]# 

Start the NameNode on node2:

[root@node2 sbin]# ./hadoop-daemon.sh start namenode
starting namenode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-namenode-node2.out
[root@node2 sbin]# jps
3458 Jps
3217 JournalNode
3378 NameNode
[root@node2 sbin]# 

You can see that node2 is now running both the NameNode and JournalNode services.

Next, verification.

Check the NameNode on node1 (HTTP port 50070):


Check the NameNode on node2 (HTTP port 50070):


As you can see, the NameNodes on both node1 and node2 are in standby state. Next let's see how to switch one of them to active.

On node1:

[root@node1 bin]# ./hdfs haadmin -failover --forceactive node1 node2
17/06/30 15:56:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Failover from node1 to node2 successful
[root@node1 bin]# 
Here we used hdfs haadmin, the HDFS high-availability administration command. The trailing arguments node1 and node2 mean: make node1 standby and make node2 active. The failover succeeded; now let's check the state of node2's NameNode.


We can see it has switched from standby to active.
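Besides the web UI, the HA state can also be queried from the shell with haadmin's -getServiceState subcommand. The states in the comments below are what one would expect right after this failover, not captured output:

```shell
# Query each NameNode's HA state by its NameNode ID
# (node1/node2, as configured for nameservice cluster1).
./hdfs haadmin -getServiceState node1   # should report: standby
./hdfs haadmin -getServiceState node2   # should report: active
```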

In fact, we can also switch back: make node1 active and node2 standby.

[root@node1 bin]# ./hdfs haadmin -failover --forceactive node2 node1
17/06/30 16:11:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Failover from node2 to node1 successful
[root@node1 bin]#

The switch is reported as successful.

Through node1's NameNode web UI we can see that node1 has changed from standby to active.

Through node2's NameNode web UI we can see that node2 has changed from active to standby — the two states have swapped exactly as expected.

When both NameNodes are in standby, HDFS operations get no response at all, because no NameNode is actually serving.


Now let's try uploading a file to HDFS.

[root@node1 bin]# ./hdfs dfs -put list.txt /
17/06/30 16:17:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/30 16:17:30 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /list.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1459)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1255)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
put: File /list.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

Unfortunately it fails with "There are 0 datanode(s) running and no node(s) are excluded in this operation" — there are no DataNodes.

Run jps to check:

[root@node1 sbin]# jps
20703 Jps
19444 JournalNode
19592 NameNode
[root@node1 sbin]#
Sure enough, there is no DataNode process.

Start the DataNode on node1:

[root@node1 sbin]# ./hadoop-daemon.sh start datanode
starting datanode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-datanode-node1.out
[root@node1 sbin]#


[root@node1 sbin]# jps
19444 JournalNode
20810 Jps
20726 DataNode
19592 NameNode
[root@node1 sbin]#
We can see node1 now has a DataNode service.

But running jps on node2 and node3 shows no DataNode service there. In a cluster deployment this is abnormal. After careful checking, it turned out the wrong command had been used.

It should have been:

./hadoop-daemons.sh start datanode

but what was actually run was:

./hadoop-daemon.sh start datanode

Just one letter 's' missing, and the behavior is completely different: hadoop-daemon.sh starts the daemon on the local node only, while hadoop-daemons.sh starts it on every worker node.
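Conceptually, hadoop-daemons.sh loops over the worker hosts and invokes the single-node script on each of them over SSH. The loop below is an illustrative approximation of that behavior (assuming the slaves file lists node1–node3 and passwordless SSH is set up), not the script's actual contents:

```shell
# Approximate behaviour of hadoop-daemons.sh: run the per-node
# script on every worker host listed in the slaves file.
HADOOP_HOME=/opt/package/hadoop-2.7.2
while read -r host; do
    ssh "$host" "$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
done < "$HADOOP_HOME/etc/hadoop/slaves"
```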


OK, first stop the DataNode on node1:

[root@node1 sbin]# ./hadoop-daemon.sh stop datanode
stopping datanode


This is the correct way:
[root@node1 sbin]# ./hadoop-daemons.sh start datanode
node1: starting datanode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-datanode-node1.out
node2: starting datanode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-datanode-node2.out
node3: starting datanode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-datanode-node3.out
[root@node1 sbin]#

We can see the DataNodes on node1, node2, and node3 all started at once.

OK, now let's test uploading a file to HDFS.

[root@node1 bin]# ./hdfs dfs -put list.txt /data
17/06/30 16:49:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@node1 bin]# ./hdfs dfs -ll /data
-ll: Unknown command
[root@node1 bin]# ./hdfs dfs -ls /data
17/06/30 16:50:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   3 root supergroup        670 2017-06-30 16:49 /data/list.txt
[root@node1 bin]#

Delete the file:

[root@node1 bin]# ./hdfs dfs -rmr /data/list.txt
rmr: DEPRECATED: Please use 'rm -r' instead.
17/06/30 16:52:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/30 16:52:12 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://cluster1/data/list.txt' to trash at: hdfs://cluster1/user/root/.Trash/Current
[root@node1 bin]#

We can see that after deletion the file is moved to the HDFS trash — the recycle-bin directory — so that users can quickly restore it.
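The trash behavior is governed by fs.trash.interval in core-site.xml; the log above shows a deletion interval of 1440 minutes, i.e. trashed files are kept for one day. If a file should be gone immediately, the standard flags are:

```shell
# Permanently delete, bypassing the trash directory entirely:
./hdfs dfs -rm -r -skipTrash /data/list.txt
# Or force an immediate purge of expired trash checkpoints:
./hdfs dfs -expunge
```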

We can look at this trash directory:

[root@node1 bin]# ./hdfs dfs -ls /user/root/.Trash/Current
17/06/30 16:57:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwx------   - root supergroup          0 2017-06-30 16:52 /user/root/.Trash/Current/data
-rw-r--r--   3 root supergroup        670 2017-06-30 16:44 /user/root/.Trash/Current/list.txt
[root@node1 bin]#


Restore the file:

[root@node1 bin]# ./hdfs dfs -mv /user/root/.Trash/Current/list.txt   /user

Check the restored file:
[root@node1 bin]# ./hdfs dfs -ls   /user
Found 2 items
-rw-r--r--   3 root supergroup        670 2017-06-30 16:44 /user/list.txt
drwx------   - root supergroup          0 2017-06-30 16:46 /user/root

So far everything has been done on node1, but since this is a cluster, HDFS can be operated from node2 and node3 as well.

Upload a file from node2:

[root@node2 logs]# hdfs dfs -put hadoop-root-datanode-node2.log   /user
[root@node2 logs]# 


Check the path:

[root@node2 logs]# /opt/package/hadoop-2.7.2/bin/hdfs dfs -ls   /user
Found 3 items
-rw-r--r--   3 root supergroup      27694 2017-06-30 17:05 /user/hadoop-root-datanode-node2.log
-rw-r--r--   3 root supergroup        670 2017-06-30 16:44 /user/list.txt
drwx------   - root supergroup          0 2017-06-30 16:46 /user/root
[root@node2 logs]# 

Upload a file from node3:

root@node3> ./hdfs dfs -put data.txt /


Check the path:

root@node3> ./hdfs dfs -ls /
Found 3 items
drwxr-xr-x   - root supergroup          0 2017-06-30 16:52 /data
-rw-r--r--   3 root supergroup        670 2017-06-30 17:03 /data.txt
drwx------   - root supergroup          0 2017-06-30 17:00 /user
root@node3> 

Now let's try stopping the NameNode on node1:

[root@node1 sbin]# ./hadoop-daemon.sh stop namenode
stopping namenode

Now uploading a file to HDFS fails:

[root@node1 bin]# ./hdfs dfs -put ddd1.txt /user
17/06/30 17:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/30 17:12:55 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over node2/172.16.73.43:9000 after 1 fail over attempts. Trying to fail over after sleeping for 1251ms.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby


At this point node1's NameNode web UI can no longer be opened; node2's NameNode web UI still opens, but it is in standby state. The cluster is therefore unusable. Let's make node2 active to bring the cluster back into service.

[root@node2 bin]# ./hdfs haadmin -failover --forceactive node1 node2
17/06/30 23:32:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/30 23:32:41 INFO ipc.Client: Retrying connect to server: node1/172.16.73.143:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
17/06/30 23:32:41 WARN ha.FailoverController: Unable to gracefully make NameNode at node1/172.16.73.143:9000 standby (unable to connect)
java.net.ConnectException: Call From node2/172.16.73.43 to node1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1479)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy8.transitionToStandby(Unknown Source)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:112)
	at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
	at org.apache.hadoop.ha.FailoverController.failover(FailoverController.java:210)
	at org.apache.hadoop.ha.HAAdmin.failover(HAAdmin.java:295)
	at org.apache.hadoop.ha.HAAdmin.runCmd(HAAdmin.java:455)
	at org.apache.hadoop.hdfs.tools.DFSHAAdmin.runCmd(DFSHAAdmin.java:120)
	at org.apache.hadoop.ha.HAAdmin.run(HAAdmin.java:384)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.hadoop.hdfs.tools.DFSHAAdmin.main(DFSHAAdmin.java:132)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
	at org.apache.hadoop.ipc.Client.call(Client.java:1451)
	... 13 more
17/06/30 23:32:41 INFO ha.NodeFencer: ====== Beginning Service Fencing Process... ======
17/06/30 23:32:41 INFO ha.NodeFencer: Trying method 1/1: org.apache.hadoop.ha.SshFenceByTcpPort(null)
17/06/30 23:32:42 INFO ha.SshFenceByTcpPort: Connecting to node1...
17/06/30 23:32:42 INFO SshFenceByTcpPort.jsch: Connecting to node1 port 22
17/06/30 23:32:42 INFO SshFenceByTcpPort.jsch: Connection established
17/06/30 23:32:42 INFO SshFenceByTcpPort.jsch: Remote version string: SSH-2.0-OpenSSH_5.3
17/06/30 23:32:42 INFO SshFenceByTcpPort.jsch: Local version string: SSH-2.0-JSCH-0.1.42
17/06/30 23:32:42 INFO SshFenceByTcpPort.jsch: CheckCiphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-ctr,arcfour,arcfour128,arcfour256
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: aes256-ctr is not available.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: aes192-ctr is not available.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: aes256-cbc is not available.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: aes192-cbc is not available.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: arcfour256 is not available.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_KEXINIT sent
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_KEXINIT received
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: kex: server->client aes128-ctr hmac-md5 none
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: kex: client->server aes128-ctr hmac-md5 none
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_KEXDH_INIT sent
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: expecting SSH_MSG_KEXDH_REPLY
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: ssh_rsa_verify: signature true
17/06/30 23:32:43 WARN SshFenceByTcpPort.jsch: Permanently added 'node1' (RSA) to the list of known hosts.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_NEWKEYS sent
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_NEWKEYS received
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_SERVICE_REQUEST sent
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: SSH_MSG_SERVICE_ACCEPT received
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Authentications that can continue: gssapi-with-mic,publickey,keyboard-interactive,password
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Next authentication method: gssapi-with-mic
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Authentications that can continue: publickey,keyboard-interactive,password
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Next authentication method: publickey
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Authentication succeeded (publickey).
17/06/30 23:32:43 INFO ha.SshFenceByTcpPort: Connected to node1
17/06/30 23:32:43 INFO ha.SshFenceByTcpPort: Looking for process running on port 9000
17/06/30 23:32:43 INFO ha.SshFenceByTcpPort: Indeterminate response from trying to kill service. Verifying whether it is running using nc...
17/06/30 23:32:43 WARN ha.SshFenceByTcpPort: nc -z node1 9000 via ssh: bash: nc: command not found
17/06/30 23:32:43 INFO ha.SshFenceByTcpPort: Verified that the service is down.
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Disconnecting from node1 port 22
17/06/30 23:32:43 INFO ha.NodeFencer: ====== Fencing successful by method org.apache.hadoop.ha.SshFenceByTcpPort(null) ======
17/06/30 23:32:43 INFO SshFenceByTcpPort.jsch: Caught an exception, leaving main loop due to Socket closed
Failover from node1 to node2 successful
[root@node2 bin]#


We can see it first tries to connect to the NameNode on node1, but that process is already dead, so the connection fails. The sshfence fencing method then kicks in over SSH, verifies the service is down, and node2 is successfully made active.
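Note that -failover first attempts a graceful transition and only then falls back to fencing. When the old active NameNode is known to be dead, another standard haadmin subcommand can transition the survivor directly; use it with care, since it skips the coordination that prevents two simultaneously active NameNodes:

```shell
# Make node2 active directly. --forcemanual suppresses the safety
# prompt; only safe when the other NameNode is confirmed dead.
./hdfs haadmin -transitionToActive --forcemanual node2
# Confirm the result:
./hdfs haadmin -getServiceState node2
```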

But at this point only one of the two NameNodes in the HA pair is serving, so let's start the other NameNode back up.

On node1:

[root@node1 hadoop-2.7.2]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /opt/package/hadoop-2.7.2/logs/hadoop-root-namenode-node1.out
[root@node1 hadoop-2.7.2]# jps
22006 NameNode
19444 JournalNode
20917 DataNode
22048 Jps
[root@node1 hadoop-2.7.2]#

Check node1's NameNode web UI.


We can see this NameNode has started and is in standby state.

Let's check whether the files on HDFS are still there:

root@node3> ./hdfs dfs -ls /
17/06/30 16:36:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x   - root supergroup          0 2017-06-30 16:52 /data
-rw-r--r--   3 root supergroup        670 2017-06-30 17:03 /data.txt
drwx------   - root supergroup          0 2017-06-30 17:05 /user
root@node3>

The two NameNodes went through a deliberately induced failure and a failover, yet the data stored in HDFS is still intact: the shared edit log in the JournalNode cluster keeps both NameNodes' metadata in sync.

OK — at this point we have a fully deployed Hadoop high-availability cluster with manually switchable NameNodes. The beginning was a bit bumpy, but the rest went smoothly. Onward!
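For reference, the manual start-up order used throughout this walkthrough can be condensed into one sequence. This sketch assumes the slaves file lists all three nodes (so hadoop-daemons.sh reaches every JournalNode and DataNode) and that it is run from node1 with passwordless SSH:

```shell
cd /opt/package/hadoop-2.7.2/sbin
# 1. JournalNodes on all nodes (the shared edit log must be up first)
./hadoop-daemons.sh start journalnode
# 2. NameNodes on node1 and node2 (both come up in standby)
./hadoop-daemon.sh start namenode
ssh node2 /opt/package/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode
# 3. DataNodes on all nodes -- note the 's' in daemons
./hadoop-daemons.sh start datanode
# 4. Manually activate one NameNode
../bin/hdfs haadmin -failover --forceactive node1 node2
```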