15/11/18 17:59:30 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 
15/11/18 17:59:30 INFO util.GSet: Computing capacity for map NameNodeRetryCache 
15/11/18 17:59:30 INFO util.GSet: VM type       = 64-bit 
15/11/18 17:59:30 INFO util.GSet: 0.029999999329447746% max memory 3.4 GB = 1.0 MB 
15/11/18 17:59:30 INFO util.GSet: capacity      = 2^17 = 131072 entries 
15/11/18 17:59:30 INFO namenode.NNConf: ACLs enabled? false 
15/11/18 17:59:30 INFO namenode.NNConf: XAttrs enabled? true 
15/11/18 17:59:30 INFO namenode.NNConf: Maximum size of an xattr: 16384 
15/11/18 17:59:30 WARN ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded 
15/11/18 17:59:32 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:32 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:32 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:33 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:33 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:33 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:34 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:34 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:34 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:35 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:35 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:35 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:36 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:36 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:36 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:37 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:37 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:37 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:38 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:38 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:38 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:39 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:39 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:39 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:40 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:40 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:40 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:41 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:41 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:41 WARN namenode.NameNode: Encountered exception during format:  
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown: 
192.168.10.203:8485: Call From hadoop203/192.168.10.203 to hadoop203:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused 
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223) 
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:875) 
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171) 
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:922) 
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354) 
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473) 
15/11/18 17:59:41 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
15/11/18 17:59:41 FATAL namenode.NameNode: Exception in namenode join 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown: 
192.168.10.203:8485: Call From hadoop203/192.168.10.203 to hadoop203:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused 
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223) 
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:875) 
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171) 
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:922) 
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354) 
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473) 
15/11/18 17:59:41 INFO util.ExitUtil: Exiting with status 1 
15/11/18 17:59:41 INFO namenode.NameNode: SHUTDOWN_MSG:  
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hadoop203/192.168.10.203 

************************************************************/
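The root cause in the log above is the repeated java.net.ConnectException: Connection refused on port 8485: hdfs namenode -format cannot reach a quorum of JournalNodes because the JournalNode daemons are not running yet. A quick way to confirm this before reformatting (a rough sketch; jps and ss are standard JDK/Linux tools, and the hostnames are the ones shown in the log):

# On each JournalNode host (hadoop203, hadoop205, hadoop206):
jps                          # no "JournalNode" entry means the daemon is down
ss -tln | grep 8485          # nothing listening on 8485 explains the "Connection refused"
# (on older systems without ss: netstat -tln | grep 8485)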

Solution:

Start ZooKeeper on each node (command: ./zkServer.sh start), then start the JournalNode process on each of the hosts planned as JournalNodes (./hadoop-daemon.sh start journalnode). Only once the JournalNodes are up can the format succeed; a sketch of this step follows below.
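A sketch of this step, assuming ZooKeeper and Hadoop live under $ZOOKEEPER_HOME and $HADOOP_HOME on each host (the JournalNode hostnames are the ones from the log; adjust paths to your layout):

# On every ZooKeeper node:
$ZOOKEEPER_HOME/bin/zkServer.sh start
$ZOOKEEPER_HOME/bin/zkServer.sh status     # should report leader/follower, not "not running"

# On every JournalNode host (hadoop203, hadoop205, hadoop206):
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
jps | grep JournalNode                     # the daemon should now appear
ss -tln | grep 8485                        # port 8485 should now be listening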

Format one of the NameNodes: ./hdfs namenode -format -bjsxt  (from the bin directory)
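For reference, the standard Hadoop 2.x form of this command is hdfs namenode -format [-clusterid <cid>]; the "-bjsxt" above appears to be the cluster name used in the original tutorial. A hedged example with a placeholder cluster ID:

# Run on ONE NameNode only, from $HADOOP_HOME/bin:
./hdfs namenode -format
# or, to pin the cluster ID explicitly ("mycluster" is a placeholder):
./hdfs namenode -format -clusterid mycluster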

Start the NameNode that was just formatted: ./hadoop-daemon.sh start namenode  (from the sbin directory)
 
On the other, not-yet-formatted NameNode, run: ./hdfs namenode -bootstrapStandby  (bin); this copies the freshly formatted namespace metadata from the first NameNode to the standby.

Start the second NameNode: ./hadoop-daemon.sh start namenode  (sbin)

On one of the NameNodes, initialize the ZKFC state in ZooKeeper: ./hdfs zkfc -formatZK  (bin)
Run jps on all four virtual machines to confirm the expected daemons are running (a consolidated sketch of the whole sequence follows below).
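Putting the formatting steps together, a consolidated sketch. Treating hadoop203 and hadoop202 as the two NameNodes is an assumption based on the log and the web UIs listed below; run each block on the host named in the comment:

# --- on hadoop203 (the NameNode being formatted) ---
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

# --- on hadoop202 (the other NameNode) ---
$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby    # copies the formatted namespace from hadoop203
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

# --- on either NameNode ---
$HADOOP_HOME/bin/hdfs zkfc -formatZK                # creates the HA znode in ZooKeeper

# --- on all four machines ---
jps    # expect NameNode / JournalNode / DataNode / QuorumPeerMain / DFSZKFailoverController as appropriate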


Stop all of the above daemons: ./stop-all.sh  (sbin)
Start them again: ./start-all.sh  (sbin)
Visit http://hadoop202:50070/ and http://hadoop203:50070/ to check the two NameNode web UIs.
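To verify HA from the command line as well (a sketch; the service IDs nn1/nn2 are placeholders, use the dfs.ha.namenodes.* values from your hdfs-site.xml):

# Check which NameNode is active and which is standby:
$HADOOP_HOME/bin/hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
$HADOOP_HOME/bin/hdfs haadmin -getServiceState nn2
# The two web UIs should also respond:
curl -s http://hadoop202:50070/ >/dev/null && echo "hadoop202 UI up"
curl -s http://hadoop203:50070/ >/dev/null && echo "hadoop203 UI up"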