CDH4 HA (hadoop-2.0.0-cdh4.1.2.tar.gz)


Symptom: the NameNode's name directory is reported as missing:

$HADOOP_HOME/hdfs/name state: NON_EXISTENT


This is because the NameNode has not been formatted yet.


But running the format in turn fails with a connection error, because the JournalNodes are not up yet.



On all of your JournalNode hosts: sudo service hadoop-hdfs-journalnode start

or, if you deployed from the tarball, something like:

hadoop-daemon.sh start journalnode (on all nodes)


Only once the JournalNodes are up can you format the NameNode.



Then start ZooKeeper on every node with zkServer.sh start, and finally run hdfs zkfc -formatZK. The full first-start sequence is sketched below.
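Putting these steps in order, a first-time bring-up of this cluster might look like the following. This is a sketch of the standard Hadoop 2 / CDH4 HA sequence, using the hostnames from the configs below, not a transcript of an actual run:

# 1. Start a JournalNode on every host in dfs.namenode.shared.edits.dir
hadoop-daemon.sh start journalnode    # on cloudwave0, cloudwave1, cloudwave2

# 2. Format and start the first NameNode (nn1 on cloudwave0)
hdfs namenode -format
hadoop-daemon.sh start namenode

# 3. On the second NameNode (nn2 on cloudwave1), pull nn1's metadata, then start
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode

# 4. Start ZooKeeper everywhere, then initialize the failover state in ZK (once)
zkServer.sh start                     # on all three hosts
hdfs zkfc -formatZK                   # once, from either NameNode host

# 5. Start a ZKFC beside each NameNode so automatic failover can take effect
hadoop-daemon.sh start zkfc           # on cloudwave0 and cloudwave1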
Reference hdfs-site.xml:

<configuration>


<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>


<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/cloudwave2.0/hadoop/hdfs/name</value>
  <final>true</final>
</property>


<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/cloudwave2.0/hadoop/hdfs/data</value>
  <final>true</final>
</property>


<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>


<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>


<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>cloudwave0:8020</value>
</property>


<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>cloudwave1:8020</value>
</property>


<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>cloudwave0:50070</value>
</property>


<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>cloudwave1:50070</value>
</property>


<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://cloudwave0:8485;cloudwave1:8485;cloudwave2:8485/mycluster</value>
  <!-- the JournalNodes must be started on these hosts -->
</property>


<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/cloudwave2.0/hadoop/jndata/1/dfs/jn</value>
</property>


<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>


<property>
  <name>dfs.ha.fencing.methods</name>
  <!-- sshfence is the alternative; shell(/bin/true) skips real fencing -->
  <value>shell(/bin/true)</value>
</property>


<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/cloudwave/.ssh/id_rsa</value>
</property>


<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>


</configuration>
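Once the daemons are up, the HA half of this file can be sanity-checked with the standard haadmin commands (not part of the original notes):

hdfs haadmin -getServiceState nn1     # prints "active" or "standby"
hdfs haadmin -getServiceState nn2

# With dfs.ha.automatic-failover.enabled=true, killing the active NameNode
# process should make its ZKFC release the lock and the standby take over;
# re-run -getServiceState afterwards to watch the roles swap.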



core-site.xml:

<configuration>


<property>
  <name>ha.zookeeper.quorum</name>
  <value>cloudwave0:2181,cloudwave1:2181,cloudwave2:2181</value>
</property>


<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>


</configuration>
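With fs.defaultFS set to the logical nameservice, clients simply address hdfs://mycluster and the ConfiguredFailoverProxyProvider routes each call to whichever NameNode is currently active. For example (localfile.txt is just an illustrative name):

hadoop fs -ls hdfs://mycluster/
hadoop fs -put localfile.txt hdfs://mycluster/tmp/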



zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/usr/local/cloudwave2.0/zookeeper/zookeeper_data
# the directory where the transaction logs are stored.
dataLogDir=/usr/local/cloudwave2.0/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
server.1=cloudwave0:2888:3888
server.2=cloudwave1:2888:3888
server.3=cloudwave2:2888:3888

ZooKeeper is started on these three hosts (server.1 through server.3).
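One step zoo.cfg does not show: each server needs a myid file in its dataDir whose content matches its server.N line, or the ensemble will not form. A minimal sketch, assuming the dataDir above:

echo 1 > /usr/local/cloudwave2.0/zookeeper/zookeeper_data/myid   # on cloudwave0 (server.1)
echo 2 > /usr/local/cloudwave2.0/zookeeper/zookeeper_data/myid   # on cloudwave1 (server.2)
echo 3 > /usr/local/cloudwave2.0/zookeeper/zookeeper_data/myid   # on cloudwave2 (server.3)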