HDFS ViewFs Configuration


1. The core-site.xml file

<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="mountTable.xml" />
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hihadoop/hadoop/jns</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoopa.highgo.com:2181</value>
  </property>
</configuration>
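As a quick sanity check (assuming HADOOP_CONF_DIR points at these files), hdfs getconf can confirm both that the default filesystem is the view and that the XInclude actually pulled mountTable.xml in:

# Should print viewfs://mycluster:
hdfs getconf -confKey fs.defaultFS

# Should print hdfs://ns1/home, proving the include was resolved:
hdfs getconf -confKey fs.viewfs.mounttable.mycluster.link./home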

2. The mountTable.xml file

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.viewfs.mounttable.mycluster.homedir</name>
    <value>/home</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.mycluster.link./home</name>
    <value>hdfs://ns1/home</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.mycluster.link./tmp</name>
    <value>hdfs://ns1/tmp</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.mycluster.link./projects/foo</name>
    <value>hdfs://ns2/projects/foo</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.mycluster.link./projects/bar</name>
    <value>hdfs://ns2/projects/bar</value>
  </property>
</configuration>
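The net effect is that client paths under viewfs://mycluster are rewritten to the backing nameservices: /home and /tmp are served by ns1, while /projects/foo and /projects/bar are served by ns2. For instance, once the cluster is up:

# Lists the mount points (/home, /tmp, /projects) at the root of the view:
hadoop fs -ls /

# Transparently reads hdfs://ns2/projects/foo:
hadoop fs -ls /projects/foo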

3. The hdfs-site.xml file (as far as ViewFs itself is concerned, the configuration above is already complete; what follows is the configuration for a federation of two HA clusters)

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop/dfs/data/data1</value>
  </property>
  <!--
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoopb.highgo.com:50090</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>hadoopb.highgo.com:50091</value>
  </property>
  -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nna,nnb</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns2</name>
    <value>nng,nnh</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nna</name>
    <value>hadoopa.highgo.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nnb</name>
    <value>hadoopb.highgo.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nna</name>
    <value>hadoopa.highgo.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nnb</name>
    <value>hadoopb.highgo.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir.ns1</name>
    <value>qjournal://hadoopa.highgo.com:8485;hadoopb.highgo.com:8485;hadoopc.highgo.com:8485/ns1</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2.nng</name>
    <value>hadooph.highgo.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2.nnh</name>
    <value>hadoopg.highgo.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2.nng</name>
    <value>hadooph.highgo.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2.nnh</name>
    <value>hadoopg.highgo.com:50070</value>
  </property>
  <!--
  <property>
    <name>dfs.namenode.shared.edits.dir.ns2</name>
    <value>qjournal://hadoopf.highgo.com:8485;hadoopg.highgo.com:8485;hadooph.highgo.com:8485/ns2</value>
  </property>
  -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns2</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hihadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.ns1</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.ns2</name>
    <value>true</value>
  </property>
</configuration>
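Before formatting anything, it is worth confirming that both nameservices and all four NameNodes are visible to the client tools:

# Should print ns1,ns2:
hdfs getconf -confKey dfs.nameservices

# Should list the four NameNode addresses:
hdfs getconf -namenodes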

4. Format the clusters. Each of the two NameNode clusters has to be formatted separately, and all the JournalNodes must be running before either format command is issued; a start-up sketch follows.
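A minimal sketch, assuming hadoop-daemon.sh is on the PATH; for ns1 in this layout the JournalNode hosts are hadoopa, hadoopb, and hadoopc (the hosts named in dfs.namenode.shared.edits.dir.ns1):

# On each JournalNode host, before formatting any NameNode:
hadoop-daemon.sh start journalnode

Once the JournalNodes are up, run the following on one NameNode in each cluster: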

hdfs namenode -format -clusterId mycluster

Then start that NameNode:

hadoop-daemon.sh start namenode

On the other NameNode of the same cluster, run:

hdfs namenode -bootstrapStandby

Then start it as well:

hadoop-daemon.sh start namenode

On any one NameNode in each of the two clusters, run:

hdfs zkfc -formatZK

On every NameNode host, run:

hadoop-daemon.sh start zkfc
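Once the ZKFCs are up, one NameNode per cluster should be elected active. A quick check with hdfs haadmin (the -ns flag selects a nameservice in a federated setup):

# Each pair should report one "active" and one "standby":
hdfs haadmin -ns ns1 -getServiceState nna
hdfs haadmin -ns ns1 -getServiceState nnb
hdfs haadmin -ns ns2 -getServiceState nng
hdfs haadmin -ns ns2 -getServiceState nnh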

Finally, start all the DataNodes; a sketch follows.
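Again assuming hadoop-daemon.sh is on the PATH of each DataNode host (with SSH and a slaves file configured, hadoop-daemons.sh can do the same from a single machine):

# Run on every DataNode host:
hadoop-daemon.sh start datanode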

5. At this point the Hadoop shell can be used against the federation: every path you operate on is resolved through the mount table, so each directory is served by whichever NameNode is configured for it in mountTable.xml.
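For example (the file and directory names here are only illustrative):

# /home resolves to hdfs://ns1/home, /projects/foo to hdfs://ns2/projects/foo:
hadoop fs -mkdir /home/demo
hadoop fs -put localfile.txt /projects/foo/

# Paths with no mount-table entry are rejected, since the root of the
# view is a read-only internal directory:
hadoop fs -mkdir /unmounted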

 
