HDFS High Availability: hdfs-site.xml configuration with notes (see the official documentation for full details)
<configuration>
  <!-- Default block replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- The logical name for this nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- Unique identifiers for each NameNode in the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- The fully-qualified RPC address for each NameNode to listen on -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>bigdatastorm:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>bigdataspark:8020</value>
  </property>
  <!-- The fully-qualified HTTP address for each NameNode to listen on -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>bigdatastorm:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>bigdataspark:50070</value>
  </property>
  <!-- The Java class that HDFS clients use to contact the Active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- The URI identifying the group of JournalNodes where the NameNodes write/read edits -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://bigdatastorm:8485;bigdataspark:8485;bigdatacloud:8485/mycluster</value>
  </property>
  <!-- A list of scripts or Java classes used to fence the Active NameNode during a failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- Private key used by the sshfence method to SSH into the node being fenced -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <!-- The path where the JournalNode daemon stores its local state -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop-2.5.1/data</value>
  </property>
  <!-- Enable automatic failover via the ZKFailoverController (requires a ZooKeeper quorum configured in core-site.xml) -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
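The hdfs-site.xml above defines the logical nameservice, but clients and the ZKFailoverController also need settings in core-site.xml: the default filesystem must point at the nameservice name (not at any one NameNode), and automatic failover needs a ZooKeeper quorum. A minimal sketch, assuming ZooKeeper runs on the same three hosts on the default port 2181 (an assumption; this post's config does not state the ZooKeeper layout):

```xml
<configuration>
  <!-- Clients address the nameservice, and failover is handled transparently -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- ZooKeeper quorum for the ZKFailoverController; hosts/port are assumed here -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>bigdatastorm:2181,bigdataspark:2181,bigdatacloud:2181</value>
  </property>
</configuration>
```

On first bring-up of a Hadoop 2.5.x HA cluster, the usual order is: start the JournalNodes, run `hdfs namenode -format` on one NameNode, `hdfs namenode -bootstrapStandby` on the other, `hdfs zkfc -formatZK` once, then `start-dfs.sh`; `hdfs haadmin -getServiceState nn1` can then confirm which NameNode is active.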