Two ways to configure the Hadoop SecondaryNameNode


Two ways to configure the Hadoop SecondaryNameNode; the Hadoop version used is hadoop-1.0.4.

Cluster role assignment:

masterJobTracker&&Namenodenode1Secondarynamenodenode2TaskTracker&&Datanodenode3TaskTracker&&Datanodenode4TaskTracker&&Datanode

Configuration 1:

1.conf/core-site.xml:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

2.conf/hadoop-env.sh:

export JAVA_HOME=/home/hadoop/jdk1.x.x_xx

3. conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.back.http.address</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/hadoopcheckpoint</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
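The dfs.data.dir, dfs.name.dir, and fs.checkpoint.dir values above are local filesystem paths that must exist (or be creatable) on the relevant nodes. A minimal sketch for preparing them; the BASE variable is an assumption for illustration only, the article itself uses /home/hadoop directly:

```shell
# Sketch: create the local directories named in hdfs-site.xml on each node.
# BASE is a hypothetical stand-in; the article uses /home/hadoop directly.
BASE="${HADOOP_LOCAL_BASE:-/tmp/hadoop-demo}"
mkdir -p "$BASE/hadoopfs/data"      # dfs.data.dir (on each datanode)
mkdir -p "$BASE/hadoopfs/name"      # dfs.name.dir (on the namenode)
mkdir -p "$BASE/hadoopcheckpoint"   # fs.checkpoint.dir (on the secondarynamenode)
ls "$BASE"
```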

4.conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1000m</value>
  </property>
</configuration>

5. conf/masters:

master

6. conf/secondarynamenode (a newly created file):

node1
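The new host file holds one hostname per line. A sketch of creating it; CONF_DIR here is a stand-in path so the sketch runs anywhere, whereas on the real cluster it would be the conf/ directory:

```shell
# Sketch: create the new secondarynamenode host file (one hostname per line).
# CONF_DIR is a stand-in for the cluster's conf/ directory.
CONF_DIR=/tmp/hadoop-conf-demo
mkdir -p "$CONF_DIR"
echo node1 > "$CONF_DIR/secondarynamenode"
cat "$CONF_DIR/secondarynamenode"
```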

7. conf/slaves:

node2
node3
node4

8.bin/start-dfs.sh:

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode

9.bin/stop-dfs.sh:

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode $nameStartOpt"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode $dataStartOpt"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode

Configuration 2:

1.conf/core-site.xml:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

2.conf/hadoop-env.sh:

export JAVA_HOME=/home/hadoop/jdk1.x.x_xx

3. conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.back.http.address</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/hadoopcheckpoint</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

4.conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1000m</value>
  </property>
</configuration>

5. conf/masters:

node1

7. conf/slaves:

node2
node3
node4

One more note on yesterday's write-up about using the secondarynamenode: recovering from it should also include a step that copies the secondarynamenode's checkpoint files over to the namenode. Yesterday's test ran everything on a single machine, so the copy step was skipped; testing on a cluster today showed that the copy is required.
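The copy-back step described above can be sketched as follows. On a real Hadoop 1.x cluster it would be an scp of fs.checkpoint.dir from node1 to master, typically followed on master by `bin/hadoop namenode -importCheckpoint` with the namenode stopped; here the transfer is simulated with a local cp so the sketch runs anywhere, and all paths are stand-ins:

```shell
# Simulated copy-back of the secondarynamenode checkpoint to the namenode.
# Stand-in local paths; on the cluster this is an scp from node1 to master.
SNN_CKPT=/tmp/demo-snn/hadoopcheckpoint   # node1:/home/hadoop/hadoopcheckpoint
NN_CKPT=/tmp/demo-nn/hadoopcheckpoint     # master:/home/hadoop/hadoopcheckpoint
mkdir -p "$SNN_CKPT/current"
echo fsimage-data > "$SNN_CKPT/current/fsimage"   # fake checkpoint content
rm -rf "$NN_CKPT" && mkdir -p "$(dirname "$NN_CKPT")"
cp -r "$SNN_CKPT" "$NN_CKPT"
ls "$NN_CKPT/current"
# Then, on master with the namenode stopped: bin/hadoop namenode -importCheckpoint
```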

Both configurations above have been tested on a cluster and verified to work.


http://blog.csdn.net/fansy1990/article/details/8990206
