Reading notes on the HDFS User Guide

Checkpoint Node:

The Checkpoint node's memory requirements are on the same order as those of the NameNode. It is started by running the following command on the Checkpoint node itself:

bin/hdfs namenode -checkpoint 

Two configuration parameters control the checkpoint schedule (see the example after this list):

  • dfs.namenode.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints.
  • dfs.namenode.checkpoint.txns, set to 1 million by default, defines the number of uncheckpointed transactions on the NameNode which will force an urgent checkpoint, even if the checkpoint period has not been reached.
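If you are unsure what values a running cluster actually uses, the effective settings can be checked with hdfs getconf; this is only a convenience sketch and not part of the checkpoint procedure itself:

bin/hdfs getconf -confKey dfs.namenode.checkpoint.period
bin/hdfs getconf -confKey dfs.namenode.checkpoint.txns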

Backup Node:

As the Backup node maintains a copy of the namespace in memory, its RAM requirements are the same as those of the NameNode.

The NameNode supports one Backup node at a time. No Checkpoint nodes may be registered if a Backup node is in use.

The Backup node is started by running the following command on the Backup node:

bin/hdfs namenode -backup
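The addresses the Backup node listens on come from configuration. Assuming the standard dfs.namenode.backup.* keys are used, the values in effect can be checked the same way as above:

bin/hdfs getconf -confKey dfs.namenode.backup.address
bin/hdfs getconf -confKey dfs.namenode.backup.http-address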

Import Checkpoint

The latest checkpoint can be imported to the NameNode if all other copies of the image and the edits files are lost. In order to do that one should:

  • Create an empty directory specified in the dfs.namenode.name.dir configuration variable;
  • Specify the location of the checkpoint directory in the configuration variable dfs.namenode.checkpoint.dir;
  • and start the NameNode with the -importCheckpoint option (a combined example follows).
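Putting the three steps together, a minimal sketch might look like the following; /data/dfs/name is a hypothetical path standing in for whatever dfs.namenode.name.dir points to:

# dfs.namenode.name.dir must point at an empty directory
mkdir -p /data/dfs/name
# dfs.namenode.checkpoint.dir must point at the directory that holds the
# checkpoint (for example, one copied over from the Checkpoint or Backup node)
bin/hdfs namenode -importCheckpoint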


Balancer (rebalance data across all DataNodes)

To start:
       bin/hadoop-daemon.sh start balancer [-threshold <threshold>]
       Examples:

bin/hadoop-daemon.sh start balancer
  starts the balancer with the default threshold of 10%
bin/hadoop-daemon.sh start balancer -threshold 5
  starts the balancer with a threshold of 5%

To stop: 

bin/hadoop-daemon.sh stop balancer
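The balancer can also be run in the foreground, which makes it easier to watch its progress interactively; per-DataNode disk usage can be checked before and after with dfsadmin. Both commands below are standard, and the 5% threshold is only an example:

bin/hdfs dfsadmin -report
bin/hdfs balancer -threshold 5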

Recovery mode

NameNode metadata is normally written to multiple storage locations, so a corrupt copy can simply be replaced by a good one. However, what can you do if the only storage locations available are corrupt? In this case, there is a special NameNode startup mode called recovery mode that may allow you to recover most of your data.

You can start the NameNode in recovery mode like so: bin/hdfs namenode -recover

Because recovery mode can cause you to lose data, you should always back up your edit log and fsimage before using it.
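A minimal sketch of a recovery session follows; /data/dfs/name is a hypothetical metadata directory standing in for whatever dfs.namenode.name.dir points to on the NameNode:

# back up the current fsimage and edit log before attempting recovery
cp -r /data/dfs/name /data/dfs/name.bak
# start the NameNode in recovery mode; it prompts interactively whenever it
# has to choose how to repair the corrupt metadata
bin/hdfs namenode -recover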


HDFS Upgrade and Rollback

Before upgrading, administrators need to remove any existing backup using the bin/hadoop dfsadmin -finalizeUpgrade command. The following briefly describes the typical upgrade procedure (a consolidated command sequence follows the list):

  • Before upgrading the Hadoop software, finalize the existing backup if there is one. dfsadmin -upgradeProgress status can tell whether the cluster needs to be finalized.
  • Stop the cluster and distribute the new version of Hadoop.
  • Run the new version with the -upgrade option (bin/start-dfs.sh -upgrade).
  • Most of the time, the cluster works just fine. Once the new HDFS is considered to be working well (perhaps after a few days of operation), finalize the upgrade. Note that until the cluster is finalized, deleting the files that existed before the upgrade does not free up real disk space on the DataNodes.
  • If there is a need to move back to the old version,
    • stop the cluster and distribute the earlier version of Hadoop.
    • start the cluster with the rollback option (bin/start-dfs.sh -rollback).
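Collected in one place, the commands from the steps above look roughly like this (the pause between upgrading and finalizing is up to you):

bin/hadoop dfsadmin -upgradeProgress status   # does a previous upgrade still need finalizing?
bin/hadoop dfsadmin -finalizeUpgrade          # finalize it if so
# stop the cluster, install the new Hadoop version, then:
bin/start-dfs.sh -upgrade
# ...run for a while; once the new version is trusted:
bin/hadoop dfsadmin -finalizeUpgrade
# or, to roll back instead: stop the cluster, reinstall the old version, then:
bin/start-dfs.sh -rollback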
