Reading Notes on the HDFS User Guide
CheckPoint Node:
The Checkpoint node's memory requirements are on the same order as the NameNode's. The Checkpoint node is started with (executed on the checkpoint node):
bin/hdfs namenode -checkpoint
Two configuration parameters control checkpoint frequency:
- dfs.namenode.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints.
- dfs.namenode.checkpoint.txns, set to 1 million by default, defines the number of uncheckpointed transactions on the NameNode that will force an urgent checkpoint, even if the checkpoint period has not been reached.
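The two parameters combine with OR semantics: a checkpoint is triggered as soon as either limit is hit. A rough sketch of that decision rule in Python (illustrative only, not Hadoop's actual code; the names are made up):

```python
# Illustrative sketch of the Checkpoint node's trigger rule: a checkpoint
# is due when EITHER the period has elapsed OR the transaction limit is hit.
# Defaults mirror dfs.namenode.checkpoint.period (1 hour, in seconds) and
# dfs.namenode.checkpoint.txns (1 million).
CHECKPOINT_PERIOD_SECS = 3600
CHECKPOINT_TXNS = 1_000_000

def checkpoint_due(secs_since_last: int, uncheckpointed_txns: int) -> bool:
    """Return True when a new checkpoint should be taken."""
    return (secs_since_last >= CHECKPOINT_PERIOD_SECS
            or uncheckpointed_txns >= CHECKPOINT_TXNS)

# 30 minutes elapsed, few transactions: no checkpoint yet.
print(checkpoint_due(1800, 500))        # False
# Transaction limit reached before the hour is up: urgent checkpoint.
print(checkpoint_due(1800, 1_200_000))  # True
```

The second case is the "urgent checkpoint" the guide mentions: the transaction count forces one even though the period has not elapsed.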
Backup Node:
As the Backup node maintains a copy of the namespace in memory, its RAM requirements are the same as the NameNode's.
The NameNode supports one Backup node at a time. No Checkpoint nodes may be registered if a Backup node is in use.
The Backup node is started with (executed on the Backup node):
bin/hdfs namenode -backup
Import Checkpoint:
The latest checkpoint can be imported to the NameNode if all other copies of the image and the edits files are lost. In order to do that one should:
- Create an empty directory specified in the dfs.namenode.name.dir configuration variable;
- Specify the location of the checkpoint directory in the configuration variable dfs.namenode.checkpoint.dir;
- and start the NameNode with -importCheckpoint option.
Balancer:
To start:
bin/hadoop-daemon.sh start balancer [-threshold <threshold>]
Example:
bin/hadoop-daemon.sh start balancer (starts the balancer with the default threshold of 10%)
bin/hadoop-daemon.sh start balancer -threshold 5 (starts the balancer with a threshold of 5%)
To stop:
bin/hadoop-daemon.sh stop balancer
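The threshold is the maximum allowed difference, in percentage points, between a DataNode's disk utilization and the cluster's average utilization. A hedged sketch of that balance check (illustrative only, not the Balancer's real code):

```python
def is_balanced(node_used_pct: float, cluster_avg_pct: float,
                threshold: float = 10.0) -> bool:
    """A DataNode counts as balanced when its utilization is within
    `threshold` percentage points of the cluster average (default 10)."""
    return abs(node_used_pct - cluster_avg_pct) <= threshold

# Cluster average 60% used:
print(is_balanced(65.0, 60.0))                 # True: 5 points off, within 10
print(is_balanced(75.0, 60.0))                 # False: 15 points over
print(is_balanced(64.0, 60.0, threshold=5.0))  # True under -threshold 5
```

A lower threshold gives a more evenly balanced cluster but makes the balancer run longer, since more nodes fall outside the band and need blocks moved.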
Recovery Mode:
What can you do if the only storage locations available are corrupt? In this case, a special NameNode startup mode called recovery mode may allow you to recover most of your data.
You can start the NameNode in recovery mode like so: bin/hdfs namenode -recover
Because recovery mode can cause you to lose data, you should always back up your edit log and fsimage before using it.
HDFS Upgrade and Rollback:
Before upgrading, administrators need to remove the existing backup using the bin/hadoop dfsadmin -finalizeUpgrade command. The following briefly describes the typical upgrade procedure:
- Before upgrading the Hadoop software, finalize the previous upgrade if one still exists. dfsadmin -upgradeProgress status can tell whether the cluster needs to be finalized.
- Stop the cluster and distribute new version of Hadoop.
- Run the new version with -upgrade option (bin/start-dfs.sh -upgrade).
- Most of the time, the cluster works just fine. Once the new HDFS is considered to be working well (possibly after a few days of operation), finalize the upgrade. Note that until the cluster is finalized, deleting files that existed before the upgrade does not free up real disk space on the DataNodes.
- If there is a need to move back to the old version:
- stop the cluster and distribute the earlier version of Hadoop;
- start the cluster with the rollback option (bin/start-dfs.sh -rollback).
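The finalize/rollback choice above is one-way: the cluster keeps its pre-upgrade state only until you finalize, after which rollback is no longer possible. A toy state machine capturing that rule (hypothetical names, not Hadoop code):

```python
class UpgradeState:
    """Toy model of the HDFS upgrade lifecycle: after start_upgrade() the
    cluster retains its pre-upgrade state until you either finalize
    (old state discarded, rollback impossible) or roll back (old state
    restored)."""

    def __init__(self):
        self.upgrading = False
        self.finalized = False

    def start_upgrade(self):   # corresponds to: bin/start-dfs.sh -upgrade
        self.upgrading = True

    def finalize(self):        # corresponds to: bin/hadoop dfsadmin -finalizeUpgrade
        if not self.upgrading:
            raise RuntimeError("no upgrade in progress")
        self.upgrading = False
        self.finalized = True  # pre-upgrade state is gone for good

    def rollback(self):        # corresponds to: bin/start-dfs.sh -rollback
        if not self.upgrading:
            raise RuntimeError("cannot roll back: finalized or never upgraded")
        self.upgrading = False

cluster = UpgradeState()
cluster.start_upgrade()
cluster.finalize()
# cluster.rollback() would now raise: the old version's state no longer exists.
```

This is also why disk space is not reclaimed until finalization: the DataNodes must keep the pre-upgrade blocks around in case a rollback is requested.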