CCAH-500 Question 7: swap Hadoop daemon data from RAM to disk


7. You want a node to only swap Hadoop daemon data from RAM to disk when absolutely necessary. What should you do?

A. Delete the /dev/vmswap file on the node 

B. Delete the /etc/swap file on the node 

C. Set the ram.swap parameter to 0 in core-site.xml 

D. Set vm.swappiness=0 in /etc/sysctl.conf

E. Delete the /swapfile file on the node 


Answer: D 

 

Reference:

http://www.aiotestking.com/cloudera/what-should-you-do-2/

Improving Performance
This section summarizes some recent code improvements and configuration best practices.

Setting the vm.swappiness Linux Kernel Parameter
vm.swappiness is a Linux kernel parameter that controls how aggressively memory pages are swapped to disk. It can be set to a value between 0 and 100; the higher the value, the more aggressively the kernel seeks out inactive memory pages and swaps them to disk.

You can see what value vm.swappiness is currently set to by looking at /proc/sys/vm; for example:

cat /proc/sys/vm/swappiness
On most systems, it is set to 60 by default. This is not suitable for Hadoop cluster nodes, because it can cause processes to get swapped out even when free memory is available. This can affect stability and performance, and may cause problems such as lengthy garbage collection pauses for important system daemons. Cloudera recommends that you set this parameter to 0; for example:

# sysctl -w vm.swappiness=0
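
Note that sysctl -w only changes the value for the running kernel and does not survive a reboot. To match answer D and make the setting persistent, a minimal sketch (assuming root access and a standard /etc/sysctl.conf layout) is to append the setting to /etc/sysctl.conf, reload it, and verify the result:

# echo "vm.swappiness=0" >> /etc/sysctl.conf
# sysctl -p
# cat /proc/sys/vm/swappiness
0

The runtime command and the /etc/sysctl.conf entry are usually applied together, so the node behaves the same before and after its next reboot.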

