Installing the NFS Gateway on Hadoop 2.7


1. Install Hadoop 2.7 first.

2. cd /usr/local/hadoop

Add the following to etc/hadoop/core-site.xml:

<!-- The NFS settings begin -->
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
  <description>
    The 'nfsserver' user is allowed to proxy all members of the 'users-group1'
    and 'users-group2' groups. Note that in most cases you will need to include
    the group "root" because the user "root" (which usually belongs to the
    "root" group) will generally be the user that initially executes the mount
    on the NFS client system. Set this to '*' to allow the nfsserver user to
    proxy any group.
  </description>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
  <description>
    This is the host where the NFS gateway is running. Set this to '*' to
    allow requests from any hosts to be proxied.
  </description>
</property>
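
If the NameNode is already running when you add these proxyuser settings, a full restart is not strictly required; they can usually be reloaded in place. A minimal sketch, assuming the cluster is up and you run this from the Hadoop install directory:

# Reload the hadoop.proxyuser.* settings on the running NameNode
bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration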



3. Add the following to etc/hadoop/hdfs-site.xml:

<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>
    The access time for an HDFS file is precise up to this value.
    The default value is 1 hour. Setting a value of 0 disables
    access times for HDFS.
  </description>
</property>
<property>
  <name>nfs.dump.dir</name>
  <value>/tmp/.hdfs-nfs</value>
</property>
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>* rw</value>
</property>
<property>
  <name>nfs.superuser</name>
  <value>hadoop</value>
</property>
<property>
  <name>nfs.metrics.percentiles.intervals</name>
  <value>100</value>
  <description>
    Enable the latency histograms for read, write and commit requests.
    The time unit is 100 seconds in this example.
  </description>
</property>
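
Note that nfs.dump.dir is where the gateway spills out-of-order writes before flushing them to HDFS, so the directory should exist and have some free space. A small sketch, assuming the gateway will run as the hadoop user:

# Pre-create the dump directory and hand it to the hadoop user;
# it should sit on a filesystem with enough free space (~1 GB or more)
mkdir -p /tmp/.hdfs-nfs
chown hadoop:hadoop /tmp/.hdfs-nfs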



4. Distribute the two files above to all nodes (one scp-based approach is sketched below).
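
Any file-distribution method works here; a minimal sketch with scp, assuming a hypothetical nodes file that lists one hostname per line:

# Push the edited config files to every node in the cluster
# ("nodes" is a hypothetical hostname list, one per line)
for h in $(cat nodes); do
  scp etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml \
      "$h":/usr/local/hadoop/etc/hadoop/
done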


5. As root, run the following commands:

# yum -y install rpcbind
# sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start portmap
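
The Hadoop portmap has to bind port 111, so any NFS-related services the OS itself is running must be stopped first. A sketch using the typical RHEL/CentOS 6 service names (names vary by distribution):

# Free up port 111 before starting the Hadoop portmap
service nfs stop
service rpcbind stop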


6. As the hadoop user, run the following command:

$ sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3
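
If the gateway does not come up, check its daemon log. A sketch, assuming the default log directory and the usual hadoop-daemon.sh naming pattern (the exact file name depends on the user and hostname):

# Tail the NFS3 daemon log for startup errors
tail -f /usr/local/hadoop/logs/hadoop-hadoop-nfs3-*.log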



7. As root, run rpcinfo -p m-10-140-60-85. If the output resembles the following, the NFS gateway is healthy:

[root@m-10-140-60-85 hadoop]# rpcinfo -p m-10-140-60-85
   program vers proto   port  service
    100005    2   tcp   4242  mountd
    100000    2   udp    111  portmapper
    100000    2   tcp    111  portmapper
    100005    1   tcp   4242  mountd
    100003    3   tcp   2049  nfs
    100005    1   udp   4242  mountd
    100005    3   udp   4242  mountd
    100005    3   tcp   4242  mountd
    100005    2   udp   4242  mountd



8. As root on another server, run the following command (m-10-140-60-85 is the NFS gateway host) and confirm that the HDFS root is exported:

showmount -e m-10-140-60-85
Export list for m-10-140-60-85:
/ *



9. Create the /hadoop-nfs directory and mount the remote NFS gateway onto it:

mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync m-10-140-60-85:/ /hadoop-nfs
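
The mount point must exist before the mount command runs, and it is worth verifying afterwards that HDFS content is actually visible. A short sketch:

# Create the mount point first (part of this step)
mkdir -p /hadoop-nfs
# After mounting, confirm the export is attached and browsable
df -h /hadoop-nfs
ls /hadoop-nfs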

10. After every gateway has been set up the same way, put the NFS client IP list in a file named nfs_hosts, then use the paste command to join the gateway list and nfs_hosts into an nfs_pair file, one gateway/client pair per line, as shown below (a sketch of the paste invocation follows the listing).

[root@m-10-140-60-85 setupHadoop]# cat nfs_pair
10.140.60.85 10.140.60.48
10.140.60.86 10.140.60.50
10.140.60.87 10.140.60.51
10.140.60.88 10.140.60.53
10.140.60.89 10.140.60.54
10.140.60.90 10.140.60.55
10.140.60.91 10.140.60.56
10.140.60.92 10.140.60.59
10.140.60.95 10.140.60.60
10.140.60.96 10.140.60.61
10.140.60.49 10.140.60.62
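
The nfs_pair file itself can be produced with paste. A sketch, assuming the gateway IPs live in a hypothetical file named nfs_gateways (nfs_hosts is the client list from above):

# Join the two lists column-wise, one gateway/client pair per line
# ("nfs_gateways" is a hypothetical file name)
paste -d ' ' nfs_gateways nfs_hosts > nfs_pair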

11. Create the /hadoop-nfs directory on the servers listed in nfs_hosts:

 ./upgrade.sh common nfs_hosts "mkdir /hadoop-nfs"

12. Generate the mount statements as follows:

cat nfs_pair | awk -F ' ' '{print "ssh "$2"  \"mount -o hard,nolock " $1":/ /hadoop-nfs\""}'



13. Execute the generated mount statements (one approach is sketched below).
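
One way to do that is to write the generated statements to a script, review it, and run it; piping awk straight into sh can misbehave because ssh consumes the remaining stdin (ssh -n avoids that). A sketch with a hypothetical script name:

# Generate, inspect, then execute the mount statements
# ("mount_all.sh" is a hypothetical file name)
cat nfs_pair | awk -F ' ' '{print "ssh "$2"  \"mount -o hard,nolock " $1":/ /hadoop-nfs\""}' > mount_all.sh
sh mount_all.sh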


14. Verify that each client mounted successfully:

./upgrade.sh common nfs_hosts "ls /hadoop-nfs"

