Hadoop, ZooKeeper, Hive, and HBase: Building a Seven-Node Distributed Cluster


1. System version and related software

CentOS 6.7 64-bit, kernel 2.6.32-573.el6.x86_64, installed with the minimal ("mini") option, no graphical desktop

hadoop-2.7.1.tar.gz

jdk-7u79-linux-x64.tar.gz

zookeeper-3.4.9.tar.gz

apache-hive-2.1.1-bin.tar.gz

hbase-1.2.4-bin.tar.gz

hadoop-2.7.1 and hbase-1.2.4 are a compatible version pairing

The deployment covers a Hadoop cluster with NameNode HA and ResourceManager HA, Hive using MySQL to manage its metadata, HBase with HA, and ZooKeeper to coordinate the Hadoop cluster.

 

 

Cluster plan

Hostname | IP address  | Installed software                         | Processes (as shown by jps)
Node01   | 192.168.1.1 | JDK, Hadoop                                | DFSZKFailoverController, NameNode
Node02   | 192.168.1.2 | JDK, Hadoop                                | DFSZKFailoverController, NameNode
Node03   | 192.168.1.3 | JDK, Hadoop, HBase                         | ResourceManager, Main, HMaster
Node04   | 192.168.1.4 | JDK, Hadoop, HBase                         | HMaster, ResourceManager
Node05   | 192.168.1.5 | JDK, Hadoop, HBase, ZooKeeper              | QuorumPeerMain, HRegionServer, DataNode, NodeManager, JournalNode
Node06   | 192.168.1.6 | JDK, Hadoop, HBase, ZooKeeper              | JournalNode, NodeManager, HRegionServer, QuorumPeerMain, DataNode
Node07   | 192.168.1.7 | JDK, Hadoop, HBase, ZooKeeper, Hive, MySQL | HRegionServer, JournalNode, DataNode, QuorumPeerMain, NodeManager

 

Notes:

1. This deployment uses two NameNode nodes, on machines 1 and 2. One NameNode is active and the other is standby. The active NameNode serves client requests; the standby NameNode does not, and only synchronizes the active NameNode's state so that it can take over quickly if the active one fails.

2. Hadoop officially provides two HDFS HA solutions, one based on NFS and the other on QJM. This deployment uses QJM: the active and standby NameNodes synchronize edits through JournalNodes, and a write is considered successful once it reaches a majority of the JournalNodes, so an odd number of JournalNodes should be configured. We configure three.

3. This deployment configures two ResourceManagers, one active and one standby, with their state coordinated by ZooKeeper.

2. Installation steps. The installation consists roughly of the following parts: creating a dedicated user, time synchronization, disabling the firewall, setting up the Java runtime, and installing Hadoop, ZooKeeper, Hive, MySQL, HBase, and related software.

2.1 Create a dedicated user. Create a user named hduser; all later operations are performed with this ordinary user's privileges, which improves security. Create the same username with the same password on all seven machines, and edit /etc/sudoers to grant hduser limited root privileges, as sketched below.
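A minimal sketch of this step (run as root on each of the seven machines; the sudoers line shown is one common form and is an assumption, adapt it to your own policy):

useradd hduser
passwd hduser
# grant hduser sudo rights: run visudo and add a line such as
#   hduser  ALL=(ALL)       ALL
visudo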

2.2 Edit /etc/hosts. Edit it on machine 1 first; once passwordless login is set up, copy it to the other six machines.

cat /etc/hosts

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4

::1        localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.1     node01

192.168.1.2     node02

192.168.1.3     node03

192.168.1.4     node04

192.168.1.5     node05

192.168.1.6     node06

192.168.1.7     node07

2.3 Set up passwordless SSH login between the machines

First configure passwordless login from node01 to node02, node03, node04, node05, node06, and node07. Generate a key pair on node01:

                                     ssh-keygen -t rsa

Copy the public key to every node, including node01 itself:

                                     ssh-copy-id node01

                                     ssh-copy-id node02

                                     ssh-copy-id node03

                                     ssh-copy-id node04

                                     ssh-copy-id node05

                                     ssh-copy-id node06

                                     ssh-copy-id node07

Configure passwordless login from node03 to node04, node05, node06, and node07. Generate a key pair on node03:

                                     ssh-keygen -t rsa

Copy the public key to the other nodes:

                                     ssh-copy-id node04

                                     ssh-copy-id node05

                                     ssh-copy-id node06

                                     ssh-copy-id node07

The two NameNodes also need passwordless SSH between them. Configure passwordless login from node02 to node01; generate a key pair on node02:

                                     ssh-keygen -t rsa

                                     ssh-copy-id node01
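As a quick check (a sketch, assuming the hostnames above), each source node should now reach its targets without a password prompt; for example, on node01:

for h in node02 node03 node04 node05 node06 node07; do ssh $h hostname; done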

 

2.4 Time synchronization. This deployment uses machine 1's clock as the reference, so enable the ntpd service on machine 1:

[hduser@node01 ~]$ service ntpd status

ntpd (pid 21729) is running...

The other six machines run the ntpdate command in the same way:

[hduser@node02 ~]$ sudo ntpdate node01

[sudo] password for hadoop:

6 Mar 14:24:28 ntpdate[10817]: step timeserver 192.168.1.1 offset -23.277136 sec

To keep the clocks synchronized, this command should be added to a scheduled task (cron), as sketched below.
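A minimal sketch of such an entry (the 10-minute interval is an assumption; adjust as needed), added with crontab -e on node02 through node07 under an account allowed to run ntpdate:

*/10 * * * * /usr/sbin/ntpdate node01 > /dev/null 2>&1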

2.5 Install lrzsz for uploading and downloading files, and openssh-clients so that the scp command is available.

sudo yum install lrzsz

Installing lrzsz provides the rz and sz commands for convenient file upload and download.

sudo yum install openssh-clients

Installing openssh-clients provides the scp command for copying files and directories between machines.

2.6 Install the JDK

On machine 1, extract jdk-7u79-linux-x64.tar.gz into /home/hadoop, then edit /etc/profile and append the following at the end of the file:

############################

export JAVA_HOME=/home/hadoop/jdk1.7.0_79
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$JAVA_HOME/bin:$PATH

############################

export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$HADOOP_HOME/bin:$PATH
export PATH=$HADOOP_HOME/sbin:$PATH
export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH

Only the first half is needed at this point; the second half is for Hadoop later. After editing, run source /etc/profile to make it take effect. The java command can then be run from any path and should display:

[node01 ~]$ java -version

java version "1.7.0_79"

Java(TM) SE Runtime Environment (build 1.7.0_79-b15)

Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

This output confirms that the Java runtime environment on machine 1 is ready.

2.7 Install HDFS with HA

HDFS stores files across a distributed cluster: files are split into blocks, the blocks are stored on a number of DataNodes, the block-to-file mapping is managed by the NameNode, and each block is stored in multiple replicas across the cluster. As mentioned above, JournalNodes are used to keep the active and standby NameNodes in sync, and ZooKeeper resolves the ResourceManager single point of failure.

2.7.1 Install and configure the ZooKeeper cluster

ZooKeeper is an open-source implementation of Google's Chubby and serves as Hadoop's distributed coordination service (Chubby is an internal Google tool built to support systems such as MapReduce and Bigtable). ZooKeeper is first of all a cluster that stores and manages small amounts of data: it is not meant for business data, only state information. It consists of an odd number of nodes; each node acts as either a Follower or the Leader. The Leader is the primary node and handles writes, and an operation is considered successful once more than half of the nodes have applied it. Roles are not assigned in advance: all nodes start as equals, and the roles are decided by an election.

2.7.1.1 On machine 5, extract the archive: tar -zxvf zookeeper-3.4.9.tar.gz

2.7.1.2 Edit the configuration

cd  ./zookeeper-3.4.9/conf/

cp zoo_sample.cfg zoo.cfg

vi zoo.cfg

$ cat zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/home/hadoop/zookeeper-3.4.9/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable autopurge feature
#autopurge.purgeInterval=1
server.5=node05:11605:11615
server.6=node06:11605:11615
server.7=node07:11605:11615

The modified lines are dataDir and the three server.N entries at the end.

Then create the data directory:

mkdir ./zookeeper-3.4.9/data

Then create an empty file:

touch ./zookeeper-3.4.9/data/myid

Finally, write this node's ID into the file:

echo 5 > ./zookeeper-3.4.9/data/myid

2.7.1.3 Copy the configured ZooKeeper directory to the other nodes (node06, node07)

scp -r /home/hadoop/zookeeper-3.4.9/ node06:/home/hadoop/

scp -r /home/hadoop/zookeeper-3.4.9/ node07:/home/hadoop/

Note: update the contents of /home/hadoop/zookeeper-3.4.9/data/myid on node06 and node07 accordingly:

         node06:

         echo 6 > /home/hadoop/zookeeper-3.4.9/data/myid

         node07:

         echo 7 > /home/hadoop/zookeeper-3.4.9/data/myid

2.7.1.4 Start and test the ZooKeeper cluster

$ pwd

/home/hadoop/zookeeper-3.4.9/bin

./zkServer.sh start

# Check the status: there should be one leader and two followers

./zkServer.sh status

$ ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg

Mode: follower
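Optionally, connect with the bundled client as a further check (a sketch; any of the three ZooKeeper nodes can be used as the server):

./zkCli.sh -server node05:2181
# at the zk prompt, "ls /" should list the root znodes, e.g. [zookeeper]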

2.7.2 Install and configure the Hadoop cluster (performed on machine 1)

2.7.2.1 Extract the archive: in /home/hadoop/, run tar -zxvf hadoop-2.7.1.tar.gz

2.7.2.2 Configure the Hadoop environment variables by adding the following to /etc/profile:

export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$HADOOP_HOME/bin:$PATH
export PATH=$HADOOP_HOME/sbin:$PATH
export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH

or alternatively:

export JAVA_HOME=/home/hadoop/jdk1.7.0_79
export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

2.7.2.3 Go to the Hadoop configuration directory /home/hadoop/hadoop-2.7.1/etc/hadoop

Edit hadoop-env.sh (see Appendix 1 for the full file) and set:

export JAVA_HOME=/home/hadoop/jdk1.7.0_79

2.7.2.4 Edit core-site.xml

$ cat core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

         <property>      

                   <name>fs.defaultFS</name>

                   <value>hdfs://ns1/</value>

         </property>

         <property>

                   <name>hadoop.tmp.dir</name>

                   <value>/home/hadoop/hadoop-2.7.1/tmp</value>

         </property>

         <property>

                   <name>ha.zookeeper.quorum</name>

                   <value>node05:2181,node06:2181,node07:2181</value>

         </property>

</configuration>

2.7.2.5 Edit hdfs-site.xml

cat hdfs-site.xml

 

<configuration>

         <!-- Set the HDFS nameservice to ns1; it must match the value in core-site.xml -->

         <property>

                   <name>dfs.nameservices</name>

                   <value>ns1</value>

         </property>

         <!-- ns1 has two NameNodes: nn1 and nn2 -->

         <property>

                   <name>dfs.ha.namenodes.ns1</name>

                   <value>nn1,nn2</value>

         </property>

         <!-- RPC address of nn1 -->

         <property>

                   <name>dfs.namenode.rpc-address.ns1.nn1</name>

                   <value>node01:9000</value>

         </property>

         <!-- HTTP address of nn1 -->

         <property>

                   <name>dfs.namenode.http-address.ns1.nn1</name>

                   <value>node01:50070</value>

         </property>

         <!-- RPC address of nn2 -->

         <property>

                   <name>dfs.namenode.rpc-address.ns1.nn2</name>

                   <value>node02:9000</value>

         </property>

         <!-- HTTP address of nn2 -->

         <property>

                   <name>dfs.namenode.http-address.ns1.nn2</name>

                   <value>node02:50070</value>

         </property>

         <!-- Where the NameNode edits are stored on the JournalNodes -->

         <property>

                   <name>dfs.namenode.shared.edits.dir</name>

                  <value>qjournal://node05:8485;node06:8485;node07:8485/ns1</value>

         </property>

         <!-- Where each JournalNode stores its data on local disk -->

         <property>

                   <name>dfs.journalnode.edits.dir</name>

                   <value>/home/hadoop/hadoop-2.7.1/journaldata</value>

         </property>

         <!-- Enable automatic NameNode failover -->

         <property>

                   <name>dfs.ha.automatic-failover.enabled</name>

                   <value>true</value>

         </property>

         <!-- Failover proxy provider used by clients -->

         <property>

                   <name>dfs.client.failover.proxy.provider.ns1</name>

                   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

         </property>

         <!-- Fencing methods; to list multiple methods, put one per line -->

         <property>

                   <name>dfs.ha.fencing.methods</name>

                   <value>

                            sshfence

                            shell(/bin/true)

                   </value>

         </property>

         <!-- sshfence requires passwordless SSH; point to the private key -->

         <property>

                   <name>dfs.ha.fencing.ssh.private-key-files</name>

                   <value>/home/hadoop/.ssh/id_rsa</value>

         </property>

         <!-- Timeout for the sshfence method, in milliseconds -->

         <property>

                   <name>dfs.ha.fencing.ssh.connect-timeout</name>

                   <value>30000</value>

         </property>

</configuration>

2.7.2.6 Edit mapred-site.xml

<configuration>

         <property>

                   <name>mapred.job.tracker</name>

                   <value>node01:9001</value>

         </property>

         <property>

                   <name>mapred.map.tasks</name>

                   <value>20</value>

         </property>

         <property>

                   <name>mapred.reduce.tasks</name>

                   <value>4</value>

         </property>

         <property>

                   <name>mapreduce.framework.name</name>

                   <value>yarn</value>

         </property>

         <property>

                   <name>mapreduce.jobhistory.address</name>

                   <value>node01:10020</value>

         </property>

         <property>

                   <name>mapreduce.jobhistory.webapp.address</name>

                   <value>node01:19888</value>

         </property>

</configuration>

2.7.2.7 Edit yarn-site.xml

<configuration>

         <property>

                   <name>yarn.resourcemanager.ha.enabled</name>

                   <value>true</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.cluster-id</name>

                   <value>yrc</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.ha.rm-ids</name>

                   <value>rm1,rm2</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.hostname.rm1</name>

                   <value>node03</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.hostname.rm2</name>

                   <value>node04</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.zk-address</name>

                   <value>node05:2181,node06:2181,node07:2181</value>

         </property>

         <property>

                   <name>yarn.nodemanager.aux-services</name>

                   <value>mapreduce_shuffle</value>

         </property>

</configuration>

2.7.2.8 Edit slaves (the slaves file lists the worker nodes; because HDFS is started on node01 and YARN on node03, the slaves file on node01 specifies where the DataNodes run, and the slaves file on node03 specifies where the NodeManagers run)

[node01 hadoop]$ cat slaves

#localhost

node05

node06

node07

[node03]$ cat slaves

#localhost

node05

node06

node07

2.7.2.9 Copy the configured Hadoop directory to the other nodes

scp -r hadoop-2.7.1/ node02:/home/hadoop/

scp -r hadoop-2.7.1/ node03:/home/hadoop/

Repeat for the remaining nodes.

Also copy /etc/profile and /etc/hosts to every node; a distribution loop is sketched below.
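A small loop for distributing these files (a sketch, assuming passwordless SSH from node01 and that hduser's sudo rights from section 2.1 allow copying into /etc on each node):

for h in node02 node03 node04 node05 node06 node07; do
    scp -r /home/hadoop/hadoop-2.7.1/ $h:/home/hadoop/
    scp /etc/hosts /etc/profile $h:/home/hadoop/
    ssh -t $h "sudo cp /home/hadoop/hosts /etc/hosts && sudo cp /home/hadoop/profile /etc/profile"
done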

2.7.2.10 Format HDFS

On machine 1, run hdfs namenode -format (the JournalNodes described in 2.8.2 must already be running, because the NameNode writes to the qjournal shared edits directory when it formats). Formatting creates the directory configured as hadoop.tmp.dir in core-site.xml, /home/hadoop/hadoop-2.7.1/tmp; copy this tmp directory to /home/hadoop/hadoop-2.7.1/ on machine 2:

                            scp -r tmp/ node02:/home/hadoop/hadoop-2.7.1/

                            ## Alternatively (and preferably), run hdfs namenode -bootstrapStandby on node02
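For reference, a sketch of that alternative (ordering is the key assumption: the JournalNodes must be running, and the freshly formatted NameNode on node01 must be started before node02 can bootstrap from it):

# on node01: start the formatted NameNode
sbin/hadoop-daemon.sh start namenode

# on node02: pull the metadata from node01 instead of copying tmp/ by hand
hdfs namenode -bootstrapStandby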

2.8 Bringing up the HA cluster

2.8.1 Start the ZooKeeper cluster

Run on node05, node06, and node07:

/home/hadoop/zookeeper-3.4.9/bin/zkServer.sh start

Check the status:

./zkServer.sh status

2.8.2 Start the JournalNodes

Run on node05, node06, and node07:

/home/hadoop/hadoop-2.7.1/sbin/hadoop-daemon.sh start journalnode

Run jps to verify: node05, node06, and node07 should each now show a JournalNode process.

2.8.3 Format ZKFC; run on node01:

                            hdfs zkfc -formatZK

2.8.4 Start HDFS; run on node01:

                            sbin/start-dfs.sh

2.8.5 Start YARN (##### Note #####: run start-yarn.sh on node03. The NameNode and ResourceManager are kept on separate machines for performance reasons, since both consume a lot of resources; because they run on different machines, they must be started separately on each.)

                            sbin/start-yarn.sh

2.8.6 Test HDFS

hadoop-2.7.1 is now configured; verify it in a browser:

                   http://192.168.1.1:50070

                   NameNode 'node01:9000' (active)

                   http://192.168.1.2:50070

                   NameNode 'node02:9000' (standby)

Verify HDFS HA:

                   First upload a file to HDFS:

                   hadoop fs -put /etc/profile /profile

                   hadoop fs -ls /

                   Then kill the active NameNode:

                   kill -9 <pid of NN>

                   Visit http://192.168.1.2:50070 in a browser:

                   NameNode 'node02:9000' (active)

                   The NameNode on node02 has now become active.

                   Run the listing again:

                   hadoop fs -ls /

                   -rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile

                   The file uploaded earlier is still there!

                   Manually restart the NameNode that was killed:

                   sbin/hadoop-daemon.sh start namenode

                   Visit http://192.168.1.1:50070 in a browser:

                   NameNode 'node01:9000' (standby)

Verify YARN:

                   Run the WordCount program from the Hadoop examples:

                   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /profile /out (this test has not been run yet; a fuller sketch follows below)
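A sketch of the full test (assuming /profile was uploaded during the HDFS HA check above; the /out directory must not already exist):

cd /home/hadoop/hadoop-2.7.1
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /profile /out
# inspect the result
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000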

Some commands for checking the cluster's working state:

bin/hdfs dfsadmin -report                  # show status information for each HDFS node

bin/hdfs haadmin -getServiceState nn1      # get the HA state of a NameNode

sbin/hadoop-daemon.sh start namenode       # start a single NameNode process

./hadoop-daemon.sh start zkfc              # start a single ZKFC process

 

3. Installing and configuring Hive with MySQL

Performed on machine 7. Hive depends on HDFS, and ZooKeeper must be started first. Hive is essentially a translation tool (it turns queries into MapReduce jobs); this deployment uses MySQL to manage the metastore.

Download apache-hive-2.1.1-bin.tar.gz

tar -zxvf apache-hive-2.1.1-bin.tar.gz

cp hive-default.xml.template hive-site.xml

vi hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <!-- WARNING!!! This file is auto generated for documentation purposes ONLY! -->
  <!-- WARNING!!! Any changes you make to this file will be ignored by Hive.   -->
  <!-- WARNING!!! You must make your changes in hive-site.xml instead.         -->
  <!-- Hive Execution Parameters -->

       <property> 

       <name>javax.jdo.option.ConnectionURL</name> 

       <value>jdbc:mysql://192.168.1.7:3306/hive?createDatabaseIfNotExist=true</value> 

   </property> 

   <property> 

       <name>javax.jdo.option.ConnectionDriverName</name> 

       <value>com.mysql.jdbc.Driver</value> 

   </property> 

   <property> 

       <name>javax.jdo.option.ConnectionUserName</name> 

        <value>root</value> 

   </property> 

   <property> 

       <name>javax.jdo.option.ConnectionPassword</name> 

       <value>hadoop111</value> 

   </property> 

  <property>   

    <name>hive.metastore.schema.verification</name>   

     <value>false</value>   

    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in metastore matches with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>

  </property>  

</configuration>

Note that the configuration must point the metastore at MySQL: the JDBC connection URL, the driver class, and the MySQL username and password.

If you get an error that com.mysql.jdbc.Driver cannot be found, install the driver:

sudo yum install mysql-connector-java
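On CentOS the yum package typically installs the driver jar under /usr/share/java; a sketch of making it visible to Hive (the exact jar name may differ, so list it first):

ls /usr/share/java/mysql-connector-java*.jar
cp /usr/share/java/mysql-connector-java.jar /home/hadoop/hive/lib/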

Edit hive-env.sh:

cp hive-env.sh.template hive-env.sh

vi hive-env.sh

Append at the end:

export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
#export HIVE_HOME=/home/hadoop/hive
export HIVE_CONF_DIR=/home/hadoop/hive/conf
export HIVE_AUX_JARS_PATH=/home/hadoop/hive/lib

Initialize the metastore schema and test the MySQL connection:

schematool -initSchema -dbType mysql

Then start the Hive CLI: ./hive
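A minimal smoke test at the Hive prompt (a sketch; the table name is arbitrary):

hive> show databases;
hive> create table t_test (id int, name string);
hive> show tables;
hive> drop table t_test;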

Make sure MySQL allows remote logins for the user configured in hive-site.xml.
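For reference, a minimal sketch of preparing MySQL on machine 7 before running schematool (it assumes MySQL is already installed and running; the root user and hadoop111 password match hive-site.xml above, and granting root access from any host is acceptable only in a lab setting):

mysql -u root -p
mysql> CREATE DATABASE IF NOT EXISTS hive;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'hadoop111';
mysql> FLUSH PRIVILEGES;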

4. HBase

HBase installation: http://blog.csdn.net/lepton126/article/details/60322279

Rack awareness: http://blog.csdn.net/lepton126/article/details/53115270

 

 

 

Appendix 1: hadoop-env.sh

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/home/hadoop/jdk1.7.0_79

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER


