Cloudera Hadoop 4.7 Installation Manual (Hands-On Edition)


CDH 4.7 Installation Manual

Contents

1      Installation Packages
2      Preliminary Work
3      JDK Installation
3.1    Test
4      Hosts Configuration
5      SSH Configuration
5.1    Test
6      MySQL Installation and Configuration
6.1    Installation Packages
6.2    Installation Steps
6.3    Configuring MySQL
6.4    Installing the Connector (from the official documentation)
6.5    Configuring the root User
6.6    Creating the Databases
6.7    Backing Up the MySQL Databases (backup commands; can be skipped for now)
6.7.1  MySQL Directories
6.8    Test
7      Installing Cloudera Manager
8      Manual Hadoop Installation
8.1    Test
9      Hive Installation
9.1    MySQL Settings
9.2    Test
10     ZooKeeper Installation
10.1   Test
11     HBase Installation
11.1   Test
12     pdsh Installation
12.1   Test
13     Nagios Installation
14     Test

 

 

1      Installation Packages

Download all of the 4.7-series installation packages and the JDK package from the official website (the original figure listing the packages is not reproduced here). Keeping every package at the same version avoids installation failures caused by version mismatches later on.

2      预备工作

1.  关闭防火墙:serviceiptables stop(临时关闭)

2.  关闭selinux:修改/etc/selinux/config:SELINUX=disabled;配置完重启有效。

3.  配置代理:在/etc/yum.conf加入如下内容:http_proxy=http://server:port。

 

3      Jdk的安装

1.  首先将jdk-xxx-rpm放到自己的目录下。

2.  赋予执行权限chmod 755 ./jdk-xxx-rpm
# rpm -ivh jdk-xxx-rpm
安装软件会将JDK自动安装到 /usr/java/目录下。 

3.  然后配置#vi /etc/profile在里面添加如下内容(JDK版本号与自己的一样,可直接粘贴)

exportJAVA_HOME=/usr/java/jdk1.6.0_31

export JAVA_BIN=/usr/java/jdk1.6.0_31/bin

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH USER LOGNAMEMAIL HOSTNAME HISTSIZE HISTCONTROL JAVA_HOME JAVA_BIN CLASSPATH

4.  让/etc/profile文件修改后立即生效 ,可以使用如下命令:
# . /etc/profile      注意: . 和 /etc/profile 有空格.

3.1    Test

# java -version    prints the JDK version information to the screen.
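If the installation succeeded, the output should look similar to the following (the build-string details vary by release; the version shown assumes the jdk1.6.0_31 configured above):

java version "1.6.0_31"
Java(TM) SE Runtime Environment
Java HotSpot(TM) 64-Bit Server VM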

4      Hosts Configuration

Change the hostname by editing the /etc/sysconfig/network file:

    NETWORKING=yes

    HOSTNAME=???    (the hostname you have chosen)

The change takes effect after a restart: service network restart

Edit the /etc/hosts file according to your own environment (the original figure showing its contents is not reproduced here).

Then copy the hosts file to the other nodes; it must be identical on all nodes:

[root@localhost cloudera-manager]# rsync -vzrtopgu --progress /etc/hosts master02:/etc/hosts

Output like the following appears:

The authenticity of host 'master02 (192.168.85.135)' can't be established.
RSA key fingerprint is 59:8a:73:60:49:b7:5e:0c:7a:11:f9:05:2c:88:68:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master02,192.168.85.135' (RSA) to the list of known hosts.
root@master02's password:
sending incremental file list
hosts
         229 100%    0.00kB/s   0:00:00 (xfer#1, to-check=0/1)

sent 165 bytes  received 37 bytes  23.76 bytes/sec
total size is 229  speedup is 1.13

Do the same for the other nodes.

5      SSH Configuration

Go to root's home directory:

[root@localhost ~]# cd ~
[root@localhost ~]# ssh-keygen -t rsa    (press Enter at every prompt)
[root@localhost ~]# cd .ssh/
[root@localhost .ssh]# ls
id_rsa     id_rsa.pub    (one is the private key, the other the public key; give the public key to whichever host you want to grant access to — this must be done manually on every machine)

Copy the public key file from work02 (standing in for every worker node) to work01:

[root@work02 ~]# scp ~/.ssh/id_rsa.pub root@work01:~/.ssh/work02.pub

Be careful to use distinct file names when copying.

Append the public keys of work01 (the master), work02 (a worker node), and work03 to the authorized_keys file on work01:

cat id_rsa.pub >> authorized_keys
cat work02.pub >> authorized_keys
cat work03.pub >> authorized_keys

Copy work01's authorized_keys file to work02 and work03:

[root@work01 ~]# scp ~/.ssh/authorized_keys root@work02:~/.ssh/
[root@work01 ~]# scp ~/.ssh/authorized_keys root@work03:~/.ssh/

Note: passwordless login only works for the account that generated the keys; make sure the account generating the keys here is the same one that will later start services remotely.

5.1    Test
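A minimal check, using the work01/work02/work03 hostnames from above: each command should log in and print the remote hostname without prompting for a password.

[root@work01 ~]# ssh work02 hostname
work02
[root@work01 ~]# ssh work03 hostname
work03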

6      MySQL Installation and Configuration

6.1    Installation Packages

6.2    Installation Steps

First check whether MySQL is already installed:

[root@localhost mysql]# rpm -qa | grep -i mysql
mysql-libs-5.1.71-1.el6.x86_64

[root@localhost mysql]# rpm -e mysql-libs-5.1.71-1.el6.x86_64 --nodeps
[root@localhost mysql]# rpm -qa | grep -i mysql

This uninstalls the old packages. --nodeps tells rpm to skip the dependency check during removal.

 

Add a mysql group and a mysql user, which will own the MySQL installation directories and files:

[root@localhost mysql]# groupadd mysql
[root@localhost mysql]# useradd -r -g mysql mysql

* The useradd -r flag makes mysql a system account that cannot be used to log in to the system.

 

Install the MySQL server package (remember to switch to the root user):

[root@localhost mysql]# rpm -ivh MySQL-server-5.5.39-2.linux2.6.x86_64.rpm

After installation, the install process has added a mysql group and a mysql user belonging to that group, which you can check with the id command:

[root@localhost mysql]# id mysql
uid=496(mysql) gid=501(mysql) groups=501(mysql)

Although installation lays down the relevant configuration files, it does not start the mysqld service automatically; start it yourself:

[root@localhost mysql]# service mysql start
Starting MySQL.. SUCCESS!

You can verify that MySQL started properly by checking whether its port is open:

[root@localhost mysql]# netstat -anp | grep 3306
tcp        0      0 0.0.0.0:3306        0.0.0.0:*        LISTEN      3222/mysqld

Install the MySQL client package:

[root@localhost mysql]# rpm -ivh MySQL-client-5.5.39-2.linux2.6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-client           ########################################### [100%]

If installation succeeded, the mysql command should now work (note: the mysqld service must already be running):

[root@localhost mysql]# mysql

This opens the mysql console.

 

6.3    Configuring MySQL

Create a new my.cnf under /etc with the following contents:

 

[mysqld]

transaction-isolation=READ-COMMITTED

# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0

 

key_buffer              = 16M

key_buffer_size         = 32M

max_allowed_packet      = 16M

thread_stack            = 256K

thread_cache_size       = 64

query_cache_limit       = 8M

query_cache_size        = 64M

query_cache_type        = 1

# Important: see Configuring the Databases and Setting max_connections

max_connections         = 550

 

# For MySQL version 5.1.8 or later. Comment out binlog_format for older versions.

binlog_format           = mixed

 

read_buffer_size = 2M

read_rnd_buffer_size = 16M

sort_buffer_size = 8M

join_buffer_size = 8M

 

# InnoDB settings

innodb_file_per_table = 1

innodb_flush_log_at_trx_commit  = 2

innodb_log_buffer_size          = 64M

innodb_buffer_pool_size         = 4G

innodb_thread_concurrency       = 8

innodb_flush_method             = O_DIRECT

innodb_log_file_size = 512M

 

[mysqld_safe]

log-error=/var/log/mysqld.log

pid-file=/var/run/mysqld/mysqld.pid

 

Because innodb_log_file_size changed above, move the old InnoDB log files out of the way, then restart MySQL:

# mv /var/lib/mysql/ib_logfile* ~

service mysql restart

rm -f /var/run/yum.pid    (removes the stale pid file left by CentOS's automatic update, so it no longer ties up the yum process)

 

6.4    安装connector:(以下为官网文档)

sudo yuminstall mysql-connector-java

ConfigureMySQL to use a strong password and to start at boot.

6.5    配置root用户

Setthe MySQL root password. Note that in the following procedure, your current root password is blank. Press theEnter keywhen you're prompted for the root password.

1.  $ sudo /usr/bin/mysql_secure_installation

2.  [...]

3.  Enter current password for root (enter for none):

4.  OK, successfully used password, moving on...

5.  [...]

6.  Set root password? [Y/n] y

7.  New password:

8.  Re-enter new password:

9.  Remove anonymous users? [Y/n] Y

10.[...]

11.Disallow root login remotely? [Y/n] N

12.[...]

13.Remove test database and access to it [Y/n] Y

14.[...]

15.Reload privilege tables now? [Y/n] Y

Alldone!Ensure the MySQL server starts at boot.

o   Red Hat

o   $ sudo/sbin/chkconfig mysql on

o   $ sudo/sbin/chkconfig --list mysql

mysqld          0:off   1:off  2:on    3:on    4:on   5:on    6:off

 

6.6    见表

1.  Createa database for the Activity Monitor.

  Note: Thedatabase name, user name, and password can be anything you want. The examplesshown match the default names provided in the Cloudera Manager Hiveconfiguration settings.

createdatabase amon DEFAULT CHARACTER SET utf8;

grantall on amon.* TO 'amon'@'%' IDENTIFIED BY 'amon_password';

createdatabase smon DEFAULT CHARACTER SET utf8;

grantall on smon.* TO 'smon'@'%' IDENTIFIED BY 'smon_password';

createdatabase rman DEFAULT CHARACTER SET utf8;

grantall on rman.* TO 'rman'@'%' IDENTIFIED BY 'rman_password';

createdatabase hmon DEFAULT CHARACTER SET utf8;

grantall on hmon.* TO 'hmon'@'%' IDENTIFIED BY 'hmon_password';

createdatabase nav DEFAULT CHARACTER SET utf8;

grantall on nav.* TO 'nav'@'%' IDENTIFIED BY 'nav_password';

create database hive DEFAULT CHARACTER SET utf8;
grant all on hive.* TO 'hive'@'%' IDENTIFIED BY 'hive_password';

6.7    Backing Up the MySQL Databases (backup commands; can be skipped for now)

Cloudera recommends that you periodically back up the databases that Cloudera Manager uses to store configuration, monitoring, and reporting data, and the databases of managed services:

·        Cloudera Manager database: Contains all the information about what services you have configured, their role assignments, all configuration history, commands, users, and running processes. This is a relatively small database (<100 MB), and is the most important to back up.

·        Activity Monitor database: Contains information about past activities. In large clusters, this database can grow large.

·        Service Monitor database: Contains monitoring information about daemons. In large clusters, this database can grow large.

·        Report Manager database: Keeps track of disk utilization over time. Medium-sized.

·        Host Monitor database: Contains information about host status. In large clusters, this database can grow large.

·        Cloudera Navigator database: Contains auditing information. In large clusters, this database can grow large.

·        Hive Metastore database: Contains Hive metadata. Relatively small.

To back up a MySQL database, run the mysqldump command on the MySQL host, as follows:

$ mysqldump -h<hostname> -u<username> -p<password> <database> > /tmp/<database-backup>.sql

For example, to back up the sample database created for the Activity Monitor above, amon, on the local host as the root user, with the password mypasswd:

$ mysqldump -pmypasswd amon > /tmp/amon-backup.sql

To back up the sample Activity Monitor database amon on remote host myhost.example.com as the root user, with the password mypasswd:

$ mysqldump -hmyhost.example.com -uroot -pmypasswd amon > /tmp/amon-backup.sql

The official documentation:

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_install_mysql.html

6.7.1  MySQL Directories

MySQL's default installation directories are:

Database directory: /var/lib/mysql/

Configuration files: /usr/share/mysql (the mysql.server script and configuration files)

Commands: /usr/bin (mysqladmin, mysqldump, and so on) (* a safe way to start mysql: /usr/bin/mysqld_safe --user=root &)

Startup script: /etc/rc.d/init.d/

6.8    Test
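A quick check, assuming the amon account and password created in section 6.6: logging in and listing the visible databases should succeed.

[root@localhost mysql]# mysql -u amon -pamon_password
mysql> show databases;
mysql> exit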

7      Installing Cloudera Manager

# chmod 755 ./cloudera-manager-installer.bin
# sudo ./cloudera-manager-installer.bin

Then simply accept the defaults all the way through until installation completes; when prompted for the MySQL user name and password midway through, enter the same values configured for MySQL above. After installation, the server must be restarted before you can continue installing CDH:

sudo service cloudera-scm-server restart

The user name and password are both admin. Then settle in and wait.
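Before logging in, you can verify that the server is up (the Cloudera Manager admin console listens on port 7180 by default):

# netstat -anp | grep 7180

Then browse to http://<server-host>:7180 and log in with the admin/admin credentials above.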

8      Manual Hadoop Installation

The following must be in place before installation:

the JDK installed,

the firewall disabled,

SELinux disabled,

SSH configured,

MySQL installed and configured.

The installation steps for all of the above are covered earlier in this document.

Unpack the CDH 4.7 hadoop tarball, enter its conf directory, and edit the following configuration files:

 

vi core-site.xml

<configuration>

<property>

     <name>fs.defaultFS</name>

        <value>hdfs://master01</value>

</property>

<property>

        <name>fs.trash.interval</name>

        <value>10080</value>

</property>

<property>

        <name>fs.trash.checkpoint.interval</name>

        <value>10080</value>

</property>

</configuration>

 

 

 

vi hdfs-site.xml

<configuration>

<property>

          <name>dfs.replication</name>

          <value>1</value>

</property>

<property>

        <name>hadoop.tmp.dir</name>

        <value>/opt/data/</value>    (create this directory if it does not exist)

</property>

<property>

        <name>dfs.namenode.http-address</name>

        <value>master01:50070</value>

</property>

<property>

        <name>dfs.secondary.http.address</name>

        <value>master02:50090</value>

</property>

<property>

        <name>dfs.webhdfs.enabled</name>

        <value>true</value>

</property>

</configuration>

 

Edit the masters file so that it contains the hostname of the secondary namenode (SNN), and the slaves file so that it contains the hostnames of the datanodes (DN), one per line, according to your own cluster. (The original figures showing these files are not reproduced here.)

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<configuration>

<property>

         <name>mapreduce.framework.name</name>

         <value>yarn</value>

</property>

<property>

         <name>mapreduce.jobhistory.address</name>

         <value>master01:10020</value>

</property>

<property>

         <name>mapreduce.jobhistory.webapp.address</name>

         <value>master01:19888</value>

</property>

</configuration>

 

 

vi yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

<property>

    <name>yarn.resourcemanager.resource-tracker.address</name>

    <value>master01:8031</value>

  </property>

  <property>

    <name>yarn.resourcemanager.address</name>

    <value>master01:8032</value>

  </property>

  <property>

    <name>yarn.resourcemanager.scheduler.address</name>

    <value>master01:8030</value>

  </property>

  <property>

    <name>yarn.resourcemanager.admin.address</name>

    <value>master01:8033</value>

  </property>

  <property>

    <name>yarn.resourcemanager.webapp.address</name>

    <value>master01:8088</value>

  </property>

  <property>

    <description>Classpath for typical applications.</description>

    <name>yarn.application.classpath</name>

    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,

    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,

 $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,

    $YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,

    $YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>

  </property>

  <property>

    <name>yarn.nodemanager.aux-services</name>

    <value>mapreduce.shuffle</value>

  </property>

  <property>

    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

    <value>org.apache.hadoop.mapred.ShuffleHandler</value>

  </property>

  <property>

    <name>yarn.nodemanager.local-dirs</name>

    <value>/opt/data/yarn/local</value>

  </property>

  <property>

    <name>yarn.nodemanager.log-dirs</name>

    <value>/opt/data/yarn/logs</value>

  </property>

  <property>

    <description>Where to aggregate logs</description>

    <name>yarn.nodemanager.remote-app-log-dir</name>

    <value>/opt/data/yarn/logs</value>

  </property>

  <property>

    <name>yarn.app.mapreduce.am.staging-dir</name>

    <value>/user</value>

 </property>

</configuration>

 

 

cd ~

vi .bashrc    (adjust the specific values below to your own environment)

export LANG=zh_CN.utf8

export JAVA_HOME=/usr/java/jdk1.7.0

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=./:$JAVA_HOME/lib:$JRE_HOME/lib:$JRE_HOME/lib/tools.jar

export HADOOP_HOME=/opt/hadoop

export HIVE_HOME=/opt/hive

export HBASE_HOME=/opt/hbase

export HADOOP_MAPRED_HOME=${HADOOP_HOME}

export HADOOP_COMMON_HOME=${HADOOP_HOME}

export HADOOP_HDFS_HOME=${HADOOP_HOME}

export YARN_HOME=${HADOOP_HOME}

export HADOOP_YARN_HOME=${HADOOP_HOME}

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin

source .bashrc    (makes the settings take effect)

 

Also set export JAVA_HOME=/PATH/TO/JDK in the libexec/hadoop-config.sh file, otherwise errors will be reported.

 

Copy the configured hadoop directory to the other nodes:

rsync -vzrtopgu --progress hadoop slave01:/opt/
rsync -vzrtopgu --progress hadoop slave02:/opt/

Format the namenode (first run only):

/opt/hadoop/bin/hadoop namenode -format

Then start the daemons from the sbin directory:

./start-dfs.sh
./start-yarn.sh

8.1    Test
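A minimal smoke test, assuming the daemons above started cleanly:

# jps                    (expect NameNode and ResourceManager on master01; DataNode and NodeManager on the slaves)
# hadoop fs -mkdir /smoketest
# hadoop fs -ls /        (the new directory should be listed)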

9      Hive Installation

Add Hive's environment variables to the following files:
/etc/profile
/home/hadoop/.bashrc
/home/hadoop/hive-0.9.0/conf/hive-env.sh

Add the variables in the same way JAVA_HOME was added earlier.

 

The following is a sample; adjust it to your own situation:

export LANG=zh_CN.utf8

export JAVA_HOME=/usr/java/jdk1.6.0_31

export JRE_HOME=$JAVA_HOME/jre

export HADOOP_HOME=/opt/hadoop

export HIVE_HOME=/opt/hive

export HBASE_HOME=/opt/hbase

export HADOOP_MAPRED_HOME=${HADOOP_HOME}

export HADOOP_COMMON_HOME=${HADOOP_HOME}

export HADOOP_HDFS_HOME=${HADOOP_HOME}

export YARN_HOME=${HADOOP_HOME}

export HADOOP_YARN_HOME=${HADOOP_HOME}

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export CLASSPATH=./:$JAVA_HOME/lib:$JRE_HOME/lib:$JRE_HOME/lib/tools.jar:$HIVE_HOME/lib

export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin

 

 

Also copy the MySQL connector into Hive's lib directory; it is installed at /usr/share/java/mysql-connector-java.jar:

[root@master01 java]# cp mysql-connector-java-5.1.17.jar /opt/hive/lib/

The /home/hadoop/hive-0.9.0/conf directory does not contain a hive-default.xml, only hive-default.xml.template, so copy the template to a file named hive-default.xml:

[root@master01 conf]# cp hive-default.xml.template hive-default.xml

 

 

Create a new hive-site.xml in the conf directory to change the metastore to use MySQL:

[root@master01 conf]# vim hive-site.xml

cat hive-site.xml

<configuration>

<property>

<name>hive.metastore.local</name>

<value>true</value>

</property>

<property>

<name>javax.jdo.option.ConnectionURL</name>

<value>jdbc:mysql://slave1:3306/hive?createDatabaseIfNotExist=true</value>

</property>    (the connection URL here must match your own setup)

<property>

<name>javax.jdo.option.ConnectionDriverName</name>

<value>com.mysql.jdbc.Driver</value>

</property>

<property>

<name>javax.jdo.option.ConnectionUserName</name>

<value>hive</value>

</property>

<property>

<name>javax.jdo.option.ConnectionPassword</name>

<value>hivepass</value>

</property>

</configuration>

9.1    MySQL Settings

[root@master01 conf]# mysql -u root -p
Enter password:
mysql> CREATE DATABASE metastore;    (creates the database used to store Hive's metastore mappings)
mysql> USE metastore;
mysql> SOURCE /opt/hive/scripts/metastore/upgrade/mysql/hive-schema-0.10.0.mysql.sql;    (initializes the metastore schema, generating the tables Hive needs)

A page on MySQL character-set settings:
http://blog.chinaunix.net/uid-545411-id-2385599.html

 

Edit the configuration file:

vim /etc/my.cnf

Add character_set_server=utf8 to switch MySQL's server character set to utf8. This causes problems for Hive's database, however, so set that one back to latin1:

mysql> use hive;
mysql> alter database hive character set latin1;

Do not force every database to utf8, or Hive will report errors; for the reasons, see the page above or a MySQL character-set reference.
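To confirm the result (a hedged check using standard MySQL commands):

mysql> show variables like 'character_set%';
mysql> show create database hive;    (should report CHARACTER SET latin1)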

9.2    Test
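A minimal check, assuming the environment variables from section 9 put hive on the PATH; creating and dropping a table exercises both HDFS and the MySQL metastore.

hive> create table t1 (id int);
hive> show tables;
hive> drop table t1;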

10     ZooKeeper Installation

Unpack the installation package.

Copy the zoo_sample.cfg file in the zookeeper/conf directory to a file named zoo.cfg and put in the following configuration:

 

tickTime=2000

# The number of ticks thatthe initial

# synchronization phase cantake

initLimit=10

# The number of ticks thatcan pass between

# sending a request andgetting an acknowledgement

syncLimit=5

# the directory where thesnapshot is stored.

# do not use /tmp forstorage, /tmp here is just

# example sakes.

dataDir=/opt/data/zookeeper

# the port at which theclients will connect

clientPort=2181

server.11=master01:2888:3888    (set these entries according to your own cluster)
server.21=slave01:2888:3888
server.22=slave02:2888:3888

 

In the directory specified by dataDir, create a myid file containing a single number that identifies the current host: whatever number X appears in that host's server.X entry in conf/zoo.cfg (for example, 11 above) is the number to write into its myid file. The numbers tie each node to its entry, as shown in the sketch below.
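For example, given the three server entries above (a sketch assuming dataDir=/opt/data/zookeeper on every node):

[root@master01 ~]# echo 11 > /opt/data/zookeeper/myid
[root@slave01 ~]# echo 21 > /opt/data/zookeeper/myid
[root@slave02 ~]# echo 22 > /opt/data/zookeeper/myid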

 

Configure the environment variables in /etc/profile, just as for Java.

Copy the installation directory to the other nodes (the child nodes' environment variables must also be configured, in the same way as JAVA_HOME).

Start and stop ZooKeeper from the bin directory: ./zkServer.sh start/stop

10.1   Test

QuorumPeerMain is ZooKeeper's daemon process.
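A quick check on each ZooKeeper node (assuming the setup above):

# jps                      (expect a QuorumPeerMain process)
# ./zkServer.sh status     (reports whether this node is the leader or a follower)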

11     HBase Installation

The prerequisites for installing HBase are Hadoop and ZooKeeper already installed, since we use a standalone ZooKeeper cluster.

 

Configure HBase's dependencies on the JDK and Hadoop by adding the following four lines at the top of conf/hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.6.0_39
export HBASE_MANAGES_ZK=false                  # tells HBase not to use its bundled ZooKeeper
export HBASE_HOME=/usr/local/apache/hbase
export HADOOP_HOME=/usr/local/apache/hadoop    # hadoop's home directory

 

 

# vim hbase-site.xml

<configuration>

<property>

<name>hbase.master.maxclockskew</name>

<value>180000</value>

</property>

<property>

<name>hbase.rootdir</name>

<value>hdfs://master01:9000/hbase</value>    (this must match fs.defaultFS in hadoop's core-site.xml exactly; change the hadoop configuration if necessary)

</property>

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

</property>

<property>

<name>hbase.master</name>

<value>master01:60000</value>

</property>

<property>

<name>hbase.zookeeper.quorum</name>

<value>master01,slave01,slave02</value>    (set according to your own cluster, but these hosts must already have ZooKeeper installed)

</property>

</configuration>

 

hbase.rootdir specifies the directory where HBase stores its data.

hbase.cluster.distributed specifies fully distributed mode; standalone and pseudo-distributed modes need this set to false.

hbase.master specifies the location of the Master.

hbase.zookeeper.quorum specifies the ZooKeeper cluster, with multiple hosts separated by commas.

 

Edit the regionservers file under conf to list the region-server hostnames:
   slave01
   slave02

 

 

Add the HBase environment variables by appending the following to the end of /etc/profile, then apply them (source /etc/profile):

# set hbase environment
export HBASE_HOME=/opt/hbase
export PATH=$PATH:$HBASE_HOME/bin

 

http://blog.csdn.net/hguisu/article/details/7244413

http://rangochen.blog.51cto.com/2445286/1376162

11.1   Test

HMaster is the managing daemon process.
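A quick check (a sketch, assuming HBase was started with bin/start-hbase.sh):

# jps          (expect HMaster on master01 and HRegionServer on slave01/slave02)
# hbase shell
hbase(main):001:0> status    (reports the number of live servers)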

 

12     pdsh Installation

1.        Download the package from the official site at http://sourceforge.net/projects/pdsh/files/latest/download and put it in your own directory. [root@master01 pdsh]# cp pdsh-2.26.tar.bz2 /usr/local/

2.        [root@master01 local]# tar -jxf pdsh-2.26.tar.bz2

3.        In the pdsh-2.26 directory, run ./configure --with-ssh --without-rsh && make && make install

4.        Verify the installation: [root@master01 pdsh-2.26]# pdsh -V

pdsh-2.26
rcmd modules: ssh,exec (default: ssh)
misc modules: (none)

12.1       测试

[root@master01 pdsh-2.26]# pdsh -wssh:root@slave01,slave02 hostname

slave02: slave02

slave01: slave01

成功。

13     Nagios Installation

http://www.tuicool.com/articles/q2Qze2

http://www.cnblogs.com/mchina/archive/2013/02/20/2883404.html

These guides are already very detailed; the installation had not yet been completed at the time of writing.

14     Test

After a successful startup, the following pages show hadoop's status in a browser:

http://master:50070/dfshealth.jsp

http://master:8088/cluster/nodes

Under /cdh/share/hadoop/mapreduce are hadoop's bundled example programs; the wordcount program can be used to check that the cluster runs correctly. Wordcount is a program inside hadoop-mapreduce-examples-2.0.0-cdh4.5.0.jar that counts how many times each word appears in a text.

1.      Create a directory in the hdfs root to hold the input text: hadoop fs -mkdir /testinput    You can then list the hdfs directory tree with the following command (the output follows it):

2.      hadoop fs -ls /

13/12/10 16:42:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - root supergroup          0 2013-12-10 16:42 /testinput
drwx------   - root supergroup          0 2013-12-10 14:28 /tmp

3.      Upload the input text to hdfs: hadoop fs -put /home/fulong/hadoop/cdh/etc/hadoop/yarn-site.xml /testinput

4.      Run the wordcount program:

hadoop jar hadoop-mapreduce-examples-2.0.0-cdh4.5.0.jar wordcount /testinput/* /testoutput

5.      When it finishes, view the result with: hadoop fs -cat /testoutput/*
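Each output line has the form word<TAB>count, one token per line. For example (illustrative values only, assuming yarn-site.xml as the input):

<configuration>    1
<property>    12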

 

 

原创粉丝点击