Setting Up a Hadoop 2.8.0 Cluster Development Environment


Goal:

 Set up a Hadoop + HBase + ZooKeeper + Hive development environment

Installation environment:

1. CentOS    192.168.1.101

2. CentOS    192.168.1.102

Development environment:

 Windows + Eclipse

I. Install the Hadoop cluster

1. Configure hosts

# vi /etc/hosts

192.168.1.101        master

192.168.1.102        slave1

2. Disable the firewall:

# systemctl status firewalld.service   # check firewall status

# systemctl stop firewalld.service     # stop the firewall

# systemctl disable firewalld.service  # keep it from starting on boot

3. Configure passwordless SSH access

Generate a key pair on each node:

# ssh-keygen -t rsa

On slave1:

# cp ~/.ssh/id_rsa.pub ~/.ssh/slave1.id_rsa.pub

# scp ~/.ssh/slave1.id_rsa.pub master:~/.ssh

On master:

# cd ~/.ssh

# cat id_rsa.pub >> authorized_keys

# cat slave1.id_rsa.pub >> authorized_keys

# scp authorized_keys slave1:~/.ssh

Test (should log in without a password):

# ssh master

# ssh slave1
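The manual copy-and-append steps above can also be done with ssh-copy-id, which appends the public key to the remote authorized_keys and fixes permissions in one step. A minimal sketch; the custom key path and the root user are assumptions, adapt them to your setup:

```shell
# Generate a key pair with no passphrase; -f writes to a custom path
# so this sketch does not clobber an existing ~/.ssh/id_rsa.
ssh-keygen -t rsa -N "" -f ./demo_id_rsa

# Distribute it to each node (prompts for the password once per host):
# ssh-copy-id -i ./demo_id_rsa.pub root@master
# ssh-copy-id -i ./demo_id_rsa.pub root@slave1

# The private and public key files now exist locally:
ls demo_id_rsa demo_id_rsa.pub
```

After both ssh-copy-id calls, `ssh slave1` from master (and vice versa) should no longer ask for a password.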

4. Install Hadoop

# tar -zxvf hadoop-2.8.0.tar.gz -C /usr/hadoop

Create the working directories (paths must match the configuration below):

# mkdir /usr/hadoop/hadoop-2.8.0/tmp

# mkdir /usr/hadoop/hadoop-2.8.0/logs

# mkdir -p /usr/hadoop/hadoop-2.8.0/hdfs/data

# mkdir /usr/hadoop/hadoop-2.8.0/hdfs/name

Edit the configuration files:

In hadoop-env.sh, add export JAVA_HOME=<your JDK path>

In mapred-env.sh, add export JAVA_HOME=<your JDK path>

In yarn-env.sh, add export JAVA_HOME=<your JDK path>

Edit core-site.xml:
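Appending the export to each env script can be done non-interactively. A runnable sketch using a stand-in file in the current directory; the JDK path /usr/java/jdk1.8.0_131 is an example, not a requirement:

```shell
# Stand-in for $HADOOP_HOME/etc/hadoop/hadoop-env.sh so the sketch is self-contained.
touch hadoop-env.sh

# Append the JAVA_HOME export (repeat for mapred-env.sh and yarn-env.sh).
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_131' >> hadoop-env.sh

# Confirm the line landed.
grep JAVA_HOME hadoop-env.sh
```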

<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
    <description>HDFS address</description>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/tmp</value>
    <description>base for temporary directories</description>
</property>
</configuration>

Edit mapred-site.xml:

<configuration>
<property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
</property>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
<property>
        <name>mapred.system.dir</name>
        <value>/usr/hadoop/hadoop-2.8.0/mapred/system</value>
</property>
<property>
        <name>mapred.local.dir</name>
        <value>/usr/hadoop/hadoop-2.8.0/mapred/local</value>
                <final>true</final>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

Edit yarn-site.xml:

<configuration>
<property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
  </property>
 <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
 <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
<property>
    <name>dfs.name.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/hdfs/name</value>
    <description>namenode metadata directory</description>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/hdfs/data</value>
    <description>datanode data directory</description>
</property>
<property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>number of replicas; this cluster has only one datanode (slave1)</description>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
    <description>disable HDFS permission checking</description>
</property>
</configuration>

Create a slaves file (in etc/hadoop/) and add slave1 to it.

Copy the Hadoop directory to slave1:

# scp -r /usr/hadoop slave1:/usr
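The slaves file is just a plain list of worker hostnames, one per line, read by the start scripts. A minimal sketch, written to the current directory here; the real file lives at $HADOOP_HOME/etc/hadoop/slaves:

```shell
# One worker hostname per line; master is not listed, so it runs no DataNode.
printf 'slave1\n' > slaves

# Inspect the result.
cat slaves
```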

Add the Hadoop bin and sbin directories to the PATH environment variable.
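One way to put the Hadoop commands on the PATH is to append to /etc/profile. A stand-in file is used here so the sketch is runnable as-is; on a real node, append to the actual /etc/profile and re-login (or source it):

```shell
# Stand-in for /etc/profile; the install path matches this guide.
touch profile

cat >> profile <<'EOF'
export HADOOP_HOME=/usr/hadoop/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF

# Confirm the lines landed.
grep HADOOP_HOME profile
```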

Format the namenode:

# hdfs namenode -format

Start Hadoop (from $HADOOP_HOME):

# ./sbin/start-all.sh

Check that the services started with jps:

master: should show NameNode, SecondaryNameNode, ResourceManager

slave1: should show DataNode, NodeManager


ZooKeeper + HBase will be covered in the next post.




