HDFS


jdk:jdk-8u131-linux-x64.tar.gz
hadoop:hadoop-2.8.0.tar.gz
centos:CentOS-7-x86_64-Minimal-1611.iso

1. Disable the firewall and SELinux

systemctl stop firewalld.service
systemctl disable firewalld.service

Edit the configuration: vi /etc/selinux/config

 SELINUX=disabled
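These settings only fully take effect after the reboot described below; a quick check afterwards:

    getenforce                      # should print Disabled
    systemctl is-active firewalld   # should print inactive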

2. Change the hostname

Edit the configuration: vi /etc/sysconfig/network

HOSTNAME=centos201
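Note that CentOS 7 (used here) reads the hostname from /etc/hostname rather than /etc/sysconfig/network, so it is safer to also set it with hostnamectl (repeat on centos202 and centos203 with their own names):

    hostnamectl set-hostname centos201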

3. Configure hostname mappings for the three hosts

Edit the configuration: vi /etc/hosts

192.168.193.201 centos201
192.168.193.202 centos202
192.168.193.203 centos203

Reboot the system for the changes to take effect.
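After the reboot, name resolution between the nodes can be verified with:

    ping -c 1 centos202
    ping -c 1 centos203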

4. Add a user

groupadd -g 601 wuyang
useradd -g wuyang -u 601 wuyang
(echo '123456';sleep 1;echo '123456')| passwd wuyang

5. Passwordless (key-based) SSH login

su wuyang
ssh-keygen -t rsa
ssh-copy-id centos201
ssh-copy-id centos202
ssh-copy-id centos203
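If the keys were copied correctly, each of these should print the remote hostname without prompting for a password:

    ssh centos202 hostname
    ssh centos203 hostname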

6. Install the JDK

A: Extract to /usr/java
B: vi /etc/profile
C: Append the following

    export JAVA_HOME=/usr/java/jdk1.8.0_131
    export JRE_HOME=/usr/java/jdk1.8.0_131/jre
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

D:source /etc/profile
E:alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_131/bin/java 300
F:alternatives --config java
G:alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_131/bin/javac 300
H:alternatives --config javac
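I: Verify that the JDK is picked up:

    java -version     # should report java version "1.8.0_131"
    javac -version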

7. Install HDFS

A: Extract to /usr/hadoop
B: Add the Hadoop installation path to /etc/profile

    vi /etc/profile
    export HADOOP_HOME=/usr/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin

C:source /etc/profile
D: Configure hadoop-env.sh and confirm it takes effect

    vi /usr/hadoop/etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.8.0_131
    export HADOOP_CONF_DIR=/usr/hadoop/etc/hadoop/
    source /usr/hadoop/etc/hadoop/hadoop-env.sh 
    hadoop version

E: Create subdirectories under /usr/hadoop

    [root@centos201 hadoop]# mkdir tmp
    [root@centos201 hadoop]# mkdir hdfs
    [root@centos201 hadoop]# cd hdfs
    [root@centos201 hdfs]# mkdir name
    [root@centos201 hdfs]# mkdir tmp
    [root@centos201 hdfs]# mkdir data

F: Configure core-site.xml. In the /usr/hadoop/etc/hadoop directory, edit the Hadoop core configuration file core-site.xml; it sets the address and port of the HDFS master (i.e. the NameNode).

vi /usr/hadoop/etc/hadoop/core-site.xml 
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <final>true</final>
        <!-- Note: create the tmp directory under /usr/hadoop first -->
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://centos201:9000</value>
        <!-- hdfs://Master.Hadoop:22 -->
        <final>true</final>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
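Note: in Hadoop 2.x the property name fs.default.name is deprecated in favor of fs.defaultFS; the old name still works through Hadoop's deprecated-key mapping, so either form points the cluster at hdfs://centos201:9000.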

G: Configure hdfs-site.xml

vi /usr/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>centos201:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
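Note: dfs.name.dir and dfs.data.dir are the older spellings of dfs.namenode.name.dir and dfs.datanode.data.dir; Hadoop 2.x accepts both through its deprecated-key mapping. They must point at the name and data directories created in step E.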

H: Configure mapred-site.xml

cp /usr/hadoop/etc/hadoop/mapred-site.xml.template /usr/hadoop/etc/hadoop/mapred-site.xml
vi /usr/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

I: Configure yarn-site.xml

vi /usr/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>centos201:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>centos201:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>centos201:18088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>centos201:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>centos201:18141</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

J: Configure the slaves file

vi /usr/hadoop/etc/hadoop/slaves
centos201
centos202
centos203
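Because centos201 is listed in slaves as well as being the master, it will run a DataNode and NodeManager alongside the NameNode, SecondaryNameNode, and ResourceManager.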

K: Grant permissions

chown -R wuyang:wuyang /usr/hadoop/
chown -R wuyang:wuyang /usr/java/
sudo chmod -R a+w /usr/hadoop/

L: Switch user

su wuyang

M: Initialize (format the NameNode)

/usr/hadoop/bin/hdfs namenode -format
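Note: format the NameNode only once, as wuyang, on centos201. Re-formatting an existing cluster leaves the DataNodes with a mismatched cluster ID; if a re-format is really needed, clear /usr/hadoop/hdfs/name on the master and /usr/hadoop/hdfs/data on every node first.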

N: Start

/usr/hadoop/sbin/start-all.sh
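If startup succeeded, jps run as wuyang should show NameNode, SecondaryNameNode, ResourceManager, DataNode, and NodeManager on centos201, and DataNode plus NodeManager on centos202/centos203. A quick check of the cluster state:

    jps
    hdfs dfsadmin -report    # should list 3 live datanodes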

O: Test

http://192.168.193.201:18088
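The URL above is the YARN ResourceManager web UI (port 18088, as set in yarn.resourcemanager.webapp.address). The NameNode web UI stays on its Hadoop 2.x default, http://192.168.193.201:50070. A minimal HDFS smoke test, run as wuyang:

    hdfs dfs -mkdir -p /user/wuyang
    hdfs dfs -put /etc/hosts /user/wuyang/
    hdfs dfs -ls /user/wuyang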

Key points

1. Resource allocation

(screenshot not preserved)

2. Parameter configuration

(screenshot not preserved)
