Hadoop 2.6 Environment Setup Notes for Spark

1. Relationship between Spark, Hadoop, and YARN

Spark:  computation
Hadoop: storage
YARN:   resource management

Here we mainly configure HDFS and YARN. The layers of the stack:
hdfs
yarn
mapreduce (the computation framework; Spark fills this role here)

YARN main daemon: ResourceManager
Start YARN: sbin/start-yarn.sh
Stop YARN:  sbin/stop-yarn.sh
Web UI:     http://localhost:8088
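
A quick way to confirm YARN came up (a minimal check; jps ships with the JDK, and the daemon list assumes a single-node setup):

    # start YARN, then list running JVM processes
    $HADOOP_HOME/sbin/start-yarn.sh
    jps
    # expected among the output:
    #   ResourceManager
    #   NodeManager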

HDFS daemons:
namenode
datanode
Start/stop DFS: sbin/start-dfs.sh, sbin/stop-dfs.sh
Web UI: http://localhost:50070
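
The same check works for HDFS (this assumes the NameNode has already been formatted, a one-time step shown in section 2 below):

    # start HDFS, then list running JVM processes
    $HADOOP_HOME/sbin/start-dfs.sh
    jps
    # expected among the output on a single node:
    #   NameNode
    #   DataNode
    #   SecondaryNameNode

    # basic smoke test against the filesystem
    hdfs dfs -mkdir -p /user/test
    hdfs dfs -ls /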

2. Hadoop 2.6 Cluster Environment Setup

  1. Download and extract Hadoop

  2. Set environment variables
    2.1 Set HADOOP_HOME
    2.2 Set HADOOP_CONF_DIR to $HADOOP_HOME/etc/hadoop
    2.3 Set YARN_CONF_DIR to $HADOOP_HOME/etc/hadoop
    The full settings:

    vim ~/.bashrc

    export JAVA_HOME=/usr/lib/java/jdk1.8.0_45
    export JRE_HOME=${JAVA_HOME}/jre
    export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
    export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
    export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
    export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
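
    After editing, reload the shell configuration and verify the tools resolve (paths as configured above):

    source ~/.bashrc
    java -version      # should report 1.8.0_45
    hadoop version     # should report Hadoop 2.6.0
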
  3. Configure core-site.xml

    <configuration>
      <property>
          <name>fs.default.name</name>
          <value>hdfs://localhost:9000</value>
      </property>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/local/hadoop/hadoop-2.6.0/tmp</value>
      </property>
    </configuration>
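
    Two notes on the above: fs.default.name still works in Hadoop 2.6 but has been superseded by fs.defaultFS, and pre-creating the hadoop.tmp.dir directory (writable by the user running Hadoop) avoids permission surprises on first start:

    mkdir -p /usr/local/hadoop/hadoop-2.6.0/tmp
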
  4. Configure hdfs-site.xml
    <configuration>
      <property>
          <name>dfs.replication</name>
          <value>1</value>
      </property>
      <property>
          <name>dfs.name.dir</name>
          <value>/usr/local/hadoop/hadoop-2.6.0/dfs/name</value>
      </property>
      <property>
          <name>dfs.data.dir</name>
          <value>/usr/local/hadoop/hadoop-2.6.0/dfs/data</value>
      </property>
    </configuration>
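
    With HDFS configured, format the NameNode once before the first start-dfs.sh (re-running this later wipes the filesystem metadata, so it is strictly a one-time step):

    hdfs namenode -format
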
  5. Configure mapred-site.xml
    <configuration>
      <property>
          <name>mapred.job.tracker</name>
          <value>localhost:9001</value>
      </property>
    </configuration>
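
    Hadoop 2.6 ships only a template for this file, so create it first (assuming the default etc/hadoop layout). Note also that mapred.job.tracker is a Hadoop 1.x (JobTracker) property; on a 2.x cluster, jobs normally run under YARN by setting mapreduce.framework.name to yarn instead.

    cd $HADOOP_HOME/etc/hadoop
    cp mapred-site.xml.template mapred-site.xml
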
  6. Set $JAVA_HOME in hadoop-env.sh
    # The java implementation to use.
    export JAVA_HOME=/usr/lib/java/jdk1.8.0_45
  7. Set $JAVA_HOME in yarn-env.sh

    export JAVA_HOME=/usr/lib/java/jdk1.8.0_45
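
With all of the above in place, the stack can be brought up and exercised end to end. The Spark paths below are assumptions, since they depend on where Spark is installed and which version; the examples jar location shown matches the Spark 1.x prebuilt layout:

    # start HDFS and YARN
    $HADOOP_HOME/sbin/start-dfs.sh
    $HADOOP_HOME/sbin/start-yarn.sh

    # run the bundled SparkPi example on YARN as a smoke test
    $SPARK_HOME/bin/spark-submit \
      --master yarn \
      --deploy-mode client \
      --class org.apache.spark.examples.SparkPi \
      $SPARK_HOME/lib/spark-examples-*.jar 10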