Hadoop cluster setup
- Hadoop cluster setup
- Preliminaries
- Hadoop NameNode configuration
- core-site.xml
- hdfs-site.xml
- mapred-site.xml
- yarn-site.xml
- slaves
- Formatting the NameNode
- Starting the Hadoop cluster
Preliminaries
- ### Three virtual machines: hadoop-server01 (NameNode), hadoop-server02 and hadoop-server03 (slaves)
- ### JDK and Hadoop software
jdk-8u45-linux-x64.tar.gz

Upload it to the hadoop user's directory and unpack it to

/home/hadoop/env/jdk1.8.0_45

Set the environment variables:
```shell
#------------------------------------------
# set java environment
JAVA_HOME=/home/hadoop/env/jdk1.8.0_45
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
#------------------------------------------
```
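After adding these lines (e.g. to `~/.bashrc`), it is worth reloading the shell and confirming the JDK is actually the one on PATH; a quick check, using the paths configured above, might look like:

```shell
# Reload the environment and confirm the installed JDK is picked up.
source ~/.bashrc
echo "$JAVA_HOME"   # expect /home/hadoop/env/jdk1.8.0_45
java -version       # expect java version "1.8.0_45"
which java          # expect /home/hadoop/env/jdk1.8.0_45/bin/java
```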
Download the Hadoop tarball

We use hadoop-2.8.0.tar.gz here.

Upload it and unpack to /home/hadoop/bigdata/hadoop/hadoop-2.8.0
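The path above implies the archive has already been unpacked; a minimal sketch, assuming the tarball was uploaded to the hadoop user's home directory:

```shell
# Unpack hadoop-2.8.0.tar.gz into the target directory; the archive
# contains a top-level hadoop-2.8.0/ folder, so extracting into
# /home/hadoop/bigdata/hadoop yields the path used by this guide.
mkdir -p /home/hadoop/bigdata/hadoop
tar -xzf ~/hadoop-2.8.0.tar.gz -C /home/hadoop/bigdata/hadoop
ls /home/hadoop/bigdata/hadoop/hadoop-2.8.0/etc/hadoop   # config files live here
```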
- ### Passwordless SSH login
1. su - hadoop
2. ssh-keygen
3. ssh-copy-id hadoop@hadoop-server01
```
[root@localhost ~]# su - hadoop
[hadoop@localhost ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
cd:ee:49:e7:51:f1:cc:74:97:d2:be:11:47:9c:4d:8a hadoop@localhost.localdomain
The key's randomart image is:
(randomart omitted)
[hadoop@localhost ~]$ ssh-copy-id hadoop@hadoop-server01
The authenticity of host 'hadoop-server01 (192.168.3.101)' can't be established.
RSA key fingerprint is f2:6e:ce:2c:4b:bf:58:61:c1:c1:0d:d4:46:a4:f8:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-server01,192.168.3.101' (RSA) to the list of known hosts.
hadoop@hadoop-server01's password:
Now try logging into the machine, with "ssh 'hadoop@hadoop-server01'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
[hadoop@localhost ~]$ ssh-copy-id hadoop@hadoop-server02
The authenticity of host 'hadoop-server02 (192.168.3.102)' can't be established.
RSA key fingerprint is f2:6e:ce:2c:4b:bf:58:61:c1:c1:0d:d4:46:a4:f8:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-server02,192.168.3.102' (RSA) to the list of known hosts.
hadoop@hadoop-server02's password:
Now try logging into the machine, with "ssh 'hadoop@hadoop-server02'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
```
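Note that the transcript copies the key to hadoop-server01 and hadoop-server02 only; hadoop-server03 needs the same `ssh-copy-id` step. Once all three are done, a quick check that key-based login really works on every node (`-o BatchMode=yes` makes ssh fail instead of prompting for a password):

```shell
# Each command should print the remote hostname without a password
# prompt; if one fails, re-run ssh-copy-id for that host.
for host in hadoop-server01 hadoop-server02 hadoop-server03; do
    ssh -o BatchMode=yes hadoop@"$host" hostname
done
```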
Hadoop NameNode configuration
- ### User environment variables
In /home/hadoop/, edit .bashrc:

```shell
[hadoop@hadoop-server01 ~]$ vi .bashrc
```

.bashrc contents:

```shell
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific aliases and functions
#------------------------------------------
# set java environment
JAVA_HOME=/home/hadoop/env/jdk1.8.0_45
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
#------------------------------------------
HADOOP_HOME=/home/hadoop/bigdata/hadoop/hadoop-2.8.0
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_HOME PATH
```
- ### Disable the firewall

```shell
service iptables stop
```
- ### Edit the configuration files
core-site.xml
In /home/hadoop/bigdata/hadoop/hadoop-2.8.0/etc/hadoop:

```shell
[hadoop@hadoop-server01 hadoop]$ vi core-site.xml
```

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-server01:9000/</value>
        <description>namenode settings</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp/hadoop-${user.name}</value>
        <description>temp folder</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>hadoop-server01,hadoop-server02,hadoop-server03</value>
        <description>needed by Hive</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>hadoop</value>
        <description>needed by Hive</description>
    </property>
</configuration>
```
hdfs-site.xml
In /home/hadoop/bigdata/hadoop/hadoop-2.8.0/etc/hadoop:

```shell
[hadoop@hadoop-server01 hadoop]$ vi hdfs-site.xml
```

```xml
<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop-server01:50070</value>
        <description>fetch NameNode images and edits; note the hostname</description>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-server01:50090</value>
        <description>fetch SecondaryNameNode fsimage</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <description>replica count</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/bigdata/hadoop/hadoop-2.8.0/hdfs/name</value>
        <description>NameNode directory</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/bigdata/hadoop/hadoop-2.8.0/hdfs/data</value>
        <description>DataNode directory</description>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///home/hadoop/bigdata/hadoop/hadoop-2.8.0/hdfs/namesecondary</value>
        <description>checkpoint directory</description>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.stream-buffer-size</name>
        <value>131072</value>
        <description>stream buffer size</description>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.period</name>
        <value>3600</value>
        <description>checkpoint period (seconds)</description>
    </property>
</configuration>
```
mapred-site.xml
In /home/hadoop/bigdata/hadoop/hadoop-2.8.0/etc/hadoop:

```shell
[hadoop@hadoop-server01 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop-server01 hadoop]$ vi mapred-site.xml
```

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>hadoop-server01:9001</value>
        <description>legacy MR1 setting (host:port, no hdfs:// scheme); ignored when the framework is yarn</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-server01:10020</value>
        <description>MapReduce JobHistory Server host:port, default port is 10020.</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-server01:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port, default port is 19888.</description>
    </property>
</configuration>
```
yarn-site.xml
In /home/hadoop/bigdata/hadoop/hadoop-2.8.0/etc/hadoop:

```shell
[hadoop@hadoop-server01 hadoop]$ vi yarn-site.xml
```

```xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-server01</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-server01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-server01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-server01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop-server01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop-server01:8088</value>
    </property>
</configuration>
```
slaves
```shell
[hadoop@hadoop-server01 hadoop]$ vi slaves
```

slaves contents:

```
hadoop-server02
hadoop-server03
```
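The guide configures everything on hadoop-server01 only, but the JDK, the Hadoop tree and the environment settings must exist at the same paths on both slaves. A minimal sketch using scp (relying on the passwordless SSH set up earlier; paths as used throughout this guide):

```shell
# Copy the JDK, the configured Hadoop directory and the shell
# environment to each slave; paths must match what the configs reference.
for host in hadoop-server02 hadoop-server03; do
    ssh hadoop@"$host" 'mkdir -p ~/env ~/bigdata/hadoop'
    scp -r /home/hadoop/env/jdk1.8.0_45             hadoop@"$host":env/
    scp -r /home/hadoop/bigdata/hadoop/hadoop-2.8.0 hadoop@"$host":bigdata/hadoop/
    scp    ~/.bashrc                                hadoop@"$host":
done
```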
Formatting the NameNode
```shell
hdfs namenode -format
```

(The older `hadoop namenode -format` still works in 2.8.0 but prints a deprecation warning.)
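If the format succeeds, the name directory configured in hdfs-site.xml is created and populated; a quick sanity check, using the `dfs.namenode.name.dir` path from above:

```shell
# After formatting, current/ should contain a VERSION file and an
# initial fsimage; an empty or missing directory means the format failed.
ls /home/hadoop/bigdata/hadoop/hadoop-2.8.0/hdfs/name/current/
```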
Starting the Hadoop cluster
```shell
start-all.sh
```

(`start-all.sh` is deprecated in Hadoop 2.x; running `start-dfs.sh` followed by `start-yarn.sh` does the same thing.)
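After startup, `jps` on each node shows which daemons came up. With the configuration above one would expect NameNode, SecondaryNameNode and ResourceManager on hadoop-server01, and DataNode plus NodeManager on each slave:

```shell
# Check the local daemons, then each slave over ssh; missing daemons
# usually mean a config or hostname problem -- check $HADOOP_HOME/logs.
jps
ssh hadoop-server02 jps
ssh hadoop-server03 jps
```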