Hadoop 2.5.2 Distributed Cluster Configuration
This setup uses three machines running Ubuntu 14.04.2: one master and two slaves.
1. Create the same user on every machine in the cluster
First add a regular user with the adduser command:
#adduser lq    // add a user named lq
#passwd lq     // set its password
Changing password for user lq.
New UNIX password:            // enter the new password
Retype new UNIX password:     // enter the new password again
passwd: all authentication tokens updated successfully.
2. Grant root privileges
Edit the /etc/sudoers file, find the line below, and add an entry for lq under the root line, as shown:
## Allow root to run any commands anywhere
root    ALL=(ALL) ALL
lq      ALL=(ALL) ALL
Once this is saved, you can log in as lq and use sudo (or su -) to perform privileged operations.
Here the user name on all three machines is set to lq.
3. Edit /etc/hostname on each of the three machines
/etc/hostname stores the host name. Edit the file, save it, and reboot; the new name takes effect after you log back in. Here the master is named RfidLabMaster and the two slaves RfidLabSlave1 and RfidLabSlave2.
Reboot with: sudo reboot
Configure hosts: edit /etc/hosts on each of the three machines. On the master, for example:
lq@RfidLabMaster:~$ sudo vim /etc/hosts
Add entries of the following form:
<master machine IP>  RfidLabMaster
<slave1 machine IP>  RfidLabSlave1
<slave2 machine IP>  RfidLabSlave2
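The entries above can be sketched as follows. The IP addresses here are placeholders, not real cluster addresses, and the sketch writes to a temporary file; on the actual machines you would append the lines to /etc/hosts with sudo.

```shell
# Placeholder IPs; substitute the real addresses of the three machines.
# Written to a temp file for illustration; append to /etc/hosts for real use.
cat > /tmp/hadoop-hosts <<'EOF'
192.168.1.10 RfidLabMaster
192.168.1.11 RfidLabSlave1
192.168.1.12 RfidLabSlave2
EOF
cat /tmp/hadoop-hosts
```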
4. Configure passwordless SSH from the master to the slaves
On the master:
cd ~
cd .ssh/
ssh-keygen -t rsa
Press Enter at every prompt. Two new files appear in the .ssh directory:
Private key: id_rsa
Public key: id_rsa.pub
Copy the id_rsa.pub file to authorized_keys:
cp id_rsa.pub authorized_keys
Distribute the authorized_keys file to the RfidLabSlave1 and RfidLabSlave2 nodes:
scp authorized_keys lq@RfidLabSlave1:/home/lq/.ssh/
scp authorized_keys lq@RfidLabSlave2:/home/lq/.ssh/
Note: if the user's home directory has no .ssh directory, create it yourself; the directory's permissions should be 700 and authorized_keys should be 600.
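The note above amounts to a few commands — a minimal sketch, run as the lq user on a node that is missing the directory:

```shell
# Create ~/.ssh with the recommended permissions: 700 for the directory
# and 600 for authorized_keys, as the note above suggests.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```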
Verify passwordless SSH login:
lq@RfidLabMaster:~$ ssh RfidLabSlave1
lq@RfidLabSlave1:~$
5. Configure the JDK
Java is already installed on the master under /usr/lib/jvm/jdk1.8.0_60, so that directory can be copied straight to the slave nodes. If Java is not installed, download it from the official site and unpack it first.
sudo scp -r /usr/lib/jvm/jdk1.8.0_60 root@RfidLabSlave1:/usr/lib/jvm/
sudo scp -r /usr/lib/jvm/jdk1.8.0_60 root@RfidLabSlave2:/usr/lib/jvm/
Edit /etc/profile to configure the Java environment variables:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_60
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
6. Install Hadoop
Hadoop download address
Pick whichever version you need; this guide uses hadoop-2.5.2.
First download it to /opt/tools on the master; create that directory if it does not exist.
lq@RfidLabMaster:~$ cd /opt/tools/
lq@RfidLabMaster:/opt/tools$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
After the download finishes, unpack it:
lq@RfidLabMaster:/opt/tools$ tar -zxvf hadoop-2.5.2.tar.gz
Edit the Hadoop XML configuration files
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ vim etc/hadoop/core-site.xml
Edit etc/hadoop/core-site.xml as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://RfidLabMaster:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lq/hadoop/tmp</value>
  </property>
</configuration>
Edit etc/hadoop/mapred-site.xml as follows:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>RfidLabMaster:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>RfidLabMaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>RfidLabMaster:19888</value>
  </property>
</configuration>
Edit etc/hadoop/hdfs-site.xml as follows. Note: the directory paths must not contain special characters such as dots or commas, and must be written as full paths starting with file:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/lq/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/lq/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>RfidLabMaster:9000</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
</configuration>
Edit etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>RfidLabMaster:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>RfidLabMaster:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>RfidLabMaster:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>RfidLabMaster:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>RfidLabMaster:8088</value>
  </property>
</configuration>
Edit etc/hadoop/slaves:
RfidLabSlave1
RfidLabSlave2
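The file can also be generated in one step — a sketch writing to a temporary path; on the master the real file is /opt/tools/hadoop-2.5.2/etc/hadoop/slaves:

```shell
# Write one slave host name per line (temp path used for illustration;
# point the redirect at etc/hadoop/slaves on the real master).
printf '%s\n' RfidLabSlave1 RfidLabSlave2 > /tmp/slaves
cat /tmp/slaves
```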
Configure the Java environment variable in etc/hadoop/hadoop-env.sh and yarn-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_60
Use scp to copy the configured Hadoop directory directly to the other nodes:
lq@RfidLabMaster:/opt/tools$ scp -r hadoop-2.5.2 lq@RfidLabSlave1:/opt/tools
lq@RfidLabMaster:/opt/tools$ scp -r hadoop-2.5.2 lq@RfidLabSlave2:/opt/tools
Edit /etc/profile to configure the Hadoop environment variables:
export HADOOP_HOME=/opt/tools/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
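After reloading /etc/profile (e.g. with source /etc/profile), it is worth a quick sanity check that the Hadoop bin directory actually landed on PATH. A sketch that re-creates the relevant exports using the paths from this guide:

```shell
# Re-create the relevant exports (same paths as the /etc/profile entries
# above) and confirm the bin directory is on PATH.
export HADOOP_HOME=/opt/tools/hadoop-2.5.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin is on PATH" ;;
  *)                      echo "hadoop bin is MISSING from PATH" ;;
esac
```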
The distributed Hadoop environment is now set up.
7. Start and verify Hadoop
(1) Format the file system
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ bin/hdfs namenode -format
If formatting fails, create the directory manually:
mkdir -p /home/lq/hadoop/dfs
On success the output contains: INFO common.Storage: Storage directory /home/lq/hadoop/dfs/name has been successfully formatted.
...
16/03/02 16:44:54 INFO namenode.NNConf: ACLs enabled? false
16/03/02 16:44:54 INFO namenode.NNConf: XAttrs enabled? true
16/03/02 16:44:54 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/03/02 16:44:54 INFO namenode.FSImage: Allocated new BlockPoolId: BP-677850346-120.25.162.238-1456908294436
16/03/02 16:44:54 INFO common.Storage: Storage directory /home/lq/hadoop/dfs/name has been successfully formatted.
16/03/02 16:44:54 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/03/02 16:44:54 INFO util.ExitUtil: Exiting with status 0
16/03/02 16:44:54 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at RfidLabMaster/120.25.162.238
************************************************************/
(2) Start Hadoop
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ sbin/start-all.sh
Output:
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [RfidLabMaster]
RfidLabMaster: starting namenode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-namenode-RfidLabMaster.out
RfidLabSlave2: starting datanode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-datanode-RfidLabSlave2.out
RfidLabSlave3: starting datanode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-datanode-RfidLabSlave3.out
RfidLabSlave1: starting datanode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-datanode-RfidLabSlave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-secondarynamenode-RfidLabMaster.out
starting yarn daemons
starting resourcemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-resourcemanager-RfidLabMaster.out
RfidLabSlave1: starting nodemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-nodemanager-RfidLabSlave1.out
RfidLabSlave3: starting nodemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-nodemanager-RfidLabSlave3.out
RfidLabSlave2: starting nodemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-nodemanager-RfidLabSlave2.out
Run jps to check the Java processes:
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ jps
25073 NameNode
25412 ResourceManager
25676 Jps
25262 SecondaryNameNode
If the startup hangs with output like the following:
...
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 08:1d:db:e4:d2:e0:87:89:ed:ca:69:82:17:6a:83:57 ...
add the line StrictHostKeyChecking no to the /etc/ssh/ssh_config file, then restart the SSH service: /etc/init.d/ssh restart
On the other slave nodes, run jps to check as well:
lq@RfidLabSlave1:~/hadoop$ jps
2646 NodeManager
2733 Jps
2526 DataNode
8. Problems encountered
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to master/xxx. Exiting. java.io.IOException: Incompatible clusterIDs
Where to look: all namenode directories, all datanode directories, and the temporary directories on the slave nodes.
Cause:
1) The namenode's clusterID on the master does not match the datanodes' clusterID on the slaves.
2) This is the result of formatting the namenode more than once: each format generates a new clusterID on the namenode, while the datanodes keep the old one, so the IDs no longer match.
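The mismatch can be confirmed directly: HDFS records the clusterID in a current/VERSION file under each storage directory. The sketch below uses temporary stand-in files with made-up IDs; on a real node the files would be /home/lq/hadoop/dfs/name/current/VERSION (namenode) and /home/lq/hadoop/dfs/data/current/VERSION (datanode).

```shell
# Stand-in VERSION files illustrating the comparison; the CID values are
# made up. On a real node, grep the actual files under the dfs directories.
mkdir -p /tmp/dfsdemo/name/current /tmp/dfsdemo/data/current
echo "clusterID=CID-example-namenode" > /tmp/dfsdemo/name/current/VERSION
echo "clusterID=CID-example-datanode" > /tmp/dfsdemo/data/current/VERSION

nn_id=$(grep clusterID /tmp/dfsdemo/name/current/VERSION)
dn_id=$(grep clusterID /tmp/dfsdemo/data/current/VERSION)
if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterIDs match"
else
  echo "clusterIDs differ -> this is what triggers Incompatible clusterIDs"
fi
```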
Fix:
Before reformatting, stop all services (stop-dfs.sh and stop-yarn.sh, or stop-all.sh). Once everything is confirmed down, go to every node's namenode directory, datanode directory, and temporary directory, delete all of their contents, and then restart. Deleting them machine by machine is tedious, so here is a script that runs the deletions on all machines in a batch, adapted from the blog below.
Reference: http://blog.csdn.net/nuaazdh/article/details/39643283
Create the script allcmd.sh:
if [ "$#" -ne 2 ] ; then
    echo "USAGE: $0 -f server_list_file cmd"
    exit -1
fi
file_name=$1
cmd_str=$2
cwd=$(pwd)
cd $cwd
serverlist_file="$cwd/$file_name"
cmdlist_file="$cwd/$cmd_str"
if [ ! -e $serverlist_file ] ; then
    echo 'server.list not exist'
    exit 0
fi
if [ ! -e $cmdlist_file ] ; then
    echo 'cmd.list not exist'
    exit 0
fi
while read line
do
    #echo $line
    if [ -n "$line" ] ; then
        echo "DOING--->>>>>" $line "<<<<<<<"
        while read cmd_str
        do
            ssh $line $cmd_str < /dev/null > /dev/null
            if [ $? -eq 0 ] ; then
                echo "$cmd_str done!"
            else
                echo "error: " $?
            fi
        done < $cmdlist_file
    fi
done < $serverlist_file
After creating it, run chmod +x allcmd.sh
Create the command file cmdList:
rm -r /home/lq/hadoop/dfs/*
rm -r /home/lq/hadoop/tmp/*
rm -r /opt/tools/hadoop-2.5.2/logs/*
Create the server list file serverList:
RfidLabMaster
RfidLabSlave1
RfidLabSlave2
RfidLabSlave3
Usage: create the cmdList and serverList files in the directory containing the script,
then run: ./allcmd.sh serverList cmdList