CentOS 6.5 + CDH 5.6 (Cloudera) cluster environment setup

1. Software to prepare: hadoop-2.6.0-cdh5.6.0.tar.gz
2. JDK: /opt/jdk1.7.0_25
---------------------------------------------------
1. Configure a static IP
[root@master ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
#TYPE=Ethernet
UUID=01db0e1f-ea88-40d6-bb6d-b90a5bf00459
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.201.11.102
NETMASK=255.255.255.0
GATEWAY=10.201.11.1
DNS1=10.240.1.254
HWADDR=BC:30:5B:AF:A0:02
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
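
After editing, restart the network service so the static address takes effect; a quick look at the interface confirms it (eth0 as configured above):

service network restart
ifconfig eth0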

2. Configure passwordless SSH login (a command sketch follows the firewall commands below)

Problems encountered and their solutions: http://feilong2483.iteye.com/admin/blogs/2319158

The firewall must be turned off on both the master and the slave machines; otherwise passwordless login may fail, or some cluster nodes may not start.

service iptables stop

chkconfig iptables off
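
A minimal sketch of the key exchange itself, run on the master (assumes the slave hostnames used throughout this guide):

# generate a key pair (accept the defaults, empty passphrase)
ssh-keygen -t rsa
# push the public key to every node, including the master itself
for h in master slave1 slave2 slave3; do ssh-copy-id root@$h; done
# verify: this should log in without asking for a password
ssh slave1 hostname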

3. Configure Hadoop

Directory: /opt/clouder/hadoop-2.6.0-cdh5.6.0/etc/hadoop

a. core-site.xml

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
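
fs.default.name is the legacy spelling of this key (fs.defaultFS in current Hadoop 2.x documentation); both are accepted here. Once the environment variables from step g are in place, the effective value can be double-checked with:

hdfs getconf -confKey fs.defaultFS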

b.hdfs-site.xml


<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/opt/clouder/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/opt/clouder/hdfs/data</value>
</property>
</configuration>
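
The name and data directories are not guaranteed to be created automatically, so it is safest to create them up front on every node (paths as configured above):

mkdir -p /opt/clouder/hdfs/name /opt/clouder/hdfs/data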

c.mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
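
If mapred-site.xml does not exist yet (the stock tarball typically ships only mapred-site.xml.template), create it from the template first:

cd /opt/clouder/hadoop-2.6.0-cdh5.6.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml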


d.yarn-site.xml

<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8080</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8082</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

e. Edit slaves

List the hostnames of the slave nodes:

slave1
slave2
slave3

f. Edit hadoop-env.sh
Append the following variables at the end of the file:

export JAVA_HOME=/opt/jdk1.7.0_25
export HADOOP_HOME=/opt/clouder/hadoop-2.6.0-cdh5.6.0
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
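
Note that $HADOOP_PREFIX is only defined later, in /etc/profile (step g). If you prefer hadoop-env.sh to stand on its own, the same line can be written against HADOOP_HOME, which is exported just above:

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"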

g. Edit the environment variables

vi /etc/profile

Append at the end:

JAVA_HOME=/opt/jdk1.7.0_25
HADOOP_HOME=/opt/clouder/hadoop-2.6.0-cdh5.6.0
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
export PATH=$PATH:$HADOOP_HOME/bin
# This variable is very useful: all sorts of problems come up while setting up the environment, and DEBUG output on the console makes it easier to spot errors and fix them promptly
export HADOOP_ROOT_LOGGER=DEBUG,console
export HADOOP_PREFIX=/opt/clouder/hadoop-2.6.0-cdh5.6.0
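
Apply the new variables to the current shell and confirm that the JDK and Hadoop are both picked up:

source /etc/profile
java -version
hadoop version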

h. vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.201.11.102 master.localdomain master
10.201.11.119 slave1.localdomain slave1
10.201.11.120 slave2.localdomain slave2
10.201.11.121 slave3.localdomain slave3
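
The same hosts entries are needed on every node; with passwordless SSH in place they can simply be pushed out from the master:

for h in slave1 slave2 slave3; do scp /etc/hosts root@$h:/etc/hosts; done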







4. Configure each slave node in the same way; one way to replicate the setup is sketched below.
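
A sketch of copying the JDK, the Hadoop directory and /etc/profile from the master to every slave (assumes root SSH access):

for h in slave1 slave2 slave3; do
    ssh root@$h "mkdir -p /opt/clouder"
    scp -r /opt/jdk1.7.0_25 root@$h:/opt/
    scp -r /opt/clouder/hadoop-2.6.0-cdh5.6.0 root@$h:/opt/clouder/
    scp /etc/profile root@$h:/etc/profile
done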

Then run the following commands on the master node:

5. hdfs namenode -format

6. sbin/start-dfs.sh

7. sbin/start-yarn.sh

After starting the daemons, the following was observed:

Master node: only NameNode and SecondaryNameNode were running
Slave nodes: only DataNode was running
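
jps on each node shows which daemons actually came up; with both HDFS and YARN healthy, roughly the following would be expected:

# on the master
jps    # NameNode, SecondaryNameNode, ResourceManager
# on each slave
jps    # DataNode, NodeManager

So the ResourceManager and the NodeManagers were missing.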

The Hadoop log files then pointed to the cause:

1)yarn-root-nodemanager-slave1.log

2016-08-21 01:14:07,309 FATAL org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Failed to initialize >mapreduce_shuffle
java.lang.IllegalArgumentException: The ServiceName: >mapreduce_shuffle set in yarn.nodemanager.aux-services is invalid.The valid service name should only contain a-zA-Z0-9_ and can not start with numbers
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:114)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:236)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)

2)yarn-root-resourcemanager-master.log

java.lang.IllegalArgumentException: Does not contain a valid host:port authority: >master:8080 (configuration property 'yarn.resourcemanager.address')
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:197)
at org.apache.hadoop.yarn.conf.YarnConfiguration.getSocketAddr(YarnConfiguration.java:1638)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.getBindAddress(ResourceManager.java:1265)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.doSecureLogin(ResourceManager.java:1095)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:243)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1216)
2016-08-20 17:33:20,013 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state

Investigation showed that this was caused by an illegal character (the stray '>' visible in the messages above) that had crept into the values configured in yarn-site.xml.
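
After removing the stray character, a quick scan of the values and a YARN restart is enough to bring the missing daemons up (a sketch):

cd /opt/clouder/hadoop-2.6.0-cdh5.6.0
grep -n '<value>' etc/hadoop/yarn-site.xml    # every value should be a bare service name or host:port
sbin/stop-yarn.sh
sbin/start-yarn.sh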

Other errors:
java.lang.UnsatisfiedLinkError: Cannot load libcrypto.so
Solution: http://feilong2483.iteye.com/admin/blogs/2319251

          http://feilong2483.iteye.com/admin/blogs/2319252

And with that, everything is finally up and running!







