Notes on a Fully Distributed Installation


1. SSH

For a fully distributed installation, collect every machine's id_rsa.pub into a single login.txt, then append it: cat login.txt >> authorized_keys

Every machine must have this authorized_keys file.

Change the permissions of authorized_keys to 600:

chmod 600 authorized_keys
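A minimal sketch of the whole sequence, assuming three nodes named node1, node2, node3 (hypothetical hostnames) and the same user on each; at this stage every ssh/scp call will still prompt for a password:

# 1. Generate a key pair on every node (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# 2. On one node, collect every node's public key into login.txt
for h in node1 node2 node3; do ssh "$h" 'cat ~/.ssh/id_rsa.pub' >> login.txt; done

# 3. Append the collected keys and set the required permissions
cat login.txt >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# 4. Distribute authorized_keys to the other nodes
for h in node2 node3; do scp ~/.ssh/authorized_keys "$h":~/.ssh/; done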

2. HBase

When starting HBase, start ZooKeeper first.

In the HBase configuration, explicitly set the ZooKeeper-management option to false; otherwise it defaults to true (see the example below).
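Assuming the switch meant here is the standard HBASE_MANAGES_ZK variable in conf/hbase-env.sh (it ships commented out), uncomment it and set:

export HBASE_MANAGES_ZK=false

With this set to false, HBase uses the external ZooKeeper quorum listed in hbase.zookeeper.quorum instead of starting its own.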

The HBase files in HDFS are best given 755 permissions.
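For example, assuming the HBase root directory in HDFS is /hbase:

hadoop fs -chmod -R 755 /hbase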

3. Hive

Before configuring Hive, set up MySQL first; the MySQL account and password must match the ones configured for Hive.

Create the corresponding database for Hive.
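A minimal sketch of the MySQL preparation, assuming the database name hive and the user/password hive/hive are placeholders that must match javax.jdo.option.ConnectionUserName and ConnectionPassword in hive-site.xml:

mysql -u root -p <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL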

The metastore may fail to start.

In that case, start it directly with the following command:

hive --service metastore
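If it needs to keep running after the shell exits, one common way is to background it (a sketch; the log path is arbitrary):

nohup hive --service metastore > /tmp/metastore.log 2>&1 &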

4. Problems encountered and solutions

Port 16020 conflict between HMaster and HRegionServer
Error:
2016-01-11 16:53:52,986 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2458)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2473)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2456)
        ... 5 more
Caused by: java.net.BindException: Problem binding to node111/127.0.0.1:16020 : Address already in use
Solution:
First start:
bin/start-hbase.sh
Then start:
bin/local-regionservers.sh start 1
bin/local-regionservers.sh start 2
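Before restarting, it can help to confirm which process is already holding 16020 (a sketch; netstat may need root to show the owning PID):

jps                          # list the running Hadoop/HBase JVMs
netstat -tlnp | grep 16020   # show what is bound to port 16020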
The hostname does not exist
Solution:
hostname -f          # first check whether a hostname is already set
hostname newname     # set the hostname to newname
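To make the name persistent and resolvable from the other nodes (a sketch; the IP below is only a placeholder for the node's real address):

echo "newname" | sudo tee /etc/hostname
echo "192.168.1.111 newname" | sudo tee -a /etc/hosts   # placeholder IP, replace with the real one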


Error:
Can't get master address from ZooKeeper; znode data == null 
                                        ↓
Call From node167/127.0.0.1 to node111:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        ... ...
        at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:432)
        at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:879)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:411)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:513)
        at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:157)
        at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1332)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
Solution: check the permissions of /hbase with hadoop fs -ls /hbase and change them to 775; that resolves it.
YARN fails to start
Error:
16/01/18 10:00:54 INFO client.RMProxy: Connecting to ResourceManager at node111/172.24.2.167:8032
16/01/18 10:00:56 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:00:57 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:00:58 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:00:59 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:01:00 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:01:01 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Solution:
Empty the $hadoop/dir directory, including dfs/name and dfs/data under it, then format the NameNode:
bin/hadoop namenode -format   # format the NameNode

Then restart.
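A sketch of the whole reset, assuming the HDFS name and data directories really do live under $hadoop/dir as described above; note that formatting wipes all HDFS data:

rm -rf $hadoop/dir/dfs/name/* $hadoop/dir/dfs/data/*   # run on every node that stores HDFS data
bin/hadoop namenode -format
sbin/stop-dfs.sh && sbin/stop-yarn.sh
sbin/start-dfs.sh && sbin/start-yarn.sh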
