Problems starting HDFS


Today I ran into a problem when starting HDFS. The log output was as follows:

SLF4J: Found binding in [jar:file:/home/iespark/hadoop_program_files/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/iespark/hadoop_program_files/hbase-1.1.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/iespark/hadoop_program_files/hbase-1.1.2/lib/phoenix-4.5.2-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/12/30 15:29:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From hadoopadmin/219.226.86.155 to hadoopadmin:9000 failed on connection exception: java.net.ConnectException: Connection refused;
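The failing call is going to the NameNode RPC endpoint hadoopadmin:9000, which is typically the value of fs.defaultFS in core-site.xml (fs.default.name in older releases). Before anything else, it is worth confirming what address the client is actually configured with. A quick check, assuming a standard Hadoop 2.x layout under $HADOOP_HOME:

    # Show the configured NameNode address (illustrative install path)
    grep -A 2 'fs.defaultFS' $HADOOP_HOME/etc/hadoop/core-site.xml

    # Or ask Hadoop directly
    hdfs getconf -confKey fs.defaultFS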

In my case, the cause was that I had shut the machine down without stopping the cluster first, so some of the daemon processes were never killed. Killing all the leftover Hadoop processes and restarting the cluster fixed it.
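For reference, a minimal sketch of that recovery, assuming a standard Hadoop 2.x install with the sbin scripts on the PATH; the exact daemon names in your jps output may differ:

    # List the JVMs left over from the unclean shutdown
    jps
    # e.g. stale NameNode / DataNode / SecondaryNameNode entries

    # Kill any stale HDFS daemons by the PIDs jps printed
    kill <pid>          # escalate to kill -9 <pid> only if needed

    # Then restart HDFS cleanly and verify the daemons are back
    stop-dfs.sh         # harmless if nothing is left running
    start-dfs.sh
    jps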

Below is the explanation from the official Hadoop wiki.

Connection Refused

You get a ConnectionRefused exception when there is a machine at the address specified, but no program is listening on the specific TCP port the client is using, and there is no firewall in the way silently dropping TCP connection requests. If you do not know what a TCP connection request is, please consult the specification.

Unless there is a configuration error at either end, a common cause is that the Hadoop service isn't running.

This stack trace is very common when the cluster is being shut down, because at that point the Hadoop services are being torn down across the cluster, which is visible to any services and applications that haven't been shut down themselves. Seeing this error message during cluster shutdown is nothing to worry about.

If the application or cluster is not working, and this message appears in the log, then it is more serious.

  1. Check that the hostname the client is using is correct.
  2. Check that the IP address the client resolves for that hostname is correct.
  3. Make sure the destination address in the exception isn't 0.0.0.0; that means you haven't actually configured the client with the real address of the service, and it is instead picking up the server-side property that tells the server to listen on every network interface for connections.
  4. Check that there isn't an entry mapping your hostname to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
  5. Check that the port the client is trying to talk to matches the one the server is offering its service on.
  6. On the server, try a telnet localhost <port> to see if the port is open there.
  7. On the client, try a telnet <server> <port> to see if the port is accessible remotely (steps 1-7 are sketched as shell commands after this list).
  8. Try connecting to the server/port from a different machine, to see if it is just this one client misbehaving.
  9. If you are using a Hadoop-based product from a third party, including those from Cloudera, Hortonworks, Intel, EMC and others, please use the support channels provided by the vendor.
  10. Please do not file bug reports about this problem, as they will be closed as Invalid.
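Several of the checks above map directly onto shell commands. A minimal sketch, using the hostname and port from the log above (hadoopadmin:9000) purely as placeholders; substitute your own values:

    # Steps 1-2: does the hostname resolve, and to the address you expect?
    getent hosts hadoopadmin

    # Step 4: is the hostname mapped to a loopback address in /etc/hosts?
    grep -E '127\.0\.[01]\.1' /etc/hosts

    # Steps 5-6: on the server, is anything actually listening on the port? (Linux)
    netstat -tlnp | grep 9000
    telnet localhost 9000

    # Step 7: from the client, is the port reachable remotely?
    telnet hadoopadmin 9000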

None of these are Hadoop problems; they are host, network, and firewall configuration issues. As it is your cluster, only you can track down the problem.


