Hadoop installation troubleshooting, part 2: Connection Refused


The following is based on the official wiki's guidance. In practice there are a great many possible causes, so tracking one down is mostly a matter of patient elimination.

Connection Refused

You get a ConnectionRefused exception when there is a machine at the address specified, but no program is listening on the specific TCP port the client is using, and no firewall in the path is silently dropping the TCP connection requests. If you do not know what a TCP connection request is, consult the TCP specification.
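To make the failure mode concrete, here is a minimal sketch using bash's /dev/tcp pseudo-device; port 19999 is an arbitrary placeholder assumed to have no listener, so substitute the port your Hadoop service should be on:

```shell
# In bash, opening /dev/tcp/<host>/<port> performs a TCP connect; when
# the connection is refused, the redirection fails and the subshell
# exits non-zero. 19999 is a hypothetical closed port for illustration.
if (exec 3<>/dev/tcp/127.0.0.1/19999) 2>/dev/null; then
  echo "port open"
else
  echo "connection refused (or otherwise unreachable)"
fi
```

This is the same condition the Hadoop client reports: the host answered, but actively rejected the connection because nothing was bound to the port.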

Unless there is a configuration error at either end, a common cause is simply that the Hadoop service isn't running.

This stack trace is very common while a cluster is being shut down, because at that point Hadoop services are being torn down across the cluster, and their disappearance is visible to any services and applications that haven't yet been shut down themselves. Seeing this error message during cluster shutdown is nothing to worry about.

If the application or cluster is not working, and this message appears in the log, then it is more serious.

  1. Check that the hostname the client is using is correct.
  2. Check that the IP address the client resolves for that hostname is correct.
  3. Check that there isn't an entry mapping your hostname to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
  4. Check that the port the client is using matches the one the server is offering its service on.
  5. On the server, try a telnet localhost <port> to see if the port is open there.
  6. On the client, try a telnet <server> <port> to see if the port is accessible remotely.
  7. Try connecting to the server/port from a different machine, to see if it is just the single client misbehaving.
  8. If you are using a Hadoop-based product from a third party, including those from Cloudera, Hortonworks, Intel, EMC and others, please use the support channels provided by the vendor.
  9. Please do not file bug reports related to your problem, as they will be closed as Invalid.
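The first six checks above can be sketched as shell commands. HOST and PORT are hypothetical placeholders; take the real values from fs.defaultFS in core-site.xml (e.g. hdfs://master:9000), and note that ss and nc availability varies by distribution:

```shell
# Hypothetical values -- substitute the host/port from fs.defaultFS.
HOST=master
PORT=9000

# Steps 1-3: what the client resolves the hostname to, and whether
# /etc/hosts wrongly pins it to a loopback address.
getent hosts "$HOST"
grep -nE '127\.0\.(0|1)\.1' /etc/hosts

# Step 5 (run on the server): is anything actually listening on the port?
ss -ltn | grep ":$PORT"

# Steps 4 and 6 (run from the client): is the port reachable remotely?
# nc -z probes the port without sending data; telnet works too.
nc -vz "$HOST" "$PORT"
```

If getent shows a loopback address for the master's hostname, fix /etc/hosts before touching anything else; daemons that bind to 127.0.0.1 are unreachable from every other node.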


None of these are Hadoop problems; they are host, network and firewall configuration issues. As it is your cluster, only you can track down the problem.





Be sure to disable the firewall on the master node.
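The exact commands are distribution-specific; a hedged sketch, assuming firewalld on CentOS/RHEL 7+, ufw on Ubuntu, or a SysV-style iptables service on older systems (all require root):

```shell
# Distribution-specific; adjust for your system. On anything but a test
# cluster, prefer opening only the Hadoop ports over disabling the
# firewall outright.

# CentOS/RHEL 7+ (firewalld):
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Ubuntu (ufw):
sudo ufw disable

# Older SysV-style systems (iptables service):
sudo service iptables stop
```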
