Hadoop conf errors

1.

start-all.sh

> ./sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/bin/hadoop-2.2.0/logs/hadoop-usergrid-namenode-sornora-localdomain.out
localhost: starting datanode, logging to /usr/bin/hadoop-2.2.0/logs/hadoop-usergrid-datanode-sornora-localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/bin/hadoop-2.2.0/logs/hadoop-usergrid-secondarynamenode-sornora-localdomain.out
starting yarn daemons
starting resourcemanager, logging to /usr/bin/hadoop-2.2.0/logs/yarn-usergrid-resourcemanager-sornora-localdomain.out
localhost: starting nodemanager, logging to /usr/bin/hadoop-2.2.0/logs/yarn-usergrid-nodemanager-sornora-localdomain.out


Cluster Summary

Security is OFF
1 files and directories, 0 blocks = 1 total.
Heap Memory used 65.70 MB is 57% of Commited Heap Memory 115.25 MB. Max Heap Memory is 889 MB.
Non Heap Memory used 15.70 MB is 58% of Commited Non Heap Memory 26.75 MB. Max Non Heap Memory is 112 MB.
Configured Capacity: 0 B
DFS Used: 0 B
Non DFS Used: 0 B
DFS Remaining: 0 B
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Block Pool Used: 0 B
Block Pool Used%: 100.00%
DataNodes usages: Min 0.00%, Median 0.00%, Max 0.00%, stdev 0.00%
Live Nodes: 0 (Decommissioned: 0)
Dead Nodes: 0 (Decommissioned: 0)
Decommissioning Nodes: 0
Number of Under-Replicated Blocks: 0

There are no datanodes in the cluster.

http://blog.csdn.net/greensurfer/article/details/7618660


HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

    Can't browse the DFS since there are no live nodes available to redirect to.

Caused by:

java.io.IOException: Can't browse the DFS since there are no live nodes available to redirect to.
    at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:646)
    at org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
    at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1081)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
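
Both symptoms above (a configured capacity of 0 B and the HTTP 500 when browsing the filesystem) mean the same thing: no datanode has registered with the namenode. A few commands that usually narrow this down (a sketch; the paths follow the log output above):

# list the running daemons; if no DataNode process appears, it exited right after start
$ jps

# ask the namenode how many datanodes it can see
$ ./bin/hdfs dfsadmin -report

# the actual reason is normally in the datanode log files under the logs directory
# shown above, e.g. an ID mismatch after re-formatting the namenode
$ ls /usr/bin/hadoop-2.2.0/logs/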

2.

Fix for "is not in the sudoers file" (reposted)

Solution:
First, switch to the root user:
$su -
(Note the trailing "-": this is different from plain "su". "su" only switches to root while keeping the current user's environment variables; "su -" loads root's environment as well, just as if you had logged in as root.)

Then:
$visudo     // remember: there is no space between "vi" and "sudo"

1. Move the cursor to the last line.
2. Press a to enter append mode.
3. Enter the following line:
your_user_name ALL=(ALL)  ALL
4. Press Esc.
5. Type ":w" to save the file.
6. Type ":q" to quit.

This adds a sudoers entry for your user, so you can now use the sudo command.
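
A quick way to verify the new entry, back in the normal (non-root) shell (a minimal sketch):

$ sudo -l          # should now list "(ALL) ALL" for your user
$ sudo whoami      # should print: root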


3.

Hadoop startup error: JAVA_HOME is not set and could not be found

Starting namenodes on []

localhost: Error: JAVA_HOME is not set and could not be found.

localhost: Error: JAVA_HOME is not set and could not be found.

...

starting yarn daemons
starting resourcemanager, logging to /home/lihanhui/open-source/hadoop-2.1.0-beta/logs/yarn-admin-resourcemanager-localhost.out
localhost: Error: JAVA_HOME is not set and could not be found.


Running export JAVA_HOME=/PATH/TO/JDK directly on the command line does not fix the problem.

Grepping for the bold text in the error message shows that it comes from libexec/hadoop-config.sh, so set export JAVA_HOME=/PATH/TO/JDK directly in that file, right before the corresponding check.

The error disappears.


Hadoop version: hadoop-2.1.0-beta
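
A sketch of the change described above (replace /PATH/TO/JDK with the real JDK directory; on Hadoop 2.x the same export is usually put in etc/hadoop/hadoop-env.sh, which the start scripts source):

# find the check that prints the error
$ grep -n "JAVA_HOME is not set" libexec/hadoop-config.sh

# then, just above that check (or in etc/hadoop/hadoop-env.sh), add:
export JAVA_HOME=/PATH/TO/JDK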


4.

What do the masters and slaves files under Hadoop's conf directory do?

http://blog.163.com/ly_89/blog/static/186902299201242151129443/

Judging by the name alone, "masters" sounds like it should record the IP address or hostname of the namenode. The name is actually somewhat misleading: the file lists the machines that run the secondary namenode. The slaves file lists all machines that run a datanode and a tasktracker. You can also change the HADOOP_SLAVES setting in hadoop-env.sh to keep the slaves file somewhere else. Neither file needs to be distributed to the worker nodes, because only the control scripts running on the namenode or jobtracker read them; distributing them does no harm, though.
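
For illustration, a sketch of what the two files might contain on a small cluster (the hostnames master1, worker1, worker2 are made-up examples):

# conf/masters : machines that will run the secondary namenode
master1

# conf/slaves : machines that will run a datanode and a tasktracker
worker1
worker2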


5.

http://blog.csdn.net/meng_tianshi/article/details/6784962

http://www.infoq.com/cn/articles/hadoop-config-tip

http://blog.csdn.net/guxch/article/details/7786831

http://wiki.apache.org/hadoop/HowToConfigure

http://gbif.blogspot.com/2011/01/setting-up-hadoop-cluster-part-1-manual.html


6.


Let me add a bit more to kkrugler's answer:

There are three HDFS properties whose values contain hadoop.tmp.dir:

  1. dfs.name.dir: the directory where the namenode stores its metadata; default value ${hadoop.tmp.dir}/dfs/name.
  2. dfs.data.dir: the directory where HDFS data blocks are stored; default value ${hadoop.tmp.dir}/dfs/data.
  3. fs.checkpoint.dir: the directory where the secondary namenode stores its checkpoints; default value ${hadoop.tmp.dir}/dfs/namesecondary.

This is why you saw the /mnt/hadoop-tmp/hadoop-${user.name} directory after formatting the namenode.
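
To see where these directories actually resolve on a given installation, the active configuration can be queried from the command line. A minimal sketch using the property names above (the older names, which Hadoop 2.x still accepts as deprecated aliases); setting them to explicit paths in hdfs-site.xml removes the dependency on hadoop.tmp.dir:

$ bin/hdfs getconf -confKey hadoop.tmp.dir
$ bin/hdfs getconf -confKey dfs.name.dir
$ bin/hdfs getconf -confKey dfs.data.dir
$ bin/hdfs getconf -confKey fs.checkpoint.dir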