ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException

2013-06-24 11:39:32,383 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:zqgame cause:java.io.IOException: File /data/zqhadoop/data/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2013-06-24 11:39:32,384 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000, call addBlock(/data/zqhadoop/data/mapred/system/jobtracker.info, DFSClient_NONMAPREDUCE_-344066732_1, null) from 192.168.216.133:59866: error: java.io.IOException: File /data/zqhadoop/data/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /data/zqhadoop/data/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)



Here Hadoop is looking for an available DataNode to place the block on, but cannot find a single one.
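On Hadoop 1.x you can confirm this with `hadoop dfsadmin -report`, whose summary includes a "Datanodes available" line. A minimal sketch that parses that line from a sample report (the sample text below is illustrative, not output from the original cluster):

```shell
# Sample of the summary section that `hadoop dfsadmin -report` prints
# on Hadoop 1.x (illustrative values matching the failure scenario).
report='Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
Datanodes available: 0 (0 total, 0 dead)'

# Extract the number of live DataNodes. A value of 0 matches the
# "could only be replicated to 0 nodes" error in the log above.
live=$(printf '%s\n' "$report" | awk -F'[: (]+' '/Datanodes available/ {print $3}')
echo "live datanodes: $live"
```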

The problem lies in /etc/hosts and in $HADOOP_HOME/conf/mapred-site.xml and core-site.xml.

Solution:

1. Edit $HADOOP_HOME/conf/mapred-site.xml and core-site.xml, replacing the hostname with the IP address.

core-site.xml

zqgame@master:~/hadoop-1.2.0/bin$ more ../conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://192.168.216.133:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/data/zqhadoop/data</value>
        </property>
</configuration>
mapred-site.xml
zqgame@master:~/hadoop-1.2.0/bin$ more ../conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>192.168.216.133:9001</value>
        </property>
</configuration>
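After changing these two files, the daemons have to be restarted before the new addresses take effect. On a Hadoop 1.x installation like the one above, that is typically done with the bundled control scripts (paths assume the same `~/hadoop-1.2.0` layout as the listings here; run as the Hadoop user):

```shell
# Hadoop 1.x control scripts live in $HADOOP_HOME/bin.
# Restart all daemons so the new fs.default.name and
# mapred.job.tracker addresses are picked up.
cd ~/hadoop-1.2.0/bin
./stop-all.sh
./start-all.sh
```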

2. Edit /etc/hosts and add a binding for the machine's own IP.

zqgame@master:~/hadoop-1.2.0/bin$ more /etc/hosts
127.0.0.1       localhost
127.0.1.1       master
192.168.216.133 localhost.localdomain localhost
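A related pitfall on Debian/Ubuntu systems is the default `127.0.1.1 master` line: if the NameNode resolves its own hostname to that loopback address and binds to it, remote DataNodes cannot reach it. A small self-contained check, run here against a copy of the example hosts file above (in practice you would run the `awk` line against /etc/hosts directly):

```shell
# Write the example hosts file to a temp location so the check is
# self-contained; the contents mirror the listing above.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
127.0.0.1       localhost
127.0.1.1       master
192.168.216.133 localhost.localdomain localhost
EOF

# Flag real hostnames mapped to a loopback address. A NameNode that
# binds to 127.0.1.1 is unreachable from other machines.
flag=$(awk '$1 ~ /^127\./ && $2 != "localhost" {print $0}' "$hosts")
echo "loopback mapping for a real host: $flag"
rm -f "$hosts"
```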

3. Disable the firewall.
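On the CentOS-style systems common in the Hadoop 1.x era the firewall is usually iptables; the commands below are an illustrative sketch (they need root, and the exact service name depends on the distribution). Rather than disabling the firewall entirely, you can also open just the ports the configs above use:

```shell
# Stop the iptables firewall now and keep it off across reboots
# (CentOS/RHEL-style service management; adjust for your distro).
sudo service iptables stop
sudo chkconfig iptables off

# Alternatively, open only the Hadoop ports used above:
#   sudo iptables -A INPUT -p tcp --dport 9000 -j ACCEPT  # fs.default.name
#   sudo iptables -A INPUT -p tcp --dport 9001 -j ACCEPT  # mapred.job.tracker
```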


PS: my own hosts file is:


127.0.0.1 localhost
192.168.20.114 master
192.168.20.84 slave1
192.168.20.85 slave2





Please credit the source when reposting: http://blog.csdn.net/weijonathan/article/details/9162619