Hadoop 2.7.1: directory paths configured in the three .xml files
Source: Internet · Editor: 程序博客网 · Time: 2024/05/23 13:52
I originally had Hadoop 1.0.1 set up and working, and now I want to replace it with 2.7.1. After starting it, only the Jps process shows up; none of the other daemons come up. Posts online say this is a write-permission problem when creating this directory:

16/03/01 21:21:00 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /home/hadoop/leen/hadoop/tmp/dfs/name/current

But I am using root privileges! Could someone point me in the right direction? My configuration files follow the blog post linked below, with a few changes of my own. The full format log:
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_79
************************************************************/
16/03/01 21:20:54 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/03/01 21:20:54 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-2e0fcf64-17b7-46da-88ce-4e4c0abc9029
16/03/01 21:20:59 INFO namenode.FSNamesystem: No KeyProvider found.
16/03/01 21:20:59 INFO namenode.FSNamesystem: fsLock is fair:true
16/03/01 21:20:59 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/03/01 21:20:59 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/03/01 21:20:59 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/03/01 21:20:59 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Mar 01 21:20:59
16/03/01 21:20:59 INFO util.GSet: Computing capacity for map BlocksMap
16/03/01 21:20:59 INFO util.GSet: VM type       = 64-bit
16/03/01 21:20:59 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/03/01 21:20:59 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/03/01 21:20:59 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/03/01 21:20:59 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/03/01 21:20:59 INFO blockmanagement.BlockManager: maxReplication             = 512
16/03/01 21:20:59 INFO blockmanagement.BlockManager: minReplication             = 1
16/03/01 21:20:59 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/03/01 21:20:59 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/03/01 21:20:59 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/03/01 21:20:59 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/03/01 21:20:59 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/03/01 21:20:59 INFO namenode.FSNamesystem: fsOwner             = leen (auth:SIMPLE)
16/03/01 21:20:59 INFO namenode.FSNamesystem: supergroup          = supergroup
16/03/01 21:20:59 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/03/01 21:20:59 INFO namenode.FSNamesystem: HA Enabled: false
16/03/01 21:20:59 INFO namenode.FSNamesystem: Append Enabled: true
16/03/01 21:21:00 INFO util.GSet: Computing capacity for map INodeMap
16/03/01 21:21:00 INFO util.GSet: VM type       = 64-bit
16/03/01 21:21:00 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/03/01 21:21:00 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/03/01 21:21:00 INFO namenode.FSDirectory: ACLs enabled? false
16/03/01 21:21:00 INFO namenode.FSDirectory: XAttrs enabled? true
16/03/01 21:21:00 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/03/01 21:21:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/03/01 21:21:00 INFO util.GSet: Computing capacity for map cachedBlocks
16/03/01 21:21:00 INFO util.GSet: VM type       = 64-bit
16/03/01 21:21:00 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/03/01 21:21:00 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/03/01 21:21:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/03/01 21:21:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/03/01 21:21:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/03/01 21:21:00 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/03/01 21:21:00 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/03/01 21:21:00 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/03/01 21:21:00 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/03/01 21:21:00 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/03/01 21:21:00 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/03/01 21:21:00 INFO util.GSet: VM type       = 64-bit
16/03/01 21:21:00 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/03/01 21:21:00 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/03/01 21:21:00 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1510641234-127.0.1.1-1456838460195
16/03/01 21:21:00 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /home/hadoop/leen/hadoop/tmp/dfs/name/current
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/03/01 21:21:00 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /home/hadoop/leen/hadoop/tmp/dfs/name/current
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/03/01 21:21:00 INFO util.ExitUtil: Exiting with status 1
16/03/01 21:21:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
leen@ubuntu:/usr/share/hadoop-2.7.1$ sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/share/hadoop-2.7.1/logs/hadoop-leen-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/share/hadoop-2.7.1/logs/hadoop-leen-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/share/hadoop-2.7.1/logs/hadoop-leen-secondarynamenode-ubuntu.out
leen@ubuntu:/usr/share/hadoop-2.7.1$ jps
9155 Jps
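One detail worth noticing in the log above: fsOwner = leen, so the format command actually ran as user leen, not as root. The usual cause of this error is that some directory above the configured name dir is owned by a different user (for example, a leftover from an earlier format run as root). A sketch of the check, using a scratch directory as a stand-in for the real path in the log:

```shell
# Sketch: the user running `hdfs namenode -format` must be able to create
# tmp/dfs/name/current under hadoop.tmp.dir. A scratch dir is used here so
# the snippet is runnable anywhere; substitute the real path from the log
# (/home/hadoop/leen/hadoop/tmp) on an actual cluster.
TMP_BASE="$(mktemp -d)"        # stand-in for hadoop.tmp.dir
NAME_DIR="$TMP_BASE/dfs/name"
mkdir -p "$NAME_DIR"           # pre-create it as the hadoop user
ls -ld "$NAME_DIR"             # owner shown here must match fsOwner (leen)
test -w "$NAME_DIR" && echo "name dir is writable"
# On the real machine the fix would be along these lines (run as root):
#   chown -R leen:leen /home/hadoop/leen/hadoop/tmp
# and then re-run `bin/hdfs namenode -format` as user leen, not as root.
```

Formatting as root and then starting the daemons as a normal user produces exactly this symptom, since the format leaves root-owned directories that the daemons cannot write.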
My configuration follows this online guide:
http://zhidao.baidu.com/link?url=a5o-u1MuyMW6HoCTjH5YoFcQbmFNZIHwl-VBFuUZELx3IeSkLbZML-VguNmQMgF_zuQjy4mPFLpBlBAJukkL-HGDyVkD9JqBM0S_Ml7T9Nm
Specifically:
1. conf/core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

2. conf/hadoop-env.sh:
export JAVA_HOME=/home/hadoop/jdk1.x.x_xx

3. conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.back.http.address</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/hadoopcheckpoint</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

4. conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1000m</value>
  </property>
</configuration>
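An observation (not a confirmed diagnosis of the permission error): these are Hadoop 1.x property names and paths. In Hadoop 2.x the files live under etc/hadoop/ rather than conf/, fs.default.name is deprecated in favor of fs.defaultFS, dfs.name.dir and dfs.data.dir became dfs.namenode.name.dir and dfs.datanode.data.dir, and mapred.job.tracker has no effect because MapReduce 2 runs on YARN. The deprecated names still work in 2.7.1 (with warnings), so this alone would not explain the failure, but a minimal 2.7.1-style sketch, reusing the paths from the config above, would be:

```xml
<!-- etc/hadoop/core-site.xml (Hadoop 2.x property names; paths illustrative) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```

Whichever user owns /home/hadoop/hadooptmp and /home/hadoop/hadoopfs must be the user that runs the format and the daemons.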