Linux Hadoop 2.7.3 Installation Notes


Environment: VMware Workstation 10, CentOS-7-x86_64-DVD-1511.iso, Xshell 4.0. Master: 192.168.216.144, Slave1: 192.168.216.145, Slave2: 192.168.216.146.

Follow the Linux Hadoop 1.2.1 installation notes up through setting up passwordless SSH login.


Official installation guide

[hadoop@localhost ~]$ wget http://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz

[hadoop@localhost ~]$ sudo tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local/

[hadoop@localhost ~]$ sudo mv /usr/local/hadoop-2.7.3/ /usr/local/hadoop

[hadoop@localhost ~]$ ll /usr/local/ | grep hadoop

drwxr-xr-x. 9 root root 4096 Aug 18 2016 hadoop

[hadoop@localhost ~]$ sudo chown -R hadoop:hadoop /usr/local/hadoop/

[hadoop@localhost ~]$ ll /usr/local/ | grep hadoop

drwxr-xr-x. 9 hadoop hadoop 4096 Aug 18 2016 hadoop

[hadoop@localhost ~]$ sudo vim /etc/profile

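The screenshot here showed the environment variables appended to /etc/profile. A minimal sketch, assuming Hadoop was unpacked to /usr/local/hadoop as above (the variable names are the conventional ones, not taken from the screenshot):

```shell
# Appended to /etc/profile (sketch; assumes the /usr/local/hadoop layout above)
export HADOOP_HOME=/usr/local/hadoop
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```

Running `source /etc/profile` afterwards makes the `hadoop` command resolve in the current shell.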

[hadoop@localhost ~]$ hadoop

-bash: hadoop: command not found

[hadoop@localhost ~]$ source /etc/profile

[hadoop@localhost ~]$ hadoop

Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]

Configuration files

[hadoop@localhost ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh

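The screenshot showed the JAVA_HOME edit in hadoop-env.sh. A sketch with an assumed JDK path (the format log below reports java = 1.8.0_131; find your actual path with `readlink -f $(which java)`):

```shell
# hadoop-env.sh: replace "export JAVA_HOME=${JAVA_HOME}" with an explicit path.
# The path below is an assumption for a manually installed JDK 1.8.0_131.
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
```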

[hadoop@localhost ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml

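The screenshot showed core-site.xml. A sketch of the usual minimal contents for this cluster; only the Master IP comes from this post, while port 9000 and the tmp directory are assumptions:

```xml
<configuration>
  <!-- NameNode RPC address; 192.168.216.144 is the Master, port 9000 is an assumed choice -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.216.144:9000</value>
  </property>
  <!-- base directory for temporary files (assumed location) -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
```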

[hadoop@localhost ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml

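The screenshot showed hdfs-site.xml. A sketch reconstructed from the format log below, which reports the name directory /usr/local/hadoop/name and defaultReplication = 2; the data directory is an assumption. Writing the paths as file:// URIs avoids the "Path ... should be specified as a URI" warnings seen during formatting:

```xml
<configuration>
  <!-- NameNode metadata directory; /usr/local/hadoop/name appears in the format log -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/name</value>
  </property>
  <!-- DataNode block directory (assumed location) -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/data</value>
  </property>
  <!-- matches "defaultReplication = 2" in the format log; one copy per slave -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```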

If you do not want to start YARN, be sure to rename mapred-site.xml back to mapred-site.xml.template, and restore it when you need YARN again. If the file exists but YARN is not running, jobs will fail with repeated "Retrying connect to server: 0.0.0.0/0.0.0.0:8032" errors. This is also why the file ships under the initial name mapred-site.xml.template.

[hadoop@localhost ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

[hadoop@localhost ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml

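The screenshot showed mapred-site.xml. Its standard minimal content tells MapReduce to submit jobs to YARN, which is exactly why its presence without a running ResourceManager causes the 8032 connection retries mentioned above:

```xml
<configuration>
  <!-- run MapReduce jobs on YARN instead of the local runner -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```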

[hadoop@localhost ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml

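The screenshot showed yarn-site.xml. A sketch with the usual minimal properties: the ResourceManager hostname is this cluster's Master IP, and the aux-services entry is the standard value MapReduce needs for its shuffle phase:

```xml
<configuration>
  <!-- where NodeManagers and clients find the ResourceManager (the Master) -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.216.144</value>
  </property>
  <!-- enable the shuffle service required by MapReduce -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```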

[hadoop@localhost ~]$ vim /usr/local/hadoop/etc/hadoop/slaves

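The screenshot showed the slaves file. Its contents can be inferred from the start-dfs.sh output below, which starts DataNodes on exactly these two hosts:

```
192.168.216.145
192.168.216.146
```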

Disable the firewall


[hadoop@localhost ~]$ firewall-cmd --state

running

[hadoop@localhost ~]$ sudo systemctl stop firewalld

[sudo] password for hadoop:

[hadoop@localhost ~]$ firewall-cmd --state

not running

[hadoop@localhost ~]$ hdfs namenode -format

17/06/22 13:26:17 INFO namenode.NameNode: STARTUP_MSG:
/**************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.3
......
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_131
**************************************************/
17/06/22 13:26:17 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/06/22 13:26:17 INFO namenode.NameNode: createNameNode [-format]
17/06/22 13:26:18 WARN common.Util: Path /usr/local/hadoop/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/06/22 13:26:18 WARN common.Util: Path /usr/local/hadoop/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-d6a65f14-6165-4fa1-bcd0-7e01903b2c87
17/06/22 13:26:18 INFO namenode.FSNamesystem: No KeyProvider found.
17/06/22 13:26:18 INFO namenode.FSNamesystem: fsLock is fair:true
17/06/22 13:26:18 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/06/22 13:26:18 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/06/22 13:26:18 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/06/22 13:26:18 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jun 22 13:26:18
17/06/22 13:26:18 INFO util.GSet: Computing capacity for map BlocksMap
17/06/22 13:26:18 INFO util.GSet: VM type = 64-bit
17/06/22 13:26:18 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/06/22 13:26:18 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/06/22 13:26:18 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/06/22 13:26:18 INFO blockmanagement.BlockManager: defaultReplication = 2
17/06/22 13:26:18 INFO blockmanagement.BlockManager: maxReplication = 512
17/06/22 13:26:18 INFO blockmanagement.BlockManager: minReplication = 1
17/06/22 13:26:18 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/06/22 13:26:18 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/06/22 13:26:18 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/06/22 13:26:18 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/06/22 13:26:18 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
17/06/22 13:26:18 INFO namenode.FSNamesystem: supergroup = supergroup
17/06/22 13:26:18 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/06/22 13:26:18 INFO namenode.FSNamesystem: HA Enabled: false
17/06/22 13:26:18 INFO namenode.FSNamesystem: Append Enabled: true
17/06/22 13:26:18 INFO util.GSet: Computing capacity for map INodeMap
17/06/22 13:26:18 INFO util.GSet: VM type = 64-bit
17/06/22 13:26:18 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/06/22 13:26:18 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/06/22 13:26:18 INFO namenode.FSDirectory: ACLs enabled? false
17/06/22 13:26:18 INFO namenode.FSDirectory: XAttrs enabled? true
17/06/22 13:26:18 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/06/22 13:26:18 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/06/22 13:26:19 INFO util.GSet: Computing capacity for map cachedBlocks
17/06/22 13:26:19 INFO util.GSet: VM type = 64-bit
17/06/22 13:26:19 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/06/22 13:26:19 INFO util.GSet: capacity = 2^18 = 262144 entries
17/06/22 13:26:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/06/22 13:26:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/06/22 13:26:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/06/22 13:26:19 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/06/22 13:26:19 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/06/22 13:26:19 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/06/22 13:26:19 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/06/22 13:26:19 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/06/22 13:26:19 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/06/22 13:26:19 INFO util.GSet: VM type = 64-bit
17/06/22 13:26:19 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/06/22 13:26:19 INFO util.GSet: capacity = 2^15 = 32768 entries
17/06/22 13:26:19 INFO namenode.FSImage: Allocated new BlockPoolId: BP-806774403-127.0.0.1-1498109179100
17/06/22 13:26:19 INFO common.Storage: Storage directory /usr/local/hadoop/name has been successfully formatted.
17/06/22 13:26:19 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/06/22 13:26:19 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
17/06/22 13:26:19 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/06/22 13:26:19 INFO util.ExitUtil: Exiting with status 0
17/06/22 13:26:19 INFO namenode.NameNode: SHUTDOWN_MSG:
/**************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
**************************************************/

Master

[hadoop@localhost ~]$ start-dfs.sh

Starting namenodes on [192.168.216.144]
192.168.216.144: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-localhost.out
192.168.216.146: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-localhost.out
192.168.216.145: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-localhost.out
Starting secondary namenodes [192.168.216.144]
192.168.216.144: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.out

[hadoop@localhost ~]$ jps

15009 SecondaryNameNode
14818 NameNode
15118 Jps

Slave1 and Slave2

[hadoop@localhost ~]$ jps

12660 DataNode
12732 Jps

Master

[hadoop@localhost ~]$ start-yarn.sh

starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-localhost.out
192.168.216.145: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-localhost.out
192.168.216.146: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-localhost.out

[hadoop@localhost ~]$ jps

15009 SecondaryNameNode
14818 NameNode
15173 ResourceManager
15430 Jps

Slave1 and Slave2

[hadoop@localhost ~]$ jps

12660 DataNode
12773 NodeManager
12869 Jps

Master

[hadoop@localhost ~]$ mr-jobhistory-daemon.sh start historyserver

starting historyserver, logging to /usr/local/hadoop/logs/mapred-hadoop-historyserver-localhost.out

[hadoop@localhost ~]$ jps

15009 SecondaryNameNode
14818 NameNode
15507 Jps
15173 ResourceManager
15470 JobHistoryServer


Note: this does not appear to have succeeded. The report below shows 0 configured capacity, which means no DataNodes have registered with the NameNode.

[hadoop@localhost ~]$ hdfs dfsadmin -report

Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
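Since the report shows no registered DataNodes, a few diagnostics worth running (paths follow this post's layout; the root cause is not identified in the post, and common culprits are firewalls still running on the slaves or hostname/fs.defaultFS resolution problems):

```
# On each slave: is the firewall still up? The post only stopped it on the Master.
firewall-cmd --state
# On each slave: what does the DataNode log say about reaching the NameNode?
tail -n 50 /usr/local/hadoop/logs/hadoop-hadoop-datanode-*.log
# On the Master: re-check registration after fixing connectivity
hdfs dfsadmin -report
```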

NameNode web UI: http://192.168.216.144:50070

ResourceManager web UI: http://192.168.216.144:8088
