Pseudo-Distributed Installation of Hadoop 1.2.0 on macOS


1: Download the JDK

Download the latest version from: http://www.oracle.com/technetwork/java/javase/downloads/index.html

After installation, open a terminal and run java -version. Output similar to the following indicates a successful install:

java version "1.8.0"

Java(TM) SE Runtime Environment (build 1.8.0-b132)

Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)

2: Configure Hadoop

Download Hadoop from the official site. Pick a stable release rather than the newest one, since the newest may be unstable.

You will configure four files in Hadoop's conf folder: hadoop-env.sh, core-site.xml, mapred-site.xml, and hdfs-site.xml.

After downloading Hadoop, extract it to a directory of your choice, then change into its conf directory.

1. Configure hadoop-env.sh

Open the file and find:

#export JAVA_HOME=
#export HADOOP_HEAPSIZE=2000
#export HADOOP_OPTS=-server

Change it to:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home
export HADOOP_HEAPSIZE=2000
export HADOOP_OPTS=-server

That is, remove the leading comment marker # from each line (and set JAVA_HOME to your JDK path).

Note:

The JAVA_HOME value should be a directory like the one above. Some articles online tell you to use the path printed by whereis java in the terminal; that is wrong,

because on a Mac the JDK is installed under the Library folder at the root of the filesystem.
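
If you are unsure of the exact path, macOS ships a helper, /usr/libexec/java_home, that prints the current JDK's home directory, so you can avoid hard-coding the version. A sketch of the hadoop-env.sh fragment using it (the hard-coded path above works just as well):

```shell
# hadoop-env.sh fragment: resolve the JDK home dynamically instead of
# hard-coding a version-specific path. /usr/libexec/java_home is a
# standard macOS utility that prints the default JDK's home directory.
export JAVA_HOME=$(/usr/libexec/java_home)
export HADOOP_HEAPSIZE=2000
export HADOOP_OPTS=-server
```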

2. Configure core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

3. Configure mapred-site.xml

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

4. Configure hdfs-site.xml

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

3: Configure passwordless SSH login

A Mac already has SSH installed. In the terminal, run ssh-keygen -t rsa, and simply press Enter at every prompt (including the passphrase prompts). Success looks like this:

Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/jia/.ssh/id_rsa.
Your public key has been saved in /Users/jia/.ssh/id_rsa.pub.
The key fingerprint is:
d4:85:aa:83:ae:db:50:48:0c:5b:dd:80:bb:fa:26:a7 jia@JIAS-MacBook-Pro.local
The key's randomart image is:
+--[ RSA 2048]----+
|. .o.o     ..    |
| =. . .  ...     |
|. o.    ...      |
| ...   ..        |
|  .... .S        |
|  ... o          |
| ...   .         |
|o oo.            |
|E*+o.            |
+-----------------+

In the terminal, run cd .ssh to enter the .ssh directory, then run:

cp id_rsa.pub authorized_keys

That completes the passwordless SSH setup.
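
The key-generation and key-copy steps above can also be done non-interactively. The sketch below assumes the default key location, and appends to authorized_keys rather than overwriting it, which is equivalent here but safer if the file already has entries:

```shell
# Generate an RSA key pair with an empty passphrase (skipped if a key
# already exists), then authorize the public key for logins to this machine.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Verify with ssh localhost: it should log you in without asking for a password. On newer macOS versions you may also need to enable Remote Login under System Preferences > Sharing.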

4: Start Hadoop

1. Go into the Hadoop directory and format the NameNode with:

bin/hadoop namenode -format

Output like the following indicates success:

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = JIAS-MacBook-Pro.local/192.168.1.3
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /tmp/hadoop-jia/dfs/name ? (Y or N) Y
14/07/14 13:55:17 INFO namenode.FSNamesystem: fsOwner=jia,staff,everyone,localaccounts,_appserverusr,admin,_appserveradm,_lpadmin,com.apple.sharepoint.group.1,_appstore,_lpoperator,_developer,com.apple.access_screensharing,com.apple.access_ssh
14/07/14 13:55:17 INFO namenode.FSNamesystem: supergroup=supergroup
14/07/14 13:55:17 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/07/14 13:55:17 INFO common.Storage: Image file of size 93 saved in 0 seconds.
14/07/14 13:55:17 INFO common.Storage: Storage directory /tmp/hadoop-jia/dfs/name has been successfully formatted.
14/07/14 13:55:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at JIAS-MacBook-Pro.local/192.168.1.3
************************************************************/

 

2. Start the Hadoop daemons:

bin/start-all.sh

3. Stop the Hadoop daemons:

bin/stop-all.sh

4. Create an input directory in HDFS and put README.txt into it:

bin/hadoop fs -mkdir /input

bin/hadoop fs -put README.txt /input

5. Run the wordcount example:

bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /input /output
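
To see what wordcount computes without involving the cluster, the same result can be sketched locally with plain shell (no HDFS involved):

```shell
# Local equivalent of the wordcount example: split text into words,
# then count how often each word occurs.
printf 'hello world\nhello hadoop\n' > /tmp/wordcount-demo.txt
tr -s ' ' '\n' < /tmp/wordcount-demo.txt | sort | uniq -c
# hello is counted twice; hadoop and world once each
```

For the real job, read the result back from HDFS with bin/hadoop fs -cat /output/part-r-00000 (part-r-00000 is the usual name of a single reducer's output file).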


5: Web management interfaces

To check that everything is running normally, use Hadoop's built-in web interfaces for monitoring cluster health:

http://localhost:50030/jobtracker.jsp    - JobTracker (MapReduce) admin interface
http://localhost:50060/tasktracker.jsp   - TaskTracker status
http://localhost:50070/dfshealth.jsp     - HDFS (NameNode) health status

 