Hadoop + Spark fully distributed cluster setup

Source: 程序博客网 (reposted), 2024/04/29 07:10

 

 

Step 1: Configure /etc/hosts

The planned entries:

127.0.0.1 localhost

172.16.77.94 master master

172.16.77.95 slave1 slave1

172.16.77.97 slave2 slave2

172.16.77.98 slave3 slave3

sxq112004@ubuntu:~$ ssh xqshi@172.16.77.94

xqshi@172.16.77.94's password:

[xqshi@groot-nn ~]$  vi /etc/hosts

Add the following to the file:

172.16.77.94 master master

172.16.77.95 slave1 slave1

172.16.77.97 slave2 slave2

172.16.77.98 slave3 slave3

This fails with: E45: 'readonly' option is set (add ! to override)

Gave up on renaming here; the hostnames stay as-is and raw IPs are used instead.
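For reference, E45 only means vi opened the root-owned file without write permission; with sudo rights the entries can still be appended non-interactively. A minimal sketch, demonstrated on a throwaway copy (the commented line targets the real file):

```shell
# Entries to add (this cluster's addresses)
cat > hosts.append <<'EOF'
172.16.77.94 master
172.16.77.95 slave1
172.16.77.97 slave2
172.16.77.98 slave3
EOF
# Dry run against a local copy of /etc/hosts:
cp /etc/hosts hosts.demo 2>/dev/null || touch hosts.demo
cat hosts.append >> hosts.demo
grep slave hosts.demo
# The real file needs root:
# sudo sh -c 'cat hosts.append >> /etc/hosts'
```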

 

Step 2: Verify that SSH is installed

[xqshi@groot-nn ~]$ which ssh

/usr/bin/ssh

[xqshi@groot-nn ~]$ which sshd

/usr/sbin/sshd

[xqshi@groot-nn ~]$ which ssh-Keygen

/usr/bin/which: no ssh-Keygen in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/xqshi/bin)

(The lookup only fails because of the capital K; the command is ssh-keygen, all lowercase, as used below.)

 

Step 3: Passwordless SSH login

[xqshi@groot-nn ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/xqshi/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/xqshi/.ssh/id_rsa.

Your public key has been saved in /home/xqshi/.ssh/id_rsa.pub.

The key fingerprint is:

a8:e8:03:fd:31:06:c3:74:f4:c9:58:9f:90:b7:65:64 xqshi@groot-nn

The key's randomart image is:

+--[ RSA 2048]----+

|   .. o. .E      |

|  . .=.+.oo      |

| o .. +.o+       |

|  +    ..        |

| . o  . S        |

|. ..+.           |

| ..o.o           |

| .. .            |

|  ..             |

+-----------------+

[xqshi@groot-nn ~]$

[xqshi@groot-nn ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[xqshi@groot-nn ~]$ ssh localhost

xqshi@localhost's password:

Permission denied, please try again.

xqshi@localhost's password:

 

[xqshi@groot-nn .ssh]$ ll id_rsa authorized_keys

Copying these two files into ~/.ssh on the other machines is enough.

[xqshi@groot-nn ~]$ cd .ssh/

[xqshi@groot-nn .ssh]$ ll -lrt

total 16

-rw-r--r--. 1 xqshi xqshi  391 Jun 20 16:40 known_hosts

-rw-------. 1 xqshi xqshi 1679 Jun 20 17:30 id_rsa

-rw-r--r--. 1 xqshi xqshi  396 Jun 20 17:30 id_rsa.pub

-rw-rw-r--. 1 xqshi xqshi 1188 Jun 20 17:32 authorized_keys

[xqshi@groot-nn .ssh]$ cat  id_rsa.pub

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAzAr13dlMAPvol6p6rcWJT5lCa1Ufro6KMojsQWq4YFhUuDxkekmNzKTyRwuXPKaNHuqvK1ylYnwwkGQR4oNhTfIxNZzc8iQgfSqjq+WGrn5BVKklCNzbuP3fRcR+N4agCrAY1pld4yQyxFOkxwU8tkBryUDaZ1OedWMS+A8KxZSeucLiZ5fx6byyS++f8I64okzFpqlsXeAro63Rwm2peBC01xMBhwFsY5t/yIonL+m0mKLWNDhiZyR3nnbH3XStLrv4oZqhxu+zIHcDon05egtonBd6GJOgrCN78cCsD2pBzhxT1UVB/fQ26XSLymkCct4SYH+UlyusYOkhQmvmgw== xqshi@groot-nn

[xqshi@groot-nn .ssh]$ cat authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAywF3yigc07kiRbJJTP4C9bJMe+VjKs5ZYdvOgqifRo2iFLUq0ketQSZlPbgj5Jbg/0R/5qC64QWI7ItHUlDJMTVCBC7e7P5dSkh8YyNg2AM3KFyAdLd5JAFyBYUn8E5AhyVzFzuXFbnxjQG5FCw/omtFxSMwPELEpgFFG+S2MYkcnIw0RMVzfWKtl7MhHweeoTOy+DFiKTuJzXXOTK6+XZDtBhKTrDbQ9FsD9s0hhbu7Dyq3EU4x6XAMVDtHOsnTT3fdrfduqgnb24F8BL4vAt+atsGDzGSAN7PG9V2hIFQOzKbRJMFgKrAVnTTp2d9LX+63/cOq5D+q2I1ZErcmkw== xqshi@groot-nn

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAywF3yigc07kiRbJJTP4C9bJMe+VjKs5ZYdvOgqifRo2iFLUq0ketQSZlPbgj5Jbg/0R/5qC64QWI7ItHUlDJMTVCBC7e7P5dSkh8YyNg2AM3KFyAdLd5JAFyBYUn8E5AhyVzFzuXFbnxjQG5FCw/omtFxSMwPELEpgFFG+S2MYkcnIw0RMVzfWKtl7MhHweeoTOy+DFiKTuJzXXOTK6+XZDtBhKTrDbQ9FsD9s0hhbu7Dyq3EU4x6XAMVDtHOsnTT3fdrfduqgnb24F8BL4vAt+atsGDzGSAN7PG9V2hIFQOzKbRJMFgKrAVnTTp2d9LX+63/cOq5D+q2I1ZErcmkw== xqshi@groot-nn

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAzAr13dlMAPvol6p6rcWJT5lCa1Ufro6KMojsQWq4YFhUuDxkekmNzKTyRwuXPKaNHuqvK1ylYnwwkGQR4oNhTfIxNZzc8iQgfSqjq+WGrn5BVKklCNzbuP3fRcR+N4agCrAY1pld4yQyxFOkxwU8tkBryUDaZ1OedWMS+A8KxZSeucLiZ5fx6byyS++f8I64okzFpqlsXeAro63Rwm2peBC01xMBhwFsY5t/yIonL+m0mKLWNDhiZyR3nnbH3XStLrv4oZqhxu+zIHcDon05egtonBd6GJOgrCN78cCsD2pBzhxT1UVB/fQ26XSLymkCct4SYH+UlyusYOkhQmvmgw== xqshi@groot-nn

[xqshi@groot-nn .ssh]$ ssh localhost

xqshi@localhost's password:

 

[xqshi@groot-nn .ssh]$ ll

total 16

-rw-rw-r--. 1 xqshi xqshi 1188 Jun 20 17:32 authorized_keys

-rw-------. 1 xqshi xqshi 1679 Jun 20 17:30 id_rsa

-rw-r--r--. 1 xqshi xqshi  396 Jun 20 17:30 id_rsa.pub

-rw-r--r--. 1 xqshi xqshi  391 Jun 20 16:40 known_hosts

[xqshi@groot-nn .ssh]$ mv authorized_keys authorized_keys.bak

[xqshi@groot-nn .ssh]$ cp id_rsa.pub authorized_keys

[xqshi@groot-nn .ssh]$ ssh localhosrt

ssh: Could not resolve hostname localhosrt: Name or service not known

[xqshi@groot-nn .ssh]$ ssh localhost

Last login: Mon Jun 20 17:29:18 2016 from 172.18.254.232

[xqshi@groot-nn ~]$ exit

logout

Connection to localhost closed.

[xqshi@groot-nn .ssh]$ diff authorized_keys*

0a1,2

> ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAywF3yigc07kiRbJJTP4C9bJMe+VjKs5ZYdvOgqifRo2iFLUq0ketQSZlPbgj5Jbg/0R/5qC64QWI7ItHUlDJMTVCBC7e7P5dSkh8YyNg2AM3KFyAdLd5JAFyBYUn8E5AhyVzFzuXFbnxjQG5FCw/omtFxSMwPELEpgFFG+S2MYkcnIw0RMVzfWKtl7MhHweeoTOy+DFiKTuJzXXOTK6+XZDtBhKTrDbQ9FsD9s0hhbu7Dyq3EU4x6XAMVDtHOsnTT3fdrfduqgnb24F8BL4vAt+atsGDzGSAN7PG9V2hIFQOzKbRJMFgKrAVnTTp2d9LX+63/cOq5D+q2I1ZErcmkw== xqshi@groot-nn

> ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAywF3yigc07kiRbJJTP4C9bJMe+VjKs5ZYdvOgqifRo2iFLUq0ketQSZlPbgj5Jbg/0R/5qC64QWI7ItHUlDJMTVCBC7e7P5dSkh8YyNg2AM3KFyAdLd5JAFyBYUn8E5AhyVzFzuXFbnxjQG5FCw/omtFxSMwPELEpgFFG+S2MYkcnIw0RMVzfWKtl7MhHweeoTOy+DFiKTuJzXXOTK6+XZDtBhKTrDbQ9FsD9s0hhbu7Dyq3EU4x6XAMVDtHOsnTT3fdrfduqgnb24F8BL4vAt+atsGDzGSAN7PG9V2hIFQOzKbRJMFgKrAVnTTp2d9LX+63/cOq5D+q2I1ZErcmkw== xqshi@groot-nn


[xqshi@groot-nn .ssh]$ ll

total 20

-rw-r--r--. 1 xqshi xqshi  396 Jun 20 17:40 authorized_keys

-rw-rw-r--. 1 xqshi xqshi 1188 Jun 20 17:32 authorized_keys.bak

-rw-------. 1 xqshi xqshi 1679 Jun 20 17:30 id_rsa

-rw-r--r--. 1 xqshi xqshi  396 Jun 20 17:30 id_rsa.pub

-rw-r--r--. 1 xqshi xqshi  391 Jun 20 16:40 known_hosts

[xqshi@groot-nn .ssh]$ ssh localhost

Last login: Mon Jun 20 17:40:39 2016 from localhost

[xqshi@groot-nn ~]$ exit

logout

Connection to localhost closed.

[xqshi@groot-nn .ssh]$ ll

total 20

-rw-r--r--. 1 xqshi xqshi  396 Jun 20 17:40 authorized_keys

-rw-rw-r--. 1 xqshi xqshi 1188 Jun 20 17:32 authorized_keys.bak

-rw-------. 1 xqshi xqshi 1679 Jun 20 17:30 id_rsa

-rw-r--r--. 1 xqshi xqshi  396 Jun 20 17:30 id_rsa.pub

-rw-r--r--. 1 xqshi xqshi  391 Jun 20 16:40 known_hosts
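A likely explanation for the earlier password prompts: with sshd's default StrictModes setting, group-writable authorized_keys files are ignored. The appended file above was mode 664 (rw-rw-r--); the cp from id_rsa.pub produced a 644 file, which sshd accepts. A minimal sketch of tightening the permissions explicitly instead of relying on that side effect:

```shell
# Make sure the key material has modes sshd will accept
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
stat -c '%a' ~/.ssh/authorized_keys
# To install the public key on a slave in one step (prompts for the password once):
# ssh-copy-id xqshi@172.16.77.95
```

ssh-copy-id also creates the remote ~/.ssh with the right modes, which avoids this class of problem entirely.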

 

[xqshi@groot-nn .ssh]$ scp id_rsa.pub authorized_keys xqshi@172.16.77.95:/home/xqshi/.ssh/

id_rsa.pub                                                                                          100%  396     0.4KB/s   00:00    

authorized_keys                                                                                     100%  396     0.4KB/s   00:00

[xqshi@groot-nn .ssh]$ scp id_rsa  authorized_keys xqshi@172.16.77.95:/home/xqshi/.ssh/

Verify that the master can log in to the slaves without a password:

[xqshi@groot-nn hadoop]$ ssh xqshi@172.16.77.95

Last login: Tue Jun 21 09:40:23 2016 from 172.18.254.232

Passwordless login works.

 

Step 4: Install and configure the JDK

sxq112004@ubuntu:~/Downloads$ scp jdk-8u91-linux-x64.tar.gz   

xqshi@172.16.77.94's password:

jdk-8u91-linux-x64.tar.gz                     100%  173MB  10.8MB/s   00:16    

sxq112004@ubuntu:~/Downloads$

Extract on the 94 machine:

[xqshi@groot-nn ~]$ tar  -zxvf jdk-8u91-linux-x64.tar.gz

 

The JDK path:

[xqshi@groot-nn ~]$ cd ~

[xqshi@groot-nn ~]$ ls -a

.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  jdk1.8.0_91  jdk-8u91-linux-x64.tar.gz  .ssh

[xqshi@groot-nn ~]$

 

/home/xqshi/jdk1.8.0_91

Add the following to ~/.bashrc:

export JAVA_HOME=/home/xqshi/jdk1.8.0_91

export JRE_HOME=/home/xqshi/jdk1.8.0_91/jre

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH
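After appending the exports, reload the shell and confirm the JDK comes first on PATH. A quick sketch (the java -version line is commented because it only works once the JDK is actually installed at that path):

```shell
export JAVA_HOME=/home/xqshi/jdk1.8.0_91
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
# On the real machine:
# source ~/.bashrc && java -version   # should report 1.8.0_91
echo "$PATH" | cut -d: -f1
```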

 

Step 5: Install Hadoop

1. On machine 94, add the paths to .bashrc:

export HADOOP_HOME=/home/xqshi/hadoop-2.6.4

export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH

2. Edit hadoop-env.sh and add the JDK path

[xqshi@groot-nn hadoop]$ pwd

/home/xqshi/hadoop-2.6.4/etc/hadoop

[xqshi@groot-nn hadoop]$ vi hadoop-env.sh

Add: export JAVA_HOME=/home/xqshi/jdk1.8.0_91
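Instead of editing hadoop-env.sh interactively, the JAVA_HOME line can be pinned with sed. A sketch demonstrated on a local stand-in file (run the same sed against $HADOOP_HOME/etc/hadoop/hadoop-env.sh on the cluster); the stock 2.6.x file ships with the `export JAVA_HOME=${JAVA_HOME}` line this replaces:

```shell
# Stand-in for the stock file:
printf 'export JAVA_HOME=${JAVA_HOME}\n' > hadoop-env.demo.sh
# Replace it with the concrete path used on this cluster:
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/home/xqshi/jdk1.8.0_91|' hadoop-env.demo.sh
grep '^export JAVA_HOME' hadoop-env.demo.sh
```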

3. Create the tmp directory

[xqshi@groot-nn hadoop-2.6.4]$ mkdir tmp

[xqshi@groot-nn tmp]$ pwd

/home/xqshi/hadoop-2.6.4/tmp

4. Edit core-site.xml (fs.default.name is the deprecated alias of fs.defaultFS; both still work in Hadoop 2.6)

On 94:

<configuration>

       <property>

             <name>hadoop.tmp.dir</name>

             <value>/home/xqshi/hadoop-2.6.4/tmp</value>

             <description>Abase for other temporary directories.</description>

        </property>

        <property>

             <name>fs.default.name</name>

             <value>hdfs://172.16.77.94:9000</value>

        </property>

</configuration>

95 / 97 / 98: identical to the file on 94.

 

5. Edit hdfs-site.xml

On 94:

<configuration>

        <property>

             <name>dfs.replication</name>

             <value>1</value>

        </property>

        <property>

             <name>dfs.namenode.name.dir</name>

             <value>file:/home/xqshi/hadoop-2.6.4/hdfs/name1,file:/home/xqshi/hadoop-2.6.4/hdfs/name2</value>

        </property>

        <property>

             <name>dfs.datanode.data.dir</name>

             <value>file:/home/xqshi/hadoop-2.6.4/hdfs/data</value>

        </property>

</configuration>

 

 

95 / 97 / 98: identical to the file on 94.

 

6. Edit mapred-site.xml (note: mapred.job.tracker is a Hadoop 1 property and is ignored under YARN; in Hadoop 2 it is mapreduce.framework.name=yarn that routes jobs to YARN)

On 94:

<configuration>

<property>

     <name>mapred.job.tracker</name>

     <value>hdfs://172.16.77.94:9001</value>

  </property>

</configuration>

 

95 / 97: identical to the file on 94.

On 98 (the yarn.nodemanager.webapp.address entry below is a YARN property; the NodeManager reads it from yarn-site.xml, where it also appears, so it is redundant here):

<configuration>

<property>

     <name>mapred.job.tracker</name>

     <value>hdfs://172.16.77.94:9001</value>

  </property>

 

 

 

<property>

            <name>mapreduce.shuffle.port</name>

            <value>9034</value>

       </property>

 

<property>

<name>yarn.nodemanager.webapp.address</name>

<value>172.16.77.98:8045</value>

</property>

</configuration>

 

 

7. Edit the slaves file

94

172.16.77.95 

172.16.77.97 

172.16.77.98

 

8. Edit yarn-site.xml (yarn.resourcemanager.address is usually given as host:port; the ResourceManager's default port is 8032)

On 94:

<configuration>

 

<!-- Site specific YARN configuration properties -->

<property>

            <name>yarn.resourcemanager.address</name>

            <value>172.16.77.94</value>

        </property>

 

        <!-- how reducers fetch data -->

 

 <property>

            <name>yarn.nodemanager.aux-services</name>

            <value>mapreduce_shuffle</value>

        </property>

 

<property>

            <name>yarn.nodemanager.localizer.address</name>

            <value>172.16.77.94:8034</value>

        </property>

 

<property>

            <name>mapreduce.shuffle.port</name>

            <!-- a bare port number, not host:port -->

            <value>9034</value>

       </property>

</configuration>

 

 

95

<configuration>

 

<!-- Site specific YARN configuration properties -->

<property>

            <name>yarn.resourcemanager.address</name>

            <value>172.16.77.94</value>

        </property>

 

        <!-- how reducers fetch data -->

 

 <property>

            <name>yarn.nodemanager.aux-services</name>

            <value>mapreduce_shuffle</value>

        </property>

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>172.16.77.94:8031</value>

</property>

 

 

</configuration>

 

 

 

97: identical to the file on 95.

 

 

98

 

<configuration>

 

<!-- Site specific YARN configuration properties -->

<property>

            <name>yarn.resourcemanager.address</name>

            <value>172.16.77.94</value>

        </property>

 

        <!-- how reducers fetch data -->

 

 <property>

            <name>yarn.nodemanager.aux-services</name>

            <value>mapreduce_shuffle</value>

        </property>

 

<property>

            <name>yarn.nodemanager.localizer.address</name>

            <value>172.16.77.98:8034</value>

        </property>

 

 

 

 

 

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>172.16.77.94:8031</value>

</property>

 

<property>

<name>yarn.nodemanager.webapp.address</name>

<value>172.16.77.98:8045</value>

</property>

 

 

</configuration>

 

9. Copy the Hadoop installation to the other machines

scp -r hadoop-2.6.4 xqshi@172.16.77.95:/home/xqshi/
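The command above covers one slave; a small loop builds the same command for all three (echoed here rather than executed, since it needs the live hosts):

```shell
# This cluster's slave addresses
slaves="172.16.77.95 172.16.77.97 172.16.77.98"
for h in $slaves; do
  # Real command: scp -r hadoop-2.6.4 xqshi@$h:/home/xqshi/
  echo "scp -r hadoop-2.6.4 xqshi@$h:/home/xqshi/"
done | tee scp_plan.txt
```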

 

10. Run hadoop namenode -format. It prints the warning below (harmless: the bundled native library was not built for this platform, so Hadoop falls back to pure-Java implementations):

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

11. On 94, run start-all.sh

[xqshi@groot-nn hadoop]$ jps

2163 SecondaryNameNode

1976 NameNode

2619 ResourceManager

2876 Jps

On 95 / 97 / 98:

[xqshi@groot-rm ~]$ jps

4176 Jps

3969 DataNode

 

12. Run an example: hadoop jar hadoop-mapreduce-examples-2.6.4.jar pi 10 100 (the examples jar lives under $HADOOP_HOME/share/hadoop/mapreduce/)

 

13. Edit the slaves file under spark/conf/:

172.16.77.95 

172.16.77.97 

172.16.77.98

 

14. The final configuration in .bashrc:

export JAVA_HOME=/home/xqshi/jdk1.8.0_91

export JRE_HOME=/home/xqshi/jdk1.8.0_91/jre

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH

export HADOOP_HOME=/home/xqshi/hadoop-2.6.4

export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH

export SPARK_HOME=/home/xqshi/spark-1.6.1-bin-hadoop2.6/

export  PATH=${SPARK_HOME}/bin:$PATH

export SPARK_JAR=/home/xqshi/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar

export SPARK_LIBRARY_PATH=${JAVA_HOME}/lib:${JAVA_HOME}/jre/lib:${HADOOP_HOME}/lib/native

export SPARK_MASTER_IP=172.16.77.94   (Spark 1.6's standalone scripts read SPARK_MASTER_IP, conventionally set in conf/spark-env.sh; "SPARK_Master" is not a variable Spark looks at)

 

15. Copy the configured Spark package from 94 to the slave machines

scp -r spark-1.6.1-bin-hadoop2.6 xqshi@172.16.77.98:/home/xqshi/

 

16. On 94, run Spark's sbin/start-all.sh

On 94:

[xqshi@groot-nn sbin]$ jps

6705 Jps

2163 SecondaryNameNode

1976 NameNode

2619 ResourceManager

6621 Master

 

On 95 / 97 / 98:

[xqshi@groot-rm ~]$ jps

3969 DataNode

5245 Jps

5182 Worker

 

17. Run Spark against HDFS

Create a directory on HDFS:

hadoop fs -mkdir /tmp

Upload a file to HDFS:

hadoop fs -put /home/xqshi/spark-1.6.1-bin-hadoop2.6/README.md /tmp

scala> val textFile = sc.textFile("hdfs://172.16.77.94:9000/tmp/README.md")

textFile: org.apache.spark.rdd.RDD[String] = hdfs://172.16.77.94:9000/tmp/README.md MapPartitionsRDD[3] at textFile at <console>:27

 

scala> textFile.count()

res1: Long = 95

 

18. Spark on YARN reports an error

[xqshi@groot-nn spark-1.6.1-bin-hadoop2.6]$ ./bin/spark-shell --master yarn --deploy-mode client

Exception in thread "main" java.lang.Exception: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.

at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:251)

at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:228)

at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:109)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:114)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

[xqshi@groot-nn spark-1.6.1-bin-hadoop2.6]$

 

Fix: edit .bashrc and append two variables:

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

 

 

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster lib/spark-examples*.jar  10

 

 

 

19. After a successful install

http://172.16.77.94:8088/ — YARN ResourceManager UI

http://172.16.77.94:50070/ — HDFS NameNode UI

http://172.16.77.94:8080/ — Spark standalone Master UI (lists all workers)

http://172.16.77.94:4040/ — UI of the currently running Spark application (jobs, stages, executors)

 

 

20. Run the Grep example

bin/hadoop org.apache.hadoop.examples.Grep /tmp  /output 'dfs[a-z.]+'

(equivalently: bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep /tmp /output 'dfs[a-z.]+')

 

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /data/input /data/output/1

 

21. View the job output

hadoop fs -cat /output/1/part-r-00000

 

 

 

 

 

ZooKeeper installation

tar  -zxvf zookeeper-3.4.8.tar.gz
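The tarball alone is not enough to start ZooKeeper; it needs a conf/zoo.cfg. A minimal single-node sketch using ZooKeeper's conventional port and tick time (the dataDir path is an assumed location on this cluster):

```shell
mkdir -p zookeeper-3.4.8/conf
cat > zookeeper-3.4.8/conf/zoo.cfg <<'EOF'
tickTime=2000
dataDir=/home/xqshi/zookeeper-3.4.8/data
clientPort=2181
EOF
cat zookeeper-3.4.8/conf/zoo.cfg
# Then start with: zookeeper-3.4.8/bin/zkServer.sh start
```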

 

 

 

 

 

 

 

 

