Installing Hadoop
1. Set up passwordless SSH login.
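Step 1 can be sketched as follows. This is a minimal sketch, not the author's exact commands; it assumes OpenSSH is installed and uses the node1..node4 slave hostnames that appear later in this guide:

```shell
# One-time setup on the master (node5): create a key pair with no
# passphrase, then install the public key on every slave so that ssh
# and scp need no password.
KEY="$HOME/.ssh/id_rsa"
mkdir -p "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q
# On the real cluster, push the key to each slave and verify:
#   for host in node1 node2 node3 node4; do ssh-copy-id -i "$KEY.pub" "$host"; done
#   ssh node1 hostname    # should log in with no password prompt
```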
2. Set the Hadoop environment variables:

export HADOOP_HOME=/opt/op/hadoop-2.7.4
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin

3. Configure hadoop-env.sh:
export JAVA_HOME=/opt/op/jdk1.8.0_144
4. Configure core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/data/hadoop/</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node5:9000</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
    <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
5. Configure hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node5:50090</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
    <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description>
  </property>
</configuration>

6. Configure mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
7. Configure the slaves file:

node5
node1
node2
node3
node4
8. I did not configure a masters file.
9. Copy all of the master's (node5) files to each slave:
scp -r /opt/op/hadoop-2.7.4 node1:/opt/op/    (repeat for node2, node3, node4)
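The copy in step 9 has to be repeated for every slave, so a small loop keeps it in one place. This is a sketch under the /opt/op layout assumed above; the scp line is commented out so the loop is safe to dry-run:

```shell
# Push the configured Hadoop tree to every slave. -r is required
# because /opt/op/hadoop-2.7.4 is a directory; the passwordless SSH
# from step 1 makes the loop non-interactive.
SLAVES="node1 node2 node3 node4"
for host in $SLAVES; do
  echo "syncing $host"
  # Uncomment on the real cluster:
  # scp -r /opt/op/hadoop-2.7.4 "$host":/opt/op/
done
```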
10. Set the environment variables on each slave:
export HADOOP_HOME=/opt/op/hadoop-2.7.4
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin

Then run:
source /etc/profile
11. On the master (node5), run ./start-all.sh from $HADOOP_HOME/sbin. (Before the very first start, format HDFS with hdfs namenode -format.)
12. Run jps on the master and check that the following processes are present:
9236 ResourceManager
8788 NameNode
9732 Jps
8920 DataNode
9083 SecondaryNameNode
9341 NodeManager

13. Run jps on each slave and check that the following processes are present:
3190 DataNode
3432 Jps
3246 NodeManager
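The jps checks in steps 12 and 13 can be automated with a small helper. This is a sketch, not part of Hadoop: check_daemons is a hypothetical function that just greps the jps output for each expected daemon name.

```shell
# check_daemons: read `jps` output on stdin and report any daemon from
# the argument list that is not running.
check_daemons() {
  out=$(cat)
  missing=""
  for proc in "$@"; do
    # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
    echo "$out" | grep -qw "$proc" || missing="$missing $proc"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}
# On the master:
#   jps | check_daemons ResourceManager NameNode DataNode SecondaryNameNode NodeManager
# On each slave:
#   jps | check_daemons DataNode NodeManager
```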
14. Open the web UI to confirm the installation succeeded:
http://node5:50070