Installing and Using Hadoop on a Single Machine
Source: Internet | Editor: 程序博客网 | Date: 2024/06/05 15:58
Step 1: Install the operating system and create a Hadoop user

OS: RHEL 6.5

    [root@hadoop ~]# useradd hadoop
    [root@hadoop ~]# passwd hadoop

Step 2: Install Java

The system comes with Java preinstalled:

    [root@hadoop ~]# java -version
    java version "1.7.0_45"
    OpenJDK Runtime Environment (rhel-2.4.3.3.el6-x86_64 u45-b15)
    OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)

JAVA_HOME is /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64.

Step 3: Set up SSH login permissions

In Hadoop's pseudo-distributed and fully distributed modes, the NameNode must start the Hadoop daemons on every machine in the cluster, which it does over SSH, so the hadoop user needs passwordless SSH access. Configure SSH:

    su - hadoop
    mkdir ~/.ssh
    chmod 700 ~/.ssh
    /usr/bin/ssh-keygen -t rsa
    /usr/bin/ssh-keygen -t dsa

Check whether ~/.ssh/authorized_keys exists. If it already does, skip this; otherwise run:

    $ touch ~/.ssh/authorized_keys
    $ cd ~/.ssh
    $ ls

On a single machine, appending your own public keys is sufficient:

    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

For a two-node cluster (hosts rac1 and rac2 in this example), collect both nodes' keys into authorized_keys and copy the file to the other node:

    ssh rac1 cat /home/hadoop/.ssh/id_rsa.pub >> authorized_keys
    ssh rac1 cat /home/hadoop/.ssh/id_dsa.pub >> authorized_keys
    ssh rac2 cat /home/hadoop/.ssh/id_rsa.pub >> authorized_keys
    ssh rac2 cat /home/hadoop/.ssh/id_dsa.pub >> authorized_keys
    scp authorized_keys rac2:/home/hadoop/.ssh/

Step 4: Install single-machine Hadoop

Download the package hadoop-2.8.1.tar.gz and upload it to the server. Create a suitable directory and unpack the archive:

    cd /usr/local
    mkdir hadoop
    cp /usr/hadoop-2.8.1.tar.gz /usr/local/hadoop/
    cd hadoop
    tar -xzvf hadoop-2.8.1.tar.gz

Set JAVA_HOME and verify the installation:

    [hadoop@hadoop hadoop-2.8.1]$ export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre
    [hadoop@hadoop hadoop-2.8.1]$ ./bin/hadoop version
    Hadoop 2.8.1
    Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 20fe5304904fc2f5a18053c389e43cd26f7a70fe
    Compiled by vinodkv on 2017-06-02T06:14Z
    Compiled with protoc 2.5.0
    From source with checksum 60125541c2b3e266cbf3becc5bda666
    This command was run using /usr/local/hadoop/hadoop-2.8.1/share/hadoop/common/hadoop-common-2.8.1.jar

Test with the bundled grep example, which extracts and counts every string in the input files that matches the regular expression 'dfs[a-z.]+':

    mkdir input
    cp -r /usr/local/hadoop/hadoop-2.8.1/etc/hadoop/* /usr/local/hadoop/hadoop-2.8.1/input
    ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar grep input output 'dfs[a-z.]+'

(If you re-run the job, delete the output directory first with rm -r output; Hadoop refuses to write into an output directory that already exists.)

Result (excerpt):

    ...
    File System Counters
        FILE: Number of bytes read=1500730
        FILE: Number of bytes written=2509126
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=12
        Map output records=12
        Map output bytes=274
        Map output materialized bytes=304
        Input split bytes=133
        Combine input records=0
        Combine output records=0
        Reduce input groups=5
        Reduce shuffle bytes=304
        Reduce input records=12
        Reduce output records=12
        Spilled Records=24
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=34
        Total committed heap usage (bytes)=274628608
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=468
    File Output Format Counters
        Bytes Written=214

Contents of the output directory:

    [root@hadoop output]# ll
    total 4
    -rw-r--r--. 1 hadoop hadoop 202 Jul 23 14:57 part-r-00000
    -rw-r--r--. 1 hadoop hadoop   0 Jul 23 14:57 _SUCCESS
    [root@hadoop output]# vi part-r-00000
    6 dfs.audit.logger
    4 dfs.class
    3 dfs.server.namenode.
    3 dfs.logger
    2 dfs.period
    2 dfs.audit.log.maxfilesize
    2 dfs.audit.log.maxbackupindex
    1 dfsmetrics.log
    1 dfsadmin
    1 dfs.servers
    1 dfs.log
    1 dfs.file
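The grep example runs as a MapReduce job, but its effect is easy to reproduce locally, which helps confirm the regex does what you expect before submitting the job. A minimal sketch (the /tmp/grep-demo path and sample.xml contents are made up for illustration; assumes GNU grep):

```shell
# Prepare a tiny stand-in for the input directory
mkdir -p /tmp/grep-demo/input
cat > /tmp/grep-demo/input/sample.xml <<'EOF'
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.namenode.name.dir</name><value>/data</value></property>
EOF

# Map: print every match of the regex (-o), without filenames (-h)
# Shuffle: sort brings identical keys together
# Reduce: uniq -c counts each key; final sort orders by count, descending
grep -ohE 'dfs[a-z.]+' /tmp/grep-demo/input/* | sort | uniq -c | sort -rn
```

For this sample the pipeline prints `2 dfs.replication` followed by `1 dfs.namenode.name.dir`, mirroring the count-per-match lines seen in part-r-00000 above.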