Compiling Hadoop 2.6.4 on Ubuntu Server 14.04 64-bit (keyboard-only mode)


Required environment and tools
(1) Roughly four Ubuntu Server 14.04 64-bit machines (VMs are fine; install one and clone the rest rather than installing each from scratch, the result is the same). If you don't know how to set up the VMs, see my guide: http://blog.csdn.net/u014716068/article/details/51829084
(2) A Windows 7 machine
(3) Xshell as the remote terminal, which saves a lot of hassle
Reference documents:
Installing the JDK:
(1) http://blog.csdn.net/shines/article/details/45463497
(2) http://www.linuxidc.com/Linux/2015-03/114618.htm
Compiling Hadoop:
(1) http://blog.csdn.net/myarrow/article/details/51037368
(2) http://blog.sina.com.cn/s/blog_549667a50102v83x.html
(3) http://blog.csdn.net/yuzhiyuxia/article/details/19617483
Hadoop download addresses:
(1) http://hadoop.apache.org/ (official site)
(2) http://hadoop.apache.org/releases.html (releases page)
(3) http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz (Hadoop 2.6.4 source download)
Lantern download addresses:
(1) https://github.com/getlantern/lantern (project page)
(2) https://raw.githubusercontent.com/getlantern/lantern-binaries/master/lantern-installer-3.0.4-64-bit.deb (installer download)

Note: everything below is done as root; I'd rather not type the word sudo over and over.
Step 1: Set up the JDK

  • Install the JDK automatically
    A fresh Ubuntu Server 14.04 install ships without a JDK:
root@master:~# javac
The program 'javac' can be found in the following packages:
 * default-jdk
 * ecj
 * gcj-4.8-jdk
 * openjdk-7-jdk
 * gcj-4.6-jdk
 * openjdk-6-jdk
Try: apt-get install <selected package>
root@master:~#

So we need to install a JDK, either from a downloaded package (see the reference documents above) or automatically through apt. Let's do the automatic route.
Enter the command:

root@master:~# apt-get install openjdk-7-jdk

The installation takes quite a while, so be patient.

  • Configure the JDK environment variables
    Once the JDK is installed, we need to set the environment variables. To find where Java actually landed, just search for jre*, since the JRE normally sits inside the Java directory:
root@master:~# find / -name 'jre*'
/usr/lib/jvm/java-7-openjdk-amd64/jre
root@master:~#

Configure the Java environment variables:

root@master:~# vi /etc/profile

Append these lines at the end:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
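If you'd rather not wait for a reboot, the new variables can also be loaded into the current shell and checked right away; a quick sanity check (paths as configured above; the output shown is what I'd expect, not a capture):

root@master:~# source /etc/profile
root@master:~# echo $JAVA_HOME
/usr/lib/jvm/java-7-openjdk-amd64
root@master:~# javac -version
javac 1.7.0_101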

Reboot the machine (or simply re-source the profile as above) and check the Java version; if you see output like the following, the installation succeeded:

root@master:~# java -version
java version "1.7.0_101"
OpenJDK Runtime Environment (IcedTea 2.6.6) (7u101-2.6.6-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
root@master:~#
  • Install the Maven build tool
    Maven:
    Maven is a project build and management tool. It provides facilities for managing builds, documentation, reporting, dependencies, SCM, releases, and distribution, making it easy to compile code, manage dependencies, manage binary artifacts, and more.
    Maven's strength is that it makes the project lifecycle standardized, automated, and efficient, and it is highly extensible.
    With Maven itself and its plugins you can also get code-inspection reports, unit-test coverage, continuous integration, and so on.
    Since we're automating everything here, it's of course a single command:
root@master:~# apt-get install maven

After a long wait, ask for the version; if version information is printed, the install succeeded:

root@master:~# mvn --version
Apache Maven 3.0.5
Maven home: /usr/share/maven
Java version: 1.7.0_101, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
root@master:~#
  • Install OpenSSH
    OpenSSH:
    OpenSSH is a free, open-source implementation of the SSH (Secure Shell) protocol. The SSH protocol family can be used for remote control or for transferring files between machines. The traditional tools for this — telnet, rcp, ftp, rlogin, rsh — are extremely insecure and send passwords in plain text. OpenSSH provides a server daemon and client tools that encrypt the data during remote control and file transfer, replacing those legacy services.
    OpenSSH began as the open-source replacement for the commercial version from SSH Communications Security; today it is a subproject of OpenBSD.
    OpenSSH is often assumed to be related to OpenSSL, but the two projects have different goals and different development teams; the similar names merely reflect the same aim of providing open-source encrypted communication software.
    Now the command:
apt-get install openssh-server openssh-client

Another long wait:

root@master:~# apt-get install openssh-server openssh-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  ssh-askpass libpam-ssh keychain monkeysphere rssh molly-guard
The following packages will be upgraded:
  openssh-client openssh-server
2 upgraded, 0 newly installed, 0 to remove and 204 not upgraded.
Need to get 885 kB of archives.
After this operation, 4,096 B of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-server amd64 1:6.6p1-2ubuntu2.7 [322 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-client amd64 1:6.6p1-2ubuntu2.7 [564 kB]
Fetched 885 kB in 2min 44s (5,390 B/s)
Preconfiguring packages ...
(Reading database ... 65436 files and directories currently installed.)
Preparing to unpack .../openssh-server_1%3a6.6p1-2ubuntu2.7_amd64.deb ...
Unpacking openssh-server (1:6.6p1-2ubuntu2.7) over (1:6.6p1-2ubuntu1) ...
Preparing to unpack .../openssh-client_1%3a6.6p1-2ubuntu2.7_amd64.deb ...
Unpacking openssh-client (1:6.6p1-2ubuntu2.7) over (1:6.6p1-2ubuntu1) ...
Processing triggers for ureadahead (0.100.0-16) ...
ureadahead will be reprofiled on next reboot
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up openssh-client (1:6.6p1-2ubuntu2.7) ...
Setting up openssh-server (1:6.6p1-2ubuntu2.7) ...
ssh stop/waiting
ssh start/running, process 4902
root@master:~#
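Since the whole point of having four machines is a Hadoop cluster, it's worth setting up passwordless SSH between the nodes while OpenSSH is fresh. A minimal sketch, assuming a node named slave1 (a placeholder hostname; repeat the ssh-copy-id step for each node):

root@master:~# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
root@master:~# ssh-copy-id root@slave1
root@master:~# ssh slave1 hostname
slave1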
  • Install protobuf-compiler
    Protobuf:
    Protocol Buffers (protobuf) is a data description language developed by Google. Like XML, it can serialize structured data, and it is used for data storage, wire protocols, and so on. At this stage it officially supports three languages: C++, Java, and Python.
root@master:~# apt-get install protobuf-compiler
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libprotobuf8 libprotoc8
The following NEW packages will be installed:
  libprotobuf8 libprotoc8 protobuf-compiler
0 upgraded, 3 newly installed, 0 to remove and 204 not upgraded.
Need to get 550 kB of archives.
After this operation, 2,133 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty/main libprotobuf8 amd64 2.5.0-9ubuntu1 [296 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu/ trusty/main libprotoc8 amd64 2.5.0-9ubuntu1 [235 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu/ trusty/main protobuf-compiler amd64 2.5.0-9ubuntu1 [19.8 kB]
Fetched 550 kB in 3s (173 kB/s)
Selecting previously unselected package libprotobuf8:amd64.
(Reading database ... 65436 files and directories currently installed.)
Preparing to unpack .../libprotobuf8_2.5.0-9ubuntu1_amd64.deb ...
Unpacking libprotobuf8:amd64 (2.5.0-9ubuntu1) ...
Selecting previously unselected package libprotoc8:amd64.
Preparing to unpack .../libprotoc8_2.5.0-9ubuntu1_amd64.deb ...
Unpacking libprotoc8:amd64 (2.5.0-9ubuntu1) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../protobuf-compiler_2.5.0-9ubuntu1_amd64.deb ...
Unpacking protobuf-compiler (2.5.0-9ubuntu1) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up libprotobuf8:amd64 (2.5.0-9ubuntu1) ...
Setting up libprotoc8:amd64 (2.5.0-9ubuntu1) ...
Setting up protobuf-compiler (2.5.0-9ubuntu1) ...
Processing triggers for libc-bin (2.19-0ubuntu6) ...
root@master:~# protoc --version
libprotoc 2.5.0
root@master:~#
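To see protoc do something beyond printing its version, here is a throwaway demo — not part of the Hadoop build — that compiles a minimal message definition to Java (protoc 2.5.0 defaults to proto2 syntax; the Test outer class name is derived from the file name):

root@master:~# cat > test.proto <<'EOF'
message Greeting {
  required string text = 1;  // field number 1
}
EOF
root@master:~# protoc --java_out=. test.proto
root@master:~# ls Test.java
Test.java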
  • Install the dependency libraries
    These libraries and packages are almost all needed at some point during the build; missing ones break the compile, and hunting for a solution only after seeing each error is painful, so install them all up front and be done with it:
apt-get install g++ autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev
  • Install FindBugs
    FindBugs:
    FindBugs is a static analysis tool: it examines classes or JAR files, comparing the bytecode against a set of bug patterns to spot likely problems, so software can be analyzed without actually running it. Rather than determining a program's intent by analyzing the form or structure of its class files, it typically works via the Visitor pattern.
    In the FindBugs GUI, you first select the .class files to scan (FindBugs works on compiled classes to uncover hidden bugs). If you have the corresponding .java sources, select those too, so the resulting report can jump straight to the offending code. You can also add the libraries the project uses, which appears to let FindBugs perform more advanced checks and find deeper bugs.
    With everything selected, the scan begins. It can take several minutes depending on the size of the project, and it produces a detailed report of potential bugs lurking in the code — classic examples being null pointer dereferences and unclosed resources (e.g. DB connections). By manual inspection such bugs might take forever to find, or never surface until they strike at runtime; once these classic bugs are cleared out, the system's stability reaches a new level.
    Enter the command (my feeling is this isn't of much use here, but to head off problems later, install it anyway):
root@master:~# apt-get install findbugs
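On a headless server only FindBugs' command-line mode is usable. A hypothetical invocation against some already-built JAR (myapp.jar is a placeholder; -textui and -effort are standard FindBugs text-UI options):

root@master:~# findbugs -textui -effort:min myapp.jar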
  • Start the build
    I owe you an apology: I promised everything would be typed by hand, but I copy-pasted this command, because it's long and I was afraid of mangling the address. In a real working environment with nowhere to paste from, type it yourself — carefully. First, we need to download a Hadoop source release; I'm using Hadoop 2.6.4, from this address:
root@master:~# wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz
--2016-07-05 18:24:46--  http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 166.111.206.63, 2402:f000:1:416:166:111:206:63
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|166.111.206.63|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17282122 (16M) [application/octet-stream]
Saving to: ‘hadoop-2.6.4-src.tar.gz’

100%[========================================================================================>] 17,282,122  10.2MB/s   in 1.6s

2016-07-05 18:24:48 (10.2 MB/s) - ‘hadoop-2.6.4-src.tar.gz’ saved [17282122/17282122]
root@master:~# ls
hadoop-2.6.4-src.tar.gz

Extract the archive into the current directory:

root@master:~# tar -zxvf hadoop-2.6.4-src.tar.gz
root@master:~# ls
hadoop-2.6.4-src  hadoop-2.6.4-src.tar.gz

Change into the hadoop-2.6.4-src directory:

root@master:~# cd /root/xyj/hadoop-2.6.4-src

The exciting moment has arrived: it's finally time to compile:

root@master:~/hadoop-2.6.4-src# mvn clean package -Pdist,native -DskipTests -Dtar

OK, now we wait — for a long time. Keep the network stable and the power on; a build normally finishes in an hour or two, and with a slow machine or network it can easily take more than two.
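One precaution worth taking first: the BUILDING.txt in the Hadoop source tree suggests giving Maven more heap if the build dies with an out-of-memory error. A sketch along those lines (exact sizes are up to your VM):

root@master:~/hadoop-2.6.4-src# export MAVEN_OPTS="-Xms256m -Xmx512m"
root@master:~/hadoop-2.6.4-src# mvn clean package -Pdist,native -DskipTests -Dtar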
How do you find the Hadoop download address?
[Start at the official site, http://hadoop.apache.org/, find Getting Started, and click Download to reach the Hadoop releases page, http://hadoop.apache.org/releases.html. You'll see download links for many versions; pick whichever one looks right to you — I chose Hadoop 2.6.4. Click the source link under Tarball, which leads to http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz, where you'll find an HTTP address: that address is the download address. Just put it after wget!]

  • Troubleshooting
    What programmers fear most are bugs; a bug we can't fix is our nightmare, and an Error is every bit as terrifying!!! Still, problems have to be faced and worked through patiently. As the saying (almost) goes: debugging is not just today's grind, there's also tomorrow and the day after — I refuse to believe it can't be fixed. And sometimes you'll find it suddenly just works, without ever learning what was wrong.
    Error 1:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 54:47.276s
[INFO] Finished at: Tue Jul 05 07:16:18 EDT 2016
[INFO] Final Memory: 80M/473M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hadoop-hdfs-httpfs: Could not resolve dependencies for project org.apache.hadoop:hadoop-hdfs-httpfs:war:2.6.4: Failed to collect dependencies for [junit:junit:jar:4.11 (test), org.mockito:mockito-all:jar:1.8.5 (test), org.apache.hadoop:hadoop-auth:jar:2.6.4 (compile), com.sun.jersey:jersey-core:jar:1.9 (compile), com.sun.jersey:jersey-server:jar:1.9 (compile), javax.servlet:servlet-api:jar:2.5 (provided), com.google.guava:guava:jar:11.0.2 (compile), com.googlecode.json-simple:json-simple:jar:1.1 (compile), org.mortbay.jetty:jetty:jar:6.1.26 (test), org.apache.hadoop:hadoop-common:jar:2.6.4 (compile), org.apache.hadoop:hadoop-hdfs:jar:2.6.4 (compile), org.apache.hadoop:hadoop-common:jar:tests:2.6.4 (test), org.apache.hadoop:hadoop-hdfs:jar:tests:2.6.4 (test), log4j:log4j:jar:1.2.17 (compile), org.slf4j:slf4j-api:jar:1.7.5 (compile), org.slf4j:slf4j-log4j12:jar:1.7.5 (runtime)]: Failed to read artifact descriptor for com.googlecode.json-simple:json-simple:jar:1.1: Could not transfer artifact com.googlecode.json-simple:json-simple:pom:1.1 from/to central (http://repo.maven.apache.org/maven2): Connection to http://repo.maven.apache.org refused: Connection refused -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs-httpfs
root@master:~/xyj/hadoop-2.6.4-src#

I came back from dinner to find the build had failed. Fine, let's fix it! I searched the web for this error and found nothing, so I read the log myself. (The key line is actually "Could not transfer artifact ... from/to central ...: Connection refused" — Maven couldn't download a dependency from Maven Central.) At the time I read it as a Tomcat server problem and figured my freshly installed system simply had no Tomcat, so I ran:

root@master:~/xyj/hadoop-2.6.4-src# apt-get install tomcat7

Recompile:

root@master:~/xyj/hadoop-2.6.4-src# mvn clean package -Pdist,native -DskipTests -Dtar

Another long wait. I assumed the problem hadn't been fixed, but to my surprise the build sailed right past the module that had failed before. Past is past, I suppose — it counts as a way of solving the problem, better than not solving it. The one regret is that I never truly learned what was wrong; I was going on gut feeling and stumbled through. Most likely the retry simply succeeded where the network had refused before, rather than the Tomcat install doing anything. Let's hope that's all it was!
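Incidentally, a full mvn clean rebuild wasn't strictly necessary: the tail of the error log itself says the build can be resumed from the failed module. A sketch of that shortcut (dropping clean keeps the 54 minutes of already-built modules):

root@master:~/xyj/hadoop-2.6.4-src# mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-hdfs-httpfs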
Error 2:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:14:21.695s
[INFO] Finished at: Tue Jul 05 09:21:06 EDT 2016
[INFO] Final Memory: 76M/439M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hadoop-yarn-server-nodemanager: Could not resolve dependencies for project org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:2.6.4: Failed to collect dependencies for [org.apache.hadoop:hadoop-common:jar:2.6.4 (provided), org.apache.hadoop:hadoop-yarn-common:jar:2.6.4 (compile), org.apache.hadoop:hadoop-yarn-api:jar:2.6.4 (compile), javax.xml.bind:jaxb-api:jar:2.2.2 (compile), org.codehaus.jettison:jettison:jar:1.1 (compile), commons-lang:commons-lang:jar:2.6 (compile), javax.servlet:servlet-api:jar:2.5 (compile), commons-codec:commons-codec:jar:1.4 (compile), com.sun.jersey:jersey-core:jar:1.9 (compile), com.sun.jersey:jersey-client:jar:1.9 (compile), org.mortbay.jetty:jetty-util:jar:6.1.26 (compile), com.google.guava:guava:jar:11.0.2 (compile), commons-logging:commons-logging:jar:1.1.3 (compile), org.slf4j:slf4j-api:jar:1.7.5 (compile), org.apache.hadoop:hadoop-annotations:jar:2.6.4 (compile), org.apache.hadoop:hadoop-common:jar:tests:2.6.4 (test), com.google.inject.extensions:guice-servlet:jar:3.0 (compile), com.google.protobuf:protobuf-java:jar:2.5.0 (compile), junit:junit:jar:4.11 (test), org.mockito:mockito-all:jar:1.8.5 (test), com.google.inject:guice:jar:3.0 (compile), com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9 (test), com.sun.jersey:jersey-json:jar:1.9 (compile), com.sun.jersey.contribs:jersey-guice:jar:1.9 (compile), org.apache.hadoop:hadoop-yarn-common:jar:tests:2.6.4 (test), org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.4 (compile), org.fusesource.leveldbjni:leveldbjni-all:jar:1.8 (compile)]: Failed to read artifact descriptor for org.glassfish.grizzly:grizzly-http:jar:2.1.2: Could not transfer artifact org.glassfish.grizzly:grizzly-http:pom:2.1.2 from/to apache.snapshots.https (https://repository.apache.org/content/repositories/snapshots): Read timed out -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-yarn-server-nodemanager

What a headache — time to search for a solution properly! Here's a tip for debugging: add a -X flag to the original command and you can watch the build in full detail:

root@master:~/xyj/hadoop-2.6.4-src# mvn clean package -Pdist,native -DskipTests -Dtar -X

For this problem I combed through many sites and still found no answer. After reading a pile of posts and questions, it boils down to my network and the mirror sites: sometimes the connection is too slow, or can't reach hosts abroad at all. So I simply downloaded Lantern, a circumvention proxy. (It may equally well have been plain network flakiness: if a large download stalls mid-build, or a file ends up missing or truncated, the build fails — so blocked access isn't necessarily the cause. Perhaps the network failed to find a good route or host the first time and found one the second. This really does feel like rolling dice!)
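Before resorting to a proxy, one more thing worth trying is pointing Maven at a mirror of Central that your network can reach reliably. A minimal sketch of ~/.m2/settings.xml (the mirror URL is only an illustration — substitute one that works from where you are):

cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <!-- route all requests for the central repo through this mirror -->
      <id>central-mirror</id>
      <mirrorOf>central</mirrorOf>
      <url>http://repo1.maven.org/maven2</url>
    </mirror>
  </mirrors>
</settings>
EOF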
Next, let me show you how to install Lantern.
Download from GitHub (the project's source address is listed at the top of this post):

root@master:~/xyj# wget https://raw.githubusercontent.com/getlantern/lantern-binaries/master/lantern-installer-3.0.4-64-bit.deb

Install the helper tools:

root@master:~/xyj# apt-get install gdebi-core
root@master:~/xyj# apt-get install libappindicator3-1

Install and launch:

root@master:~/xyj# gdebi lantern-installer-3.0.4-64-bit.deb
root@master:~/xyj# lantern
root@master:~# lantern
Running installation script...
/usr/lib/lantern/lantern-binary: OK
Jul 06 09:24:51.955 - 0m0s DEBUG flashlight: flashlight.go:49 ****************************** Package Version: 2.1.2
Jul 06 09:24:51.956 - 0m0s DEBUG flashlight.ui: ui.go:58 Creating tarfs filesystem that prefers local resources at /lantern/src/github.com/getlantern/lantern-ui/app
Jul 06 09:24:51.960 - 0m0s DEBUG flashlight: settings.go:57 Loading settings
Jul 06 09:24:51.960 - 0m0s DEBUG flashlight: settings.go:70 Could not read file open /root/.lantern/settings.yaml: no such file or directory
Jul 06 09:24:51.961 - 0m0s DEBUG flashlight.ui: service.go:134 Accepting websocket connections at: /data
Jul 06 09:24:51.962 - 0m0s DEBUG flashlight: settings.go:99 Sending Lantern settings to new client
Jul 06 09:24:51.965 - 0m0s DEBUG flashlight: settings.go:109 Reading settings messages!!
Jul 06 09:24:52.000 - 0m0s DEBUG flashlight: flashlight.go:49 ****************************** Package Version: 2.1.2
Jul 06 09:24:52.001 - 0m0s DEBUG flashlight.ui: ui.go:58 Creating tarfs filesystem that prefers local resources at /lantern/src/github.com/getlantern/lantern-ui/app
Jul 06 09:24:52.028 - 0m0s DEBUG flashlight: settings.go:57 Loading settings
Jul 06 09:24:52.028 - 0m0s DEBUG flashlight: settings.go:70 Could not read file open /root/.lantern/settings.yaml: no such file or directory
Jul 06 09:24:52.031 - 0m0s DEBUG flashlight.ui: service.go:134 Accepting websocket connections at: /data
Jul 06 09:24:52.032 - 0m0s DEBUG flashlight: settings.go:99 Sending Lantern settings to new client
Jul 06 09:24:52.037 - 0m0s DEBUG flashlight: settings.go:109 Reading settings messages!!
(lantern:18322): Gtk-WARNING **: cannot open display:

Check on it:

root@master:~# ps -aux | grep lantern
root      18331  0.0  0.0  11740   940 pts/1    S+   05:26   0:00 grep --color=auto lantern

(Note that the only match above is the grep process itself, so Lantern never actually started — there's no display on a headless server, hence the Gtk warning earlier.) Anyway, with that installed, let's carry on! I suspect it was mostly a psychological comfort, and whether the re-run compiles comes down to luck. Happily, the build went through — thank you, luck!

  • Find the build output
    The build takes a long, long wait, but at last the result appears, as follows:
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [4.493s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [3.007s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [5.774s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.552s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [5.550s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [12.361s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [15.264s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [15.034s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [8.842s]
[INFO] Apache Hadoop Common .............................. SUCCESS [5:43.242s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [31.949s]
[INFO] Apache Hadoop KMS ................................. SUCCESS [35.223s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.500s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [9:19.175s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [1:12.408s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [25.086s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [14.865s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.188s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.167s]
[INFO] hadoop-yarn-api ................................... SUCCESS [2:13.539s]
[INFO] hadoop-yarn-common ................................ SUCCESS [1:57.809s]
[INFO] hadoop-yarn-server ................................ SUCCESS [1.107s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [38.032s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [1:03.906s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [9.610s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [15.939s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [1:06.344s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [23.420s]
[INFO] hadoop-yarn-client ................................ SUCCESS [18.195s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.291s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [5.631s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [3.816s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.152s]
[INFO] hadoop-yarn-registry .............................. SUCCESS [11.657s]
[INFO] hadoop-yarn-project ............................... SUCCESS [23.399s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.980s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1:26.429s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [1:00.837s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [18.942s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [28.096s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [21.970s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [35.789s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [5.011s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [11.536s]
[INFO] hadoop-mapreduce .................................. SUCCESS [17.663s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [19.450s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [59.560s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [9.721s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [17.239s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [11.690s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [11.958s]
[INFO] Apache Hadoop Ant Tasks ........................... SUCCESS [7.894s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [6.610s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [29.171s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [25.039s]
[INFO] Apache Hadoop Amazon Web Services support ......... SUCCESS [4:21.123s]
[INFO] Apache Hadoop Client .............................. SUCCESS [24.644s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.710s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [22.584s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [32.450s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [1.933s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [3:01.581s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45:05.494s
[INFO] Finished at: Wed Jul 06 06:14:20 EDT 2016
[INFO] Final Memory: 101M/473M
[INFO] ------------------------------------------------------------------------

The summary above lists every module that gets compiled; they are built one by one, and when the build fails partway, the summary shows which modules succeeded, which one failed, and which were skipped after the failure. I forgot to take a screenshot of that — well, you'll see it when you compile yourself.
Once the build is done, we need to find the compiled package; its path is:

root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target# pwd
/root/xyj/hadoop-2.6.4-src/hadoop-dist/target
root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target# ll
total 539268
drwxr-xr-x 7 root root      4096 Jul  6 06:13 ./
drwxr-xr-x 3 root root      4096 Jul  6 06:11 ../
drwxr-xr-x 2 root root      4096 Jul  6 06:11 antrun/
-rw-r--r-- 1 root root      1866 Jul  6 06:11 dist-layout-stitching.sh
-rw-r--r-- 1 root root       639 Jul  6 06:11 dist-tar-stitching.sh
drwxr-xr-x 9 root root      4096 Jul  6 06:11 hadoop-2.6.4/
-rw-r--r-- 1 root root 183757063 Jul  6 06:12 hadoop-2.6.4.tar.gz
-rw-r--r-- 1 root root      2779 Jul  6 06:11 hadoop-dist-2.6.4.jar
-rw-r--r-- 1 root root 368403396 Jul  6 06:14 hadoop-dist-2.6.4-javadoc.jar
drwxr-xr-x 2 root root      4096 Jul  6 06:13 javadoc-bundle-options/
drwxr-xr-x 2 root root      4096 Jul  6 06:11 maven-archiver/
drwxr-xr-x 2 root root      4096 Jul  6 06:11 test-dir/

The hadoop-2.6.4.tar.gz package in there is the one we want, and there's also the already-extracted hadoop-2.6.4 directory.
Hadoop's version information:

root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/bin# ./hadoop version
Hadoop 2.6.4
Subversion Unknown -r Unknown
Compiled by root on 2016-07-06T09:30Z
Compiled with protoc 2.5.0
From source with checksum 8dee2286ecdbbbc930a6c87b65cbc010
This command was run using /root/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4.jar
root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/bin# pwd
/root/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/bin

Hadoop's native dynamic link libraries:

/root/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/lib/native
root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/lib/native# file *
libhadoop.a:        current ar archive
libhadooppipes.a:   current ar archive
libhadoop.so:       symbolic link to `libhadoop.so.1.0.0'
libhadoop.so.1.0.0: ELF 64-bit LSB  shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=2414b17dc9802b68da89538507e71ff61c8630c4, not stripped
libhadooputils.a:   current ar archive
libhdfs.a:          current ar archive
libhdfs.so:         symbolic link to `libhdfs.so.0.0.0'
libhdfs.so.0.0.0:   ELF 64-bit LSB  shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=a5aa61121dfb8d075dca4deab83067c812acd4c4, not stripped
root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/lib/native# ll
total 4768
drwxr-xr-x 2 root root    4096 Jul  6 06:11 ./
drwxr-xr-x 3 root root    4096 Jul  6 06:11 ../
-rw-r--r-- 1 root root 1278070 Jul  6 06:11 libhadoop.a
-rw-r--r-- 1 root root 1632656 Jul  6 06:11 libhadooppipes.a
lrwxrwxrwx 1 root root      18 Jul  6 06:11 libhadoop.so -> libhadoop.so.1.0.0*
-rwxr-xr-x 1 root root  750783 Jul  6 06:11 libhadoop.so.1.0.0*
-rw-r--r-- 1 root root  476210 Jul  6 06:11 libhadooputils.a
-rw-r--r-- 1 root root  441046 Jul  6 06:11 libhdfs.a
lrwxrwxrwx 1 root root      16 Jul  6 06:11 libhdfs.so -> libhdfs.so.0.0.0*
-rwxr-xr-x 1 root root  282519 Jul  6 06:11 libhdfs.so.0.0.0*
root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4/lib/native#

The contents of hadoop-2.6.4.tar.gz, once extracted, are identical to this directory. When the time comes to use it, copy the hadoop-2.6.4.tar.gz file — people just prefer the factory-sealed original, hehe...
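To double-check that these freshly built native libraries actually load, Hadoop ships a checknative command; run from the dist directory, it reports whether libhadoop and the optional compression libraries (zlib, snappy, lz4, bzip2, openssl) were found. A quick sketch — output not captured here:

root@master:~/xyj/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4# bin/hadoop checknative -a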

  • Summary
    And with that, Hadoop 2.6.4 is compiled. I promised you keyboard-only operation, but I cheated a little myself and pasted in those long commands and a URL I was too lazy to type. Beginners: I still suggest typing everything out character by character. Seasoned developers: suit yourselves.
    Beyond the problems covered here, you may hit issues I never saw during your own install. Attack them actively: one day, then two, then three — and if three days isn't enough, let it go rather than sink more time in; start over fresh, or keep going until the problem is dead.
    The build process, roughly, is that mvn downloads everything needed to compile Hadoop to the local machine and then compiles module by module until the whole build succeeds. If it fails midway, the process stops, and only once the problem is resolved can a re-run carry the build through to success.
    Most problems met during compilation come down to the network and the mirror sites: when an external resource can't be downloaded, all we can do is accept the bad luck and let the network find a better address, which means compiling again. Sometimes even recompiling doesn't cure it; that part, slowly, comes down to experience.
    All right — that concludes compiling Hadoop!
    Download link: Hadoop 2.6.4 compiled on ubuntu server 14.04