Compiling hadoop, hbase, and zookeeper from source

     The versions chosen this time are hadoop-2.2.0, hbase-0.94.14, and zookeeper-3.4.5, all stable community releases. The reason for compiling by hand rather than downloading prebuilt packages is that it lets us locate and try to fix problems we find later ourselves, prepares us to submit patches back to the community, and gives us a firmer grasp of the source code.
     For the build environment we used our test machine 10.28.171.34. JDK, Maven, and Ant need to be installed on this Linux box in advance, the network configured so it can reach the outside world, and the Google protobuf library and cmake installed. Then download the source from the community sites and put it under /usr/local/UDP:
[root@node34@hbase0.94.14$]cd /usr/local/UDP
[root@node34@UDP$]ll
total 16
drwxr-xr-x 15 67974 users 4096 Oct  7 14:46 hadoop-2.2.0
drwxr-xr-x  9 root  root  4096 Dec 10 13:40 hbase0.94.14
drwxr-xr-x 18 root  root  4096 Dec 10 10:56 ycsb-0.1.4
drwxr-xr-x 12 root  root  4096 Dec 10 10:47 zookeeper-3.4.5
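
     As a quick sanity check before building, make sure the prerequisites are on the PATH (a minimal sketch; hadoop-2.2.0 specifically requires protobuf 2.5.0, the other version requirements are looser):
java -version        # JDK
mvn -version         # Maven, used for the hadoop and hbase builds
ant -version         # Ant, only needed for the zookeeper build
protoc --version     # should print "libprotoc 2.5.0" for hadoop-2.2.0
cmake --version      # needed for the native parts of the hadoop build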

     First, build hadoop. Enter /usr/local/UDP/hadoop-2.2.0/ and run mvn clean package -Pdist,native -DskipTests -Dtar. After a long wait, you will see output like the following:
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [6.880s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [0.963s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [2.746s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.283s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [1.904s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [2.673s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [3.323s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [1.953s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:17.664s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [4.751s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.038s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:22.788s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [13.653s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [5.512s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [3.233s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.053s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.094s]
[INFO] hadoop-yarn-api ................................... SUCCESS [45.127s]
[INFO] hadoop-yarn-common ................................ SUCCESS [19.781s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.063s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [6.718s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [12.082s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [2.252s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [9.728s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.427s]
[INFO] hadoop-yarn-client ................................ SUCCESS [3.360s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.053s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [2.087s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.047s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [15.333s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [1.768s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.104s]
[INFO] hadoop-yarn-project ............................... SUCCESS [4.607s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [15.309s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [2.153s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [8.557s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [4.276s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [4.955s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [1.625s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [4.567s]
[INFO] hadoop-mapreduce .................................. SUCCESS [4.007s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [3.584s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [6.178s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [1.809s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [4.322s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [3.240s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [2.022s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [2.387s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [8.417s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [2.510s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.023s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [19.223s]
[INFO] Apache Hadoop Client .............................. SUCCESS [6.440s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.098s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7:19.405s
[INFO] Finished at: Tue Dec 10 13:36:39 CST 2013
[INFO] Final Memory: 95M/1518M
[INFO] ------------------------------------------------------------------------
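
     Once the build succeeds, the packaged distribution lands under hadoop-dist/target. The paths below follow the standard -Pdist layout; verify them against your own checkout:
cd /usr/local/UDP/hadoop-2.2.0
ls hadoop-dist/target/hadoop-2.2.0.tar.gz          # the -Dtar tarball
ls hadoop-dist/target/hadoop-2.2.0/lib/native/     # native libraries built by -Pnative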

      Next, build hbase. Note that hbase has to be compiled against the dependencies for hadoop 2.0, which means activating the corresponding Maven profile. The exact command is mvn clean package assembly:assembly -DskipTests -Dhadoop.profile=2.0. After a while you will get output like this:
main:
    [mkdir] Created dir: /usr/local/UDP/hbase0.94.14/target/hbase-0.94.14/hbase-0.94.14/lib/native/Linux-amd64-64
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:03.884s
[INFO] Finished at: Tue Dec 10 13:41:05 CST 2013
[INFO] Final Memory: 38M/1493M
[INFO] ------------------------------------------------------------------------
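
     The assembled output lands under target/. The expanded tree matches the mkdir line in the log above; the tarball name is an assumption based on the usual assembly naming:
cd /usr/local/UDP/hbase0.94.14
ls target/hbase-0.94.14.tar.gz                     # assembly tarball (name assumed)
ls target/hbase-0.94.14/hbase-0.94.14/             # expanded layout, per the log above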

      Finally, build zookeeper. Its source can be found on GitHub. Unlike the two projects above, it is not a Maven project for now, so it can only be built with Ant, which is why Ant had to be installed beforehand. The build itself is simple: enter /usr/local/UDP/zookeeper-3.4.5/ and run the ant command to get the output below. Once the build finishes, just move the contents of the build directory up into the top-level directory:
build-generated:
    [javac] Compiling 1 source file to /usr/local/UDP/zookeeper-3.4.5/build/classes

compile:

jar:
      [jar] Building jar: /usr/local/UDP/zookeeper-3.4.5/build/zookeeper-3.4.5.jar

BUILD SUCCESSFUL
Total time: 4 seconds
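
     The final move step described above can be done with something like the following; exactly which build artifacts to copy up is a judgment call, this simply mirrors "move the contents of build to the top-level directory":
cd /usr/local/UDP/zookeeper-3.4.5
cp -r build/* .        # puts zookeeper-3.4.5.jar and its lib/ at the top level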

     One thing worth noting: if some dependency really cannot be downloaded during the build, you can fetch the jar and its pom.xml manually and place them at the right level in the local Maven repository. A further note on installation: you must delete every hadoop*.jar and protobuf-java-*.jar inside hbase and replace them with the corresponding jars from the hadoop distribution, otherwise you will hit errors such as protocol version mismatches. Also, hadoop 2.2.0 no longer seems to have the masters configuration file; instead, just set the dfs.namenode.secondary.http-address property in hdfs-site.xml.
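     A sketch of that jar swap, assuming the hadoop-dist layout produced by the build above (the two cp lines are representative; the hdfs, auth, and mapreduce jars follow the same pattern):
cd /usr/local/UDP/hbase0.94.14/lib
# remove the bundled hadoop and protobuf jars
rm -f hadoop*.jar protobuf-java-*.jar
# copy in the matching jars from the freshly built hadoop distribution
cp /usr/local/UDP/hadoop-2.2.0/hadoop-dist/target/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar .
cp /usr/local/UDP/hadoop-2.2.0/hadoop-dist/target/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar .

     And the hdfs-site.xml property mentioned above; the hostname here is only an example, and 50090 is the default secondary namenode HTTP port:
<property>
   <name>dfs.namenode.secondary.http-address</name>
   <value>node34:50090</value>
</property>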


The following proxy-user settings also went into core-site.xml (they allow the hduser account to impersonate other users, which HttpFS relies on; the wildcard * is wide open and only suitable for a test environment):
<property>
   <name>hadoop.proxyuser.hduser.hosts</name>
   <value>*</value>
</property>

<property>
   <name>hadoop.proxyuser.hduser.groups</name>
   <value>*</value>
</property>