Deploying Spark + Shark on CDH3u4


Hadoop version: CDH3u4

Spark version: spark-0.9.1-rc3

Shark version: branch-0.9 (git clone https://github.com/amplab/shark.git -b branch-0.9)


Download the corresponding Spark package and extract it:

wget https://github.com/apache/spark/archive/v0.9.1-rc3.tar.gz
tar xzvf v0.9.1-rc3.tar.gz


Enter the spark directory and build Spark:

cd spark-0.9.1-rc3/
SPARK_HADOOP_VERSION=0.20.2-cdh3u4 sbt/sbt clean assembly

The build can take quite a while, so be patient. Once it finishes, edit spark-env.sh under spark-0.9.1-rc3/conf and add the following (cluster resources are virtualized through Mesos):

export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export MASTER=zk://storm01:2181,monet00:2181,report:2181/mesos

Finally, tar up the built spark directory, copy it to the same location on every node, and extract it there.
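A minimal sketch of this step, assuming the build lives under /opt (as in the logs below); the node names pdn24 and pdn28 are placeholders for your own node list:

cd /opt
tar czf spark-0.9.1-rc3.tar.gz spark-0.9.1-rc3
# Node names are placeholders; replace with your own list.
for node in pdn24 pdn28; do
    scp spark-0.9.1-rc3.tar.gz $node:/opt/
    ssh $node "cd /opt && tar xzf spark-0.9.1-rc3.tar.gz"
done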


You can verify that Spark is installed correctly with the following test command:

bin/run-example org.apache.spark.examples.SparkPi zk://storm01:2181,monet00:2181,report:2181/mesos

The output on my machine was:

bin/run-example org.apache.spark.examples.SparkPi zk://storm01:2181,monet00:2181,report:2181/mesos
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-0.9.1-rc3/examples/target/scala-2.10/spark-examples-assembly-0.9.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark-0.9.1-rc3/assembly/target/scala-2.10/spark-assembly-0.9.1-hadoop1.0.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/04/09 10:17:06 INFO SparkEnv: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/04/09 10:17:06 INFO SparkEnv: Registering BlockManagerMaster
14/04/09 10:17:06 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140409101706-37d6
14/04/09 10:17:06 INFO MemoryStore: MemoryStore started with capacity 8.4 GB.
14/04/09 10:17:06 INFO ConnectionManager: Bound socket to port 52641 with id = ConnectionManagerId(pnn,52641)
14/04/09 10:17:06 INFO BlockManagerMaster: Trying to register BlockManager
14/04/09 10:17:06 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager pnn:52641 with 8.4 GB RAM
14/04/09 10:17:06 INFO BlockManagerMaster: Registered BlockManager
14/04/09 10:17:06 INFO HttpServer: Starting HTTP Server
14/04/09 10:17:06 INFO HttpBroadcast: Broadcast server started at http://192.168.85.66:57102
14/04/09 10:17:06 INFO SparkEnv: Registering MapOutputTracker
14/04/09 10:17:06 INFO HttpFileServer: HTTP File server directory is /tmp/spark-018553e5-fe2e-4809-b4b6-9e752b9b2bbd
14/04/09 10:17:06 INFO HttpServer: Starting HTTP Server
14/04/09 10:17:07 INFO SparkUI: Started Spark Web UI at http://pnn:4040
14/04/09 10:17:08 INFO SparkContext: Added JAR /opt/spark-0.9.1-rc3/examples/target/scala-2.10/spark-examples-assembly-0.9.1.jar at http://192.168.85.66:57002/jars/spark-examples-assembly-0.9.1.jar with timestamp 1397009828152
2014-04-09 10:17:08,314:27705(0x52f6f940):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@716: Client environment:host.name=pnn
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.18-308.8.1.el5
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Tue May 29 14:57:25 EDT 2012
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@733: Client environment:user.name=hoolai
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@741: Client environment:user.home=/root
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@log_env@753: Client environment:user.dir=/opt/spark-0.9.1-rc3
2014-04-09 10:17:08,315:27705(0x52f6f940):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=storm01:2181,monet00:2181,report:2181 sessionTimeout=10000 watcher=0x2aaabda150b0 sessionId=0 sessionPasswd=<null> context=0x2aaab45da2b0 flags=0
2014-04-09 10:17:08,317:27705(0x68391940):ZOO_INFO@check_events@1703: initiated connection to server [192.168.85.114:2181]
2014-04-09 10:17:08,340:27705(0x68391940):ZOO_INFO@check_events@1750: session establishment complete on server [192.168.85.114:2181], sessionId=0x143ade5a8aa01a6, negotiated timeout=10000
I0409 10:17:08.341227 27854 group.cpp:310] Group process ((2)@192.168.85.66:48522) connected to ZooKeeper
I0409 10:17:08.341346 27854 group.cpp:752] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0409 10:17:08.341378 27854 group.cpp:367] Trying to create path '/mesos' in ZooKeeper
I0409 10:17:08.343472 27831 detector.cpp:134] Detected a new leader: (id='43')
I0409 10:17:08.343940 27872 group.cpp:629] Trying to get '/mesos/info_0000000043' in ZooKeeper
I0409 10:17:08.344985 27876 detector.cpp:351] A new leading master (UPID=master@192.168.85.66:5050) is detected
I0409 10:17:08.345324 27834 sched.cpp:218] No credentials provided. Attempting to register without authentication
I0409 10:17:08.345569 27834 sched.cpp:230] Detecting new master
14/04/09 10:17:08 INFO MesosSchedulerBackend: Registered as framework ID 201404081047-1112910016-5050-9652-0016
14/04/09 10:17:08 INFO SparkContext: Starting job: reduce at SparkPi.scala:39
14/04/09 10:17:08 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:39) with 2 output partitions (allowLocal=false)
14/04/09 10:17:08 INFO DAGScheduler: Final stage: Stage 0 (reduce at SparkPi.scala:39)
14/04/09 10:17:08 INFO DAGScheduler: Parents of final stage: List()
14/04/09 10:17:08 INFO DAGScheduler: Missing parents: List()
14/04/09 10:17:08 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:35), which has no missing parents
14/04/09 10:17:08 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:35)
14/04/09 10:17:08 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/04/09 10:17:08 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on executor 201404011147-1112910016-5050-5163-18: pdn28 (PROCESS_LOCAL)
14/04/09 10:17:08 INFO TaskSetManager: Serialized task 0.0:0 as 1407 bytes in 10 ms
14/04/09 10:17:08 INFO TaskSetManager: Starting task 0.0:1 as TID 1 on executor 201404011147-1112910016-5050-5163-14: pdn24 (PROCESS_LOCAL)
14/04/09 10:17:08 INFO TaskSetManager: Serialized task 0.0:1 as 1407 bytes in 0 ms
14/04/09 10:17:24 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager pdn24:54522 with 294.4 MB RAM
14/04/09 10:17:31 INFO TaskSetManager: Finished TID 1 in 22685 ms on pdn24 (progress: 1/2)
14/04/09 10:17:31 INFO DAGScheduler: Completed ResultTask(0, 1)
14/04/09 10:17:38 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager pdn28:41054 with 294.4 MB RAM
14/04/09 10:17:56 INFO DAGScheduler: Completed ResultTask(0, 0)
14/04/09 10:17:56 INFO TaskSetManager: Finished TID 0 in 47380 ms on pdn28 (progress: 2/2)
14/04/09 10:17:56 INFO DAGScheduler: Stage 0 (reduce at SparkPi.scala:39) finished in 47.408 s
14/04/09 10:17:56 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/04/09 10:17:56 INFO SparkContext: Job finished: reduce at SparkPi.scala:39, took 47.606845 s
Pi is roughly 3.14476
14/04/09 10:17:56 INFO MesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
14/04/09 10:17:57 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
14/04/09 10:17:57 INFO ConnectionManager: Selector thread was interrupted!
14/04/09 10:17:57 INFO ConnectionManager: ConnectionManager stopped
14/04/09 10:17:57 INFO MemoryStore: MemoryStore cleared
14/04/09 10:17:57 INFO BlockManager: BlockManager stopped
14/04/09 10:17:57 INFO BlockManagerMasterActor: Stopping BlockManagerMaster
14/04/09 10:17:57 INFO BlockManagerMaster: BlockManagerMaster stopped
14/04/09 10:17:57 INFO SparkContext: Successfully stopped SparkContext
14/04/09 10:17:57 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
14/04/09 10:17:57 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
The job ran successfully.


Next, install Shark. First, grab the latest code from the 0.9 branch:

git clone https://github.com/amplab/shark.git -b branch-0.9

Enter the directory and build:

cd shark
SHARK_HADOOP_VERSION=0.20.2-cdh3u4 sbt/sbt clean package
This build also takes a while, so be patient. It may emit a pile of warnings along the way; they can be ignored.


After the build completes, a few compatibility and configuration steps are needed:

1. Replace /opt/shark/lib_managed/jars/org.apache.hadoop/hadoop-core/hadoop-core-1.0.4.jar with hadoop-core-0.20.2-cdh3u4.jar, as sketched below.
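A sketch of the swap, assuming the CDH jar can be taken from the local Hadoop installation under /usr/lib/hadoop (an assumption; adjust to your layout):

cd /opt/shark/lib_managed/jars/org.apache.hadoop/hadoop-core/
rm hadoop-core-1.0.4.jar
# Source path is an assumption; locate the jar in your Hadoop install.
cp /usr/lib/hadoop/hadoop-core-0.20.2-cdh3u4.jar .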

2. Make sure the following jars are present in the /opt/shark/lib directory:

-rw-r--r-- 1 root root   62230 04-03 17:21 hadoop-lzo-0.4.14.jar
-rw-r--r-- 1 root root  832960 04-03 16:29 mysql-connector-java-5.1.22-bin.jar
-rw-rw-r-- 1 root root   26083 04-05 01:08 slf4j-api-1.7.2.jar
-rw-rw-r-- 1 root root    8819 04-05 01:08 slf4j-log4j12-1.7.2.jar
-rw-r--r-- 1 root root 1218645 04-03 17:29 guava-r09-jarjar.jar

All of these jars can be copied from the existing Hadoop environment, for example as follows.
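A possible set of copy commands; every source directory here is an assumption, so use find if the jars sit elsewhere in your Hadoop and Hive installations:

# Source directories are assumptions; adjust as needed.
cp /usr/lib/hadoop/lib/hadoop-lzo-0.4.14.jar /opt/shark/lib/
cp /usr/lib/hive/lib/mysql-connector-java-5.1.22-bin.jar /opt/shark/lib/
cp /usr/lib/hadoop/lib/slf4j-api-1.7.2.jar /opt/shark/lib/
cp /usr/lib/hadoop/lib/slf4j-log4j12-1.7.2.jar /opt/shark/lib/
cp /usr/lib/hadoop/lib/guava-r09-jarjar.jar /opt/shark/lib/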

3. Deploy a JDK 7 runtime. Shark needs a JDK 7 environment to run; otherwise jar resolution will fail. The fix is simple: download and extract a JDK 7 tarball, then point JAVA_HOME at it in shark-env.sh, as sketched below.
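For example (the tarball name and version are placeholders for whichever JDK 7 build you download; the target path matches the JAVA_HOME used in shark-env.sh below):

# Archive name/version is a placeholder.
mkdir -p /usr/local/src
tar xzf jdk-7u51-linux-x64.tar.gz -C /usr/local/src
mv /usr/local/src/jdk1.7.0_51 /usr/local/src/jdk
# Then, in shark-env.sh:
#   export JAVA_HOME=/usr/local/src/jdk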


4. Copy hive-site.xml into /opt/shark/conf so that Shark can locate Hive's metadata and other configuration. Note that CDH3u4 ships the old Hive 0.7, while Shark's default Hive compatibility target is Hive 0.11, so here I pointed Shark at its own metastore database instead of sharing the existing Hive metastore, to avoid corrupting the Hive metadata. A sketch of this step follows.
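A sketch of the idea, assuming the Hive config lives under /usr/lib/hive/conf and a dedicated MySQL database named shark_metastore (both assumptions):

# Source path is an assumption.
cp /usr/lib/hive/conf/hive-site.xml /opt/shark/conf/
# Then edit /opt/shark/conf/hive-site.xml so that
# javax.jdo.option.ConnectionURL points at a dedicated database, e.g.:
#   jdbc:mysql://<metastore-host>:3306/shark_metastore?createDatabaseIfNotExist=true
# leaving the production Hive metastore untouched.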


5. Configure shark-env.sh:

export JAVA_HOME=/usr/local/src/jdk   # your JDK7 path
export SPARK_MEM=16g

# (Required) Set the master program's memory
export SHARK_MASTER_MEM=1g

# (Required) Point to your Scala installation.
export SCALA_HOME="/opt/scala"

# (Required) Point to the patched Hive binary distribution
#export HIVE_HOME="/opt/hive/build/dist"

# (Optional) Specify the location of Hive's configuration directory. By default,
# it points to $HIVE_HOME/conf
export HIVE_CONF_DIR="/opt/shark/conf"

# For running Shark in distributed mode, set the following:
export HADOOP_HOME="/usr/lib/hadoop"
export SPARK_HOME="/opt/spark-0.9.1-rc3"
export MASTER="zk://storm01:2181,monet00:2181,report:2181/mesos"

# Only required if using Mesos:
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so

# Only required if run shark with spark on yarn
#export SHARK_EXEC_MODE=yarn
#export SPARK_ASSEMBLY_JAR=
#export SHARK_ASSEMBLY_JAR=

# (Optional) Extra classpath
export SPARK_LIBRARY_PATH="/usr/local/lib:/usr/lib/hadoop/lib/native/Linux-amd64-64"
export SPARK_CLASSPATH="/usr/lib/hadoop"

# Java options
# On EC2, change the local.dir to /mnt/tmp
SPARK_JAVA_OPTS=" -Dspark.local.dir=/tmp "
SPARK_JAVA_OPTS+="-Dspark.kryoserializer.buffer.mb=10 "
SPARK_JAVA_OPTS+="-verbose:gc -XX:-PrintGCDetails -XX:+PrintGCTimeStamps "
export SPARK_JAVA_OPTS

# If the following two environment variables are already set on your system,
# unset them here so they do not leak into Shark's runtime environment;
# otherwise Shark will throw errors.
unset CLASSPATH
unset HIVE_HOME

Tar up the built shark directory and copy it to the same path on every node, just as with Spark above.


With that, the whole Spark + Shark stack is assembled on CDH3u4. In my tests, the first query against a table is relatively slow, but subsequent queries speed up because by then the data is cached in memory. Here are some execution snapshots:

shark> select count(1) from a_user_history;
39.024: [GC 285408K->28337K(1005568K), 0.1025650 secs]
78.176: [GC 290993K->21738K(1005568K), 0.0980660 secs]
OK
313758633
Time taken: 85.519 seconds
shark> select count(1) from a_user_history;
157.718: [GC 284394K->32173K(1005568K), 0.0856370 secs]
OK
313758633
Time taken: 11.685 seconds
shark> select count(1) from a_user_history;
220.919: [GC 294829K->22793K(1015808K), 0.1138120 secs]
OK
313758633
Time taken: 10.731 seconds
shark> select count(1) from a_user_history;
232.750: [GC 304380K->34649K(1014272K), 0.1254580 secs]
OK
313758633
Time taken: 9.11 seconds
shark> select count(1) from a_user_history;
247.060: [GC 316249K->21154K(1010688K), 0.0349480 secs]
OK
313758633
Time taken: 9.143 seconds
shark>
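Beyond this implicit speedup, Shark can also pin a table in memory explicitly: by convention, a table whose name ends in _cached is stored as an in-memory table. A hedged sketch reusing the table from the snapshots above (not from the original run, and behavior may vary across Shark versions):

shark> create table a_user_history_cached as select * from a_user_history;
shark> select count(1) from a_user_history_cached;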



