Building Cubes with Spark in Kylin 2.0

Kylin 2.0 introduces a Spark engine for cube building, so Spark can be used in place of MapReduce (MR) when building cubes.

Version pairings:

Kylin 2.0 + Spark 1.6
Kylin 2.1 + Spark 2.1.1
Kylin 2.0.0 + HBase 1.x

Underlying dependencies: HDP 2.4, with Hive, HBase, and YARN.

1. Modify the Hadoop configuration

Set the Hadoop configuration path in kylin.properties (note: create a new directory and link or copy the config files of the underlying Hadoop, Hive, HBase, etc. into it):

kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.0.0-bin/hadoop-conf

The directory needs to contain core-site.xml, hdfs-site.xml, yarn-site.xml, hive-site.xml, and hbase-site.xml:

mkdir $KYLIN_HOME/hadoop-conf
ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml
ln -s /etc/hadoop/conf/hdfs-site.xml $KYLIN_HOME/hadoop-conf/hdfs-site.xml
ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml
ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml
cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml
vi $KYLIN_HOME/hadoop-conf/hive-site.xml   # change "hive.execution.engine" from "tez" to "mr"
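Before moving on, it is worth confirming that all five files are actually in place; a broken symlink is easy to miss. A minimal sanity check (assuming the paths above; adjust to your layout):

# Check that every expected config file exists and resolves to a real file
for f in core-site.xml hdfs-site.xml yarn-site.xml hbase-site.xml hive-site.xml; do
  if [ -e "$KYLIN_HOME/hadoop-conf/$f" ]; then
    echo "OK: $f"
  else
    echo "MISSING: $f"
  fi
done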

2. Check the Spark configuration

At runtime, Kylin loads its Spark settings from the kylin.engine.spark-conf.* properties in $KYLIN_HOME/conf/kylin.properties, including:
kylin.engine.spark-conf.spark.master=yarn
kylin.engine.spark-conf.spark.submit.deployMode=cluster
kylin.engine.spark-conf.spark.yarn.queue=default
kylin.engine.spark-conf.spark.executor.memory=1G
kylin.engine.spark-conf.spark.executor.cores=2
kylin.engine.spark-conf.spark.executor.instances=1
kylin.engine.spark-conf.spark.eventLog.enabled=true
kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
#kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
## uncomment for HDP
#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
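A quick way to see exactly which Spark settings Kylin will pick up is to grep the properties file (a simple optional check):

grep "^kylin.engine.spark-conf" $KYLIN_HOME/conf/kylin.properties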

Upload the Spark assembly jar to HDFS:
hadoop fs -mkdir -p /kylin/spark/
hadoop fs -put $KYLIN_HOME/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar /kylin/spark/
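You can confirm the upload succeeded before referencing the jar in kylin.properties:

hadoop fs -ls /kylin/spark/
# the listing should include spark-assembly-1.6.3-hadoop2.6.0.jar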
Once the jar is in HDFS, uncomment and set the Spark engine options mentioned above:
kylin.engine.spark-conf.spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
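Kylin reads kylin.properties at startup, so restart the instance after editing, using the kylin.sh script shipped in the binary package:

$KYLIN_HOME/bin/kylin.sh stop
$KYLIN_HOME/bin/kylin.sh start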
When creating a cube, you can now select Spark as the Cube Engine.
At this point the configuration is complete, and cubes can be built with the Spark engine.
For troubleshooting, start by looking in $KYLIN_HOME/logs/kylin.log.
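If kylin.log alone is not enough: with spark.submit.deployMode=cluster the build runs as a YARN application, so executor logs can be pulled with the standard YARN CLI. The application ID appears in kylin.log and in the YARN ResourceManager UI; <application_id> below is a placeholder:

yarn logs -applicationId <application_id>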
Note:
Use MR when the cube contains more than 12 dimensions, or has measures such as COUNT DISTINCT or TOP N.
Use Spark when the cube model is relatively simple: all measures are just SUM/MIN/MAX/COUNT and the source data is of moderate size.

