Spark SQL Parameter Configuration
Reposted from: http://www.cnblogs.com/wwxbi/p/6114410.html
To view the SQL parameter configuration of the current environment, run:
spark.sql("SET -v")
| key | value |
| --- | --- |
| spark.sql.hive.version | 1.2.1 |
| spark.sql.sources.parallelPartitionDiscovery.threshold | 32 |
| spark.sql.hive.metastore.barrierPrefixes | |
| spark.sql.shuffle.partitions | 200 |
| spark.sql.hive.metastorePartitionPruning | FALSE |
| spark.sql.broadcastTimeout | 300 |
| spark.sql.sources.bucketing.enabled | TRUE |
| spark.sql.parquet.filterPushdown | TRUE |
| spark.sql.statistics.fallBackToHdfs | FALSE |
| spark.sql.adaptive.enabled | FALSE |
| spark.sql.parquet.cacheMetadata | TRUE |
| spark.sql.hive.metastore.sharedPrefixes | com.mysql.jdbc |
| spark.sql.parquet.respectSummaryFiles | FALSE |
| spark.sql.warehouse.dir | hdfs:///user/spark/warehouse |
| spark.sql.orderByOrdinal | TRUE |
| spark.sql.hive.convertMetastoreParquet | TRUE |
| spark.sql.groupByOrdinal | TRUE |
| spark.sql.hive.thriftServer.async | TRUE |
| spark.sql.thriftserver.scheduler.pool | \<undefined\> |
| spark.sql.orc.filterPushdown | FALSE |
| spark.sql.adaptive.shuffle.targetPostShuffleInputSize | 67108864b |
| spark.sql.sources.default | parquet |
| spark.sql.parquet.compression.codec | snappy |
| spark.sql.hive.metastore.version | 1.2.1 |
| spark.sql.sources.partitionDiscovery.enabled | TRUE |
| spark.sql.crossJoin.enabled | FALSE |
| spark.sql.parquet.writeLegacyFormat | FALSE |
| spark.sql.hive.verifyPartitionPath | FALSE |
| spark.sql.variable.substitute | TRUE |
| spark.sql.thriftserver.ui.retainedStatements | 200 |
| spark.sql.hive.convertMetastoreParquet.mergeSchema | FALSE |
| spark.sql.parquet.enableVectorizedReader | TRUE |
| spark.sql.parquet.mergeSchema | FALSE |
| spark.sql.parquet.binaryAsString | FALSE |
| spark.sql.columnNameOfCorruptRecord | _corrupt_record |
| spark.sql.files.maxPartitionBytes | 134217728 |
| spark.sql.streaming.checkpointLocation | \<undefined\> |
| spark.sql.variable.substitute.depth | 40 |
| spark.sql.parquet.int96AsTimestamp | TRUE |
| spark.sql.autoBroadcastJoinThreshold | 10485760 |
| spark.sql.pivotMaxValues | 10000 |
| spark.sql.sources.partitionColumnTypeInference.enabled | TRUE |
| spark.sql.hive.metastore.jars | builtin |
| spark.sql.thriftserver.ui.retainedSessions | 200 |
| spark.sql.sources.maxConcurrentWrites | 1 |
| spark.sql.parquet.output.committer.class | org.apache.parquet.hadoop.ParquetOutputCommitter |