Druid: supplying a default timestamp field automatically

Source: Internet | Editor: 程序博客网 | Time: 2024/06/15 10:26

Key points
1. Supplying a default timestamp value in Druid
2. Problems caused by unsynchronized clocks across the cluster

When we load a CSV file into Druid, what should we do if the file has no timestamp field and we want to fall back to a default value?

If we configure timestampSpec with format set to auto, but the timestamp column is empty, we get the following error:

2017-08-22T07:57:34,971 INFO [main] org.apache.hadoop.mapreduce.Job - Task Id : attempt_1502795833184_0005_m_000008_0, Status : FAILED
Error: io.druid.java.util.common.RE: Failure on row
        at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:91)
        at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityMapper.run(DetermineHashedPartitionsJob.java:285)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: io.druid.java.util.common.parsers.ParseException: Unparseable timestamp found!
        at io.druid.data.input.impl.MapInputRowParser.parse(MapInputRowParser.java:75)
        at io.druid.data.input.impl.StringInputRowParser.parseMap(StringInputRowParser.java:139)
        at io.druid.data.input.impl.StringInputRowParser.parse(StringInputRowParser.java:134)
        at io.druid.indexer.HadoopDruidIndexerMapper.parseInputRow(HadoopDruidIndexerMapper.java:101)
        at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:72)
        ... 8 more
Caused by: java.lang.NullPointerException: Null timestamp in input:
        at io.druid.data.input.impl.MapInputRowParser.parse(MapInputRowParser.java:67)

If we omit the timestampSpec parameter entirely, it immediately reports that timestampSpec is missing:

2017-08-22T07:59:30,154 ERROR [main] io.druid.cli.CliHadoopIndexer - failure!!!!
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_101]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_101]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_101]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101]
        at io.druid.cli.CliHadoopIndexer.run(CliHadoopIndexer.java:117) [druid-services-0.10.0.jar:0.10.0]
        at io.druid.cli.Main.main(Main.java:108) [druid-services-0.10.0.jar:0.10.0]
Caused by: java.lang.NullPointerException: timestampSpec
        at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:229) ~[guava-16.0.1.jar:?]
        at io.druid.indexer.HadoopDruidIndexerConfig.verify(HadoopDruidIndexerConfig.java:588) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.HadoopDruidIndexerJob.<init>(HadoopDruidIndexerJob.java:47) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.cli.CliInternalHadoopIndexer.run(CliInternalHadoopIndexer.java:130) ~[druid-services-0.10.0.jar:0.10.0]
        at io.druid.cli.Main.main(Main.java:108) ~[druid-services-0.10.0.jar:0.10.0]

Solution: set missingValue in the timestampSpec.

"timestampSpec" : {
    "column" : "timestamp",
    "format" : "auto",
    "missingValue" : "2017-08-22"   // pick whatever default value/format you need
}
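For context, the timestampSpec lives inside the parser's parseSpec in the ingestion spec. A sketch of the surrounding structure for a CSV parser (the column and dimension names here are placeholders, not from the original job):

```json
"parser" : {
  "type" : "string",
  "parseSpec" : {
    "format" : "csv",
    "columns" : ["timestamp", "dim1", "dim2", "metric1"],
    "dimensionsSpec" : { "dimensions" : ["dim1", "dim2"] },
    "timestampSpec" : {
      "column" : "timestamp",
      "format" : "auto",
      "missingValue" : "2017-08-22"
    }
  }
}
```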

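If you would rather fix the data than the spec, another option is to preprocess the CSV and fill in the missing column before ingestion. A minimal sketch, assuming a header row; the column name and default value are illustrative:

```python
import csv
import io

DEFAULT_TS = "2017-08-22"  # assumed default; any value Druid's "auto" format can parse

def add_default_timestamp(src, dst, ts_column="timestamp", default=DEFAULT_TS):
    """Copy a CSV, adding a timestamp column (or filling empty cells) with a default."""
    reader = csv.DictReader(src)
    fields = list(reader.fieldnames or [])
    if ts_column not in fields:
        fields.insert(0, ts_column)          # put the new column first
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        if not row.get(ts_column):           # missing column or empty cell
            row[ts_column] = default
        writer.writerow(row)

# usage: a file with no timestamp column at all
src = io.StringIO("name,value\na,1\nb,2\n")
dst = io.StringIO()
add_default_timestamp(src, dst)
print(dst.getvalue())
```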
When loading, the file's timestamps really do fall inside the ingestion interval we submitted, so why do we keep getting the error "No buckets?? seems there is no data to index"?
Cause: the clocks on the cluster machines are not synchronized.

java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_101]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_101]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_101]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101]
        at io.druid.cli.CliHadoopIndexer.run(CliHadoopIndexer.java:117) [druid-services-0.10.0.jar:0.10.0]
        at io.druid.cli.Main.main(Main.java:108) [druid-services-0.10.0.jar:0.10.0]
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: No buckets?? seems there is no data to index.
        at io.druid.indexer.IndexGeneratorJob.run(IndexGeneratorJob.java:215) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.cli.CliInternalHadoopIndexer.run(CliInternalHadoopIndexer.java:131) ~[druid-services-0.10.0.jar:0.10.0]
        at io.druid.cli.Main.main(Main.java:108) ~[druid-services-0.10.0.jar:0.10.0]
        ... 6 more
Caused by: java.lang.RuntimeException: No buckets?? seems there is no data to index.
        at io.druid.indexer.IndexGeneratorJob.run(IndexGeneratorJob.java:176) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
        at io.druid.cli.CliInternalHadoopIndexer.run(CliInternalHadoopIndexer.java:131) ~[druid-services-0.10.0.jar:0.10.0]
        at io.druid.cli.Main.main(Main.java:108) ~[druid-services-0.10.0.jar:0.10.0]
        ... 6 more
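The "No buckets" failure simply means that no parsed row landed inside any interval of the granularitySpec. The effect of a time offset (from clock skew or a wrong time zone) can be sketched like this; the interval and row timestamps are purely illustrative:

```python
from datetime import datetime, timedelta

# the granularitySpec interval the job was submitted with (illustrative)
interval_start = datetime(2017, 8, 22, 0, 0)
interval_end = datetime(2017, 8, 23, 0, 0)

# row timestamps as parsed from the file (illustrative)
rows = [datetime(2017, 8, 22, h) for h in (1, 5, 9)]

def rows_in_interval(rows, offset=timedelta(0)):
    """Count rows whose (offset) timestamp falls inside the interval."""
    return sum(interval_start <= t + offset < interval_end for t in rows)

print(rows_in_interval(rows))                          # all 3 rows -> buckets exist
print(rows_in_interval(rows, offset=timedelta(days=2)))  # 0 rows -> "No buckets"
```

With a large enough offset, every row is shifted outside the interval and the index-generator job finds nothing to build, which is exactly the error above.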

Cause analysis
The GNU documentation for the TZ environment variable says that if TZ is unset, a default time zone is chosen, and the concrete path of the zone file is determined by libc.so.6. Before the upgrade, CentOS's default time zone file was /etc/localtime. But when I compiled a new glibc with --prefix=/usr/local/glibc-2.14, the default path became /usr/local/glibc-2.14/etc/localtime, so the default time zone could no longer be found.
Solution
ln -sf /etc/localtime /usr/local/glibc-2.14/etc/localtime
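To sanity-check time zone resolution after creating the symlink, TZ can be forced explicitly: when TZ is set, glibc uses it and skips the default localtime path entirely (a quick check, assuming GNU date):

```shell
# Force an explicit zone: glibc uses TZ and ignores the default localtime path.
TZ=UTC date +%Z      # prints UTC regardless of /etc/localtime

# Without TZ, the zone comes from the compiled-in default path
# (normally /etc/localtime on CentOS), which is what the symlink restores.
date +%Z
```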
