Enabling update and delete on Hive tables


Reposted from: http://blog.csdn.net/suijiarui/article/details/51174406


A previous post covered installing Hive. Out of the box, Hive handles table creation and queries, but running an update statement failed with the error below.

update student set name='zhangsan' where id=3;
FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.
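Error 10294 means the active transaction manager does not support ACID operations. As a quick check (not part of the original post), you can print the current transaction manager from the Hive CLI; an unconfigured install typically reports org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager:

set hive.txn.manager;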
After some digging, the following setup got update and delete working.

1. Add the following properties to hive-site.xml.

<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.enforce.bucketing</name>
  <value>true</value>
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
<property>
  <name>hive.in.test</name>
  <value>true</value>
</property>
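If editing hive-site.xml is inconvenient, the client-side properties can also be tried per session from the Hive CLI. This is only a sketch for quick experiments: the compactor properties belong to the metastore/server side and generally still need to go into hive-site.xml followed by a restart, and behavior varies by Hive version.

set hive.support.concurrency=true;
set hive.enforce.bucketing=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;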

2. Restart the Hive service so the new properties take effect.

3. Create the table with the following statement. Note that it is bucketed, stored as ORC, and marked 'transactional'='true', which is what allows update and delete on it.

create table student(
  id int,
  name string,
  sex varchar(2),
  birthday varchar(10),
  major varchar(1)
)
clustered by (id) into 2 buckets
stored as orc
tblproperties('transactional'='true');
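The original post does not show how the table was populated. As a hypothetical example with made-up sample rows, Hive 0.14+ supports INSERT ... VALUES, so a few records can be loaded before testing update and delete:

insert into table student values
  (1, 'lisi',    'm', '1995-01-01', 'a'),
  (2, 'wangwu',  'f', '1996-02-02', 'b'),
  (3, 'zhaoliu', 'm', '1997-03-03', 'c');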

4. Test the update and delete statements.

hive> update student set name='zhangsan' where id=3;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. Spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20160417181258_c96e1b21-3832-4127-af01-62e0910fd9f3
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1460881375700_0012, Tracking URL = http://master:8088/proxy/application_1460881375700_0012/
Kill Command = /home/Hadoop/bigdata/hadoop-2.7.1/bin/hadoop job  -kill job_1460881375700_0012
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 2
2016-04-17 18:13:05,954 Stage-1 map = 0%,  reduce = 0%
2016-04-17 18:13:12,556 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.55 sec
2016-04-17 18:13:13,613 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.17 sec
2016-04-17 18:13:17,977 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 4.38 sec
2016-04-17 18:13:20,079 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.38 sec
MapReduce Total cumulative CPU time: 6 seconds 380 msec
Ended Job = job_1460881375700_0012
Loading data to table default.student
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 2   Cumulative CPU: 6.38 sec   HDFS Read: 30517 HDFS Write: 1022 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 380 msec
OK
Time taken: 24.301 seconds
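The output above only exercises update; a delete on the same table follows the same pattern (a sketch, assuming the student table created above), and the result can be verified with a select:

delete from student where id=3;
select * from student;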
