Hive Learning (2)


Hive Basics

hive shell

Run the hive command to enter the Hive shell.

Basic Hive shell settings

# set xxx: with a value, sets the property; without a value, shows the property's current value
# show column names when displaying query results
hive> set hive.cli.print.header=true;
hive> set hive.cli.print.header;
hive.cli.print.header=true
# show the current database in the prompt
hive> set hive.cli.print.current.db=true;
hive (default)>

HQL Data Types

Primitive data types:

TINYINT   1 byte
SMALLINT  2 bytes
INT       4 bytes
BIGINT    8 bytes
FLOAT     4 bytes
DOUBLE    8 bytes
BOOLEAN
STRING    up to 2 GB

Complex data types:

ARRAY: an ordered array of elements
MAP: unordered key-value pairs; keys must be a primitive type
STRUCT: a group of named fields

EXPLAIN: shows the execution plan of an HQL statement
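A minimal sketch of its usage, assuming a table such as the customers table created later in this post:

hive> explain select count(*) from customers;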

Table Types

Internal tables:
also known as managed tables
managed by Hive; when the Hive table is dropped, the data is deleted as well
External tables:
the data is not managed by Hive; when the Hive table is dropped, Hive does not delete the data

Basic Commands

Create a database

CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] DB_NAME
[COMMENT DATABASE_COMMENT]
[LOCATION HDFS_PATH]
[WITH DBPROPERTIES
(property_name=property_value, ...)]

hive> create database if not exists hive_test comment "this is the hive test database";
OK
Time taken: 4.527 seconds
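A sketch that also uses the optional LOCATION and WITH DBPROPERTIES clauses; the HDFS path and property values here are illustrative assumptions:

create database if not exists hive_test2
comment "a second test database"
location '/user/hive/warehouse/hive_test2.db'
with dbproperties ('creator'='hadoop');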

Describe a database

DESCRIBE (DATABASE|SCHEMA) [EXTENDED (also show extra information such as DBPROPERTIES)] db_name

hive (default)> describe database hive_test;
OK
db_name     comment                         location                                                             owner_name  owner_type  parameters
hive_test   this is the hive test database  hdfs://dev-hadoop-single.com:8020/user/hive/warehouse/hive_test.db  hadoop      USER
Time taken: 0.073 seconds, Fetched: 1 row(s)
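With EXTENDED, any DBPROPERTIES set on the database are shown as well; a sketch against the illustrative hive_test2 database created above:

hive (default)> describe database extended hive_test2;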

Drop a database

DROP (DATABASE|SCHEMA) [IF EXISTS] db_name [RESTRICT (the default: refuse to drop if the database still contains tables) | CASCADE (drop the contained tables as well)]

hive (hive_test)> drop database hive_test;
OK
Time taken: 0.62 seconds
hive (hive_test)> show databases;
OK
database_name
default
log_analysis
mydb
Time taken: 0.033 seconds, Fetched: 3 row(s)
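When the database still contains tables, the default RESTRICT behavior rejects the drop; adding CASCADE drops the contained tables first. A sketch, reusing the illustrative hive_test2 database:

drop database if exists hive_test2 cascade;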

Use a database

USE db_name

hive (default)> use hive_test;
OK
Time taken: 0.024 seconds
hive (hive_test)>

Create a table:

1. create table
2. create table ... as select (the new table is populated with the query result)
3. create table tablename like existing_tablename: copies the table structure but not the data

CREATE [EXTERNAL (external table)] TABLE [IF NOT EXISTS]
[db_name.]table_name
(col1_name col1_type [COMMENT col1_comment], ...)
[COMMENT table_comment]
[PARTITIONED BY (col_name col_type)] -- partition information
[CLUSTERED BY (col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS] -- bucketing information
[ROW FORMAT row_format] -- field delimiters and formatting information
[STORED AS file_format] -- storage/serialization format
[LOCATION hdfs_path]; -- HDFS path where the table data is stored

ROW FORMAT:
row_format : delimited fields terminated by '\001' collection items terminated by '\002' map keys terminated by '\003' lines terminated by '\n' NULL DEFINED AS '\N'
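A sketch of a table that sets explicit delimiters and the NULL marker (NULL DEFINED AS assumes Hive 0.13 or later; the table name is illustrative):

create table null_demo(id int, name string)
row format delimited
  fields terminated by '\t'
  null defined as '\N'
stored as textfile;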

STORED AS:
file_format : sequencefile, textfile (default), rcfile, orc, avro, ...
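For example, a sketch of a table stored in the ORC format (available since Hive 0.11; the table name is illustrative):

create table customers_orc(id int, name string, phone string)
stored as orc;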

CREATE [EXTERNAL] TABLE [IF NOT EXISTS]
[db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];

hive> create table test_manager(id int);
hive> create external table test_external(id int);
hive> create table test_location(id int) location '/test_location';
When a Hive table is dropped, a managed (internal) table's data is deleted, while an external table's data is never deleted.
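The difference can be checked from HDFS after dropping the tables above; a sketch, where the expected behavior is noted in the comments:

hive> drop table test_location;   -- managed table: Hive removes /test_location
hive> drop table test_external;   -- external table: the underlying data is left in place

$ hadoop fs -ls /test_location    -- the directory is gone after the drop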

create table customers(id int, name string, phone string)
row format delimited fields terminated by ','
location '/user/hadoop/data';

create table customers2 like customers;

create table customers3 as select * from customers;

Complex data types example

create table complex_table_text(
  id int,
  name string,
  flag boolean,
  score array<int>,
  tech map<string, string>,
  other struct<phone:string, email:string>)
row format delimited
  fields terminated by '\;'
  collection items terminated by ','
  map keys terminated by ':'
location '/user/hadoop/data1/';
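A sketch of a data line matching the delimiters above and of queries that access the complex columns; the sample values are assumptions:

-- one line in /user/hadoop/data1/ might look like:
-- 1;tom;true;80,90,100;java:expert,sql:basic;13800000000,tom@example.com

select score[0],        -- array element by index
       tech['java'],    -- map value by key
       other.phone      -- struct field by name
from complex_table_text;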