hbase-scala
Source: Internet · Editor: 程序博客网 · Date: 2024/06/08 15:24
1: Create a table named HELLO with column families id, address, and info:
>create 'HELLO','id','address','info'

2: Insert data:
>put 'HELLO','10','info:age','24'
>put 'HELLO','12','info:age','24'
>put 'HELLO','12','info:city','shanghai'
......

3: Delete a column family (the table must be disabled first):
>disable 'HELLO'
>is_enabled 'HELLO'
>alter 'HELLO',{NAME=>'id',METHOD=>'delete'}
>enable 'HELLO'
Drop a table:
>disable 'HELLO'
>drop 'HELLO'
>exists 'HELLO'    (check whether the table still exists)
4: Get data (a whole row, one column family, or a single column):
>get 'HELLO','10'
>get 'HELLO','10','info'
>get 'HELLO','10','info:age'

5: Count the rows in the table:
>count 'HELLO'

6: Delete an entire row:
>deleteall 'HELLO','10'

7: Increment a counter column (it is created if absent):
>incr 'HELLO','10','info:city'
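Note that `incr` treats the cell as an atomic counter: HBase stores the value as an 8-byte big-endian long (the layout produced by `Bytes.toBytes(long)`), not as text, so incrementing a cell previously written with `put` as a string fails with a "not a long" error. A stdlib-only sketch of that encoding (the helper names `longToBytes`/`bytesToLong` are ours, not HBase's):

```scala
import java.nio.ByteBuffer

// HBase counters are 8-byte big-endian longs; ByteBuffer's default
// byte order is big-endian, so this mirrors Bytes.toBytes(long).
def longToBytes(v: Long): Array[Byte] =
  ByteBuffer.allocate(8).putLong(v).array()

def bytesToLong(b: Array[Byte]): Long =
  ByteBuffer.wrap(b).getLong()

val encoded = longToBytes(1L)
println(encoded.length)        // 8
println(bytesToLong(encoded))  // 1
```

This is why a cell holding the string "24" (two bytes) cannot be used by `incr`: the stored value is not 8 bytes wide.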
8: Scan the data in the table:
>scan 'HELLO'
ROW   COLUMN+CELL
 10   column=info:age, timestamp=1513232005994, value=24
 12   column=address:city, timestamp=1513143006267, value=hefei
 12   column=address:contry, timestamp=1513142953004, value=china
 12   column=address:province, timestamp=1513142991236, value=anhui
 12   column=info:age, timestamp=1513142508015, value=24
 12   column=info:birthday, timestamp=1513142882018, value=2017
 12   column=info:company, timestamp=1513142919588, value=hpe

9: Reading HBase data from Scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.util.Bytes

// Build the configuration and open a connection
val hbaseConf = HBaseConfiguration.create()
hbaseConf.addResource(new Path("/etc/alternatives/hbase-conf/hbase-site.xml"))
val conn = ConnectionFactory.createConnection(hbaseConf)
val hbaseTableName = "HELLO"
val table = conn.getTable(TableName.valueOf(hbaseTableName))

// Change row '10': set info:age to 25
val p = new Put(Bytes.toBytes("10"))
p.addColumn(Bytes.toBytes("info"), Bytes.toBytes("age"), Bytes.toBytes("25"))
table.put(p)

// Read a single row
val g = new Get(Bytes.toBytes("10"))
val result = table.get(g)
val value = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("age")))

// Scan the info:age column across rows
val scan = new Scan()
scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("age"))
val scanner = table.getScanner(scan)
val r = scanner.next()       // row '10'
Bytes.toString(r.value)      // returns: String = 25
val r1 = scanner.next()      // row '12'
Bytes.toString(r1.value)     // returns: String = 24
scanner.close()

// Delete the info:age cell of row '10'
val d = new Delete(Bytes.toBytes("10"))
d.addColumn(Bytes.toBytes("info"), Bytes.toBytes("age"))
table.delete(d)

// Release resources
table.close()
conn.close()
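`Connection`, `Table`, and `ResultScanner` are all `Closeable`, and leaking them keeps ZooKeeper sessions and RPC resources alive. A small loan-pattern helper guarantees `close()` even when the body throws; the names `using`/`FakeTable` below are ours (a sketch with a stand-in `Closeable`, not the HBase API):

```scala
import java.io.Closeable

// Loan pattern: run f with the resource, always close it afterwards.
def using[A <: Closeable, B](res: A)(f: A => B): B =
  try f(res) finally res.close()

// Stand-in for an HBase Table, just to demonstrate the pattern.
class FakeTable extends Closeable {
  var closed = false
  def get(row: String): String = s"value-for-$row"
  override def close(): Unit = closed = true
}

val t = new FakeTable
val v = using(t)(_.get("10"))
println(v)        // value-for-10
println(t.closed) // true
```

In real code the same helper wraps `conn.getTable(...)` and `table.getScanner(...)`, so a failed `get` or an early return cannot leak the scanner or the connection.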