Fixing the Snappy Error in HBase
Error description
Recently, while using the hbase shell, the following error appeared:
ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: Compression algorithm 'snappy' previously failed test.
Official explanation
According to the "Install Snappy Support" section of the official HBase documentation:
HBase does not ship with Snappy support because of licensing issues. You can install Snappy binaries (for instance, by using yum install snappy on CentOS) or build Snappy from source. After installing Snappy, search for the shared library, which will be called libsnappy.so.X where X is a number. If you built from source, copy the shared library to a known location on your system, such as /opt/snappy/lib/.
In addition to the Snappy library, HBase also needs access to the Hadoop shared library, which will be called something like libhadoop.so.X.Y, where X and Y are both numbers. Make note of the location of the Hadoop library, or copy it to the same location as the Snappy library.
Each of these library locations need to be added to the environment variable HBASE_LIBRARY_PATH for the operating system user that runs HBase. You need to restart the RegionServer for the changes to take effect.
In short, HBase does not ship the Snappy libraries because of licensing issues, so the user must set the HBASE_LIBRARY_PATH environment variable manually to point to the location of the Snappy library.
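One common way to make this setting persistent (a sketch only; the paths below are assumptions and depend on where the native libraries actually live on your machine) is to export the variable in conf/hbase-env.sh:

```shell
# conf/hbase-env.sh -- hypothetical paths; adjust to your installation.
# Point HBase at the directory holding libsnappy.so, and optionally also
# the directory holding Hadoop's libhadoop.so, separated by a colon.
export HBASE_LIBRARY_PATH=/opt/snappy/lib:/opt/hadoop/lib/native
```

HBase reads hbase-env.sh on startup, so the variable is then set for every daemon without having to export it in each login shell.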
Solution
Test HBase's Snappy support (with HBASE_LIBRARY_PATH not yet set):
hbase org.apache.hadoop.hbase.util.CompressionTest file:///home/asin/Temp/test.txt snappy
This asks HBase to compress the local file /home/asin/Temp/test.txt with Snappy. It fails with the following exception:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:132)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:148)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:163)
    at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.getCompressor(Compression.java:303)
    at org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext.<init>(HFileBlockDefaultEncodingContext.java:90)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:849)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:124)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:116)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.<init>(HFileWriterV3.java:67)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV3$WriterFactoryV3.createWriter(HFileWriterV3.java:59)
    at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:298)
    at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:124)
    at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:160)
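The UnsatisfiedLinkError means the JVM could not find the native Snappy code. Before touching HBase, it can help to confirm what the Hadoop side sees: Hadoop ships a checknative command that reports whether the native hadoop and snappy libraries can be loaded. This is a diagnostic sketch, not part of the original post:

```shell
# List which native libraries Hadoop can load.
# A line like "snappy: false" would explain the UnsatisfiedLinkError above;
# after the fix it should report "snappy: true" with the library path.
hadoop checknative -a
```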
Set the environment variable:
export HBASE_LIBRARY_PATH=/home/asin/SoftWare/hbase-1.1.4/lib/Linux-amd64-64
Note: the Linux-amd64-64 directory contains libsnappy.so and related native libraries; in this case it was copied from a CDH installation. Restart the HBase RegionServer and run the test again (with the environment variable now set):
SUCCESS
Note: placing the Linux-amd64-64 directory under the lib directory of the HBase installation has the same effect as setting the environment variable.
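Once CompressionTest passes, an end-to-end check is to create a table that actually uses Snappy compression. The table and column-family names below are made up for illustration; the command is hbase shell's JRuby syntax, piped in from the command line:

```shell
# Create a table whose column family stores its HFiles Snappy-compressed,
# then print its schema; COMPRESSION => 'SNAPPY' should appear in the output.
echo "create 'snappy_test', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
describe 'snappy_test'" | hbase shell
```

If the native library were still missing, the create itself would fail with the same "previously failed test" error seen in the hbase shell earlier, so a successful create confirms the fix.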