HBase: file-read failures caused by a memory leak

2016-12-28 04:04:58,586 INFO  [RS_OPEN_REGION-slave1:16020-2] regionserver.HRegion: Replaying edits from hdfs://master.hbase:9000/hbase/data/ns_GNSS/GPS_His/bacfb351e37f02bf45dbe08f50ec2536/recovered.edits/0000000000000013912
2016-12-28 04:04:58,653 WARN  [StoreFileOpenerThread-i-1] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.IOException: Got error for OP_READ_BLOCK, self=ip……57775, remote=ip2……:50010, for file /hbase/data/ns_GNSS/GPS_His/5359c1c1b18bd1ac3239950cdad9b337/i/a6f617935976477d8d8e83165083574b, for pool BP-157146195-……-1482714621337 block 1073742363_1542
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:787)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:666)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:326)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:570)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:793)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:391)
    at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:462)
    at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:505)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1066)
    at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:251)
    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:394)
    at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:495)
    at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:661)
    at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:129)
    at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:531)
    at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:528)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


A Baidu search turned up similar reports from other users: this error is usually caused by garbage collection taking too long, essentially a JVM near-full-heap problem, presumably because a long stop-the-world pause stalls the process until remote block reads fail with I/O errors.
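One way to confirm long GC pauses (not shown in the original post) is to sample the suspect JVM's GC counters with jstat; the PID below is a placeholder:

    # Sample GC counters of the suspect process once per second (12345 is a hypothetical PID)
    jstat -gcutil 12345 1000
    # On Java 7 the columns include O (old-gen occupancy %) and FGC/FGCT (full-GC
    # count and total time): an old gen pinned near 100% with FGCT steadily climbing
    # means the heap is effectively full and collections are stalling the process.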

The steps taken next:

1. What the parameters mean
-vmargs -Xms128M -Xmx512M -XX:PermSize=64M -XX:MaxPermSize=128M
-vmargs marks the start of the VM arguments; everything after it is passed to the JVM
-Xms128M the heap size the JVM allocates initially
-Xmx512M the maximum heap size the JVM may allocate, grown on demand
-XX:PermSize=64M the non-heap (permanent generation) memory allocated initially
-XX:MaxPermSize=128M the maximum non-heap (permanent generation) memory the JVM may allocate, grown on demand
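As a quick sanity check (my addition, not from the original post), a Java 7 JVM can print the final values it derives from these flags:

    # Dump the effective heap settings computed from the flags above (Java 7,
    # where the PermGen flags still exist)
    java -Xms128M -Xmx512M -XX:PermSize=64M -XX:MaxPermSize=128M \
        -XX:+PrintFlagsFinal -version | grep -E 'InitialHeapSize|MaxHeapSize|PermSize'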


2. Increase the allocatable heap size:
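The original post showed this change as a screenshot. Since the failing process is an HBase RegionServer, the edit was presumably made in conf/hbase-env.sh; a minimal sketch with example sizes (the actual values used are not in the post):

    # conf/hbase-env.sh -- example sizes only, tune to the machine's RAM
    export HBASE_HEAPSIZE=4G
    # Or set the RegionServer heap explicitly:
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms4g -Xmx4g"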


Restart to apply the change.

A detailed explanation of heap configuration can be found here:
http://www.cnblogs.com/mingforyou/archive/2012/03/03/2378143.html

After the restart everything was back to normal. Hope this helps.


