HBase pitfalls: creating pre-split tables with the Java API from IntelliJ IDEA


This cluster is based on:

VMware Workstation 12 Pro
SecureCRT 7.3
Xftp 5
CentOS-7-x86_64-Everything-1611.iso
hadoop-2.8.0.tar.gz
jdk-8u121-linux-x64.tar.gz

Below are the problems I ran into while using IntelliJ IDEA to call the Java API to create a pre-split table, written down here for future reference.

1. pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.hbase</groupId>
    <artifactId>HbaseOperation</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>1.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.1.2</version>
        </dependency>
    </dependencies>
</project>

I created a new package and a few classes. Everything looked fine, but as soon as I ran it, an exception was thrown:

[main] WARN org.apache.hadoop.hbase.util.DynamicClassLoader - Failed to identify the fs of dir hdfs://192.168.195.131:9000/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2798)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2809)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at neu.HBaseHelper.<init>(HBaseHelper.java:30)
    at neu.HBaseHelper.getHelper(HBaseHelper.java:35)
    at neu.HBaseOprations.main(HBaseOprations.java:22)

The cause is that no FileSystem implementation could be found for the hdfs scheme.

But the hadoop-common jar does contain the FileSystem service file (META-INF/services/org.apache.hadoop.fs.FileSystem):

(screenshot: contents of the hadoop-common jar)

The corresponding file looks like this:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
org.apache.hadoop.fs.LocalFileSystem
org.apache.hadoop.fs.viewfs.ViewFileSystem
org.apache.hadoop.fs.ftp.FTPFileSystem
org.apache.hadoop.fs.HarFileSystem

It turns out that the FileSystem implementation actually needed here lives in another artifact, org.apache.hadoop:hadoop-hdfs:2.8.0. After adding it, the exception went away:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.8.0</version>
</dependency>
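If the same error ever comes back even with the dependency present (for example when a shaded/fat jar merges the two service files so that only one survives), a commonly used workaround is to pin the implementation classes explicitly in the client Configuration instead of relying on the ServiceLoader entries. This is only a minimal sketch, assuming the hadoop-hdfs jar is still on the runtime classpath; the class name HdfsSchemeFix is made up for illustration:

package neu;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical helper: maps the "hdfs" and "file" schemes directly to their
// implementation classes, so FileSystem.getFileSystemClass() does not depend
// on the META-INF/services entries being intact.
public class HdfsSchemeFix {
    public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // "fs.<scheme>.impl" is consulted before the ServiceLoader lookup.
        conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
        return conf;
    }
}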

(screenshot: jar structure after adding the hadoop-hdfs dependency)
Looking at the jar structure, adding this dependency also pulls in org.apache.hadoop:hadoop-hdfs-client:2.8.0, whose FileSystem service file looks like this:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
org.apache.hadoop.hdfs.DistributedFileSystem
org.apache.hadoop.hdfs.web.WebHdfsFileSystem
org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
org.apache.hadoop.hdfs.web.HftpFileSystem
org.apache.hadoop.hdfs.web.HsftpFileSystem

Comparing the two jars, their FileSystem service files register different implementations: hadoop-common only registers the local, viewfs, ftp and har file systems, while hadoop-hdfs-client is the one that registers DistributedFileSystem for the hdfs scheme.
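To see which FileSystem implementations are actually visible on the client classpath, a quick diagnostic can iterate over the same ServiceLoader entries that Hadoop itself uses. A minimal sketch (the class name ListFileSystems is made up for illustration):

package neu;

import java.util.ServiceLoader;

import org.apache.hadoop.fs.FileSystem;

// Hypothetical diagnostic: prints every FileSystem implementation registered
// via META-INF/services on the current classpath. If hadoop-hdfs(-client) is
// missing, DistributedFileSystem will not appear in the output.
public class ListFileSystems {
    public static void main(String[] args) {
        for (FileSystem fs : ServiceLoader.load(FileSystem.class)) {
            System.out.println(fs.getClass().getName());
        }
    }
}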

2. A new exception

11785 [main] WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Unable to create ZooKeeper Connection
java.net.UnknownHostException: master

The hostname master cannot be resolved. Checking the hbase-site.xml configuration file shows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://192.168.195.131:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
    </property>
    <property>
        <name>hbase.master.info.bindAddress</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>16010</value>
    </property>
    <property>
        <name>hbase.master.port</name>
        <value>16000</value>
    </property>
</configuration>

Windows of course cannot resolve these hostnames. There are two ways to fix this:
1). Edit the Windows hosts file C:\Windows\System32\drivers\etc\hosts and add the corresponding IP addresses and hostnames:

192.168.195.131 master
192.168.195.132 slave1
192.168.195.133 slave2

2). Replace master, slave1 and slave2 in the configuration file with the corresponding IP addresses.

When using the configuration file on the client side, I strongly recommend:
a. replacing hostnames with IP addresses;
b. replacing 0.0.0.0 with the actual host IP address (the same values can also be set in code, as in the sketch below).
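Setting the ZooKeeper quorum programmatically on the client Configuration is an alternative that avoids relying on hostname resolution on the Windows machine altogether. A minimal sketch, using this cluster's IP addresses (the class name ClientConf is made up for illustration):

package neu;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: build a client Configuration that talks to ZooKeeper by IP address,
// so the client does not need hosts-file entries for master/slave1/slave2.
public class ClientConf {
    public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                "192.168.195.131,192.168.195.132,192.168.195.133");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        return conf;
    }
}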

3. Complete program code and structure:

(screenshot: project structure in IntelliJ IDEA)

Monitoring page:
(screenshot: HBase master web UI)

HBaseHelper.java

package neu;

/**
 * Created by root on 2017/5/15.
 */
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Used by the book examples to generate tables and fill them with test data.
 */
public class HBaseHelper implements Closeable {

    private Configuration configuration = null;
    private Connection connection = null;
    private Admin admin = null;

    protected HBaseHelper(Configuration configuration) throws IOException {
        this.configuration = configuration;
        this.connection = ConnectionFactory.createConnection(configuration);
        this.admin = connection.getAdmin();
    }

    public static HBaseHelper getHelper(Configuration configuration) throws IOException {
        return new HBaseHelper(configuration);
    }

    @Override
    public void close() throws IOException {
        connection.close();
    }

    public Connection getConnection() {
        return connection;
    }

    public Configuration getConfiguration() {
        return configuration;
    }

    public void createNamespace(String namespace) {
        try {
            NamespaceDescriptor nd = NamespaceDescriptor.create(namespace).build();
            admin.createNamespace(nd);
        } catch (Exception e) {
            System.err.println("Error: " + e.getMessage());
        }
    }

    public void dropNamespace(String namespace, boolean force) {
        try {
            if (force) {
                TableName[] tableNames = admin.listTableNamesByNamespace(namespace);
                for (TableName name : tableNames) {
                    admin.disableTable(name);
                    admin.deleteTable(name);
                }
            }
        } catch (Exception e) {
            // ignore
        }
        try {
            admin.deleteNamespace(namespace);
        } catch (IOException e) {
            System.err.println("Error: " + e.getMessage());
        }
    }

    public boolean existsTable(String table) throws IOException {
        return existsTable(TableName.valueOf(table));
    }

    public boolean existsTable(TableName table) throws IOException {
        return admin.tableExists(table);
    }

    public void createTable(String table, String... colfams) throws IOException {
        createTable(TableName.valueOf(table), 1, null, colfams);
    }

    public void createTable(TableName table, String... colfams) throws IOException {
        createTable(table, 1, null, colfams);
    }

    public void createTable(String table, int maxVersions, String... colfams) throws IOException {
        createTable(TableName.valueOf(table), maxVersions, null, colfams);
    }

    public void createTable(TableName table, int maxVersions, String... colfams) throws IOException {
        createTable(table, maxVersions, null, colfams);
    }

    public void createTable(String table, byte[][] splitKeys, String... colfams) throws IOException {
        createTable(TableName.valueOf(table), 1, splitKeys, colfams);
    }

    public void createTable(TableName table, int maxVersions, byte[][] splitKeys,
                            String... colfams) throws IOException {
        HTableDescriptor desc = new HTableDescriptor(table);
        desc.setDurability(Durability.SKIP_WAL);
        for (String cf : colfams) {
            HColumnDescriptor coldef = new HColumnDescriptor(cf);
            coldef.setCompressionType(Algorithm.SNAPPY);
            coldef.setMaxVersions(maxVersions);
            desc.addFamily(coldef);
        }
        if (splitKeys != null) {
            admin.createTable(desc, splitKeys);   // pre-split table
        } else {
            admin.createTable(desc);
        }
    }

    public void disableTable(String table) throws IOException {
        disableTable(TableName.valueOf(table));
    }

    public void disableTable(TableName table) throws IOException {
        admin.disableTable(table);
    }

    public void dropTable(String table) throws IOException {
        dropTable(TableName.valueOf(table));
    }

    public void dropTable(TableName table) throws IOException {
        if (existsTable(table)) {
            if (admin.isTableEnabled(table)) disableTable(table);
            admin.deleteTable(table);
        }
    }

    public void fillTable(String table, int startRow, int endRow, int numCols,
                          String... colfams) throws IOException {
        fillTable(TableName.valueOf(table), startRow, endRow, numCols, colfams);
    }

    public void fillTable(TableName table, int startRow, int endRow, int numCols,
                          String... colfams) throws IOException {
        fillTable(table, startRow, endRow, numCols, -1, false, colfams);
    }

    public void fillTable(String table, int startRow, int endRow, int numCols,
                          boolean setTimestamp, String... colfams) throws IOException {
        fillTable(TableName.valueOf(table), startRow, endRow, numCols, -1,
                setTimestamp, colfams);
    }

    public void fillTable(TableName table, int startRow, int endRow, int numCols,
                          boolean setTimestamp, String... colfams) throws IOException {
        fillTable(table, startRow, endRow, numCols, -1, setTimestamp, colfams);
    }

    public void fillTable(String table, int startRow, int endRow, int numCols,
                          int pad, boolean setTimestamp, String... colfams) throws IOException {
        fillTable(TableName.valueOf(table), startRow, endRow, numCols, pad,
                setTimestamp, false, colfams);
    }

    public void fillTable(TableName table, int startRow, int endRow, int numCols,
                          int pad, boolean setTimestamp, String... colfams) throws IOException {
        fillTable(table, startRow, endRow, numCols, pad, setTimestamp, false,
                colfams);
    }

    public void fillTable(String table, int startRow, int endRow, int numCols,
                          int pad, boolean setTimestamp, boolean random,
                          String... colfams) throws IOException {
        fillTable(TableName.valueOf(table), startRow, endRow, numCols, pad,
                setTimestamp, random, colfams);
    }

    public void fillTable(TableName table, int startRow, int endRow, int numCols,
                          int pad, boolean setTimestamp, boolean random,
                          String... colfams) throws IOException {
        Table tbl = connection.getTable(table);
        Random rnd = new Random();
        for (int row = startRow; row <= endRow; row++) {
            for (int col = 1; col <= numCols; col++) {
                Put put = new Put(Bytes.toBytes("row-" + padNum(row, pad)));
                for (String cf : colfams) {
                    String colName = "col-" + padNum(col, pad);
                    String val = "val-" + (random ?
                            Integer.toString(rnd.nextInt(numCols)) :
                            padNum(row, pad) + "." + padNum(col, pad));
                    if (setTimestamp) {
                        put.addColumn(Bytes.toBytes(cf), Bytes.toBytes(colName), col,
                                Bytes.toBytes(val));
                    } else {
                        put.addColumn(Bytes.toBytes(cf), Bytes.toBytes(colName),
                                Bytes.toBytes(val));
                    }
                }
                tbl.put(put);
            }
        }
        tbl.close();
    }

    public void fillTableRandom(String table,
                                int minRow, int maxRow, int padRow,
                                int minCol, int maxCol, int padCol,
                                int minVal, int maxVal, int padVal,
                                boolean setTimestamp, String... colfams) throws IOException {
        fillTableRandom(TableName.valueOf(table), minRow, maxRow, padRow,
                minCol, maxCol, padCol, minVal, maxVal, padVal, setTimestamp, colfams);
    }

    public void fillTableRandom(TableName table,
                                int minRow, int maxRow, int padRow,
                                int minCol, int maxCol, int padCol,
                                int minVal, int maxVal, int padVal,
                                boolean setTimestamp, String... colfams) throws IOException {
        Table tbl = connection.getTable(table);
        Random rnd = new Random();
        int maxRows = minRow + rnd.nextInt(maxRow - minRow);
        for (int row = 0; row < maxRows; row++) {
            int maxCols = minCol + rnd.nextInt(maxCol - minCol);
            for (int col = 0; col < maxCols; col++) {
                int rowNum = rnd.nextInt(maxRow - minRow + 1);
                Put put = new Put(Bytes.toBytes("row-" + padNum(rowNum, padRow)));
                for (String cf : colfams) {
                    int colNum = rnd.nextInt(maxCol - minCol + 1);
                    String colName = "col-" + padNum(colNum, padCol);
                    int valNum = rnd.nextInt(maxVal - minVal + 1);
                    String val = "val-" + padNum(valNum, padVal);
                    if (setTimestamp) {
                        put.addColumn(Bytes.toBytes(cf), Bytes.toBytes(colName), col,
                                Bytes.toBytes(val));
                    } else {
                        put.addColumn(Bytes.toBytes(cf), Bytes.toBytes(colName),
                                Bytes.toBytes(val));
                    }
                }
                tbl.put(put);
            }
        }
        tbl.close();
    }

    /**
     * Splits a table at the specified split point.
     * @param tableName the table name
     * @param splitPoint the split point
     * @throws IOException
     */
    public void splitTable(String tableName, byte[] splitPoint) throws IOException {
        TableName table = TableName.valueOf(tableName);
        admin.split(table, splitPoint);
    }

    /**
     * Splits a region.
     * @param regionName the name of the region to split
     * @param splitPoint the split point (must lie between the region's startKey and endKey for the split to succeed)
     * @throws IOException
     */
    public void splitRegion(String regionName, byte[] splitPoint) throws IOException {
        admin.splitRegion(Bytes.toBytes(regionName), splitPoint);
    }

    /**
     * Merges two regions.
     * @param regionNameA the name of region A
     * @param regionNameB the name of region B
     * @throws IOException
     */
    public void mergerRegions(String regionNameA, String regionNameB) throws IOException {
        admin.mergeRegions(Bytes.toBytes(regionNameA), Bytes.toBytes(regionNameB), true);
    }

    public String padNum(int num, int pad) {
        String res = Integer.toString(num);
        if (pad > 0) {
            while (res.length() < pad) {
                res = "0" + res;
            }
        }
        return res;
    }

    public void put(String table, String row, String fam, String qual,
                    String val) throws IOException {
        put(TableName.valueOf(table), row, fam, qual, val);
    }

    public void put(TableName table, String row, String fam, String qual,
                    String val) throws IOException {
        Table tbl = connection.getTable(table);
        Put put = new Put(Bytes.toBytes(row));
        put.addColumn(Bytes.toBytes(fam), Bytes.toBytes(qual), Bytes.toBytes(val));
        tbl.put(put);
        tbl.close();
    }

    public void put(String table, String row, String fam, String qual, long ts,
                    String val) throws IOException {
        put(TableName.valueOf(table), row, fam, qual, ts, val);
    }

    public void put(TableName table, String row, String fam, String qual, long ts,
                    String val) throws IOException {
        Table tbl = connection.getTable(table);
        Put put = new Put(Bytes.toBytes(row));
        put.addColumn(Bytes.toBytes(fam), Bytes.toBytes(qual), ts,
                Bytes.toBytes(val));
        tbl.put(put);
        tbl.close();
    }

    public void put(String table, String[] rows, String[] fams, String[] quals,
                    long[] ts, String[] vals) throws IOException {
        put(TableName.valueOf(table), rows, fams, quals, ts, vals);
    }

    public void put(TableName table, String[] rows, String[] fams, String[] quals,
                    long[] ts, String[] vals) throws IOException {
        Table tbl = connection.getTable(table);
        for (String row : rows) {
            Put put = new Put(Bytes.toBytes(row));
            for (String fam : fams) {
                int v = 0;
                for (String qual : quals) {
                    String val = vals[v < vals.length ? v : vals.length - 1];
                    long t = ts[v < ts.length ? v : ts.length - 1];
                    System.out.println("Adding: " + row + " " + fam + " " + qual +
                            " " + t + " " + val);
                    put.addColumn(Bytes.toBytes(fam), Bytes.toBytes(qual), t,
                            Bytes.toBytes(val));
                    v++;
                }
            }
            tbl.put(put);
        }
        tbl.close();
    }

    public void dump(String table, String[] rows, String[] fams, String[] quals)
            throws IOException {
        dump(TableName.valueOf(table), rows, fams, quals);
    }

    public void dump(TableName table, String[] rows, String[] fams, String[] quals)
            throws IOException {
        Table tbl = connection.getTable(table);
        List<Get> gets = new ArrayList<Get>();
        for (String row : rows) {
            Get get = new Get(Bytes.toBytes(row));
            get.setMaxVersions();
            if (fams != null) {
                for (String fam : fams) {
                    for (String qual : quals) {
                        get.addColumn(Bytes.toBytes(fam), Bytes.toBytes(qual));
                    }
                }
            }
            gets.add(get);
        }
        Result[] results = tbl.get(gets);
        for (Result result : results) {
            for (Cell cell : result.rawCells()) {
                System.out.println("Cell: " + cell +
                        ", Value: " + Bytes.toString(cell.getValueArray(),
                        cell.getValueOffset(), cell.getValueLength()));
            }
        }
        tbl.close();
    }
}
HBaseOprations.java

package neu;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.log4j.BasicConfigurator;

import java.io.IOException;
import java.util.Collection;
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;

public class HBaseOprations {

    public static void main(String[] args) throws IOException {
        BasicConfigurator.configure();
        Configuration conf = HBaseConfiguration.create();
        HBaseHelper helper = HBaseHelper.getHelper(conf);
        //helper.splitRegion("24261034a736c06db96172b6f648f0bb", Bytes.toBytes("0120151025"));
        //helper.mergerRegions("92e57c211228ae4847dac3a02a51e684", "c059a4fee33246a00c95136319d9215f");
        createTable(helper);
        getRegionSize(conf);
    }

    public static void createTable(HBaseHelper helper) throws IOException {
        helper.dropTable("FAN12");                        // drop the table if it exists
        RegionSplit rSplit = new RegionSplit();
        byte[][] splitKeys = rSplit.split();
        TableName tablename = TableName.valueOf("FAN12"); // create the pre-split table
        helper.createTable(tablename, 1, splitKeys, "INFO");
//      helper.createTable(tablename, 1, "INFO");
    }

    public static void getRegionsInfo(Configuration conf) throws IOException {
        Connection connection = ConnectionFactory.createConnection(conf);
        TableName tablename = TableName.valueOf(Bytes.toBytes("faninfo8"));
        NavigableMap<HRegionInfo, ServerName> regionMap
                = MetaScanner.allTableRegions(connection, tablename);
        Set<HRegionInfo> set = regionMap.keySet();
        TableName tableName = TableName.valueOf(Bytes.toBytes("faninfo8"));
        RegionLocator regionLoc = connection.getRegionLocator(tableName);
    }

    public static void getRegionSize(Configuration conf) throws IOException {
        Connection connection = ConnectionFactory.createConnection(conf);
        Admin admin = connection.getAdmin();
        ClusterStatus status = admin.getClusterStatus();
        Collection<ServerName> snList = status.getServers();
        int totalSize = 0;
        for (ServerName sn : snList) {
            System.out.println(sn.getServerName());
            ServerLoad sl = status.getLoad(sn);
            int storeFileSize = sl.getStorefileSizeInMB();   // store file size on this region server
            Map<byte[], RegionLoad> rlMap = sl.getRegionsLoad();
            Set<byte[]> rlKeys = rlMap.keySet();
            for (byte[] bs : rlKeys) {
                RegionLoad rl = rlMap.get(bs);
                String regionName = rl.getNameAsString();
                if (regionName.substring(0, regionName.indexOf(",")).equals("FANPOINTINFO")) {
                    int regionSize = rl.getStorefileSizeMB();
                    totalSize += regionSize;
                    System.out.println(regionSize + "MB");
                }
            }
        }
        System.out.println("Total size = " + totalSize + "MB");
    }
}
RegionSplit.java

package neu;

import org.apache.hadoop.hbase.util.Bytes;

public class RegionSplit {

    private String[] pointInfos1 = {
            "JLFC_FJ050_", "JLFC_FJ100_", "JLFC_FJ150_", "JLFC_FJ200_", "JLFC_FJ250_",
            "ZYFC_FJ050_", "ZYFC_FJ100_", "ZYFC_FJ150_", "ZYFC_FJ200_", "ZYFC_FJ250_",
            "WDFC_FJ050_", "WDFC_FJ100_", "WDFC_FJ150_", "WDFC_FJ200_", "WDFC_FJ250_",
            "ZRHFC_FJ050_", "ZRHFC_FJ100_", "ZRHFC_FJ150_", "ZRHFC_FJ200_", "ZRHFC_FJ250_",
            "NXFC_FJ050_", "NXFC_FJ100_", "NXFC_FJ150_", "NXFC_FJ200_", "NXFC_FJ250_"
    };

    private String[] pointInfos = {
            "0001", "0002", "0003", "0004", "0005", "0006", "0007", "0008", "0009", "0010",
            "0011", "0012", "0013", "0014", "0015", "0016", "0017", "0018", "0019", "0020",
            "0021", "0022", "0023", "0024", "0025", "0026", "0027", "0028", "0029"
    };

    // Builds the split-key array passed to Admin.createTable(desc, splitKeys).
    public byte[][] split() {
        byte[][] result = new byte[pointInfos.length][];
        for (int i = 0; i < pointInfos.length; i++) {
            result[i] = Bytes.toBytes(pointInfos[i]);
//            System.out.print("'" + pointInfos[i] + "'" + ",");
        }
        return result;
    }

    public byte[][] splitByPartition() {
        return null;
    }

    public static void main(String[] args) {
        RegionSplit split = new RegionSplit();
        split.split();
    }
}
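To confirm that the pre-split table really was created with the expected region boundaries, the client can list the regions of the new table through RegionLocator. A minimal sketch (the class name VerifySplit is made up for illustration; it assumes the FAN12 table created above):

package neu;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical check: prints the start/end key of every region of the
// pre-split table, so the split points produced by RegionSplit can be verified.
public class VerifySplit {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(conf);
        RegionLocator locator = connection.getRegionLocator(TableName.valueOf("FAN12"));
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
            HRegionInfo info = loc.getRegionInfo();
            System.out.println(info.getRegionNameAsString()
                    + "  start=" + Bytes.toStringBinary(info.getStartKey())
                    + "  end=" + Bytes.toStringBinary(info.getEndKey()));
        }
        locator.close();
        connection.close();
    }
}

The same information is also visible on the master's web UI shown above; the programmatic check is just convenient when running from IDEA.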

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>

hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://192.168.195.131:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.195.131,192.168.195.132,192.168.195.133</value>
    </property>
    <property>
        <name>hbase.master.info.bindAddress</name>
        <value>192.168.195.131</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>16010</value>
    </property>
    <property>
        <name>hbase.master.port</name>
        <value>16000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.195.131:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
    </property>
</configuration>

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.hbase</groupId>
    <artifactId>HbaseOperation</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>1.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.8.0</version>
        </dependency>
    </dependencies>
</project>