Using the FileSystem Java API to read, write, and delete files on HDFS

Hadoop file system 
Basic file system operations are available as shell commands; running hadoop fs -help prints detailed help for every command. 

The Java abstract class org.apache.hadoop.fs.FileSystem defines Hadoop's file system interface. Because the class is abstract, instances are obtained through one of two static factory methods: 
public static FileSystem get(Configuration conf) throws IOException 
public static FileSystem get(URI uri, Configuration conf) throws IOException
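
As a minimal sketch of both factory methods (the address hdfs://localhost:9000 is only an assumed example; substitute your own cluster, and note that URI here is java.net.URI):

    Configuration conf = new Configuration();
    // Uses the default file system named in the configuration (fs.default.name)
    FileSystem fs = FileSystem.get(conf);
    // Or address a specific file system explicitly by URI
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);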
 

The main methods: 
1. public boolean mkdirs(Path f) throws IOException 
Creates the directory f together with any missing parent directories in a single call; f is the full directory path. 

2. public FSDataOutputStream create(Path f) throws IOException 
Creates the file named by the given Path and returns an output stream for writing data to it. 
create() has several overloads that let you specify whether to overwrite an existing file, the replication factor, the write buffer size, the block size, and the file permissions. 

3. public void copyFromLocalFile(Path src, Path dst) throws IOException 
Copies a local file into the file system. 

4. public boolean exists(Path f) throws IOException 
Checks whether the file or directory exists. 

5. public boolean delete(Path f, boolean recursive) throws IOException 
Permanently deletes the given file or directory. If f is a file or an empty directory, the value of recursive is ignored; a non-empty directory and its contents are removed only when recursive is true. 

6. The FileStatus class encapsulates file system metadata for files and directories, including file length, block size, replication, modification time, owner, and permission information; a sketch of reading these fields follows below. 

Calling getPath() on each FileStatus returned by listStatus() lists every file under a given HDFS directory. 

package hdfsTest;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OperatingFiles {
    // initialization
    static Configuration conf = new Configuration();
    static FileSystem hdfs;
    static {
        String path = "/usr/java/hadoop-1.0.3/conf/";
        conf.addResource(new Path(path + "core-site.xml"));
        conf.addResource(new Path(path + "hdfs-site.xml"));
        conf.addResource(new Path(path + "mapred-site.xml"));
        path = "/usr/java/hbase-0.90.3/conf/";
        conf.addResource(new Path(path + "hbase-site.xml"));
        try {
            hdfs = FileSystem.get(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // create a directory
    public void createDir(String dir) throws IOException {
        Path path = new Path(dir);
        hdfs.mkdirs(path);
        System.out.println("new dir \t" + conf.get("fs.default.name") + dir);
    }

    // copy a local file to HDFS
    public void copyFile(String localSrc, String hdfsDst) throws IOException {
        Path src = new Path(localSrc);
        Path dst = new Path(hdfsDst);
        hdfs.copyFromLocalFile(src, dst);

        // list all the files in the destination directory
        FileStatus files[] = hdfs.listStatus(dst);
        System.out.println("Upload to \t" + conf.get("fs.default.name") + hdfsDst);
        for (FileStatus file : files) {
            System.out.println(file.getPath());
        }
    }

    // create a new file and write content to it
    public void createFile(String fileName, String fileContent) throws IOException {
        Path dst = new Path(fileName);
        byte[] bytes = fileContent.getBytes();
        FSDataOutputStream output = hdfs.create(dst);
        output.write(bytes);
        output.close();    // flush and close the stream so the data is persisted
        System.out.println("new file \t" + conf.get("fs.default.name") + fileName);
    }

    // list all files under a directory
    public void listFiles(String dirName) throws IOException {
        Path f = new Path(dirName);
        FileStatus[] status = hdfs.listStatus(f);
        System.out.println(dirName + " has all files:");
        for (int i = 0; i < status.length; i++) {
            System.out.println(status[i].getPath().toString());
        }
    }

    // check whether a file exists, and delete it if it does
    public void deleteFile(String fileName) throws IOException {
        Path f = new Path(fileName);
        boolean isExists = hdfs.exists(f);
        if (isExists) { // if it exists, delete it
            boolean isDel = hdfs.delete(f, true);
            System.out.println(fileName + "  delete? \t" + isDel);
        } else {
            System.out.println(fileName + "  exist? \t" + isExists);
        }
    }

    public static void main(String[] args) throws IOException {
        OperatingFiles ofs = new OperatingFiles();
        System.out.println("\n=======create dir=======");
        String dir = "/test";
        ofs.createDir(dir);
        System.out.println("\n=======copy file=======");
        String src = "/home/ictclas/Configure.xml";
        ofs.copyFile(src, dir);
        System.out.println("\n=======create a file=======");
        String fileContent = "Hello, world! Just a test.";
        ofs.createFile(dir + "/word.txt", fileContent);
    }
}

Using HDFS in Java (0.20.0)

Below is a code sample showing how to read from and write to HDFS in Java. 

1. Creating a configuration object:  To read from or write to HDFS, you need to create a Configuration object and pass configuration parameters to it using the Hadoop configuration files.  
  
    // The conf object will read the HDFS configuration parameters from these XML
    // files. You may also set the parameters yourself if you want.
 

    Configuration conf = new Configuration(); 
    conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml")); 
    conf.addResource(new Path("/opt/hadoop-0.20.0/conf/hdfs-site.xml")); 

    If you do not load these configurations into the conf object (from the Hadoop XML files), your HDFS operations will be performed on the local file system rather than on HDFS. 
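
If you would rather not depend on the XML files at all, the relevant parameters can be set on the conf object directly. A sketch, assuming a NameNode at hdfs://namenode:9000 (fs.default.name is the key used by the 0.20.x line):

    Configuration conf = new Configuration();
    // Point the client at the NameNode explicitly instead of via core-site.xml
    conf.set("fs.default.name", "hdfs://namenode:9000");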

2. Adding file to HDFS:
 Create a FileSystem object and use a file stream to add a file. 

    FileSystem fileSystem = FileSystem.get(conf);
    
    // Check if the file already exists

    Path path = new Path("/path/to/file.ext");
    if (fileSystem.exists(path)) {
        System.out.println("File " + dest + " already exists");
        return;
    }

    // Create a new file and write data to it.
    FSDataOutputStream out = fileSystem.create(path);
    InputStream in = new BufferedInputStream(new FileInputStream(
        new File(source)));


    byte[] b = new byte[1024];
    int numBytes = 0;
    while ((numBytes = in.read(b)) > 0) {
        out.write(b, 0, numBytes);
    }

    // Close all the file descriptors
    in.close();
    out.close();
    fileSystem.close();
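
The manual copy loop above can also be written with Hadoop's own stream helper, org.apache.hadoop.io.IOUtils; a sketch, where passing true as the last argument closes both streams when the copy finishes:

    IOUtils.copyBytes(in, out, 1024, true);   // copies with a 1 KB buffer, then closes in and out
    fileSystem.close();                       // the FileSystem handle still needs closing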

2. Reading a file from HDFS: Open an input stream to a file in HDFS and read from it. 

    FileSystem fileSystem = FileSystem.get(conf);

    Path path = new Path("/path/to/file.ext");
 
    if (!fileSystem.exists(path)) { 
        System.out.println("File does not exists"); 
        return; 
    }

    FSDataInputStream in = fileSystem.open(path);
 

    String filename = file.substring(file.lastIndexOf('/') + 1,
        file.length());
 

    OutputStream out = new BufferedOutputStream(new FileOutputStream(
        new File(filename)));
 

    byte[] b = new byte[1024]; 
    int numBytes = 0; 
    while ((numBytes = in.read(b)) > 0) { 
        out.write(b, 0, numBytes); 
    } 

    in.close(); 
    out.close(); 
    fileSystem.close(); 
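
Note that the FSDataInputStream returned by open() supports random access (it implements Seekable), unlike a plain java.io input stream. A brief sketch:

    FSDataInputStream in = fileSystem.open(path);
    in.read(b);    // read from the start of the file
    in.seek(0);    // rewind to the beginning
    in.read(b);    // read the same bytes again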

3. Deleting a file from HDFS: Get a FileSystem handle, check that the file exists, and delete it. 

    FileSystem fileSystem = FileSystem.get(conf); 

    Path path = new Path("/path/to/file.ext"); 
    if (!fileSystem.exists(path)) { 
        System.out.println("File does not exists"); 
        return; 
    }

    // Delete file
    fileSystem.delete(path, true);
 

    fileSystem.close(); 
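
Because recursive = true removes a non-empty directory and everything beneath it, a cautious client can derive the flag from the target's status first. A sketch (isDir() is the 0.20-era accessor):

    FileStatus stat = fileSystem.getFileStatus(path);
    // Only allow a recursive delete when the target really is a directory
    boolean deleted = fileSystem.delete(path, stat.isDir());
    System.out.println("Deleted: " + deleted);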

4. Creating a directory in HDFS: Get a FileSystem handle, check that the directory does not already exist, and create it. 

    FileSystem fileSystem = FileSystem.get(conf); 

    Path path = new Path(dir); 
    if (fileSystem.exists(path)) { 
        System.out.println("Dir " + dir + " already not exists"); 
        return; 
    }

    // Create directories
    fileSystem.mkdirs(path);
 

    fileSystem.close(); 
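
mkdirs() also has an overload taking an org.apache.hadoop.fs.permission.FsPermission, in case the new directories should not get the default permissions. A sketch, assuming mode 0755 (rwxr-xr-x) is wanted:

    // Create the directory tree with mode 0755 instead of the default
    fileSystem.mkdirs(path, new FsPermission((short) 0755));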

Code:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFSClient {
    public HDFSClient() {

    }

    public void addFile(String source, String dest) throws IOException {
        Configuration conf = new Configuration();

        // Conf object will read the HDFS configuration parameters from these
        // XML files.
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/hdfs-site.xml"));

        FileSystem fileSystem = FileSystem.get(conf);

        // Get the filename out of the file path
        String filename = source.substring(source.lastIndexOf('/') + 1,
            source.length());

        // Create the destination path including the filename.
        if (dest.charAt(dest.length() - 1) != '/') {
            dest = dest + "/" + filename;
        } else {
            dest = dest + filename;
        }

        // System.out.println("Adding file to " + dest);

        // Check if the file already exists
        Path path = new Path(dest);
        if (fileSystem.exists(path)) {
            System.out.println("File " + dest + " already exists");
            return;
        }

        // Create a new file and write data to it.
        FSDataOutputStream out = fileSystem.create(path);
        InputStream in = new BufferedInputStream(new FileInputStream(
            new File(source)));

        byte[] b = new byte[1024];
        int numBytes = 0;
        while ((numBytes = in.read(b)) > 0) {
            out.write(b, 0, numBytes);
        }

        // Close all the file descriptors
        in.close();
        out.close();
        fileSystem.close();
    }

    public void readFile(String file) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));

        FileSystem fileSystem = FileSystem.get(conf);

        Path path = new Path(file);
        if (!fileSystem.exists(path)) {
            System.out.println("File " + file + " does not exist");
            return;
        }

        FSDataInputStream in = fileSystem.open(path);

        // Keep only the file name for the local copy
        String filename = file.substring(file.lastIndexOf('/') + 1,
            file.length());

        OutputStream out = new BufferedOutputStream(new FileOutputStream(
            new File(filename)));

        byte[] b = new byte[1024];
        int numBytes = 0;
        while ((numBytes = in.read(b)) > 0) {
            out.write(b, 0, numBytes);
        }

        in.close();
        out.close();
        fileSystem.close();
    }

    public void deleteFile(String file) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));

        FileSystem fileSystem = FileSystem.get(conf);

        Path path = new Path(file);
        if (!fileSystem.exists(path)) {
            System.out.println("File " + file + " does not exist");
            return;
        }

        fileSystem.delete(path, true);

        fileSystem.close();
    }

    public void mkdir(String dir) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));

        FileSystem fileSystem = FileSystem.get(conf);

        Path path = new Path(dir);
        if (fileSystem.exists(path)) {
            System.out.println("Dir " + dir + " already exists");
            return;
        }

        fileSystem.mkdirs(path);

        fileSystem.close();
    }

    public static void main(String[] args) throws IOException {

        if (args.length < 1) {
            System.out.println("Usage: hdfsclient add/read/delete/mkdir" +
                " [<local_path> <hdfs_path>]");
            System.exit(1);
        }

        HDFSClient client = new HDFSClient();
        if (args[0].equals("add")) {
            if (args.length < 3) {
                System.out.println("Usage: hdfsclient add <local_path> " +
                    "<hdfs_path>");
                System.exit(1);
            }

            client.addFile(args[1], args[2]);
        } else if (args[0].equals("read")) {
            if (args.length < 2) {
                System.out.println("Usage: hdfsclient read <hdfs_path>");
                System.exit(1);
            }

            client.readFile(args[1]);
        } else if (args[0].equals("delete")) {
            if (args.length < 2) {
                System.out.println("Usage: hdfsclient delete <hdfs_path>");
                System.exit(1);
            }

            client.deleteFile(args[1]);
        } else if (args[0].equals("mkdir")) {
            if (args.length < 2) {
                System.out.println("Usage: hdfsclient mkdir <hdfs_path>");
                System.exit(1);
            }

            client.mkdir(args[1]);
        } else {
            System.out.println("Usage: hdfsclient add/read/delete/mkdir" +
                " [<local_path> <hdfs_path>]");
            System.exit(1);
        }

        System.out.println("Done!");
    }
}
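
For reference, a class like this is typically compiled against the Hadoop jars and launched through the hadoop runner, e.g. hadoop jar hdfsclient.jar HDFSClient add /local/file.txt /user/hadoop/ (the jar name and paths here are only assumed examples).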

From: http://smallwildpig.iteye.com/blog/1705039 (Java operations on HDFS)

From: http://blog.rajeevsharma.in/2009/06/using-hdfs-in-java-0200.html (Using HDFS in Java (0.20.0))
