Hadoop Notes (II)


Reading a file from HDFS, approach one (not recommended):

package com.hadooptest.hdfstest;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

// setURLStreamHandlerFactory can only be called once per JVM.
// This means that if another component of the program has already called it,
// you will not be able to read data from Hadoop this way.
public class HdfsTest {

    static {
        // essential: without this, hdfs:// URLs are not recognized
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    private void printStream(InputStream in) throws IOException {
        if (in == null) {
            return;
        }
        byte[] buffer = new byte[1024];
        int len = -1;
        while ((len = in.read(buffer)) != -1) {
            System.out.println(new String(buffer, 0, len));
        }
    }

    public void readHdfsFile() {
        InputStream in = null;
        try {
            in = new URL("hdfs://ubuntu:9000/test1").openStream();
            printStream(in);
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        HdfsTest test = new HdfsTest();
        test.readHdfsFile();
    }
}

Reading a file from HDFS, approach two:

package com.hadooptest.hdfstest;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTest1 {

    private void printStream(InputStream in) throws IOException {
        if (in == null) {
            return;
        }
        byte[] buffer = new byte[1024];
        int len = -1;
        while ((len = in.read(buffer)) != -1) {
            System.out.println(new String(buffer, 0, len));
        }
    }

    public void readHdfsFile() {
        String url = "hdfs://ubuntu:9000/test1";
        Configuration conf = new Configuration();
        FileSystem fs = null;
        InputStream in = null;
        try {
            fs = FileSystem.get(URI.create(url), conf);
            in = fs.open(new Path(url));
            printStream(in);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (fs != null) {
                try {
                    fs.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        HdfsTest1 test = new HdfsTest1();   // the original instantiated HdfsTest here; HdfsTest1 is the class being demonstrated
        test.readHdfsFile();
    }
}

Writing a file to HDFS:

package com.hadooptest.hdfstest;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class HdfsTest2 {

    public void copyFile() {
        String localPath = "/home/pijing/testdata";
        String dstPath = "hdfs://ubuntu:9000/testdata";
        InputStream in = null;
        FileSystem fs = null;
        OutputStream out = null;
        try {
            in = new BufferedInputStream(new FileInputStream(localPath));
            Configuration conf = new Configuration();
            fs = FileSystem.get(URI.create(dstPath), conf);
            // the Progressable callback is invoked as data is written to the cluster
            out = fs.create(new Path(dstPath), new Progressable() {
                @Override
                public void progress() {
                    System.out.println(">>>>>>>>>>>>");
                }
            });
            // the final 'true' closes both streams when the copy finishes
            IOUtils.copyBytes(in, out, 4096, true);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (out != null) {
                try {
                    out.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (fs != null) {
                try {
                    fs.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        HdfsTest2 test2 = new HdfsTest2();
        test2.copyFile();
    }
}

1. When reading a file, the client sends a request to the namenode, which keeps the mapping between file metadata and datanodes in memory. A file is split into multiple blocks; for each block, the client goes to the datanode whose location the namenode has recorded and reads the block from it. If a block is stored on several datanodes, the closest datanode is chosen automatically. (The first sketch below shows how to list the block locations of a file.)
2. When writing a file, it is first created in the namespace; the namenode checks whether the file already exists and whether the user running the program has permission to create it. If anything is wrong an exception is thrown immediately; otherwise the client is handed an OutputStream to write to. Which datanodes the blocks land on is decided by the namenode: it usually places the first replica on a randomly chosen datanode, the second replica on a node in a different rack from the first, and the third replica on a random node in the same rack as the second. (The second sketch below shows where the replication factor is specified when creating a file.)
While a file is being written, the file itself becomes visible immediately, but the data being written does not. In general, the data of a block only becomes visible once that block has been completely written; the block currently being written is not visible to readers. (The third sketch below shows how to force the data written so far to become visible.)
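
The following is a minimal sketch (not part of the original notes) of asking the namenode where the blocks of a file live, using FileSystem.getFileBlockLocations. It reuses the hdfs://ubuntu:9000/test1 file from the examples above; the class name is made up for illustration.

package com.hadooptest.hdfstest;

import java.io.IOException;
import java.net.URI;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBlockLocationTest {
    public static void main(String[] args) throws IOException {
        String url = "hdfs://ubuntu:9000/test1";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(url), conf);
        try {
            FileStatus status = fs.getFileStatus(new Path(url));
            // one BlockLocation per block of the file; getHosts() lists the
            // datanodes holding a replica of that block
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + Arrays.toString(block.getHosts()));
            }
        } finally {
            fs.close();
        }
    }
}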
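
A second minimal sketch (also not from the original notes) showing where the replication factor is specified when creating a file; the namenode then applies the rack-aware placement described in point 2. It uses the FileSystem.create overload that takes the overwrite flag, buffer size, replication factor and block size explicitly; the target file name is hypothetical.

package com.hadooptest.hdfstest;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReplicationTest {
    public static void main(String[] args) throws IOException {
        String url = "hdfs://ubuntu:9000/testdata2";   // hypothetical target file
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(url), conf);
        try {
            short replication = 3;   // the namenode decides on which datanodes the 3 replicas land
            FSDataOutputStream out = fs.create(new Path(url),
                    true,                       // overwrite if the file already exists
                    4096,                       // buffer size
                    replication,
                    fs.getDefaultBlockSize());  // default block size of the cluster
            out.write("hello hdfs".getBytes());
            out.close();
        } finally {
            fs.close();
        }
    }
}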
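
A third minimal sketch of forcing visibility: on current Hadoop releases FSDataOutputStream.hflush() pushes the buffered data out so that new readers can see it even while the block is still being written (on very old releases the equivalent call was sync()). The target file name is hypothetical.

package com.hadooptest.hdfstest;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsFlushTest {
    public static void main(String[] args) throws IOException {
        String url = "hdfs://ubuntu:9000/flushdemo";   // hypothetical target file
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(url), conf);
        try {
            FSDataOutputStream out = fs.create(new Path(url));
            out.write("first line\n".getBytes());
            // without hflush(), a reader opening the file now would typically
            // see no data, because the current block is not finished yet
            out.hflush();
            // after hflush() the data written so far is visible to new readers
            out.write("second line\n".getBytes());
            out.close();
        } finally {
            fs.close();
        }
    }
}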
