Accessing WebHDFS from Java and Python
I recently needed to operate HDFS through a Java client and a Python client, so here are the simple code snippets for the record.
WebHDFS Authentication
WebHDFS supports three authentication modes (quoted from the official documentation):
Authentication
When security is off, the authenticated user is the username specified in the user.name query parameter. If the user.name parameter is not set, the server may either set the authenticated user to a default web user, if there is any, or return an error response.
When security is on, authentication is performed by either Hadoop delegation token or Kerberos SPNEGO. If a token is set in the delegation query parameter, the authenticated user is the user encoded in the token. If the delegation parameter is not set, the user is authenticated by Kerberos SPNEGO.
Below are examples using the curl command tool.
Authentication when security is off:
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?[user.name=<USER>&]op=..."
Authentication using Kerberos SPNEGO when security is on:
curl -i --negotiate -u : "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=..."
Authentication using Hadoop delegation token when security is on:
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?delegation=<TOKEN>&op=..."
Accessing WebHDFS from Java
Security authentication is not enabled here.
REST API used for testing: http://10.254.100.198:50070/webhdfs/v1/Project?op=LISTSTATUS&user.name=hdfs
Code snippet:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.text.MessageFormat;
import java.util.ArrayList;
import java.util.List;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;

/**
 * <b>LISTSTATUS</b>
 *
 * curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=LISTSTATUS&user.name=hdfs"
 *
 * @param totalDir
 * @return
 * @throws IOException
 */
public List<String> getHDFSDirs(String totalDir, String host, String port) throws IOException {
    // BackupUtils.DEFAULT_PROTOCOL is defined elsewhere in the project (the URL scheme, e.g. "http://").
    String httpfsUrl = BackupUtils.DEFAULT_PROTOCOL + host + ":" + port;
    String spec = MessageFormat.format("/webhdfs/v1{0}?op=LISTSTATUS&user.name={1}", totalDir, "hdfs");
    URL url = new URL(new URL(httpfsUrl), spec);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.connect();
    String resp = result(conn, true);
    conn.disconnect();

    // Parse the FileStatuses/FileStatus array and collect the pathSuffix of each entry.
    JSONObject root = JSON.parseObject(resp);
    int size = root.getJSONObject("FileStatuses").getJSONArray("FileStatus").size();
    List<String> dirs = new ArrayList<>();
    for (int i = 0; i < size; ++i) {
        String dir = root.getJSONObject("FileStatuses").getJSONArray("FileStatus")
                .getJSONObject(i).getString("pathSuffix");
        dirs.add(dir);
    }
    return dirs;
}

/**
 * Report the result in STRING way
 *
 * @param conn
 * @param input
 * @return
 * @throws IOException
 */
public String result(HttpURLConnection conn, boolean input) throws IOException {
    StringBuffer sb = new StringBuffer();
    if (input) {
        InputStream is = conn.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(is, "utf-8"));
        String line = null;
        while ((line = reader.readLine()) != null) {
            sb.append(line);
        }
        reader.close();
        is.close();
    }
    return sb.toString();
}
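As a quick standalone sanity check of the same LISTSTATUS call, a minimal sketch without the fastjson dependency might look like the following. The host, port and path are taken from the test URL above (adjust them for your cluster), and the raw JSON response is simply printed instead of parsed.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListStatusDemo {
    public static void main(String[] args) throws Exception {
        // Host, port and path taken from the test URL above; adjust for your cluster.
        URL url = new URL("http://10.254.100.198:50070/webhdfs/v1/Project?op=LISTSTATUS&user.name=hdfs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "utf-8"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line);
        }
        reader.close();
        conn.disconnect();
        // Prints the raw FileStatuses JSON returned by the NameNode.
        System.out.println(sb.toString());
    }
}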
Accessing WebHDFS from Python
Again, security authentication is not enabled.
REST API used for testing: http://10.254.100.198:50070/webhdfs/v1/Project?op=LISTSTATUS&user.name=hdfs
Code snippet:
import httplib
import json
import os

# agent_config = AmbariConfig()  # from the surrounding Ambari-based script; not needed for this example

WEBHDFS_CONTEXT_ROOT = "/webhdfs/v1"

class WebHDFS(object):
    """
    Class for accessing HDFS via WebHDFS

    To enable WebHDFS in your Hadoop Installation add the following configuration
    to your hdfs_site.xml (requires Hadoop >0.20.205.0):

    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    """
    def __init__(self, namenode_host, namenode_port, hdfs_username):
        self.namenode_host = namenode_host
        self.namenode_port = namenode_port
        self.username = hdfs_username

    def __getNameNodeHTTPClient(self):
        httpClient = httplib.HTTPConnection(self.namenode_host, self.namenode_port, timeout=600)
        return httpClient

    def listdir(self, path):
        if os.path.isabs(path) == False:
            raise Exception("Only absolute paths supported: %s" % (path))
        url_path = WEBHDFS_CONTEXT_ROOT + path + '?op=LISTSTATUS&user.name=' + self.username
        httpClient = self.__getNameNodeHTTPClient()
        httpClient.request('GET', url_path, headers={})
        response = httpClient.getresponse()
        data_dict = json.loads(response.read())
        # Collect the name (pathSuffix) of each entry under the given directory.
        files = []
        for i in data_dict["FileStatuses"]["FileStatus"]:
            files.append(i["pathSuffix"])
        httpClient.close()
        return files

# webhdfs = WebHDFS("10.254.100.139", 50070, "hdfs")
# namenode_host, namenode_http_port and source are defined elsewhere in the surrounding script.
webhdfs = WebHDFS(namenode_host, int(namenode_http_port), "hdfs")
source_files = webhdfs.listdir(source)
Reference:
1. https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html