Big Data IMF Legendary Action Top-Secret Course, Lesson 73: Spark SQL Thrift Server in Practice

Source: Internet · Editor: 程序博客网 · Date: 2024/04/30 20:06

Spark SQL Thrift Server in Practice

Replace a traditional RDBMS-backed system with the stack JDBC/ODBC -> Thrift Server -> Spark SQL -> Hive.
Start the Hive metastore service (the trailing & runs it in the background):

hive --service metastore &

Server side: start the Thrift server:

./start-thriftserver.sh --master spark://Master:7077 --hiveconf hive.server2.transport.mode=http --hiveconf hive.server2.thrift.http.path=cliservice
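Before writing client code, the connection can be sanity-checked from the command line with Beeline, the JDBC client shipped with Spark and Hive. A minimal sketch, assuming the Master hostname, port 10001, and the HTTP path configured by the flags above:

```shell
# Connect to the Thrift Server over HTTP transport; host, port, and
# httpPath are assumed to match the start-thriftserver.sh flags above.
beeline -u "jdbc:hive2://Master:10001/default;transportMode=http;httpPath=cliservice" -n root

# Inside the Beeline session, "show tables;" should list the Hive tables,
# confirming that the Thrift Server can reach the Hive metastore.
```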

Java code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Access the Thrift Server (and through it, Hive) from Java via JDBC --
 * the most common pattern in enterprise development.
 */
public class SparkSQLJDBC2ThriftServer {

    public static void main(String[] args) {
        Connection conn = null;
        PreparedStatement stmt = null;
        ResultSet rs = null;
        String sql = "select * from people where age = ?";
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // 10001 is the Thrift HTTP port started above; "default" is the Hive database
            conn = DriverManager.getConnection("jdbc:hive2://Master:10001/default?"
                    + "hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice",
                    "root", "");
            stmt = conn.prepareStatement(sql);
            stmt.setInt(1, 30);
            rs = stmt.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1)); // the data should be stored as Parquet
            }
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            // Close resources in reverse order, guarding against nulls.
            try {
                if (rs != null) rs.close();
                if (stmt != null) stmt.close();
                if (conn != null) conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
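The query above assumes a Hive table named people with an age column; the source does not show its DDL. A hypothetical Parquet-backed table (matching the Parquet comment in the code) could be created in Beeline or the Hive CLI along these lines -- the schema and sample rows here are assumptions, only the age column appears in the original query:

```sql
-- Hypothetical DDL: only the age column is implied by the source query.
CREATE TABLE IF NOT EXISTS people (
    name STRING,
    age  INT
) STORED AS PARQUET;

INSERT INTO people VALUES ('Michael', 30), ('Andy', 29);
```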