Sqoop configuration and import/export operations


1. Download the required software

Below are the download URLs for CDH3 and Sqoop 1.2.0:

http://archive.cloudera.com/cdh/3/hadoop-0.20.2-CDH3B4.tar.gz

http://archive.cloudera.com/cdh/3/sqoop-1.2.0-CDH3B4.tar.gz

sqoop-1.2.0-CDH3B4 depends on hadoop-core-0.20.2-CDH3B4.jar, so you need to download hadoop-0.20.2-CDH3B4.tar.gz, extract it, and copy hadoop-0.20.2-CDH3B4/hadoop-core-0.20.2-CDH3B4.jar into sqoop-1.2.0-CDH3B4/lib.

In addition, Sqoop depends on mysql-connector-java-*.jar when importing data from MySQL, so download that driver and copy it into sqoop-1.2.0-CDH3B4/lib as well.
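
A minimal sketch of these two steps, assuming both tarballs and the MySQL connector jar have already been downloaded into the current directory (the exact connector file name depends on the version you fetched):

# extract both packages
tar -zxf hadoop-0.20.2-CDH3B4.tar.gz
tar -zxf sqoop-1.2.0-CDH3B4.tar.gz
# copy the Hadoop core jar that Sqoop depends on into Sqoop's lib directory
cp hadoop-0.20.2-CDH3B4/hadoop-core-0.20.2-CDH3B4.jar sqoop-1.2.0-CDH3B4/lib/
# copy the MySQL JDBC driver as well
cp mysql-connector-java-*.jar sqoop-1.2.0-CDH3B4/lib/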

2. Edit Sqoop's configure-sqoop script and comment out the HBase and ZooKeeper checks (unless you plan to use HBase or other such components on top of Hadoop):

#if [ ! -d "${HBASE_HOME}" ]; then
#  echo "Error: $HBASE_HOME does not exist!"
#  echo 'Please set $HBASE_HOME to the root of your HBase installation.'
#  exit 1
#fi
#if [ ! -d "${ZOOKEEPER_HOME}" ]; then
#  echo "Error: $ZOOKEEPER_HOME does not exist!"
#  echo 'Please set $ZOOKEEPER_HOME to the root of your ZooKeeper installation.'
#  exit 1
#fi

3. Start Hadoop and set the relevant environment variables (such as $HADOOP_HOME); Sqoop is then ready to use.
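
For example (a hedged sketch; the installation path below is taken from the log output further down, so adjust it to your own environment):

# assumed installation path; adjust to your own setup
export HADOOP_HOME=/home/wanghai01/hadoop/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$PATH
# start HDFS and MapReduce (Hadoop 0.20-style scripts)
$HADOOP_HOME/bin/start-all.sh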

Below is an example of importing a table's data from the database into a file on HDFS:

[wanghai01@tc-crm-rd01.tc sqoop-1.2.0-CDH3B4]$ bin/sqoop import --connect jdbc:mysql://XXXX:XX/crm --username crm --password 123456 --table company -m 1
11/09/21 15:45:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
11/09/21 15:45:26 INFO tool.CodeGenTool: Beginning code generation
11/09/21 15:45:26 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `company` AS t LIMIT 1
11/09/21 15:45:26 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `company` AS t LIMIT 1
11/09/21 15:45:26 INFO orm.CompilationManager: HADOOP_HOME is /home/wanghai01/hadoop/hadoop-0.20.2/bin/..
11/09/21 15:45:26 INFO orm.CompilationManager: Found hadoop core jar at: /home/wanghai01/hadoop/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar
11/09/21 15:45:26 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-wanghai01/compile/2bd70cf2b712a9c7cdb0860722ea7c18/company.java to /home/wanghai01/cloudera/sqoop-1.2.0-CDH3B4/./company.java
11/09/21 15:45:26 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-wanghai01/compile/2bd70cf2b712a9c7cdb0860722ea7c18/company.jar
11/09/21 15:45:26 WARN manager.MySQLManager: It looks like you are importing from mysql.
11/09/21 15:45:26 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
11/09/21 15:45:26 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
11/09/21 15:45:26 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
11/09/21 15:45:26 INFO mapreduce.ImportJobBase: Beginning import of company
11/09/21 15:45:27 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `company` AS t LIMIT 1
11/09/21 15:45:28 INFO mapred.JobClient: Running job: job_201109211521_0001
11/09/21 15:45:29 INFO mapred.JobClient:  map 0% reduce 0%
11/09/21 15:45:40 INFO mapred.JobClient:  map 100% reduce 0%
11/09/21 15:45:42 INFO mapred.JobClient: Job complete: job_201109211521_0001
11/09/21 15:45:42 INFO mapred.JobClient: Counters: 5
11/09/21 15:45:42 INFO mapred.JobClient:   Job Counters
11/09/21 15:45:42 INFO mapred.JobClient:     Launched map tasks=1
11/09/21 15:45:42 INFO mapred.JobClient:   FileSystemCounters
11/09/21 15:45:42 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=44
11/09/21 15:45:42 INFO mapred.JobClient:   Map-Reduce Framework
11/09/21 15:45:42 INFO mapred.JobClient:     Map input records=8
11/09/21 15:45:42 INFO mapred.JobClient:     Spilled Records=0
11/09/21 15:45:42 INFO mapred.JobClient:     Map output records=8
11/09/21 15:45:42 INFO mapreduce.ImportJobBase: Transferred 44 bytes in 15.0061 seconds (2.9321 bytes/sec)
11/09/21 15:45:42 INFO mapreduce.ImportJobBase: Retrieved 8 records.

Check the data:

[wanghai01@tc-crm-rd01.tc sqoop-1.2.0-CDH3B4]$ hadoop fs -cat /user/wanghai01/company/part-m-00000
1,xx
2,eee
1,xx
2,eee
1,xx
2,eee
1,xx
2,eee

Query the database to verify:

mysql> select * from company;
+------+------+
| id   | name |
+------+------+
|    1 | xx   |
|    2 | eee  |
|    1 | xx   |
|    2 | eee  |
|    1 | xx   |
|    2 | eee  |
|    1 | xx   |
|    2 | eee  |
+------+------+
8 rows in set (0.00 sec)

OK, the data matches. If you look closely at the output of the import command, you will notice an ERROR line. It appears because an earlier run of the same command had failed, and its temporary generated files were not cleaned up before the command was re-run.
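
To make that error go away on the next run, it is enough to remove the leftover generated code (a hedged cleanup; both paths are taken from the log above):

# remove the generated class source left behind by the failed run
rm -f /home/wanghai01/cloudera/sqoop-1.2.0-CDH3B4/company.java
# optionally clear Sqoop's temporary compile directory as well
rm -rf /tmp/sqoop-wanghai01/compile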

Some reference material:

SqoopUserGuide

SqoopDevGuide


4. Importing and exporting databases

1) List all the databases on the MySQL server:
# sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username root --password 123456
  
2) Connect to MySQL and list the tables in a database:
# sqoop list-tables --connect jdbc:mysql://localhost:3306/test --username root --password 123456
Here test is the name of the database in MySQL, and --username / --password are the MySQL user's credentials.
  
3) Copy the schema of a relational table into Hive:
sqoop create-hive-table --connect jdbc:mysql://localhost:3306/test --table username --username root --password 123456 --hive-table test
Here --table username is the source table in the MySQL database test, and --hive-table test is the name of the new table to create in Hive.
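
To confirm that the table was created, you can describe it from the Hive command line (a quick, hedged check that assumes hive is on your PATH):

hive -e "DESCRIBE test"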
  
4) Import data from a relational database into Hive:
sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password mysql-password --table t1 --hive-import
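
After the import finishes you can sanity-check the data from the Hive command line (a hedged example, assuming the Hive table kept the source table name t1):

hive -e "SELECT * FROM t1 LIMIT 10"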

5) Export table data from Hive into MySQL:

./sqoop export --connect jdbc:mysql://localhost:3306/test --username root --password admin --table uv_info --export-dir /user/hive/warehouse/uv/dt=2011-08-03
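
If the export succeeds, a quick row count on the MySQL side confirms the transfer (the database and table names come from the command above):

mysql -u root -p -e "SELECT COUNT(*) FROM test.uv_info"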

If instead the export job fails with an error like this:
11/08/05 10:51:22 INFO mapred.JobClient: Running job: job_201108051007_0010 
11/08/05 10:51:23 INFO mapred.JobClient:  map 0% reduce 0% 
11/08/05 10:51:36 INFO mapred.JobClient: Task Id : attempt_201108051007_0010_m_000000_0, Status : FAILED 
java.util.NoSuchElementException 
        at java.util.AbstractList$Itr.next(AbstractList.java:350) 
        at uv_info.__loadFromFields(uv_info.java:194) 
        at uv_info.parse(uv_info.java:143) 
        at com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:79) 
        at com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:38) 
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
        at com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:187) 
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647) 
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) 
        at org.apache.hadoop.mapred.Child$4.run(Child.java:270) 
        at java.security.AccessController.doPrivileged(Native Method) 
        at javax.security.auth.Subject.doAs(Subject.java:396) 
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127) 
        at org.apache.hadoop.mapred.Child.main(Child.java:264) 
The cause of this error is that the field delimiter Sqoop uses to parse the files does not match the columns of the MySQL table. You therefore need to pass Sqoop an extra argument that tells it the file's field delimiter, so that it can split the fields correctly.

Hive's default field delimiter is '\001'; pass whatever delimiter your Hive table actually uses, for example:
./sqoop export --connect jdbc:mysql://localhost:3306/datacenter --username root --password admin --table uv_info --export-dir /user/hive/warehouse/uv/dt=2011-08-03 --input-fields-terminated-by '\t'
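
If the Hive table was created with the default delimiter instead, the same export would pass the octal escape (a hedged variant of the command above):

./sqoop export --connect jdbc:mysql://localhost:3306/datacenter --username root --password admin --table uv_info --export-dir /user/hive/warehouse/uv/dt=2011-08-03 --input-fields-terminated-by '\001'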

Reference: http://archive.cloudera.com/cdh/3/sqoop/SqoopUserGuide.html#_literal_sqoop_create_hive_table_literal