Alex's Hadoop Tutorial for Beginners: Lesson 12. Sqoop1 Installation, Import, and Export


Original post: http://blog.csdn.net/nsrainbow/article/details/41575807

What is Sqoop

Sqoop is a tool for importing and exporting data between traditional relational databases and HDFS. Sqoop 2 has already been released, but as of this writing it is still half-baked: it does not support HBase and has very few features, so I will mainly cover Sqoop 1.

Installing Sqoop1


 yum install -y sqoop


Run help to check whether the installation succeeded:
# sqoop help
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/28 11:33:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

Copy the driver to /usr/lib/sqoop/lib

MySQL JDBC driver download address

After downloading, unpack the archive, find the driver jar inside, upload it to the server, and move it into place:

mv /home/alex/mysql-connector-java-5.1.34-bin.jar /usr/lib/sqoop/lib
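If you prefer to unpack the archive on the server itself, here is a minimal sketch, assuming the tarball mysql-connector-java-5.1.34.tar.gz was already uploaded to /home/alex (the archive and directory names are illustrative, matching the driver version above):

# Unpack the Connector/J archive on the server
tar xzf /home/alex/mysql-connector-java-5.1.34.tar.gz -C /home/alex
# Copy the driver jar into Sqoop's lib directory so every Sqoop job can load it
cp /home/alex/mysql-connector-java-5.1.34/mysql-connector-java-5.1.34-bin.jar /usr/lib/sqoop/lib/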


Import

Preparing the data

Create the sqoop_test database in MySQL:
create database sqoop_test;


Create a table inside sqoop_test:
CREATE TABLE `employee` (
  `id` int(11) NOT NULL,
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Insert a few rows:
insert into employee (id,name) values (1,'michael');
insert into employee (id,name) values (2,'ted');
insert into employee (id,name) values (3,'jack');

Importing from MySQL to HDFS

Listing databases and tables

We won't rush into the import. First let's warm up with a few preparatory steps, which also make it easier to troubleshoot problems.

First, make the MySQL test user able to connect remotely. Hadoop dispatches import/export tasks to different machines for execution, so the database URL must not use localhost; it has to be a hostname or an IP. This example tests directly with root, so we just change root's Host (never do this in a real production environment!):
mysql> use mysql;
mysql> update user set Host='%' where Host='127.0.0.1' and User='root';
mysql> flush privileges;
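Before involving Sqoop, you can verify that remote access actually works by connecting from another node. A quick sanity check, assuming host1 is the MySQL host as in this example:

# From any other machine in the cluster, connect to MySQL on host1 remotely
mysql -h host1 -u root -p -e "show databases;"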



List all databases:
# sqoop list-databases --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/01 09:20:28 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/12/01 09:20:28 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/12/01 09:20:28 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
cacti
metastore
mysql
sqoop_test
wordpress
zabbix



Next, use Sqoop to connect to the database and list all its tables:

# sqoop list-tables --connect jdbc:mysql://host1/sqoop_test --username root --password root
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/28 11:46:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/11/28 11:46:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/11/28 11:46:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
employee
student
workers

This command works without the driver class name because Sqoop supports MySQL out of the box. To pass the JDBC driver class explicitly, use:
# sqoop list-tables --connect jdbc:mysql://localhost/sqoop_test --username root --password root --driver com.mysql.jdbc.Driver
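As the warnings in the output above point out, passing --password on the command line is insecure. A sketch of the same command using -P, which makes Sqoop prompt for the password interactively instead of exposing it in the process list:

# Sqoop will prompt for the password at the terminal
sqoop list-tables --connect jdbc:mysql://host1/sqoop_test --username root -P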

Import data into HDFS

sqoop import --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --target-dir /user/test3
  • import indicates an import job
  • --connect specifies the connection URL
  • --username specifies the username
  • --password specifies the password
  • --table specifies the source table to import
  • --m sets the number of parallel map tasks, here set to 1
  • --target-dir is the HDFS directory where the imported data will be stored
  • More detailed parameter documentation is at http://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html#_purpose (a variant import with a few extra options is sketched after this list)
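For reference, a sketch of a variant import that pulls only some columns and rows and changes the field separator. The --columns, --where, and --fields-terminated-by flags are standard Sqoop 1.4 import options, but this exact command and the /user/test3_subset path are illustrative, not from the original post:

# Import only rows with id > 1, tab-separated instead of comma-separated
sqoop import --connect jdbc:mysql://host1:3306/sqoop_test \
  --username root --password root \
  --table employee --columns "id,name" --where "id > 1" \
  --fields-terminated-by '\t' \
  --m 1 --target-dir /user/test3_subset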
The output of running the basic import command:
[root@host1 hadoop-hdfs]# sqoop import --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --target-dir /user/test3
Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/01/23 06:48:10 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.2.1
15/01/23 06:48:10 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/01/23 06:48:11 INFO manager.SqlManager: Using default fetchSize of 1000
15/01/23 06:48:11 INFO tool.CodeGenTool: Beginning code generation
15/01/23 06:48:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 06:48:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 06:48:12 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/0989201fc3275ff35dc9c41f1031ea42/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/23 06:48:45 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/0989201fc3275ff35dc9c41f1031ea42/employee.jar
15/01/23 06:48:47 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/01/23 06:48:47 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/01/23 06:48:47 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/01/23 06:48:47 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/01/23 06:48:47 INFO mapreduce.ImportJobBase: Beginning import of employee
15/01/23 06:48:57 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/01/23 06:49:12 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/01/23 06:49:13 INFO client.RMProxy: Connecting to ResourceManager at host1/192.168.199.126:8032
15/01/23 06:50:10 INFO db.DBInputFormat: Using read commited transaction isolation
15/01/23 06:50:10 INFO mapreduce.JobSubmitter: number of splits:1
15/01/23 06:50:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421771779239_0003
15/01/23 06:50:22 INFO impl.YarnClientImpl: Submitted application application_1421771779239_0003
15/01/23 06:50:23 INFO mapreduce.Job: The url to track the job: http://host1:8088/proxy/application_1421771779239_0003/
15/01/23 06:50:23 INFO mapreduce.Job: Running job: job_1421771779239_0003
15/01/23 06:57:10 INFO mapreduce.Job: Job job_1421771779239_0003 running in uber mode : false
15/01/23 06:57:16 INFO mapreduce.Job:  map 0% reduce 0%
15/01/23 06:58:13 INFO mapreduce.Job:  map 100% reduce 0%
15/01/23 06:58:19 INFO mapreduce.Job: Job job_1421771779239_0003 completed successfully
15/01/23 06:58:33 INFO mapreduce.Job: Counters: 30
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=128844
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=87
		HDFS: Number of bytes written=23
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Other local map tasks=1
		Total time spent by all maps in occupied slots (ms)=74359
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=74359
		Total vcore-seconds taken by all map tasks=74359
		Total megabyte-seconds taken by all map tasks=76143616
	Map-Reduce Framework
		Map input records=3
		Map output records=3
		Input split bytes=87
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=501
		CPU time spent (ms)=2680
		Physical memory (bytes) snapshot=107692032
		Virtual memory (bytes) snapshot=654852096
		Total committed heap usage (bytes)=17760256
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=23
15/01/23 06:58:35 INFO ipc.Client: Retrying connect to server: host1.localdomain/192.168.199.126:39437. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/01/23 06:58:36 INFO ipc.Client: Retrying connect to server: host1.localdomain/192.168.199.126:39437. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/01/23 06:58:37 INFO ipc.Client: Retrying connect to server: host1.localdomain/192.168.199.126:39437. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/01/23 06:58:37 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
15/01/23 06:59:16 INFO mapreduce.ImportJobBase: Transferred 23 bytes in 601.9783 seconds (0.0382 bytes/sec)
15/01/23 06:59:16 INFO mapreduce.ImportJobBase: Retrieved 3 records.


Check the result:
# hdfs dfs -ls /user/test3
Found 2 items
-rw-r--r--   2 root supergroup          0 2014-12-01 14:16 /user/test3/_SUCCESS
-rw-r--r--   2 root supergroup         16 2014-12-01 14:16 /user/test3/part-m-00000
# hdfs dfs -cat /user/test3/part-m-00000
1,michael
2,ted
3,jack


Problems I ran into

If you hit the following error:

14/12/01 10:12:42 INFO mapreduce.Job: Task Id : attempt_1406097234796_0017_m_000000_0, Status : FAILED
Error: employee : Unsupported major.minor version 51.0
Run ps aux | grep hadoop and you will find that Hadoop is running on JDK 1.6. My CDH is 5.0.1 with Sqoop 1.4.4, and I hit this problem.
Cause: Sqoop is compiled with JDK 1.7, so if ps aux | grep hadoop shows Hadoop running on 1.6, Sqoop cannot work properly.
Note: CDH 4.7 and above are compatible with JDK 1.7, but if you upgraded from 4.5 you will find Hadoop still using JDK 1.6. You need to switch all of Hadoop over to JDK 1.7, which is also the officially recommended combination.
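A quick way to confirm which JDK the running Hadoop daemons actually use; a sketch only, and the /usr/java paths are illustrative:

# List the java binaries the Hadoop processes were started with
ps aux | grep [h]adoop | grep -o '/usr/java/[^ ]*' | sort -u
# See where the symlinks under /usr/java point
ls -l /usr/java/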

How to change the JDK

Cloudera officially offers two methods:
http://www.cloudera.com/content/cloudera/en/documentation/cdh4/latest/CDH4-Requirements-and-Supported-Versions/cdhrsv_topic_3.html
This one tells you to create a symlink named default under /usr/java/ pointing to the JDK you want. I did this; it had no effect.
http://www.cloudera.com/content/cloudera/en/documentation/archives/cloudera-manager-4/v4-5-3/Cloudera-Manager-Enterprise-Edition-Installation-Guide/cmeeig_topic_16_2.html
This one tells you to add an environment variable. I did this too; also no effect.
In the end I used the crude but effective approach: stop all related services, delete that damned JDK 1.6, and restart everything. This time /usr/java/default was actually used.

Commands to stop all Hadoop-related services:

for x in `cd /etc/init.d ; ls hive-*` ; do sudo service $x stop ; done
for x in `cd /etc/init.d ; ls hbase-*` ; do sudo service $x stop ; done
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
/etc/init.d/zookeeper-server stop


Skip zookeeper, hbase, and hive if you did not install them. I suggest using ps aux | grep jre1.6 to find which services are still running, then shutting them down one by one: the others first, Hadoop last.

Start everything back up

/etc/init.d/zookeeper-server start
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x start ; done
for x in `cd /etc/init.d ; ls hbase-*` ; do sudo service $x start ; done
for x in `cd /etc/init.d ; ls hive-*` ; do sudo service $x start ; done

Exporting data from HDFS to MySQL

We continue from the previous example.

Preparing the data

Empty the employee table:
mysql> truncate employee;

Export the data to MySQL

# sqoop export --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/test3
  • export indicates an export job
  • --connect specifies the connection URL
  • --username specifies the username
  • --password specifies the password
  • --table is the name of the MySQL table to export into
  • --m sets the number of parallel map tasks
  • --export-dir is the source HDFS directory to export from
  • More detailed parameter documentation is at http://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html#_purpose_3 (an update-style variant is sketched after this list)
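For reference, a sketch of a variant export that updates existing rows keyed on id instead of blindly inserting. The --update-key and --update-mode flags are standard Sqoop 1.4 export options, but this exact command is illustrative, not from the original post; allowinsert mode is connector-dependent (updateonly is the default):

# Update rows matching on id; with allowinsert, rows missing in MySQL are inserted
sqoop export --connect jdbc:mysql://host1:3306/sqoop_test \
  --username root --password root \
  --table employee --update-key id --update-mode allowinsert \
  --m 1 --export-dir /user/test3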
The output of running the export command:
[root@host1 hadoop-hdfs]# sqoop export --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/test3
Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/01/23 07:04:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.2.1
15/01/23 07:04:44 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/01/23 07:04:45 INFO manager.SqlManager: Using default fetchSize of 1000
15/01/23 07:04:45 INFO tool.CodeGenTool: Beginning code generation
15/01/23 07:04:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 07:04:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 07:04:48 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/4e6318352dc0beeb6e1e7724c8a6d935/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/23 07:05:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/4e6318352dc0beeb6e1e7724c8a6d935/employee.jar
15/01/23 07:05:07 INFO mapreduce.ExportJobBase: Beginning export of employee
15/01/23 07:05:11 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/01/23 07:05:24 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/01/23 07:05:24 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/01/23 07:05:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/01/23 07:05:25 INFO client.RMProxy: Connecting to ResourceManager at host1/192.168.199.126:8032
15/01/23 07:06:00 INFO input.FileInputFormat: Total input paths to process : 1
15/01/23 07:06:00 INFO input.FileInputFormat: Total input paths to process : 1
15/01/23 07:06:01 INFO mapreduce.JobSubmitter: number of splits:1
15/01/23 07:06:01 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/01/23 07:06:02 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421771779239_0004
15/01/23 07:06:05 INFO impl.YarnClientImpl: Submitted application application_1421771779239_0004
15/01/23 07:06:06 INFO mapreduce.Job: The url to track the job: http://host1:8088/proxy/application_1421771779239_0004/
15/01/23 07:06:06 INFO mapreduce.Job: Running job: job_1421771779239_0004
15/01/23 07:08:03 INFO mapreduce.Job: Job job_1421771779239_0004 running in uber mode : false
15/01/23 07:08:03 INFO mapreduce.Job:  map 0% reduce 0%
15/01/23 07:12:15 INFO mapreduce.Job:  map 100% reduce 0%
15/01/23 07:12:49 INFO mapreduce.Job: Job job_1421771779239_0004 completed successfully
15/01/23 07:12:52 INFO mapreduce.Job: Counters: 30
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=128509
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=147
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
	Job Counters
		Launched map tasks=1
		Rack-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=253584
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=253584
		Total vcore-seconds taken by all map tasks=253584
		Total megabyte-seconds taken by all map tasks=259670016
	Map-Reduce Framework
		Map input records=3
		Map output records=3
		Input split bytes=121
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=3872
		CPU time spent (ms)=3390
		Physical memory (bytes) snapshot=97640448
		Virtual memory (bytes) snapshot=652566528
		Total committed heap usage (bytes)=15585280
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=0
15/01/23 07:12:52 INFO mapreduce.ExportJobBase: Transferred 147 bytes in 448.1491 seconds (0.328 bytes/sec)
15/01/23 07:12:52 INFO mapreduce.ExportJobBase: Exported 3 records.



Finally, check in MySQL that the 3 rows were exported successfully:

mysql> select * from employee;
+----+---------+
| id | name    |
+----+---------+
|  1 | michael |
|  2 | ted     |
|  3 | jack    |
+----+---------+
3 rows in set (0.12 sec)



Hadoop is a distributed system, and all map/reduce tasks are dispatched to the various nodes for execution. So in a real environment, never write localhost in the MySQL JDBC URL; otherwise only the MySQL data on your own machine would be read, and the tasks on all the other machines would fail.

