Migrating Oracle Data to MySQL

Oracle DB: 11.2.0.3.0

MySQL DB: 5.5.14

For a project requirement, data from an Oracle production database had to be migrated into MySQL as initial data. There are several ways to do this:

1. Oracle GoldenGate (OGG)

2. Hand-writing "select" scripts that generate "insert into" statements (a minimal sketch follows this list)

3. A tool; this time I used one (sqluldr2), which can be downloaded from www.anysql.net
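
For method 2, a minimal sketch of such a script in SQL*Plus, assuming a hypothetical two-column table YOON.CESHI(ID, NAME); the spool path and column list are illustrative, not from the original setup:

set heading off feedback off pagesize 0 linesize 2000 trimspool on
spool /u01/backup/ceshi_insert.sql
-- emit one INSERT per row; replace() doubles any embedded single quotes
select 'insert into CESHI values (' || ID || ', ''' || replace(NAME, '''', '''''') || ''');'
from YOON.CESHI;
spool off

This quickly becomes tedious and error-prone for wide tables, which is why the tool in option 3 is used below.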


Usage:
Upload the sqluldr2.bin tool to Oracle's bin directory:
[root@db01 bin]# chown oracle.oinstall sqluldr2.bin

[root@db01 bin]# chmod 775 sqluldr2.bin

[root@db01 bin]# su - oracle

[oracle@db01 ~]$ sqluldr2 help=yes
Valid Keywords:
user = username/password@tnsname
sql = SQL file name
query = select statement
field = separator string between fields
record = separator string between records
rows = print progress for every given rows (default, 1000000)
file = output file name (default: uldrdata.txt)
log = log file name, prefix with + to append mode
fast = auto tuning the session level parameters (YES)
text = output type (MYSQL, CSV, MYSQLINS, ORACLEINS, FORM, SEARCH).
charset = character set name of the target database.
ncharset = national character set name of the target database.
parfile = read command option from parameter file
read = set DB_FILE_MULTIBLOCK_READ_COUNT at session level
sort = set SORT_AREA_SIZE at session level (UNIT:MB)
hash = set HASH_AREA_SIZE at session level (UNIT:MB)
array = array fetch size
head = print row header (Yes|No)
batch = save to new file for every rows batch (Yes/No)
size = maximum output file piece size (UNIT:MB)
serial = set _serial_direct_read to TRUE at session level
trace = set event 10046 to given level at session level
table = table name in the sqlldr control file
control = sqlldr control file and path.
mode = sqlldr option, INSERT or APPEND or REPLACE or TRUNCATE
buffer = sqlldr READSIZE and BINDSIZE, default 16 (MB)
long = maximum long field size
width = customized max column width (w1:w2:...)
quote = optional quote string
data = disable real data unload (NO, OFF)
alter = alter session SQLs to be executed before unload
safe = use large buffer to avoid ORA-24345 error (Yes|No)
crypt = encrypted user information only (Yes|No)
sedf/t = enable character translation function
null = replace null with given value
escape = escape character for special characters
escf/t = escape from/to characters list
format = MYSQL: MySQL Insert SQLs, SQL: Insert SQLs.
exec = the command to execute the SQLs.
prehead = column name prefix for head line.
rowpre = row prefix string for each line.
rowsuf = row suffix string for each line.
colsep = separator string between column name and value.
presql = SQL or scripts to be executed before data unload.
postsql = SQL or scripts to be executed after data unload.
lob = extract lob values to single file (FILE).
lobdir = subdirectory count to store lob files.
split = table name for automatic parallelization.
degree = parallelize data copy degree (2-128).
hint = MySQL SQL hint for the Insert, for example IGNORE.
unique = Unique Column List for the MySQL target table.
update = Enable MySQL ON DUPLICATE SQL statement (YES/NO).
crack = The crack key for the content protection.
uncrack = The uncrack key for the content protection.

for field and record, you can use '0x' to specify hex character code,
\r=0x0d \n=0x0a |=0x7c ,=0x2c, \t=0x09, :=0x3a, #=0x23, "=0x22 '=0x27
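
To illustrate the '0x' syntax above, here is a hypothetical delimited export that uses | (0x7c) between fields and a newline (0x0a) between records (note point 1 below, which warns against delimited files for this particular migration):

[oracle@db01 ~]$ sqluldr2.bin user=yoon/yoon@CS_177 query="select * from YOON.CESHI" field=0x7c record=0x0a file=/u01/backup/ceshi_pipe.txt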

The specific meaning of each parameter is not described here; read through the help output yourself.

Note the following three points:
1. When migrating data from Oracle to MySQL, do not use the delimiter-based approach. I tested delimited files, and during the import into MySQL there were warnings and some data was truncated; this note should save you that detour.

2. Keep firmly in mind that the character set of the Oracle production environment is GBK, while that of the MySQL production environment is UTF-8.

3. Export all the Oracle data as insert into statements. This avoids errors during insertion: the file can be restored directly with the MySQL restore command, which terminates the process if an error occurs, whereas inserting the data with source may produce warnings midway that cannot be verified (see the example after this list).
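
For example, a minimal restore sketch, assuming a hypothetical target database named yoon; run non-interactively, the mysql client stops at the first error unless --force is given, which is exactly the fail-fast behavior point 3 relies on:

$ mysql -uyoon -p --default-character-set=utf8 yoon < /u01/backup/oracle_mysql/CESHI_1.txt

Here --default-character-set=utf8 matches the UTF-8 encoding of the exported file (see point 2).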

After exporting the Oracle data as insert into text files and restoring them into MySQL over many test rounds, the final export command is as follows:
sqluldr2.bin user=yoon/yoon@CS_177 query="select * from YOON.CESHI" charset=UTF8 text=MYSQLINS format=SQL file=/u01/backup/oracle_mysql/CESHI_%b.txt size=30000MB table=CESHI safe=YES

user: username/password@TNS service name

query: the query statement

charset: character set UTF8; the text exported from the GBK-encoded Oracle data is written as UTF-8

text: the type of the exported file; it must be MYSQLINS, so that the generated data automatically carries the backtick (`) quoting used in MySQL insert statements, e.g. `张三`

format: the INSERT statement format; per the help output, MYSQL emits MySQL INSERT SQLs and SQL emits plain INSERT SQLs (the command above uses format=SQL)

file: the path where the exported data files are saved

%b: the letter b must be lowercase; it makes the tool generate multiple data text files, e.g. CESHI_1, CESHI_2, ...; this avoids producing one single enormous data file

size: must be given in MB; the size of each exported file piece. With 30000MB, each exported data file is about 30 GB

table: makes the generated data file contain the table name directly; otherwise it generates insert into " " with no table name

safe: be sure to add YES. While testing a large number of tables, I found that some Oracle tables raise an ORA-24345 error during export, reportedly a bug in the tool; adding safe=YES works around it
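
Because %b splits the export into numbered pieces, a simple shell loop can restore them all; this is a sketch assuming the same hypothetical database yoon, stopping at the first file that fails to import:

for f in /u01/backup/oracle_mysql/CESHI_*.txt; do
    mysql -uyoon -p --default-character-set=utf8 yoon < "$f" || break
done

In practice, put the credentials in the [client] section of ~/.my.cnf so the loop does not prompt for a password on every file.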


Summary:
The export command above was settled on after extensive testing; so far the table data I have tested shows no problems, at least in my case.
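
As a basic sanity check after the migration, compare row counts on both sides (using CESHI as the example table):

SQL> select count(*) from YOON.CESHI;
mysql> select count(*) from CESHI;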
