Oracle schema-level data migration
I have recently been supporting Oracle DB work for two project teams. Since the projects are still in development, database copies keep being needed. After comparing several approaches, EXPDP/IMPDP works well and readily meets the developers' needs.
Requirement: user/schema-level data migration.
Version: Oracle Enterprise 11g R2
Overall, the process breaks down into the following steps:
1. Check the dump directories on the source and target databases.
- SQL> select * from dba_directories;
- SYS SUBDIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/order_entry/2002/Sep
- SYS SS_OE_XMLDIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/order_entry/
- SYS LOG_FILE_DIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/log/
- SYS DATA_FILE_DIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/sales_history/
- SYS XMLDIR /ade/b/2125410156/oracle/rdbms/xml
- SYS MEDIA_DIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/product_media/
- SYS DATA_PUMP_DIR /opt/oracle/app/oracle/admin/oracle/dpdump/
- SYS ORACLE_OCM_CONFIG_DIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/ccr/state
- 8 rows selected.
DATA_PUMP_DIR above is a ready-to-use dump directory: export and import files are placed under that path. If no suitable directory exists, create one manually and grant read/write privileges on it.
- SQL> CONN sys AS SYSDBA
- SQL> CREATE DIRECTORY DATA_PUMP_DIR AS '/opt/oracle/app/oracle/admin/oracle/dpdump/';
- SQL> GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO test;
2. Compare usernames and tablespaces between the source and target databases.
If the target database lacks a username or tablespace that the import needs, create it on the target first.
- SQL> select username, default_tablespace from dba_users;  -- list each user and its default tablespace on the source
- SQL> create tablespace test_tbs datafile '/opt/oracle/oradata/test/TEST_TBS01.DBF' size 1024m autoextend on;
- SQL> create user test identified by test default tablespace test_tbs;
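Before importing into the freshly created user, it typically also needs login rights and a quota on its tablespace. A minimal sketch, assuming the test user and test_tbs tablespace created above (the exact privileges needed depend on what the dump contains):

```
-- sketch: let the new user log in and own segments in its tablespace
SQL> grant create session to test;
SQL> alter user test quota unlimited on test_tbs;
```

Without a quota, segment creation during import can fail even when the import itself runs as a privileged user.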
3. Export scripts
- -- export specific schemas
- expdp system/oracle@orcl DIRECTORY=DATA_PUMP_DIR DUMPFILE=FALABELLA_20111014.DMP SCHEMAS=FALABELLA_CL_CATA,FALABELLA_CL_CATB,FALABELLA_CL_PUB logfile=falabella_expdp_20111014.log status=10 parallel=4 CONTENT=ALL COMPRESSION=ALL
- -- export the entire database
- expdp system/oracle@orcl DIRECTORY=DATA_PUMP_DIR DUMPFILE=AAXIS_20111125.EXPDP FULL=Y logfile=AAXIS_expdp_20111125.log status=10 parallel=1 CONTENT=ALL flashback_scn=18341888 COMPRESSION=ALL
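Long one-liners like these can also be kept in a Data Pump parameter file and passed with PARFILE. A sketch, where export_schemas.par is a hypothetical file name and the remaining values are copied from the schema-export command above:

```
# export_schemas.par (hypothetical name)
# run with: expdp system/oracle@orcl PARFILE=export_schemas.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=FALABELLA_20111014.DMP
LOGFILE=falabella_expdp_20111014.log
SCHEMAS=FALABELLA_CL_CATA,FALABELLA_CL_CATB,FALABELLA_CL_PUB
STATUS=10
PARALLEL=4
CONTENT=ALL
COMPRESSION=ALL
```

A parameter file avoids shell quoting issues and keeps a record of exactly what was exported.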
4. Import scripts
- -- import from one schema into another (schema remap)
- impdp system/oracle@TEST DIRECTORY=dump_dir DUMPFILE=ATG_12OCT11.dmp LOGFILE=impdp_mf_20111013_2.log SCHEMAS=QA2_ATGPUB_MF,QA2_ATGCATALOGA_MF REMAP_SCHEMA=QA2_ATGPUB_MF:QA2_ATGPUB_MF,QA2_ATGCATALOGA_MF:QA2_ATGCATALOGB_MF CONTENT=ALL PARALLEL=4 STATUS=10 TABLE_EXISTS_ACTION=REPLACE
- -- no schema rename needed
- impdp system/oracle@TEST DIRECTORY=dump_dir DUMPFILE=ATG_12OCT11.dmp LOGFILE=impdp_mf_20111014_1.log SCHEMAS=QA2_ATGPUB_MF CONTENT=ALL PARALLEL=4 STATUS=10 TABLE_EXISTS_ACTION=REPLACE
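The import flags fit a parameter file in the same way. A sketch, with import_remap.par as a hypothetical name and the values copied from the remap command above:

```
# import_remap.par (hypothetical name)
# run with: impdp system/oracle@TEST PARFILE=import_remap.par
DIRECTORY=dump_dir
DUMPFILE=ATG_12OCT11.dmp
LOGFILE=impdp_mf_20111013_2.log
SCHEMAS=QA2_ATGPUB_MF,QA2_ATGCATALOGA_MF
REMAP_SCHEMA=QA2_ATGPUB_MF:QA2_ATGPUB_MF,QA2_ATGCATALOGA_MF:QA2_ATGCATALOGB_MF
CONTENT=ALL
PARALLEL=4
STATUS=10
TABLE_EXISTS_ACTION=REPLACE
```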
Appendix: check how much space each schema occupies:
- SQL> select owner, round(sum(bytes/1024/1024),2)||' MB' as schema_size from dba_segments group by owner order by schema_size desc;
- OWNER SCHEMA_SIZE
- ------------------------------ ------------------------------
- APEX_030200 85.06MB
- CTXSYS 61.75MB
- DBSNMP 1.63MB
- EXFSYS 3.63MB
- FALABELLA_AR_CATA 96.5MB
- FALABELLA_AR_CATB 90.13MB
- FALABELLA_AR_CORE 10889.63MB
- FALABELLA_AR_PUB 357.38MB
- FALABELLA_CL_CATA 116.06MB
- FALABELLA_CL_CATB 109.63MB
- 37 rows selected.
- -- count the objects already imported
- SQL> select owner, count(*) from dba_objects where owner like 'FALABELLA_AR%' group by owner order by owner;
- OWNER COUNT(*)
- ------------------------------ ----------
- FALABELLA_AR2_CATA 463
- FALABELLA_AR2_CATB 462
- FALABELLA_AR2_CORE 1300
- FALABELLA_AR2_PUB 2308
- FALABELLA_AR3_CATA 175
- FALABELLA_AR3_CATB 174
- FALABELLA_AR3_CORE 884
- FALABELLA_AR3_PUB 907
- FALABELLA_AR_CATA 463
- FALABELLA_AR_CATB 462
- FALABELLA_AR_CORE 1300
- FALABELLA_AR_PUB 2308
- 12 rows selected.
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" stopped due to fatal error at 13:34:40
An unsuitable PARALLEL setting in expdp can also raise ORA-39095
2008-09-24 15:01
An expdp export can hit ORA-39095; the error has two possible causes, taken one at a time below.
First, the official explanation:
ORA-39095: Dump file space has been exhausted: Unable to allocate string bytes
Cause: The Export job ran out of dump file space before the job was completed.
Action: Reattach to the job and add additional dump files to the job restarting the job.
The message text itself explains the first cause: the dump file space ran out. The fix is just as simple: provide more space.
There is a second cause as well: PARALLEL was used, but only one dump file was specified (or fewer dump files than the parallel degree). The official documentation says:
Oracle® Database Utilities
10g Release 2 (10.2)
2 Data Pump Export
PARALLEL=integer
The value you specify for integer should be less than, or equal to, the number of files in the dump file set (or you should specify substitution variables in the dump file specifications).
Because each active worker process or I/O server process writes exclusively to one file at a time, an insufficient number of files can have adverse effects. Some of the worker processes will be idle while waiting for files, thereby degrading the overall performance of the job. (The first consequence: degraded performance.)
More importantly, if any member of a cooperating group of parallel I/O server processes cannot obtain a file for output, then the export operation will be stopped with an ORA-39095 error. (This is what raises ORA-39095.) Both situations can be corrected by attaching to the job using the Data Pump Export utility, adding more files using the ADD_FILE command while in interactive mode, and in the case of a stopped job, restarting the job.
In plain terms: the parallel I/O server processes cannot all write to the same file at once, so a single dump file (or fewer files than PARALLEL) degrades performance. Worse, once an I/O server process is left waiting with no dump file to write to, ORA-39095 is raised.
To fix it: attach to the job with expdp ATTACH, then lower PARALLEL or add more dump files.
To prevent it at the source: whenever you set PARALLEL, specify at least that many dump files, or use a substitution variable as in the following command (note the %u in DUMPFILE):
expdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
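Reattaching to a stopped job then looks roughly like this; a sketch of an interactive Data Pump session, where SYS_EXPORT_SCHEMA_01 is the job name reported in the error message above, and ADD_FILE, PARALLEL, START_JOB, and CONTINUE_CLIENT are standard interactive-mode commands:

```
$ expdp system/oracle ATTACH=SYS_EXPORT_SCHEMA_01

Export> ADD_FILE=extra_%u.dmp
Export> START_JOB
Export> CONTINUE_CLIENT
```

ADD_FILE grows the dump file set (the %u substitution variable generates numbered files); alternatively, lowering the degree with the interactive PARALLEL command to match the existing file count also clears the condition, and START_JOB resumes the stopped job.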