IMPORT UTILITY PART1


imp username/password@igrp2_132.159.206.29 file=d:/carmot_xx.dmp fromuser=carmot_xx touser=carmot_xx

Table Objects: Order of Import

1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, functional, and domain indexes



Before Using Import

Run the catexp.sql or catalog.sql script, which prepares the database for Import by performing the following tasks:

** Assign all necessary privileges to the IMP_FULL_DATABASE role.

** Assign IMP_FULL_DATABASE to the DBA role.

** Create required views of the data dictionary.

** Verifying Access Privileges

Verify that you have the required access privileges

** To use Import, you need the privilege CREATE SESSION to log on to the Oracle database server. This privilege belongs to the CONNECT role established during database creation.

** If the export file was created by a user with the EXP_FULL_DATABASE role, you can import that file only if you have the IMP_FULL_DATABASE role.
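Before running Import, a quick way to confirm that the needed roles are enabled in your current session is to query the SESSION_ROLES data dictionary view (a simple sanity check; the role names listed are the ones discussed above):

SQL> SELECT role FROM session_roles WHERE role IN ('CONNECT', 'IMP_FULL_DATABASE', 'DBA');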

Privileges Required to Import Objects into Your Own Schema

Object: Required Privileges (Privilege Type)

Clusters: CREATE CLUSTER (System); and tablespace quota or UNLIMITED TABLESPACE (System)
Database links: CREATE DATABASE LINK (System); and CREATE SESSION on the remote database (System)
Triggers on tables: CREATE TRIGGER (System)
Triggers on schemas: CREATE ANY TRIGGER (System)
Indexes: CREATE INDEX (System); and tablespace quota or UNLIMITED TABLESPACE (System)
Integrity constraints: ALTER TABLE (Object)
Libraries: CREATE ANY LIBRARY (System)
Packages: CREATE PROCEDURE (System)
Private synonyms: CREATE SYNONYM (System)
Sequences: CREATE SEQUENCE (System)
Snapshots: CREATE SNAPSHOT (System)
Stored functions: CREATE PROCEDURE (System)
Stored procedures: CREATE PROCEDURE (System)
Table data: INSERT TABLE (Object)
Table definitions (including comments and audit options): CREATE TABLE (System); and tablespace quota or UNLIMITED TABLESPACE (System)
Views: CREATE VIEW (System); and SELECT on the base table (Object) or SELECT ANY TABLE (System)
Object types: CREATE TYPE (System)
Foreign function libraries: CREATE LIBRARY (System)
Dimensions: CREATE DIMENSION (System)
Operators: CREATE OPERATOR (System)
Indextypes: CREATE INDEXTYPE (System)

** Importing Grants

Grant: Conditions

Object privileges: The object must exist in the user's schema, or the user must have the object privileges with the WITH GRANT OPTION, or the user must have the IMP_FULL_DATABASE role enabled.

System privileges: The user must have the system privilege as well as the WITH ADMIN OPTION.

** Importing Objects into Other Schemas: To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.

** Importing System Objects: To import system objects from a full database export file, the role IMP_FULL_DATABASE must be enabled.

** The parameter FULL specifies that these system objects are included in the import when the export file is a full export: Profiles, Public database links, Public synonyms, Roles, Rollback segment definitions, Resource costs, Foreign function libraries, Context objects, System procedural objects, System audit options, System privileges, Tablespace definitions, Tablespace quotas, User definitions, Directory aliases, System event triggers

Invoking Import

Command-Line Entries

imp username/password PARAMETER=value

or imp username/password PARAMETER=(value1,value2,...,valuen)

The number of parameters cannot exceed the maximum length of a command line on the system.
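For example, a hypothetical table-mode invocation that passes multiple values to a single parameter might look like this (the file and table names are illustrative):

imp scott/tiger FILE=expdat.dmp TABLES=(emp,dept) ROWS=y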

Interactive Import Prompts

imp username/password

Import will display each parameter with a request for you to enter a value.

Parameter Files

You can specify all valid parameters and their values in a parameter file.

Create the parameter file using any flat file text editor. The command-line option PARFILE=filename tells Import to read the parameters from the specified file rather than from the command line. For example:

imp PARFILE=filename

imp username/password PARFILE=filename

The syntax for parameter file specifications is one of the following:

PARAMETER=value

PARAMETER=(value)

PARAMETER=(value1, value2, ...)

e.g.: The following example shows a partial parameter file listing:

FULL=y

FILE=dba.dmp

INDEXES=y

CONSISTENT=y

You can add comments to the parameter file by preceding them with the pound (#) sign. Import ignores all characters to the right of the pound (#) sign.

You can specify a parameter file at the same time that you are entering parameters on the command line.
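As an illustration, the following sketch shows a commented parameter file combined with an additional command-line parameter (the filenames are assumptions for this example):

# imp_scott.par -- import two of scott's tables
FILE=scott.dmp
FROMUSER=scott
TABLES=(dept,emp)    # only these two tables
LOG=imp_scott.log

imp SYSTEM/password PARFILE=imp_scott.par IGNORE=y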

Invoking Import As SYSDBA

SYSDBA is used internally and has specialized functions; its behavior is not the same as for generalized users. Therefore, you should not typically need to invoke Import as SYSDBA, except in the following situations: 1) At the request of Oracle technical support 2) When importing a transportable tablespace set

To invoke Import as SYSDBA, use the following syntax, adding any desired parameters or parameter filenames:

imp \'username/password AS SYSDBA\'

or, optionally: imp \'username/password@instance AS SYSDBA\'



Import Modes

Full--Only users with the IMP_FULL_DATABASE role can import a full database export dump file. Use the FULL parameter to specify this mode.

Tablespace-- allows a privileged user to move a set of tablespaces from one Oracle database to another. Use the TRANSPORT_TABLESPACE parameter to specify this mode.

User (Owner)--allows you to import all objects that belong to you (such as tables, grants, indexes, and procedures). A privileged user importing in user mode can import all objects in the schemas of a specified set of users. Use the FROMUSER parameter to specify this mode.

Table--allows you to import specific tables and partitions. A privileged user can qualify the tables by specifying the schema that contains them. Use the TABLES parameter to specify this mode.

1) All users can import in table mode and user mode.

2) Users with the IMP_FULL_DATABASE role (privileged users) can import in all modes.

3) A user with the IMP_FULL_DATABASE role must specify one of these modes. Otherwise, an error results.

4) If a user without the IMP_FULL_DATABASE role fails to specify one of these modes, a user-level import is performed.

BUFFER

Default: operating system-dependent

buffer_size = rows_in_array * maximum_row_size

For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns.
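As a rough worked example (the row width and array size are assumptions, not measured values): if the widest row in a table is about 2 KB and you want array inserts of 100 rows at a time, the buffer must be at least 100 * 2048 = 204800 bytes:

imp scott/tiger FILE=expdat.dmp TABLES=emp BUFFER=204800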

CHARSET

This parameter applies to Oracle Version 5 and 6 export files only.

COMMIT

Default: n

Specifies whether Import should commit after each array insert. By default, Import commits only after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.

If COMMIT=n and a table is partitioned, each partition and subpartition in the Export file is imported in a separate transaction.

Specifying COMMIT=y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a recoverable error.

If a table does not have a uniqueness constraint, Import could produce duplicate rows when you reimport the data.

For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done. If COMMIT=y, Import commits these tables after each row.
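For instance, a sketch of a large-table import that commits after each array insert rather than once per table (the table name and buffer size are illustrative):

imp scott/tiger FILE=expdat.dmp TABLES=big_table COMMIT=y BUFFER=1048576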

COMPILE

Default: y

Specifies whether or not Import should compile packages, procedures, and functions as they are created.

If COMPILE=n, these units are compiled on their first use.

CONSTRAINTS

Default: y

Specifies whether or not table constraints are to be imported. Note that primary key constraints for index-organized tables (IOTs) and object tables are always imported.

DATAFILES

Default: none

When TRANSPORT_TABLESPACE is specified as y, use this parameter to list the datafiles to be transported into the database.

DESTROY

Default: n

Specifies whether or not the existing datafiles making up the database should be reused. That is, specifying DESTROY=y causes Import to include the REUSE option in the datafile clause of the CREATE TABLESPACE statement, which causes Import to reuse the original database's datafiles after deleting their contents.

Note that the export file contains the datafile names used in each tablespace. If you specify DESTROY=y and attempt to create a second database on the same system (for testing or other purposes), the Import utility will overwrite the first database's datafiles when it creates the tablespace. In this situation you should use the default, DESTROY=n, so that an error occurs if the datafiles already exist when the tablespace is created. Also, when you need to import into the original database, you will need to specify IGNORE=y to add to the existing datafiles without replacing them.

FEEDBACK

Default: 0 (zero)

Specifies that Import should display a progress meter in the form of a period for each n rows imported. For example, if you specify FEEDBACK=10, Import displays a period each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a per-table basis.

FILE

Default: expdat.dmp

Specifies the names of the export files to import. The default extension is .dmp. Because Export supports multiple export files (see the following description of the FILESIZE parameter), you may need to specify multiple filenames to be imported.

If you were not the exporter of the export files, you must also have the IMP_FULL_DATABASE role granted to you.

FILESIZE

Default: operating-system dependent
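As an illustration only, an import of an export that was written as multiple 2 GB pieces might name all of the pieces and repeat the FILESIZE used at export time (the filenames and size are assumptions):

imp SYSTEM/password FULL=y FILE=(full1.dmp,full2.dmp,full3.dmp) FILESIZE=2GB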

FROMUSER

Default: none

A comma-separated list of schemas to import. This parameter is relevant only to users with the IMP_FULL_DATABASE role. Objects are imported into the FROMUSER schema if the export file is a full dump or a multischema, user-mode export dump file.

FULL

Default: n

Specifies whether to import the entire export file.

GRANTS

Default: y

Specifies whether to import object grants.

By default, the Import utility imports any object grants that were exported. If the export was a user-mode Export, the export file contains only first-level object grants (those granted by the owner).

If the export was a full database mode Export, the export file contains all object grants, including lower-level grants (those granted by users given a privilege with the WITH GRANT OPTION). If you specify GRANTS=n, the Import utility does not import object grants. (Note that system grants are imported even if GRANTS=n.)

HELP

Default: n

Displays a description of the Import parameters.

IGNORE

Default: n

Specifies how object creation errors should be handled. If you specify IGNORE=y, Import overlooks object creation errors when it attempts to create database objects, and continues without reporting the errors. Even if IGNORE=y, Import will not replace an existing object; instead, it will skip the object.

In situations where multiple refreshes from a single export file are done with IGNORE=y, certain objects can be created multiple times (although they will have unique system-defined names). You can prevent this for certain objects (for example, constraints) by doing an import with CONSTRAINTS=n. If you do a full import with CONSTRAINTS=n, no constraints for any tables are imported.
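For example, a hypothetical refresh of already-existing tables that skips object creation errors and avoids re-importing constraints could be run as:

imp scott/tiger FILE=expdat.dmp TABLES=(dept,emp) IGNORE=y CONSTRAINTS=n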

INDEXES

Default: y

Specifies whether or not to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique constraint indexes are re-created by Import regardless of the setting of this parameter.

You can postpone all user-generated index creation until after Import completes, by specifying INDEXES=n.

If indexes for the target table already exist at the time of the import, Import performs index maintenance when data is inserted into the table.

INDEXFILE

Default: none

Specifies a file to receive index-creation statements.

When this parameter is specified, index-creation statements for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. No database objects are imported.

If the Import parameter CONSTRAINTS is set to y, Import also writes table constraints to the index file.

The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes.

To make it easier to identify the indexes defined in the file, the export file's CREATE TABLE statements and CREATE CLUSTER statements are included as comments.

Perform the following steps to use this feature:

Import using the INDEXFILE parameter to create a file of index-creation statements.

Edit the file, making certain to add a valid password to the connect strings.

Rerun Import, specifying INDEXES=n.

(This step imports the database objects while preventing Import from using the index definitions stored in the export file.)

Execute the file of index-creation statements as a SQL script to create the indexes.

The INDEXFILE parameter can be used only with the FULL=y, FROMUSER, TOUSER, or TABLES parameters.
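A minimal sketch of this workflow (the index file name is an assumption; edit the generated script between the two imp runs):

imp scott/tiger FILE=expdat.dmp TABLES=(emp,dept) INDEXFILE=scott_indexes.sql

imp scott/tiger FILE=expdat.dmp TABLES=(emp,dept) INDEXES=n

SQL> @scott_indexes.sql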

LOG

Default: none

Specifies a file to receive informational and error messages. If you specify a log file, the Import utility writes all information to the log in addition to the terminal display.

PARFILE

Default: none

Specifies a filename for a file that contains a list of Import parameters.

RECORDLENGTH

Default: operating system dependent

Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.

You can set RECORDLENGTH to any value equal to or greater than your system's BUFSIZ. (The highest value is 64 KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before writing to the database. It does not affect the operating system file block size.

You can also use this parameter to specify the size of the Import I/O buffer.

RESUMABLE

Default: n

The RESUMABLE parameter is used to enable and disable resumable space allocation. Because this parameter is disabled by default, you must set RESUMABLE=y in order to use its associated parameters, RESUMABLE_NAME and RESUMABLE_TIMEOUT.

RESUMABLE_NAME

Default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'

RESUMABLE_TIMEOUT

Default: 7200 seconds (2 hours)
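An illustrative invocation that enables resumable space allocation with a one-hour timeout (the resumable name is arbitrary):

imp SYSTEM/password FILE=expdat.dmp FULL=y RESUMABLE=y RESUMABLE_NAME='nightly_import' RESUMABLE_TIMEOUT=3600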

ROWS

Default: y

Specifies whether or not to import the rows of table data.

SHOW

Default: n

When SHOW=y, the contents of the export file are listed to the display and not imported. The SQL statements contained in the export are displayed in the order in which Import will execute them.

The SHOW parameter can be used only with the FULL=y, FROMUSER, TOUSER, or TABLES parameter.

SKIP_UNUSABLE_INDEXES

Default: n

Specifies whether or not Import skips building indexes that were set to the Index Unusable state (by either system or user). Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted.

This parameter allows you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the Import.

You can use the INDEXFILE parameter in conjunction with INDEXES=n to provide the SQL scripts for re-creating the index. Without this parameter, row insertions that attempt to update unusable indexes will fail.

STATISTICS

Default: ALWAYS

Specifies what is done with the database optimizer statistics at import time.

The options are:

1) ALWAYS: Always import database optimizer statistics, regardless of whether or not they are questionable.

2) NONE: Do not import or recalculate the database optimizer statistics.

3) SAFE: Import the statistics only if they are not questionable; if they are questionable, recalculate them on import.

4) RECALCULATE: Do not import the database optimizer statistics. Instead, recalculate them on import.
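For example, to import the data but have the optimizer statistics recalculated rather than imported (the choice of option here is purely illustrative):

imp scott/tiger FILE=expdat.dmp TABLES=emp STATISTICS=RECALCULATE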

TABLES

Default: none

Specifies that the Import is a table-mode import and lists the table names and partition and subpartition names to import. Table-mode import lets you import entire partitioned or nonpartitioned tables. Parameters:

1) tablename: If a table in the list is partitioned and you do not specify a partition name, all its partitions and subpartitions are imported. To import all the exported tables, specify an asterisk (*) as the only table name parameter. tablename can contain any number of '%' pattern-matching characters, which can each match zero or more characters in the table names in the export file.

2) partition_name and subpartition_name: let you restrict the import to one or more specified partitions or subpartitions within a partitioned table.

The syntax you use to specify the preceding is in the form:

tablename:partition_name

tablename:subpartition_name

Table Name Restrictions

1)By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.

In command-line mode:

tables='\"Emp\"'

In interactive mode:

Table(T) to be imported: "Emp"

In parameter file mode:

tables='"Emp"'

2)Table names specified on the command line cannot include a pound (#) sign, unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound (#) sign, the Import utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.

TABLES=(emp#, dept, mydata)

TABLES=("emp#", dept, mydata)

TABLESPACES

Default: none

When TRANSPORT_TABLESPACE is specified as y, use this parameter to provide a list of tablespaces to be transported into the database.

TOID_NOVALIDATE

Default: none

When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).

To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.

In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the TOID_NOVALIDATE parameter to specify types to exclude from TOID comparison.

The syntax is as follows:

TOID_NOVALIDATE=([schemaname.]typename [, ...])

For example:

imp scott/tiger TABLES=foo TOID_NOVALIDATE=bar

imp scott/tiger TABLES=foo TOID_NOVALIDATE=(fred.type0,sally.type2,type3)

TOUSER

Default: none

Specifies a list of usernames whose schemas will be targets for Import. The
IMP_FULL_DATABASE role is required to use this parameter. To import to a different schema than the one that originally contained the object, specify TOUSER. For example:

imp SYSTEM/password FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, the schema names are paired. The following example imports scott's objects into joe's schema, and fred's objects into ted's schema:

imp SYSTEM/password FROMUSER=scott,fred TOUSER=joe,ted

If the FROMUSER list is longer than the TOUSER list, the remaining schemas will be imported into either the FROMUSER schema, or into the importer's schema, based on normal defaulting rules. You can use the following syntax to ensure that any extra objects go into the TOUSER schema:

imp SYSTEM/password FROMUSER=scott,adams TOUSER=ted,ted

Note that user Ted is listed twice.

TRANSPORT_TABLESPACE

Default: n

When specified as y, instructs Import to import transportable tablespace metadata from an export file.

TTS_OWNERS

Default: none

When TRANSPORT_TABLESPACE is specified as y, use this parameter to list the users who own the data in the transportable tablespace set.
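Taken together, a transportable tablespace import might be sketched as follows (the dump file name, datafile path, tablespace name, and owner are assumptions for illustration):

imp SYSTEM/password TRANSPORT_TABLESPACE=y FILE=tts.dmp DATAFILES='/u01/oradata/users01.dbf' TABLESPACES=users TTS_OWNERS=scott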

USERID (username/password)

Default: none

Specifies the username/password (and optional connect string) of the user performing the import.

USERID can also be:

username/password AS SYSDBA

or

username/password@instance

or

username/password@instance AS SYSDBA

If you connect as user SYS, you must also specify AS SYSDBA in the connect string. Your operating system may require you to treat AS SYSDBA as a special string, in which case the entire string would be enclosed in quotation marks.

VOLSIZE

Specifies the maximum number of bytes in an export file on each volume of tape.

The VOLSIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits. See your operating system-specific documentation for more information.

The VOLSIZE value can be specified as a number followed by KB (number of kilobytes). For example, VOLSIZE=2KB is the same as VOLSIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). The shorthand for bytes remains B; the number is not multiplied to get the final file size (VOLSIZE=2048B is the same as VOLSIZE=2048).



Example Import Sessions

Example Import of Selected Tables for a Specific User

In this example, using a full database export file, an administrator imports the dept and emp tables into the scott schema.

Parameter File Method

> imp SYSTEM/password PARFILE=params.dat

The params.dat file contains the following information:

FILE=dba.dmp

SHOW=n

IGNORE=n

GRANTS=y

FROMUSER=scott

TABLES=(dept,emp)

Command-Line Method

> imp SYSTEM/password FILE=dba.dmp FROMUSER=scott TABLES=(dept,emp)

Example Import of Tables Exported by Another User

Parameter File Method

> imp SYSTEM/password PARFILE=params.dat

The params.dat file contains the following information:

FILE=blake.dmp

SHOW=n

IGNORE=n

GRANTS=y

ROWS=y

FROMUSER=blake

TOUSER=scott

TABLES=(unit,manager)

Command-Line Method

> imp SYSTEM/password FROMUSER=blake TOUSER=scott FILE=blake.dmp -

TABLES=(unit,manager)

Example Import of Tables from One User to Another

Parameter File Method

> imp SYSTEM/password PARFILE=params.dat

The params.dat file contains the following information:

FILE=scott.dmp

FROMUSER=scott

TOUSER=blake

TABLES=(*)

Command-Line Method

> imp SYSTEM/password FILE=scott.dmp FROMUSER=scott TOUSER=blake TABLES=(*)

Example Import Session Using Partition-Level Import

Example 1: A Partition-Level Import

In this example, emp is a partitioned table with three partitions: p1, p2, and p3.

A table-level export file was created using the following command:

> exp scott/tiger TABLES=emp FILE=exmpexp.dat ROWS=y

In a partition-level import you can specify the specific partitions of an exported table that you want to import. In this example, these are p1 and p3 of table emp:

> imp scott/tiger TABLES=(emp:p1,emp:p3) FILE=exmpexp.dat ROWS=y

Example 2: A Partition-Level Import of a Composite Partitioned Table

This example demonstrates that the partitions and subpartitions of a composite partitioned table are imported. emp is a partitioned table with two composite partitions: p1 and p2. P1 has three subpartitions: p1_sp1, p1_sp2, and p1_sp3. P2 has two subpartitions: p2_sp1 and p2_sp2.

A table-level export file was created using the following command:

> exp scott/tiger TABLES=emp FILE=exmpexp.dat ROWS=y

The following import command results in the importing of subpartition p1_sp2 and p1_sp3 of composite partition p1 in table emp and all subpartitions of composite partition p2 in table emp.

> imp scott/tiger TABLES=(emp:p1_sp2,emp:p1_sp3,emp:p2) FILE=exmpexp.dat ROWS=y

Example 3: Repartitioning a Table on a Different Column

This example assumes the emp table has two partitions based on the empno column. This example repartitions the emp table on the deptno column.

Perform the following steps to repartition a table on a different column: 1)Export the table to save the data. 2)Drop the table from the database. 3)Create the table again with the new partitions. 4)Import the table data.

The following example illustrates how to perform these steps.

> exp scott/tiger TABLES=emp FILE=empexp.dat

SQL> connect scott/tiger

Connected.

SQL> drop table emp cascade constraints;

Statement processed.

SQL> create table emp

2> (

3> empno number(4) not null,

4> ename varchar2(10),

5> job varchar2(9),

6> mgr number(4),

7> hiredate date,

8> sal number(7,2),

9> comm number(7,2),

10> deptno number(2)

11> )

12> partition by range (deptno)

13> (

14> partition dept_low values less than (15)

15> tablespace tbs_1,

16> partition dept_mid values less than (25)

17> tablespace tbs_2,

18> partition dept_high values less than (35)

19> tablespace tbs_3

20> );

Statement processed.

SQL> exit

> imp scott/tiger tables=emp file=empexp.dat ignore=y

The following SELECT statements show that the data is partitioned on the deptno column:

SQL> connect scott/tiger

Connected.

SQL> select empno, deptno from emp partition (dept_low);

EMPNO DEPTNO

---------- ----------

7782 10

7839 10

7934 10

3 rows selected.

SQL> select empno, deptno from emp partition (dept_mid);

EMPNO DEPTNO

---------- ----------

7369 20

7566 20

7788 20

7876 20

7902 20

5 rows selected.

SQL> select empno, deptno from emp partition (dept_high);

EMPNO DEPTNO

---------- ----------

7499 30

7521 30

7654 30

7698 30

7844 30

7900 30

6 rows selected.

SQL> exit;

Example Import Using Pattern Matching to Import Various Tables

Parameter File Method

imp SYSTEM/password PARFILE=params.dat

The params.dat file contains the following information:

FILE=scott.dmp

IGNORE=n

GRANTS=y

ROWS=y

FROMUSER=scott

TABLES=(%d%,b%s)

Command-Line Method

imp SYSTEM/password FROMUSER=scott FILE=scott.dmp TABLES=(%d%,b%s)



Using the Interactive Method

The following table shows the possibilities:

You enter...                                Import's Response
imp username/password@instance as sysdba    Starts an Import session
imp username/password@instance              Starts an Import session
imp username/password as sysdba             Starts an Import session
imp username/password                       Starts an Import session
imp username@instance as sysdba             Prompts for password
imp username@instance                       Prompts for password
imp username                                Prompts for password
imp username as sysdba                      Prompts for password
imp / as sysdba                             No prompt for password; operating system authentication is used
imp /                                       No prompt for password; operating system authentication is used
imp /@instance as sysdba                    No prompt for password; operating system authentication is used
imp /@instance                              No prompt for password; operating system authentication is used



Warning Messages

Import does not terminate after recoverable errors.

Import also issues a warning whenever it encounters an invalid object.

Nonrecoverable Error Messages

Completion Messages

Exit Codes for Inspection and Display

Result                                            Exit Code
Import terminated successfully without warnings   EX_SUCC
Import terminated successfully with warnings      EX_OKWARN
Import terminated unsuccessfully                  EX_FAIL

For UNIX, the exit codes are as follows:

EX_SUCC   0
EX_OKWARN 0
EX_FAIL   1



Error Handling During an Import

Row Errors

If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.

A "tablespace full" error can suspend the import if the RESUMABLE=y parameter is specified.

Row errors are caused by failed integrity constraints (NOT NULL constraints, uniqueness constraints, primary key (not null and unique) constraints, referential integrity constraints, and check constraints) or by invalid data.

Errors Importing Database Objects

Errors can occur for many reasons when you import database objects, as described in this section. When these errors occur, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file.

1) Object Already Exists: If IGNORE=n (the default), the error is reported, and Import continues with the next database object. The current database object is not replaced. If IGNORE=y, object creation errors are not reported. The database object is not replaced. If the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors (such as operating system, database, and SQL errors) are reported and processing may stop.

2) Sequences: If sequence numbers need to be reset to the value in an export file as part of an import, you should drop sequences before importing. If the sequence already exists, the export file's CREATE SEQUENCE statement fails and the sequence is not imported.

Resource Errors

If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If you have specified COMMIT=y, Import commits the partial import of the current table. If not, a rollback of the current table occurs before Import continues.

Domain Index Metadata

Domain indexes can have associated application-specific metadata that is imported using anonymous PL/SQL blocks. These PL/SQL blocks are executed at import time prior to the CREATE INDEX statement. If a PL/SQL block causes an error, the associated index is not created because the metadata is considered an integral part of the index.



Table-Level and Partition-Level Import

You can import tables, partitions, and subpartitions in the following ways:

Table-level Import: imports all data from the specified tables in an Export file.

Partition-level Import: imports only data from the specified source partitions or subpartitions.

You must set the parameter IGNORE=y when loading data into an existing table.

Guidelines for Using Table-Level Import

For each specified table, table-level Import imports all rows of the table. With table-level Import:

1) All tables exported using any Export mode (except TRANSPORT_TABLESPACES) can be imported.

2) Users can import the entire (partitioned or nonpartitioned) table, partitions, or subpartitions from a table-level export file into a (partitioned or nonpartitioned) target table with the same name.

3) If the table does not exist, and if the exported table was partitioned, table-level Import creates a partitioned table. If the table creation is successful, table-level Import reads all source data from the export file into the target table. After Import, the target table contains the partition definitions of all partitions and subpartitions associated with the source table in the export file. This operation ensures that the physical and logical attributes (including partition bounds) of the source partitions are maintained on Import.

Guidelines for Using Partition-Level Import

1.Import always stores the rows according to the partitioning scheme of the target table.

2.Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

3.If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

4.Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

5.Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

6.If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

7. If ROWS=y (the default), and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

If ROWS=y (the default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

Migrating Data Across Partitions and Tables

1. In the following example, the partition specified by the partition-name is a composite partition. All of its subpartitions will be imported:

imp SYSTEM/password FILE=expdat.dmp FROMUSER=scott TABLES=b:py

2. The following example causes row data of partitions qc and qd of table scott.e to be imported into the table scott.e:

imp scott/tiger FILE=expdat.dmp TABLES=(e:qc, e:qd) IGNORE=y



Controlling Index Creation and Maintenance

Delaying Index Creation

Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. You can postpone creation of indexes until after the Import completes by specifying INDEXES=n. (INDEXES=y is the default.) You can then store the missing index definitions in a SQL script by running Import while using the INDEXFILE parameter. The index-creation statements that would otherwise be issued by Import are instead stored in the specified file.

Index Creation and Maintenance Controls

If SKIP_UNUSABLE_INDEXES=y, the Import utility postpones maintenance on all indexes that were set to Index Unusable before Import. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted. This approach saves on index updates during import of existing tables.

Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. The existence of a unique integrity constraint on a table does not prevent existence of duplicate keys in a table that was imported with INDEXES=n. The supporting index will be in an UNUSABLE state until the duplicates are removed and the index is rebuilt.

Example of Postponing Index Maintenance

For example, assume that partitioned table t with partitions p1 and p2 exists on the Import target system. Assume that local indexes p1_ind on partition p1 and p2_ind on partition p2 exist also. Assume that partition p1 contains a much larger amount of data in the existing table t, compared with the amount of data to be inserted by the Export file (expdat.dmp). Assume that the reverse is true for p2.

Consequently, performing index updates for p1_ind during table data insertion time is more efficient than at partition index rebuild time. The opposite is true for p2_ind.

Users can postpone local index maintenance for p2_ind during Import by using the following steps:

Issue the following SQL statement before Import:

ALTER TABLE t MODIFY PARTITION p2 UNUSABLE LOCAL INDEXES;

Issue the following Import command:

imp scott/tiger FILE=expdat.dmp TABLES = (t:p1, t:p2) IGNORE=y SKIP_UNUSABLE_INDEXES=y

This example executes the ALTER SESSION SET SKIP_UNUSABLE_INDEXES=y statement before performing the import.

Issue the following SQL statement after Import:

ALTER TABLE t MODIFY PARTITION p2 REBUILD UNUSABLE LOCAL INDEXES;

In this example, local index p1_ind on p1 will be updated when table data is inserted into partition p1 during Import. Local index p2_ind on p2 will be updated at index rebuild time, after Import.



Reducing Database Fragmentation

You can reduce fragmentation by performing a full database export and import as follows:

Do a full database export (FULL=y) to back up the entire database.

Shut down the Oracle database server after all users are logged off.

Delete the database. See your Oracle operating system-specific documentation for information on how to delete a database.

Re-create the database using the CREATE DATABASE statement.

Do a full database import (FULL=y) to restore the entire database.
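A condensed sketch of the export (step 1) and import (step 5) commands for this procedure (the dump file name is illustrative; the shutdown, database deletion, and CREATE DATABASE steps take place between the two commands):

exp SYSTEM/password FULL=y FILE=fulldb.dmp

imp SYSTEM/password FULL=y FILE=fulldb.dmp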



Network Considerations

Transporting Export Files Across a Network

Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network. 

Exporting and Importing with Oracle Net

With Oracle Net, you can perform exports and imports over a network.



To use Import with Oracle Net, include the connection qualifier string @connect_string when entering the username/password in the exp or imp command. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol.



Character Set and Globalization Support Considerations

Character Set Conversion

User Data

Data of datatypes CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, and NCLOB are written to the export file directly in the character sets of the source database. If the character sets of the source database are different from the character sets of the import database, a single conversion is performed.

Data Definition Language (DDL)

Export writes export files using the character set specified in the NLS_LANG environment variable for the user session. A character set conversion is performed if the value of NLS_LANG differs from the database character set.

If the export file's character set is different from the Import user session character set, then Import converts the character set to its user session character set. Import can perform this conversion only for single-byte character sets. This means that for multibyte character sets, the Import user session character set must be identical to the export file's character set.

A final character set conversion may be performed if the target database's character set is different from Import's user session character set.

To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set.

Import and Single-Byte Character Sets

Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. This occurs if the system on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is apparent when accented characters lose the accent mark.

To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
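For example, on a UNIX-like system you might set NLS_LANG to the export file's character set before running Import (the language, territory, and character set shown are assumptions for illustration):

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG

imp scott/tiger FILE=expdat.dmp TABLES=emp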

Import and Multibyte Character Sets

For multibyte character sets, the Import user session character set must be identical to the export file's character set. This is because Import can perform character set conversion only for single-byte character sets.

During the conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.


