Partition exchange


How to move or transport table partition using Transportable Table Space (TTS)? [ID 731559.1]

--------------------------------------------------------------------------------
 
  Modified 03-AUG-2009     Type HOWTO     Status MODERATED  

In this Document
  Goal
  Solution

 

--------------------------------------------------------------------------------


This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.

 

Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 10.2.0.4
Information in this document applies to any platform.

Goal
This article shows by example how to move or transport a table partition using Transportable Tablespaces (TTS) in combination with the partition exchange option.

This method may help in scenarios such as:
- Backing up / restoring table partition(s).
- ETL from OLTP to DWH databases, as a fast data-load method for table partition(s).

Facts:
. To use TTS we must have a self-contained tablespace (TBS)
-- self-contained means the base object and all related/dependent objects are included in the export set.
. A table partition is part of a globally partitioned table even if it lives in a separate TBS, so its TBS is not self-contained on its own.
. So we will combine the TTS and partition exchange features to turn the partition into a regular table first.


Basic idea behind this method (see the sketch after this list):
. Create a regular table to hold the partition data.
. Exchange the partition's data with that table.
. Drop the now-empty table partition.
. Now we can transport the TBS with TTS.
. Turn it back when needed, or plug it into another database.
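
A minimal sketch of those steps, using hypothetical object names (part_tab, part_copy, part_dat_tbs, p_old); the full, tested example with real names follows in the Solution:

CREATE TABLE part_copy TABLESPACE part_dat_tbs AS
   SELECT * FROM part_tab WHERE 1=0;                 -- empty clone of the partitioned table's structure

ALTER TABLE part_tab EXCHANGE PARTITION p_old
   WITH TABLE part_copy INCLUDING INDEXES;           -- swap segments; no rows are copied

ALTER TABLE part_tab DROP PARTITION p_old;           -- the partition is now empty, so drop it

ALTER TABLESPACE part_dat_tbs READ ONLY;             -- the TBS now holds only a regular table
-- ...then export its metadata with expdp TRANSPORT_TABLESPACES and copy the datafile(s)
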
Solution
In this scenario we simulate a transactional OLTP system which is required to keep three years of data on-line and archive older data in another database. We'll start by creating a transaction table, range partitioned on transaction date. There will be one partition for each year's worth of business. Furthermore, we'll store each partition in its own tablespace to facilitate easy transport when we're ready to archive. The data will be indexed to enforce the primary key constraint. For efficiency, the indexes will be LOCAL PARTITIONED, meaning that each table partition will have its own corresponding index partition in its own tablespace. Individual table and index partitions can then be treated as one unit. Just to make life interesting, the source DB will be on Linux and the target on Windows, both 10g.


PART 1 - Design and create the source table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start by creating five tablespaces for five years' worth of data, and five more for indexes. Note the small initial size and the use of AUTOEXTEND with MAXSIZE. We don't want to transport files larger than they need to be.

CREATE TABLESPACE ttsdat1 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsdat1.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;
CREATE TABLESPACE ttsidx1 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsidx1.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;

CREATE TABLESPACE ttsdat2 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsdat2.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;
CREATE TABLESPACE ttsidx2 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsidx2.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;

CREATE TABLESPACE ttsdat3 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsdat3.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;
CREATE TABLESPACE ttsidx3 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsidx3.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;

CREATE TABLESPACE ttsdat4 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsdat4.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;
CREATE TABLESPACE ttsidx4 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsidx4.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;

CREATE TABLESPACE ttsdat5 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsdat5.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;
CREATE TABLESPACE ttsidx5 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsidx5.dbf' SIZE 1M AUTOEXTEND ON MAXSIZE 50M;

SELECT tablespace_name, status, plugged_in FROM dba_tablespaces;
Now, create the range partitioned table using transaction date as the partition key. Storage parameters, such as tablespace, can be different for each partition.

CREATE TABLE txns (
  trans_id  NUMBER(12),
   trans_dt  DATE,
   from_acct CHAR(10),
   to_acct   CHAR(10),
   amount    NUMBER(12,2))
   PARTITION BY RANGE (trans_dt)
      ( PARTITION fy2004 VALUES LESS THAN (to_date('2005-01-01','yyyy-mm-dd') )
           TABLESPACE ttsdat1,
        PARTITION fy2005 VALUES LESS THAN (to_date('2006-01-01','yyyy-mm-dd') )
           TABLESPACE ttsdat2,
        PARTITION fy2006 VALUES LESS THAN (to_date('2007-01-01','yyyy-mm-dd') )
           TABLESPACE ttsdat3,
        PARTITION fy2007 VALUES LESS THAN (to_date('2008-01-01','yyyy-mm-dd') )
           TABLESPACE ttsdat4,
        PARTITION fy2008 VALUES LESS THAN (to_date('2009-01-01','yyyy-mm-dd') )
           TABLESPACE ttsdat5 );
Create a unique index to enforce the primary key constraint. For manageability we want a LOCAL PARTITIONED index so that removing or adding a partition will not require an index rebuild in either source or target databases. However, this requires the partitioning key be included in the index, which makes one consider how much additional storage and/or processing overhead this may incur. Again each index partition has its own storage parameters.

CREATE UNIQUE INDEX txns_pk_idx ON txns(trans_id,trans_dt) LOCAL
   ( PARTITION fy2004 TABLESPACE ttsidx1,
     PARTITION fy2005 TABLESPACE ttsidx2,
     PARTITION fy2006 TABLESPACE ttsidx3,
     PARTITION fy2007 TABLESPACE ttsidx4,
     PARTITION fy2008 TABLESPACE ttsidx5 );
Once the index is created we add the primary key constraint to the whole table. Remember this will also add a NOT NULL constraint to the two indexed columns.

ALTER TABLE txns ADD CONSTRAINT txns_pk PRIMARY KEY (trans_id, trans_dt)
   USING INDEX LOCAL
   ( PARTITION fy2004,
     PARTITION fy2005,
     PARTITION fy2006,
     PARTITION fy2007,
     PARTITION fy2008 );
Now it's time to create some test data. This little PL/SQL block will create a batch of rows that look like OLTP transaction data, spread roughly evenly over the last four years. Select a few rows from the table to convince yourself.

CREATE SEQUENCE trans_id_seq;
BEGIN
   -- Generate 25,000 rows of random "transaction" data, committing each row
   FOR i IN 1..25000 LOOP
      INSERT INTO txns SELECT
         trans_id_seq.nextval,
         SYSDATE + (to_number(2008 - to_char(sysdate, 'YYYY'))*365)
                 - ABS(dbms_random.random)/power(2,31)*365*4,          -- random date in a four-year window
         SUBSTR(TO_CHAR(dbms_random.random,'000000000000'),-10,10),    -- random from/to account numbers
         SUBSTR(TO_CHAR(dbms_random.random,'000000000000'),-10,10),
         TO_CHAR(ABS(dbms_random.random)/100000,'999999999.00')        -- random amount
      FROM dual;
      COMMIT;
   END LOOP;
END;
/
-- Check data existence
SELECT * FROM txns WHERE rownum < 25;
SELECT trunc(trans_dt,'YEAR'), count(*) FROM txns GROUP BY trunc(trans_dt,'YEAR');

It's time to check some data dictionary views to see what we've got. Save these queries for later use as we make changes to the table.
exec dbms_stats.gather_table_stats('&<owner>','TXNS',cascade=> True);
-- OR --
ANALYZE TABLE txns COMPUTE STATISTICS;
ANALYZE INDEX txns_pk_idx COMPUTE STATISTICS;

-- Number of rows in each table partition

SELECT partition_name, num_rows FROM user_tab_partitions;

-- Number of rows in each index partition

SELECT partition_name, num_rows, status FROM user_ind_partitions;

-- Number of rows in the entire table

SELECT table_name, num_rows FROM user_tables;

-- Size of each segment

SELECT segment_type, segment_name, partition_name, tablespace_name, blocks, extents FROM user_segments ORDER BY segment_type, segment_name, partition_name;

PART 2 - Unplug the oldest partition
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our "production" table is complete and contains at least one partition that's old enough to archive. The first thing to do is create a discrete table that is identical to the partition we want to remove. It must have the same columns, data types, constraints and indexes, and exist in the same tablespace as the partition. The index must also exist in the corresponding index tablespace.

CREATE TABLE txns_fy2004 TABLESPACE ttsdat1 AS SELECT * FROM txns WHERE 1=0;
ALTER TABLE txns_fy2004 ADD CONSTRAINT txns_fy2004_pk PRIMARY KEY (trans_id, trans_dt) USING INDEX TABLESPACE ttsidx1;
The new table contains no rows, and the index is empty. When we EXCHANGE a partition with a discrete table, we literally exchange data segments between the two. That's why they must match in structure and location. No data is actually moved, only internal pointers are changed.

ALTER TABLE txns EXCHANGE PARTITION fy2004  WITH TABLE txns_fy2004 INCLUDING INDEXES;
Count the rows in the new table and in the partitioned table before and after the EXCHANGE to convince yourself that the data did actually "move"; also check the size of the data segments and the status of the indexes (see the queries after the counts below).

SELECT COUNT(*) FROM txns PARTITION (fy2004);
SELECT COUNT(*) FROM txns_fy2004;
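For example, those checks might look like this (a sketch; the object names match what was created above, adjust to your own schema):

-- After the exchange the segments have swapped owners, so their sizes move with them
SELECT segment_type, segment_name, partition_name, tablespace_name, blocks, extents
  FROM user_segments
 WHERE segment_name IN ('TXNS', 'TXNS_FY2004', 'TXNS_PK_IDX', 'TXNS_FY2004_PK')
 ORDER BY segment_type, segment_name, partition_name;

-- Local index partitions and the discrete table's index should still be usable/valid
SELECT index_name, partition_name, status FROM user_ind_partitions;
SELECT index_name, status FROM user_indexes WHERE table_name = 'TXNS_FY2004';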
We no longer need the old partition as it is now empty, so we can drop it. If your goal was to simply transport this partition without removing it, such as in a data warehouse scenario, you could exchange the table back into the partition after transporting the tablespace.
ALTER TABLE txns DROP PARTITION fy2004;
To get ready, make the tablespaces we are transporting (data and index) READ ONLY to ensure there are no active transactions and that none start during transport. The rest of the partitions are unaffected.

ALTER TABLESPACE ttsdat1 READ ONLY;
ALTER TABLESPACE ttsidx1 READ ONLY;
Use the DBMS_TTS utility to verify the tablespace set is self-contained, that is, that no objects within the set reference (via foreign keys, constraints, indexes, etc.) any object outside of the set. The set can be one or more tablespaces that will be transported together.

EXECUTE sys.DBMS_TTS.TRANSPORT_SET_CHECK('ttsdat1,ttsidx1', TRUE);
SELECT * FROM sys.TRANSPORT_SET_VIOLATIONS;
We're going to use Data Pump to perform the export, so make sure a directory object exists within the database and that the users involved have the necessary permissions on it. Data Pump cannot read from or write to arbitrary file system locations the way the old export/import could.

CREATE OR REPLACE DIRECTORY trans_dir AS '/u01/app/oracle/oradata/oltpdb';
GRANT READ, WRITE ON DIRECTORY trans_dir TO ...;
Bounce out to the OS and perform the metadata export. This will run quickly and consistently regardless of the partition size as it contains data descriptions only, no actual data.

cd /u01/app/oracle/oradata/oltpdb

expdp <user>/<pwd>@tnsname \
   DUMPFILE=ttsfy1.dmp \
   DIRECTORY=trans_dir \
   TRANSPORT_TABLESPACES=ttsdat1,ttsidx1

ls -al
REMEMBER

You cannot import back a subset of the tablespaces exported in a single TTS dump, so if you plan to move partitions one by one or handle them independently, export each partition's tablespaces separately to ensure maximum flexibility.
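
For example, a one-dump-per-partition approach might look like the following sketch (dump file names here are illustrative, and each tablespace pair must already have been made self-contained and READ ONLY as above):

expdp <user>/<pwd>@tnsname \
   DUMPFILE=tts_fy2004.dmp DIRECTORY=trans_dir \
   TRANSPORT_TABLESPACES=ttsdat1,ttsidx1

expdp <user>/<pwd>@tnsname \
   DUMPFILE=tts_fy2005.dmp DIRECTORY=trans_dir \
   TRANSPORT_TABLESPACES=ttsdat2,ttsidx2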

The second part of transportation is copying the datafile(s) for the tablespace(s) we are moving. If there is more than one datafile per tablespace remember to get them all, and remember to take all tablespaces that make up the complete transport set. Copying can be performed by simple OS commands if you are moving to a similar platform. If you are using ASM or if endian conversion is necessary, then use RMAN to perform the copy (and parallelize it!). You can list supported platforms and their endian format by querying V$TRANSPORTABLE_PLATFORM.

rman target oltpdb
CONVERT TABLESPACE ttsdat1, ttsidx1
   TO PLATFORM 'Microsoft Windows IA (32-bit)'
   FORMAT '/u01/app/oracle/oradata/oltpdb/%N_win.dbf';
exit

ls -al
REMEMBER

If you have no plans to drop the tablespace(s) from oltpdb, you can either copy the datafiles as-is and convert the copies on the destination (CONVERT DATAFILE ... FROM PLATFORM), or run the conversion against the original files on the source and copy the converted output, as shown above.
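
A sketch of the copy-first-then-convert option, run on the Windows host (paths and the source platform name are illustrative; adjust to your environment):

rman target /

CONVERT DATAFILE
   'd:\oradata\oltpdb\ttsdat1.dbf',
   'd:\oradata\oltpdb\ttsidx1.dbf'
   FROM PLATFORM 'Linux IA (32-bit)'
   DB_FILE_NAME_CONVERT 'd:\oradata\oltpdb', 'd:\oradata\oltpdb\converted';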

After transport is complete we can drop the source tablespaces. This is optional based on your business requirement. You could go back to the source database and simply put them back into READ WRITE mode and continue business as usual. Discrete tables can be easily EXCHANGED back into the original partition they came from.

DROP TABLESPACE ttsdat1 INCLUDING CONTENTS;
DROP TABLESPACE ttsidx1 INCLUDING CONTENTS;
In preparation for the next year's business data we need to create new data and index partitions, again in their own tablespaces to facilitate future transport.

CREATE TABLESPACE ttsdat6 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsdat6.dbf'
   SIZE 1M AUTOEXTEND ON MAXSIZE 50M;
CREATE TABLESPACE ttsidx6 DATAFILE '/u01/app/oracle/oradata/oltpdb/ttsidx6.dbf'
   SIZE 1M AUTOEXTEND ON MAXSIZE 50M;

ALTER TABLE txns ADD PARTITION fy2009
   VALUES LESS THAN (to_date('2010-01-01','yyyy-mm-dd') )
   TABLESPACE ttsdat6
   UPDATE INDEXES (txns_pk_idx (PARTITION fy2009 TABLESPACE ttsidx6));
PART 3 - Plug in to the archive database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Copy the data and index tablespace datafiles and the export file to the target system. Connect to the target system and verify the database character sets match the source database. This is a very important restriction of Transportable Tablespaces!

CONNECT arch/arch@BING
SELECT * FROM v$nls_parameters WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

Ensure an Oracle directory with necessary permissions exists for data pump.

CREATE OR REPLACE DIRECTORY trans_dir AS 'd:/oradata/oltpdb';

GRANT READ, WRITE ON DIRECTORY trans_dir TO ...;
Import the metadata into the target system. Again this is fast because we only import data definitions; the data already exists in the named datafiles. Note use of REMAP_SCHEMA if a different user will own the objects in the target database.

impdp arch/arch@BING parfile=ttsfy1.par

The parameter file ttsfy1.par contains:

DUMPFILE=ttsfy1.dmp
DIRECTORY=trans_dir
REMAP_SCHEMA=oltp:arch
TRANSPORT_DATAFILES='d:/oradata/oltpdb/TTSDAT1_WIN.dbf','d:/oradata/oltpdb/TTSIDX1_WIN.dbf'
By default the new tablespaces will be READ ONLY. In order to create the table partition and EXCHANGE it with the existing discrete table we will have to make them READ WRITE.

ALTER TABLESPACE ttsdat1 READ WRITE;
ALTER TABLESPACE ttsidx1 READ WRITE;
Because this is the first time we are transporting a partition we must create the partitioned table with its first partition. Remember, it must match exactly the source table, and exist in the same tablespace.

CREATE TABLE arch
PARTITION BY RANGE (trans_dt)
( PARTITION fy2004 VALUES LESS THAN (to_date('2005-01-01','yyyy-mm-dd') )
TABLESPACE ttsdat1 )
AS SELECT * FROM txns_fy2004 WHERE 1=0;
Again, the new partition must match exactly including indexes and constraints. Note the tablespace clause to ensure the index is located in the same tablespace.

CREATE UNIQUE INDEX arch_pk_idx ON arch(trans_id,trans_dt) LOCAL ( PARTITION fy2004 TABLESPACE ttsidx1 );

ALTER TABLE arch ADD CONSTRAINT arch_pk PRIMARY KEY (trans_id, trans_dt)
USING INDEX LOCAL ( PARTITION fy2004 );

desc arch
We now have a partitioned table with one empty partition, and our discrete table with data that was transported. To complete the process, simply EXCHANGE data segments between the empty partition and discrete table. Then, count the number of rows, check the size of data segments and status of indexes again.

ALTER TABLE arch EXCHANGE PARTITION fy2004
WITH TABLE txns_fy2004 INCLUDING INDEXES;
Gather the stats and check the dictionary tables as before.
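For instance (a sketch against the archive schema; object names match those created above):

exec dbms_stats.gather_table_stats(USER, 'ARCH', cascade => TRUE);

SELECT partition_name, num_rows FROM user_tab_partitions WHERE table_name = 'ARCH';
SELECT partition_name, num_rows, status FROM user_ind_partitions WHERE index_name = 'ARCH_PK_IDX';
SELECT segment_type, segment_name, partition_name, tablespace_name, blocks
  FROM user_segments
 ORDER BY segment_type, segment_name, partition_name;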
When you are convinced the exchange has occurred properly and the partitioned table is functional, drop the now empty discrete table and you are done for this year.

DROP TABLE txns_fy2004 PURGE;
For subsequent years, transport of the next partition would be identical. Then, instead of creating the partitioned table in the target database, we simply ADD another partition and EXCHANGE it with the discrete table again.

ALTER TABLE arch ADD PARTITION fy2005
VALUES LESS THAN (to_date('2006-01-01','yyyy-mm-dd') )
TABLESPACE ttsdat2
UPDATE INDEXES (arch_pk_idx (PARTITION fy2005 TABLESPACE ttsidx2));

ALTER TABLE arch EXCHANGE PARTITION fy2005 WITH TABLE txns_fy2005 INCLUDING INDEXES;

DROP TABLE txns_fy2005 PURGE;

SUMMARY
--------------------
Both Transportable Tablespaces and Table Partitioning are simple and efficient features of the Oracle database. They can greatly enhance the performance and manageability of a production system. However, before relying on these features it is imperative to spend some time setting up a test bed and evaluating your specific scenario(s); there are many other caveats and restrictions. This article demonstrates one simple scenario that can be used as a guide, but it is no replacement for reading and understanding the relevant Oracle documentation as needed. For further assistance, don't hesitate to contact Oracle Support.
