Reading Notes: Expert Oracle Database 11g Administration




login.sql File



--   login.sql
--   SQL*Plus user login startup file.
--   This script is automatically run after glogin.sql
--   To change the SQL*Plus prompt to display the current user,
--   connection identifier and current time.
-- First set the database date format to show the time.
ALTER SESSION SET nls_date_format = 'HH:MI:SS';
-- SET the SQLPROMPT to include the _USER, _CONNECT_IDENTIFIER
-- and _DATE variables.
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER _DATE> "
-- To set the number of lines to display in a report page to 24.
SET PAGESIZE 24
-- To set the number of characters to display on each report line to 78.
SET LINESIZE 78
-- To set the number format used in a report to $99,999.
SET NUMFORMAT $99,999



errorlogging

SQL> show errorlogging
errorlogging is OFF
SQL>

To set error logging on, issue the following command in SQL*Plus:


SQL> set errorlogging on;

Once you turn error logging on, confirm that by issuing the following command:


SQL> connect hr/hr
Connected.
SQL> show errorlogging
errorlogging is ON TABLE HR.SPERRORLOG
SQL>

SQL> select username,statement,message
     from sperrorlog;

USERNAME       STATEMENT                         MESSAGE
---------      ---------------------------       --------------------------
HR             create table employees as         ORA-00955: name is already
               select * from employees           used by an existing object
HR             select names from employees       ORA-00904: "NAMES":
                                                 invalid identifier

SQL>
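SQL*Plus can also log errors to a table other than the default SPERRORLOG and tag each run so you can tell entries apart. A sketch, where hr.my_errors and the identifier string are hypothetical names of your choosing:


SQL> -- log to a custom table, tagging each entry in its IDENTIFIER column
SQL> SET ERRORLOGGING ON TABLE hr.my_errors IDENTIFIER 'nightly_load'
SQL> -- clear the current error log table before a fresh run
SQL> SET ERRORLOGGING ON TRUNCATE

The IDENTIFIER value lets you filter the log table afterward, for example WHERE identifier = 'nightly_load'.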


Showing Errors in a Procedure


SQL> SHO ERRORS PROCEDURE TEST_PROC

SPOOL Command

The SPOOL command enables you to save the output of one or more SQL statements to an operating system file in both UNIX and Windows:


SQL> SET LINESIZE 180
SQL> SPOOL employee.lst
SQL> SELECT emp_id, last_name, salary, manager FROM employee;
SQL> SPOOL OFF;
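SPOOL also accepts the CREATE, REPLACE, and APPEND options (REPLACE is the default), so you can keep adding output to an existing spool file rather than overwriting it:


SQL> -- append to employee.lst instead of replacing it
SQL> SPOOL employee.lst APPEND
SQL> SELECT emp_id, last_name FROM employee;
SQL> SPOOL OFF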


copy from

When you copy a large table with CREATE TABLE table-name AS SELECT * FROM table-name, Oracle does not commit after each insert, so the undo segments can run out of space while the statement executes. Using the COPY command avoids this problem.

orcl1>copy from system/julia033@sales create test02 using select * from copy_test;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
Table TEST02 created.

   34 rows selected from system@sales.
   34 rows inserted into TEST02.
   34 rows committed into TEST02 at DEFAULT HOST connection.


orcl1>copy from system/julia033@sales insert test02 using select * from copy_test;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
   34 rows selected from system@sales.
   34 rows inserted into TEST02.
   34 rows committed into TEST02 at DEFAULT HOST connection.

orcl1>

APPEND: Inserts records into the target table and creates the table if it doesn't exist

CREATE: Creates the target table and inserts rows into it

INSERT: Inserts rows into an existing table

REPLACE: Drops the existing table, re-creates it, and loads data into it

When the source and target are in the same database server, CREATE TABLE AS SELECT is faster than COPY, because COPY must pull the data through the SQL*Plus interface and then push it back to the database.


Using SQL to Generate SQL

Always make sure you set the heading off, echo off, and feedback off. This will give you a clean, spooled output script, which you can execute directly without any changes.
SQL> SET ECHO OFF HEADING OFF FEEDBACK OFF
SQL> SPOOL test.txt
SQL> SELECT 'grant connect, resource to '||username||';' FROM dba_users;

This is part of the output of the preceding command:


GRANT CONNECT, RESOURCE TO DBA1;
GRANT CONNECT, RESOURCE TO MAMIDI;
GRANT CONNECT, RESOURCE TO JEFFRESS;
GRANT CONNECT, RESOURCE TO CAMPBELL;
GRANT CONNECT, RESOURCE TO ALAPATI;
GRANT CONNECT, RESOURCE TO BOLLU;
GRANT CONNECT, RESOURCE TO BOGAVELLI;
SQL> SPOOL OFF


OEM

Starting with the Oracle Database 10g release, there are two versions of the OEM—a single-database version called the Database Control, and a systemwide version that lets you manage your entire system, called the Grid Control. With the Grid Control, you can manage your enterprise-wide database, application servers, hosts, and other services.


[oracle@julia01 install]$ more portlist.ini
Enterprise Manager Console HTTP Port (sales) = 1158
Enterprise Manager Agent Port (sales) = 3938
[oracle@julia01 install]$
[oracle@julia01 install]$ pwd
/oracle/app/oracle/product/11.2.0/dbhome_1/install
[oracle@julia01 install]$

What's Inside a Data Block?

SYS@sales 2013-06-06 23:10:46> select owner,segment_name,file_id,block_id,blocks,bytes from dba_extents where owner = 'SCOTT' ;

OWNER       SEGMENT_NA     FILE_ID   BLOCK_ID    BLOCKS        BYTES
---------- ---------- ---------- ---------- ---------- ----------
SCOTT       TEST            4       4624         8        65536

SYS@sales 2013-06-06 23:11:23> host pwd    
/home/oracle

SYS@sales 2013-06-06 23:11:50> alter system dump datafile 4 block 4624;

System altered.

Look for useful information in the dumped block:


SYS@sales 2013-06-06 23:31:31> select name from sys.obj$ where obj# = 86868;

NAME
------------------------------
TEST



The High Cost of Disk I/O

Although secondary storage (usually magnetic disks) is significantly larger than main memory, it's also significantly slower. A disk I/O involves either moving a data block from disk to memory (a disk read) or writing a data block to disk from memory (a disk write). Typically, it takes about 10–40 milliseconds (0.01–0.04 seconds) to perform a single disk I/O.

Suppose your update transaction involves 25 I/Os—you could spend up to 1 second just waiting to read or write data. In that same second, your CPUs could have performed millions of instructions—the update takes a negligible amount of time compared to the disk reads and disk writes. If you already have the necessary data in Oracle's memory, the retrieval time would be much faster, as memory read/writes take only a few nanoseconds. This is why avoiding or minimizing disk I/Os plays such a big role in providing high performance in Oracle databases.




PGA


You can classify the PGA memory into the following types:

    Private SQL area: This area of memory holds SQL variable bind information and runtime memory structures. Each session that executes a SQL statement will have its own private SQL area.

    Runtime area: The runtime area is created for a user session when the session issues a SELECT, INSERT, UPDATE, or DELETE statement. After an INSERT, DELETE, or UPDATE statement is run, or after the output of a SELECT statement is fetched, the runtime area is freed by Oracle.

For example, a user's process may have some cursors (which are handles to memory areas where you store the values for variables) associated with it.

Caution     

Many Oracle manuals suggest that you can allocate up to half of the total system memory for the Oracle SGA. This guideline assumes that the PGA memory will be fairly small. However, if the number of users is very large and the queries are complex, your PGA component may end up being even larger than the SGA. You should estimate the total memory requirements by projecting both SGA and PGA needs.


SCN

The System Change Number

The system change number, or SCN, is an important quantifier that the Oracle database uses to keep track of its state at any given point in time. When you read (SELECT) the data in the tables, you don't affect the state of the database, but when you modify, insert, or delete a row, the state of the database is different from what it was before. Oracle uses the SCN to keep track of all the changes made to the database over time. The SCN is a logical time stamp that is used by Oracle to order events that occur within the database. The SCN is very important for several reasons, not the least of which is the recovery of the database after a crash.

SCNs are like increasing sequence numbers, and Oracle increments them in the SGA. When a transaction modifies or inserts data, Oracle first writes a new SCN to the rollback segment. The log writer process then writes the commit record of the transaction immediately to the redo log, and this commit record will have the unique SCN of the new transaction. In fact, the writing of this SCN to the redo log file denotes a committed transaction in an Oracle database.

The SCN helps Oracle determine whether crash recovery is needed after a sudden termination of the database instance or after a SHUTDOWN ABORT command is issued. Every time the database performs a checkpoint operation, Oracle writes a START SCN command to the datafile headers. The control file maintains an SCN value for each datafile, called the STOP SCN, which is usually set to infinity, and every time the instance is stopped normally (with the SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE command), Oracle copies the START SCN number in the datafile headers to the STOP SCN numbers for the datafiles in the control file. When you restart the database after a graceful shutdown, there is no need for any kind of recovery because the SCNs in the datafiles and the control files match. On the other hand, abrupt instance termination does not leave time for this matching of SCNs, and Oracle recognizes that instance recovery is required because of the varying SCN numbers in the datafiles on the one hand and the control file on the other. As you'll learn in Chapter 16, they play a critical role during database recovery. Oracle determines how far back you should apply the archived redo logs during a recovery based on the SCN.
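You can query the database's current SCN at any time, which is handy when experimenting with the flashback features discussed later. Either of the following works (output varies, of course):


SQL> SELECT current_scn FROM v$database;

SQL> SELECT dbms_flashback.get_system_change_number FROM dual;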
    
    

Files Needed for Database Recovery

It's important to understand the basics of how Oracle manages its backup and recovery operations. You've seen some of the components previously, but I put it all together here. The following Oracle structures ensure that you can recover your databases after a problem:

    The control file: The control file contains datafile and redo log information, as well as the latest system change number, which is key to the recovery process.

    Database backups: These are file or tape backups of the database datafiles. Since these backups are made periodically, they most likely won't contain all the data needed to bring the database up to date. The backup files include both backups of the datafiles as well as the archived redo log files.

    The redo logs: The redo logs, as you've seen earlier in this chapter, contain all changes made to the database, including uncommitted and committed changes.

    The undo records: These records contain the before images of blocks changed in uncommitted transactions.

Recovery involves restoring all backups first. Since the backups can't bring you up to date, you apply the redo logs next, to bring the database up to date. Since the redo logs may contain some uncommitted data that shouldn't really be in the database, however, Oracle uses the undo records to roll back all the uncommitted changes. When the recovery process is complete, your database will not have lost any committed or permanent data.

Flashback Recovery Techniques


Quite often, you may be called upon to help recover from a logical corruption of the database, rather than from a hardware failure. You can use the following flashback techniques in Oracle Database 11g to recover from logical errors:

    Flashback Database: Takes the entire database back to a specific point in time

    Flashback Table: Returns individual tables to a past state

    Flashback Drop: Undoes a DROP TABLE command and recovers the dropped table

    Flashback Query, Flashback Version Query, and Flashback Transaction Query: Retrieve data from a time (or an interval) in the past

    Flashback Transaction Backout: Lets you undo a transaction, together with all its dependent transactions as well, with just a single click in the Database Control

    Flashback Data Archive: Stores history of the changes made to a table, which you can use to make queries using older versions of data and for auditing purposes as well
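As a quick illustration of Flashback Query, the AS OF clause lets you read a table as it was at an earlier time. The table name and interval here are just examples:


SQL> SELECT * FROM hr.employees
     AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);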


    

Listing All Data Dictionary Views


SQL> SELECT * FROM dict;
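Since DICT lists every data dictionary view, it's usually more useful to filter it. For example, to find the views that describe tablespaces:


SQL> SELECT table_name, comments FROM dict
     WHERE table_name LIKE '%TABLESPACE%';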



Oracle Utilities

Oracle provides several powerful tools to help with loading and unloading of data and similar activities. The following sections describe the main ones.
Data Pump Export and Import

The Data Pump Export and Import utilities are the successors to the traditional export and import utilities; they help with fast data loading and unloading operations. The original export and import utilities are still available, but Oracle recommends the use of the newer and more sophisticated tools. Chapter 13 discusses the Data Pump utilities in detail.
SQL*Loader

The SQL*Loader is a powerful and fast utility that loads data from external files into tables of an Oracle database. Chapter 14 discusses SQL*Loader in detail.
    Note      

You use the SQL*Loader to load external data into an Oracle table. Sometimes, though, you need to use some external data but don't want to go to the trouble of loading the data into a table. The external tables feature offers some of the SQL*Loader utility's functionality.

External tables let you use data that resides in external text files as if it were in a table in an Oracle database. You can write to external tables as well as read from them. I describe the external tables in detail in Chapter 13.
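Here's a sketch of an external table definition; the directory object data_dir and the file emp.csv are hypothetical names, and the column list is just an example:


SQL> -- data_dir must be a directory object pointing at the file's location
SQL> CREATE TABLE emp_ext (
       emp_id    NUMBER,
       last_name VARCHAR2(30),
       salary    NUMBER
     )
     ORGANIZATION EXTERNAL (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY data_dir
       ACCESS PARAMETERS (
         RECORDS DELIMITED BY NEWLINE
         FIELDS TERMINATED BY ','
       )
       LOCATION ('emp.csv')
     );

Once created, you can query emp_ext like an ordinary table, while the data itself continues to live in the text file.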
LogMiner

The LogMiner utility lets you query online and archived redo log files through a SQL interface. As you know, redo log files hold the history of all changes made to the database. Thus, you can use the LogMiner to see exactly which transaction and what SQL statement caused a change, and if necessary, undo it. Chapter 16 shows you how to use the LogMiner tool for precision recovery.

Automatic Diagnostic Repository Control Interface

The automatic diagnostic repository control interface (ADRCI) is a command-line tool that helps you manage the brand-new Oracle Database 11g diagnosability infrastructure. Through the ADRCI, you can view diagnostic data and health monitor reports besides managing incident packaging and transmission (to Oracle Support). You can access all types of diagnostic data, including incident descriptions, trace and dump files, alert log contents, and the new health monitor reports through the ADRCI interface.



Tablespace Extent Sizing and Space Management


When you create a tablespace, there are two things to consider:
One is how extent sizes are allocated. The usual choice is Oracle's recommended AUTOALLOCATE option, which lets the database grow extents automatically.
The other is segment space management. Automatic management is recommended, that is, locally managed with bitmaps.

Before you actually create a tablespace, you must be aware of two other important concepts: extent sizing and segment space management. I discuss these concepts in the two subsequent sections.
If you choose the UNIFORM option, you specify the actual size of the extents by using the SIZE clause. If you omit the SIZE clause, Oracle will create all extents with a uniform size of 1MB, but you can choose a much larger uniform extent size if you wish.

Oracle recommends that unless all the objects in a tablespace are of the same size, you should use the AUTOALLOCATE feature. In addition to the simplicity of management, the AUTOALLOCATE option for extent sizing can potentially save you a significant amount of disk space, compared to the UNIFORM extent size option.

Allocating the Extent Size: Autoallocate vs. Uniform

You can let the database automatically choose the extent size (by selecting the AUTOALLOCATE option), or you can specify that the tablespace be managed with uniform-sized extents (the UNIFORM option).
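Here's how the two choices look in a CREATE TABLESPACE statement; the tablespace names, datafile paths, and sizes are just examples:


SQL> CREATE TABLESPACE auto_ts
     DATAFILE '/u01/app/oracle/oradata/auto_ts01.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

SQL> CREATE TABLESPACE uniform_ts
     DATAFILE '/u01/app/oracle/oradata/uniform_ts01.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;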

Automatic vs. Manual Segment Space Management


You can use the space in an Oracle block for two purposes: inserting fresh data or updating existing data in the blocks. When you delete data from a block, or an update statement changes the existing data length to a smaller size, there will be an increase in free space in the block. Segment space management is how Oracle manages this free space within the individual Oracle data blocks.


Manual management: PCTFREE/PCTUSED

Automatic management: bitmaps


If you choose automatic segment space management when creating a tablespace (by specifying AUTO), the database will use bitmaps to track free space availability in a segment. A bitmap, which is contained in a bitmap block, indicates whether free space in a data block is below 25 percent, between 25 and 50 percent, between 50 and 75 percent, or above 75 percent. For an index block, the bitmaps can tell you whether the blocks are empty or formatted.
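The choice appears as the SEGMENT SPACE MANAGEMENT clause of CREATE TABLESPACE; since AUTO is the default, you'd normally spell it out only to get MANUAL. The name and path below are illustrative:


SQL> CREATE TABLESPACE manual_ts
     DATAFILE '/u01/app/oracle/oradata/manual_ts01.dbf' SIZE 100M
     SEGMENT SPACE MANAGEMENT MANUAL;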



Creating a Tablespace


 As mentioned previously, AUTOALLOCATE is the default for extent management, and automatic segment space management is the default for segment space management.

Let's create a (permanent) tablespace by using the CREATE TABLESPACE command. Note that you must use a DATAFILE clause before the file specification, since this is a permanent tablespace. For a temporary tablespace, you must use the clause TEMPFILE instead.


SQL> CREATE TABLESPACE test01
  2  DATAFILE '/pasx02/oradata/pasx/test01.dbf'
  3* SIZE 500M;
Tablespace created.



 If you truncate a table with the DROP STORAGE option (TRUNCATE TABLE table_name DROP STORAGE), for example, Oracle deallocates the allocated extents. You can also manually deallocate unused extents using the following command:


SQL> ALTER TABLE table_name DEALLOCATE UNUSED;


Note the defaults in Oracle Database 11g Release 1:

    Extent management: LOCAL

    Allocation of extent sizes: AUTOALLOCATE (shows up as SYSTEM in the preceding output); the database decides extent sizes automatically

    Segment space management: AUTO; free space is tracked with bitmaps

Adding Space to a Tablespace


SQL> ALTER TABLESPACE test01
     ADD DATAFILE '/finance10/app/oracle/finance/test01.dbf'
     SIZE 1000M;


SQL> ALTER DATABASE DATAFILE '/finance10/oradata/data_09.dbf'
     RESIZE 500m;


You can use the AUTOEXTEND provision when you create a tablespace or when you add datafiles to a tablespace to tell Oracle to automatically extend the size of the datafiles in the tablespace to a specified maximum. Here's the syntax for using the AUTOEXTEND feature:


SQL> ALTER TABLESPACE data01
     ADD DATAFILE '/finance10/oradata/data01.dbf' SIZE 200M
     AUTOEXTEND ON
     NEXT 10M
     MAXSIZE 1000M;


SYS@sales 2013-06-21 22:26:25> SELECT tablespace_name, initial_extent, next_extent,
                               extent_management, allocation_type, segment_space_management
                               FROM dba_tablespaces;

TABLESPACE_NAME   INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN ALLOCATIO SEGMEN
----------------- -------------- ----------- ---------- --------- ------
SYSTEM                     65536             LOCAL      SYSTEM    MANUAL
SYSAUX                     65536             LOCAL      SYSTEM    AUTO
UNDOTBS1                   65536             LOCAL      SYSTEM    MANUAL
TEMP                     1048576     1048576 LOCAL      UNIFORM   MANUAL
USERS                      65536             LOCAL      SYSTEM    AUTO
TEMP2                    1048576     1048576 LOCAL      UNIFORM   MANUAL
TEST                       65536             LOCAL      SYSTEM    AUTO
RMAN_CATALOG               65536             LOCAL      SYSTEM    AUTO
TEST_SPACE01               65536             LOCAL      SYSTEM    AUTO
TEST_SPACE02               65536             LOCAL      SYSTEM    AUTO
TEST_SPACE03               65536             LOCAL      SYSTEM    AUTO
TEST_SPACE04               65536             LOCAL      SYSTEM    AUTO

orcl1> select file_name, tablespace_name, autoextensible from dba_data_files;

FILE_NAME                                    TABLESPACE_NAME AUTOEXTENSIBLE
-------------------------------------------- --------------- --------------
+DATA/orcl/datafile/users.265.810370621      USERS           YES
+DATA/orcl/datafile/undotbs1.269.810370621   UNDOTBS1        YES
+DATA/orcl/datafile/sysaux.271.810370619     SYSAUX          YES
+DATA/orcl/datafile/system.270.810370619     SYSTEM          YES
+DATA/orcl/datafile/undotbs2.268.810370621   UNDOTBS2        YES

orcl1>


Removing Tablespaces


Sometimes you may want to get rid of a tablespace. You can remove a tablespace from the database by issuing this simple command:


SQL> DROP TABLESPACE test01;

If the test01 tablespace includes tables or indexes when you issue a DROP TABLESPACE command, you'll get an error. You can either move the objects to a different tablespace or, if the objects are dispensable, you can use the following command, which will drop the tablespace and all the objects that are part of the tablespace:


SQL> DROP TABLESPACE test01 INCLUDING CONTENTS;


The DROP TABLESPACE . . . INCLUDING CONTENTS statement will not release the datafiles back to the operating system's file system. To do so, you have to either manually remove the datafiles that were a part of the tablespace or issue the following command to remove both the objects and the physical datafiles at once:


SQL> DROP TABLESPACE test01 INCLUDING CONTENTS AND DATAFILES;


If there are referential integrity constraints in other tables that refer to the tables in the tablespace you intend to drop, you need to use the following command:


SQL> DROP TABLESPACE test01 CASCADE CONSTRAINTS;

Proactive Tablespace Space Alerts

The database itself can send you proactive space alerts for all locally managed tablespaces, including the undo tablespace. The Oracle database stores information on tablespace space usage in its system global area (SGA). The Oracle background process MMON checks tablespace usage every ten minutes and raises alerts when necessary.


There are two ways to set alert thresholds: you can specify that the database alert be based on the percent of space used or on the number of free bytes left in the tablespace:

  • Percent full: The database issues an alert when the space used in a tablespace reaches or crosses a preset percentage of total space. For a new database, 85 percent full is the threshold for the warning alerts, and 97 percent full is the threshold for the critical alerts. You can, if you wish, change these values and set, for example, 90 and 98 percent as the warning and critical thresholds.

  • Bytes remaining: When the free space falls below a certain amount (specified in KB), Oracle issues an alert. For example, you can use a warning threshold of 10,240KB and a critical threshold of 4,096KB for a tablespace. By default, the "bytes remaining alerts" (both warning and critical) in a new database are disabled, since the defaults for both types of bytes-remaining thresholds are set to zero. You can set them to a size you consider appropriate for each tablespace.

The easiest way to set and modify tablespace space alerts is by using the Oracle Enterprise Manager (OEM). Just go to the OEM Home Page and select Administration ➤ Related Links ➤ Manage Metrics ➤ Edit Thresholds. From the Edit Thresholds page, you can set warning and critical thresholds for your tablespaces. You can also specify a response action when an alert is received, in the form of a command or script that is made accessible to the Management Agent.

You can also use the Oracle-provided PL/SQL package DBMS_SERVER_ALERT to set warning and critical space alerts. Listing 6-4 shows how you can set a "bytes remaining" alert threshold using the warning value and the critical value attributes.
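As a sketch of the DBMS_SERVER_ALERT approach, the following sets bytes-remaining thresholds of 10,240KB (warning) and 4,096KB (critical) for a hypothetical tablespace TEST01:


SQL> BEGIN
       DBMS_SERVER_ALERT.SET_THRESHOLD(
         metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_BYT_FREE,
         warning_operator        => DBMS_SERVER_ALERT.OPERATOR_LE,
         warning_value           => '10240',
         critical_operator       => DBMS_SERVER_ALERT.OPERATOR_LE,
         critical_value          => '4096',
         observation_period      => 1,
         consecutive_occurrences => 1,
         instance_name           => NULL,
         object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
         object_name             => 'TEST01');
     END;
     /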


Managing Logging of Redo Data


However, the recording of the redo data creates an overhead. When you perform an operation such as a CREATE TABLE AS SELECT . . . (CTAS) operation, you often don't need the redo data.

You can't switch off the production of redo data for normal DML activity in your database. However, you can do so for a direct load operation

Tablespace-Level NOLOGGING

You can specify the NOLOGGING clause when you create a tablespace, so the database doesn't produce any redo records for any of the objects in that tablespace. When you specify the NOLOGGING option in a CREATE TABLESPACE statement, all database objects that you create in that tablespace will inherit that attribute. However, you can specify the LOGGING clause in a CREATE TABLE or ALTER TABLE statement to override the NOLOGGING clause that you specified for the tablespace.
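For example (the tablespace name, path, and table name here are illustrative):


SQL> CREATE TABLESPACE test_nolog
     DATAFILE '/u01/app/oracle/oradata/test_nolog01.dbf' SIZE 100M
     NOLOGGING;

SQL> -- table-level LOGGING overrides the tablespace's NOLOGGING default
SQL> CREATE TABLE audit_copy (id NUMBER)
     TABLESPACE test_nolog
     LOGGING;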


Managing the Availability of a Tablespace

You can change the status of a tablespace to offline, to make a tablespace or a set of tablespaces unavailable to the users of the database. When you take a tablespace offline, all tables and indexes in that tablespace become out of reach of the users. You normally take tablespaces offline when you want to make an application unavailable to users, or when you want to perform management operations such as renaming or relocating the datafiles that are part of a tablespace. When you take a tablespace offline, the database automatically takes all datafiles that are part of that tablespace offline.

You can't take the System or the temporary tablespaces offline. You can specify either the NORMAL, TEMPORARY, or IMMEDIATE parameters as options to the tablespace offline statement. Here's how you choose among the three options:

    If there are no error conditions for any of the datafiles of the tablespace, use the OFFLINE NORMAL clause, which is the default when you offline a tablespace.

    Using the OFFLINE NORMAL clause is considered taking a tablespace offline cleanly, which means the database won't have to perform a media recovery on the tablespace before bringing it back online. If you can't take the tablespace offline with the OFFLINE NORMAL clause, specify the OFFLINE TEMPORARY clause. If the NORMAL and TEMPORARY settings don't work, specify the OFFLINE IMMEDIATE clause, as shown here:


SQL> ALTER TABLESPACE users OFFLINE IMMEDIATE;

When you specify the OFFLINE IMMEDIATE clause, the database requires media recovery of the tablespace before it can bring the tablespace online.

When you are ready to bring a tablespace online, issue the following statement:


SQL> ALTER TABLESPACE <tablespace_name> ONLINE;


Renaming Tablespaces

You can rename both permanent and temporary tablespaces, but there are a few restrictions:

    You can't rename the System and Sysaux tablespaces.

    The tablespace being renamed must have all its datafiles online.

    If the tablespace is read-only, renaming it doesn't update the file headers of its datafiles.

Sometimes, you may need to rename a datafile. The process for this is straightforward:

    Take the datafile offline by taking its tablespace offline. Use the following command:


    SQL> ALTER TABLESPACE test01 OFFLINE NORMAL;

     Tablespace altered.
    SQL>

    Rename the file using an operating system utility such as cp or mv in UNIX, or copy in Windows.


    $ cp /u01/app/oracle/test01.dbf    /u02/app/oracle/test01.dbf

    Rename the datafile within the database before bringing the tablespace back online, by using the following command:


    SQL> ALTER TABLESPACE test01
      2  RENAME DATAFILE
      3  '/u01/app/oracle/test01.dbf'
      4  TO
      5* '/u02/app/oracle/test01.dbf';

    Tablespace altered.
    SQL>

Read-Only Tablespaces

By default, all Oracle tablespaces are both readable and writable when created. However, you can specify that a tablespace cannot be written to by making it a read-only tablespace. The command to do so is simple:

SQL> ALTER TABLESPACE test01 READ ONLY;

If you want to make this read-only tablespace writable again, you can use the following command:

SQL> ALTER TABLESPACE test01 READ WRITE;

Taking Tablespaces Offline

Except for the System tablespace, you can take any or all of the tablespaces offline, that is, you can make them temporarily unavailable to users. You usually need to take tablespaces offline when a datafile within a tablespace contains errors or you are changing code in an application that accesses one of the tablespaces being taken offline.

Four modes of offlining are possible with Oracle tablespaces: normal, temporary, immediate, and for recovery. Except for the normal mode, which is the default mode of taking tablespaces offline, all the other modes can involve recovery of the included datafiles or the tablespace itself. You can take any tablespace offline with no harm by using the following command:

SQL> ALTER TABLESPACE index_01 OFFLINE NORMAL;

Oracle will ensure the checkpointing of all the datafiles in the tablespace (index_01 in this example) before it takes the tablespace offline. Thus, there is no need for recovery when you later bring the tablespace back online.

To bring the tablespace online, use the following command:

SQL> ALTER TABLESPACE index_01 ONLINE;
 

Temporary Tablespaces

The data in a temporary tablespace persists only for the length of a user session. Oracle uses temporary tablespaces as work areas for tasks such as sort operations for users and sorting during index creation.

Note 

Oracle writes data in the program global area (PGA) in 64KB chunks. Therefore, Oracle advises you to create temporary tablespaces with extent sizes that are multiples of 64KB. For large data warehousing and decision-support system databases, which make extensive use of temporary tablespaces, the recommended extent size is 1MB.
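Following that advice, here's a temporary tablespace created with 1MB uniform extents; the tablespace name and tempfile path are just examples:


SQL> CREATE TEMPORARY TABLESPACE temptbs02
     TEMPFILE '/u01/app/oracle/oradata/temptbs02_01.dbf' SIZE 500M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;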


Altering a Temporary Tablespace

Here's an example showing how you can make the temporary tablespace larger:

SQL> ALTER TABLESPACE temp
     ADD TEMPFILE '/u01/app/oracle/tempfile/temp03.dbf' SIZE 1000M REUSE;

You can use the ALTER DATABASE command to resize a tempfile:

SQL> ALTER DATABASE TEMPFILE '/u01/app/oracle/tempfile/temp03.dbf'
     RESIZE 200M;

And to drop a tempfile, along with its operating system file:

SQL> ALTER DATABASE TEMPFILE '/u01/app/oracle/tempfile/temp03.dbf'
     DROP INCLUDING DATAFILES;

When you drop a tempfile belonging to a temporary tablespace, the tablespace itself will remain in use.

You can shrink a temporary tablespace, just as you can a normal tablespace. The following example shows how to issue the ALTER TABLESPACE statement to shrink a temporary tablespace:

SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 500m;


SYS@sales 2013-06-24 22:35:54> select file_name,tablespace_name,bytes/1024/1024 M from dba_temp_files;

FILE_NAME                                 TABLESPACE_NAME          M
----------------------------------------- --------------- ----------
+DATA/sales/tempfile/temp.263.800160099   TEMP                  2883
+DG1/sales/tempfile/temp2.547.810001683   TEMP2                   50

You may have to increase the size of a temporary tablespace to accommodate an unusually large job that makes use of the temporary tablespace. After the completion of the job, you can shrink the temporary tablespace using the SHRINK SPACE clause in an ALTER TABLESPACE statement. Here's an example:

SQL> ALTER TABLESPACE temp SHRINK SPACE;

Tablespace altered.

SQL>

The SHRINK SPACE clause will shrink all tempfiles to a minimum size, which is about 1MB. You can employ the KEEP clause to specify a minimum size for the tempfiles, as shown here:

SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 250M;

Oracle uses a peculiar logic when shrinking tempfiles in a temporary tablespace. Let's say you have a temporary tablespace that contains two 1GB tempfiles. You issue a command to shrink the tablespace to 1GB, as shown here:

SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 1000M;

Tablespace altered.

SQL>

If you query the V$TEMPFILE view, you'll see this:

SQL> SELECT file#, name, bytes/1024/1024 mb FROM v$tempfile;

FILE#     NAME                                     MB
-----     ------------------------------------     ---------
1         /u01/app/oracle/tempfile/temp01.dbf      999.9375
2         /u01/app/oracle/tempfile/temp02.dbf      1.0625

The database shrinks one of the two tempfiles all the way down to 1MB and the other only by 1MB, leaving 999MB of space intact in that tempfile. If your goal is to shrink a particular tempfile down to a certain minimum, you can do so by specifying the name of the particular tempfile you want to shrink, as shown here:

SQL> ALTER TABLESPACE temp SHRINK SPACE     TEMPFILE tempfile '/u01/app/oracle/oradata/prod1/temp02.dbf'     KEEP 100m;Tablespace altered.SQL>
The ALTER TABLESPACE statement shown here shrinks just the tempfile you list by the amount you specify with theKEEP clause. It leaves the other tempfiles in the TEMP tablespace alone. TheKEEP clause in the previous statement ensures that the tempfile I specify retains 500MB of space. The following example shows how to shrink a single tempfile without any specific retained space:
SQL> ALTER TABLESPACE temp
     SHRINK TEMPFILE '/u01/app/oracle/tempfile/temp03.dbf';

Since I didn't specify the KEEP clause in the previous statement, the database shrinks the tempfile I specified to the minimum possible size, which is about 1MB.


Default Temporary Tablespace

When you create database users, you must assign a default temporary tablespace in which they can perform their temporary work, such as sorting. If you neglect to explicitly assign a temporary tablespace, the users will use the critical System tablespace as their temporary tablespace, which could lead to fragmentation of that tablespace, besides filling it up and freezing database activity. You can avoid these undesirable situations by creating a default temporary tablespace for the database when creating a database, by using the DEFAULT TEMPORARY TABLESPACE clause. Oracle will then use this as the temporary tablespace for all users for whom you don't explicitly assign a temporary tablespace.

Note that if you didn't create a default temporary tablespace while creating your database, it isn't too late to do so later. You can just create a temporary tablespace, as shown in the preceding example, and make it the default temporary tablespace for the database, with a statement like this:
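Since the earlier example isn't reproduced in this excerpt, a typical statement for creating such a temporary tablespace looks like the following sketch (the tempfile path and size are illustrative):

SQL> CREATE TEMPORARY TABLESPACE temptbs02
     TEMPFILE '/u01/app/oracle/oradata/temptbs02_01.dbf' SIZE 500M;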

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temptbs02;

You can find out the name of the current default temporary tablespace for your database by executing the following query:

SQL> SELECT property_name, property_value
     FROM database_properties
     WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME                    PROPERTY_VALUE
-----------------------          -----------------
DEFAULT_TEMP_TABLESPACE          TEMP

Note 

You can't use the AUTOALLOCATE clause for temporary tablespaces. By default, all temporary tablespaces are created with locally managed extents of a uniform size. The default extent size is 1MB, as for all other tablespaces, but you can use a different extent size if you wish when creating the temporary tablespace.
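For instance, you could specify a 16MB uniform extent size when creating a temporary tablespace, as in this sketch (the tablespace name, path, and sizes shown are illustrative):

SQL> CREATE TEMPORARY TABLESPACE temptbs03
     TEMPFILE '/u01/app/oracle/oradata/temptbs03_01.dbf' SIZE 500M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;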


Temporary Tablespace Groups

Large transactions can sometimes run out of temporary space. Large sort jobs, especially those involving tables with many partitions, lead to heavy use of the temporary tablespaces, thus potentially leading to a performance hit. Oracle Database 10g introduced the concept of a temporary tablespace group, which allows a user to utilize multiple temporary tablespaces simultaneously in different sessions.


Here are some of the main characteristics of a temporary tablespace group:

    A temporary tablespace group must consist of at least one tablespace. There is no explicit maximum number of tablespaces.

    If you delete all members from a temporary tablespace group, the group is automatically deleted as well.

    A temporary tablespace group has the same namespace as the temporary tablespaces that are part of the group.

    The name of a temporary tablespace cannot be the same as the name of any tablespace group.

    When you assign a temporary tablespace to a user, you can use the temporary tablespace group name instead of the actual temporary tablespace name. You can also use the temporary tablespace group name when you assign the default temporary tablespace for the database.

Benefits of Temporary Tablespace Groups

Using a temporary tablespace group, rather than the usual single temporary tablespace, provides several benefits:

    SQL queries are less likely to run out of sort space because the query can now simultaneously use several temporary tablespaces for sorting.

    You can specify multiple default temporary tablespaces at the database level.

    Parallel execution servers in a parallel operation will efficiently utilize multiple temporary tablespaces.

    A single user can simultaneously use multiple temporary tablespaces in different sessions.

Creating a Temporary Tablespace Group

When you assign the first temporary tablespace to a tablespace group, you automatically create the temporary tablespace group. To create a tablespace group, simply specify the TABLESPACE GROUP clause in the CREATE TABLESPACE statement, as shown here:


SQL> CREATE TEMPORARY TABLESPACE temp01
     TEMPFILE '/u01/oracle/oradata/temp01_01.dbf'
     SIZE 500M TABLESPACE GROUP tmpgrp1;

The preceding SQL statement will create a new temporary tablespace, temp01, along with the new tablespace group named tmpgrp1. Oracle creates the new tablespace group because the key clause TABLESPACE GROUP was used while creating the new temporary tablespace.

You can also create a temporary tablespace group by specifying the same TABLESPACE GROUP clause in an ALTER TABLESPACE command, as shown here:


SQL> ALTER TABLESPACE temp02
     TABLESPACE GROUP tmpgrp1;

Tablespace altered.
SQL>

The preceding statement adds the temp02 tablespace to the tmpgrp1 group. If no temporary tablespace group with that name existed beforehand, Oracle would create the group as part of executing the statement.

If you specify a pair of quotes ('') for the tablespace group name, you are implicitly telling Oracle not to allocate that temporary tablespace to a tablespace group. Here's an example:


SQL> CREATE TEMPORARY TABLESPACE temp02
     TEMPFILE '/u01/oracle/oradata/temp02_01.dbf' SIZE 500M
     TABLESPACE GROUP '';

The preceding statement creates a temporary tablespace called temp02, which is a regular temporary tablespace and doesn't belong to a temporary tablespace group.

If you completely omit the TABLESPACE GROUP clause, you'll create a regular temporary tablespace, which is not part of any temporary tablespace group:


SQL> CREATE TEMPORARY TABLESPACE temp03
     TEMPFILE '/u01/oracle/oradata/temp03_01.dbf' SIZE 500M;
  Tablespace created.
SQL>

Adding a Tablespace to a Temporary Tablespace Group

As shown in the preceding section, you can add a temporary tablespace to a group by using the ALTER TABLESPACE command. You can also change which group a temporary tablespace belongs to by using the ALTER TABLESPACE command. For example, you can specify that the tablespace temp02 belongs to the tmpgrp2 group by issuing the following statement:


SQL> ALTER TABLESPACE temp02 TABLESPACE GROUP tmpgrp2;

The database will create a new group with the name tmpgrp2 if there is no such group already.
Setting a Group as the Default Temporary Tablespace for the Database

You can use a temporary tablespace group as your default temporary tablespace for the database. If you issue the following statement, all users without an explicitly assigned temporary tablespace can use any temporary tablespace in the tmpgrp1 group as their default temporary tablespace:


SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tmpgrp1;

The preceding ALTER DATABASE statement assigns all the tablespaces in tmpgrp1 as the default temporary tablespaces for the database.
Assigning Temporary Tablespace Groups When Creating and Altering Users

When you create new users, you can assign them to a temporary tablespace group instead of to the usual single temporary tablespace. Here's an example:


SQL> CREATE USER salapati IDENTIFIED BY sammyy1
     DEFAULT TABLESPACE users
     TEMPORARY TABLESPACE tmpgrp1;
User created.

SQL>

Once you create a user, you can also use the ALTER USER statement to change the temporary tablespace group of the user. Here's a SQL statement that does this:


SQL> ALTER USER salapati TEMPORARY TABLESPACE tmpgrp2;

Viewing Temporary Tablespace Group Information

You can use the new DBA_TABLESPACE_GROUPS data dictionary view to manage the temporary tablespace groups in your database. Here is a simple query on the view that shows the names of all tablespace groups:


SQL> SELECT group_name, tablespace_name
     FROM dba_tablespace_groups;

GROUP_NAME         TABLESPACE_NAME
----------         ---------------
TMPGRP1            TEMP01
SQL>

You can also use the DBA_USERS view to find out which temporary tablespaces or temporary tablespace groups are assigned to each user. Here's an example:


SQL> SELECT username, temporary_tablespace
     FROM dba_users;

USERNAME        TEMPORARY_TABLESPACE
--------        ---------------------
SYS             TEMP
SYSTEM          TEMP
SAM             TMPGRP1
SCOTT           TEMP
. . .
SQL>


 

Default Permanent Tablespaces

Prior to the Oracle Database 10g release, the System tablespace was the default permanent tablespace for any users you created if you neglected to assign the user to a default tablespace. As of Oracle Database 10g, you can create a default permanent tablespace to which a new user will be assigned if you don't assign a specific default tablespace when you create the user.

Note     

You can't drop a default permanent tablespace without first creating and assigning another tablespace as the new default tablespace.

To find out what the current permanent tablespace for your database is, use the following query:


SQL> SELECT property_value FROM database_properties
     WHERE property_name='DEFAULT_PERMANENT_TABLESPACE';

PROPERTY_VALUE
--------------

USERS
SQL>


SYS@sales 2013-06-24 23:27:55> select * from database_properties where property_name like '%DEFAULT%';

PROPERTY_NAME                    PROPERTY_VALUE    DESCRIPTION
-------------------------------- ----------------  ------------------------------------
DEFAULT_TEMP_TABLESPACE          TEMP              Name of default temporary tablespace
DEFAULT_PERMANENT_TABLESPACE     USERS             Name of default permanent tablespace
DEFAULT_EDITION                  ORA$BASE          Name of the database default edition
DEFAULT_TBS_TYPE                 SMALLFILE         Default tablespace type

SYS@sales 2013-06-24 23:28:09>



You can also create or reassign a default permanent tablespace after database creation, by using the ALTER DATABASE statement, as shown here:

SQL> ALTER DATABASE DEFAULT TABLESPACE users;


Bigfile Tablespaces


Oracle Database 11g can contain up to 8 exabytes (8 million terabytes) of data. Don't panic, however, thinking how many millions of datafiles you'd need to manage in order to hold this much data. You have the option of creating really big tablespaces called, appropriately, bigfile tablespaces. A bigfile tablespace (BFT) contains only one very large datafile. If you're creating a bigfile-based permanent tablespace, it'll be a single datafile, and if it's a temporary tablespace, it will be a single temporary file. The maximum number of datafiles in Oracle is limited to 64,000 files. So, if you're dealing with an extremely large database, using bigfile tablespaces ensures you don't bump against the ceiling for the number of datafiles in your database.

Depending on the block size, a bigfile tablespace can be as large as 128 terabytes. In previous versions of Oracle, you always had to keep in mind the distinction between datafiles and tablespaces. Now, using the bigfile concept, Oracle has made a tablespace logically equal to a datafile by creating the new one-to-one relationship between tablespaces and datafiles. With Oracle Managed Files (OMF), datafiles are completely transparent to you when you use a BFT, and you can directly deal with the tablespace in many kinds of operations.
    Note      

The traditional tablespaces are now referred to as smallfile tablespaces. Smallfile tablespaces are the default tablespaces in Oracle Database 11g. You can have both smallfile and bigfile tablespaces in the same database.

Here's a summary of the benefits offered by using BFTs:

    You only need to create as many datafiles as there are tablespaces.

    You don't have to constantly add datafiles to your tablespaces.

    Datafile management in large databases is simplified—you deal with a few tablespaces directly, not many datafiles.

    Storage capacity is significantly increased because you don't reach the maximum-files limitation quickly when you use BFTs.

Restrictions on Using Bigfile Tablespaces

There are a few restrictions on using BFTs. You can use them only with locally managed tablespaces that use automatic segment space management. By now, you know that locally managed tablespaces with automatic segment space management are the default in Oracle Database 11g Release 1. Oracle also recommends that you use BFTs along with a Logical Volume Manager or the Automatic Storage Management feature, which support striping and mirroring. Otherwise, you can't really support the massive datafiles that underlie the BFT concept. Both parallel query execution and RMAN backup parallelization would be adversely impacted if you used BFTs without striping.

To avoid creating millions of extents when you use a BFT in a very large (greater than one terabyte) database, Oracle recommends that you change the extent allocation policy from AUTOALLOCATE, which is the default, to UNIFORM and set a very high extent size. In databases that aren't very large, Oracle recommends that you stick to the default AUTOALLOCATE policy and simply let Oracle take care of the extent sizing.
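For a very large database, such a statement might look like the following sketch, which sets a 100MB uniform extent size (the tablespace name, datafile path, and sizes here are illustrative):

SQL> CREATE BIGFILE TABLESPACE bigtbs_02
     DATAFILE '/u01/oracle/data/bigtbs_02.dbf' SIZE 10T
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;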
Creating Bigfile Tablespaces

You can create bigfile tablespaces in three different ways: you can specify them at database creation time and thus make them the default tablespace type, you can use the CREATE BIGFILE statement, or you can use the ALTER DATABASE statement to set the default type to a BFT tablespace. Let's look into each of these methods in the following sections.

You can specify BFTs as the default tablespace type during database creation. If you don't explicitly specify BFT as your default tablespace type, your database will have the traditional smallfile tablespace as the default.

Here's a portion of the CREATE DATABASE statement, showing how you specify a BFT:

SQL> CREATE DATABASE
     SET DEFAULT BIGFILE TABLESPACE
     . . .

Once you set the default tablespace type to bigfile tablespaces, all tablespaces you create subsequently will be BFTs unless you manually override the default setting, as shown shortly.

Irrespective of which default tablespace type you choose—bigfile or smallfile—you can always create a bigfile tablespace by specifying the type explicitly in the CREATE TABLESPACE statement, as shown here:

SQL> CREATE BIGFILE TABLESPACE bigtbs_01
     DATAFILE '/u01/oracle/data/bigtbs_01.dbf' SIZE 100G
     . . .

In the preceding statement, the explicit specification of the BIGFILE clause will override the default tablespace type, if it was a smallfile type. Conversely, if your default tablespace type is BIGFILE, you can use the SMALLFILE keyword to override the default type when you create a tablespace.

When you specify the CREATE BIGFILE TABLESPACE clause, Oracle will automatically create a locally managed tablespace with automatic segment space management. You can specify the datafile size in kilobytes, megabytes, gigabytes, or terabytes.
    Tip     

On operating systems that don't support large files, the bigfile size will be limited by the maximum file size that the operating system can support.

You can dynamically change the default tablespace type to bigfile or smallfile, thus making all tablespaces you subsequently create either bigfile or smallfile type tablespaces. Here's an example that shows how to set the default tablespace type in your database to bigfile:

SQL> ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;

You can also migrate database objects from a smallfile tablespace to a bigfile tablespace, or vice versa, after changing a tablespace's type. You can migrate the objects using the ALTER TABLE ... MOVE or the CREATE TABLE AS SELECT commands. Or you can use the Data Pump Export and Import tools to move the objects between the two types of tablespaces.
Altering a Bigfile Tablespace

You can use the RESIZE and AUTOEXTEND clauses in the ALTER TABLESPACE statement to modify the size of a BFT. Note that both these space-extension clauses can be used directly at the tablespace, not the file, level. Thus, both of these clauses provide datafile transparency—you deal directly with the tablespaces and ignore the underlying datafiles.

Here are more details about the two clauses:

    RESIZE: The RESIZE clause lets you resize a BFT directly, without using the DATAFILE clause, as shown here:

    SQL> ALTER TABLESPACE bigtbs RESIZE 120G;

    AUTOEXTEND: The AUTOEXTEND clause enables automatic file extension, again without referring to the datafile. Here's an example:

    SQL> ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;

Viewing Bigfile Tablespace Information

You can gather information about the BFTs in your database by using the following data dictionary views:

    DBA_TABLESPACES

    USER_TABLESPACES

    V$TABLESPACE

All three views have the new BIGFILE column, whose value indicates whether a tablespace is of the BFT type (YES) or smallfile type (NO).
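For example, you can check each tablespace's type with a query along these lines (the output shown is illustrative and will differ depending on the tablespaces in your database):

SQL> SELECT tablespace_name, bigfile FROM dba_tablespaces;

TABLESPACE_NAME     BIG
---------------     ---
SYSTEM              NO
SYSAUX              NO
BIGTBS_01           YES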

You can also use the DATABASE_PROPERTIES data dictionary view, as shown in the following query, to find out what the default tablespace type for your database is:

SQL> SELECT property_value
     FROM database_properties
     WHERE property_name='DEFAULT_TBS_TYPE';

PROPERTY_VALUE
--------------
SMALLFILE
SQL>

Managing the Sysaux Tablespace


Oracle Database 10g mandates the creation of the Sysaux tablespace, which serves as an auxiliary tablespace to the System tablespace. Until then, the System tablespace was the default location for storing objects belonging to components like the Workspace Manager, Logical Standby, Oracle Spatial, LogMiner, and so on. The more features the database offered, the greater the demand for space in the System tablespace. In addition, several features had to be accommodated in their own repositories, like the Enterprise Manager and its repository. On top of all this, you had to create a special tablespace for the Statspack repository.

To alleviate this pressure on the System tablespace and to consolidate all the repositories for the various Oracle features, Oracle Database 10g offers the Sysaux tablespace as a centralized single storage location for various database components. Using the Sysaux tablespace offers the following benefits:

    There are fewer tablespaces to manage because you don't have to create a separate tablespace for many database components. You just assign the Sysaux tablespace as the default location for all the components.

    There is reduced pressure on the System tablespace.

The size of the Sysaux tablespace depends on the size of the database components that you'll store in it. Therefore, you should base your Sysaux tablespace sizing on the components and features that your database will use. Oracle recommends that you create the Sysaux tablespace with a minimum size of 240MB. Generally, the OEM repository tends to be the largest user of the Sysaux tablespace.
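To see which components are actually consuming space in the Sysaux tablespace, you can query the V$SYSAUX_OCCUPANTS view, as in this example:

SQL> SELECT occupant_name, schema_name, space_usage_kbytes
     FROM v$sysaux_occupants
     ORDER BY space_usage_kbytes DESC;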


Creating the Sysaux Tablespace


If you use the Oracle Database Configuration Assistant (DBCA), you can automatically create the Sysaux tablespace when you create a new database, whether it is based on the seed database or a completely new, built-from-scratch, user-defined database. During the course of creating a database, the DBCA asks you to select the file location for the Sysaux tablespace. When you upgrade a database to Oracle Database 10g, the Database Upgrade Assistant will similarly prompt you for the file information for creating the new Sysaux tablespace.
    Tip     

The Sysaux tablespace is mandatory, whether you create a new Oracle Database or migrate from a release prior to Oracle Database 10g.

You can create the Sysaux tablespace manually at database creation time. Here is the syntax for creating the Sysaux tablespace:


CREATE DATABASE mydb
USER sys IDENTIFIED BY abc1def
USER system IDENTIFIED BY uvw2xyz
. . .
SYSAUX DATAFILE '/u01/oracle/oradata/mydb/sysaux01.dbf' SIZE 500M REUSE
. . .

If you omit the SYSAUX creation clause from the CREATE DATABASE statement, Oracle will create both the System and Sysaux tablespaces automatically, with their datafiles being placed in system-determined default locations. If you are using Oracle Managed Files, the datafile location will be dependent on the OMF initialization parameters. If you include the DATAFILE clause for the System tablespace, you must use the DATAFILE clause for the Sysaux tablespace as well, unless you are using OMF.

You can only set the datafile location when you create the Sysaux tablespace during database creation, as shown in the preceding example. Oracle sets all the other attributes, which are mandatory and can't be changed with the ALTER TABLESPACE command. Once you provide the datafile location and size, Oracle creates the Sysaux tablespace with the following attributes:

    Permanent

    Read/write

    Locally managed

    Automatic segment space management

You can alter the Sysaux tablespace using the same ALTER TABLESPACE command that you use for other tablespaces. Here's an example:


SQL> ALTER TABLESPACE sysaux ADD DATAFILE
     '/u01/app/oracle/prod1/oradata/sysaux02.dbf' SIZE 500M;

Usage Restrictions for the Sysaux Tablespace

Although using the ALTER TABLESPACE command to change the Sysaux tablespace may make it seem as if the Sysaux tablespace is similar to the other tablespaces in your database, several usage features set the Sysaux tablespace apart. Here are the restrictions:

    You can't drop the Sysaux tablespace by using the DROP TABLESPACE command during normal database operation.

    You can't rename the Sysaux tablespace during normal database operation.

    You can't transport a Sysaux tablespace like other tablespaces.

Easy File Management with Oracle Managed Files


The previous sections have dealt with operating system file management, where you, the DBA, manually create, delete, and manage the datafiles. Oracle Managed Files enable you to bypass dealing with operating system files directly.

As you learned in Chapter 5, in an Oracle database, you deal with various types of database files, including datafiles, control files, and online redo log files. In addition, you also have to manage tempfiles for use with temporary tablespaces, archived redo logs, RMAN backup files, and files for storing flashback logs. Normally, you'd have to set the complete file specification for each of these files when you create one of them. Under an OMF setup, however, you specify the file system directory for all the previously mentioned types of Oracle files by setting three initialization parameters: DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, and DB_RECOVERY_FILE_DEST. Oracle will then automatically create the files in the specified locations without your having to provide the actual location for them.

OMF offers a simpler way of managing the file system—you don't have to worry about specifying long file specifications when you're creating tablespaces or redo log groups or control files. When you want to create a tablespace or add datafiles when using OMF, you don't have to give a location for the datafiles. Oracle will automatically create the file or add the datafile in the location you specified in the init.ora file for datafiles. Note that you don't have to use a DATAFILE or TEMPFILE clause when creating a tablespace when you use the OMF-based file system.

Here are a couple of examples showing how simple it is to create a tablespace and add space to it under an OMF system:


SQL> CREATE TABLESPACE finance01;
SQL> ALTER TABLESPACE finance01 ADD DATAFILE 500M;

Similarly, when you want to drop a tablespace, you just need to issue the DROP TABLESPACE command and the OMF datafiles are automatically removed by Oracle, along with the tablespace definition:


SQL> DROP TABLESPACE finance01;

OMF files are definitely easier to manage than the traditional manually created operating system files. However, there are some limitations:

    OMF files can't be used on raw devices, which offer superior performance to operating system files for certain applications (such as Oracle Real Application Clusters).

    All the OMF datafiles have to be created in one directory. It's hard to envision a large database fitting into this one file system.

    You can't choose your own names for the datafiles created under OMF. Oracle will use a naming convention that includes the database name and unique character strings to name the datafiles.

Oracle recommends using OMF for small and test databases. Normally, if you drop a datafile, the database won't have any references to the datafile, but the physical file still exists in the old location—you have to explicitly remove the physical file yourself. If you use OMF, Oracle will remove the file for you when you drop it from the database. According to Oracle, OMF file systems are most useful for databases using Logical Volume Managers that support RAID and extensible file systems. Smaller databases benefit the most from OMF, because of the reduced file-management tasks. Test databases are another area where an OMF file system will cut down on management time.
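By way of comparison, with traditional files the standard DROP TABLESPACE syntax requires the INCLUDING CONTENTS AND DATAFILES clause to remove the operating system files as part of the drop; with OMF files, that cleanup happens automatically. The tablespace name below is illustrative:

SQL> DROP TABLESPACE sales_hist INCLUDING CONTENTS AND DATAFILES;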

You have to use operating system-based files if you want to use the OMF feature; you can't use raw files. You do lose some control over the placement of data in your storage system when you use OMF files, but even with these limitations, the benefits of OMF file management can outweigh its limitations in some circumstances.


Benefits of Using OMF


You can create tablespaces with OMF-based files. You can also specify that your online redo log files and your control files are in the OMF format. OMF files offer several advantages over user-managed files:

    Oracle automatically creates and deletes OMF files.

    You don't have to worry about coming up with a naming convention for the files.

    It's easy to drop datafiles by mistake when you're managing them. With OMF files, you don't run the risk of accidentally deleting database files.

    Oracle automatically deletes a file when it's no longer needed.

    You can have a mix of traditional files and OMF files in the same database.

In the following sections, we'll look at the OMF feature in some detail.
Creating Oracle Managed Files

You can create OMF files when you create the database, or you can add them to a database that you created with traditional datafiles later on. Either way, you need to set some initialization parameters to enable OMF file creation.
Initialization Parameters for OMF

You need to set three initialization parameters to enable the use of OMF files. You can set these parameters in your parameter file, and you can change them online with the ALTER SYSTEM or ALTER SESSION statement. You can use each of these parameters to specify the file destination for different types of OMF files, such as datafiles, control files, and online redo log files:

    DB_CREATE_FILE_DEST: This parameter specifies the default location of datafiles, online redo log files, control files, block-change tracking files, and tempfiles. You can also specify a control file location if you wish. Unfortunately, the DB_CREATE_FILE_DEST parameter can take only a single directory as its value; you can't specify multiple file systems for the parameter. If the assigned directory for file creation fills up, however, you can always specify a new directory, because the DB_CREATE_FILE_DEST parameter is dynamic. Over time, this lets you place Oracle datafiles anywhere in the file system.

    DB_CREATE_ONLINE_LOG_DEST_n: You can use this parameter to specify the default location of online redo log files and control files. Here, n identifies a distinct destination; Oracle places one member of each online redo log group and one copy of the control file in each destination you define. If you want to multiplex your online redo log files as Oracle recommends, define at least two destinations by setting n to 1 and 2.

    DB_RECOVERY_FILE_DEST: This parameter defines the default location for control files, archived redo log files, RMAN backups, and flashback logs. If you omit the DB_CREATE_ONLINE_LOG_DEST_n parameter, this parameter will determine the location of the online redo log files and control files. The directory location you specify using this parameter is also known as the flash recovery area, which I explain in detail in Chapter 10.

In addition to the preceding three initialization parameters, the DB_RECOVERY_FILE_DEST_SIZE parameter specifies the size of your flash recovery area.
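For example, you could enable multiplexed OMF redo log files and control files and set up the flash recovery area with statements like the following sketch (the directories and sizes here are illustrative; note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST):

SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '/u02/app/oracle/oradata';
SQL> ALTER SYSTEM SET db_create_online_log_dest_2 = '/u03/app/oracle/oradata';
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u04/app/oracle/oradata';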

If you don't specify any of these initialization parameters in your init.ora file or SPFILE, you can still use the ALTER SYSTEM command to dynamically enable the creation of OMF files, as shown in the following example:


SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST =
  2  '/test01/app/oracle/oradata/finance1';
System altered.
SQL>

As long as you specify the DB_CREATE_FILE_DEST parameter, you can have Oracle create OMF files for you, and you can use both the user-managed and OMF files simultaneously without a problem.
File-Naming Conventions

Oracle uses the OFA standards in creating filenames, so filenames are unique and datafiles are easily identifiable as belonging to a certain tablespace. Table 6-1 shows the naming conventions for various kinds of OMF files and an example of each type. Note that the letter t stands for a unique tablespace name, g stands for an online redo group, and u is an 8-character string.
Table 6-1: OMF File-Naming Conventions

OMF File Type                                   Naming Convention    Example
----------------------------------------------  -------------------  ----------------------
Datafile                                        ora_%t_%u.dbf        ora_data_Y2ZV8P00.dbf
Tempfile (default size is 100MB)                ora_%t_%u.tmp        ora_temp_Y2ZWGD00.tmp
Online redo log file (default size is 100MB)    ora_%g_%u.log        ora_4_Y2ZSQK00.log
Control file                                    ora_%u.ctl           ora_Y2ZROW00.ctl
Different Types of Oracle Managed Files

You can use OMF to create all three types of files that the Oracle database requires: control files, redo log files, and, of course, datafiles. However, there are interesting differences in the way OMF requires you to specify (or not specify) each of these types of files. The following sections cover how Oracle creates different types of files.
Control Files

As you have probably noticed already, there is no specific parameter that you need to include in your init.ora file to specify the OMF format. If you specify the CONTROL_FILES initialization parameter, you will, of course, have to specify a complete file location for those files, and obviously they will not be OMF files—they are managed by you. If you don't specify the CONTROL_FILES parameter, and you use the DB_CREATE_FILE_DEST or the DB_CREATE_ONLINE_LOG_DEST_n parameter, your control files will be OMF files.

If you are using a traditional init.ora file, you need to add the control file locations to it. If you are using an SPFILE, Oracle automatically adds the control file information to it.
Redo Log Files

OMF redo log file creation is similar to control file creation. If you don't specify a location for the redo log files, and you set either the DB_CREATE_FILE_DEST or the DB_CREATE_ONLINE_LOG_DEST_n parameter in the init.ora file, Oracle automatically creates OMF-based redo log files.
Datafiles

If you don't specify a datafile location in the CREATE or ALTER statement for a regular datafile, for a tempfile belonging to a temporary tablespace, or for an undo tablespace datafile, but have set the DB_CREATE_FILE_DEST parameter, all these files will be OMF files.
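As a quick sketch, once DB_CREATE_FILE_DEST is set, each of the following statements creates its datafile or tempfile as an OMF file in that directory (the tablespace names are illustrative):

SQL> CREATE TABLESPACE sales_data;
SQL> CREATE TEMPORARY TABLESPACE temp_omf;
SQL> CREATE UNDO TABLESPACE undotbs_omf;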
Simple Database Creation Using OMF

Let's look at a small example to see how OMF files can really simplify database creation. When you create a new database, you need to provide the control file, redo log file, and datafile locations to Oracle. You specify some file locations in the initialization file (control file locations) and some file locations at database creation (such as redo log locations). However, if you use OMF-based files, database creation can be a snap, as you'll see in the sections that follow.
Setting Up File Location Parameters

For the new OMF-based database, named nicko, let's use the following initialization parameters:


db_name=nicko
DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata'
DB_RECOVERY_FILE_DEST_SIZE = 100M
DB_RECOVERY_FILE_DEST = '/u04/app/oracle/oradata'
LOG_ARCHIVE_DEST_1 = 'LOCATION = USE_DB_RECOVERY_FILE_DEST'

Note that of the four OMF-related initialization parameters, I chose to use only the DB_CREATE_FILE_DEST, DB_RECOVERY_FILE_DEST_SIZE, and DB_RECOVERY_FILE_DEST parameters. I didn't have to use the fourth parameter, DB_CREATE_ONLINE_LOG_DEST_n, in this example. When this parameter is left out, Oracle creates one copy of the control file and one member of each online redo log group in each of the locations specified by the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters. I thus have two copies of the control file and duplexed online redo log files.

The setting for the last parameter, LOG_ARCHIVE_DEST_1, tells Oracle to send the archived redo logs for storage in the flash recovery area specified by the DB_RECOVERY_FILE_DEST parameter.
Starting the Instance

Using the simple init.ora file shown in the preceding section, you can start an instance as shown in Listing 6-10.
Listing 6-10: Creating the OMF-Based Instance


SQL> connect sys/sys_passwd as sysdba
Connected to an idle instance.
SQL> STARTUP NOMOUNT PFILE='initnicko.ora';
ORACLE instance started.
Total System Global Area   188743680 bytes
Fixed Size                   1308048 bytes
Variable Size              116132464 bytes
Database Buffers            67108864 bytes
Redo Buffers                 4194304 bytes
SQL>

Creating the Database

Now that you've successfully created the new Oracle instance, you can create the new database nicko with this simple command:


SQL> CREATE DATABASE nicko;

  Database created.
SQL>

That's it! Just those two simple lines are all you need to create a functional database with the following structures:

    A System tablespace created in the default file system specified by the DB_CREATE_FILE_DEST parameter (/u01/app/oracle/oradata)

    A Sysaux tablespace created in the default file system (/u01/app/oracle/oradata)

    Two duplexed redo log groups

    Two copies of the control file

    A default temporary tablespace

    An undo tablespace automatically managed by the Oracle database
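Once the database is open, you can confirm these structures from the data dictionary. The following queries are a sketch using standard dictionary and dynamic performance views; no assumptions beyond the nicko database created above:

```sql
-- Confirm the OMF datafiles, redo log members, control files, and tempfiles
SELECT tablespace_name, file_name FROM dba_data_files;
SELECT group#, member FROM v$logfile;
SELECT name FROM v$controlfile;
SELECT file_name FROM dba_temp_files;
```

The file names returned will all follow the OMF naming convention (o1_mf_…) under the directories set by the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters.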

Where Are the OMF Files?

You can see the various files within the database by looking in the alert log for the new database, alert_nicko.log, which you'll find in the $ORACLE_HOME/rdbms/log directory, since we didn't specify the BACKGROUND_DUMP_DEST parameter in the init.ora file.

In the following segment from the alert log file for the database, you can see how the various files necessary for the new database were created. First, Oracle creates the control files and, because the DB_CREATE_ONLINE_LOG_DEST_n parameter wasn't set, places them in the locations specified by the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters.


Sun Jan  13 17:44:51 2008
create database nicko
default  temporary tablespace temp
Sun Jan  13 17:44:51 2008
WARNING: Default passwords for SYS and SYSTEM will be used.
         Please change the passwords.
Created Oracle managed file /u01/app/oracle/oradata/NICKO/controlfile/o1_mf_150w
. . .

Sun Jan  13 17:46:37 2008
Completed: create database nicko
default  temporary tablespace
MMNL started with pid=13, OS id=28939

Here's what the alert log shows regarding the creation of the control files:


Created Oracle managed file /u01/app/oracle/oradata/NICKO/controlfile/o1_mf_150w
h3r1_.ctl
Created Oracle managed file /u04/app/oracle/oradata/NICKO/controlfile/o1_mf_150w
h3_.ctl

Next, the Oracle server creates the duplexed online redo log files. Oracle creates the minimum number of groups necessary and duplexes them by creating one member of each group in each of the locations specified by the DB_CREATE_FILE_DEST and the DB_RECOVERY_FILE_DEST parameters:


Created Oracle managed file /u01/app/oracle/oradata/NICKO/onlinelog/o1_mf_1_150w
h48m_.log
Created Oracle managed file /u04/app/oracle/oradata/NICKO/onlinelog/o1_mf_1_150w
hf07_.log
Created Oracle managed file /u01/app/oracle/oradata/NICKO/onlinelog/o1_mf_2_150w
honc_.log
Created Oracle managed file /u04/app/oracle/oradata/NICKO/onlinelog/o1_mf_2_150w
hwh0_.log

The System tablespace is created next, in the location you specified for the DB_CREATE_FILE_DEST parameter:


create tablespace SYSTEM datafile /* OMF datafile */
  default storage (initial 10K next 10K) EXTENT MANAGEMENT DICTIONARY online
Created Oracle managed file /u01/app/oracle/oradata/NICKO/datafile/o1_mf_system_
150wj4c3_.dbf
Completed: create tablespace SYSTEM datafile /* OMF datafile

The default Sysaux tablespace is created next, as shown here:


create tablespace SYSAUX datafile /* OMF datafile */
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO online
Sun Jan 13 17:46:16 2008
Created Oracle managed file /u01/app/oracle/oradata/NICKO/datafile/o1_mf_sysaux_
150wkk9n_.dbf
Completed: create tablespace SYSAUX datafile /* OMF datafile

The undo tablespace is created next, with the default name of SYS_UNDOTS in the location specified by the DB_CREATE_FILE_DEST parameter. A temporary tablespace named TEMP is also created in the same directory:


CREATE UNDO TABLESPACE SYS_UNDOTS DATAFILE SIZE 10M AUTOEXTEND ON
Created Oracle managed file
/test01/app/oracle/oradata/ora_omf/finDATA/ora_sys_undo_yj5mg123.dbf
...
Successfully onlined Undo Tablespace 1.
Completed: CREATE UNDO TABLESPACE SYS_UNDOTS DATAFILE SIZE 1
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE
Created Oracle managed file
/test01/app/oracle/oradata/ora_omf/finDATA/ora_temp_yj5mg592.tmp
Completed: CREATE TEMPORARY TABLESPACE TEMP TEMPFILE

Adding Tablespaces

Adding other tablespaces and datafiles within an OMF file system is easy. All you have to do is invoke the CREATE TABLESPACE command without the DATAFILE keyword. Oracle will automatically create the datafiles for the tablespace in the location specified in the DB_CREATE_FILE_DEST parameter. The example that follows shows how to create the tablespace:


SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST =
  2 '/test01/app/oracle/ora_omf/finance1';
System altered.
SQL> CREATE TABLESPACE omftest;
Tablespace created.
SQL> SELECT file_name FROM dba_data_files
  2  WHERE tablespace_name='OMFTEST';
FILE_NAME
-----------------------------------------------------------
/test01/app/oracle/oradata/ora_omf/ora_omftest_yj7590bm.dbf
SQL>

Compare the OMF tablespace-creation statement shown previously with the typical tablespace-creation statement, and you'll see how OMF simplifies database administration. Adding datafiles is also simple with OMF, as shown by the following example:


SQL> ALTER TABLESPACE omftest ADD DATAFILE;

OMF files, as you can see, simplify file administration chores and let you create and manage databases with a small number of initialization parameters. You can easily set up the necessary number of locations for your online redo log files, control files, and archive log files by specifying the appropriate value for the various OMF parameters. Oracle's ASM-based file system relies on the OMF file system.


Copying Files Between Two Databases

You can copy files directly between databases over Oracle Net, without using either operating system commands or utilities such as FTP. The DBMS_FILE_TRANSFER package lets you copy binary files within the same server or transfer a binary file between servers. You use the COPY_FILE procedure to copy files on the local system, the GET_FILE procedure to copy files from a remote server to the local server, and the PUT_FILE procedure to read a local file and copy it to a remote file system. Here's a brief explanation of the key procedures of this package.

COPY_FILE

The COPY_FILE procedure enables you to copy binary files from one location to another on the same or different servers. Before you can copy the files, you must first create the source and destination directory objects, as follows:

SQL> CREATE OR REPLACE DIRECTORY source_dir as '/u01/app/oracle/source';
SQL> CREATE OR REPLACE DIRECTORY dest_dir as '/u01/app/oracle/dest';

Once you create your source and destination directories, you can use the COPY_FILE procedure to copy files, as shown here:

SQL> BEGIN
       DBMS_FILE_TRANSFER.COPY_FILE(
         source_directory_object      => 'SOURCE_DIR',
         source_file_name             => 'test01.dbf',
         destination_directory_object => 'DEST_DIR',
         destination_file_name        => 'test01_copy.dbf');
     END;
     /
SQL>

Check the destination directory to ensure that the file was copied correctly.


GET_FILE

You use the GET_FILE procedure to copy binary files from a remote server to the local server. First, log into the remote server and create the source directory object, as shown here:

SQL> CONNECT system/system_passwd@remote_db
Connected.
SQL> CREATE OR REPLACE DIRECTORY source_dir as '/u01/app/oracle/source';

Next, you log into the local server and create a destination directory object, as shown here:

SQL> CONNECT system/system_passwd@local_db
Connected.
SQL> CREATE OR REPLACE DIRECTORY dest_dir as '/u01/app/oracle/dest';

Once you create the source and destination directories, ensure that you have a database link between the two databases, or create one if one doesn't exist:

SQL> CREATE DATABASE LINK prod1
     CONNECT TO system IDENTIFIED BY system_passwd
     USING 'prod1';
SQL>

You must make sure that you've set up the connection to the prod1 database, through the tnsnames.ora file, for example, before you can create the database link.

Now you execute the GET_FILE procedure to transfer the file from the remote server to the local server, as shown here:

SQL> BEGIN
       DBMS_FILE_TRANSFER.GET_FILE(
         source_directory_object      => 'SOURCE_DIR',
         source_file_name             => 'test01.dbf',
         source_database              => 'remote_db',
         destination_directory_object => 'DEST_DIR',
         destination_file_name        => 'test01.dbf');
     END;
     /
SQL>

Note that for the SOURCE_DATABASE attribute, you provide the name of the database link to the remote database.

PUT_FILE

You use the PUT_FILE procedure to transfer a binary file from the local server to a remote server. As in the case of the previous two procedures, you must first create the source and destination directory objects, as shown here (in addition, you must ensure the existence of a database link from the local to the remote database):

SQL> CONNECT system/system_passwd@local_db
Connected.
SQL> CREATE OR REPLACE DIRECTORY source_dir as '/u01/app/oracle/source';
SQL> CONNECT system/system_passwd@remote_db
Connected.
SQL> CREATE OR REPLACE DIRECTORY dest_dir as '/u01/app/oracle/dest';

You can now use the PUT_FILE procedure to put a local file on the remote server, as shown here:

SQL> BEGIN
       DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object      => 'SOURCE_DIR',
         source_file_name             => 'test01.dbf',
         destination_directory_object => 'DEST_DIR',
         destination_file_name        => 'test01.dbf',
         destination_database         => 'remote_db');
     END;
     /
SQL>

Types of SQL Statements


Relational database principles underlie SQL. You need only instruct the language what to do, not how to do it. In addition to working with traditional relational data, Oracle's new XML-centric extensions to its SQL language enable you to manage XML, full text, multimedia, and objects. Oracle Database 11g integrates XML query, storage, and update functionality in the database engine. No matter which tool you use to access the Oracle database, ultimately you'll be using Oracle SQL to perform your transactions. Your application program or the Oracle tool you use may allow you access to the database without your using SQL, but the tools and applications themselves have to use SQL to process your requests.

SQL includes commands for data modeling, data definition, data access, data security, and data administration. SQL statements used by Oracle can be broadly divided into several groups based on whether they change the table data, the table structures, or some other session or instance characteristic. The SQL statement types are as follows:

    System control

    Session control

    Embedded SQL

    Data manipulation

    Transaction control

    Data definition

The following sections examine each of these broad types of SQL statements in detail.
System-Control Statements

You can use the system-control statement ALTER SYSTEM to alter the properties of a running database instance. For example, you can use ALTER SYSTEM to modify certain initialization parameters, such as the shared pool component of the system global area (SGA). At present, the ALTER SYSTEM command is the only system-control SQL statement in Oracle.

Here's an example of the ALTER SYSTEM command:


SQL> ALTER SYSTEM KILL SESSION '25,9192';

Session killed
SQL>
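ALTER SYSTEM can also modify initialization parameters, such as the shared pool component of the SGA mentioned earlier. Here's a sketch; the 256M value is purely illustrative:

```sql
-- Resize the shared pool component of the SGA for the running instance
ALTER SYSTEM SET shared_pool_size = 256M;
```

Parameter changes made this way apply to the running instance; you can add the SCOPE clause to control whether the change also persists in the SPFILE.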

Session-Control Statements

Session-control statements dynamically alter the properties of an individual user's session. For example, if you intend to trace what your SQL session is doing in the database, you can use the ALTER SESSION SET SQL_TRACE=TRUE statement to trace your session alone. Session-control statements also come in handy when you're changing several initialization parameters just for your session.
    Note      

PL/SQL (Oracle's procedural extension of the SQL language) doesn't support session-control statements.

Common session-control statements include the ALTER SESSION and SET ROLE commands. Here's an example of the use of the ALTER SESSION statement, wherein the ALTER SESSION command is used to set the date format for the duration of the session:


SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'MM-DD-YYYY HH:MI:SS';

Session altered.
SQL>
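The SET ROLE command, the other common session-control statement, works in the same session-scoped way. Here's a sketch; the role name and password are hypothetical:

```sql
-- Enable a password-protected role for the current session only
SET ROLE clerk IDENTIFIED BY clerk_password;
```

The role remains enabled only until the session ends or you issue another SET ROLE command.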

Embedded SQL Statements

Embedded SQL statements are data definition language, data manipulation language (DML), and transaction-control statements (such as OPEN, CLOSE, FETCH, and EXECUTE) used in a procedural language program, such as the statements used with the Oracle precompilers.
Data Manipulation Language Statements

The data manipulation language statements are statements that either query (retrieve) or manipulate (change) data in a table. For the most part, DML statements modify the data in the schema objects. In most online transaction processing (OLTP) systems, the bulk of Oracle's work consists of accepting requests from users that contain DML statements and returning the results of those statements.

You'll deal with four important DML statements most of the time: SELECT, INSERT, UPDATE, and DELETE. Note that in addition to these four common DML statements, there are others that facilitate the execution of the four basic DML statements. For example, the MERGE statement performs conditional updates and inserts, and you use the LOCK TABLE statement to modify the default Oracle locking mechanism.
SELECT Statements

SELECT statements are queries that retrieve data from a table or a set of tables (or views). Oracle provides set operators, such as UNION, MINUS, and INTERSECT, that enable you to combine the results of several queries to get one final result set of data. You can use the ORDER BY command to sort the results provided by Oracle; otherwise, the results will not be in any particular order. When you need data from several tables, you need to join the tables in your SELECT statements. You can limit the result set when you join tables by providing a join condition.

You can also use subqueries as part of the main or top query. A subquery in the WHERE clause of a SELECT statement is called a nested subquery. A subquery that is part of the FROM clause of a SELECT statement is called an inline view. The Appendix provides examples of subqueries, nested subqueries, and inline views.
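Here's a quick sketch of both constructs, assuming the standard HR sample schema (employees and departments tables):

```sql
-- Nested subquery: the subquery appears in the WHERE clause
SELECT last_name FROM employees
 WHERE department_id IN
       (SELECT department_id FROM departments
         WHERE location_id = 1700);

-- Inline view: the subquery appears in the FROM clause
SELECT d.department_name, e.max_sal
  FROM (SELECT department_id, MAX(salary) AS max_sal
          FROM employees
         GROUP BY department_id) e,
       departments d
 WHERE d.department_id = e.department_id;
```

In the second query, the inline view e behaves like a temporary table that exists only for the duration of the statement.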
INSERT, DELETE, and UPDATE Statements

The INSERT statement inserts new rows into existing tables, and the DELETE statement removes entire rows from tables. The UPDATE command modifies one or more columns of a single row, or multiple rows within a table. Although optimizing the writing of SELECT statements that address large tables is an important part of performance tuning, it's the SQL statements that modify, delete, or add data that cause more frustration for the DBA when dealing with an OLTP database. Designing proper tables and indexes is important if the database is to efficiently process a large number of concurrent inserts, deletes, and updates to tables. In addition, the DBA needs to properly size the undo tablespace and the online redo logs to efficiently process these types of statements.
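As a minimal sketch of the three statements, using the employee table from earlier examples (the column names are assumptions):

```sql
INSERT INTO employee (emp_id, last_name, salary)
  VALUES (9999, 'Alapati', 50000);

UPDATE employee SET salary = salary * 1.10
 WHERE emp_id = 9999;

DELETE FROM employee WHERE emp_id = 9999;
```

Note that without a WHERE clause, UPDATE and DELETE act on every row of the table.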
Transaction-Control Statements

Transaction-control statements are used to control the changes made by data-manipulation SQL statements, such as INSERT, UPDATE, and DELETE. These are the four transaction-control statements:

    COMMIT: When this statement follows a set of DML statements, the changes will be made permanent.

    ROLLBACK: When this statement follows one or more DML statements, the changes made by the preceding statement or statements will be undone. If there are no save points, all statements from the beginning of the transaction will be rolled back.

    SAVEPOINT: This statement allows flexibility in your transactions, helping you set intermediate points in the transaction to which you can roll back (undo) your transactions.

    SET TRANSACTION: This rarely used statement denotes the start of a transaction and is used in statements like SET TRANSACTION READ ONLY.
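The following sketch shows how these statements fit together in a single transaction; the accounts table and its values are hypothetical:

```sql
INSERT INTO accounts VALUES (101, 5000);
SAVEPOINT first_insert;
INSERT INTO accounts VALUES (102, 7000);
-- Undo only the second insert, then make the first one permanent
ROLLBACK TO SAVEPOINT first_insert;
COMMIT;
```

After the COMMIT, only the row for account 101 is permanently stored; the rollback to the savepoint discarded the second insert without disturbing the first.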

Data Definition Language Statements

Data definition language statements enable you to define the structure of the various schema objects in the Oracle database. DDL statements enable you to create, alter, and remove database objects, such as tables and indexes. These are some of the main uses of DDL statements:

    Creating tables, indexes, and other schema objects

    Creating and modifying procedures, functions, and packages

    Dropping and modifying database objects

    Creating and managing users of the database

    Granting and revoking privileges on objects

    Creating and altering tablespaces

    Creating and modifying database links


Oracle Schemas


In Oracle, a schema is defined as a collection of logical structures of data, or schema objects, although it is used mostly as a synonym for the database user (specifically, the application owner) that owns the schema pertaining to a specific application. Thus, the accounting schema within a company database would own all the tables and code pertaining to the accounting department. In addition to containing tables, a schema contains other database objects, such as PL/SQL procedures, functions and packages, views, sequences, synonyms, and clusters. This logical separation of the objects within the database allows you considerable flexibility in managing and securing your Oracle databases.

Although the DBA can use the CREATE SCHEMA statement to populate a schema with database objects such as tables and views, more often the application owner creates the database objects and is referred to as the schema owner. The user who creates the objects owns database objects such as tables, views, procedures, functions, and triggers. The owner of the object has to explicitly assign specific rights to other users, such as SELECT or UPDATE, if those other users are to use the objects.
User-Defined Object Types

Oracle Database 11g is an object-relational database and, as such, it allows users to define several types of data other than the standard relational data types. These user-defined data types include the following:

    Object types: These complex types are an abstraction of real-world entities.

    Array types: These types are used to create ordered sets of data elements of the same type.

    Table types: These types are used to create an unordered set of data elements of the same data type.

    XML schema: This is a new object type that is used to create types and storage elements for XML documents based on the XML schema.

The Appendix provides examples of how to create various kinds of user-defined object types. In this chapter, the focus is on the traditional relational objects.

In addition, the owner may also create synonyms, which are aliases for the various objects for other users in the database. Synonyms, which are explained in the "Using Synonyms" section, later in this chapter, serve multiple purposes, including masking the ownership of data objects and simplifying SQL statements for users by eliminating the need for them to specify the schema owner's name each time they access a database object not owned by themselves.

There are two basic ways to create a schema in an Oracle database. The common way is to log in as the schema owner and create all the tables, indexes, and other objects that you plan to include in your schema. Since the objects are all created by the same schema owner, they'll automatically be part of the schema.

The second way to create a schema is to explicitly create it by using the CREATE SCHEMA statement. The CREATE SCHEMA statement lets you create multiple tables and views, as well as grant users privileges on those tables and views, all in a single SQL statement.
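Here's a sketch of the CREATE SCHEMA statement; the table, view, and grant shown are hypothetical. The schema name must match the current user, and the entire statement succeeds or fails as a unit:

```sql
CREATE SCHEMA AUTHORIZATION hr
  CREATE TABLE regions
    (region_id NUMBER(5), region_name VARCHAR2(25))
  CREATE VIEW regions_v AS
    SELECT region_name FROM regions
  GRANT SELECT ON regions_v TO public;
```

If any of the embedded statements fails, Oracle rolls back the whole CREATE SCHEMA statement, which is the main advantage over issuing the statements one by one.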


Oracle Tables


Tables are the basic units of data storage in an Oracle database. A table is a logical entity that makes the reading and manipulation of data intuitive to users. A table consists of columns and rows, and a table row corresponds to a single record. When you create a table, you give it a name and define the set of columns that belong to it. Each column has a name and a specific data type (such as VARCHAR2 or DATE). You may have to specify the width or the precision and scale for certain columns, and some of the table columns can be set to contain default values.
    Note     

You can create either relational tables or object tables in Oracle databases. Relational tables are the basic table structures with rows and columns to hold data. Object tables use object types for their column definitions and are used to hold object instances of a particular type. In this chapter, we exclusively use relational tables.
What's The Dual Table?

The dual table belongs to the sys schema and is created automatically when the data dictionary is created. The dual table has one column called "dummy" and one row, and it enables you to use the Oracle SELECT command to compute a constant expression. As you have seen, everything in Oracle has to be in a table somewhere. Even if something isn't in a table, such as the evaluation of an arithmetical expression, a query that retrieves those results needs to use a table, and the dual table serves as a catchall table for those expressions. For example, to compute the product of 9 and 24,567, you can issue the following SQL command: SELECT 9*24567 FROM dual.
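A few typical uses of the dual table:

```sql
SELECT 9*24567 FROM dual;    -- returns 221103
SELECT sysdate FROM dual;    -- current date and time
SELECT user   FROM dual;     -- current session user
```

Each query returns exactly one row, since dual itself contains only one row.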

There are four basic ways in which you can organize tables in an Oracle database:

    Heap-organized tables: A heap-organized table is nothing but the normal Oracle table, where data is stored in no particular order.

    Index-organized tables: An index-organized table stores data sorted in a B-tree indexed structure.

    Clustered tables: A clustered table is part of a group of tables that shares the same data blocks, because data from the clustered tables are often requested together.

    Partitioned tables: A partitioned table lets you divide a large amount of data into subtables, called partitions, according to various criteria. Partitioning is especially useful in a data warehouse environment.

This section of the chapter will discuss the standard (heap-organized) Oracle tables. I'll discuss the other types of tables in the "Special Oracle Tables" section, later in the chapter.
Estimating the Table Size

Before you create a new table, it's a good idea to estimate the size of the table you'll need now and the size you expect in the future. Knowing the size of the table allows you to make the right decisions about space allocation.

Algorithms are available for figuring out the potential size of tables and indexes—they take the row size in bytes and multiply it by the estimated number of rows in the table. Estimation of table size is more an art than a precise science, and you don't need to agonize over coming up with "accurate" figures. Just use common sense and make sure you are not wildly off the mark.

You can simplify table-size estimation by using the OEM Database Control or by using the CREATE_TABLE_COST procedure of the DBMS_SPACE package. The following sections illustrate both approaches to sizing a new table.
Using Database Control to Estimate Table Size

Let's look at the steps you need to follow to derive size estimates for a new table using the Database Control interface:

    From the Database Control home page, click the Administration tab.

    Click Tables in the Schema list.

    Click the Create button at the bottom-right corner.

    Select Standard or the Index Organized type.

    On the Create Table page, enter the new table name and the column data types in the columns section. Click the Estimate Table Size button.

    In the Estimate Table Size page, enter the estimated number of rows in your table (see Figure 7-1).

Figure 7-1: Using the OEM Database Control to estimate table size

Once you finish all the steps, OEM will quickly tell you how much space you'll need to accommodate the new table. It will also tell you how much space you need to allocate to the tablespace in which you're going to create your new table.
    Note     

The following discussion of table operations deals with the "normal" or "regular" heap-organized Oracle tables, whose rows aren't stored in any particular order into the table. Most of the table operations discussed are common to all types of Oracle tables, but with some syntax modifications or limitations.
Oracle Row Format and Size

The Oracle database stores each row of a table as one or more row pieces. If a table row is too long to fit within a single data block, the database stores the row using multiple row pieces, chaining the row across multiple blocks. Of course, if the table rows are small, a data block can contain multiple table rows. The database uses the ROWIDs of the row pieces to chain the pieces together. Each row piece contains a row header and the data for all or part of a row's columns.
Using the DBMS_SPACE Package to Estimate Space Requirements

The DBMS_SPACE package enables you to analyze segment growth and space requirements. You can use a procedure from this package to estimate size requirements for tables and indexes. If you know the approximate length of a new table's rows and the estimated number of rows, the DBMS_SPACE package will tell you the estimated space you need to create the table, given the storage attributes of the tablespace in which you plan to create it. You can use either the column information of the table or its row size to output the estimated table size. Listing 7-1 shows a simple example.
Listing 7-1: Using the DBMS_SPACE Package to Estimate Space Requirements


SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2   l_used_bytes NUMBER;
  3   l_allocated_bytes NUMBER;
  4   BEGIN
  5   DBMS_SPACE.CREATE_TABLE_COST (
  6   tablespace_name   => 'PERSON_D',
  7   avg_row_size      => 120,
  8   row_count         => 1000000,
  9   pct_free          => 10,
 10   used_bytes        => l_used_bytes,
 11   alloc_bytes       => l_allocated_bytes);
 12   DBMS_OUTPUT.PUT_LINE ('used = ' || l_used_bytes || ' bytes'
 13   ||   'allocated = ' || l_allocated_bytes || ' bytes');
 14*END;
SQL> /
used = 138854400 bytes   allocated = 167772160 bytes
PL/SQL procedure successfully completed.
SQL>

Note that the DBMS_SPACE package also contains the UNUSED_SPACE procedure, which tells you how much allocated but unused space (actually unused extents) remains in tables, indexes, and other objects. You perform the actual deallocation with an ALTER TABLE statement; here's the syntax for deallocating unused space from a table:


SQL> ALTER TABLE persons DEALLOCATE UNUSED;
Table altered.
SQL>

Creating a Simple Table

To create a table in your own schema, you must have the CREATE TABLE system privilege; to create a table in another user's schema, you must have the CREATE ANY TABLE system privilege. Always specify a tablespace for the table creation—if you don't, the table will be created in the user's default tablespace. You must have either enough space quota in the tablespace where you are going to create your tables, or you must have the UNLIMITED TABLESPACE system privilege. Listing 7-2 gives the syntax for creating a simple table.
Listing 7-2: Creating a Simple Table


SQL> CREATE TABLE emp (
     empno      NUMBER(5) PRIMARY KEY,
     ename      VARCHAR2(15) NOT NULL,
     ssn        NUMBER(9),
     job        VARCHAR2(10),
     mgr        NUMBER(5),
     hiredate   DATE DEFAULT (SYSDATE),
     sal        NUMBER(7,2),
     comm       NUMBER(7,2),
     deptno     NUMBER(3) NOT NULL
                 CONSTRAINT dept_fkey REFERENCES hr.dept(dept_id))
     TABLESPACE admin_tbs01;

SQL>

    Tip     

If your database consists of large read-only tables, consider using the Oracle table compression feature to save storage space.

In the CREATE TABLE statement in Listing 7-2, there are several integrity constraints, including a primary key and a foreign key defined on various columns of the table. Constraints are discussed in the "Managing Database Integrity Constraints" section, later in this chapter.
    Note     

You can use the ENCRYPT clause to transparently encrypt column data. You can encrypt columns of type CHAR, NCHAR, VARCHAR2, NVARCHAR2, NUMBER, DATE, and RAW. The user who encrypts the column will see the data in its unencrypted format. Encryption involves setting an encryption key and some other details—see the Oracle manual titled Oracle Advanced Security Administrator's Guide, accessible through http://tahiti.oracle.com, for additional information on encryption.

Here's how you would encrypt the ssn column in the previous table creation statement:


ssn         NUMBER(9) ENCRYPT

Once you create a new table, you can populate the table with data in several ways: you can use an INSERT command to insert data or use SQL*Loader (see Chapter 14) to load data. Or, you may decide to create a new table and have data come from an existing table in the same or a different database. This uses the well-known CREATE TABLE AS SELECT (CTAS) technique, which I explain shortly, in the "Creating a New Table with the CTAS Option" section. You can also use the SQL MERGE command to insert data from another table based on specific conditions. The use of the MERGE command is explained in the Appendix.
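As a preview, here's a minimal sketch of the first two approaches, using the emp table created in Listing 7-2 (the sample values are hypothetical):

```sql
-- Direct insert into the emp table created earlier
INSERT INTO emp (empno, ename, deptno)
  VALUES (1, 'ALAPATI', 10);

-- CTAS: create and populate a new table from an existing table's data
CREATE TABLE emp_archive
  TABLESPACE admin_tbs01
  AS SELECT * FROM emp WHERE deptno = 10;
```

The CTAS variant creates the new table's columns from the SELECT list, so you don't specify column definitions yourself.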
    Note     

If you are creating your database objects in a locally managed tablespace, you don't have to set storage parameters for any objects you create in that tablespace.
What Is a Null Value?

A null value means you simply leave a column blank in a row. A null value for a column in a certain row doesn't indicate a zero value for that column. Rather, the null indicates that there is no value for that column in that row. If you have missing, inapplicable, or unknown data for a column, you use a null to indicate that. You can leave a column as a null value only if you don't specify the NOT NULL constraint for that column. In addition, when you designate a column as the primary key for the table, that column won't allow null values. Because of the way Oracle stores null values, try to place columns that will often be null toward the end of the table, to conserve disk space. Any comparison between a null and another value can't be true or false, since the null signifies an unknown value.
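Because a comparison with a null is never true or false, you must test for nulls with the IS NULL and IS NOT NULL operators, as this sketch against the emp table from later in the chapter shows:

```sql
-- WHERE comm = NULL would return no rows; use IS NULL instead
SELECT empno, ename FROM emp WHERE comm IS NULL;
SELECT empno, ename FROM emp WHERE comm IS NOT NULL;
```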
Default Values for Columns

You can assign default values to columns. When you insert a new row, you can omit a value for any column that has a default value; the database will automatically supply the default value for that column. If you don't explicitly define a default value for a column, the column value defaults to null. For example, if you set 20 as the default value of the DEPT_NO column in the employees table, Oracle will insert 20 for the DEPT_NO column whenever a new row is inserted without a value for that column.
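Here's a sketch of defining and exercising a default value; the table is hypothetical:

```sql
CREATE TABLE employees_demo (
  emp_id   NUMBER(5),
  dept_no  NUMBER(3) DEFAULT 20);

-- No value supplied for dept_no, so Oracle stores the default, 20
INSERT INTO employees_demo (emp_id) VALUES (100);
```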
Virtual Columns

In an Oracle Database 11g release database, you can use virtual columns in a table. A virtual column is a column that is derived by evaluating an expression based on one or more of the actual columns of the table or by evaluating a SQL or PL/SQL function. Unlike normal columns, the virtual column data is not stored permanently on disk. The database computes the virtual column values when you query it, by dynamically evaluating an expression on other columns or by computing a function.

You can use virtual columns in both DDL and DML statements. You can define indexes on them and collect statistics on them as well.
Creating a Virtual Column

You use the clause GENERATED ALWAYS AS when you specify a virtual column as part of a CREATE TABLE statement, as the following example illustrates:


SQL>  CREATE TABLE emp (
  2   empno      NUMBER(5) PRIMARY KEY,
  3   ename      VARCHAR2(15) NOT NULL,
  4   ssn        NUMBER(9),
  5   sal        NUMBER(7,2),
  6*  hrly_rate  NUMBER(7,2) generated always as (sal/2080));

Table created.
SQL>

The last line in the previous example shows the specification of the virtual column. If you want, you can also specify the optional keyword VIRTUAL, as shown here:


SQL> CREATE TABLE emp3
  2  (sal number (7,2),
  3  hrly_rate number (7,2) generated always as (sal/2080)
  4  VIRTUAL
  5* CONSTRAINT HourlyRate CHECK (hrly_rate > 8.00));

Table created.
SQL>

In both of the examples shown here, hrly_rate is a virtual column generated by evaluating the expression sal/2080 for each row. You can also add a virtual column to an existing table by executing the ALTER TABLE statement, as shown in this example:


SQL> ALTER TABLE employees ADD (income AS (salary*commission_pct));
Table altered.
SQL>

Since I didn't specify a data type for the virtual column income, Oracle will automatically assign a default data type based on the data types of the two columns (salary and commission_pct) from which the database computes the virtual column.
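
To illustrate how a virtual column behaves (a sketch using the emp table created earlier; the inserted values are made up), you supply values only for the real columns, and the database computes hrly_rate on the fly when you query it:


SQL> INSERT INTO emp (empno, ename, ssn, sal)
     VALUES (1234, 'Smith', 123456789, 52000);

1 row created.

SQL> SELECT ename, hrly_rate FROM emp WHERE empno = 1234;

ENAME            HRLY_RATE
--------------- ----------
Smith                   25

SQL>

The hrly_rate value of 25 is simply 52000/2080, evaluated at query time; no value for hrly_rate is ever stored on disk.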
Limitations of Virtual Columns

Some limitations exist on the use of virtual columns in a table, which I summarize here:

    You can't create virtual columns on an index-organized table, external table, temporary table, object, or a cluster.

    You can't create a virtual column as a user-defined type, large object (LOB), or RAW.

    All columns in the column expression must belong to the same table.

    The column expression must result in a scalar value.

    The column expression in the AS clause can't refer to another virtual column.

    You can't update a virtual column by using it in the SET clause of an update statement.

    You can't perform a delete or insert operation on a virtual column.

Adding a Column to a Table

Adding a column to a table is a very straightforward operation. You simply use the ALTER TABLE command to add a column to a table, as shown here:


SQL> ALTER TABLE emp
     ADD (retired char(1));

Table altered.
SQL>

Dropping a Column from a Table

You can drop an existing column from a table by using the following command:


SQL> ALTER TABLE emp
     DROP (retired);

Table altered.
SQL>

If the table from which you're dropping the column contains a large amount of data, you can ask Oracle to merely mark the column as unused, without trying to remove the data at all. You won't see the column in any queries or views, and all dependent objects, such as constraints and indexes, defined on the column are removed. For all practical purposes, you can "drop" a large column this way very quickly.

Here's an example that marks as unused the hiredate and mgr columns in the emp table:


SQL> ALTER TABLE emp SET UNUSED (hiredate, mgr);

During a maintenance window, you can then permanently drop the two columns by using the following command:


SQL> ALTER TABLE emp DROP UNUSED COLUMNS;

If you think that the large number of rows in a table could potentially exhaust the undo space, you can drop a column with the optional CHECKPOINT clause. This will reduce the generation of undo data while dropping the column by applying checkpoints after a certain number of rows. Here's an example that makes the database apply a checkpoint each time it removes 10,000 rows in the emp table:


SQL> ALTER TABLE emp DROP UNUSED COLUMNS CHECKPOINT 10000;

Renaming a Table Column

You can easily rename table columns using the rename column command. For example, the following command will rename the retired column in the emp table to non_active. Note that you can also rename the column constraints, if you wish.


SQL> ALTER TABLE emp
     RENAME COLUMN retired TO non_active;
Table altered.
SQL>

    Tip     

You can rename tempfiles, as well as datafiles and the redo log files, using the ALTER DATABASE command.
Renaming a Table

On occasion, an application developer may want to rename a table. Renaming a table is straightforward:


SQL> ALTER TABLE emp
     RENAME TO employees;
Table altered.
SQL>

Removing All Data from a Table

To remove all the rows from a table, you can use the TRUNCATE command, which, contrary to its name, doesn't abbreviate or shorten anything—it summarily removes all the rows very quickly. TRUNCATE is a DDL command, so it can't be undone by using the ROLLBACK command.

You can also remove all the rows in a table with the DELETE FROM table_name command, and because this is a DML command, you can roll back the deletion if you desire. However, because the DELETE command writes all changes to the undo segments, it takes a much longer time to execute. The TRUNCATE command doesn't have to bother with the undo segments, so it executes in a few seconds, even for the largest tables.

Here's an example of the TRUNCATE command in action:


SQL> SELECT COUNT(*) FROM test;

  COUNT(*)
-----------
    31

SQL> TRUNCATE TABLE test;

Table truncated.

SQL> SELECT COUNT(*) FROM test;

COUNT(*)
------------
     0
SQL>

Creating a New Table with the CTAS Option

To create a new table that is identical to an existing table, or to create a new table that includes only some rows and columns from another table, you can use the CREATE TABLE . . . AS SELECT (CTAS) command. With this command, you can load a portion of an existing table into a new table by using WHERE conditions, or you can load all the data of the old table into the newly created table by simply using the SELECT * FROM clause, as shown in the following code snippet:


SQL> CREATE TABLE emp_new
     AS
     SELECT * FROM emp;

Table created.
SQL>
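
To copy only a subset of the rows instead, add a WHERE clause to the subquery. Here's a sketch (the dept_no column and its value are assumed for illustration):


SQL> CREATE TABLE emp_dept20
     AS
     SELECT * FROM emp
     WHERE dept_no = 20;

Table created.
SQL>

The new table inherits the column definitions of emp but contains only the rows that satisfy the WHERE condition.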

If the table has millions of rows, and your time is too limited to use the simple CTAS method, there are a couple of ways to speed up the creation of new tables that contain large amounts of data. If the table you're creating is empty, you don't need to be concerned with the speed with which it's created—it's created immediately. But if you're loading the new table from an existing table, you can benefit from using the PARALLEL and NOLOGGING options, which speed up the loading of large tables.

The PARALLEL option enables you to do your data loading in parallel by several processes, and the NOLOGGING option instructs Oracle not to bother logging the changes to the redo log files and rollback segments (except the very minimum necessary for housekeeping purposes). Here's an example:


SQL> CREATE TABLE employee_new
   2 PARALLEL (DEGREE 4)
   3 NOLOGGING
   4*AS SELECT * FROM employees;

Table created.
SQL>

The other method you can use to save time during table creation is to simply move a table from one tablespace to another. You can take advantage of the moving operation to change any storage parameters you wish. Here's an example of the ALTER TABLE . . . MOVE command, which enables you to move tables between tablespaces rapidly. In this example, the employee table is moved from its present tablespace to a new tablespace:


SQL> ALTER TABLE employee MOVE TABLESPACE new_tablespace;

When you move a table, the ROWIDs of the rows change, thus making the indexes on the table unusable. You must either re-create the indexes or rebuild them after you move the table.
Placing a Table in Read-Only Mode

You can make any table in an Oracle database a read-only table, which means that the database will not permit you to add, remove, or alter the data in any way. For example, if you have a configuration table that you want to keep safe from any changes by any users, you can change the status of the table to read-only.

Use the ALTER TABLE statement to place a table in the read-only mode. Here's an example:


SQL> ALTER TABLE test READ ONLY;

Once you place a table in read-only mode, the database won't permit the following operations on that table:

    TRUNCATE TABLE

    SELECT FOR UPDATE

    Any DML operations

    ALTER TABLE ADD/MODIFY/RENAME/DROP COLUMN

    ALTER TABLE SET COLUMN UNUSED

    ALTER TABLE DROP/TRUNCATE/EXCHANGE (SUB)PARTITION

    ALTER TABLE UPGRADE INCLUDING DATA or ALTER TYPE CASCADE INCLUDING TABLE DATA for a type with read-only table dependents

    Online redefinition

    FLASHBACK TABLE

You can perform the following operations on a read-only table:

    SELECT

    CREATE/ALTER/DROP INDEX

    ALTER TABLE ADD/MODIFY/DROP/ENABLE/DISABLE CONSTRAINT

    ALTER TABLE for physical property changes

    ALTER TABLE MOVE

    RENAME TABLE and ALTER TABLE RENAME TO

    DROP TABLE

You can return a table to the normal read-write status by specifying the READ WRITE clause in the ALTER TABLE statement, as shown here:


SQL> ALTER TABLE test READ WRITE;

Table Compression

You can compress data in a table to save disk space and reduce memory usage, besides speeding up query performance. There is some overhead associated with table compression when you're loading data or performing DML operations, however. You can use table compression for both data warehousing applications, which involve primarily read-only operations, as well as OLTP systems, which include heavy DML operations. You can perform insert, delete, and update operations on compressed tables. To reduce your overhead, try to compress data that is mostly used for read-only operations and that is infrequently updated. Historical data and archival data are good candidates for table compression.

When the database compresses a table, it eliminates all duplicate values in a data block. The database stores all duplicate values in the table at the beginning of the block, in a symbol table for the block. The database replaces multiple occurrences of the duplicate data with a short reference to the symbol table. You can perform all operations that you can perform on a normal table on a compressed table. You can compress tables and materialized views. You can also compress only some of the partitions of a partitioned table.

Oracle offers tablespace compression as well. If you compress a tablespace, all tables you create in that tablespace are automatically compressed.

Once you define a table as a compressed table, the compression will occur when the data is undergoing any of the following operations:

    Load through direct-path SQL*Loader operations

    Load through a CTAS statement

    Parallel insert statements

    Serial insert statements with an append hint

    Single-row or array inserts and updates

The biggest hit on performance due to compression of data occurs during the insertion of data using any of the methods shown here. Updating data in a compressed table is also slower. Deleting compressed data, however, is just as fast as deleting uncompressed data.

In addition to creating a table with compression enabled, you can also compress existing data in a table by using the ALTER TABLE . . . MOVE statement. When you compress the data in this way, the database needs to acquire an exclusive lock on the table, which prevents all updates and inserts until the compression operation completes. Alternatively, you can use the DBMS_REDEFINITION package to perform an online redefinition of the table to avoid the locking of the table.
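
Here's a sketch of compressing the existing data in a table this way (sales_archive is a hypothetical table):


SQL> ALTER TABLE sales_archive MOVE COMPRESS;

Table altered.
SQL>

Remember that this statement locks the table exclusively for the duration of the move, so schedule it during a quiet period or use the DBMS_REDEFINITION package instead.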
Enabling Compression

You can enable compression by specifying the COMPRESS clause either in a CREATE TABLE statement or in an ALTER TABLE . . . COMPRESS statement. If you're altering a table, only new data will be compressed. Thus, the table can have both compressed and uncompressed data in it at the same time. You can disable table compression for a table by using the ALTER TABLE . . . NOCOMPRESS statement. Disabling compression doesn't uncompress the already compressed data in the table—it merely ensures that new data is stored uncompressed.
    Note     

You can use table compression in both an OLTP as well as a data warehousing environment. You can get the best results by compressing all read-only or historical data, which rarely changes.

There are a couple of variants of the COMPRESS clause: you must use the COMPRESS FOR ALL OPERATIONS clause to enable compression for all operations. In order to enable it only for direct-path inserts (bulk insert operations), specify the COMPRESS FOR DIRECT_LOAD OPERATIONS clause. The clause COMPRESS by itself is equivalent to the COMPRESS FOR DIRECT_LOAD OPERATIONS clause.

Note that if you enable compression only for direct-path inserts, you can't drop any columns later on. You can add columns only if you don't specify default values for the columns. These restrictions don't apply when you enable compression for all operations on a table.
Examples of Table Compression

The following example shows how to enable compression for all operations on a table, which is what you'd want to do in an OLTP setting:


SQL> CREATE TABLE test (
     name varchar2(20),
     address varchar2(50))
     COMPRESS FOR ALL OPERATIONS;

You can use either of the following statements to enable compression for direct-path inserts only on a table:


SQL> CREATE TABLE test (
     name varchar2(20),
     address varchar2(50))
     COMPRESS;

SQL> CREATE TABLE test (
     name varchar2(20),
     address varchar2(50))
     COMPRESS FOR DIRECT_LOAD OPERATIONS;

As you can see from the examples, the COMPRESS FOR ALL OPERATIONS clause is what you must use for compressing OLTP tables. You can use the following query to find out which tables are compressed in your database:


SQL> SELECT table_name, compression, compress_for
     FROM dba_tables;

TABLE_NAME          COMPRESS    COMPRESS_FOR
----------------    --------    -------------------
EMP                 ENABLED     DIRECT LOAD ONLY
DEPT                ENABLED     FOR ALL OPERATIONS
SQL>

The COMPRESS_FOR column shows the type of table compression (all operations or direct load only).
Dropping Tables

You can drop a table by using the DROP TABLE table_name command. In order to be able to drop a table, you must own the table (it must be in your schema), or you must have the DROP ANY TABLE privilege.

When you use the DROP TABLE command, however, the table doesn't go away immediately—Oracle simply renames the table and stores it in the Recycle Bin, which is in reality simply a data dictionary table. Thus, you can bring back a table you dropped accidentally by using the following command:


SQL> FLASHBACK TABLE emp TO BEFORE DROP;

The ability to bring back a dropped table is known as the Flashback Drop feature. Chapter 16 explains this feature in detail and provides information about managing the Recycle Bin.

If you are sure that you'll never need the table, you can get rid of it permanently by using the PURGE option with your DROP TABLE command, as shown here:


SQL> DROP TABLE emp PURGE;

When you use the preceding PURGE command, the emp table is dropped immediately, and you can't get it back! Again, you'll see a lot more about this command in Chapter 16.
    Note     

The DROP TABLE table_name PURGE command is equivalent to the old DROP TABLE table_name command in pre-Oracle Database 10g release databases.

When you drop a table, all indexes you had defined on the table will be dropped as well. If the table you want to drop contains any primary or unique keys referenced by foreign keys of other tables, you must include the CASCADE clause in the DROP TABLE statement, in order to drop those constraints as well:


SQL> DROP TABLE emp CASCADE CONSTRAINTS;

Special Oracle Tables


The simple tables you saw in the previous sections satisfy most of the data needs of an application, but these aren't the only kind of tables Oracle allows you to create. You can create several kinds of specialized tables, such as temporary tables, external tables, and index-organized tables. In the following sections, you'll examine these important types of tables.
Temporary Tables

Oracle allows you to create temporary tables to hold data just for the duration of a session or even a transaction. After the session or the transaction ends, the table is truncated (the rows are automatically removed). Temporary tables are handy when you are dealing with complex queries or transactions that require transitory row information to be stored briefly before it is written to a permanent table.

The data in temporary tables cannot be backed up like that in other permanent tables. No data or index segments are automatically allotted to temporary tables or indexes upon their creation, as is the case for permanent tables and indexes. Space is allocated in temporary segments for the temporary tables only after the first INSERT command is used for the tables.

Temporary tables increase the performance of transactions that involve complex queries. One of the traditional responses to complex queries is to use a view to make the complex queries simpler to handle, but the view needs to execute each time you access it, thereby negating its benefits in many cases. Temporary tables are an excellent solution for cases like this, because they can be created as the product of complex SELECT statements used for the particular session or transaction, and they are automatically purged of data after the session.
    Note     

Although Oracle doesn't analyze the temporary table data to gather the data distribution, that's not a problem for efficient query processing, because the temporary tables can keep constantly accessed join and other information in one handy location. You can repeatedly access this table rather than having to repeatedly execute complex queries.

Temporary tables are created in the user's temporary tablespace and are assigned temporary segments only after the first INSERT statement is issued for the temporary table. They are deallocated after the completion of the transaction or the end of the session, depending on how the temporary tables were defined.

Here are some attractive features of temporary tables from the Oracle DBA's point of view:

    Temporary tables drastically reduce the amount of redo activity generated by transactions. Redo logs don't fill up as quickly if temporary tables are used extensively during complex transactions.

    Temporary tables can be indexed to improve performance.

    Sessions can update, insert, and delete data in temporary tables just as in normal permanent tables.

    The data is automatically removed from the temporary table after a session or a transaction.

    You can define table constraints on temporary tables.

    Different users can access the same temporary table, with each user seeing only his or her session data.

    Temporary tables provide efficient data access because complex queries need not be executed repeatedly.

    The minimal amount of locking of temporary tables means more efficient query processing.

    The structure of the table persists after the data is removed, so future use is facilitated.

Creating a Session Temporary Table

Here is an example of a temporary table that lasts for an entire session. You use the ON COMMIT PRESERVE ROWS option to ensure that the data remains in the table for the duration of the session.


SQL> CREATE GLOBAL TEMPORARY TABLE flight_status(
     destination VARCHAR2(30),
     startdate DATE,
     return_date DATE,
     ticket_price NUMBER)
     ON COMMIT PRESERVE ROWS;

The ON COMMIT PRESERVE ROWS option in the preceding example indicates that the table data is saved for the entire session, not just for the length of the transaction.
Creating a Transaction Temporary Table

Unlike session temporary tables, transaction temporary tables are specific to a single transaction. As soon as the transaction is committed or rolled back, the data is deleted from the temporary table. Here's how you create a transaction temporary table:


SQL> CREATE GLOBAL TEMPORARY TABLE sales_info
     (customer_name VARCHAR2(30),
     transaction_no NUMBER,
     transaction_date DATE)
     ON COMMIT DELETE ROWS;

The ON COMMIT DELETE ROWS option makes it clear that the data in this table should be retained only for the duration of the transaction that uses this temporary table.
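
A quick sketch of that behavior (the inserted values are made up): the rows vanish as soon as the transaction commits.


SQL> INSERT INTO sales_info
     VALUES ('Nina', 3001, SYSDATE);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT COUNT(*) FROM sales_info;

  COUNT(*)
----------
         0
SQL>

The table structure itself persists after the commit; only the rows are removed.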
Index-Organized Tables

Index-organized tables (IOTs) are somewhat of a hybrid, because they possess features of both indexes and tables. IOTs are tables in which the data is stored in a B-tree index structure, sorted by primary key. They are unlike regular (heap-organized) tables, which do not order their data, and unlike regular indexes, which consist only of the indexed columns: IOTs store both the key and the non-key columns in the B-tree index structure, sorted by the primary key.

When you update an IOT, it is the index structure that really gets updated. Data access is much faster because you only have to perform one I/O to access the index/table. There is no need to access the index and the real table separately, as is the case with traditional indexed tables. The actual row data, and not merely the ROWID, is held in the index leaf block along with the indexed column value. IOTs are especially well suited for cases where you need to issue queries based on the values of the primary key. IOTs are convenient for very large databases (VLDBs) and OLTP applications. You can reorganize IOTs without rebuilding the indexes separately, which means that the reorganization time is less than it would be if you used regular heap-based tables. The major differences between normal tables and IOTs are shown in Table 7-1.
Table 7-1: Differences Between Regular Oracle Tables and Index-Organized Tables

Regular Oracle Tables                Index-Organized Tables
-------------------------------      ----------------------------------
Physical ROWIDs                      Logical ROWIDs
Uniquely identified by ROWID         Uniquely identified by primary key
Can contain LONG and LOB data        Can't contain LONG data
Allowed in table clusters            Not allowed in table clusters
Larger space requirements            Smaller space requirements
Slower data access                   Faster data access


Listing 7-3 shows how to create an IOT.
Listing 7-3: Creating an Index-Organized Table


SQL> CREATE TABLE employee_new(
     employee_id NUMBER,
     dept_id NUMBER,
     name VARCHAR2(30),
     address VARCHAR2(120),
     CONSTRAINT pk_employee_new PRIMARY KEY (employee_id))
     ORGANIZATION INDEX TABLESPACE empindex_01
     PCTTHRESHOLD 25
     OVERFLOW TABLESPACE overflow_tables;

A few keywords in the previous CREATE TABLE statement are worth reviewing carefully. The key phrase ORGANIZATION INDEX indicates that this table is an IOT rather than a regular heap-organized table. The PCTTHRESHOLD keyword indicates the percentage of space reserved in the index blocks for the employee_new IOT. Any part of a row in the table that does not fit the 25 percent threshold value in each data block is saved in an overflow area. The CREATE TABLE statement assigns the overflow_tables tablespace to hold the overflow of data from the index blocks. You can set the threshold value to accommodate both the key columns as well as the frequently accessed first few non-key columns. You can find out which rows exceed the threshold value by executing the ANALYZE TABLE . . . LIST CHAINED ROWS statement.

You can employ the optional INCLUDING clause to specify the non-key columns you want stored with the key columns. As long as the database doesn't exceed the threshold specified by you, it tries to accommodate all non-key columns up to and including the column you specify with the INCLUDING clause in the index leaf block. The database stores the rest of the non-key columns in the overflow segment. The PCTTHRESHOLD clause will override the INCLUDING clause if there is a conflict between the values you specify for the two clauses. Listing 7-4 shows how to employ the INCLUDING clause.
Listing 7-4: Using the INCLUDING Clause When Creating an Index-Organized Table


SQL> CREATE TABLE employee_new(
     employee_id NUMBER,
     dept_id NUMBER,

     name VARCHAR2(30),
     address VARCHAR2(120),
     CONSTRAINT pk_employee_new PRIMARY KEY (employee_id))
     ORGANIZATION INDEX TABLESPACE empindex_01
     PCTTHRESHOLD 25
     INCLUDING name
     OVERFLOW TABLESPACE overflow_tables;

In Listing 7-4, the INCLUDING clause instructs the database to include the dept_id and name columns (both of which are non-key columns) in the index leaf block, along with the key column (employee_id) values, of course.

Remember that index entries in IOTs can be large because they contain not just a key value, but all the row values. So IOTs do not necessarily have all of their data stored in the index blocks. It is quite possible for the key and part of the row to be saved in the index blocks and for the rest to be in some other tablespace. If the PCTTHRESHOLD parameter is too low, there is a risk of a chaining problem in which parts of the row reside in different data blocks, leading to a slowdown of your queries.
External Tables

Databases in general, and data warehouses in particular, need to regularly extract data from various sources and transform it into a more useful form. For example, a data warehouse may collect data from the OLTP data sources and transform it according to some business rules to make it useful for management.

Traditionally, the way to load a data warehouse has been to first load staging tables with the raw data. Sometimes the data would be transformed outside of the database and loaded directly in one pass to the warehouse tables. Either method is usually very cumbersome, even when you use state-of-the-art extraction and transformation tools or custom scripts.

Oracle allows the use of external tables—that is, tables that use data that resides in external operating system files. External tables don't need any storage in terms of extents in the Oracle database—the definition of an external table merely makes an entry in the data dictionary, which enables you to load data into other Oracle database tables from the external tables. If you drop an external table in Oracle, you'll only be removing its definition from the data dictionary—the data itself remains safe in the external source files.

External tables are commonly used as intermediate staging tables during data transformations. External tables enable you to view externally stored data as if it were inside a table in the Oracle database. You can perform queries and joins on external tables, but you can't update, insert, or delete from these tables; no DML operations are permissible on external tables.
    Note     

Chapter 14 provides a detailed example of using external tables and discusses them in more depth.


Partitioned Tables


Oracle databases can be quite large, and it's not uncommon to encounter tables that hold several gigabytes (or even several terabytes) worth of data. Partitioning is a way of logically dividing a large table into smaller chunks to facilitate query processing, DML operations, and database management. All the partitions share the same logical definition, column definitions, and constraints.

Improvements in query response times can be startling when you partition a multibillion-row table into hundreds or thousands of partitions. In some busy environments, new partitions may be created as often as every hour! Partitioning leads directly to better query performance because the database needs to search only the relevant partitions of the table during a query. This avoidance of unneeded partitions during querying is called partition pruning. In addition, the availability of one partition is independent of the availability of the other partitions.

Data I/O can also be enhanced by using partitions because you can keep the partitions of a heavily accessed table on different disk drives. If you are using the Oracle parallel DML features, partitioned tables provide you with better performance.

Partitioning a table also provides partition independence, meaning, among other things, that you can perform your backup and recovery operations, data loading, and index creation on partitions of a large table instead of the whole table. For example, you can copy a single partition's data using the Data Pump Export utility, reducing export and import times dramatically when you only need part of the entire data set. The ability to perform tasks on partitions instead of entire tables means that your database downtime will be reduced drastically.
    Note     

Although partitioned tables generally improve query performance in very large tables, they aren't a panacea for poor coding or other design problems in the application. Partitioning also carries a price in terms of additional work to maintain the partitions and their indexes. Of course, you also have to pay for the partitioning capability, as it's an option that you have to license separately from Oracle Corporation.

Partitioning tables is also an effective way of purging or archiving older data that you don't need right now. It is very common for large data warehouses to archive data that is older than a certain date, and partitioned tables make archiving easy. For example, each quarter you can drop the oldest partition and replace it with a new partition. The partitioned table in this case will end up having roughly the same amount of data, and it will cover the same length of time (a quarterly collection of company data for three years will always have 12 partitions in the table). In addition, large table exports can be performed more quickly when you partition the table into smaller chunks and export each partition separately.

Oracle offers six different ways to partition your table data: range partitioning, interval partitioning, hash partitioning, list partitioning, reference partitioning, and system partitioning. In addition, you can use composite partitioning (combine two partitioning methods to divide the data into smaller subpartitions) strategies, which takes the actual number of partitioning methods to over a dozen types. No matter which partitioning method you use, you must specify the following information when creating a partitioned table:

    Partitioning method: This is one of the six types of partitioning.

    Partitioning column (or columns): This is the column or columns on the basis of which you want to partition the table (for example, transaction_date). The range or set of values of the partitioning columns are called the partitioning keys.

    The partition descriptions: These descriptions specify the criteria for the inclusion of the actual partitioning keys in each partition. You use a partition bound for range partitioning and use the clause VALUES LESS THAN to limit the partitioning key values in each partition. In list partitioning, you specify a list of literal values that tell Oracle what partitioning key values qualify for inclusion in a partition.

The following sections discuss the different types of partitioning and show how to partition a table.
Range Partitioning

Range partitioning is a popular way to partition Oracle tables, and it was the first type of partitioning introduced by Oracle. Range partitioning is used for data that can be separated into ranges based on some criterion. You get the best results from range partitioning if the data falls evenly into the different ranges that you create. Your ranges can be based on a sequence number or a part number, but the range-partitioning technique is usually based on time (monthly or quarterly data, for example).

Let's say you need to create a table to hold three years of quarterly sales data for a major airline. This could easily add up to several hundreds of millions of transactions. If you partition the sales table by a range of quarters and decide to hold no more than three years' worth of data at any given time, you could have 12 partitions in the table, partitioned by quarters. Each time you enter a new quarter, you can archive the oldest quarter's data, thus keeping the number of partitions constant. By partitioning the huge table, which might have a total of 480 million rows, for example, any queries you run would only have to deal with one-twelfth of the table—that is, about 40 million rows—which makes a big difference. Partitioning thus provides you with a divide-and-conquer technique for dealing efficiently with massive amounts of table data.

Listing 7-5 shows the DDL for creating a range-partitioned table, with each year's worth of data divided into four partitions. With each new quarter, you can add another partition. Thus, you'll end up with 12 partitions over a three-year period.
Listing 7-5: Creating a Range-Partitioned Table


SQL> CREATE TABLE sales_data
  2  (ticket_no NUMBER,
  3  sale_year  INT NOT NULL,
  4  sale_month INT NOT NULL,
  5  sale_day   INT NOT NULL)
  6  PARTITION BY RANGE (sale_year, sale_month, sale_day)
  7  (PARTITION sales_q1 VALUES LESS THAN (2008, 04, 01)
  8  TABLESPACE ts1,
  9  PARTITION sales_q2 VALUES LESS THAN (2008, 07, 01)
 10  TABLESPACE ts2,
 11  PARTITION sales_q3 VALUES LESS THAN (2008, 10, 01)
 12  TABLESPACE ts3,
 13  PARTITION sales_q4 VALUES LESS THAN (2009, 01, 01)
 14* TABLESPACE ts4);

Table created.
SQL>

The CREATE TABLE statement in Listing 7-5 will create four partitions, each stored in a separate tablespace. Notice how the partitions are based on date ranges. The first partition, sales_q1, will include all transactions that took place in the first three months (one quarter) of 2008. The second partition, sales_q2, will include transactions that occurred between April and June of 2008 (months 4, 5, and 6 of the year), and so on.

It is common in range-partitioned tables to use a catchall partition as the very last one. In that case, you define the last partition with the keyword MAXVALUE, which represents a value higher than any value in the second-to-last partition. Note that each partition has a specific name and is stored in a separate tablespace.
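For example, such a catchall partition could be added to the sales_data table like this (a sketch; the partition and tablespace names are made up):

```sql
-- A catchall partition at the top end of the range-partitioned sales_data
-- table; MAXVALUE (one per partitioning column) catches any keys beyond
-- the last defined boundary.
ALTER TABLE sales_data
  ADD PARTITION sales_max VALUES LESS THAN (MAXVALUE, MAXVALUE, MAXVALUE)
  TABLESPACE ts5;
```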

In the partitioned sales_data table, the sales data for June 10, 2008 (sale_year=2008, sale_month=6, and sale_day=10) has a partitioning key of (2008, 6, 10) and would be stored in partition sales_q2. When a query requests data for June 10, 2008, Oracle zooms in on partition sales_q2 and completely ignores the rest of the table data.
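A query along the following lines (illustrative only) lets the optimizer prune the scan down to the sales_q2 partition:

```sql
-- The WHERE clause constrains the partitioning key to a date in the
-- second quarter of 2008, so only partition sales_q2 is scanned.
SELECT COUNT(*)
FROM   sales_data
WHERE  sale_year = 2008 AND sale_month = 6 AND sale_day = 10;
```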
Interval Partitioning

Interval partitioning is an extension of the traditional range-partitioning method. In order to implement interval partitioning for a table, you must first specify a minimum of one range partition for that table. Whether you use the minimum single-range partition or multiple-range partitions, the high value of the range partitioning key is called the transition point. The database automatically creates interval partitions after the data in the table crosses the transition point.

If, for example, you use monthly intervals for a table, and the highest value for the range partitions is January 1, 2009, then the transition point is January 1, 2009. The first monthly interval would then be January 2009, and its lower boundary would be January 1, 2009. Similarly, the lower boundary for the December 2009 interval would be December 1, 2009, regardless of whether the November 2009 partition already exists.

Here's what you need to know about interval partitioning:

    Use the INTERVAL clause in the CREATE TABLE statement to create an interval-partitioned table.

    Specify at least one range partition using the partition clause, before specifying your interval partitions.

    You can't use a partitioning key that includes more than one column.

    The partitioning key must be of the NUMBER or DATE type.

    You can optionally specify the tablespaces for the partition data by including the STORE IN clause in the CREATE TABLE statement.

In the following example, I create an interval-partitioned table with four range partitions, identified by p0, p1, p2, and p3. The four range partitions are created on the time_id column, with the transition point being the highest value of the range partitions, which is January 1, 2009 (in partition p3). Once the time_id column's value crosses January 1, 2009, the database will automatically spawn interval-based partitions, all with a width of one month.


CREATE TABLE interval_sales
    ( prod_id        NUMBER(6)
    , cust_id        NUMBER
    , time_id        DATE
    , channel_id     CHAR(1)
    , promo_id       NUMBER(6)
    , quantity_sold  NUMBER(3)
    , amount_sold    NUMBER(10,2)
    )
  PARTITION BY RANGE (time_id)
  INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
    ( PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2006', 'DD-MM-YYYY')),
      PARTITION p1 VALUES LESS THAN (TO_DATE('1-1-2007', 'DD-MM-YYYY')),
      PARTITION p2 VALUES LESS THAN (TO_DATE('1-7-2008', 'DD-MM-YYYY')),
      PARTITION p3 VALUES LESS THAN (TO_DATE('1-1-2009', 'DD-MM-YYYY')) );

There is no limit on the number of interval partitions in a table. In our example here, the database will continue to create a new interval partition for each new month.
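To watch the automatic partition creation at work, you could insert a row beyond the transition point and then check the data dictionary (a sketch; the system-generated partition name on your system will differ):

```sql
-- A time_id past the transition point triggers the creation of a new
-- monthly interval partition.
INSERT INTO interval_sales (prod_id, cust_id, time_id)
VALUES (1, 100, TO_DATE('15-MAR-2009','DD-MON-YYYY'));

-- The new partition appears with a system-generated name such as SYS_P81.
SELECT partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name = 'INTERVAL_SALES';
```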
Hash Partitioning

Suppose the transaction data in the previous example were not evenly distributed among the quarters. What if, due to business and cyclical reasons, an overwhelming number of sales occurred in the last two quarters, with the earlier quarters contributing relatively negligible sales? Range partitioning would then be good only in theory, because the last two quarters could each end up holding almost half of the original nonpartitioned table's data.

In such cases, it's better to use the hash-partitioning scheme. All you have to do is decide on the number of partitions, and Oracle's hashing algorithms will assign a hash value to each row's partitioning key and place it in the appropriate partition. You don't have to know anything about the distribution of the data in the table, other than that the data doesn't fall into some easily determined ranges. All you need to do is provide a partition key, which in the hash-partitioning scheme shown next is the ticket_no column:


SQL> CREATE TABLE sales_data
  2  (ticket_no NUMBER,
  3  sale_year  INT NOT NULL,
  4  sale_month INT NOT NULL,

  5  sale_day   INT NOT NULL)
  6  PARTITION BY HASH (ticket_no)
  7  PARTITIONS 4
  8* STORE IN (ts1,ts2,ts3,ts4);
Table created.
SQL>

In the preceding example, four hash partitions are created in four tablespaces. We won't know in which partition the data for, say, June 10, 2008, is stored. Oracle determines the storage based on a hashing algorithm, and you have no control whatsoever over the row-to-partition mapping.
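Although you can't control the row-to-partition mapping, you can still see the system-generated hash partition names and address a single partition directly (illustrative only; the generated names on your system will differ):

```sql
-- Hash partitions receive system-generated names such as SYS_P101.
SELECT partition_name, tablespace_name
FROM   user_tab_partitions
WHERE  table_name = 'SALES_DATA';

-- Partition-extended syntax reads just one named partition.
SELECT COUNT(*) FROM sales_data PARTITION (SYS_P101);
```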
List Partitioning

There may be times when you'll want to partition the data not on the basis of a time range or evenly distributed hashing scheme, but rather by known values, such as city, territory, or some such attribute. List partitioning is preferable to range or hash partitioning when your data is distributed among a set number of discrete values. For example, you may want to group a company's sales data according to regions rather than quarters. List partitioning enables you to group your data on the same lines as real-world groupings of data, rather than arbitrary ranges of time or some such criterion.

For example, when you're dealing with statewide totals in the United States, you'll be dealing with 50 different sets of data. It makes more sense in this situation to partition your data into four or five regions, rather than use the range method to partition the data alphabetically. Listing 7-6 shows how to use list partitioning to partition the ticket_sales table. The partitions are made up of groups of flight-originating cities, shown by the start_city column.
Listing 7-6: Creating a List-Partitioned Table


SQL> CREATE TABLE sales_data
  2  (ticket_no NUMBER,
  3  sale_year  INT NOT NULL,
  4  sale_month INT NOT NULL,
  5  sale_day   INT NOT NULL,
  6  destination_city   CHAR(3),
  7  start_city         CHAR(3))
  8  PARTITION BY LIST (start_city)
  9  (PARTITION northeast_sales values ('NYC','BOS','PEN') TABLESPACE ts1,
 10  PARTITION southwest_sales values ('DFW','ORL','HOU') TABLESPACE ts2,
 11  PARTITION pacificwest_sales values('SAN','LOS','WAS') TABLESPACE ts3,
 12* PARTITION  southeast_sales values ('MIA','CHA','ATL')  TABLESPACE ts4);

Table created.
SQL>

In the previous list-partitioning example, the partition description specifies a list of values for the start_city column, and the table creation statement created four list partitions. Only cities that appear in one of these lists will be included in a partition. A ticket with the values (9999, 2008, 06, 01, 'DFW', 'HOU') will be stored in the southwest_sales partition, because its start_city, HOU, appears in that partition's list.
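An insert whose start_city appears in none of the lists fails with an ORA-14400 error; you can guard against this with a DEFAULT list partition (a sketch; the partition and tablespace names are made up):

```sql
-- A catchall partition for start_city values not named in any list.
ALTER TABLE sales_data
  ADD PARTITION other_sales VALUES (DEFAULT)
  TABLESPACE ts5;
```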
Reference Partitioning

If two tables are related to one another, you can take advantage of this relationship by partitioning the two tables on the basis of the existing parent-child relationship. The relationship is enforced by primary key and foreign key constraints. If two tables share a parent-child relationship, you only need to formally partition the parent table. Once you do this, you can equipartition the child table, which inherits the partitioning key from the parent table. You thus avoid duplicating key columns. Any partition maintenance operations on the parent table will automatically cascade to the child table as well.

A simple example will make reference partitioning clear. The tables orders and order_items are related to each other on the basis of the order_id column in the two tables. This relationship is captured by the referential constraint order_items_fk. The parent table, orders, is partitioned on the order_date column using a range-partitioning scheme, as shown here:


CREATE TABLE orders
    ( order_id           NUMBER(12),
      order_date         DATE,
      order_mode         VARCHAR2(8),
      customer_id        NUMBER(6),
      order_status       NUMBER(2),
      order_total        NUMBER(8,2),
      sales_rep_id       NUMBER(6),
      promotion_id       NUMBER(6),
      CONSTRAINT orders_pk PRIMARY KEY(order_id)
    )
  PARTITION BY RANGE(order_date)
    ( PARTITION Q1_2005 VALUES LESS THAN (TO_DATE('01-APR-2005','DD-MON-YYYY')),
      PARTITION Q2_2005 VALUES LESS THAN (TO_DATE('01-JUL-2005','DD-MON-YYYY')),
      PARTITION Q3_2005 VALUES LESS THAN (TO_DATE('01-OCT-2005','DD-MON-YYYY')),
      PARTITION Q4_2005 VALUES LESS THAN (TO_DATE('01-JAN-2006','DD-MON-YYYY'))
    );

Since there is a parent-child relationship between the orders and order_items tables, you use reference partitioning on the constraint order_items_fk to create the partitioned order_items table. The clause FOREIGN KEY (order_id) REFERENCES orders(order_id) shows that the order_items table is created with the reference-partitioning scheme. The order_items table is equipartitioned with reference to the parent table orders.


CREATE TABLE order_items
    ( order_id           NUMBER(12) NOT NULL,
      line_item_id       NUMBER(3)  NOT NULL,
      product_id         NUMBER(6)  NOT NULL,
      unit_price         NUMBER(8,2),
      quantity           NUMBER(8),
      CONSTRAINT order_items_fk
      FOREIGN KEY(order_id) REFERENCES orders(order_id)
    )
PARTITION BY REFERENCE(order_items_fk);

You can use all partitioning strategies with reference partitioning, with the exception of interval partitioning.

When creating a reference-partitioned table, the partition by reference clause in the CREATE TABLE statement specifies the name for the reference constraint that is the basis for the reference partition. You must ensure that this referential constraint is both enabled and enforced.

In the example, the child table order_items has four partitions: Q1_2005, Q2_2005, Q3_2005, and Q4_2005. Each of these four partitions contains the order_items rows that correspond to the orders in the matching partition of the parent table.

Note the following features about a reference-partitioned table:

    The reference-partitioning example here doesn't use any partition descriptors. If you do provide partition descriptors, their number must match the number of partitions or subpartitions in the parent table. That is, the child table will have one partition for each partition or subpartition of the parent table.

    You can name the partitions of a reference-partitioned table.

    If you don't name the partitions, the partitions will derive their names from the corresponding partitions of the parent table.

    You can specify an explicit tablespace for the partitions of a reference-partitioned table.

    If you don't specify a tablespace, the partitions of a reference-partitioned table are stored along with the corresponding partition of the parent table, in the same tablespace.

    Note: You can't specify partition bounds for the partitions of a reference-partitioned table.
Virtual Column-Based Partitioning

Earlier in this chapter, you learned how to create and use virtual columns in an Oracle database. You can use one or more columns of a table to create a virtual column on that table. You can partition a table on a virtual column. What this means is that you can partition a table on a partition key that doesn't actually exist in the table. Your partitioning key is defined by the same expression that the database uses for the virtual column.

You can use all basic partitioning strategies, including the different combinations of composite partitioning, with virtual column-based partitioning of a table.

In the following example, I partition the sales table using a virtual column for the subpartitioning key. The virtual column total_amount is defined as the product of the amount_sold and quantity_sold columns.


CREATE TABLE sales
  ( prod_id       NUMBER(6) NOT NULL
  , cust_id       NUMBER NOT NULL
  , time_id       DATE NOT NULL
  , channel_id    CHAR(1) NOT NULL
  , promo_id      NUMBER(6) NOT NULL
  , quantity_sold NUMBER(3) NOT NULL
  , amount_sold   NUMBER(10,2) NOT NULL
  , total_amount AS (quantity_sold * amount_sold)
  )
 PARTITION BY RANGE (time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
 SUBPARTITION BY RANGE(total_amount)
 SUBPARTITION TEMPLATE
   ( SUBPARTITION p_small VALUES LESS THAN (1000)
   , SUBPARTITION p_medium VALUES LESS THAN (5000)
   , SUBPARTITION p_large VALUES LESS THAN (10000)
   , SUBPARTITION p_extreme VALUES LESS THAN (MAXVALUE)
   )
 (PARTITION sales_before_2007 VALUES LESS THAN
        (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
)
ENABLE ROW MOVEMENT
PARALLEL NOLOGGING;

Notice that the ENABLE ROW MOVEMENT clause ensures that a row can migrate from its current partition to a different partition if the virtual column's value evaluates to a value that doesn't belong in the current partition any longer.
System Partitioning

System partitioning is a unique partitioning method in which the application, and not the database, controls the placement of the data. The database merely lets you break a table up into partitions, without any idea of what each partition will contain; the application decides what goes into the individual partitions. You must explicitly specify the target partition when inserting data into a system-partitioned table; if you don't, the insert fails.

The biggest advantage in using system partitioning is that you can create and maintain tables that are equipartitioned with respect to a base table.
Creating a System-Partitioned Table

The following example shows how to create a system-partitioned table:


CREATE TABLE test (c1 integer, c2 integer)
PARTITION BY SYSTEM
(
  PARTITION p1 TABLESPACE tbs_1,
  PARTITION p2 TABLESPACE tbs_2,
  PARTITION p3 TABLESPACE tbs_3,
  PARTITION p4 TABLESPACE tbs_4
);

The clause PARTITION BY SYSTEM, of course, specifies that the table uses system partitioning.
Inserting Data

When you're inserting data using an INSERT or MERGE statement, you must specify the partition into which you want the new row to be placed. Here's an example of an insertion into a system-partitioned table:


SQL> INSERT INTO test PARTITION (p1) VALUES (4,5);

The example specifies the partition p1 for inserting the new data.
Deleting and Updating Data

Unlike the insert operation, deleting and updating data in a system-partitioned table doesn't require you to use a partition-aware syntax, by specifying the partition name in the DELETE or UPDATE command. However, Oracle recommends that you specify the partition, so the database can use partition pruning and avoid scanning the entire table for the data.
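For example, both forms of the following DELETE are legal against the system-partitioned test table, but the partition-aware version confines the scan to p1 (a sketch):

```sql
-- Partition-aware: Oracle scans only partition p1.
DELETE FROM test PARTITION (p1) WHERE c1 = 4;

-- Also legal, but forces a scan of every partition in the table.
DELETE FROM test WHERE c1 = 4;
```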
Limitations

System partitioning doesn't support the CREATE TABLE ... AS SELECT and INSERT INTO ... SELECT statements. The reason in both cases is that system partitioning doesn't use a partitioning method, so there's no mapping between rows and partitions.
Composite Partitioning

Sometimes, merely partitioning on range, hash, or list schemes may not be enough. You can further break down a large table into subpartitions for more control over data placement and performance. Oracle offers several types of composite partitioning. For example, under the range-hash-partitioning method, you first partition the table using range partitioning and then subpartition each of those partitions using a hash scheme. In a range-list-partitioning scheme, you first partition the table using range partitioning and then subpartition those partitions using list partitioning. Similarly, you can use the range-range, list-list, list-hash, and list-range composite partitioning methods.
Range-Hash Partitioning

Sometimes you may partition a table range-wise, but the distribution may not be very equal. You can make this a better partitioning scheme by hash partitioning after the range partitioning is done. This will allow you to store the data more efficiently, although it becomes more complex to manage.

Range-hash partitioning combines the best of the range- and hash-partitioning schemes. Range partitioning, as you've already seen, is easy to implement, and hash partitioning provides you benefits such as striping and parallelism.

Listing 7-7 shows a simple example of how to create a range-hash-partitioned table.
Listing 7-7: Creating a Range-Hash-Partitioned Table


SQL> CREATE TABLE scout_gear (equipno NUMBER,equipname VARCHAR(32),price NUMBER)
  2  PARTITION BY RANGE (equipno) SUBPARTITION BY HASH(equipname)
  3  SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
  4  (PARTITION p1 VALUES LESS THAN (1000),
  5  PARTITION p2 VALUES LESS THAN (2000),
  6  PARTITION p3 VALUES LESS THAN (3000),
  7* PARTITION p4 VALUES LESS THAN (MAXVALUE));
Table created.
SQL>

In this example, the scout_gear table is first partitioned by range on the equipno column—four range-based partitions are created. These four partitions are then subpartitioned on the equipname column using a hash-partitioning scheme, resulting in 32 subpartitions altogether. Note the SUBPARTITIONS clause in line 3.
Range-List Partitioning

In the range-list-partitioning method, you first partition the data based on a range of values. You then use list partitioning to break up the first set of partitions, using a list of discrete values. Listing 7-8 shows an example of how to create a range-list-partitioned table.
Listing 7-8: Creating a Range-List-Partitioned Table


SQL> CREATE TABLE quarterly_regional_sales
  2  (ticket_no NUMBER,
  3  sale_year INT NOT NULL,
  4  sale_month INT NOT NULL,
  5  sale_day DATE,
  6  destination_city   CHAR(3),
  7  start_city         CHAR(3))
  8  PARTITION BY RANGE(sale_day)
  9  SUBPARTITION BY LIST (start_city)
 10  (PARTITION q1_2004 VALUES LESS THAN (TO_DATE('1-APR-2004','DD-MON-YYYY'))
 11  TABLESPACE t1
 12  (SUBPARTITION  q12004_northeast_sales VALUES  ('NYC','BOS','PEN'),
 13  SUBPARTITION    q12004_southwest_sales VALUES   ('DFW','ORL','HOU'),
 14  SUBPARTITION    q12004_pacificwest_sales VALUES ('SAN','LOS','WAS'),
 15  SUBPARTITION    q12004_southeast_sales VALUES   ('MIA','CHA','ATL')
 16  ),
 17  PARTITION q2_2004 VALUES LESS THAN (TO_DATE('1-JUL-2004','DD-MON-YYYY'))

 18  TABLESPACE t2
 19  (SUBPARTITION  q22004_northeast_sales  VALUES  ('NYC','BOS','PEN'),
 20  SUBPARTITION    q22004_southwest_sales VALUES   ('DFW','ORL','HOU'),
 21  SUBPARTITION    q22004_pacificwest_sales VALUES ('SAN','LOS','WAS'),
 22  SUBPARTITION    q22004_southeast_sales VALUES   ('MIA','CHA','ATL')
 23  ),
 24  PARTITION q3_2004 VALUES LESS THAN (TO_DATE('1-OCT-2004','DD-MON-YYYY'))
 25  TABLESPACE t3
 26  (SUBPARTITION  q32004_northeast_sales  VALUES  ('NYC','BOS','PEN'),
 27  SUBPARTITION    q32004_southwest_sales VALUES   ('DFW','ORL','HOU'),
 28  SUBPARTITION    q32004_pacificwest_sales VALUES ('SAN','LOS','WAS'),
 29  SUBPARTITION    q32004_southeast_sales VALUES   ('MIA','CHA','ATL')
 30  ),
 31  PARTITION q4_2004 VALUES LESS THAN (TO_DATE('1-JAN-2005','DD-MON-YYYY'))
 32  TABLESPACE t4
 33  (SUBPARTITION  q42004_northeast_sales  VALUES  ('NYC','BOS','PEN'),
 34  SUBPARTITION    q42004_southwest_sales VALUES  ('DFW','ORL','HOU'),
 35  SUBPARTITION    q42004_pacificwest_sales VALUES ('SAN','LOS','WAS'),
 36  SUBPARTITION    q42004_southeast_sales VALUES   ('MIA','CHA','ATL')
 37  )
 38* );
Table created.
SQL>

The preceding statement will create 16 subpartitions in the range-list-partitioned table, with 4 subpartitions in each tablespace (t1, t2, t3, t4). Each time you insert a row of data into the quarterly_regional_sales table, Oracle will first check whether the value of the partitioning column for the row falls within a specific partition range. Oracle will then map the row to a subpartition within that partition, by matching the subpartition column value against the values in each subpartition's list. For example, a row with the column values (9999, 2004, 9, '01-SEP-2004', 'DAL', 'HOU') maps to subpartition q32004_southwest_sales, because its sale_day falls within the q3_2004 range and its start_city, HOU, appears in the southwest list.
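You can also address an individual subpartition directly with the subpartition-extended table syntax, for example:

```sql
-- Reads only the q32004_southwest_sales subpartition of the table.
SELECT COUNT(*)
FROM   quarterly_regional_sales SUBPARTITION (q32004_southwest_sales);
```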
Composite Interval-List-Partitioned Tables

You must use a subpartition template if you want list subpartitions to be created for the future interval partitions of an interval-list-partitioned table. Without such a template, you'll be able to create only a default subpartition for every interval partition.

In the example shown in Listing 7-9, the sales table is first interval-partitioned on the time_id column, with a daily interval. The table is then subpartitioned by list on the channel_id column.
Listing 7-9: Creating an Interval-List-Partitioned Table


CREATE TABLE sales
  ( prod_id       NUMBER(6)
  , cust_id       NUMBER
  , time_id       DATE
  , channel_id    CHAR(1)
  , promo_id      NUMBER(6)
  , quantity_sold NUMBER(3)
  , amount_sold   NUMBER(10,2)
  )
 PARTITION BY RANGE (time_id) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
 SUBPARTITION BY LIST (channel_id)
   SUBPARTITION TEMPLATE
   ( SUBPARTITION p_catalog VALUES ('C')
   , SUBPARTITION p_internet VALUES ('I')
   , SUBPARTITION p_partners VALUES ('P')
   , SUBPARTITION p_direct_sales VALUES ('S')
   , SUBPARTITION p_tele_sales VALUES ('T')
   )
 ( PARTITION before_2000 VALUES LESS THAN (TO_DATE('01-JAN-2000','dd-MON-yyyy')))
PARALLEL;

Composite Interval-Range Partitioning

As in the case of the list subpartitioning, you must use a subpartition template if you want to create range subpartitions for the future interval partitions in an interval-range-partitioned table. Without such a template, you'll manage to create only a range subpartition with the MAXVALUE upper boundary for every interval partition.

The example shown in Listing 7-10 illustrates the creation of an interval-range composite partitioned table. The interval partitions are created using daily intervals on the time_id column, and the range subpartitions are created by partitioning on the amount_sold column.
Listing 7-10: Creating an Interval-Range-Partitioned Table


CREATE TABLE sales
  ( prod_id       NUMBER(6)
  , cust_id       NUMBER
  , time_id       DATE
  , channel_id    CHAR(1)
  , promo_id      NUMBER(6)
  , quantity_sold NUMBER(3)
  , amount_sold   NUMBER(10,2)
  )
 PARTITION BY RANGE (time_id) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
 SUBPARTITION BY RANGE (amount_sold)
   SUBPARTITION TEMPLATE
   ( SUBPARTITION p_low VALUES LESS THAN (1000)
   , SUBPARTITION p_medium VALUES LESS THAN (4000)
   , SUBPARTITION p_high VALUES LESS THAN (8000)
   , SUBPARTITION p_ultimate VALUES LESS THAN (MAXVALUE)
   )
 ( PARTITION before_2000 VALUES LESS THAN (TO_DATE('01-JAN-2000','dd-MON-yyyy')))
PARALLEL;

Partition Maintenance Operations

After you initially create partitioned tables, you can perform a number of maintenance operations on the partitions. For example, you can add and drop partitions to maintain a fixed number of partitions based on a quarterly time period.

In this section, I illustrate the use of these maintenance operations by assuming a range-partitioning scheme. These maintenance operations apply to all types of partitioning schemes, with a few exceptions, like the following:

    Range and list partitions can't be coalesced.

    Hash partitions can't be dropped, split, or merged.

    Only list partitioning allows you to modify partitions by adding and dropping partition values.

Adding Partitions

You can add a new partition to the ticket_sales table to include a new quarter, as follows:


SQL> ALTER TABLE ticket_sales
     ADD PARTITION sales_quarter5 VALUES LESS THAN
     (TO_DATE('1-APR-2005','DD-MON-YYYY'))
     TABLESPACE ticket_sales05;

This example adds a new quarterly partition for the first quarter of the year 2005, which comes after the last quarter in the original table.
Splitting a Partition

The add partition statement will add partitions only to the upper end of the existing table. But what if you need to insert some new data into the middle of a table? What if an existing partition becomes too large, and you would rather have smaller partitions? Splitting a partition takes the data in an existing partition and distributes it between two partitions.

You can use the split partition clause to break up a partition, as shown here:


SQL> ALTER TABLE ticket_sales
     SPLIT PARTITION ticket_sales01 AT (2000) INTO
     (PARTITION ticket_sales01A, ticket_sales01B);

Merging Partitions

You can use the MERGE PARTITIONS command to combine the contents of two adjacent partitions. For example, you can merge the first two partitions of the ticket_sales table in the following way:


SQL> ALTER TABLE ticket_sales
     MERGE PARTITIONS ticket_sales01, ticket_sales02 INTO PARTITION
     ticket_sales02;

Renaming Partitions

You can rename partitions in the same way you rename a table. Here is an example:


SQL> ALTER TABLE ticket_sales
     RENAME PARTITION ticket_sales01 TO quarterly_sales01;

Exchanging Partitions

The EXCHANGE PARTITION command enables you to convert a regular nonpartitioned table into a partition of a partitioned table. Here's an example:


SQL> ALTER TABLE ticket_sales
     EXCHANGE PARTITION ticket_sales02 WITH TABLE ticket_sales03;

A partition exchange doesn't involve the actual movement of data. Oracle renames the source table as a partition and the target partition as the source table. Thus, the database completes the loading process with no data movement.
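A typical loading workflow, sketched below with a made-up staging table name, exploits this: because only data dictionary entries change, the final exchange step is nearly instantaneous regardless of data volume.

```sql
-- 1. Create an empty staging table with the same column structure
--    as the partitioned ticket_sales table.
CREATE TABLE ticket_sales_stage AS
  SELECT * FROM ticket_sales WHERE 1 = 0;

-- 2. Bulk-load the staging table (conventional or direct-path inserts).
-- INSERT /*+ APPEND */ INTO ticket_sales_stage SELECT ... ;

-- 3. Swap the loaded table in as a partition; no rows physically move.
ALTER TABLE ticket_sales
  EXCHANGE PARTITION ticket_sales02 WITH TABLE ticket_sales_stage;
```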
Dropping Partitions

Dropping partitions is easy if you don't have any data in the partitions. Here's an example:


SQL> ALTER TABLE ticket_sales
     DROP PARTITION ticket_sales01;

If you do have data in the partitions that you intend to drop, you need to be careful to use the additional UPDATE GLOBAL INDEXES clause with the preceding drop partition syntax. Otherwise, all globally created indexes will be invalidated. Local indexes will still be okay, because they're mapped directly to the affected partitions only.
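For example, the following variant keeps global indexes usable while dropping a populated partition:

```sql
-- Global indexes are maintained during the drop instead of being
-- marked UNUSABLE.
ALTER TABLE ticket_sales
  DROP PARTITION ticket_sales01
  UPDATE GLOBAL INDEXES;
```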
Coalescing Partitions

Hash-partitioned tables enable you to coalesce their partitions. Coalescing partitions amounts to shrinking the number of partitions: the database selects a specific hash partition, redistributes its data among the remaining partitions based on the hash function, and then drops the selected partition. In a range-hash-partitioned table, you can coalesce subpartitions.

Here's an example of coalescing a hash-partitioned table, which will reduce the number of partitions by one:


SQL> ALTER TABLE ticket_sales
     COALESCE PARTITION;

    Note: I've presented only a bare introduction to the vast and complex topic of Oracle table partitioning. Please refer to the Oracle documentation for a complete discussion of this powerful feature, including restrictions on the numerous partition-maintenance operations.



Data Dictionary Views for Managing Tables


Several data dictionary views can help in managing Oracle tables. The most important one is the DBA_TABLES view—it gives you the owner, the number of rows, the tablespace name, space information, and a number of other details about all the tables in the database. Listing 7-11 shows a sample query.
Listing 7-11: Using the DBA_TABLES Data Dictionary View


SQL> SELECT tablespace_name, table_name, num_rows
     FROM dba_tables
     WHERE owner='HR';

TABLESPACE_NAME      TABLE_NAME          NUM_ROWS
---------------      --------------      ---------
EXAMPLE              DEPARTMENTS            27
EXAMPLE              EMPLOYEES             107
EXAMPLE              JOBS                   19
EXAMPLE              JOB_HISTORY            10
EXAMPLE              LOCATIONS              23
EXAMPLE              REGIONS                 4
6 rows selected.
SQL>

Use the DBA_TAB_PARTITIONS view to find detailed information about partitioned tables. Listing 7-12 shows sample output from this view, summarizing the partitioned tables created earlier in this chapter.
Listing 7-12: Using the DBA_TAB_PARTITIONS Data Dictionary View
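The output below can be produced with a query along these lines (the COLUMN formatting commands are assumptions):

```sql
SQL> COLUMN table_name FORMAT a28
SQL> COLUMN partition_name FORMAT a20
SQL> SELECT table_name, partition_name, subpartition_count
     FROM dba_tab_partitions
     WHERE table_owner = 'HR'
     ORDER BY table_name, partition_name;
```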


TABLE_NAME                   PARTITION_NAME       SUBPARTITION_COUNT
-----------                  -----------------    ------------------
SALES_DATA                   SALES_Q1                    0
SALES_DATA                   SALES_Q2                    0
SALES_DATA                   SALES_Q3                    0
SALES_DATA                   SALES_Q4                    0
SALES_HASH                   SYS_P3161                   0
SALES_HASH                   SYS_P3162                   0
SALES_HASH                   SYS_P3163                   0
SALES_HASH                   SYS_P3164                   0
SALES_LIST                   NORTHEAST_SALES             0
SALES_LIST                   SOUTHWEST_SALES             0
SALES_LIST                   PACIFICWEST_SALES           0
SALES_LIST                   SOUTHEAST_SALES             0
SCOUT_GEAR                   P1                          8
SCOUT_GEAR                   P2                          8
SCOUT_GEAR                   P3                          8
SCOUT_GEAR                   P4                          8
QUARTERLY_REGIONAL_SALES     Q1_2009                     4
QUARTERLY_REGIONAL_SALES     Q2_2009                     4
QUARTERLY_REGIONAL_SALES     Q3_2009                     4
QUARTERLY_REGIONAL_SALES     Q4_2009                     4
20 rows selected.
SQL>

The DBA_TAB_COLUMNS view is another useful data dictionary view that provides a lot of information about table columns. Listing 7-13 shows a simple query using this view.
Listing 7-13: Using the DBA_TAB_COLUMNS Data Dictionary View


SQL> SELECT column_name, data_type,
     nullable
     FROM dba_tab_columns
     WHERE owner='HR'
     AND table_name = 'EMPLOYEES';

COLUMN_NAME           DATA_TYPE           NULLABLE
--------------        ----------          ---------
EMPLOYEE_ID           NUMBER              N
FIRST_NAME            VARCHAR2            Y
LAST_NAME             VARCHAR2            N
EMAIL                 VARCHAR2            N
PHONE_NUMBER          VARCHAR2            Y
HIRE_DATE             DATE                N
JOB_ID                VARCHAR2            N
SALARY                NUMBER              Y
8 rows selected.
SQL>

Of course, you could have obtained this type of information easily by using the DESCRIBE command. Listing 7-14 shows how to use this command.
Listing 7-14: Using the DESCRIBE Command


SQL> DESCRIBE new_employees

 Name                     Null?                Type
 ---------------          -----------          ------------
 EMPLOYEE_ID              NOT NULL             NUMBER(6)
 FIRST_NAME               NOT NULL             VARCHAR2(20)
 LAST_NAME                NOT NULL             VARCHAR2(25)
 HIRE_DATE                NOT NULL             DATE
 JOB_ID                   NOT NULL             VARCHAR2(10)
 SALARY                                        NUMBER(8,2)
 SQL>

Extracting Object DDL Using the DBMS_METADATA Package

Often you'll want to re-create a table, or create a similar table in a different database, and it would be nice to have the DDL for the original table handy. If you're using a third-party tool, such as SQL Navigator from Quest Software, all you have to do is click a few buttons, and your table's DDL statements will be shown on the screen.

But what commands can you use to get the CREATE TABLE statement that created a table? You could get this information from the DBA_TABLES and DBA_TAB_COLUMNS views, but you would have to write lengthy SQL statements to do so. Alternatively, you can use the Oracle-supplied DBMS_METADATA package to quickly get the DDL statements for your tables and indexes.

As an example, let's get the DDL for the EMPLOYEES table using this package. Here is the output of the package execution:


SQL> CONNECT hr/hr
Connected.
SQL> SET LONG 100000
SQL> SELECT dbms_metadata.get_ddl('TABLE','EMPLOYEES') FROM dual;

DBMS_METADATA.GET_DDL('TABLE','EMPLOYEES')
---------------------------------------------------------------------------
CREATE TABLE "HR"."EMPLOYEES"
 ("EMPLOYEE_ID" NUMBER(6,0),
  "FIRST_NAME" VARCHAR2(20),
  "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE,
  "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE,
  "SALARY" NUMBER(8,2),
  "COMMISSION_PCT" NUMBER(2,2),
  "MANAGER_ID" NUMBER(6,0),
  "DEPARTMENT_ID" NUMBER(4,0),
  CONSTRAINT "EMP_SALARY_MIN" CHECK (salary > 0) ENABLE NOVALIDATE,
  CONSTRAINT "EMP_EMAIL_UK" UNIQUE ("EMAIL")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576
MINEXTENTS 1  MAXEXTENTS 2147483645 PCTINCREASE 0
  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE
  "EXAMPLE"  ENABLE,
   CONSTRAINT "EMP_EMP_ID_PK" PRIMARY KEY ("EMPLOYEE_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0
  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE

"EXAMPLE"  ENABLE,
 CONSTRAINT "EMP_DEPT_FK" FOREIGN KEY ("DEPARTMENT_ID")
  REFERENCES "HR"."DEPARTMENTS" ("DEPARTMENT_ID") ENABLE NOVALIDATE,
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0
  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "EXAMPLE"
SQL>

    Tip     

The GET_DDL function in the DBMS_METADATA package returns its DDL text as a LONG value. If you haven't set the LONG variable to a large enough value in your SQL*Plus session (for example, SET LONG 100000), you may not see the entire DDL statement.

This is the most elegant and easiest way to get the DDL for your tables and indexes using SQL*Plus. If you need the DDL statements for your database objects, you should use the DBMS_METADATA package. Of course, you can always use the OEM Database Control to extract all types of DDL for your database objects.
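When you plan to run the extracted DDL in a different database, the storage and tablespace clauses often get in the way. The package lets you suppress them through session-level transform parameters; a minimal sketch (SEGMENT_ATTRIBUTES and SQLTERMINATOR are documented transform parameters of DBMS_METADATA):


SQL> BEGIN
       -- Omit physical attributes (PCTFREE, STORAGE, TABLESPACE, and so on)
       DBMS_METADATA.SET_TRANSFORM_PARAM(
         DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
       -- End each extracted statement with a semicolon
       DBMS_METADATA.SET_TRANSFORM_PARAM(
         DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
     END;
     /
SQL> SELECT dbms_metadata.get_ddl('TABLE','EMPLOYEES') FROM dual;

With these settings, subsequent GET_DDL calls in the same session return portable CREATE statements without the storage clauses shown in the listing above.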
