TODO
To duplicate an existing table you can use CTAS (Create Table As Select)...
CREATE TABLE mytable2 AS SELECT * FROM mytable1;
The SELECT can be pretty much any valid SELECT statement, but any ambiguity will cause problems; i.e. you should explicitly define the columns you are going to populate if the query cannot derive them (e.g. expressions without column aliases).
If you only want a subset of the data, consider the SAMPLE clause...
CREATE TABLE mytable2 AS SELECT * FROM mytable1 SAMPLE(20);
Will create mytable2 based on approximately 20% of the rows in mytable1.
CREATE TABLE mytable2 AS SELECT * FROM mytable1 SAMPLE BLOCK(20);
Will create mytable2 based on approximately 20% of the blocks in mytable1.
The DBMS_SPACE.CREATE_TABLE_COST procedure "is used in capacity planning to determine the size of the table given various attributes."
https://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_space.htm#i1003180
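A sketch of how the DBMS_SPACE.CREATE_TABLE_COST call documented at that link might look (the tablespace name, row size and row count here are illustrative, not from this page):

```sql
-- Estimate space for a table of 10,000 rows averaging 265 bytes, PCTFREE 10
SET SERVEROUTPUT ON
DECLARE
  l_used_bytes  NUMBER;
  l_alloc_bytes NUMBER;
BEGIN
  DBMS_SPACE.CREATE_TABLE_COST(
    tablespace_name => 'USERS',
    avg_row_size    => 265,
    row_count       => 10000,
    pct_free        => 10,
    used_bytes      => l_used_bytes,   -- space needed for the data itself
    alloc_bytes     => l_alloc_bytes); -- space allocated given extent sizing
  DBMS_OUTPUT.PUT_LINE('Used bytes:      '||l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Allocated bytes: '||l_alloc_bytes);
END;
/
```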
For example, to change the size or nullability of a column...
ALTER TABLE myTable MODIFY myColumn VARCHAR2(30) NOT NULL;
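Before adding NOT NULL it can be worth checking for offending rows first (a quick sketch using the example names above):

```sql
-- A non-zero count here means the NOT NULL modification will fail
SELECT COUNT(*) FROM myTable WHERE myColumn IS NULL;
```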
NOTE: If there are NULL values in this column in any existing rows you will get ORA-02296 (cannot enable - null values found).
Also see: MOVE, later on this page.
To DROP a table...
DROP TABLE &TABLE;
To DROP the table and PURGE it from the RECYCLEBIN in a single step...
DROP TABLE &TABLE PURGE;
To DROP the table and all referential integrity constraints that refer to primary and unique keys in the dropped table...
DROP TABLE &TABLE CASCADE CONSTRAINTS;
You could use this to address: ORA-02449: unique/primary keys in table referenced by foreign keys
To "un-drop" a table...
FLASHBACK TABLE &TABLE TO BEFORE DROP;
Note: this is NOT possible if you used the PURGE option for the DROP TABLE or if the table has been otherwise purged from the recycle bin.
A TRUNCATE statement does not generate any Undo information and it commits immediately.
It is a DDL statement and cannot be rolled back.
Constraints and Triggers are retained (but Triggers do not fire during TRUNCATE)
A single Audit record is generated for the TRUNCATE statement being issued (if auditing is turned on)
When a table or cluster is truncated, all associated indexes are also truncated.
TRUNCATE TABLE &TABLE REUSE STORAGE;    -- Table retains all extents (but they are all empty)
TRUNCATE TABLE &TABLE DROP STORAGE;     -- Table retains only the number of extents specified by MINEXTENTS (all empty) (DEFAULT)
TRUNCATE TABLE &TABLE DROP ALL STORAGE; -- Table retains only one extent (which is empty) 11.2.0.2+
SHRINK SPACE (see later on this page) requires row movement to be enabled. To check the current setting...
SELECT row_movement
FROM dba_tables
WHERE owner = '&OWNER'
AND table_name = '&TABLE';
To enable row movement...
ALTER TABLE &TABLE ENABLE ROW MOVEMENT;
To see the table size (not including indexes and LOB segments)...
SELECT SUM(e.blocks) AS blocks,
COUNT(*) AS extents,
SUM(e.blocks*p.value) AS bytes,
SUM(e.blocks*p.value)/1024 AS KB,
SUM(e.blocks*p.value)/1024/1024 AS MB,
SUM(e.blocks*p.value)/1024/1024/1024 AS GB
FROM dba_extents e,
v$parameter p
WHERE e.owner = '&table_owner'
AND e.segment_name = '&table_name'
AND p.name = 'db_block_size';
An alternate approach (not including indexes and LOB segments)...
SET TRIMSPOOL ON
SET LINESIZE 132 PAGESIZE 80
COLUMN owner FORMAT a20
COLUMN segment_name FORMAT a30
COLUMN segment_type FORMAT a10
COLUMN bytes FORMAT 999,999,999,999,999
SELECT owner, segment_name, segment_type, bytes
FROM dba_segments
WHERE segment_type = 'TABLE'
AND owner = '&table_owner'
AND segment_name = '&table_name'
ORDER BY bytes, owner, segment_name, segment_type
/
Including LOB segments and Indexes...
SELECT owner, segment_name, segment_type, bytes
FROM dba_segments
WHERE segment_type = 'TABLE'
AND owner = '&&table_owner'
AND segment_name = '&&table_name'
UNION
SELECT owner, segment_name, segment_type, bytes
FROM dba_segments
WHERE segment_type = 'LOBSEGMENT'
AND owner = '&&table_owner'
AND segment_name IN (SELECT segment_name
FROM dba_lobs
WHERE owner = '&&table_owner'
AND table_name = '&&table_name')
UNION
SELECT owner, segment_name, segment_type, bytes
FROM dba_segments
WHERE segment_type = 'INDEX'
AND owner = '&&table_owner'
AND segment_name IN (SELECT index_name
FROM dba_indexes
WHERE owner = '&&table_owner'
AND table_name = '&&table_name')
ORDER BY bytes, owner, segment_name, segment_type
/
TODO: Include partitions (table and index)
In versions prior to 12.1 the query below will only be accurate if table statistics have just been gathered....
SELECT num_rows, blocks
FROM dba_tables
WHERE table_name = '&TABLE';
The queries below will give you accurate results... but, are you really sure you need accurate results?
https://asktom.oracle.com/ords/f?p=100:11:0::NO::P11_QUESTION_ID:1660875645686
SPOOL /tmp/tmp.sql
SET HEADING OFF
SET PAGESIZE 0
SET FEEDBACK OFF
SELECT 'SELECT '''||table_name||''' AS TABLE_NAME, COUNT(*) AS ROW_COUNT FROM '||owner||'."'||table_name||'";'
FROM dba_tables
WHERE owner = UPPER('&table_owner')
/
SPOOL OFF
COLUMN TABLE_NAME FORMAT a30
@/tmp/tmp.sql
Save this to a script (e.g. /tmp/my.sql) and run it. Cut & Paste direct to SQL*Plus won't work.
This script assumes you are a DBA needing to query from another schema. If you are logged in to the owning schema, make your life easier and use USER_TABLES.
ALTER SESSION SET CURRENT_SCHEMA=&table_owner;
SET PAGESIZE 20
COLUMN TABLE_NAME FORMAT a30
WITH
FUNCTION rcount(tname IN VARCHAR2) RETURN INT IS
c INT;
BEGIN
EXECUTE IMMEDIATE
'SELECT COUNT(*) FROM "'||tname||'"'
INTO c;
RETURN c;
END;
SELECT table_name, rcount(table_name) AS ROW_COUNT
FROM dba_tables
WHERE owner = UPPER('&&table_owner')
/
Save this to a script (e.g. /tmp/my.sql) and run it. Cut & Paste direct to SQL*Plus won't work. Note that PL/SQL functions in the WITH clause require Oracle 12.1 or later.
This script assumes you are a DBA needing to query from another schema. If you are logged in to the owning schema, make your life easier and use USER_TABLES.
@/u01/dba/blkspc.sql
@/u01/dba/blkunused.sql
SELECT COUNT(*)
FROM ( SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
FROM &mytable
GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID));
SELECT SUM(BLOCKS)
FROM dba_segments
WHERE owner = '&myschema'
AND segment_name = '&mytable';
To recover space and amend the high water mark...
ALTER TABLE &TABLE SHRINK SPACE;
To recover space for the table and all dependant objects...
ALTER TABLE &TABLE SHRINK SPACE CASCADE;
To recover space then amend the high water mark as a separate step...
ALTER TABLE &TABLE SHRINK SPACE COMPACT;
ALTER TABLE &TABLE SHRINK SPACE;
ALTER TABLE &TABLE MOVE;                                   -- Rebuild in same tablespace
ALTER TABLE &TABLE MOVE TABLESPACE &TSPACE;                -- Move to new tablespace
ALTER TABLE &TABLE MOVE TABLESPACE &TSPACE UPDATE INDEXES; -- Move to new tablespace keeping indexes valid
ALTER TABLE &TABLE MOVE ONLINE;                            -- ONLINE introduced in 12.2 (UPDATE INDEXES not required)
A move marks indexes UNUSABLE; unless you used UPDATE INDEXES (or MOVE ONLINE), you will need to rebuild them to make them usable again.
It is recommended to always re-gather statistics at the end of the move.
Note that a Move operation will lead to two copies of the table being in existence for the duration of the move. Be aware of the space implications of this.
ALTER TABLE &TABLE MOVE ONLINE TABLESPACE &TSPACE;
ALTER TABLE &TABLE MOVE ONLINE TABLESPACE &TSPACE COMPRESS;
ALTER TABLE &TABLE MOVE ONLINE TABLESPACE &TSPACE NOCOMPRESS;
ALTER TABLE &TABLE MOVE ONLINE STORAGE (PCTINCREASE 0);
Assuming a block overhead of 8%, and that table stats are current, we can estimate how many blocks the table should theoretically use vs what it is actually using...
SELECT blocks,
num_rows*avg_row_len/8192*100/(100-pct_free)*1.08 est_blocks
FROM user_tables
WHERE table_name = '&TABLE';
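As a worked example of the estimate formula (illustrative numbers, assuming an 8K block size): 10,000 rows averaging 265 bytes with PCTFREE 10 should need roughly 388 blocks...

```sql
-- Hypothetical stats: num_rows=10000, avg_row_len=265, pct_free=10, 8K blocks
SELECT ROUND(10000*265/8192*100/(100-10)*1.08) AS est_blocks
FROM dual;
-- returns 388
```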
CREATE TABLE &mytab AS
SELECT level AS id,
DBMS_RANDOM.string('x',10) AS code,
DBMS_RANDOM.string('a',50) AS short_description,
DBMS_RANDOM.string('a',200) AS long_description
FROM dual
CONNECT BY level <= 10000;
Generates a table called mytab with 10,000 rows, an auto-generated unique numeric id, a random 10 character uppercase code, a random mixed case short description and a random mixed case long description.
Potential issues with this dataset....
All values are fixed length rather than variable length
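One way to address that (a sketch; it varies the string lengths with DBMS_RANDOM.value rather than matching any script used later on this page):

```sql
-- Variant of the generator above with variable-length descriptions
CREATE TABLE &mytab AS
SELECT level AS id,
       DBMS_RANDOM.string('x',10) AS code,
       DBMS_RANDOM.string('a',TRUNC(DBMS_RANDOM.value(1,51))) AS short_description,
       DBMS_RANDOM.string('a',TRUNC(DBMS_RANDOM.value(1,201))) AS long_description
FROM dual
CONNECT BY level <= 10000;
```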
CREATE USER &&user IDENTIFIED BY "&password"
DEFAULT TABLESPACE &&tspace
TEMPORARY TABLESPACE &temp
QUOTA UNLIMITED ON &&tspace
PROFILE &profile;
GRANT resource TO &&user;
Create an empty schema for this test.
CREATE TABLE &mytab AS
SELECT level AS id,
DBMS_RANDOM.string('x',10) AS code,
DBMS_RANDOM.string('a',50) AS short_description,
DBMS_RANDOM.string('a',200) AS long_description
FROM dual
CONNECT BY level <= 10000;
Run this 3 times to create:
MYTAB1, MYTAB2 and MYTAB3.
SELECT table_name,
num_rows,
avg_row_len,
blocks,
pct_free,
ROUND(num_rows*avg_row_len/8192*100/(100-pct_free)*1.08) AS est_blocks
FROM user_tables;
Let's see how many blocks our tables are using and let's compare that to what we estimate it should be using (est_blocks).
Let's save this script as:
/tmp/blkchk.sql
Our estimate calculation is close enough for our purposes (it is within 10 blocks of the number of blocks reported in USER_TABLES).
Note that the number of blocks reported in USER_TABLES will differ from other sources of what, at first glance, would seem to be the same information. Consider this alternate way of counting blocks...
SELECT COUNT(1)
FROM( SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)
FROM mytab1
GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID));
To further muddy the waters, USER_SEGMENTS returns a different figure...
SELECT blocks
FROM user_segments
WHERE segment_name = 'MYTAB1'
AND segment_type = 'TABLE';
Let's dig into that a bit further...
SELECT COUNT(extent_id) AS Extents,
blocks AS Blocks_per_Extent
FROM user_extents
WHERE segment_name = 'MYTAB1'
AND segment_type='TABLE'
GROUP BY blocks;
Let's reverse engineer one of our tables and create a new copy of it without any data...
SET LONG 20000
SET PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL(object_type,object_name)
FROM user_objects
WHERE object_name = '&TABLE';
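By default the generated DDL comes back without a statement terminator; if you want output you can run directly, you can set a transform parameter first (a sketch using the documented DBMS_METADATA API):

```sql
-- Ask DBMS_METADATA to append a SQL terminator to generated DDL
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
```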
Let's create some new scripts to make our experiment easier to run.
This script will provide us with a breakdown of what the number of blocks looks like from the various different sources of this information just for our new table (mytab4).
Let's call this script...
/tmp/blkchk4.sql
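The page doesn't reproduce the script itself, so here is a sketch of what /tmp/blkchk4.sql could contain, based on the block-counting queries used earlier (adjust to taste):

```sql
-- /tmp/blkchk4.sql (a sketch)
-- Blocks according to the optimizer statistics
SELECT num_rows, blocks, empty_blocks
FROM user_tables
WHERE table_name = 'MYTAB4';
-- Blocks actually containing rows
SELECT COUNT(*) AS used_blocks
FROM (SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
      FROM mytab4
      GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID));
-- Blocks allocated to the segment
SELECT blocks AS segment_blocks
FROM user_segments
WHERE segment_name = 'MYTAB4'
AND segment_type = 'TABLE';
```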
This script will prompt for a number of rows to insert, insert them, gather stats and provide us with a breakdown of what the number of blocks looks like from the various different sources of this information.
Let's call this script...
/tmp/addrow4.sql
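Again, the script isn't reproduced here, so this is a sketch of what /tmp/addrow4.sql could look like, reusing the row generator from earlier (adjust to taste):

```sql
-- /tmp/addrow4.sql (a sketch)
ACCEPT num_rows NUMBER PROMPT 'Rows to insert: '
INSERT INTO mytab4
SELECT NVL((SELECT MAX(id) FROM mytab4),0) + level,
       DBMS_RANDOM.string('x',10),
       DBMS_RANDOM.string('a',50),
       DBMS_RANDOM.string('a',200)
FROM dual
CONNECT BY level <= &num_rows;
COMMIT;
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB4');
@/tmp/blkchk4
```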
@/tmp/blkchk4
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB4');
@/tmp/blkchk4
The existence of the initial extent is not guaranteed. It exists for us because the "CREATE TABLE" statement we used earlier contained a "SEGMENT CREATION IMMEDIATE" clause. Let's recreate the table using "SEGMENT CREATION DEFERRED" instead...
DROP TABLE MYTAB4;
CREATE TABLE "MYTAB4"
( "ID" NUMBER,
"CODE" VARCHAR2(4000),
"SHORT_DESCRIPTION" VARCHAR2(4000),
"LONG_DESCRIPTION" VARCHAR2(4000)
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS";
@/tmp/blkchk4
We have a couple of ways to materialize a deferred segment. We can add a row, or we can do this...
Note that you need to run this as a privileged user.
BEGIN
DBMS_SPACE_ADMIN.materialize_deferred_segments (
schema_name => 'MYSCHEMA',
table_name => 'MYTAB4',
partition_name => NULL);
END;
/
@/tmp/blkchk4
We can also "dematerialize" tables with no rows...
Note that you need to run this as a privileged user.
BEGIN
DBMS_SPACE_ADMIN.drop_empty_segments (
schema_name => 'MYSCHEMA',
table_name => 'MYTAB4',
partition_name => NULL);
END;
/
@/tmp/blkchk4
@/tmp/addrow4
@/u01/dba/blkspc.sql
SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(mytab4.rowid) block_id, rowid
FROM mytab4;
COLUMN owner FORMAT a10
COLUMN segment_name FORMAT a15
COLUMN tablespace_name FORMAT a20
SELECT owner,
segment_name,
tablespace_name,
extent_id,
file_id,
block_id AS start_block,
blocks,
block_id+blocks-1 AS end_block
FROM dba_extents
WHERE segment_name = 'MYTAB4';
ALTER SESSION SET TRACEFILE_IDENTIFIER='BLOCK_DUMP';
ALTER SYSTEM DUMP DATAFILE 4 BLOCK MIN 2560521 BLOCK MAX 2560528;
SELECT * FROM mytab4;
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
ls | grep BLOCK_DUMP
So, the 5 blocks shown in USER_TABLES reflects the number of "trans data" blocks in the initial 8 block extent.
We can get a breakdown of block types from our tracefile...
grep ^frmt tracefile.trc
@/tmp/addrow4
@/tmp/extchk.sql
ALTER SESSION SET TRACEFILE_IDENTIFIER='BLOCK_DUMP';
ALTER SYSTEM DUMP DATAFILE 4 BLOCK MIN 2560521 BLOCK MAX 2560536;
grep ^frmt tracefile.trc
@/u01/dba/blkspc.sql
@/tmp/addrow4
@/u01/dba/blkspc.sql
@/tmp/extchk.sql
ALTER SESSION SET TRACEFILE_IDENTIFIER='BLOCK_DUMP';
ALTER SYSTEM DUMP DATAFILE 4 BLOCK MIN 2560521 BLOCK MAX 2560544;
grep ^frmt tracefile.trc
@/tmp/addrow4
@/tmp/addrow4
@/tmp/addrow4
There seems to be a pattern: 8 blocks per extent, and every other extent gets an additional "first level bitmap block". But, as we know from our original tables, after 16 extents Oracle seems to increase the extent size. Let's reproduce that. We should fill 16 extents at 3068 rows: 26 rows per block, with a pattern of 5,8,7,8,7,8,7,8,7,8,7,8,7,8,7,8 usable blocks across 16 extents.... ((8*8)+(7*7)+5)*26... (64+49+5)*26... 118*26 = 3068
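The arithmetic can be sanity-checked directly in SQL*Plus:

```sql
-- 8 extents of 8 usable blocks, 7 of 7, plus the initial 5, at 26 rows per block
SELECT ((8*8)+(7*7)+5)*26 AS rows_to_fill_16_extents FROM dual;
-- returns 3068
```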
@/tmp/addrow4
@/tmp/addrow4
@/tmp/extchk.sql
@/tmp/addrow4
@/tmp/extchk.sql
ALTER SYSTEM CHECKPOINT;
ALTER SESSION SET TRACEFILE_IDENTIFIER='BLOCK_DUMP';
ALTER SYSTEM DUMP DATAFILE 4 BLOCK MIN 2560521 BLOCK MAX 2561032;
grep ^buffer tracefile.trc | sort | uniq | wc -l
512
grep ^frmt tracefile.trc | grep "trans data" | wc -l
496
@/tmp/blkchk.sql
Don't forget to delete the tracefiles before continuing.
ls *BLOCK_DUMP*
rm *BLOCK_DUMP*
DELETE FROM mytab1 WHERE MOD(id, 2) = 0;
DELETE FROM mytab2 WHERE id < 5001;
DELETE FROM mytab3 WHERE id > 5000;
COMMIT;
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB1');
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB2');
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB3');
Let's delete 50% of the rows from each table, but in different ways....
The first option should leave all the blocks 50% utilised.
The second option should leave empty blocks at the start of the table.
The third option should leave empty blocks at the end of the table.
We should gather stats when we're done so that the figures in USER_TABLES are updated.
SELECT table_name,
num_rows,
avg_row_len,
blocks,
pct_free,
empty_blocks,
ROUND(num_rows*avg_row_len/8192*100/(100-pct_free)*1.08) AS est_blocks
FROM user_tables;
Let's see how many blocks our tables are now using.
The actual number of blocks remains the same but our estimate approximately halves.
INSERT INTO &mytab
SELECT level+5000 AS id,
DBMS_RANDOM.string('x',10) AS code,
DBMS_RANDOM.string('a',50) AS short_description,
DBMS_RANDOM.string('a',200) AS long_description
FROM dual
CONNECT BY level <= 5000;
COMMIT;
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB1');
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB2');
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB3');
Let's add 5000 more rows to each table.
SELECT table_name,
num_rows,
avg_row_len,
blocks,
pct_free,
ROUND(num_rows*avg_row_len/8192*100/(100-pct_free)*1.08) AS est_blocks
FROM user_tables;
Let's see how many blocks our tables are now using.
This shows us that blocks get reused; i.e. before reclaiming space, be sure you're not just wasting your time. Only reclaim space when you have deleted data and don't expect to add new data.
DELETE FROM mytab1 WHERE MOD(id, 2) = 0;
DELETE FROM mytab2 WHERE id < 5001;
DELETE FROM mytab3 WHERE id > 5000;
COMMIT;
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB1');
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB2');
EXEC DBMS_STATS.gather_table_stats(NULL,'MYTAB3');
So, let's delete 50% of the rows again...
DROP TABLE MYTAB1;
DROP TABLE MYTAB2;
DROP TABLE MYTAB3;
Cleanup
SELECT COUNT(1)
FROM( SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)
FROM mytab1
GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID));
SELECT SUM(BLOCKS) FROM user_segments WHERE segment_name = 'MYTAB1';
SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS Block, COUNT(1) AS ROWCNT
FROM mytab1
GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID);
SELECT table_name, num_rows, avg_row_len, blocks, pct_free,
ROUND(num_rows*avg_row_len/8192*100/(100-pct_free)*1.08) est_blocks
FROM user_tables;
SELECT table_name, num_rows, avg_row_len, blocks
FROM user_tables;