Saturday 7 June 2014

How to copy schema stats from one schema to another schema


Here we will see how to copy schema statistics from one schema to another. There are basically two methods for doing this:
  • using DBMS_STATS or
  • using Data Pump
This is especially useful when doing large data transfers between schemas or databases and statistics are needed fast. Remember, though, that data density, row chaining and the index clustering factor may have changed due to the data import and index rebuilds, so the copied statistics will not reflect those changes.

Method 1: Using DBMS_STATS
Extract the statistics
First we need to log in as the schema owner from which we want to extract the statistics.
In the following example the schema owner is "SCOTT". Logged in as SCOTT, we create a table to hold the statistics information:
exec dbms_stats.create_stat_table('SCOTT','STATISTICS');
The arguments required are: <SCHEMANAME>, <TABLENAME>.
To extract the statistics run:
begin
  dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'STATISTICS');
end;
/
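
Before transferring, it can be worth a quick sanity check that the stat table was actually populated. The following helper query is my own addition; it relies on the TYPE column of the stat table created by CREATE_STAT_TABLE ('T' = table, 'I' = index, 'C' = column statistics) to count the exported rows per type:
select type, count(*) from scott.statistics group by type;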

Transfer the statistics
Now all statistics from that particular schema are stored in the table named "STATISTICS". The table must be transferred to the destination schema (in the same or another database). This can be done, for instance, with exp/imp or via a database link (see the sketch after the import example below).
For exporting the statistics with export/import:
exp system/manager file=stats_table.dmp log=stats_table.log tables=SCOTT.STATISTICS
Importing is done with:
imp system/manager file=stats_table.dmp log=stats_table_imp.log
 fromuser=<source user name> touser=<destination user name>
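
Alternatively, the stat table can be pulled across a database link. A minimal sketch, assuming a link named SOURCE_DB exists in the destination database and points back to the source:
-- run in the destination database; SOURCE_DB is a hypothetical link name
create table statistics as select * from scott.statistics@source_db;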

Import the statistics
Import into another database but with the same schema name
If the schema name in the destination database is identical to the schema name in the source database (“SCOTT” in source and “SCOTT” in destination) the stats can be imported with:
begin
  dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'STATISTICS');
end;
/

Import into another or the same database but with a different schema name
If the name of the destination schema differs, you have to adjust your statistics table before importing the stats. In the following example we import the stats into a schema called "TEST".
In the stats table the column named "C5" contains the name of the schema the statistics were exported from and will be imported to. If you want to import the stats into another schema you have to modify this column first:
update statistics set c5='TEST';
and afterwards import the statistics (logged in as user TEST or SYS):
begin
  dbms_stats.import_schema_stats(ownname => 'TEST', stattab => 'STATISTICS');
end;
/
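
If you run this import as SYS (or any user other than the owner of the stats table), point DBMS_STATS at the owning schema via the statown parameter. A hedged variant of the call above:
begin
  dbms_stats.import_schema_stats(
    ownname => 'TEST',
    stattab => 'STATISTICS',
    statown => 'TEST');  -- schema that owns the STATISTICS table
end;
/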

Method 2: Using Data Pump
Besides using DBMS_STATS, you can use Data Pump to transfer table and index statistics from one database to another.
When doing a normal export/import, table and index statistics are transferred along with the data. If you want to transfer only the statistics, you can use the following call:

impdp system/manager dumpfile=<directory>:<file> logfile=<directory>:<logfile name> remap_schema=<source>:<destination> remap_tablespace=<source>:<destination> schemas=<name of source schema(s)> include=STATISTICS
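
For example, a hypothetical invocation (the directory object DPUMP_DIR, the dump file scott.dmp and the SCOTT-to-TEST remapping are assumptions for illustration only):

impdp system/manager dumpfile=DPUMP_DIR:scott.dmp logfile=DPUMP_DIR:scott_stats_imp.log schemas=SCOTT remap_schema=SCOTT:TEST include=STATISTICS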



Sunday 25 May 2014

Restoring Optimizer Statistics

 

In some cases we may find that gathering fresh statistics has led the optimizer to choose sub-optimal execution plans, and we would like to restore the statistics as of a date when we had good performance.

Here we can restore table statistics to a previous point in time by using the DBMS_STATS.RESTORE_TABLE_STATS procedure.
First we can check how far back in time we can go to restore statistics:
SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY  from dual;

GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
13-MAR-14 10.51.37.266819000 PM +05:30

We can see what the retention period for the statistics is, and we can also alter it. In this case we are changing the statistics retention from 10 days to 14 days (the default is 31 days).

SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         10

SQL> execute DBMS_STATS.ALTER_STATS_HISTORY_RETENTION (14);

PL/SQL procedure successfully completed.

SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         14

Let us now look at an example of restoring statistics from a point in the past.

SQL> select count(*) from myobjects;

  COUNT(*)
----------
    138353

SQL> select TABLE_NAME, STATS_UPDATE_TIME from dba_tab_stats_history where table_name='MYOBJECTS' and owner='SYSTEM';

TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ ----------------------------------------
MYOBJECTS                      18-MAR-14 10.00.19.496616 PM +05:30
MYOBJECTS                      23-MAR-14 11.29.10.526131 AM +05:30
MYOBJECTS                      23-MAR-14 11.35.47.249720 AM +05:30
MYOBJECTS                      23-MAR-14 11.37.36.050421 AM +05:30
MYOBJECTS                      23-MAR-14 11.38.05.015640 AM +05:30

The table MYOBJECTS has been analyzed several times in the past few days. Let us see what happens if we restore the statistics of the table gathered on the 18th of March.
SQL> execute dbms_stats.restore_table_stats('SYSTEM','MYOBJECTS','18-MAR-14 10.00.19.496616 PM +05:30');
PL/SQL procedure successfully completed.

The table MYOBJECTS actually has 138353 rows, but the statistics were gathered at a time when it had much fewer rows as shown by the NUM_ROWS column of the DBA_TABLES view.

SQL> select num_rows from dba_tables
  2  where table_name='MYOBJECTS' and owner='SYSTEM';

  NUM_ROWS
----------
     46141

We can now replace these 'older' statistics with the most current statistics available in the history.

SQL> execute dbms_stats.restore_table_stats('SYSTEM','MYOBJECTS','23-MAR-14 11.38.05.015640 AM +05:30');

PL/SQL procedure successfully completed.

Now the NUM_ROWS column shows a row count that matches the actual number of rows in the table.
SQL> select num_rows from dba_tables
  2  where table_name='MYOBJECTS' and owner='SYSTEM';

  NUM_ROWS
----------
     138353

It should be noted that we are not limited to restoring statistics for a single table: the analogous procedures DBMS_STATS.RESTORE_SCHEMA_STATS and DBMS_STATS.RESTORE_DATABASE_STATS do the same for a full schema or the entire database.
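
A minimal sketch of the schema-level variant, assuming we want the SYSTEM schema's statistics as they were one day ago:

begin
  dbms_stats.restore_schema_stats(
    ownname         => 'SYSTEM',
    as_of_timestamp => systimestamp - interval '1' day);
end;
/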

REMEMBER: we gather statistics to ensure that the optimizer chooses optimal execution plans, but gathering statistics invalidates the parsed representation of the SQL statements that reference the analyzed objects, and reparsing a statement after the stats gather can make the optimizer choose an execution plan that is different from (and possibly less optimal than) the original plan.


Refresh of Mview is Throwing ORA-00942


Refresh of Mview is throwing ORA-942 after successful creation.

Create materialized view schema_a.client_mview 
tablespace users build immediate
refresh force start with trunc(sysdate)+1455/1440
next trunc(sysdate)+1455/1440
as
select client_id, username, createdate, swver
from client_req@TEST_LINK
where swver is not null;

Materialized View Created.

select db_link, username from dba_db_links where db_link like 'TEST_LINK';

DB_LINK              USERNAME
-------------------- ------------------------------
TEST_LINK.COM        SCHEMA_B

On the TEST_LINK.COM database, the db link user SCHEMA_B has SELECT granted directly on the master table:
select grantee, owner, table_name, privilege from user_tab_privs
where table_name like 'CLIENT%';

GRANTEE      OWNER      TABLE_NAME   PRIVILEGE
------------ ---------- ------------ ----------
SCHEMA_B     SCHEMA_A   CLIENT_REQ   SELECT

Trying to refresh the materialized view returns errors:

SQL> execute dbms_refresh.refresh('"SCHEMA_A"."CLIENT_MVIEW"');
BEGIN dbms_refresh.refresh('"SCHEMA_A"."CLIENT_MVIEW"'); END;

*
ERROR at line 1:
ORA-12018: following error encountered during code generation for "SCHEMA_A"."CLIENT_MVIEW"
ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2255
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2461
ORA-06512: at "SYS.DBMS_IREFRESH", line 683
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 1

Cause
The database link user (SCHEMA_B) does not have SELECT privileges on the MLOG$_CLIENT_REQ log table.
The log table is in the master table owner's schema (SCHEMA_A).
The problem was identified using the trace file.
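
To confirm which log table the refresh depends on, you can query DBA_MVIEW_LOGS on the master site (a helper query of my own, not part of the original diagnosis):

-- run on the TEST_LINK.COM (master) database
select log_owner, master, log_table
from dba_mview_logs
where master = 'CLIENT_REQ';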

Solution
To implement the solution, execute one of the following:
grant select on MLOG$_<Table Name> to <Database link user>;
or
grant select any table to <Database link user>;

Example:
grant select on MLOG$_CLIENT_REQ to SCHEMA_B;
or
grant select any table to SCHEMA_B;
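
With either grant in place on the master site, re-running the refresh from the materialized view site should now complete without ORA-00942:
SQL> execute dbms_refresh.refresh('"SCHEMA_A"."CLIENT_MVIEW"');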



Saturday 10 May 2014

LOG MINING IN ORACLE DATABASE


Suppose data in a table has been deleted. The audit trail was not enabled, so we have no clue which user performed the deletion, and around 30 users have DELETE rights on that table. Management wants to know the name of the user. Here we can use the Oracle LOG MINING feature to filter out the transactions performed during that period.

Note: Make sure archive logging was already enabled; it is a prerequisite for the LOG Mining technique.

CONSIDERATIONS
Database : ORCL
Schema : SCOTT
Table : STUDENTS
Date of Deletion : 10-MAY-2014

1. Gather Archived Logs
Make the archived logs of 10-MAY-2014 available on your local hard disk. In my case they are already available in the Flash Recovery Area.

2. Create Tablespace for Log Miner
By default the LOG MINER tables reside in the SYSAUX tablespace, but it is more efficient to keep them in a separate tablespace for easier manageability and maintenance.

SQLPLUS SYS@ORCL AS SYSDBA

CREATE TABLESPACE TS_LOGMNR DATAFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TS_LOGMNR_01.DBF' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;

Use the following statement to move all of the LOG MINER objects from the SYSAUX to the TS_LOGMNR tablespace:
EXEC DBMS_LOGMNR_D.SET_TABLESPACE('TS_LOGMNR');

3. Start Log Mining
Load the archive log file into Log Miner:
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2014_05_12\O1_MF_1_2_7JY3ZNWV_.ARC', DBMS_LOGMNR.NEW);
END;
/
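
If the deletion window may span several archive logs, further files can be appended to the same LogMiner session with the ADDFILE option (the second file name below is a hypothetical example):
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2014_05_12\O1_MF_1_3_EXAMPLE_.ARC', DBMS_LOGMNR.ADDFILE);
END;
/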

Perform the log mining:

BEGIN
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/

Format the query results:
SET TRIM ON
SET LINES 200
COL USERNAME FORMAT A15
COL TIMESTAMP FORMAT A20
COL SESSION_INFO FORMAT A200

SPOOL C:\LOGMNR.TXT
SELECT USERNAME, TO_CHAR(TIMESTAMP,'DD-MON-YYYY HH24:MI:SS') TIMESTAMP, SESSION_INFO, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER='SCOTT' AND SEG_NAME='STUDENTS' AND OPERATION='DELETE';
SPOOL OFF;

Stop Log Miner.
BEGIN DBMS_LOGMNR.END_LOGMNR;
END;
/

Open C:\LOGMNR.TXT and look for DELETE statements. If you do not locate the problematic query, repeat step 3 (above) with another archive log file, and keep repeating until you catch the query and the user that performed the DELETE operation.

Note: V$LOGMNR_CONTENTS has many columns, and you can customize your results by modifying the above query, for instance as sketched below.
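
For example, a variation of the query that narrows the search to a suspected time window (the times below are hypothetical):
SELECT USERNAME, TIMESTAMP, SESSION_INFO, SQL_REDO
FROM V$LOGMNR_CONTENTS
WHERE OPERATION='DELETE' AND SEG_OWNER='SCOTT'
AND TIMESTAMP BETWEEN TO_DATE('10-MAY-2014 09:00','DD-MON-YYYY HH24:MI')
                  AND TO_DATE('10-MAY-2014 18:00','DD-MON-YYYY HH24:MI');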

Sunday 30 March 2014

Rapid Growth of SYSAUX Tablespace


From Oracle 10g onwards, SYSAUX is an auxiliary tablespace to the SYSTEM tablespace and is created in every database.
Some database components that formerly created and used separate tablespaces now occupy the SYSAUX tablespace.
The table below shows the components that use the SYSAUX tablespace as their default tablespace and the tablespaces in which they were stored in earlier releases.

Component Using SYSAUX                            Tablespace in Earlier Releases
------------------------------------------------  ------------------------------
Analytical Workspace Object Table                 SYSTEM
Enterprise Manager Repository                     OEM_REPOSITORY
LogMiner                                          SYSTEM
Logical Standby                                   SYSTEM
OLAP API History Tables                           CWMLITE
Oracle Data Mining                                ODM
Oracle Spatial                                    SYSTEM
Oracle Streams                                    SYSTEM
Oracle Text                                       DRSYS
Oracle Ultra Search                               DRSYS
Oracle interMedia ORDPLUGINS Components           SYSTEM
Oracle interMedia ORDSYS Components               SYSTEM
Oracle interMedia SI_INFORMTN_SCHEMA Components   SYSTEM
Server Manageability Components                   (new in 10g)
Statspack Repository                              User-defined
Oracle Scheduler                                  (new in 10g)
Workspace Manager                                 SYSTEM

The optimizer statistics are stored in the data dictionary. Whenever statistics in the dictionary are modified, old versions of the statistics are saved automatically for future restoring. This history information is stored in the SYSAUX tablespace, and the tables involved are:
WRI$_OPTSTAT_OPR
WRI$_OPTSTAT_AUX_HISTORY
WRI$_OPTSTAT_TAB_HISTORY
WRI$_OPTSTAT_IND_HISTORY
WRI$_OPTSTAT_HISTGRM_HISTORY
WRI$_OPTSTAT_HISTHEAD_HISTORY
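
To see how much space these tables and their indexes currently occupy, a quick check against DBA_SEGMENTS can be used (a helper query of my own, not from any Oracle note):

select segment_name, segment_type, round(sum(bytes)/1024/1024) size_mb
from dba_segments
where segment_name like '%OPTSTAT%'
group by segment_name, segment_type
order by size_mb desc;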
 
By default, MMON performs an automatic purge that removes all stats history older than:
  • current time - statistics history retention (by default 31 days)
  • time of the most recent analyze in the system - 1

MMON performs the purge of the optimizer stats history automatically, but it has an internal limit of 5 minutes to complete this job, and it runs the job once every 24 hours. If the operation takes more than 5 minutes it is aborted and the stats are not purged, and no trace or alert message is reported. Because of this, more and more data accumulates in the tables above as time elapses.
Statistics history retention is configurable using the ALTER_STATS_HISTORY_RETENTION procedure.
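
You can also get a per-component breakdown directly, without running the AWR script mentioned below, by querying V$SYSAUX_OCCUPANTS:

select occupant_name, schema_name, space_usage_kbytes
from v$sysaux_occupants
order by space_usage_kbytes desc;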


I have an Oracle 11g R2 database where I noticed the SYSAUX tablespace was growing larger every day. After running $ORACLE_HOME/rdbms/admin/awrinfo.sql (Doc ID 1292724.1),
I found the largest consumers to be SM/OPTSTAT (9.5 GB) and SM/AWR (1 GB), as shown below.

(1b) SYSAUX occupants space usage (v$sysaux_occupants)
********************************************************
|
| Occupant Name        Schema Name          Space Usage
| -------------------- -------------------- ----------------
| SM/OPTSTAT           SYS                    9,415.1 MB
| SM/AWR               SYS                    1,005.3 MB
| SM/ADVISOR           SYS                      188.5 MB
| XDB                  XDB                      125.8 MB
| EM                   SYSMAN                    82.3 MB


(3b) Space usage within AWR Components (> 500K)
**********************************
COMP             MB SEGMENT_NAME - % SPACE_USED                                        SEGMENT_TYPE
--------- --------- ------------------------------------------------------------------ ---------------
ASH           382.0 WRH$_ACTIVE_SESSION_HISTORY.WRH$_ACTIVE_3563904870_4350 - 97%      TABLE PARTITION
ASH            38.0 WRH$_ACTIVE_SESSION_HISTORY_PK.WRH$_ACTIVE_3563904870_4350 - 98%   INDEX PARTITION

(4) Space usage by non-AWR components (> 500K)
**********************************

COMPONENT        MB SEGMENT_NAME                          SEGMENT_TYPE
--------- --------- ------------------------------------- ---------------
NON_AWR     3,220.0 SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST     INDEX
NON_AWR     2,905.0 SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY     TABLE
NON_AWR     1,930.0 SYS.I_WRI$_OPTSTAT_HH_ST              INDEX
NON_AWR       448.0 SYS.I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST    INDEX
NON_AWR       296.0 SYS.WRI$_OPTSTAT_HISTGRM_HISTORY      TABLE
NON_AWR       232.0 SYS.I_WRI$_OPTSTAT_H_ST               INDEX
NON_AWR       168.0 SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST        INDEX
NON_AWR       141.0 SYS.SYS_LOB0000006306C00038$$         LOBSEGMENT
NON_AWR       104.0 SYS.I_WRI$_OPTSTAT_TAB_ST             INDEX


To resolve this:

1. Turn off AUTOEXTEND on the SYSAUX datafiles at the earliest opportunity, to ensure that the tablespace doesn't grow out of bounds and become completely unmanageable.

2. Manually purge old statistics using DBMS_STATS.PURGE_STATS as below.

Find out your present retention value using the following statement:
SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

Find out the oldest statistics history available (i.e. stats that have not yet been purged) using the statement below:
 

SQL> select dbms_stats.get_stats_history_availability from dual;

GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
05-MAR-14 05.31.04.053232000 AM +05:30

Set the retention of old stats to a smaller number of days. Here I set it to 10 days, as below.

SQL> exec dbms_stats.alter_stats_history_retention(10);

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         10
Purge the stats older than 10 days. If there is a lot of history, it is best to do this in stages (rather than a single sysdate-10 call), since the purge consumes significant resources; run it during a period of low activity on the database. The purge deletes data from the WRI$ tables.

The command below purges stats older than 10 days:

SQL> exec dbms_stats.purge_stats(sysdate-10);

PL/SQL procedure successfully completed.

The command below shows the oldest available stats that have not been purged:

SQL> select dbms_stats.get_stats_history_availability from dual;

GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
30-MAR-14 09.46.46.000000000 AM +05:30

Once the purge is done, reorganize these tables to release the space back to the database, as sketched below. Refer to MOS Doc ID 1271178.1 for more details.
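
A minimal sketch of that reorg, assuming a maintenance window (moving a table marks its indexes UNUSABLE, so each index must be rebuilt afterwards), shown here for the largest offenders from the listing above:

-- shrink the table segment by moving it within SYSAUX
alter table sys.wri$_optstat_histhead_history move tablespace sysaux;
-- the move invalidates the indexes; rebuild them
alter index sys.i_wri$_optstat_hh_obj_icol_st rebuild;
alter index sys.i_wri$_optstat_hh_st rebuild;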


Saturday 22 February 2014

Impdp does not create user/schema

Until now my understanding of EXPDP/IMPDP was that if the user performing the import has the IMP_FULL_DATABASE privilege, it will create the users/schemas in the target database. I needed to export (expdp) a schema from one machine and import (impdp) it to another machine, and I wanted impdp to create the schema on the target database.

While doing the import I encountered errors:
$ impdp system/oracle123 directory=TEST_DIR dumpfile=USR_TEST.dmp logfile=impdpUSR_TEST.log
 .......
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=TEST_DIR dumpfile=USR_TEST.dmp logfile=impdpUSR_TEST.log
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema USR_TEST is needed to import this object, but is unaccessible
ORA-01435: user does not exist
Failing sql is:
BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'XXX.YYY.COM', inst_scn=>'7788478540892');COMMIT; END;
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"USR_TEST"."TABLE1" failed to create with error:
ORA-01918: user 'USR_TEST' does not exist
Failing sql is:
CREATE TABLE "USR_TEST"."TABLE1" ("COLUMN1" VARCHAR2(20 BYTE) NOT NULL ENABLE, "COLUMN2" VARCHAR2(20 BYTE), "COLUMN3" VARCHAR2(20 BYTE)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAU
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2 error(s).
$

After reading some documents and blogs, I discovered that if the schema you are importing into does not already exist, the import operation creates it, provided the dump file set contains the necessary CREATE USER metadata and you are importing with enough privileges.

Meaning the Oracle user that exported the schema should have had the CREATE USER privilege. The export had been done by USR_TEST, a basic user without DBA or create-user privileges.
Running the query below gave this result:
SQL> select * from ROLE_SYS_PRIVS where PRIVILEGE = 'CREATE USER';

ROLE                 PRIVILEGE            ADM
-------------------- -------------------- ---
DBA                  CREATE USER          YES
IMP_FULL_DATABASE    CREATE USER          NO

2 rows selected.

It means the EXP_FULL_DATABASE role does not have the CREATE USER privilege.

I ran GRANT CREATE USER TO USR_TEST (shown below) and then ran the export again, and this time some additional object types were processed.
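The grant itself, run as a privileged user such as SYSTEM:
SQL> grant create user to USR_TEST;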
When the USR_TEST user didn't have the 'CREATE USER' privilege:
 $ expdp USR_TEST/USR_TEST schemas=USR_TEST content=METADATA_ONLY directory=TEST_DIR dumpfile=USR_TEST.dmp logfile=expdpUSR_TEST.log
.......
Starting "USR_TEST"."SYS_EXPORT_SCHEMA_01":  USR_TEST/******** schemas=USR_TEST content=METADATA_ONLY directory=TEST_DIR dumpfile=USR_TEST.dmp logfile=expdpUSR_TEST.log
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "USR_TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
.......


When the USR_TEST user had the 'CREATE USER' privilege:
 $ expdp USR_TEST/USR_TEST schemas=USR_TEST content=METADATA_ONLY directory=TEST_DIR dumpfile=USR_TEST.dmp logfile=expdpUSR_TEST.log
.......
Starting "USR_TEST"."SYS_EXPORT_SCHEMA_01":  USR_TEST/******** schemas=USR_TEST content=METADATA_ONLY directory=TEST_DIR dumpfile=USR_TEST.dmp logfile=expdpUSR_TEST.log
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "USR_TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
.......

Note the additional object types processed: SCHEMA_EXPORT/USER, SCHEMA_EXPORT/SYSTEM_GRANT, SCHEMA_EXPORT/ROLE_GRANT and SCHEMA_EXPORT/DEFAULT_ROLE.

To avoid such situations, always try to take the export using the SYSTEM user.
