Monday, 15 September 2014
How to Recreate the AWR (Automatic Workload) Repository
The best way to deinstall/install AWR is as follows:
1. Disable AWR statistics gathering by setting the statistics level to basic as follows:
Check settings for the parameters as follows:
sqlplus /nolog
connect / as sysdba
show parameter cluster_database
show parameter statistics_level
show parameter sga_target
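For reference, a minimal sketch of step 1 (an assumption based on the text above, for a single-instance database using an spfile):
-- Sketch: disable AWR statistics gathering by lowering the statistics level (assumes spfile)
alter system set statistics_level = basic scope = both;
show parameter statistics_level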
Standby MRP needs old log sequence even after restore of incremental backup at standby
An incremental backup was applied at the standby, but the MRP process at the standby is still looking for a very old log sequence.
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received",
            APPL.SEQUENCE# "Last Sequence Applied",
            (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
     FROM (SELECT THREAD#, SEQUENCE#
           FROM V$ARCHIVED_LOG
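The excerpt ends mid-statement; for reference only, a commonly used full form of this archive/apply gap query (reconstructed here as an assumption, not the post's original text) is:
SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
  (SELECT THREAD#, SEQUENCE#
   FROM V$ARCHIVED_LOG
   WHERE (THREAD#, FIRST_TIME) IN
         (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
  (SELECT THREAD#, SEQUENCE#
   FROM V$LOG_HISTORY
   WHERE (THREAD#, FIRST_TIME) IN
         (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#;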
Saturday, 7 June 2014
DBUA error database not in mounted state
While upgrading one of my 10.2.0.4 64-bit databases to 11.2.0.2 using DBUA, it failed while retrieving database information and popped up the message:
"The Upgrade Assistant failed in bringing up the database XXX. Oracle Home /oracle/OraHome11203 obtained from file /etc/oratab was used to connect to the database.
How to copy schema stats from one schema to another schema
Here we will see how to copy schema statistics from one schema to another. Basically there are two methods for doing this:
- using DBMS_STATS, or
- using Data Pump
This is especially useful when doing large data transfers between schemas or databases and statistics are needed quickly. Remember that data density, row chaining and the index clustering factor may have changed due to the data import and index rebuilds, so the copied statistics will not necessarily reflect the new physical layout.
Method 1: Using DBMS_STATS
Extract the statistics
First we need to log in as the owner of the schema from which we want to extract the statistics. In the following example the schema owner and schema is "SCOTT". Logged in as SCOTT, we create a table to hold the statistics information:
exec dbms_stats.create_stat_table('SCOTT','STATISTICS');
The arguments required are: <SCHEMANAME>, <TABLENAME>.
To extract the statistics run:
begin
  dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'STATISTICS');
end;
/
Transfer the statistics
Now all statistics from that particular schema are stored in the table named "STATISTICS". The table must be transferred to the destination schema (in the same or another database). This can be done, for instance, with exp/imp or via a database link.
For exporting the statistics with export/import:
exp system/manager file=stats_table.dmp log=stats_table.log tables=SCOTT.STATISTICS
Importing is done with:
imp system/manager file=stats_table.dmp log=stats_table_imp.log fromuser=<source user name> touser=<destination user name>
Import the statistics
Import into another database but with the same schema name
If the schema name in the destination database is identical to the schema name in the source database ("SCOTT" in source and "SCOTT" in destination), the stats can be imported with:
begin
  dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'STATISTICS');
end;
/
Import into another or the same database but with a different schema name
If the name of the destination schema is different, you have to adjust your statistics table before importing the stats. In the following example we import the stats into a schema called "TEST".
In the stats table the column named "C5" contains the name of the schema the statistics were exported from and will be imported to. If you want to import the stats into another schema you have to modify this column first:
update statistics set c5='TEST';
and afterwards import the statistics (logged in as user TEST or SYS). Note that ownname must now match the schema name stored in C5:
begin
  dbms_stats.import_schema_stats(ownname => 'TEST', stattab => 'STATISTICS');
end;
/
Method 2: Using Data Pump
Besides using DBMS_STATS, you can use Data Pump to transfer table and index statistics from one database to another.
When doing a normal export/import, table and index statistics are transferred along with the data. If you want to transfer only the statistics, you can use a call along the following lines:
impdp system/manager dumpfile=<directory>:<file> logfile=<directory>:<logfile name>
  remap_schema=<source>:<destination> remap_tablespace=<source>:<destination>
  schemas=<name of source schema(s)> include=statistics
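The dump file used above must come from a prior Data Pump export; a minimal sketch of that step, reusing the same placeholder directory and file names, might look like this:
-- Hypothetical export producing the dump file consumed by the impdp call above
expdp system/manager dumpfile=<directory>:<file> logfile=<directory>:<export logfile name>
  schemas=<name of source schema(s)>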
Sunday, 25 May 2014
Restoring Optimizer Statistics
In some cases we may find that gathering fresh statistics has led to the optimizer choosing sub-optimal execution plans, and we would like to restore the statistics as of a date when we had good performance.
Here we can restore table statistics to a point in the past by using the DBMS_STATS.RESTORE_TABLE_STATS procedure.
We can check how far back in time we can go to restore statistics.
SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
13-MAR-14 10.51.37.266819000 PM +05:30
We can see what the retention period for the statistics is, and we can also alter it. In this case we are changing the statistics retention from 10 days to 14 days. The default is 31 days.
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
10
SQL> execute DBMS_STATS.ALTER_STATS_HISTORY_RETENTION (14);
PL/SQL procedure successfully completed.
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
14
Let us now look at an example of restoring the statistics from a point in time in the past.
SQL> select count(*) from myobjects;
COUNT(*)
----------
138353
SQL> select TABLE_NAME, STATS_UPDATE_TIME from dba_tab_stats_history
  2  where table_name='MYOBJECTS' and owner='SYSTEM';
TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ ----------------------------------------
MYOBJECTS                      18-MAR-14 10.00.19.496616 PM +05:30
MYOBJECTS                      23-MAR-14 11.29.10.526131 AM +05:30
MYOBJECTS                      23-MAR-14 11.35.47.249720 AM +05:30
MYOBJECTS                      23-MAR-14 11.37.36.050421 AM +05:30
MYOBJECTS                      23-MAR-14 11.38.05.015640 AM +05:30
The table MYOBJECTS has been analyzed several times in the past few days. Let us see what happens if we restore the statistics of the table gathered on the 18th of March.
SQL> execute dbms_stats.restore_table_stats('SYSTEM','MYOBJECTS','18-MAR-14 10.00.19.496616 PM +05:30');
PL/SQL procedure successfully completed.
The table MYOBJECTS actually has 138353 rows, but the statistics were gathered at a time when it had much fewer rows, as shown by the NUM_ROWS column of the DBA_TABLES view.
SQL> select num_rows from dba_tables
  2  where table_name='MYOBJECTS' and owner='SYSTEM';
NUM_ROWS
----------
46141
We can now replace these 'older' statistics with the most current statistics available in the history.
SQL> execute dbms_stats.restore_table_stats('SYSTEM','MYOBJECTS','23-MAR-14 11.38.05.015640 AM +05:30');
PL/SQL procedure successfully completed.
Now the NUM_ROWS column shows the row count, which is the same as the actual number of rows in the table.
SQL> select num_rows from dba_tables
  2  where table_name='MYOBJECTS' and owner='SYSTEM';
NUM_ROWS
----------
138353
It should be noted that we can use the same approach to restore not only the previous statistics of a single table, but also those of a full schema or of the entire database.
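For illustration, a minimal sketch of the schema-level and database-level variants (the timestamp below is simply reused from the earlier example):
-- Sketch: restore all of SYSTEM's schema statistics to a point in time
execute dbms_stats.restore_schema_stats('SYSTEM','18-MAR-14 10.00.19.496616 PM +05:30');
-- Sketch: restore statistics for the entire database to the same point in time
execute dbms_stats.restore_database_stats('18-MAR-14 10.00.19.496616 PM +05:30');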
REMEMBER: we gather statistics to ensure that the optimizer chooses optimal execution plans, but gathering statistics invalidates the parsed representation of the dependent SQL statements, and reparsing a statement after the stats gather can make the optimizer choose an execution plan which is different from (and less optimised than) the original plan.
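If that invalidation is a concern, the NO_INVALIDATE parameter of the gathering procedures controls when dependent cursors are invalidated; a hedged sketch using the table from this example:
-- Sketch: let Oracle spread cursor invalidations over time instead of invalidating immediately
begin
  dbms_stats.gather_table_stats(
    ownname       => 'SYSTEM',
    tabname       => 'MYOBJECTS',
    no_invalidate => dbms_stats.auto_invalidate);
end;
/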
Refresh of Mview is Throwing ORA-00942
Refresh of Mview is throwing ORA-942 after successful creation.
create materialized view schema_a.client_mview
tablespace users build immediate
refresh force start with trunc(sysdate)+1455/1440
next trunc(sysdate)+1455/1440
as
select client_id, username, createdate, swver
from client_req@TEST_LINK
where swver is not null;
Materialized View Created.
select db_link, username from dba_db_links where db_link like 'TEST_LINK';
DB_LINK USERNAME
-------------------- ------------------------------
TEST_LINK.COM SCHEMA_B
On the TEST_LINK.COM database, the db link user SCHEMA_B has select granted directly on the master tables.
select grantee, owner, table_name, privilege
from user_tab_privs
where table_name like 'CLIENT%';
GRANTEE       OWNER        TABLE_NAME         PRIVILEGE
------------- ------------ ------------------ ------------------
SCHEMA_B      SCHEMA_A     CLIENT_REQ         SELECT
Trying to refresh the materialized view returns errors:
SQL> execute dbms_refresh.refresh('"SCHEMA_A"."CLIENT_MVIEW"');
BEGIN dbms_refresh.refresh('"SCHEMA_A"."CLIENT_MVIEW"'); END;
*
ERROR at line 1:
ORA-12018: following error encountered during code generation for "SCHEMA_A"."CLIENT_MVIEW"
ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2255
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2461
ORA-06512: at "SYS.DBMS_IREFRESH", line 683
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 1
The database link user (SCHEMA_B) does not have SELECT privileges on MLOG$_CLIENT_REQ, the materialized view log table, which resides in the master table owner's schema (SCHEMA_A).
The problem was identified using the trace file.
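A quick check on the master (TEST_LINK.COM) database can confirm the missing grant; this query is an assumed verification step, not from the original post:
-- Run as SYS (or the MV log owner) on the master database;
-- no row returned for SCHEMA_B on MLOG$_CLIENT_REQ confirms the missing privilege
select grantee, owner, table_name, privilege
from dba_tab_privs
where owner = 'SCHEMA_A'
and table_name = 'MLOG$_CLIENT_REQ';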
To implement the solution, please execute one of the following:
grant select on MLOG$_<Table Name> to <Database link user>;
or
grant select any table to <Database link user>;
Example:
grant select on MLOG$_CLIENT_REQ to SCHEMA_B;
or
grant select any table to SCHEMA_B;
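Once the grant is in place, re-running the refresh that failed earlier is a simple way to verify the fix (assuming the SCHEMA_A.CLIENT_MVIEW names used above):
-- The refresh that previously raised ORA-00942 should now succeed
execute dbms_refresh.refresh('"SCHEMA_A"."CLIENT_MVIEW"');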
Saturday, 10 May 2014
LOG MINING IN ORACLE DATABASE
Suppose data in a table has been deleted. The audit trail was not enabled, therefore we do not have a clue which user performed the deletion, and there are around 30 users that have DELETE rights on that table. Management wants to know the name of the user; here we can use the Oracle LOG MINING feature to filter out the transactions performed during that period.
Note: Make sure archive logging was already enabled if you want to use the LOG MINING technique.
CONSIDERATIONS
Database : ORCL
Schema : SCOTT
Table : STUDENTS
Date of Deletion : 10-MAY-2014
1. Gather Archived Log
Make the archived logs of 10-MAY-2014 available on your local hard disk. In my case they are already available in the Flash Recovery Area.
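As a hedged helper, the archived logs covering that day can be listed from the controlfile (assuming the entries have not yet aged out of V$ARCHIVED_LOG):
-- List archived logs generated on 10-MAY-2014
SELECT NAME, THREAD#, SEQUENCE#, FIRST_TIME
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME >= DATE '2014-05-10'
AND FIRST_TIME < DATE '2014-05-11'
ORDER BY SEQUENCE#;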
2. Create Tablespace for Log Miner
By default the LOG MINER tables reside in the SYSAUX tablespace, but it is efficient to keep them in a separate tablespace for easier manageability and maintenance.
SQLPLUS SYS@ORCL AS SYSDBA
CREATE TABLESPACE TS_LOGMNR DATAFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TS_LOGMNR_01.DBF'
SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;
Use the following statement to move all LOG MINER objects from SYSAUX to the TS_LOGMNR tablespace.
EXEC DBMS_LOGMNR_D.SET_TABLESPACE('TS_LOGMNR');
3. Start Log Mining
Load the archive log file into Log Miner.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2014_05_12\O1_MF_1_2_7JY3ZNWV_.ARC', DBMS_LOGMNR.NEW);
END;
/
Perform Log Mining
BEGIN
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
Format query results.
SET TRIM ON
SET LINES 200
COL USERNAME FORMAT A15
COL TIMESTAMP FORMAT A20
COL SESSION_INFO FORMAT A200
SPOOL C:\LOGMNR.TXT
SELECT USERNAME, TO_CHAR(TIMESTAMP,'DD-MON-YYYY HH24:MI:SS') TIMESTAMP, SESSION_INFO, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER='SCOTT' AND SEG_NAME='STUDENTS' AND OPERATION='DELETE';
SPOOL OFF;
Stop Log Miner.
BEGIN DBMS_LOGMNR.END_LOGMNR;
END;
/
Open C:\LOGMNR.TXT and look for DELETE statements. If you do not locate the problematic query, repeat step 3 (above) with another archive log file, and keep repeating until you find the query and the user that performed the DELETE operation.
Note: V$LOGMNR_CONTENTS has many columns, and you can customize your results by modifying the above query.