Sunday, September 4, 2011

Undocumented parameter: _allow_resetlogs_corruption


Note: This is an undocumented parameter and shouldn’t be used unless you know exactly what you are doing. You can put your database in an unstable state with it, so be careful. You have been warned…
_allow_resetlogs_corruption is a hidden parameter, undocumented for mortals like us, that is only supposed to be used by Oracle Support staff in case of emergency. If you need to use it yourself, you should have a cold backup ready to restore, and you should NEVER use it on a working database that you don’t want to lose, because it is very likely that you will have corruption problems afterwards.
To use this parameter, you have to set it in your spfile or pfile like this:
_allow_resetlogs_corruption=TRUE
By default, it is set to false, of course.
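Hidden parameters don’t show up in V$PARAMETER, so if you want to check the current value you can use the classic X$ query below. This is just a sketch (it must be run as SYS; the column names come from the X$ fixed tables):

SELECT i.ksppinm name, v.ksppstvl value
FROM x$ksppi i, x$ksppcv v
WHERE i.indx = v.indx
AND i.ksppinm = '_allow_resetlogs_corruption';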
When to use this parameter?
You may need to use the undocumented parameter _allow_resetlogs_corruption if you have a database that has crashed and cannot be recovered because one of your redo logs is corrupted. Yes, you could perform an incomplete recovery, but… what if you MUST have the data that is inside your corrupted redo logs? What if you can’t afford to lose that data, or at least must try to recover it whenever humanly possible?
Well, in that case you can try to use this parameter, because you have nothing to lose. However, make sure that you have a cold backup of your broken database, or you could lose any chance of recovering your data.
And just in case you are wondering: yes, there are other ways to recover the data from a “broken” database, but we will talk about that kind of disaster recovery tooling in the future.
To activate resetlogs corruption and try to recover all the data that you can, you must:
1. Add _allow_resetlogs_corruption=TRUE to your pfile (or set it in the spfile with ALTER SYSTEM SET "_allow_resetlogs_corruption"=TRUE SCOPE=SPFILE; — note the double quotes, which hidden parameters require).
2. STARTUP MOUNT;
3. Cross your fingers.
4. ALTER DATABASE OPEN RESETLOGS;
If your database opens, DON’T PUT IT IN PRODUCTION. Extract all the data that you need and recover it by any means, for example using exp or Data Pump. You may have problems extracting the data, but if you have managed to open the database, you should be able to take an export.
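For example, a Data Pump schema export could look like this (the credentials, schema name, and file names here are placeholders; adjust them to your environment):

expdp system/manager schemas=MYAPP directory=DATA_PUMP_DIR dumpfile=rescue.dmp logfile=rescue.log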
That said, it’s best if you never need this kind of parameter. But if you do run into problems with your database, I hope this post helps you recover it in case of disaster.
Be careful and don’t play with your production environment!

Saturday, September 3, 2011

What is a Schema in Oracle?


It may sound like an easy question, but the answer isn’t always clear to everyone, so I intend to clarify it in this post.
Most users think that, when they are developing an application, what they need is a database. Wrong. What they are really looking for is a schema (or maybe a set of schemas); almost no application needs a whole database for itself.
A schema is a collection of database objects owned by a single database user. The name of the schema is the same as the name of the user. The objects in a schema can live in different tablespaces, because there is no direct relationship between a tablespace and a schema.
On the other hand, we can have a user that doesn’t own a schema but has grants to work with one. That’s a clear example of the difference between a schema and a user: a schema is always owned by a user, but a user doesn’t have to own a schema.
Or, better said: a user that doesn’t own a schema really owns an empty schema, because he can eventually create objects of his own, and then the schema is right there.
However, remember that a schema is a set of database objects, so… is an empty collection of objects still a collection of objects? That depends on your point of view; in any case, an empty schema isn’t of much interest to us.
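As a minimal sketch of the difference between owning a schema and merely having grants on one (all the user and object names here are made up for illustration):

CREATE USER app_owner IDENTIFIED BY owner_pwd;
GRANT CREATE SESSION, CREATE TABLE TO app_owner;
ALTER USER app_owner QUOTA UNLIMITED ON users;

CREATE USER app_reader IDENTIFIED BY reader_pwd;
GRANT CREATE SESSION TO app_reader;

-- app_owner creates an object, so his schema is no longer empty
CREATE TABLE app_owner.customers (id NUMBER);

-- app_reader still owns nothing (an empty schema), but can work with app_owner's schema
GRANT SELECT ON app_owner.customers TO app_reader;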
I hope this post has given you a clearer picture of what a schema is, and of the differences between a schema and a database.

Friday, September 2, 2011

Managing Tables: Logging versus Nologging



When creating large tables (large referring to the amount of data), you may want to consider creating them with the nologging option. If the data is of a transient nature, that is, the table is used for staging and other manipulation will transform it into yet another table, do you really need to generate redo log activity or entries? On the other hand, what if the new table is created using CTAS (create table as select)? If the new CTAS-created table includes the nologging option, what are the implications for future data manipulation language (insert, update, or delete) operations?
The main question about NOLOGGING is this: does creating a table with the nologging option mean there is no logging (i.e., generation of redo entries) ever, or just that the initial creation operation has no logging, but that DML down the road generates logging? "Managing Tables," Chapter 15 of Oracle® Database Administrator's Guide 10g Release 2 (10.2), covers options you may want to consider when creating tables.
With respect to the nologging option, three benefits listed in the Administrator's Guide are:
  • Space is saved in the redo log files
  • The time it takes to create the table is decreased
  • Performance improves for parallel creation of large tables
Rule number one with respect to data is to never put yourself into an unrecoverable situation. The importance of this guideline cannot be stressed enough, but it does not mean that you can never use time saving or performance enhancing options. How and when can the NOLOGGING option be employed?
As already mentioned, one way is to use it during CTAS operations. Let's take a look at creating a large (relatively speaking) table using CTAS with and without NOLOGGING. The example table is SALES in the SH sample schema.
SQL> set timing on
SQL> create table sales_logging as
  2  select * from sales;
Table created.
Elapsed: 00:00:25.24
SQL> create table sales_nologging NOLOGGING as
  2  select * from sales;
Table created.
Elapsed: 00:00:06.59
For just over 900,000 rows, the time difference is around 18 seconds. Let's ramp up the size to over 10 million and compare the times. The demo table is SALES_x10 and was created using NOLOGGING, and then built up by selecting * from SALES in repeated insert statements.
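The build-up was along these lines (a sketch; each pass appends the full SALES table again, repeated until the row count passed 10 million):

SQL> insert into sales_x10 select * from sales;
SQL> commit;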
SQL> create table sales_x10_logging as
  2  select * from sales_x10;
Table created.
Elapsed: 00:03:19.70
SQL> create table sales_x10_nologging NOLOGGING as
  2  select * from sales_x10;
Table created.
Elapsed: 00:01:29.80
The time difference now is close to two minutes. As a rough estimate using extrapolation, the times for creating a 100 million row table are 32 and 15 minutes, respectively. In a "quick backup/fix some problem right now" scenario, that 15-plus minute difference can be huge in terms of significance.
Now that the new table is created, how does the initial NOLOGGING option impact DML operations? Is there logging or not? Here is a simple test: delete some rows, rollback, and see if the counts are the same.
SQL> select count(*) from sales_nologging;
  COUNT(*)
----------
    918843
SQL> delete from sales_nologging
  2  where rownum < 100000;
99999 rows deleted.
SQL> select count(*) from sales_nologging;
  COUNT(*)
----------
    818844
SQL> rollback;
Rollback complete.
SQL> select count(*) from sales_nologging;
  COUNT(*)
----------
    918843
This result is also borne out by what is stated in the documentation: "The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."
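By contrast, a direct-path insert (using the APPEND hint) does honor the table's NOLOGGING attribute and generates only minimal redo. A quick sketch:

SQL> insert /*+ APPEND */ into sales_nologging
  2  select * from sales;

SQL> commit;

Note that after a direct-path insert, the transaction must be committed (or rolled back) before the table can be queried again in the same session.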
What happens at the tablespace level? The answer is: it depends. A table created with an explicit LOGGING or NOLOGGING clause keeps that attribute regardless of the tablespace setting, while a table created without one inherits the tablespace's default (LOGGING, unless the tablespace was created or altered with NOLOGGING). Here is the current logging status of our SALES-related tables:
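The status can be pulled from USER_TABLES with a simple query (the same one used again later in this post):

SQL> select table_name, logging
  2  from user_tables
  3  where table_name like 'SALES%';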
TABLE_NAME                     LOG
------------------------------ ---
SALES
SALES_LOGGING                  YES
SALES_NOLOGGING                NO
SALES_X10                      NO
SALES_X10_LOGGING              YES
SALES_X10_NOLOGGING            NO
Why doesn't SALES have a logging status? The SALES table is partitioned, so its logging status is reported per partition via USER_TAB_PARTITIONS. Let's alter the USERS tablespace to NOLOGGING, create a new table, and check its logging status. (Note: the SALES_TRANSACTION_EXT table was manually removed from the list below, but it does appear in the actual SQL*Plus output.)
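The tablespace change itself is a one-liner:

SQL> alter tablespace users NOLOGGING;

Tablespace altered.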
SQL> CREATE TABLE SALES_NO_TS_LOGGING
  2  (PROD_ID NUMBER NOT NULL,
  3  CUST_ID NUMBER NOT NULL,
  4  TIME_ID DATE NOT NULL,
  5  CHANNEL_ID NUMBER NOT NULL,
  6  PROMO_ID NUMBER NOT NULL,
  7  QUANTITY_SOLD NUMBER NOT NULL,
  8  AMOUNT_SOLD NUMBER NOT NULL);
Table created.
SQL> select table_name, logging
  2  from user_tables
  3  where table_name like 'SALES%';
TABLE_NAME                     LOG
------------------------------ ---
SALES
SALES_LOGGING                  YES
SALES_NOLOGGING                NO
SALES_NO_TS_LOGGING            NO
SALES_X10                      NO
SALES_X10_LOGGING              YES
SALES_X10_NOLOGGING            NO
Can we force the SALES_NO_TS_LOGGING table to be logging in a NOLOGGING tablespace? As shown below, the answer is yes.
SQL> alter table sales_no_ts_logging LOGGING;
Table altered.
SQL> select table_name, logging
  2  from user_tables
  3  where table_name like 'SALES_NO%';
TABLE_NAME                     LOG
------------------------------ ---
SALES_NOLOGGING                NO
SALES_NO_TS_LOGGING            YES

Clarifying what NOLOGGING means

Let's suppose a table is created using the NOLOGGING option, regardless of how NOLOGGING is invoked (in a CREATE statement using NOLOGGING, or in a tablespace with NOLOGGING set). What is the end result of creating a table, inserting data, committing, followed by a delete statement and a rollback statement? Does NOLOGGING mean the DML is not recorded and that you cannot roll back because there was nothing logged in the redo logs?
SQL> create table test
  2  (id number) nologging;
Table created.
SQL> insert into test values (1);
1 row created.
SQL> commit;
Commit complete.
SQL> delete from test;
1 row deleted.
SQL> rollback;
Rollback complete.
SQL> select * from test;
        ID
----------
         1
The answer is no, that is not what NOLOGGING means. To reiterate what was stated before:
"The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATEDELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."

In Closing

As demonstrated, using the NOLOGGING option can be a time saver, but it can also put you at risk if you do not use it wisely. If you create a table with NOLOGGING but cannot afford to lose the data (which implicitly means you need the table it is stored in), the first step after the data load completes is to take a backup. If a good part of your data-loading work revolves around using SQL*Loader to load data into staging tables, make the tables (or the tablespace) NOLOGGING and save yourself some time.
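For instance, a direct-path SQL*Loader run into a NOLOGGING staging table could look like this (the credentials and file names are placeholders):

sqlldr userid=scott/tiger control=stage_load.ctl log=stage_load.log direct=true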

Source:
http://www.databasejournal.com/features/oracle/article.php/3631361/Managing-Tables-Logging-versus-Nologging.htm

Thursday, September 1, 2011

Trend Oracle Archive Graph: how much archive was created yesterday or last week


Often you want to show nice graphs and pictures to please your colleagues. Here is an Excel graph I created to track archive log growth.
The following query shows a count and size of the redo log files by day:
-- Daily count and size of redo log space (single instance)
SELECT A.*,
       Round(A.Count# * B.AVG# / 1024 / 1024) Daily_Avg_Mb
FROM
  (SELECT To_Char(First_Time, 'YYYY-MM-DD') DAY,
          Count(1) Count#,
          Min(RECID) Min#,
          Max(RECID) Max#
   FROM v$log_history
   GROUP BY To_Char(First_Time, 'YYYY-MM-DD')
   ORDER BY 1 DESC) A,
  (SELECT Avg(BYTES) AVG#,
          Count(1) Count#,
          Max(BYTES) Max_Bytes,
          Min(BYTES) Min_Bytes
   FROM v$log) B;
This v$log_history script comes from Ilya Petrenko. Here is the output of the script, with the MIN# and MAX# columns removed:
DAY         LOG COUNT  DAILY_AVG_MB
----------  ---------  ------------
02/07/2010         98          4480
01/07/2010        126          5760
30/06/2010         54          2469
29/06/2010         28          1280
28/06/2010         37          1691
27/06/2010         14           640
26/06/2010         14           640
25/06/2010         14           640
24/06/2010         19           869
23/06/2010         14           640
Copy this table into Excel and click on the Chart Wizard. Select the X and Y coordinates and you will get a nice picture of your daily archive generation.
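If you'd rather not copy and paste by hand, you can also spool the query output straight to a CSV file that Excel opens directly (the file name is just an example):

SQL> set colsep ','
SQL> set pagesize 0
SQL> spool archive_per_day.csv
SQL> -- run the query from above here
SQL> spool off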