Oracle startup and shutdown script: the very famous dbora ;)

This is the most complete dbora script I have come up with. It uses the Linux init subsystem and a PID file so that the status action works properly. Please have a look and point out if there is anything further to be done with it.

#!/bin/sh
# chkconfig: 345 55 25
# /etc/init.d/dbora
# Description: Starts and stops the Oracle database and listener
# Author: ConArtist
# Date: 21/04/14
# source function library
.

Which query takes the most CPU in an Oracle database?

Here is a simple SQL query which will show which query is misbehaving and consuming the most CPU:

SELECT ss.username,
       ss.OSUSER,
       ss.TERMINAL,
       se.SID,
       VALUE / 100 cpu,
       sq.SQL_TEXT
  FROM v$session ss, v$sesstat se, v$statname sn, v$sql sq
 WHERE se.STATISTIC# = sn.STATISTIC#
   AND NAME LIKE '%CPU used by this session%'
   AND se.SID = ss.SID
   AND ss.STATUS = 'ACTIVE'
   AND ss.username IS NOT NULL
   AND ss.SQL_ID = sq.SQL_ID
   AND VALUE / 100 >= 1
 ORDER BY VALUE DESC;

And when you see the top CPU-using process, monitor its activity through this query:

SELECT nvl(ses.username,'ORACLE PROC')||' ('||ses.sid||')' USERNAME,
       SID,
       MACHINE,
       REPLACE(SQL.SQL_TEXT,CHR(10),'') STMT,
       ltrim(to_char(FLOOR(SES.LAST_CALL_ET/3600), '09')) || ':'
       || ltrim(to_char(FLOOR(MOD(SES.LAST_CALL_ET, 3600)/60), '09')) || ':'
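If you just want a quick, session-independent view of which statements have used the most CPU overall, a minimal sketch against v$sql could look like this (CPU_TIME is reported in microseconds):

SELECT *
  FROM (SELECT sql_id,
               cpu_time / 1000000 cpu_secs,
               executions,
               sql_text
          FROM v$sql
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 10;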

How to get 10046 Trace for Oracle Export and Import Utility

PURPOSE
-------

If you want to find out what happens when you run the Oracle export or import utility, you can set the Oracle trace event 10046. This will generate a trace file that can be used to find out what actually happens behind the scenes.

This is how you can get a 10046 trace for the export and import utilities:

1] Run the Oracle export command (exp) and let the program prompt you for the options.

$ exp

Enter the username and password as below when prompted:
Username: system
Password:

2] Open another window to the database server and login using sqlplus.

$ sqlplus system/manager

3] Now find out the SID of the exp session

SQL> select sid, program from
v$session where username = 'SYSTEM';

SID PROGRAM
---------- ------------------------------------------------
788 exp@SERVER01 (TNS V1-V3)

4] Now find the PID and SPID for that session

SQL> select s.sid, p.pid, p.spid
from v$session s, v$process p
where s.paddr = p.addr and s.sid = 788;

SID PID SPID
---------- ---------- ----------
788 189 1076

The SPID from the previous query is the operating system process ID (OSPID). This is the process that will be traced.
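As a side note, steps 3 and 4 can be combined into a single lookup. A rough sketch, assuming only one exp session is connected:

SQL> select s.sid, p.pid, p.spid
       from v$session s, v$process p
      where s.paddr = p.addr
        and s.program like 'exp%';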

5] Now exit from this session

SQL> exit

6] Generate a trace file for process ID 1076. To do that, log in as sys using sqlplus and run the following commands.

$ sqlplus / as sysdba

SQL> oradebug setospid 1076
Oracle pid: 189, Unix process pid: 1076, image: oracle@SERVER01 (TNS V1-V3)

SQL> oradebug unlimit
Statement processed.

SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/dev/DEV1/trace/DEV1_ora_1076.trc

This gives the name of the trace file

SQL> oradebug Event 10046 trace name context forever, level 12;
Statement processed.

7] From the window where the "exp" command was run, now export a table

8] From the SQL prompt of the window where you are logged in as the "sys" user, set the trace off once you get the required information or the error.

SQL> oradebug Event 10046 trace name context off;
ORA-00072: process "Unix process pid: 17370, image: oracle@SERVER01" is not active

SQL>exit

Now you have the trace file, which is
/u01/app/oracle/diag/rdbms/dev/DEV1/trace/DEV1_ora_1076.trc

Investigating Slow Oracle database performance

On many occasions you will face a scenario where Oracle database performance is very slow. There can be many reasons for this. To investigate a slow performance problem, begin by deciding what diagnostics should be gathered. To do this, consider the following questions and take the appropriate action.

Database is Slow: Is the performance problem constant or does it occur at certain times of the day?

* CONSTANT
o Gather an AWR or Statspack report for a period of time when the problem occurs (a 1 hour report is usually sufficient).
o If you have a historic report which covers the same time of day and period when the performance was OK, then take that too.

* ONLY CERTAIN TIMES
o Gather an AWR or Statspack report for a period of time which covers the period when the problem exists (For instance, if you have a problem when something is run between 12 and 3 then make sure the report covers either that time or part of that time).
o Additionally, for comparison, gather an AWR or Statspack report for a similar period of time when the problem does not occur. Always ensure that you are making a fair comparison – for instance, the same time of day or the same workload and make sure the duration of the report is the same.

NOTE 1:- Wherever possible, Statspack reports should cover a minimum of 10 minutes and a maximum of 30 minutes. Longer periods can distort the information, and such reports should be re-gathered using a shorter time period. With AWR a 1-hour report is OK, but for most performance issues a short 10-30 minute snapshot should be sufficient.
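For reference, both report types are generated from SQL*Plus with the standard scripts shipped under $ORACLE_HOME/rdbms/admin; each prompts for the snapshot range and report name (AWR requires the Diagnostics Pack licence):

SQL> @?/rdbms/admin/awrrpt.sql
SQL> @?/rdbms/admin/spreport.sql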

NOTE 2:- It is often prudent to read the matching ADDM report first to get a pointer to the main issues. Reading the corresponding ADDM report as a first step to tuning can save a lot of time, because it points straight at the main issues, as compared to trying to work out what an AWR report is presenting.

NOTE 3:- If SQL performance is suspected as the cause of the slowness, then collect an ASH report for the same period. If a specific SQL is suspected of slowness, then run an ASH report just for that SQL_ID and also look at using SQLTXplain to diagnose issues with that statement.
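As a rough illustration of the data behind an ASH report, a query like the following shows what a single statement was waiting on over time; the SQL_ID value here is hypothetical and should be replaced with the suspect one:

select sample_time, session_id, event, wait_class
  from v$active_session_history
 where sql_id = '0abcd1efgh234'
 order by sample_time;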

Database is Slow: Does the problem affect one session, several sessions or all sessions?

* ONE SESSION – Gather 10046 trace for the session.
* SEVERAL SESSIONS – Gather 10046 trace for one or two of the problem sessions
* ALL SESSIONS – Gather AWR or Statspack reports
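For the one-session and several-sessions cases, a straightforward way to get the 10046 trace (waits plus binds, i.e. the level 12 equivalent) is DBMS_MONITOR; the SID and SERIAL# below are hypothetical and should be taken from v$session:

SQL> exec dbms_monitor.session_trace_enable(session_id => 788, serial_num => 1234, waits => TRUE, binds => TRUE);

(reproduce the slow operation in that session, then)

SQL> exec dbms_monitor.session_trace_disable(session_id => 788, serial_num => 1234);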

Database Hangs: Does a particular session "appear" to hang, or do several sessions or all sessions hang?

Please collect the following diagnostics according to the specific scenario:

When only one session appears to be 'hung'

* Gather 10046 trace for the session.
* Get a few errorstacks for the session
* Gather an AWR (or Statspack) report for a period of time when the problem occurs (a 1 hour report is usually sufficient).

When more than one session appears to be 'hung'

* Gather 10046 trace for one or two of the problem sessions
* Get a few errorstacks for one or two of the problem sessions
* Gather an AWR (or Statspack) report for a period of time when the problem occurs (a 1 hour report is usually sufficient).
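The errorstack dumps mentioned above can be taken with oradebug while attached to the hung session's server process; the OS PID below is hypothetical and should come from v$process, as shown in the 10046 section above:

SQL> oradebug setospid 1076
SQL> oradebug unlimit
SQL> oradebug dump errorstack 3

(wait a minute or so and repeat the errorstack dump two or three times)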

When most of the sessions appear to be 'hung', treat this as a database hang.
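For a database-wide hang, a commonly used sketch, connected as SYSDBA, is to take hanganalyze and systemstate dumps:

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug hanganalyze 3
SQL> oradebug dump systemstate 266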

Oracle PL/SQL performance tuning using BULK COLLECT and FORALL

Here I'll show you how to tune the performance of your PL/SQL code in an Oracle database using BULK COLLECT and FORALL. Using BULK COLLECT and FORALL instead of a normal FOR LOOP, I have achieved significant performance benefits. In some cases I was able to bring the run time down from 14 minutes to just 10 seconds.

You may already know that in an Oracle database, PL/SQL code is executed by the PL/SQL engine and SQL is executed by the SQL engine. SQL is embedded within PL/SQL, and when the PL/SQL engine encounters SQL code it passes control to the SQL engine for execution. This is called context switching between SQL and PL/SQL.

For example see the code below:

FOR i in 1..1000 LOOP
insert into emp values (...);
END LOOP;

In the above code, when the PL/SQL engine executes the FOR loop, it needs to execute the INSERT statement 1000 times, which also means 1000 context switches. This generally degrades performance and causes longer execution times. By using the BULK COLLECT and FORALL method, the same operation can be achieved with just a single context switch instead of 1000, and that is where you can achieve a significant performance benefit.
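To make the idea concrete before the full test case below, here is a minimal sketch of the same 1000-row insert done with a single FORALL; the emp table and its empno column are only assumed for illustration:

declare
  type emp_tab is table of emp%rowtype index by pls_integer;
  l_rows emp_tab;
begin
  -- build the 1000 rows in PL/SQL memory first (only empno is populated here)
  for i in 1..1000 loop
    l_rows(i).empno := i;
  end loop;
  -- a single context switch: all 1000 rows are sent to the SQL engine together
  forall i in 1..1000
    insert into emp values l_rows(i);
  commit;
end;
/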

The test case here copies 1,000,000 records from a simple source table to a destination table. The source and destination tables have the same structure. The only difference is that the source table has a primary key.

Here is the code that I used to create the source table called t_source and the destination table called t_dest:

create table t_source(
empno number,
ename varchar2(10),
joindate date);

alter table t_source add constraint pk_src primary key (empno);

create table t_dest as select * from t_source;

Then I created 1,000,000 records in t_source using this simple code

declare
begin
for i in 1..1000000 loop
insert into t_source values (i, 'emp'||i, sysdate);
end loop;
commit;
end;
/

After inserting the source data, I analyzed the table so that the optimizer has up-to-date statistics about the source table. That helps Oracle as much as possible to find the best way to get the data from the t_source table. This may not be necessary for a simple table such as this, but depending on the number of rows and columns it can help enormously.

Anyway here is the code to gather table stats.

EXEC DBMS_STATS.gather_table_stats('JOE', 'T_SOURCE');

Then let's set the SQL environment so that we can see SQL execution times. Run this from the SQL prompt:

SET TIMING ON;

Now I am going to populate t_dest using the usual FOR LOOP, which is a very common method used by programmers. The code basically fetches the data into a cursor, then reads each record from the cursor and inserts it into the t_dest table.

declare
cursor c1 is
select * from t_source;
begin
for src_rec in c1 loop
insert into t_dest values (src_rec.empno, src_rec.ename, src_rec.joindate);
end loop;
commit;
end;
/

Once the execution is complete you will get output something like this.

PL/SQL procedure successfully completed.

Elapsed: 00:02:40.12
SQL>

Depending on the speed of your machine and your database parameters, your run may take more or less time than shown here. We will use this timing to compare against the code that uses BULK COLLECT and FORALL.

Now run this code from your SQL*Plus session:

truncate table t_dest;

declare
cursor c1 is
select * from t_source;
TYPE src_tab IS TABLE OF t_source%ROWTYPE INDEX BY BINARY_INTEGER;
rec_tab src_tab;

begin
open c1;
fetch c1 BULK COLLECT INTO rec_tab limit 10000;
WHILE rec_tab.COUNT > 0 LOOP
FORALL i IN 1..rec_tab.COUNT
INSERT INTO t_dest (empno, ename, joindate) VALUES (rec_tab(i).empno,rec_tab(i).ename,rec_tab(i).joindate);
fetch c1 BULK COLLECT INTO rec_tab limit 10000;
END LOOP;
CLOSE c1;
commit;
end;
/

The output should be something like this:

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.13
SQL>

As you can see, the time to insert 1,000,000 records from source to destination has decreased from 00:02:40.12 to just 00:00:16.13. This is a huge performance gain, and when dealing with millions of records the benefit can be tremendous.
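As a quick sanity check after either version, the row counts of the two tables should match (1,000,000 each in this test case):

select count(*) from t_source;
select count(*) from t_dest;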