
Weekly script #3 unix_usr.sql

I use this one all the time. You have a high-usage Unix PID that belongs to an Oracle process. The script takes the PID you supply and looks up the related SQL and database user details to help you figure out the root cause.


set serveroutput on size 50000
set echo off feed off veri off
accept 1 prompt 'Enter Unix process id: '

DECLARE
  v_sid number;
  s     sys.v_$session%ROWTYPE;
  p     sys.v_$process%ROWTYPE;
BEGIN
  -- Map the OS process id to a session: it may match either the server
  -- (shadow) process spid or the client process id.
  begin
    select sid into v_sid
      from sys.v_$process p, sys.v_$session s
     where p.addr = s.paddr
       and (p.spid = &&1
        or s.process = '&&1');
  exception
    when no_data_found then
      dbms_output.put_line('Unable to find process id &&1!!!');
      return;
    when others then
      dbms_output.put_line(sqlerrm);
      return;
  end;

  select * into s from sys.v_$session where sid = v_sid;
  select * into p from sys.v_$process where addr = s.paddr;

  -- Session and process overview
  dbms_output.put_line('=====================================================================');
  dbms_output.put_line('SID/Serial  : '|| s.sid||','||s.serial#);
  dbms_output.put_line('Foreground  : '|| 'PID: '||s.process||' - '||s.program);
  dbms_output.put_line('Shadow      : '|| 'PID: '||p.spid||' - '||p.program);
  dbms_output.put_line('Terminal    : '|| s.terminal || '/ ' || p.terminal);
  dbms_output.put_line('OS User     : '|| s.osuser||' on '||s.machine);
  dbms_output.put_line('Ora User    : '|| s.username);
  dbms_output.put_line('Status Flags: '|| s.status||' '||s.server||' '||s.type);
  dbms_output.put_line('Tran Active : '|| nvl(s.taddr, 'NONE'));
  dbms_output.put_line('Login Time  : '|| to_char(s.logon_time, 'Dy HH24:MI:SS'));
  dbms_output.put_line('Last Call   : '|| to_char(sysdate-(s.last_call_et/60/60/24), 'Dy HH24:MI:SS') || ' - ' || to_char(s.last_call_et/60, '990.0') || ' min');
  dbms_output.put_line('Lock/ Latch : '|| nvl(s.lockwait, 'NONE')||'/ '||nvl(p.latchwait, 'NONE'));
  dbms_output.put_line('Latch Spin  : '|| nvl(p.latchspin, 'NONE'));

  dbms_output.put_line('Current SQL statement:');
  for c1 in ( select * from sys.v_$sqltext
               where hash_value = s.sql_hash_value order by piece ) loop
    dbms_output.put_line(chr(9)||c1.sql_text);
  end loop;

  dbms_output.put_line('Previous SQL statement:');
  for c1 in ( select * from sys.v_$sqltext
               where hash_value = s.prev_hash_value order by piece ) loop
    dbms_output.put_line(chr(9)||c1.sql_text);
  end loop;

  dbms_output.put_line('Session Waits:');
  for c1 in ( select * from sys.v_$session_wait where sid = s.sid ) loop
    dbms_output.put_line(chr(9)||c1.state||': '||c1.event);
  end loop;

  -- dbms_output.put_line('Connect Info:');
  -- for c1 in ( select * from sys.v_$session_connect_info where sid = s.sid ) loop
  --   dbms_output.put_line(chr(9)||': '||c1.network_service_banner);
  -- end loop;

  -- Translate lock type, held mode and requested mode into readable labels.
  dbms_output.put_line('Locks:');
  for c1 in ( select /*+ ordered */
                     decode(l.type,
                            -- Long locks
                            'TM', 'DML/DATA ENQ', 'TX', 'TRANSAC ENQ ',
                            'UL', 'PLS USR LOCK',
                            -- Short locks
                            'BL', 'BUF HASH TBL', 'CF', 'CONTROL FILE',
                            'CI', 'CROSS INST F', 'DF', 'DATA FILE   ',
                            'CU', 'CURSOR BIND ',
                            'DL', 'DIRECT LOAD ', 'DM', 'MOUNT/STRTUP',
                            'DR', 'RECO LOCK   ', 'DX', 'DISTRIB TRAN',
                            'FS', 'FILE SET    ', 'IN', 'INSTANCE NUM',
                            'FI', 'SGA OPN FILE',
                            'IR', 'INSTCE RECVR', 'IS', 'GET STATE   ',
                            'IV', 'LIBCACHE INV', 'KK', 'LOG SW KICK ',
                            'LS', 'LOG SWITCH  ',
                            'MM', 'MOUNT DEF   ', 'MR', 'MEDIA RECVRY',
                            'PF', 'PWFILE ENQ  ', 'PR', 'PROCESS STRT',
                            'RT', 'REDO THREAD ', 'SC', 'SCN ENQ     ',
                            'RW', 'ROW WAIT    ',
                            'SM', 'SMON LOCK   ', 'SN', 'SEQNO INSTCE',
                            'SQ', 'SEQNO ENQ   ', 'ST', 'SPACE TRANSC',
                            'SV', 'SEQNO VALUE ', 'TA', 'GENERIC ENQ ',
                            'TD', 'DDL ENQ     ', 'TE', 'EXTEND SEG  ',
                            'TS', 'TEMP SEGMENT', 'TT', 'TEMP TABLE  ',
                            'UN', 'USER NAME   ', 'WL', 'WRITE REDO  ',
                            'TYPE='||l.type) type,
                     decode(l.lmode, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX',
                            4, 'S', 5, 'RSX', 6, 'X',
                            to_char(l.lmode) ) lmode,
                     decode(l.request, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX',
                            4, 'S', 5, 'RSX', 6, 'X',
                            to_char(l.request) ) lrequest,
                     decode(l.type, 'MR', o.name,
                            'TD', o.name,
                            'TM', o.name,
                            'RW', 'FILE#='||substr(l.id1,1,3)||
                                  ' BLOCK#='||substr(l.id1,4,5)||' ROW='||l.id2,
                            'TX', 'RS+SLOT#'||l.id1||' WRP#'||l.id2,
                            'WL', 'REDO LOG FILE#='||l.id1,
                            'RT', 'THREAD='||l.id1,
                            'TS', decode(l.id2, 0, 'ENQUEUE', 'NEW BLOCK ALLOCATION'),
                            'ID1='||l.id1||' ID2='||l.id2) objname
                from sys.v_$lock l, sys.obj$ o
               where sid = s.sid
                 and l.id1 = o.obj#(+) ) loop
    dbms_output.put_line(chr(9)||c1.type||' H: '||c1.lmode||' R: '||c1.lrequest||' - '||c1.objname);
  end loop;

  dbms_output.put_line('=====================================================================');

END;
/

undef 1
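
A quick usage sketch - the process id here is made up; you would normally grab it from top or ps first:

SQL> @unix_usr.sql
Enter Unix process id: 12345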


Weekly script #2 pid_from_sid.sql

A fairly easy script today that isn't really hard to work out for yourself, but I find it useful to have kicking around for when you find a high-resource Oracle server process and you want to identify the offending SQL.


set pages 1000
set lines 200
col username for a10

select b.status, a.inst_id, a.pid "Ora PID", a.spid "Server PID",
       a.latchwait, b.program, b.username, b.osuser, b.machine,
       b.terminal, a.terminal, b.process "Client Process"
  from gv$process a, gv$session b
 where a.addr = b.paddr
   and b.sid = &SID
/

See you next week!

Le Paz

A better Oracle DAC startserver.sh

Just want to share my Oracle DAC 10g startserver.sh.

startserver.sh is already provided as part of the DAC 10g install. I have added a couple of extra Java parameters to allow a larger heap size and a different garbage collection method.

. ./config.sh

$JAVA -server -Xms2048m -Xmx2048m -XX:+UseParNewGC -XX:ErrorFile='./hs_err_pid.log' -cp $DACCLASSPATH com.siebel.etl.net.QServer

-Xms and -Xmx give the DAC server application 2GB of heap to play with.

-XX:+UseParNewGC forces the JVM to use the parallel young-generation collector (the one designed to pair with the concurrent mark-sweep GC).

-XX:ErrorFile – should your DAC server crash with a JVM error, the stack trace/dump will be written to the specified file.

Until I feel like posting again 😉

Perfect Datawarehouse

This post outlines the parameters and considerations needed to create a basic data warehouse for any need.

With any data warehouse the aim is clear: to gather and process large amounts of data in the shortest space of time. If you can achieve this, the goal of a data warehouse has been reached.

Note that these parameters are not set in stone and some may need to be tweaked to fit your respective environment.

It is key to plan your data warehouse appropriately from the word go. During these initial steps, CPU and I/O are key: failing to gauge these correctly will result in poor performance later on when you load data or look to scale out your DW. Tuning and changing parameters can only do so much when the underlying problem is under-performing hardware.

While no two environments are the same, a good starting point, as per Oracle's recommendations, is 200 MB/s of I/O throughput per CPU core (a 16-core server, for example, would need around 3.2 GB/s of sustained throughput). The link between server and storage should be fibre channel to accommodate this.

db_block_checksum – TRUE
db_file_multiblock_read_count – 0
db_block_size – 32k
dml_locks – 1000
job_queue_processes – 2
log_buffer – 10485760 (10MB)
log_checkpoint_interval – 100000
log_checkpoint_timeout – 3600
open_cursors – 1000
processes – 500 (max number of concurrent users x2)
sessions – 1000 (number of processes x2)
session_cached_cursors – 500
shared_pool_reserved_size – 40MB
shared_pool_size – 400MB, or 0 for Oracle to manage
sga_target – 2G
sga_max_size – 2200MB
undo_retention – 90000
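
If you run on an spfile, these can be applied with ALTER SYSTEM. A couple of hedged examples (values as above; adjust to your environment):

alter system set log_buffer = 10485760 scope=spfile;   -- static: takes effect after a restart
alter system set sga_max_size = 2200M scope=spfile;    -- static: takes effect after a restart
alter system set undo_retention = 90000 scope=both;    -- dynamic: takes effect immediately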

Statistics gathering is a key part of a data warehouse. Gathering statistics allows the optimizer to make educated decisions on the best execution plan, thus reducing query run time. Tuning a data warehouse with a star schema enabled is relatively simple; without diving too much into every option, a simple configuration would be to create bitmap indexes on each foreign key column in the fact table or tables (see the sketch after the parameter list below).

dbms_stats.gather_schema_stats – GATHER AUTO
star_transformation_enabled – TRUE (if using a star schema)
optimizer_mode – ALL_ROWS
query_rewrite_enabled – TRUE
query_rewrite_integrity – TRUSTED
hash_join_enabled – TRUE
always_anti_join – HASH
always_semi_join – HASH
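
As a sketch of the bitmap-index approach (the SALES fact table, its CUST_ID foreign key and the DWH schema are hypothetical names):

create bitmap index sales_cust_bix on sales(cust_id);

begin
  dbms_stats.gather_schema_stats(
    ownname => 'DWH',            -- hypothetical schema name
    options => 'GATHER AUTO');   -- let Oracle decide what needs (re)gathering
end;
/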

In order to take full advantage of the hardware provided I would recommend using parallel execution of queries.

In regards to parallel execution a good starting point is below:

Objects smaller than 200 MB should not use any parallelism

Objects between 200 MB and 5 GB should use a DOP of 4

Objects beyond 5 GB should use a DOP of 32

These sizings should be changed to fit your hardware's restrictions; a minimal sketch of applying them is below.
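
A minimal sketch of setting those DOP bands on some hypothetical tables:

alter table dim_region noparallel;    -- under 200 MB: no parallelism
alter table sales parallel 4;         -- between 200 MB and 5 GB: DOP 4
alter table web_events parallel 32;   -- beyond 5 GB: DOP 32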

parallel_automatic_tuning = true

parallel_threads_per_cpu = number of cores per CPU x2
cpu_count = xx – make sure this is correct (a quick check is below)
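
To double-check what the instance actually sees, SQL*Plus can show both values:

show parameter cpu_count
show parameter parallel_threads_per_cpu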

Until next time.

OBIEE Performance Tuning Part 3

For the final part of this guide, we will take a look at some of the changes that can be made to the database to achieve better performance. The parameters I recommend changing are listed below:
db_block_checksum – TRUE (the database writer process will calculate a checksum on each block it writes)
db_file_multiblock_read_count – 0
dml_locks – 1000 (minimises lock conversions and achieves better query/read performance)
job_queue_processes – 2 (limits the total number of dbms_scheduler and dbms_job jobs that can run at a given time, saving database resources)
log_buffer – 10485760 (larger values reduce redo log file I/O)
log_checkpoint_interval – 100000
log_checkpoint_timeout – 3600
open_cursors – 1000
undo_retention – 90000
Database resource plan – Internal/off (see the sketch below)
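
Switching the resource plan off amounts to the following (a sketch, assuming an spfile):

alter system set resource_manager_plan = '' scope=both;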

Implementing these changes reduced my report timings further, showing a clear increase in performance.

Results:

Report   After instanceconfig.xml changes   After database changes
1.       1m44s                              43s
2.       2m56s                              1m34s
3.       5m15s                              3m19s
4.       2m42s                              1m5s

This concludes my three-step guide to OBIEE performance tuning. In future I might write a piece on tuning the reports themselves: as the results above show, some reports are still over the one-minute mark, and making sure you write efficient SQL queries, using indexes, partitioning and materialized views during development, will ensure you can reduce these times further.

Until next time.

OBIEE Performance Tuning Part 1

This is part one of my OBIEE tuning guide. In this post we will look at some of the parameters that should be set in NQSConfig.ini in order to achieve maximum performance. First things first: to run a valid test, I ran the same report each time, and after applying changes I rebooted all database and OBIEE services. To find the time of completion I used the Manage Sessions section of the Analytics console (Administration > Manage Sessions). With regards to what counts as a good or bad result: if it's not 10 seconds or more faster, I would say the change hasn't had an effect on performance.

Note that all SQL should be tuned, and it is said that OBIEE should be able to return a report within 1-10 seconds. My instance is not at that stage, but you can still see a clear performance increase.

For my performance testing, caching was turned off, as it would give invalid results; when performance tweaking, caching should only be turned on in the final stage.

The changes below were made to NQSConfig.ini:


MAX_SESSION_LIMIT = 5000
MAX_REQUEST_PER_SESSION_LIMIT = 5000
SERVER_THREAD_RANGE = 40-260
DB_GATEWAY_THREAD_RANGE = 40-520
INIT_BLOCK_CACHE_ENTRIES = 5000 # default is 20

The settings were saved, database and all BI services restarted.

The report I was running contained 4 dashboards; they are listed as 1-4 below.

Report   Original timings   After NQSConfig.ini changes
1.       2m10s              1m54s
2.       3m5s               2m46s
3.       5m16s              5m13s
4.       2m41s              2m34s

As you can see from the above, there have been slight improvements across the reports, but one dashboard still runs in pretty much the same time.

My next post will be on the changes made to instanceconfig.xml, and then finally the changes I made to the database.