Category: OBIEE

A better Oracle DAC startserver.sh

Just want to share my Oracle DAC 10g startserver.sh.

startserver.sh is already provided as part of the DAC 10g install. I have added a couple of extra Java parameters to allow a larger heap size and to pick the garbage collection method.

. ./config.sh

$JAVA -server -Xms2048m -Xmx2048m -XX:+UseParNewGC -XX:ErrorFile='./hs_err_pid.log' -cp $DACCLASSPATH com.siebel.etl.net.QServer

-Xms and -Xmx give the DAC server application 2 GB of heap to play with; setting them to the same value stops the JVM resizing the heap at runtime.

-XX:+UseParNewGC forces the JVM to use the parallel young-generation collector, which is designed to work alongside the concurrent (CMS) old-generation collector.

-XX:ErrorFile – should your DAC server crash with a JVM error, the stack trace/dump will be written to the specified file.
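Once the DAC server is back up, a quick sanity check that the new flags took effect, assuming the JDK's jps tool is on your PATH:

jps -v | grep QServer

You should see -Xms2048m, -Xmx2048m and -XX:+UseParNewGC listed against the QServer process.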

Until I feel like posting again 😉

Fixing ORA-00838: Specified value of MEMORY_TARGET is too small

I can never remember how to do this, so it's getting dumped here.

SQL> startup
ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 1172M
SQL> create pfile='/tmp/pfile.bkp' from spfile;
File created.

$ vi /tmp/pfile.bkp

Raise memory_target to at least the value the error asked for:

*.memory_target=2048m

Start SQL*Plus and import the pfile settings back into the spfile:

SQL> create spfile from pfile='/tmp/pfile.bkp';
File created.
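With the spfile rebuilt, start the instance again:

SQL> startup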

Ye' should be golden!

😀

Identify your OBIEE users by setting Client ID in Oracle connection

Very useful

rmoff

You get a call from your friendly DBA. He says the production database is up the spout, and it's "that bee eye thingumy causing it". What do you do now? All you've got to go on is a program name in the Oracle session tables of "nqsserver@MYSERVER (TNS V1-V3)" and the SQL the DBA sent you, which, if you're lucky, will look as presentable as this:

The username against the SQL is the generic User ID that you had created for connections to the database from OBIEE.

So you turn to Usage Tracking and discover that when that particular SQL ran there were twenty users all running reports. And not only that, but the SQL that’s recorded is the Logical SQL, not the physical SQL.

So how do you identify the originating report that spawned the SQL that broke the database that upset the DBA that phoned you? …

With…


Use scp to download a file from a server

Putting this command up here because I always forget it.

You can easily grab a file off a server from the command line using secure copy:


scp user@domain.com:/home/user/fileyouwant.txt fileyouwant.txt

This will download the file from your server to your local computer.
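A couple of variations I use just as often; -P and -r are standard scp flags, and the paths here are only placeholders:

scp -P 2222 user@domain.com:/home/user/fileyouwant.txt fileyouwant.txt
scp -r user@domain.com:/home/user/somedir ./somedir

The first pulls the file over a non-standard SSH port; the second copies a whole directory recursively.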

Perfect Datawarehouse

This post outlines the parameters and considerations needed to create a basic data warehouse.

With any data warehouse the aim is clear: to gather and process large amounts of data in the shortest space of time. If you can achieve this, the goal of a data warehouse has been reached.

Note that these parameters are not set in stone and some may need to be tweaked to fit your respective environment.

It is key to plan your data warehouse properly from the word go. During these initial steps CPU and I/O are key; failing to gauge these correctly will result in poor performance later on, when you load data or look to scale out your DW. Tuning and changing parameters can only do so much when the real problem is under-performing hardware.

While no two environments are the same, a good starting point, as per Oracle's recommendations, is 200 MB/s of I/O throughput per CPU core. The link between the servers and the storage should be Fibre Channel to accommodate this.
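To put that in perspective, a server with 8 CPU cores would need roughly 8 x 200 MB/s = 1.6 GB/s of sustained I/O bandwidth from the storage layer.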

A reasonable starting set of initialisation parameters:

db_block_checksum – TRUE
db_file_multiblock_read_count – 0
db_block_size – 32k
dml_locks – 1000
job_queue_processes – 2
log_buffer – 10485760 (10 MB)
log_checkpoint_interval – 100000
log_checkpoint_timeout – 3600
open_cursors – 1000
processes – 500 (max number of concurrent users x 2)
sessions – 1000 (number of processes x 2)
session_cached_cursors – 500
shared_pool_reserved_size – 40 MB
shared_pool_size – 400 MB (or 0 for Oracle to manage)
sga_target – 2 GB
sga_max_size – 2200 MB
undo_retention – 90000
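As a sketch of how these get applied (the parameter names and values come from the list above; SCOPE=SPFILE defers the change until the next restart):

SQL> ALTER SYSTEM SET sga_target=2G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET open_cursors=1000 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP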

Statistics gathering is a key part of a data warehouse. Gathering statistics allows the optimiser to make educated decisions on the best explain plan, thus reducing query run time. Tuning a data warehouse with a star schema enabled is relatively simple; without diving too much into every option, a simple configuration would be to create bitmap indexes on each foreign key column in the fact table or tables, as sketched below.
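A minimal sketch of those bitmap indexes, assuming a fact table sales_fact with foreign keys to date and product dimensions (all names are placeholders):

CREATE BITMAP INDEX sales_fact_date_bix ON sales_fact (date_key);
CREATE BITMAP INDEX sales_fact_prod_bix ON sales_fact (product_key);

(If the fact table is partitioned, add the LOCAL keyword so each index is equipartitioned with the table.)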

The relevant statistics and optimiser settings:

dbms_stats.gather_schema_stats – GATHER AUTO
star_transformation_enabled – TRUE (if using a star schema)
optimizer_mode – ALL_ROWS
query_rewrite_enabled – TRUE
query_rewrite_integrity – TRUSTED
hash_join_enabled – TRUE
always_anti_join – HASH
always_semi_join – HASH
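Written out, the dbms_stats call above might look like this (the schema name DWH is a placeholder):

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'DWH', options => 'GATHER AUTO');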

In order to take full advantage of the hardware provided I would recommend using parallel execution of queries.

With regard to parallel execution, a good starting point is below:

Objects smaller than 200 MB should not use any parallelism.

Objects between 200 MB and 5 GB should use a DOP of 4.

Objects beyond 5 GB should use a DOP of 32.

These sizings should be changed to fit your hardware's restrictions; see the example after the parameter list below.

parallel_automatic_tuning – TRUE
parallel_threads_per_cpu – number of cores per CPU x 2
cpu_count – xx (make sure this is correct for your server)
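Putting the DOP guidance above into practice, a sketch (the table and alias names are placeholders):

SQL> ALTER TABLE sales_fact PARALLEL 32;
SQL> SELECT /*+ PARALLEL(s, 4) */ COUNT(*) FROM mid_sized_fact s;

The first statement sets a default DOP on a large object; the hint overrides the DOP for a single query.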

Until next time.

 

DAC automationUtils.sh Script

Hi Again,

Today I noticed that DAC doesn't have a method to export DAC repository metadata using the Linux/Unix shell.

DAC is a Windows tool that can also run on Linux. Unfortunately for us Linux users this means that not all of the command line options are available, as they come in the form of .bat files that won't work in our native shell.

I was looking for a way to export DAC metadata using the command line. I was fully aware that this task could be achieved using the DAC Client GUI, but I wanted something that could be run on a schedule to take backups. After some reading it was revealed that this was possible on Windows but not Linux. So I converted automationUtils.bat to an automationUtils.sh that runs on Linux and Solaris, giving you all the functionality. See the completed script below.

#!/bin/ksh
#set -x
#
# Author - Andrew Pazikas
# Date - 10-07-13
# Desc - Originally a Windows tool (automationUtils.bat) converted to run as a shell script on Linux
#

. ./dac_env.sh

# 64-bit SPARC JVM on Solaris - adjust JAVA_HOME/JAVA for your platform
JAVA_HOME=/usr/jdk/instances/jdk1.6.0;export JAVA_HOME;
JAVA=${JAVA_HOME}/bin/sparcv9/java;export JAVA

export SQLSERVERLIB=./lib/msbase.jar:./lib/mssqlserver.jar:./lib/msutil.jar:./lib/sqljdbc.jar
export ORACLELIB=./lib/ojdbc6.jar:./lib/ojdbc5.jar:./lib/ojdbc14.jar
export DB2LIB=./lib/db2java.zip
export TERADATALIB=./lib/teradata.jar:./lib/terajdbc4.jar:./lib/log4j.jar:./lib/tdgssjava.jar:./lib/tdgssconfig.jar:./lib
#export SQLSERVERLIB ; export ORACLELIB ; export DB2LIB ; export TERADATALIB

export DBLIBS=${SQLSERVERLIB}:${ORACLELIB}:${DB2LIB}:${TERADATALIB}
export DACLIB=./DAWSystem.jar:.:
export DACCLASSPATH=${DBLIBS}:${DACLIB}
#export DBLIBS ; export DACLIB ; export DACCLASSPATH;
$JAVA -Xmx1024m -cp $DACCLASSPATH com.siebel.etl.functional.AutomationUtils "$1" "$2" "$3"
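To run it on a schedule, a crontab entry along these lines does the job (the install path and log file are placeholders, and the three arguments are whatever your DAC version's automationUtils expects, as per the original .bat usage):

00 2 * * * cd /u01/app/dac && ./automationUtils.sh <arg1> <arg2> <arg3> >> /var/log/dac_export.log 2>&1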

Check Node Manager Script

Hi Again,

I just knocked up this Node Manager check script that you can run as a cron job to make sure the WebLogic Java Node Manager is running and, if not, email you. See below for the script.


#!/bin/ksh
#
#########################################################
# NAME: check_node_manager.sh                           #
# AUTHOR: Paz                                           #
# DESC: Check to make sure Weblogic Node Manager is     #
#       running                                         #
# DATE: 19/06/13                                        #
# VERSION: 1.0                                          #
# CHANGE LOG:                                           #
# AP 19/06/2013 Creation                                #
#########################################################
#
#set -x

. $HOME/.profile
export SCRIPT_HOME='add your scripts home'

#################### CHECK NODE MANAGER JAVA PROCESS IS RUNNING ####################
cd $SCRIPT_HOME

jps | grep -i NodeManager > Nodemgr_jps_status.log

sleep 2

# Count the matching lines; anything greater than zero means Node Manager is up
nodemgr_jps_status=$(grep -ic 'NodeManager' Nodemgr_jps_status.log)

if [ ${nodemgr_jps_status} -gt 0 ]
then
  echo 'do nothing Node Manager Alive'
else
  # mailx reads the message body from stdin; without one the script
  # would sit waiting for input when run from cron
  echo "Node Manager is down on $(hostname)" | mailx -s 'NODEMANAGER DOWN' your@email.com
fi

exit

The script can be adapted if you use a script-based Node Manager: change 'jps' to 'ps -ef', so the line would read:

ps -ef | grep -i '[N]odeManager' > Nodemgr_jps_status.log

(The [N] bracket trick stops grep from matching its own command line in the ps output.)
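To schedule the check, a crontab entry like this runs it every five minutes (the script path is a placeholder):

*/5 * * * * /path/to/scripts/check_node_manager.sh >/dev/null 2>&1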

Until next time