Blog

  • Listening Faster – Audiobooks at x2 speed

    Over the last few weeks, I have begun listening faster than ever before by playing audiobooks on Audible at x2+ speed; after each book is completed I increase the speed by 0.05 to find my maximum digestion speed. As it stands my current speed is x2.15, and I still have no problem taking in the information or understanding what's being said. I'll admit that when I first started at x2.0 on my listening faster journey it sounded a bit strange, but after 10-15 minutes my brain had adapted and was processing the information normally. But why, I hear you ask?

    Why start listening faster

    Consuming information and media is a fundamental part of life. Whether you're listening, watching or reading, we all do it; it's how we grow as people, through our learnings and life experiences. If you can consume and learn more, it can set you apart from someone who consumes only minimal amounts and learns very little at all.

    If I said to you that you could easily get through 5-6 audiobooks per month with only an hour a day of listening faster, would that not pique your interest? Over the course of a year, you would amass a completed list of 60-72 audiobooks!

    How? Audiobooks are recorded for all ages and capabilities; the default playback speed has to suit everybody, regardless of how fast they themselves can read or digest information.

    In a digital age of social media and instant gratification, the speed at which I can process information is much faster than the likes of my Dad. While he is tech-savvy enough, he hasn't been around technology and the Internet throughout his teens and adult life like I have; he's much more comfortable reading the paper than flicking through Reddit and news sites. Because of the bombardment of information I am used to, I can process information more quickly and efficiently than my Dad can, and no doubt the kids of today, the true digital natives, can do it better and listen faster than me.

    Finding your limit for listening faster is simple: when you read a book to yourself, what speed does your internal monologue go at? I am sure it's much faster than you can read aloud.

    The aim of listening to audiobooks at an increased speed is to find the point where your internal monologue can no longer keep up; once this point is reached, drop the speed one or two notches below it and continue listening. Being at x2.15 at the moment with no trouble, I can see myself reaching around x2.5 before I need to think about tapering off.

    Honestly, give it a go with audiobooks, podcasts, anything you just need to sit and listen to. It's weird at first, listening to someone speak in fast forward, but your mind soon adapts to the situation and handles it as the new normal, and once this happens you're well on your way to listening faster.

    If you liked this article on listening faster, please have a look at some of my others here.

  • Check Node Manager Script

    Hi Again,

    I just knocked up this NodeManager check script that you can run as a cron job to make sure the WebLogic Java NodeManager is running, and to email you if it is not. See below for the script.

    
    #!/bin/ksh
    #
    #########################################################
    # NAME: check_node_manager.sh                           #
    # AUTHOR: Paz                                           #
    # DESC: Check to make sure WebLogic Node Manager is     #
    #       running                                         #
    # DATE: 19/06/13                                        #
    # VERSION: 1.0                                          #
    # CHANGE LOG:                                           #
    #   AP 19/06/2013 Creation                              #
    #########################################################
    #
    #set -x

    . $HOME/.profile
    export SCRIPT_HOME='add your scripts home'

    #################### CHECK NODE MANAGER JAVA PROCESS IS RUNNING ####################
    cd $SCRIPT_HOME

    jps | grep -i NodeManager > Nodemgr_jps_status.log

    sleep 2

    nodemgr_jps_status=$(grep -ci 'NodeManager' Nodemgr_jps_status.log)

    if [ ${nodemgr_jps_status} -gt 0 ]
    then
        echo 'do nothing Node Manager alive'
    else
        # mailx reads the message body from stdin, so give it one
        echo 'Node Manager process not found by jps' | mailx -s 'NODEMANAGER DOWN' [email protected]
    fi

    exit
    
    

    The script can be changed if you use a script-based NodeManager: swap 'jps' for 'ps -ef', so the line would read:

    ps -ef | grep -i NodeManager | grep -v grep > Nodemgr_jps_status.log

    (The extra grep -v grep stops the grep process itself from matching, which would otherwise make the check always pass.)
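
    To schedule the check, a crontab entry along these lines works; the script path and ten-minute interval here are placeholder assumptions:

    # Check Node Manager every 10 minutes (path is hypothetical)
    0,10,20,30,40,50 * * * * /path/to/scripts/check_node_manager.sh >/dev/null 2>&1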

    Until next time

  • Adding Colour to Solaris 10

    I have spent hours trawling the net looking for a definitive guide on how to make colours (color if you are from the US) work in Solaris 10.

    By default colour isn't enabled, but once you know how, it is relatively easy to get it to work.

    The first step is to download a number of new packages from www.sunfreeware.com; the packages needed are as follows:

    coreutils-8.19-sol10-sparc-local
    gmp-4.2.1-sol10-sparc-local
    libiconv-1.14-sol10-sparc-local
    gcc-3.4.6-sol10-sparc-local
    libgcc-3.3-sol10-sparc-local
    libintl-3.4.0-sol10-sparc-local

    After all the packages are downloaded and added to your Solaris machine, it's time to install them using pkgadd.

    Get root on your machine and install the packages

    su

    pkgadd -d coreutils-8.19-sol10-sparc-local

    Do this for each of the downloaded packages.
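
    If you'd rather not run each pkgadd by hand, a short loop covers the lot; just a sketch, assuming the gunzipped package files sit in the current directory and you are already root:

    # Install each downloaded package in one pass
    for pkg in coreutils-8.19-sol10-sparc-local gmp-4.2.1-sol10-sparc-local \
               libiconv-1.14-sol10-sparc-local gcc-3.4.6-sol10-sparc-local \
               libgcc-3.3-sol10-sparc-local libintl-3.4.0-sol10-sparc-local
    do
        pkgadd -d $pkg all
    done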

    The next step is to test that colours are now working; run the following command to check:

    /usr/local/bin/ls --color    (this command assumes that you have installed the packages in the default location)

    Now that colour is working we just need to modify the profile so that it always works.

    Depending on your shell you may need to edit .profile for ksh or .bashrc for bash.

    Update PATH by adding the new location to it:

    export PATH=/usr/local/bin:$PATH    (prepending ensures the new ls is found before /usr/bin/ls)

    Now if you run:

    which ls

    it should return /usr/local/bin/ls

    Add an alias to your profile to append --color to the ls command:

    alias ls='/usr/local/bin/ls --color'

    Reload your profile and type ls; the listing should now come back in colour.

    The final section of this guide is to change the colour of your shell prompt; you can do so by adding the following command to your profile:

    export PS1="\e[0;35m\u@\h > \e[m"

    You can set the colour to any of the below by editing the number:

    Colour    Code
    Black     0;30
    Blue      0;34
    Green     0;32
    Cyan      0;36
    Red       0;31
    Purple    0;35
    Brown     0;33

    You should now have a fully coloured shell prompt.
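
    Putting it all together, the additions to your profile end up looking like the below; a sketch assuming the default /usr/local install location and a bash-style prompt (the PS1 escapes are bash syntax):

    # Additions to ~/.profile (ksh) or ~/.bashrc (bash)
    export PATH=/usr/local/bin:$PATH        # find the GNU coreutils ls first
    alias ls='/usr/local/bin/ls --color'    # colourised directory listings
    export PS1="\e[0;35m\u@\h > \e[m"       # purple user@host prompt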

  • Informatica Backup Script

    Hi,

    It's been a while since I posted. Today I wrote a quick script that takes a backup of the Informatica repository .rep file and stores it in a specified location. The script works on any Unix/Linux based system; if you create a cron job for how often you want to take the backup, the script will do the rest.

    The script needs some parameters to work effectively; if you add your settings, the rest of the script should work without an issue.

    
    #!/bin/ksh
    #
    # +############################################################+
    # # Author: Andrew Pazikas                                     #
    # # Date  : 05/06/13                                           #
    # # Desc  : Takes a backup of Informatica Metadata             #
    # # Change Log:                                                #
    # #   05/06/13 Andrew Pazikas Initial version                  #
    # +############################################################+
    #set -x

    # .profile is expected to set INFA_HOME and the pmrep environment
    . $HOME/.profile

    export BACKUP_DIR='where you want to store your backups'
    export INFA_REPO_NAME='Informatica Repository Name'
    export INFA_DOMAIN_NAME='Informatica Domain Name'
    export INFA_USER='Informatica User Name'
    export INFA_PASS='Informatica Password'
    export DAYS_TO_KEEP_BACKUP=30
    export LOG_HOME='location where you want to keep the backup log'

    ######### Take backup of Infa Repo ##############
    cd $INFA_HOME

    pmrep connect -r $INFA_REPO_NAME -d $INFA_DOMAIN_NAME -n $INFA_USER -x $INFA_PASS > $LOG_HOME/pmrep_conn.log

    pmrep backup -o $BACKUP_DIR/infa_repo_backup_$(date +%y%m%d).rep

    ######### Remove files older than DAYS_TO_KEEP_BACKUP #########
    cd $BACKUP_DIR
    find . -type f -mtime +$DAYS_TO_KEEP_BACKUP -exec rm {} \;

    cd $LOG_HOME
    find . -type f -mtime +$DAYS_TO_KEEP_BACKUP -exec rm {} \;
    exit
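    To take the backup automatically, a crontab entry like the below would do; the script path and the 02:00 schedule are placeholder assumptions:

    # Nightly repository backup at 02:00 (path is hypothetical)
    0 2 * * * /path/to/scripts/infa_backup.sh >/dev/null 2>&1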
    
    

    Until Next time

    Cheers

  • OBIEE .lok .DAT files

    I came across this issue today; it took me an age to work out what was wrong, then I knocked up a couple of find commands and boom, all sorted. Anyway, the below is more for self-reference in case I come across it again.

    
    <Error> <Store> <BEA-280061> <The persistent store "_WLS_AdminServer" could not be deployed: weblogic.store.PersistentStoreException: java.io.IOException: [Store:280021]There was an error while opening the file store file "_WLS_ADMINSERVER000000.DAT"
    
    

    Other errors that can be encountered are:

    
    There are 1 nested errors:
    weblogic.management.ManagementException: Unable to obtain lock on /u01/app/oracle/admin/soa_domains/aserver/soa_domain/servers/AdminServer/tmp/AdminServer.lok. Server may already be running at weblogic.management.internal.ServerLocks.getServerLock(ServerLocks.java:159)

    <Warning> <BEA-171520> <Could not obtain an exclusive lock for directory: /u01/app/oracle/admin/soa_domains/aserver/soa_domain/servers/AdminServer/data/ldap/ldapfiles. Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>
    
    

    or

    
    <Security> <BEA-090082> <Security initializing using security realm myrealm.>
    <Error> <Store> <BEA-280061> <The persistent store "_WLS_AdminServer" could not be deployed: weblogic.store.PersistentStoreException: [Store:280105]The persistent file store "_WLS_AdminServer" cannot open file _WLS_ADMINSERVER000000.DAT. weblogic.store.PersistentStoreException: [Store:280105]The persistent file store "_WLS_AdminServer" cannot open file _WLS_ADMINSERVER000000.DAT. at weblogic.store.io.file.Heap.open(Heap.java:325)
    
    

    To resolve the above issues, clear the .lok and .DAT files listed below. With the server fully stopped, run the following from $DOMAIN_HOME:

    
    find . -name "*.DAT" -print -exec rm {} \;
    find . -name "*.lok" -print -exec rm {} \;
    
    For the Admin Server:
    $DOMAIN_HOME/servers/<server name>/tmp/AdminServer.lok
    $DOMAIN_HOME/servers/<server name>/data/ldap/ldapfiles/EmbeddedLDAP.lok
    $DOMAIN_HOME/servers/<server name>/data/store/default/_WLS_ADMINSERVER000000.DAT
    $DOMAIN_HOME/servers/<server name>/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
    
    For Managed Servers:

    $DOMAIN_HOME/servers/<server name>/tmp/<server name>.lok
    $DOMAIN_HOME/servers/<server name>/data/ldap/ldapfiles/EmbeddedLDAP.lok
    $DOMAIN_HOME/servers/<server name>/data/store/default/_WLS_<SERVERNAME>000000.DAT
    $DOMAIN_HOME/servers/<server name>/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
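
    For next time, the whole clean-up wraps up into a few lines of ksh; a sketch assuming DOMAIN_HOME is set and the server in question is fully stopped:

    #!/bin/ksh
    # Clear stale lock and persistent store files for one stopped server
    SERVER=AdminServer                       # hypothetical server name
    cd $DOMAIN_HOME/servers/$SERVER || exit 1
    find . \( -name "*.lok" -o -name "*.DAT" \) -print -exec rm {} \;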
    
    
  • DAC Restart Script

    Hi,

    I just thought I would share the script I wrote to restart your DAC server in the event of a crash. This can be helpful during the night, as there is no need for a manual restart.

    I used a cron job to check whether the process is running every 5 minutes, as shown after the script below.

    
    #!/bin/ksh

    #########################################################
    # NAME: email_dac_restart.sh                            #
    # AUTHOR:                                               #
    # DESC: Makes sure DAC is running and restarts if not   #
    # VERSION: 18/1/13                                      #
    # CHANGE LOG:                                           #
    #########################################################

    . ~/.profile

    # grep -v grep stops the check matching its own grep process
    if ps -ef | grep '/usr/jdk/instances/jdk1.6.0/bin/sparcv9/java -server -Xmn500m -Xms2048m -Xmx204' | grep -v grep > /dev/null
    then
        echo 'do nothing'
        ## tail -1 $DAC_HOME/nohup.out | mailx -s 'DAC Running TST1 ' [email protected]
    else
        cd $DAC_HOME
        nohup $DAC_HOME/startserver.sh &
        tail -200 $DAC_HOME/nohup.out | mailx -s 'DAC Restarted ' [email protected]
    fi

    exit
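    The cron entry I mentioned looks something like the below; the script path is a placeholder, and the comma-separated form is used since classic Solaris cron does not understand */5:

    # Check DAC every 5 minutes (path is hypothetical)
    0,5,10,15,20,25,30,35,40,45,50,55 * * * * /path/to/scripts/email_dac_restart.sh >/dev/null 2>&1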
    
    

    Any improvements or comments are welcome

  • OBIEE Performance Tuning Part 3

    For the final part of this guide we will take a look at some of the changes that can be made to the database to achieve better performance. The parameters I recommend changing are listed below, with a short sketch of how to apply them after the list:
    
    db_block_checksum = TRUE             -- database writer process will calculate a checksum
    db_file_multiblock_read_count = 0
    dml_locks = 1000                     -- minimise lock conversions for better query/read performance
    job_queue_processes = 2              -- limits concurrent dbms_scheduler/dbms_job jobs, saving database resources
    log_buffer = 10485760                -- larger values reduce redo log file I/O
    log_checkpoint_interval = 100000
    log_checkpoint_timeout = 3600
    open_cursors = 1000
    undo_retention = 90000
    Database resource plan = INTERNAL/OFF
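
    As a sketch of how these might be applied (assuming the instance uses an spfile and you have SYSDBA access; the parameter names are from the list above, everything else is illustrative): dynamic parameters take effect immediately, while static ones such as dml_locks and log_buffer are staged in the spfile and need a restart.

    #!/bin/ksh
    # Apply a couple of the parameters above via sqlplus.
    # Dynamic parameters take effect immediately (SCOPE=BOTH);
    # static ones are staged for the next restart (SCOPE=SPFILE).
    sqlplus -s / as sysdba <<SQL
    ALTER SYSTEM SET open_cursors=1000 SCOPE=BOTH;
    ALTER SYSTEM SET undo_retention=90000 SCOPE=BOTH;
    ALTER SYSTEM SET dml_locks=1000 SCOPE=SPFILE;
    ALTER SYSTEM SET log_buffer=10485760 SCOPE=SPFILE;
    SQL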
    
    

    Implementing these changes reduced my report timings further, showing a clear increase in performance.

    Results:

    Report    instanceconfig.xml changes    Database changes
    1         1m44s                         43s
    2         2m56s                         1m34s
    3         5m15s                         3m19s
    4         2m42s                         1m5s

    This concludes my 3-step guide to OBIEE performance tuning. In future I might write a piece on tuning the reports themselves, as the above results show we are still above the 1 minute mark for some reports; making sure you write efficient SQL queries, and use indexes, partitioning and materialized views during development, will ensure you can reduce these times further.

    Until next time.

  • OBIEE Performance Tuning Part 2

    For the second part of my OBIEE tuning guide we will look at the instanceconfig.xml file and the changes that can be made there. As stated in my previous post, I don't believe in using the cache when performance tuning as it can lead to false results, so make sure the cache is turned off in OBIEE Enterprise Manager. The settings below can be applied by editing the instanceconfig.xml file.

    
    <Table>
        <MaxCells>200000</MaxCells>
        <MaxVisibleColumns>1000</MaxVisibleColumns>
        <MaxVisiblePages>1000</MaxVisiblePages>
        <MaxVisibleRows>100000</MaxVisibleRows>
        <MaxVisibleSections>1000</MaxVisibleSections>
        <DefaultRowsDisplayed>25</DefaultRowsDisplayed>
        <DefaultRowsDisplayedInDownload>2500</DefaultRowsDisplayedInDownload>
    </Table>

    <Pivot>
        <MaxCells>200000</MaxCells>
        <MaxVisibleColumns>1000</MaxVisibleColumns>
        <MaxVisiblePages>1000</MaxVisiblePages>
        <MaxVisibleRows>100000</MaxVisibleRows>
        <MaxVisibleSections>1000</MaxVisibleSections>
        <DefaultRowsDisplayed>25</DefaultRowsDisplayed>
        <DefaultRowsDisplayedInDownload>2500</DefaultRowsDisplayedInDownload>
    </Pivot>

    <ThreadPoolDefaults>
        <ChartThreadPool>
            <MaxQueue>2048</MaxQueue>
            <MaxThreads>**</MaxThreads> <!-- set MaxThreads to no. of cores x 8 -->
        </ChartThreadPool>
    </ThreadPoolDefaults>

    <Cursors>
        <NewCursorWaitSeconds>10</NewCursorWaitSeconds>
    </Cursors>

    <Catalog>
        <LockStaleSecsSoft>14400</LockStaleSecsSoft>
        <LockStaleSecsHard>14400</LockStaleSecsHard>
        <HashUserHomeDirectories>3</HashUserHomeDirectories>
        <UpgradeAndExit>false</UpgradeAndExit>
    </Catalog>

    <BIEEHomeLists>
        <Enabled>false</Enabled>
    </BIEEHomeLists>
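
    With instanceconfig.xml saved, the Presentation Services need a restart to pick up the changes. On 11g that is something like the below; the ORACLE_INSTANCE path and component name are assumptions that vary per install:

    # Restart OBIEE Presentation Services via OPMN
    cd $ORACLE_INSTANCE/bin
    ./opmnctl restartproc ias-component=coreapplication_obips1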
    
    

    With the above changes made, I restarted all services, including the database, to ensure an accurate report running time was recorded.

    Results:

    Report    NQSConfig.ini changes    instanceconfig.xml changes
    1         1m54s                    1m44s
    2         2m46s                    2m56s
    3         5m13s                    5m15s
    4         2m34s                    2m42s

    It looks like the reports performed slightly worse than before. Even so, I left the settings in, as per my original post: only a difference of 10s or more either way counts as significant.

    Until next time, when I will show the changes made to the Data Warehouse itself.