Installing xrdp on CentOS 6.4

I had been struggling to install xrdp on CentOS 6.4.
Here is what finally helped.

Add http_caching=packages to /etc/yum.conf
yum clean all
rm -fr /var/cache/yum/*
rpm -ivh epel-release-latest-6.noarch.rpm
sudo yum upgrade ca-certificates --disablerepo=epel
yum install xrdp
yum install tigervnc
yum install tigervnc-server
chkconfig --levels 35 xrdp on
service xrdp start

Many thanks to the Internet ^)

Making myself a backdoor through the corporate firewall. Again.

Recently I’ve been fighting with corporate internet access policy again.
After recent updates, Firefox and Pale Moon (my browser of choice) stopped working with the ForceBindIP utility. Period.
So I’ve started using a VMware CentOS virtual machine with a virtual network adapter bound to the physical Wi-Fi adapter connected to a cell phone hotspot. Not as convenient, but at least it works.
ForceBindIP is still useful for connecting to blocked servers with some utilities, say WinSCP.

Python: complications when applying a regexp to a UTF-16 file, and an example of how to solve them

This simple Python program helps to quickly extract distinct Essbase errors from a given log file. It demonstrates the usage of ArgumentParser, dictionaries and sets, reading a UTF-16 file (with BOM), and applying a Unicode regex to its contents. Plus, it parses itself to find all Essbase error codes. The actual file is pretty large, because it contains all the error codes in its final section, but it was shortened for this blog.
For me the hard part was to actually read the UTF-16 file and not to forget adding `re.UNICODE` to the `re.compile` call.

# -*- coding: utf-8 -*-

import sys
import os
import io
import re
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument( "log_file"
	, help = "Error or log file" )
parser.add_argument( "pattern"
	, help = "Pattern" )

args = parser.parse_args()

if not args.log_file or not args.pattern:
	print "Not all parameters set"
	sys.exit( 0 )

# the script parses its own tail, where the Essbase error codes are listed
messageFinder = re.compile( r"%%(\d+?)\s(.*)$" )
fmsg = open( __file__ )
msg = {}
msgCount = 0
for line in fmsg:
	match = line )
	if match is not None:
		msg[ 1 ) ] = 2 )
		msgCount = msgCount + 1
print "Parsed " + str( msgCount ) + " messages"

finder = re.compile( u"" + args.pattern, re.UNICODE )
distinct = set()

lineCount = 0
if args.log_file and os.path.isfile( args.log_file ):
	# 'utf-16' honours the BOM and yields unicode lines
	fin = args.log_file, encoding = 'utf-16' )
	for line in fin:
		lineCount = lineCount + 1
		match = line )
		if match is not None:
			if not 0 ) in distinct:
				distinct.add( 0 ) )

print "Searched " + str( lineCount ) + ' lines for "' + args.pattern + '"'
print "\nFound\n"
for el in distinct:
	print el + " " + msg[ el ]
sys.exit( 0 )

#Created: Jan 13 2015 23:14:17
%%1001000 Unable to Open Report File [%s] on Server
%%1001001 Unknown Command [%s] in Report
%%1001002 Incorrect Syntax for Range Format in Report

Remove BOM with the help of Python before concatenating files

One day I was struggling to concatenate files generated by a 3rd-party utility for an Essbase upload.
The upload broke on every single run, and I could not find the culprit until I opened the outline file with the HxD hex editor and found extra bytes inserted between the concatenated files.
To my surprise, the 3rd-party utility unloaded data from the database in Unicode, with a BOM starting each separate file.
So before concatenating those files I had to remove the BOM from them.
Thanks to this SO answer, it was easily achievable with the Python script below.

# -*- coding: utf-8 -*-
import os, sys, codecs

BUFSIZE = 4096
BOMLEN = len(codecs.BOM_UTF8)

path = sys.argv[1]
with open(path, "r+b") as fp:
    chunk =
    if chunk.startswith(codecs.BOM_UTF8):
        i = 0
        chunk = chunk[BOMLEN:]
        while chunk:
            i += len(chunk)
            # jump back over the chunk just read plus the BOM-sized shift
  , os.SEEK_CUR)
            # step over the BOMLEN bytes already copied and read the next chunk
  , os.SEEK_CUR)
            chunk =
        fp.truncate(i)  # the file is now BOMLEN bytes shorter

Oracle: Iterate over regexp matches with hierarchical query trick

  for matches in ( 
    with in_data as (
      select 'v113*(v43|v42|v900)/v54' haystack
        , 'v\d+' needle
      from dual
    )
    , matches ( a_match, occ ) as ( 
      select regexp_substr( haystack, needle, 1, 1 ) a_match
        , 1 occ
      from in_data
      union all
      select regexp_substr( haystack, needle, 1, p.occ + 1 ) a_match
        , p.occ + 1 as occ
      from matches p
      cross join in_data
      where p.a_match is not null
    )
    cycle a_match set is_cycle to '1' default '0'
    select a_match
    from matches
    where a_match is not null
  ) loop
    dbms_output.put_line( matches.a_match );
  end loop;
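For reference, outside the database the same extraction is a one-liner. This Python sketch only illustrates the list of matches the recursive query above is expected to print:

```python
import re

haystack = 'v113*(v43|v42|v900)/v54'
needle = r'v\d+'

# regexp_substr( haystack, needle, 1, occ ) for occ = 1, 2, ...
# walks the successive matches of the pattern, so the recursion
# is equivalent to collecting all non-overlapping matches
matches = re.findall(needle, haystack)
print(matches)  # ['v113', 'v43', 'v42', 'v900', 'v54']
```

The CYCLE clause in the SQL version is only there to keep the recursive branch from being flagged as an infinite loop when the same token occurs more than once.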

PL/SQL: effectively reusing table of object type in PL/SQL

Just follow the example code to get the idea.

  v_tbl ty_varchar2_tbl; -- table of varchar2
  v_stmt clob;
  g_te_merge_te ty_te;
  -- here we fill our table one time
  select column_value
  bulk collect into v_tbl 
  from table( pk_utils.vchar2_to_vchar2_lines( v_sd.composite_key, ',' ) );
  -- some code follows
  -- ...
  -- don't try to get the intricacies of calling pk_te;
  -- just note the usage of v_tbl two times, each time cast to the SQL-level type
  select pk_te.substitute( 
      g_te_merge_te
      , ty_m(
        ty_p( 'dest_tbl', v_sd.dest_table )
        , ty_p( 'tmp_tbl', v_sd.tmp_table )
      )
      , cursor ( 
        select ty_m( ty_p( 'column_name', uc1.column_name ) )
        from user_tab_columns uc1
        where uc1.table_name like v_sd.dest_table
          and uc1.column_name not in ( select t.column_value from table( cast( v_tbl as ty_varchar2_tbl ) ) t )
      )
      , cursor ( 
        select ty_m( ty_p( 'comp_key', t.column_value ) )
        from table( cast( v_tbl as ty_varchar2_tbl ) ) t
      )
    )
  into v_stmt 
  from dual;


Just to remember: changing Essbase Administration Services Console’s interface language

Head to EAS Console install directory (your installation directory may differ)


Locate the file and change its contents to include your desired locale and language, for example US/EN.

#Mon Aug 19 18:24:32 PDT 2002