SOLVED: SAP Sybase IQ SQL Anywhere error -203 Cannot set a temporary option for user (Rapid SQL)

Thanks to Joseph Weaver for supplying this workaround!
In Embarcadero’s Rapid SQL, accessing data within SAP IQ can sometimes result in a “SQL Anywhere Error -203: Cannot set a temporary option for user ‘XXXXXX’” error.  This is not an IQ issue.


The Rapid SQL configuration needs to be updated by disabling the “Enable SET Query Options by default” setting:


You will need to restart Rapid SQL for the change to take effect.
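
For context, temporary options in SQL Anywhere/IQ are connection-scoped: trying to set one for another user (or for PUBLIC) raises -203, which is presumably what Rapid SQL’s SET-options feature trips over. A minimal reproduction sketch from the command line, assuming $ISQL is a variable holding your isql command and connection parameters:

# Hedged sketch: temporary options can only be set for the connected user,
# so setting one for PUBLIC fails with SQL Anywhere error -203.
echo "set temporary option public.Quoted_Identifier = 'On'
go" | $ISQL -b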

The Perl Data Language Book: PDF fixed to work with tablet/phone PDF readers like Aldiko

Perl Data Language: Scientific computing with Perl

Many PDF readers for smart phones (Android/iPhone) and tablets, such as Aldiko Book Reader, manage PDF files based solely on the Title and Author fields in the PDF file. While this is fine for your average book, it is not all that helpful with manuals that tend to have abbreviated or no data in the title/author fields.

How to fix? Easy. Go get Calibre. Drop the PDF files onto the running Calibre window and edit the metadata by hitting the E key.

In my case, I edited the “Title”, “Author”, “Tags”, “Publisher” and “Languages”:


Calibre doesn’t modify the PDF files themselves, so I will need to export the files to a custom directory. In Calibre nomenclature, this is “Saving”. Highlight all the titles you want to export and hit “S” twice. Why twice? No idea. Then choose the directory.
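
If you’d rather script it, Calibre also ships a command-line tool, ebook-meta, that writes the metadata directly into the file, so no separate save/export step is needed. A minimal sketch (the filename and field values here are just examples):

# Hedged sketch using Calibre's ebook-meta CLI; filename and values are examples.
ebook-meta PDL-Book-2.006.pdf \
  --title "Perl Data Language Book for PDL 2.006" \
  --authors "PDL Developers" \
  --tags "perl,pdl" \
  --publisher "PDL Project" \
  --language en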

Perl Data Language Book for PDL 2.006

Perl Data Language

You can now copy the exported PDF files to your phone, tablet, whatever without fear of the v2.006 version of the PDL Book being rejected by Aldiko because the v2.004 version is already added.


SAP Sybase ASE: Unable to shut down when there are not enough ‘user connections’ (Error: 1601, Severity: 17, State: 3)

If you’re trying to shut down ASE and you’re not able to log in to the ASE instance because of the 1601 error below, you can shut down the instance with “kill -15 <pid>” on Linux/Unix:

server  Error: 1601, Severity: 17, State: 3
server  There are not enough 'user connections' available to start a new process. Retry when there are fewer active users, or ask your System Administrator to reconfigure ASE with more user connections

Obtain the OS PID simply by running showserver:

$ showserver
sybase    542123 15.1  1.0 52220 93356  pts/2 A    11:15:35  8:34 /sybase/ASE-15_0/bin/dataserver -d/dev/rmasterd001 -e/sybase/ASE-15_0/install/errorlog -c/sybase/ASE-15_0/sybase.cfg -isybase -ssybase -M/sybase/mem 

Kill the dataserver process with “kill -15”, triggering a “shutdown with nowait” within ASE:

$ kill -15 542123
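
Putting the two steps together, a small sketch (assuming a single dataserver line in the showserver output; adjust the pattern if you run multiple instances):

# Hedged sketch: pull the dataserver PID from showserver (2nd column of the
# ps-style output above) and send SIGTERM.
pid=$( showserver | awk '/dataserver/ {print $2}' )
[ -n "$pid" ] && kill -15 "$pid"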

Only as a last resort, use “kill -9”.

  • If you need to, verify with “ipcs -m” that the shared memory segments are released; if they are not, use “ipcrm” to release them.
  • Verify with “netstat -an | grep <port>” that the bound port(s) that ASE uses are released. If not, you may need to restart the machine to release them (a combined check is sketched below).
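
A short verification sketch; the port number 5000 is a placeholder, use the port from your interfaces file:

# Placeholder port from your interfaces file -- adjust.
PORT=5000

# Any shared memory segments still owned by the sybase user?
ipcs -m | grep sybase
# If a stale segment remains:  ipcrm -m <shmid>

# Is the listener port still bound?
netstat -an | grep "$PORT"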

SAP Sybase IQ: Easily Extracting and Importing Data With Your IQ Data Warehouse

SAP/Sybase’s documentation isn’t very clear for new IQ DBAs and developers. One such item is simply extracting data from the warehouse into files that can be loaded into another IQ system when “load table … from location” simply isn’t available or isn’t practical.


  1. We extract data for tables owned by ‘JF’. Replace this with the schemas you desire.
  2. Exported data will go to the /dba/backup/sybbackup/bkp/export/data_export directory.
  3. We will use isql, with the connection parameters stored in the $ISQL environment variable.
  4. The maximum extract file size is set to 1 TB (Temp_Extract_Size1 is in KB). Increase as needed.

The simplest method is to create one shell script per table to export:

#!/bin/env bash

if [[ ! -d data_export ]]; then
  mkdir data_export
fi

# Build the list of owner.table names to export
TABLES=$( echo "set nocount on
select convert(varchar(80), user_name(creator) + '.' + table_name) from systable where user_name(creator) in ('JF') and table_type = 'BASE'
exit" | $ISQL -h -b | sed -e '/affected/d' -e '/---/d' -e '/^$/d' )

for table in ${TABLES}; do
  sql_file="/dba/backup/sybbackup/bkp/export/data_export/${table}_export.sql"
  echo "Generating ${sql_file}"

  echo "set temporary option Temp_Extract_Name1 = '/dba/backup/sybbackup/bkp/export/data_export/${table}.data'
set temporary option Temp_Extract_Name2 = ''
set temporary option Temp_Extract_Binary = 'on'
set temporary option Temp_Extract_Swap = 'off'
set temporary option Temp_Extract_Size1 = 1073741824
go

select * from ${table}
go

set temporary option Temp_Extract_Name1 = ''
go" > $sql_file
done

Now that we have the script files, let’s extract the data by running each generated SQL script through isql and compressing each exported file with gzip. You can cancel the export at any time with ctrl-c and restart it; tables that already have a compressed export file are skipped:

for TABLE in *.sql; do
  datafile=$( echo $TABLE | sed -e 's/_export.sql$/.data/' )
  gzipfile=${datafile}.gz
  echo $datafile

  if [ -f $gzipfile ]; then
    echo "$gzipfile already exists"
  else
    $ISQL -i $TABLE > $TABLE.out 2>&1
    gzip -1 $datafile
  fi
done

Now that we have the data exported, imagine that you copied the files to another system. How do you import that data, assuming that the tables have already been created? Easy. We will create a set of import script files.


for TABLE_gzip in *.gz; do
  datafile=$( echo $TABLE_gzip | sed -e 's/.gz$//' )
  TABLE_FILE=$( echo $TABLE_gzip | sed -e 's/.data.gz$//' )
  TABLE_OWNER=$( echo $TABLE_FILE | cut -d . -f1 )
  TABLE=$( echo $TABLE_FILE | cut -d . -f2 | sed -e 's/_export$//' )

  if [ -f ${datafile}.done ]; then
    echo "${datafile} already processed"
    continue
  fi

  # ===================================================================
  # Generate the load command to load the file
  # ===================================================================
  echo "#!/bin/env bash" > ${TABLE_OWNER}.${TABLE}
  echo ". /dba/code/syb/.setenv" >> ${TABLE_OWNER}.${TABLE}
  echo "" >> ${TABLE_OWNER}.${TABLE}
  echo "echo \"uncompressing $TABLE_gzip\"" >> ${TABLE_OWNER}.${TABLE}
  echo "gzip -dc $TABLE_gzip > $datafile" >> ${TABLE_OWNER}.${TABLE}
  echo "" >> ${TABLE_OWNER}.${TABLE}
  echo "echo \"importing ${TABLE_OWNER}.${TABLE}\"" >> ${TABLE_OWNER}.${TABLE}
  echo '$ISQL -b <<EOF' >> ${TABLE_OWNER}.${TABLE}
  echo "select 'Start datetime'=convert(char(25), getdate(), 119), 'TABLENAME=${TABLE_OWNER}.${TABLE}'" >> ${TABLE_OWNER}.${TABLE}
  echo "go" >> ${TABLE_OWNER}.${TABLE}
  echo " " >> ${TABLE_OWNER}.${TABLE}
  echo "truncate table ${TABLE_OWNER}.${TABLE}" >> ${TABLE_OWNER}.${TABLE}
  echo "go" >> ${TABLE_OWNER}.${TABLE}
  echo "commit" >> ${TABLE_OWNER}.${TABLE}
  echo "go" >> ${TABLE_OWNER}.${TABLE}
  echo "load table ${TABLE_OWNER}.${TABLE}" >> ${TABLE_OWNER}.${TABLE}
  echo "( " >> ${TABLE_OWNER}.${TABLE}
  # ip00 is a local helper script that emits the table's column list
  ../ip00 $TABLE_OWNER $TABLE | sed -e '/row affected/d;s/ *$//;/^$/d' >> ${TABLE_OWNER}.${TABLE}
  echo ")" >> ${TABLE_OWNER}.${TABLE}
  # LOCAL_PATH must point to the directory holding the data files on the target
  echo "from '${LOCAL_PATH}/${datafile}'" >> ${TABLE_OWNER}.${TABLE}
  echo "escapes off format binary" >> ${TABLE_OWNER}.${TABLE}
  echo "go" >> ${TABLE_OWNER}.${TABLE}
  echo "commit" >> ${TABLE_OWNER}.${TABLE}
  echo "go" >> ${TABLE_OWNER}.${TABLE}
  echo " " >> ${TABLE_OWNER}.${TABLE}
  echo "select 'End datetime'=convert(char(25), getdate(), 119), 'TABLENAME=${TABLE_OWNER}.${TABLE}'" >> ${TABLE_OWNER}.${TABLE}
  echo "go" >> ${TABLE_OWNER}.${TABLE}
  echo " " >> ${TABLE_OWNER}.${TABLE}
  echo "EOF" >> ${TABLE_OWNER}.${TABLE}
  echo "" >> ${TABLE_OWNER}.${TABLE}
  echo "rm -f $datafile" >> ${TABLE_OWNER}.${TABLE}
  # mark this file processed so a cancelled run can be restarted safely
  echo "touch ${datafile}.done" >> ${TABLE_OWNER}.${TABLE}
  echo " " >> ${TABLE_OWNER}.${TABLE}
  chmod u+x ${TABLE_OWNER}.${TABLE}
done
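
For the hypothetical JF.customers table from earlier, and pretending the ip00 helper reported two columns (id and name) with LOCAL_PATH set to /dba/import, the generated JF.customers script would look roughly like:

#!/bin/env bash
. /dba/code/syb/.setenv

echo "uncompressing JF.customers.data.gz"
gzip -dc JF.customers.data.gz > JF.customers.data

echo "importing JF.customers"
$ISQL -b <<EOF
select 'Start datetime'=convert(char(25), getdate(), 119), 'TABLENAME=JF.customers'
go

truncate table JF.customers
go
commit
go
load table JF.customers
(
id,
name
)
from '/dba/import/JF.customers.data'
escapes off format binary
go
commit
go

select 'End datetime'=convert(char(25), getdate(), 119), 'TABLENAME=JF.customers'
go

EOF

rm -f JF.customers.data
touch JF.customers.data.done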

If the target system is a different endianness (e.g. Linux x86-64 -> AIX), replace

echo "escapes off format binary" >> ${TABLE_OWNER}.${TABLE}

with

echo "escapes off format binary byte order high" >> ${TABLE_OWNER}.${TABLE}

We simply need to run each import script file:

for import_file in * ; do [ -x $import_file ] && ./$import_file 2>&1 | tee ${import_file}.out ; done

FW: Python: Hello world (Socratica)

From the fine folk at Socratica, Python: Hello world
