The Perl Data Language Book: PDF fixed to work with tablet/phone PDF readers like Aldiko

Perl Data Language: Scientific computing with Perl

Many PDF readers for smartphones (Android/iPhone) and tablets manage PDF files based solely on the Title and Author fields in the PDF file. While this is fine for your average book, it is not all that helpful with manuals, which tend to have abbreviated or no data in the title/author fields.

How to fix it? Easy. Go get Calibre. Drop the PDF files onto the running Calibre. Edit the metadata by hitting the E key.

In my case, I edited the “Title”, “Author”, “Tags”, “Publisher” and “Languages” fields.

Calibre doesn’t modify the PDF files themselves, so I will need to export the files to a custom directory. In Calibre nomenclature, this is “Saving”: highlight all the titles you want to export and hit “S” twice. Why twice? No idea. Choose the directory.
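
If you would rather script the metadata edit, Calibre also ships a command-line tool, ebook-meta, which should be able to set the same fields directly on a PDF. A minimal sketch; the file name and field values below are just examples:

ebook-meta PDL-2.006.pdf --title "Perl Data Language Book for PDL 2.006" \
    --authors "The PDL Developers" --tags "PDL,Perl,PDL Book" \
    --publisher "The PDL Project" --language en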

You can now copy the exported PDF files to your phone, tablet, whatever without fear of the v2.006 version of the PDL Book being rejected by Aldiko because the v2.004 version is already added.


SAP Sybase ASE: Unable to shut down when there are not enough ‘user connections’ (Error: 1601, Severity: 17, State: 3)

If you’re trying to shut down ASE but you’re not able to log in to the ASE instance because of the following error, you can shut down the instance with “kill -15 <pid>” on Linux/Unix:

server  Error: 1601, Severity: 17, State: 3
server  There are not enough 'user connections' available to start a new process. Retry when there are fewer active users, or ask your System Administrator to reconfigure ASE with more user connections

Obtain the OS PID of the dataserver by running showserver:

$ showserver
USER          PID %CPU %MEM   SZ  RSS    TTY STAT    STIME  TIME COMMAND
sybase    542123 15.1  1.0 52220 93356  pts/2 A    11:15:35  8:34 /sybase/ASE-15_0/bin/dataserver -d/dev/rmasterd001 -e/sybase/ASE-15_0/install/errorlog -c/sybase/ASE-15_0/sybase.cfg -isybase -ssybase -M/sybase/mem 

Kill the dataserver process with “kill -15”, which triggers a “shutdown with nowait” within ASE:

$ kill -15 542123

Only as a last resort should you use “kill -9”.

  • If needed, verify with “ipcs -m” that the shared memory segments have been released; if not, use “ipcrm” to remove them.
  • Verify with “netstat -an | grep <port>” that the port(s) ASE was bound to have been released. If not, you may need to restart the machine to release them. A combined sketch of these steps follows.
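
Putting it together, a rough sketch of the whole sequence, assuming a single dataserver process owned by the sybase OS user and that 5000 is the port ASE listens on (both are assumptions; adjust for your environment):

#!/usr/bin/env bash
# Assumption: exactly one dataserver process, owned by the "sybase" OS user
PID=$( pgrep -u sybase -f 'bin/dataserver' )

echo "Sending SIGTERM (shutdown with nowait) to dataserver PID ${PID}"
kill -15 ${PID}

# Give ASE a moment to come down, then check for leftovers
sleep 30

# Any shared memory segments still owned by sybase? Remove them with ipcrm if so.
ipcs -m | grep sybase

# Is the listener port still bound? (5000 is an assumed port; check your interfaces file)
netstat -an | grep 5000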

SAP Sybase IQ: Easily Extracting and Importing Data With Your IQ Data Warehouse

SAP/Sybase’s documentation isn’t very clear for new IQ DBAs and developers. One such item is simply extracting data from the warehouse into files that can be loaded into another IQ system when “load table from location” simply isn’t available or isn’t practical.

Assumptions:

  1. We extract table data owned by ‘JF’. Replace with the schema(s) you desire.
  2. Exported data will go to the /dba/backup/sybbackup/bkp/export/data_export directory.
  3. We will use isql, with the connection parameters stored in the $ISQL environment variable (see the sketch after this list).
  4. The maximum size of each extract file (TEMP_EXTRACT_SIZE1) is 1TB. Increase as needed.
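
For reference, $ISQL is just the isql binary plus whatever connection flags your site uses, for example (the login, password and server name below are placeholders):

export ISQL="isql -Udba -Ppassword -SMY_IQ"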

The simplest method is to generate one SQL extract script per table with a small shell script:

#!/usr/bin/env bash

# Make sure the export directory exists
mkdir -p /dba/backup/sybbackup/bkp/export/data_export

# Build the list of owner.table names to extract
TABLES=$( echo "set nocount on
go
select convert(varchar(80), user_name(creator) + '.' + table_name) from systable where user_name(creator) in ('JF') and table_type = 'BASE'
go
exit" | $ISQL -h -b | sed -e '/affected/d' -e '/---/d' -e '/^$/d' )

for table in ${TABLES}; do
  echo
  echo "Generating /dba/backup/sybbackup/bkp/export/data_export/${table}_export.sql"
  sql_file=/dba/backup/sybbackup/bkp/export/data_export/${table}_export.sql

  echo "set temporary option Temp_Extract_Name1 = '/dba/backup/sybbackup/bkp/export/data_export/${table}.data'
set temporary option Temp_Extract_Name2 = ''
set temporary option Temp_Extract_Binary = 'on'
set temporary option Temp_Extract_Swap = 'off'
set temporary option Temp_Extract_Size1 = 1073741824
go

select * from ${table}
go

set temporary option Temp_Extract_Name1 = ''
go" > $sql_file
done

Now that we have the SQL extract scripts, let’s extract the data by running each one through isql and compressing each exported file with gzip. You can cancel the export at any time with Ctrl-C and restart it; tables whose .gz file already exists are skipped:

# Run this from /dba/backup/sybbackup/bkp/export/data_export
for TABLE in *.sql; do
  datafile=$( echo $TABLE | sed -e 's/_export.sql$/.data/' )
  echo $datafile
  gzipfile=${datafile}.gz

  if [ -f $gzipfile ]; then
    echo "$gzipfile already exists"
  else
    # Run the extract script, capturing both stdout and stderr in a log file
    $ISQL -i $TABLE > $TABLE.out 2>&1
    gzip -1 $datafile
  fi
done

Now that the data is exported, imagine that you have copied the files to another system. How do you import that data, assuming the tables have already been created there? Easy: we will generate a set of import scripts.

LOCAL_PATH=/dba/backup/sybbackup/bkp/import/data_import

for TABLE_gzip in *.gz; do 
  datafile=$( echo $TABLE_gzip | sed -e 's/.gz$//' )
  TABLE_FILE=$( echo $TABLE_gzip | sed -e 's/.data.gz$//' )
  TABLE_OWNER=$( echo $TABLE_FILE | cut -d . -f1 )
  TABLE=$( echo $TABLE_FILE | cut -d . -f2 | sed -e 's/_export$//' )

  if [ -f ${datafile}.done ]; then
    echo "${datafile} already processed"
  else
    # ===================================================================
    # Generate the load command to load the file
    # ===================================================================
    echo "#!/bin/env bash" > ${TABLE_OWNER}.${TABLE}_import.sh
    echo ". /dba/code/syb/.setenv" >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "" >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "echo \"uncompressing $TABLE_gzip\"" >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "gzip -dc $TABLE_gzip > $datafile" >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "echo \"importing ${TABLE_OWNER}.${TABLE}\"" >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo '$ISQL -b <<EOF' >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "select 'Start datetime'=convert(char(25), getdate(), 119), 'TABLENAME=${TABLE_OWNER}.${TABLE}'">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo " ">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "truncate table ${TABLE_OWNER}.${TABLE}">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "commit">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "SET TEMPORARY OPTION IDENTITY_INSERT = '${TABLE_OWNER}.${TABLE}'">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "load table ${TABLE_OWNER}.${TABLE}">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "( ">> ${TABLE_OWNER}.${TABLE}_import.sh
    # Site-specific helper that emits the column specification for the load table statement
    ../gen_iq_col_list_w_null_byte.sh ip00 $TABLE_OWNER $TABLE | sed -e '/row affected/d;s/ *$//;/^$/d'>> ${TABLE_OWNER}.${TABLE}_import.sh
    echo ")">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "from '${LOCAL_PATH}/${datafile}'">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "escapes off format binary">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "commit">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo " ">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "select 'Start datetime'=convert(char(25), getdate(), 119), 'TABLENAME=${TABLE_OWNER}.${TABLE}'">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "SET TEMPORARY OPTION IDENTITY_INSERT = ''">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "go">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo " ">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "EOF">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "" >> ${TABLE_OWNER}.${TABLE}_import.sh
    echo "rm -f $datafile">> ${TABLE_OWNER}.${TABLE}_import.sh
    echo " ">> ${TABLE_OWNER}.${TABLE}_import.sh
    chmod u+x ${TABLE_OWNER}.${TABLE}_import.sh
  fi
done

If the target system has a different endianness (e.g. Linux x86-64 -> AIX), replace

echo "escapes off format binary">> ${TABLE_OWNER}.${TABLE}_import.sh

with

echo "escapes off format binary byte order high">> ${TABLE_OWNER}.${TABLE}_import.sh

We simply need to run each import script file:

for import_file in *_import.sh ; do ./$import_file 2>&1 | tee ${import_file}.out ; done

FW: Python: Hello world (Socratica)

From the fine folk at Socratica, Python: Hello world


SAP Sybase IQ: How to Restore Your Backups to Another System

SAP/Sybase’s documentation isn’t very clear for new IQ DBAs and developers. One such item is simply restoring an IQ database onto another system. Unlike ASE, you need to specify the new file locations if they are different from those on the source server.

Assumptions:

  1. IQ software has been installed
  2. The new dbfile locations are symbolic links to raw partitions OR the paths exist but not the files (see the sketch after this list)
  3. You have a valid SYSAM license for the new IQ instance.
  4. The new IQ instance name is set (via -n instance)
  5. The old directory for the .db, .log and .mir exists (use a symbolic link if you wish)
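
A minimal sketch of preparing the target paths before the restore; the directories and raw device names below are assumptions that match the restore.sql further down, so adjust them for your own layout:

# Create the directory tree the renamed dbfiles will live in
mkdir -p /dba/syb/new_iq/sybdev/IQ_MAIN \
         /dba/syb/new_iq/sybdev/IQ_USER_MAIN \
         /dba/syb/new_iq/sybdev/IQ_TEMP \
         /dba/syb/new_iq/instlog

# If restoring onto raw partitions, point symbolic links at them
# (the /dev/raw/raw* device names are placeholders)
ln -s /dev/raw/raw1 /dba/syb/new_iq/sybdev/IQ_MAIN/new_iqmain001.iq
ln -s /dev/raw/raw2 /dba/syb/new_iq/sybdev/IQ_USER_MAIN/new_iqusermain001.iq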

Obtain the dbspace file names on the source server with sp_iqfile:

select DBFileName, Path, DBFileSize from sp_iqfile();

DBFileName              Path                                                         DBFileSize
'IQ_SYSTEM_MAIN'        '/dba/syb/old_iq/sybdev/IQ_MAIN/old_iqmain001.iq'           '32G'
'IQ_USER_MAIN_FILE_01'  '/dba/syb/old_iq/sybdev/IQ_USER_MAIN/old_iqusermain001.iq'  '1024G'
'IQ_SYSTEM_TEMP'        '/dba/syb/old_iq/sybdev/IQ_TEMP/old_iqtemp001.iqtmp'        '32G'
'IQ_SYSTEM_TEMP_002'    '/dba/syb/old_iq/sybdev/IQ_TEMP/old_iqtemp002.iqtmp'        '32G'
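
The same thing can be done non-interactively with dbisql, using the same connection syntax as the restore commands below; the uid/pwd/eng values and the port are placeholders for your source server:

dbisql -c "uid=dba;pwd=sql;eng=old_iq;dbn=old_iq" -port 2638 -host $( hostname ) -nogui "select DBFileName, Path, DBFileSize from sp_iqfile();"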

Create a restore.sql file, renaming each DBFileName to its new location:

restore database 'new_iq'
FROM '/dba/backup/sybbackup/old_iq.20140423100111.17760.IQfullbkp'
RENAME IQ_SYSTEM_MAIN TO '/dba/syb/new_iq/sybdev/IQ_MAIN/new_iqmain001.iq'
RENAME IQ_SYSTEM_TEMP TO '/dba/syb/new_iq/sybdev/IQ_TEMP/new_iqtemp001.iq'
RENAME IQ_SYSTEM_TEMP_002 TO '/dba/backup/sybbackup/new_iqtemp002.iq'
RENAME IQ_SYSTEM_MSG TO '/dba/syb/new_iq/instlog/new_iq.iqmsg'
RENAME IQ_USER_MAIN_FILE_01 TO '/dba/syb/new_iq/sybdev/IQ_USER_MAIN/new_iqusermain001.iq';

Stop the destination IQ instance if it is running and start the utility database:

stop_iq
Checking system ...

The following 1 server(s) are owned by 'sybdba'

## Owner          PID   Started  CPU Time  Additional Information
-- ---------  -------  --------  --------  ------------------------------------
1: sybdba       13909     Apr24  00:43:46  SVR:new_iq DB:new_iq PORT:58116
              /dba/syb/new_iq/sybase/IQ-16_0/bin64/iqsrv16 @/dba/syb/new_iq/sybdb/new_iq.cfg /dba/syb/new_iq/sybdb/new_iq.db -gn 65 -o /dba/syb/new_iq/sybase/IQ-16_0/logfiles/

${SYBASE}/IQ-16_0/bin64/start_iq -n utility_db -gu dba -c 48m -gc 20 -iqgovern 30 \
        -gd all -gl all -gm 10 -gp 4096 -ti 4400 -z -zr all -zo $SYBASE/IQ-16_0/logfiles/utility_db.out \
        -o $SYBASE/IQ-16_0/logfiles/utility_db.srvlog -iqmc 100 -iqtc 100 -x "tcpip{port=9000}"
Starting server utility_db on localhost at port 9000 (04/30 09:37:16)

Run Directory       : /dba/syb/new_iq/sybdb
Server Executable   : /dba/syb/new_iq/sybase/IQ-16_0/bin64/iqsrv16
Server Output Log   : /dba/syb/new_iq/instlog/utility_db.srvlog
Server Version      : 16.0.0.653/sp03 16.0.0/Linux 2.6.18-194.el5
Open Client Version : N/A
User Parameters     : '-n' 'utility_db' '-gu' 'dba' '-c' '48m' '-gc' '20' '-iqgovern' '30' '-gd' 'all' '-gl' 'all' '-gm' '10' '-gp' '4096' '-ti' '4400' '-z' '-zr' 'all' '-zo' '/dba/syb/new_iq/instlog/utility_db.out' '-o' '/dba/syb/new_iq/instlog/utility_db.srvlog' '-iqmc' '100' '-iqtc' '100' '-x' 'tcpip{port=9000}'
Default Parameters  : -gn 25
….

Remove the destination instance’s old .db, .log and .mir files:

rm instance.db instance.log instance.mir

Restore the full backup:

dbisql -c "uid=dba;pwd=sql;eng=utility_db;dbn=utility_db" -port 9000 -host $( hostname ) -nogui "restore.sql"

Restore the incremental backup(s):

dbisql -c "uid=dba;pwd=sql;eng=utility_db;dbn=utility_db" -port 9000 -host $( hostname ) -nogui "restore_incrementals.sql"

Stop the utility database:

stop_iq

Start the IQ server to ensure it comes up, then shut it back down.
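
For example, using the cfg and db file paths seen in the stop_iq output above (a sketch; your config file location and startup flags may differ):

start_iq @/dba/syb/new_iq/sybdb/new_iq.cfg /dba/syb/new_iq/sybdb/new_iq.db
stop_iq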

If the name of the server has changed (e.g. old_iq -> new_iq), then we need to repoint the database to new transaction log and log mirror files. First, let’s find out which log and mirror files the .db file currently points to:

dblog new_iq.db
SQL Anywhere Transaction Log Utility Version 16.0.0.653
"new_iq.db" is using log file "/dba/syb/old_iq/sybdb/old_iq.log"
"new_iq.db" is using log mirror file "/dba/syb/old_iq/sybdb/old_iq.mir"
Transaction log starting offset is 0702994164
Transaction log current relative offset is 0000397583

Set the log file to “new_iq.log”:

dblog -t new_iq.log new_iq.db
SQL Anywhere Transaction Log Utility Version 16.0.0.653
"new_iq.db" was using log file "/dba/syb/old_iq/sybdb/old_iq.log"
"new_iq.db" is using log mirror file "/dba/syb/old_iq/sybdb/old_iq.mir"
"new_iq.db" is now using log file "new_iq.log"
Transaction log starting offset is 0702994164
Transaction log current relative offset is 0000397625

We need to clear the mir file(s) before we can assign a new one:

dblog -r new_iq.db
SQL Anywhere Transaction Log Utility Version 16.0.0.653
"new_iq.db" is using log file "new_iq.log"
"new_iq.db" was using log mirror file "/dba/syb/old_iq/sybdb/im00.mir"
"new_iq.db" is now using no log mirror file
Transaction log starting offset is 0702994164
Transaction log current relative offset is 0000397625

Set the mir file:

dblog -m new_iq.mir new_iq.db
SQL Anywhere Transaction Log Utility Version 16.0.0.653
"new_iq.db" is using log file "new_iq.log"
"new_iq.db" was using no log mirror file
"new_iq.db" is now using log mirror file "new_iq.mir"
Transaction log starting offset is 0702994164
Transaction log current relative offset is 0000397625

Start your IQ instance.
