
Monday, November 30, 2009

resolve bind variable values in PS trace/log files

Peoplesoft SQL and AE trace files display bind variable values along with the sql statements. Here is a script that quickly merges them so the sql statements can be rerun for testing.
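The script itself is not reproduced in this excerpt. Purely as an illustration, here is a minimal awk sketch, assuming a simplified trace layout where a line tagged "Stmt=" holds the SQL with :1, :2, ... placeholders and is followed by its bind lines in the form "Bind-1 type=2 length=5 value=12345" (the real trace format and the original script differ):

# illustrative sketch only -- adjust the patterns to the actual trace layout
awk '
/^Stmt=/ {
    if (stmt != "") print stmt         # flush the previous statement
    stmt = substr($0, 6)               # drop the "Stmt=" tag
    next
}
/^Bind-[0-9]+/ {
    pos = substr($1, 6)                # bind position from "Bind-N"
    val = $0
    sub(/.*value=/, "", val)           # keep only the value text
    # naive substitution: quotes every value; :1 vs :10 is not disambiguated
    gsub(":" pos, "'\''" val "'\''", stmt)
}
END { if (stmt != "") print stmt }
' tracefile.trc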

Friday, July 17, 2009

A Dynamic Prompt Table based on Specific Columns

Some Peoplesoft Financials tables include a number of columns that come from different modules such as PO, AP, and REQ. Commonly this transaction data is stored in each module's own tables, and certain functions, such as Commitment Control, capture these transactions into a single table. In that case, the Commitment Control transaction table is defined with a column for each module's document, such as VOUCHER_ID, PO_ID, REQ_ID, JOURNAL_ID, etc.

The search key for such tables is an auto-incrementing numeric column, which makes the key column meaningless to users. On a delivered search page, the user has to know that unique key beforehand in order to bring up a transaction.

Tuesday, June 23, 2009

Quick PS Util - Display Table data in order of Column Name

When using Toad or other tools that display results in a grid that scrolls left to right, and there are many columns, I usually have to go back and forth a few times to find a column. Some solutions to this --

1. creating a function that returns a SQL statement listing the fields in order of field name:
CREATE OR REPLACE function  fo(table_in IN varchar2)RETURN varchar2
IS
sql1 varchar2(6000);
cnt integer;
rec_name varchar2(30);

cursor c1 is
select column_name as fieldname from all_tab_columns where TABLE_NAME=rec_name order by column_name;

BEGIN

cnt :=0;
sql1 :='select ';
rec_name :='PS_'||upper(table_in);

FOR zz IN c1 LOOP
sql1:=sql1 || zz.fieldname||',';
cnt:=cnt+1;
END LOOP;

if cnt = 0 then

rec_name := upper(table_in);

FOR zz IN c1 LOOP
sql1:=sql1 || zz.fieldname||',';
cnt:=cnt+1;
END LOOP;
end if;

if cnt = 0 then
return table_in ||' columns not found';
end if;

sql1 := sql1 || '''---'' from ' || rec_name ;

RETURN sql1 ;

EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20001,'Error - '||SQLCODE||' -ERROR- '||SQLERRM);

END;
/

show error function fo


Ex:
  • select fo('installation') from dual -- default to 'PS_'
  • select fo('psrecdefn') from dual
  • select fo('all_objects') from dual

Wednesday, June 17, 2009

Change Peoplesoft FTP to SFTP - Part 4

A sample script that ties things together using the information discussed.
1. minimize code changes to delivered script & call custom code in separate script
:::::
cd ${PS_HOME}/bin

# source my sftp script
. my_sftp_script

ftp_cmd_file=$1
::::::
::::::
echo "about to submit ftp -vin $1"
#$FTP_CMD -vin < $1
my_sftp_function

break
;;

esac

::::::::


2. my_sftp_script:
#####################################################
#
# My SFTP function
#
#####################################################
function my_sftp_function
{

# signal handler
trap "cleanup" 0
trap "echo Cancelled; cleanup;" 1 2 3 15

err=0
feedback=1

SFTP_CMD="/usr/bin/sftp"
sftp_cmd="h_sftp_cmd."$$
sftp_tmp="h_sftp_tmp."$$
sftp_log="h_sftp_log."$$
dev=me@some.place.com
user=/somehost/somedir/user_list

##### sftp messages --change text as needed
SFTP_LOGIN_ERR="Permission denied (publickey,password,keyboard-interactive)"
SFTP_NO_FILE="No such file or directory"

##### ftp messages & codes --change text as needed
FTP_NO_FILE="A file or directory in the path name does not exist."
FTP_LOGIN_ERR="Login incorrect."
FTP_MAX_SIZE="Exceeded storage allocation"

FTP_DEL_NOFILE_CD=532
FTP_LOGIN_ERR_CD=530
FTP_MAX_SIZE_CD=552
FTP_GET_NOFILE_CD=550

FTP_PUT_OK="226 Transfer complete."
FTP_GET_OK="226 Transfer complete."
FTP_DEL_OK="250 DELE command successful."

###############
# get ftp xfer command, remote dir, remote file name

cmd=''
r_file=`grep ^put $ftp_cmd_file | awk '{print $3}' | tr '\"' '\0'`

if [ ${r_file:-none} == 'none' ]; then

r_file=`grep ^get $ftp_cmd_file | awk '{print $2}' | tr '\"' '\0'`

if [ ${r_file:-none} == 'none' ]; then

r_file=`grep ^delete $ftp_cmd_file | awk '{print $2}' | tr '\"' '\0'`

if [ ${r_file:-none} != 'none' ]; then
cmd=delete
fi
else
cmd=get
fi
else
cmd=put
fi

r_dir=`grep ^cd $ftp_cmd_file | awk 'BEGIN{var=""} {if ($2!="/") var=var"/"$2; else var=var$2; } END{print var}' | sed 's/\/\//\//g'`

#echo "!sleep 500" > $sftp_cmd

# convert ftp to sftp commands
grep -e ^lcd -e ^cd -e ^put -e ^delete -e ^get -e ^quit $ftp_cmd_file | sed s/^delete/rm/ | awk '!/^rm/ {print $0} /^rm/ {print $0"\""}' > $sftp_cmd


echo 'sftp commands---------------' > $sftp_log
cat $sftp_cmd >> $sftp_log

echo '\nsftp log--------------------' >> $sftp_log

# run sftp command batch file
$SFTP_CMD -b $sftp_cmd `grep ^user $ftp_cmd_file|awk '{print $2}'`@`grep ^open $ftp_cmd_file|awk '{print $2}'` 1>>$sftp_log 2>&1

err=$?

if [ $err -ne 0 ]; then

grep -qi "$SFTP_LOGIN_ERR" $sftp_log

if [ $? -eq 0 ]; then
echo $FTP_LOGIN_ERR_CD" "$FTP_LOGIN_ERR
else
grep -qi "$SFTP_NO_FILE" $sftp_log

if [ $? -eq 0 ]; then

if [ $cmd = 'delete' ]; then
echo $FTP_DEL_NOFILE_CD" "$FTP_NO_FILE
else
echo $FTP_GET_NOFILE_CD" "$FTP_NO_FILE
fi
else

grep -qi "$FTP_MAX_SIZE" $sftp_log

if [ $? -eq 0 ]; then

echo $FTP_MAX_SIZE_CD" "$FTP_MAX_SIZE

fi
fi
fi

else
# check sftp results

echo '\nverifying results------------' >>$sftp_log

chk_results

err=$?

if [ $err -eq 0 ]; then

if [ $cmd = 'get' ]; then
echo $FTP_GET_OK
else
if [ $cmd = 'put' ]; then
echo $FTP_PUT_OK
else
echo $FTP_DEL_OK
fi
fi
fi
fi

# send feedback
if [ $err -ne 0 ] && [ $feedback -eq 1 ]; then

echo $cmd' failed.' >> $sftp_log

echo '\nftpunx log------------------' >> $sftp_log
echo ${ftp_cmd_file%.*}".log:" >> $sftp_log
cat `echo ${ftp_cmd_file%.*}".log"` >> $sftp_log

echo '\nEnd: '`date` >> $sftp_log

if [ -f $user ]; then
mailx -s"My SFTP failed" `cat $user` < $sftp_log
else
mailx -s"My SFTP failed" $dev < $sftp_log
fi
fi

return $err
}

################################################
#
# check SFTP results
#
################################################
function chk_results
{

case $cmd in

put | get)

# compare file sizes
echo "
`grep ^cd $ftp_cmd_file`
ls -l "$r_file"
quit
" > $sftp_cmd

$SFTP_CMD -b $sftp_cmd `grep ^user $ftp_cmd_file|awk '{print $2}'`@`grep ^open $ftp_cmd_file|awk '{print $2}'` 1>$sftp_tmp 2>&1

if [ $? -ne 0 ]; then
return 1
fi

# get temp local file name

if [ $cmd = 'put' ]; then
l_file=`grep ^lcd $ftp_cmd_file | awk 'BEGIN{var=""} {if ($2!="/") var=var"/"$2; else var=var$2; } END{print var}' | sed 's/\/\//\//g'`/`grep ^put $ftp_cmd_file | awk '{print $2}' | tr '\"' '\0'`
else
l_file=`grep ^lcd $ftp_cmd_file | awk 'BEGIN{var=""} {if ($2!="/") var=var"/"$2; else var=var$2; } END{print var}' | sed 's/\/\//\//g'`/`grep ^get $ftp_cmd_file | awk '{print $3}' | tr '\"' '\0'`
fi

l_file_size=`ls -l $l_file | awk '{print $5}'`
r_file_size=`grep -v ^sftp $sftp_tmp | grep $r_file | awk '{print $5}'`

if [ $l_file_size -ne $r_file_size ]; then

echo 'file size different:' >> $sftp_log
echo 'local file size='$l_file_size' ('$l_file')' >> $sftp_log
echo 'remote file size='$r_file_size' ('$r_dir'/'$r_file')' >> $sftp_log

return 1
fi

break
;;

delete)

# DELETE - verify file does not exist; use subshell so sftp does not error off

(
echo "
`grep ^cd $ftp_cmd_file`
ls -l "$r_file"
quit
"
) | $SFTP_CMD `grep ^user $ftp_cmd_file|awk '{print $2}'`@`grep ^open $ftp_cmd_file|awk '{print $2}'` 1>$sftp_tmp 2>&1

if [ $? -ne 0 ]; then
return 1
fi


grep -q "$2 not found" $sftp_tmp

if [ $? -ne 0 ]; then
cat $sftp_tmp >> $sftp_log

return 1
fi

break
;;
esac

return 0
}


function cleanup
{
rm $sftp_tmp 2>/dev/null
rm $sftp_cmd 2>/dev/null
rm $sftp_log 2>/dev/null

}


Navigation to related pages:

Page Used to Define an FTP Server for File Attachments

Page Name: URL Maintenance
Object Name: URL_TABLE
Navigation: PeopleTools, Utilities, Administration, URLs
Usage: Specify an FTP server for storing files.


Page Used to Set Up File Attachment Servers

Page Name: Maintain File Att Servers
Object Name: PV_ATT_SRV_ADMIN
Navigation: Set Up Financials/Supply Chain, Utilities, Administer File Attachments, Administer Attachment Servers
Usage: Used to set up the FTP server location for PDF and XML file attachments which are included with notifications.


Pages Used to Set Up Attachments for Transactions

Page Name: Maintain File Att Servers
Object Name: PV_ATT_SRV_ADMIN
Navigation: Set Up Financials/Supply Chain, Utilities, Administer File Attachments, Administer Attachment Servers
Usage: Identify servers on which to store attachments.

Page Name: Maintain Component Subdirs
Object Name: PV_ATT_SRV_ADMIN
Navigation: Set Up Financials/Supply Chain, Utilities, Administer File Attachments, Administer Attachment Servers, Maintain Component Subdirs
Usage: Define component names and paths where the attachments are stored.

Page Name: Vendor Address
Object Name: VNDR_ADDRESS
Navigation: eProcurement, Procurement Application Admin, Maintain Vendors, Vendor Profile, Address
Usage: Define vendor email addresses for sending attachments.

Page Name: URL Maintenance
Object Name: URL_TABLE
Navigation: eProcurement, Procurement Application Admin, Maintain Overall System Options, Maintain FTP Server Location
Usage: Requires end users to enter the file location manually for attachments (not the recommended method).

  1. Set up a location for storing attachments. This is usually an FTP server. Add a subfolder named PV to the root directory. Give read and write privileges to anonymous users or any users that are specified in the FTP string.

  2. Create a URL identifier that is named PV for the FTP location for PeopleSoft eProcurement attachments, and specify the FTP server address. The FTP root location must contain a subdirectory that is named PV.


Change Peoplesoft FTP to SFTP - Part 3

Tuesday, June 9, 2009

Change Peoplesoft FTP to SFTP - Part 3

Because FTP, like any 3rd party application to PS, generally returns a code of zero in a non-interactive session, its return code is not useful for determining success or failure. So PS uses the session messages captured in the log file and filters on the FTP status code. These are the status codes I could find that PS uses for this:

226 - Transfer complete.
250 - DELE command successful.
532 - A file or directory in the path name does not exist
550 - A file or directory in the path name does not exist
552 - Exceeded storage allocation
The following considerations apply, given the wide range of systems and SFTP implementations:
  • The only way I found that consistently returns an error status code is the "-b" batch option. If you feed the commands through a here document or a subshell echo, sftp returns zero even when there are errors (see the sketch after this list).
  • The FTP status codes should be standard, but the message text can vary per system. SFTP does not have standard status codes (on our system), and its message text can also vary between systems.
So analyze and thoroughly test the code, look for variations in message text, and make the necessary adjustments.
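A minimal sketch of the first point (host, user and paths are placeholders): with "-b", sftp aborts and exits non-zero when a batched command fails, so $? can be tested; the same commands fed through a here document do not give that guarantee.

sftp_batch=/tmp/sftp_batch.$$
sftp_out=/tmp/sftp_out.$$

echo "put /tmp/somefile /dest/dir/somefile" > $sftp_batch

/usr/bin/sftp -b $sftp_batch someuser@some.host > $sftp_out 2>&1
rc=$?

if [ $rc -ne 0 ]; then
    echo "sftp batch failed, rc=$rc -- see $sftp_out"
fi

rm -f $sftp_batch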

The PeopleCode statement for adding/loading a file is the AddAttachment call shown in the delivered code excerpt below.


The function (AddAttachment) calls the ExecutePutAttachment API, which examines the log file (ftp????_?????.log) for the FTP status code and then sets RETCODE as follows:

System Variable                  &RETCODE Value  FTP Code  Comments
%Attachment_Success              0
%Attachment_Failed               1
%Attachment_Cancelled            2                         user hit "Cancel" instead of "Upload"
%Attachment_FileTransferFailed   3
%Attachment_NoDiskSpaceAppServ   4                         N/A for FTP
%Attachment_NoDiskSpaceWebServ   5                         N/A for FTP
%Attachment_FileExceedsMaxSize   6               552
%Attachment_DestSystNotFound     7
%Attachment_DestSysFailedLogin   8               530       sftp cannot authenticate
%Attachment_FileNotFound         9               550       user "Upload" a 0-byte file; "View" file does not exist
%Attachment_DeleteFailed         10              552       user hit "Delete" but file does not exist
%Attachment_NoFileName           11                        user hit "Upload" without selecting a file
%Attachment_FileNameTooLong      12

You can see that not all status codes are related to FTP. This is due to the fact that only a portion of the transfer process uses FTP; between Browser and Web Server, Web Server and App Server, and App Server and DB Server, FTP is not involved. I tried to test as many FTP codes as I could; these are the ones I saw PS report off the log file. For example, if you set &MESSAGE_LVL (in FILE_ATTACH_WRK PeopleCode) to a value of 1 or 2, you will see a detailed message box after each operation. This is the delivered code:

&RETCODE = AddAttachment(&URL_ID, &ATTACHSYSFILENAME, &FILEEXTENSION, &ATTACHUSERFILE, &FILESIZE);
:::::::::::::::::::::::::::::::::::::
If &MESSAGE_LVL = 1 Or
&MESSAGE_LVL = 2 Then

If (&RETCODE = %Attachment_Failed) Then
MessageBox(0, MsgGetText(137, 1, "File Attachment Status"), 137, 2, "AddAttachment failed");
End-If;
:::::::::::::::::::::::::::::::::::::::
:::::::::::::::::::::::::::::::::::::::
If (&RETCODE = %Attachment_DestSysFailedLogin) Then
MessageBox(0, MsgGetText(137, 1, "File Attachment Status"), 137, 9, "AddAttachment failed: Unable to login into destination system for ftp");
End-If;
If (&RETCODE = %Attachment_FileNotFound) Then
MessageBox(0, MsgGetText(137, 1, "File Attachment Status"), 137, 29, "The file was not found so the operation could not be completed.");
End-If;
/* Error Message for Invalid file or No file name */
If (&RETCODE = %Attachment_NoFileName) Then
MessageBox(0, MsgGetText(137, 1, "File Attachment Status"), 137, 38, "AddAttachment failed: No File Name Specified.");
End-If;
::::::::::::::::::::::::::::::

The API AddAttachment will return zero if it sees this FTP session message in the log file:

226 Transfer complete.

so the SFTP wrapper simply runs:

echo "226 Transfer complete"

after it verifies the PUT operation succeeded. Similarly, the DeleteAttachment API looks for:

250 DELE command successful

Next is a script that ties everything together.

Change Peoplesoft FTP to SFTP - Part 2 | Change Peoplesoft FTP to SFTP - Part 4

Monday, June 8, 2009

Change Peoplesoft FTP to SFTP - Part 2

In the PS generated script, ftp?????_????.txt, PS stores the host, login, cd, lcd, get/put/delete and quit commands. A single file is processed each time. Before running sftp, the correct public/private keys need to be generated, and known_hosts and authorized_keys must be set up for the ftp user account (a key-setup sketch follows step 2 below). Converting the script to run sftp is straightforward, in two steps:

1. grep -e ^lcd -e ^cd -e ^put -e ^delete -e ^get -e ^quit $ftp_cmd_file | sed s/^delete/rm/ | awk '!/^rm/ {print $0} /^rm/ {print $0"\""}' > $sftp_cmd

This extracts all ftp commands, excluding the host and login info, and stores them in a temp sftp script. The ftp "delete" command is replaced by the sftp "rm" command. In addition, for the DELETE operation, PS leaves off the closing ", so it looks like this:

delete "XXXXXX.YYY

FTP does not complain about it, but SFTP is stricter about syntax and will raise an error. The command above changes it to

rm "XXXXXX.YYY"

For the PUT operation, PS inserts a "mkdir" command, which causes SFTP to error out in batch mode. So this command is not picked up by the script; just verify the directory exists beforehand.

2. $SFTP_CMD -b $sftp_cmd `grep ^user $ftp_cmd_file|awk '{print $2}'`@`grep ^open $ftp_cmd_file|awk '{print $2}'` 1>$sftp_log 2>&1
($SFTP_CMD = sftp executable)

This executes SFTP in batch mode, with the host name and login info taken from the original ftp script. A password is not needed, as authentication is by public key.
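A minimal sketch of the one-time key setup mentioned at the top of this post (OpenSSH assumed; the user and host names are placeholders). Run it as the account the App Server uses to execute ftpunx:

# generate a passphrase-less key pair for batch use
ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa

# append the public key to the remote account's authorized_keys
cat $HOME/.ssh/id_rsa.pub | ssh ftpuser@ftp.dest.server \
  'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys'

# connect once manually so the host key gets added to known_hosts
sftp ftpuser@ftp.dest.server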

I embedded a sleep command in the SFTP batch to get a view of the process tree for this:

  • 24781020 - an App Server instance handles the PS connection and performs the request
  • 15798308 - the App Server instance spawns a shell & executes the ftpunx script
  • 19480650 - the ftpunx script runs the sftp command
  • 14291166 - sftp runs ssh to handle the file transfer
  • 2396354 - ssh executes an OS command
This is all it takes to transfer a file using secure FTP with sftp. But this is just the beginning; to fully integrate with PS, the results must be "relayed" back to PS. This is done by examining the SFTP log file. The technique I used is detailed in the next part.

Change Peoplesoft FTP to SFTP - Part 1 | Change Peoplesoft FTP to SFTP - Part 3

Wednesday, June 3, 2009

Peoplesoft Notify Address

Application Package Peoplecode:
PT_WF_NOTIFICATION.Notification.OnExecute

method Send

/* SendMail(0, &strEmailCC, "", &strEmailBCC, &Subject, &Message, &strFileNames, &strFileTitles, &tempEmailReplyTo, ";", &ContentType);*/
SendMail(0, &strEmailTo, &strEmailCC, &strEmailBCC, &Subject, &Message, "/path/path/filename.ext", "xxx.txt", "From_Notify@rver.xx.yy.zz", ";", &ContentType, &tempEmailReplyTo, "sender@srvr1.xx.yy.zz");

Further reading:

SMTP
Internet Messaging protocols (SMTP,POP3 and IMAP4)
SPAM Filtering

Sunday, May 31, 2009

Change Peoplesoft FTP to SFTP - Part 1

Many workplaces are implementing stricter secure data transfer requirements due to increased security concerns. However, the PS delivered file transfer process in v8.49 currently does not support SFTP. These are the steps I used to make the switch. This article applies to file transfers from a client desktop to an FTP server using the delivered ftpunx shell script.

First, a note on the flow of Peoplesoft's File Attachment process. In a 4-tier environment, a file transfer request initiates from the client browser. The file data goes to the Web Server using the HTTP or HTTPS protocol. The Web Server then sends it to the App Server through Jolt. It is in the final leg of the relay, from App Server to destination FTP Server, that FTP/SFTP is employed. When viewing a file, these steps are reversed.

The PeopleCode function that transfers a file is AddAttachment. Multiple pages call this function in multiple ways, some directly and some through layers of Application Package code, but they ultimately call the same function.

A look at the behind-the-scenes actions when adding a file attachment -
1. generating a unique temp file name, such as:

&UniqueName = .OPRID | "_" | String(Month(%Date)) | "_" | String(Day(%Date)) | "_" | String(Year(%Date)) | "_" | String(Hour(%Time)) | "_" | String(Minute(%Time)) | "_" | String(Second(%Time));

The name can vary slightly among different load pages/processes. They follow the general format of a date/time stamp plus the OPRID.

2. When the user hits the Attach button, an entry is inserted into PSFILE_ATTDET:




  • This row will stay in the table as a tag. FILE_SEQ is 0, FILE_SIZE is 5 bytes, indicating the FILE_DATA size. (value is HEX 00 64 00 75 00 on my system)
  • ATTACHSYSFILENAME is a unique file name that will be used to store file data as it loads in chunks of 28K indexed by FILE_SEQ. If no file is selected to load, this row is not removed.
  • All the rows (one per chunk) used during file load are deleted after successfully reassembled on App Server, and before actual FTP runs.

3. The Upload box displays after the above steps:




4. When the user selects a file and hits "Upload", internally PS runs a FileAttachService API that performs these tasks:
  • the tag row in PSFILE_ATTDET is cleared
  • get MAXCHUNKSIZE from PSOPTIONS. Default is 28K.
  • break the file data into chunks of MAXCHUNKSIZE and insert each segment, identified by FILE_SEQ, into PSFILE_ATTDET


(the last segment is smaller because it contains only the remainder of the file, so the file size = (MAX(FILE_SEQ) - 1) * MAXCHUNKSIZE + the FILE_SIZE of the final segment; see the query sketch after the note below)

note: this API is not directly accessible from PeopleCode. PS runs it at the appropriate junctures to perform special-purpose internal tasks.
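A quick way to check the chunk math above, as a sketch (assuming sqlplus is on the PATH, $DB_CONN holds a valid connect string, and the ATTACHSYSFILENAME value is made up; FILE_SIZE is taken to be each chunk's data length as described in step 2):

sqlplus -s "$DB_CONN" <<'EOF'
-- total bytes = (MAX(FILE_SEQ) - 1) * MAXCHUNKSIZE + FILE_SIZE of the final
-- chunk, which is the same as summing FILE_SIZE over all the chunks
SELECT COUNT(*) chunks, SUM(FILE_SIZE) total_bytes
FROM   PSFILE_ATTDET
WHERE  ATTACHSYSFILENAME = 'MYFILE_6_8_2009_10_30_55';
EOF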

Now the data has been saved to a table on the Database Server. Next, PS puts the data back into a file on the App Server.
5. The function AddAttachment is called after the above has completed. PS runs the internal APIs ExecuteGetAttachmentDB & ExecuteDeleteAttachmentDB:
  • reassemble the data from the table back into a file, in FILE_SEQ order; the location of the temp file is /tmp/PSFTP/_?????/????
  • delete the data from PSFILE_ATTDET
Now the file is on the App Server, ready for FTP to its destination. When viewing a file, the above steps are reversed: the file is FTP'ed to the App Server, then Jolt'ed to the Web Server, then HTTP'ed to the client browser.

6. Still in AddAttachment, the API ExecutePutAttachment performs the most important tasks of the process:
  • parse the FTP URL (ftp://UID:PWD@FTP_Dest_Server/xxx/xxx/xxx/_?????.ext) to build a series of FTP commands (open, user, cd, lcd, get/put/delete...) into a temp script in /tmp/ftp???_????.txt (a sample of the generated command file follows this list)
  • verify the FTP script exists
  • run FTP using the FTP script
  • check FTP results in /tmp/ftp?????_?????_?????.log
  • delete FTP script, preserve log if needed
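As an illustration, the generated command file typically looks like the following (all values here are made up; the real file is built from the FTP URL above):

open ftp.dest.server
user ftpuid ftppwd
cd /xxx/xxx/xxx
lcd /tmp/PSFTP/_12345
put "MYFILE_6_8_2009_10_30_55.pdf"
quit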
This is a very critical step because of the sensitive data involved. The ftp???.txt file contains all the FTP commands needed to perform the transfer. PS runs (spawn()?) the FTP wrapper, $PS_HOME/bin/ftpunx, to send the file. The output from the ftp session is stored in ftp???.log and used to verify the results. This is the process as it runs:

/bin/sh $PS_HOME/bin/ftpunx /tmp/ftp????_????_?????.txt

A look at ftpunx:

Two things to note: (1) $FTP_CMD is the ftp executable on the App Server and $1 is the ftp????.txt script; (2) because of the location of "result=$?", it is always going to equal 0. As a result we always see "File Transfer Successful" in the log, even in failed situations. This is misleading but not damaging, because PS does not rely on this check.
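A tiny demonstration of why the placement matters -- $? only reflects the command that ran immediately before it:

false                        # exits with status 1
echo "some other work"       # this succeeds ...
result=$?
echo "captured rc=$result"   # prints 0 -- the earlier failure is already lost

false
result=$?                    # capture right after the command of interest
echo "captured rc=$result"   # prints 1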

If all goes well, the file is now supposed to be FTP'ed to its destination. How does PS verify this? It checks the resulting session message file. Before getting into that, take a look at the changes needed in the script to run SFTP.

Change Peoplesoft FTP to SFTP - Part 2



Saturday, May 30, 2009

Enlarge CSR Description and Resolution Box

The web version of Stat has some nice features over the client desktop version, but the text editing area for Description and Resolution is way too small.


Using a simple Greasemonkey script, I enlarge both to allow for easier editing:

Friday, May 29, 2009

Development Tools -

As a programmer I have used numerous tools for development on DOS, Windows, and Unix: 8086 PC assembly, C, C++, COBOL, PHP, VB, VBScript, JavaScript, ASP, shell, PL/SQL...

I am waiting for the day when voice commands become a reality for programming tasks. For now, as little typing as possible.


My bin

dd -

#!/bin/csh -f
#-------------------------------------------------------------------------------
set parm = 'ls -l'

while (1)
if ($1 == '/od') then
if ($?bysize) then
echo 'Error: cannot sort by both date & size'
exit(1)
else
set bydate = '-t -r'
endif
else
if ($1 == '/os') then
if ($?bydate) then
echo 'Error: cannot sort by both date & size'
exit(1)
else
set bysize = 'sort -n -k 5'
endif
else
if ($1 == '/r') then
set revse = '-r'
else
if ($1 == '/a') then
set hidden = '-a'
else
break
endif
endif
endif
endif
shift
end

#---------------------------------------------------------------------------
if ($?hidden) then
set parm = "$parm $hidden"
endif

if ($?bydate) then
set parm = "$parm $bydate"
endif

if ($?bysize) then
$parm $* | $bysize | more -e
else
$parm $* | more -e
endif

exit(0);



gr -

#!/bin/ksh


# this script greps Process

if [ $1'x' = 'x' ]; then
echo 'Nothing to grep'
exit 1
fi

flag=ef

while getopts :c:l args
do
case $args in

c) clear
shift
;;

l) flag=fL
shift
;;

esac
done

if [ $flag == "ef" ]; then
echo "tracing process by name >>>> ps -ef | grep $* "

ps -ef | grep $* | grep -v ~djen/bin/gr | grep -v grep
else

echo "tracing single process tree >>>> ps -fL "

ps -fL $*
fi


path -

#!/bin/ksh
#-------------------------------------------------------------------------------

if [ $# -eq 0 ]; then

echo $PATH | tr ':' '\n'

else

set -A arPATH `echo $PATH | tr ':' ' '`

idx=0
while [ $idx -lt ${#arPATH[*]} ]
do
if [ -f ${arPATH[idx]}/$1 ]; then
ls -l ${arPATH[idx]}/$1
break
else
((idx=idx+1))
fi
done

fi

My Oracle SQLs

-------------------------------------------------------------------------------------------------------SQL history
http://www.dbasupport.com/oracle/ora10g/session_wait.shtml

select distinct sql_id from v$active_session_history where SESSION_ID=

select distinct sql_id from DBA_HIST_ACTIVE_SESS_HISTORY where SESSION_ID=

-------------------------------------------------------------------------------------------------------DB Objects
--http://www.techonthenet.com/oracle/sys_tables/
all_views
ALL_CATALOG

-------------------------------------------------------------------------------------------------------get Session Id
Select Sid From V$SESSION Where audsid = SYS_CONTEXT('userenv','sessionid');

SELECT SYS_CONTEXT('USERENV', 'SID') FROM DUAL;
 
SELECT sid FROM v$mystat  WHERE ROWNUM = 1;
 
SELECT TO_NUMBER(SUBSTR(DBMS_SESSION.unique_session_id, 1, 4)) sid FROM dual;
 



-------------------------------------------------------------------------------------------------------get User name
Select '*** User = '||USER||' ***' From dual

-------------------------------------------------------------------------------------------------------get db name
Select Name, CREATED, LOG_MODE
From V$DATABASE;

-------------------------------------------------------------------------------------------------------get DB links
Select db_link From user_db_links;

Select A.OWNER, A.HOST, A.DB_LINK, A.USERNAME, A.CREATED,
DECODE (B.FLAG, 0, 'NO', 1, 'YES') "Dec", B.AUTHUSR, C.STATUS
From DBA_DB_LINKS A, SYS.USER$ U, SYS.LINK$ B, DBA_OBJECTS C
Where A.DB_LINK = B.Name And
A.OWNER = U.Name And
B.OWNER# = U.USER# And
A.DB_LINK = C.OBJECT_NAME And
A.OWNER = C.OWNER And
C.OBJECT_TYPE = 'DATABASE LINK'
Order By 1, 2, 3;

------------------------------------------------------------------------------------------------------SYS_CONTEXT: http://www.psoug.org/reference/sys_context.html
Select SYS_CONTEXT('userenv','DB_NAME') From dual

DB_NAME = Database
SERVER_HOST = Name Of server Where DB resides
HOST = Name Of client
SESSION_ID = Session Id
OS_USER = O.S user
SERVICE_NAME = Name Of service connected To A Session (db Name)
SESSION_USER = Name Of user login Id
SESSION_USERID = Identifier Of The Database user Name By which The Current user Is Authenticated.
Sid =

-------------------------------------------------------------------------------------------------------who is running what
Select s.Program, s.schemaname, s.Sid, s.status, s.process, s.osuser, A.sql_text, P.Program
From v$session s, v$sqlarea A, v$process P
Where s.sql_hash_value=A.hash_value And s.sql_address=A.address And s.paddr=P.addr
--and s.schemaname='&1'
And s.status='ACTIVE'


-------------------------------------------------------------------------------------Oracle Parameters (init.ora)
Select Name,
DECODE (Type, 1, 'Boolean', 2, 'String', 3, 'INTEGER', 4, 'FILE', 5, 'RESERVED', 6, 'BIG INTEGER') Type,
VALUE, DESCRIPTION, ISSES_MODIFIABLE, ISSYS_MODIFIABLE, ISDEFAULT,
ISMODIFIED, ISADJUSTED
From V$PARAMETER
Order By Name;

--RBS
Select
r.segment_name segment_name,
r.owner owner,
r.tablespace_name tablespace_name,
r.status status,
ROUND(r.initial_extent/1024/1024) initial_extent,
ROUND(r.next_extent/1024/1024) next_extent,
s.Extents,0 Extents,
ROUND(s.rssize/1024/1024) rssize,
s.xacts active_trans
From
dba_rollback_segs r,
v$rollname n,
v$rollstat s
Where r.segment_name = n.Name
And n.usn = s.usn;

Select 'Database Version = '||banner From v$version Where Rownum = 1;

Select 'Database Block Size = '||value From v$parameter Where Name = 'db_block_size';

----------------------------------------------------------------------------------------------------Date Time difference
Create Or REPLACE Function datediff( p_what In VARCHAR2,
                                     p_d1 In DATE,
                                     p_d2 In DATE ) Return NUMBER
As
    l_result NUMBER;
Begin
    Select (p_d2-p_d1) *
           DECODE( UPPER(p_what),
                   'SS', 24*60*60, 'MI', 24*60, 'HH', 24, Null )
      Into l_result From dual;

    Return l_result;
End;
/

---------------------------------------------------------------------------------------------------who is using rbs?
Select SUBSTR(A.os_user_name,1,8) "OS User"
, SUBSTR(b.object_name,1,30) "Object Name"
, SUBSTR(b.object_type,1,8) "Type"
, SUBSTR(c.segment_name,1,10) "RBS"
, E.process "PROCESS"
, SUBSTR(d.used_urec,1,8) "# Of Records"
, d.used_ublk
From v$locked_object A
, dba_objects b
, dba_rollback_segs c
, v$transaction d
, v$session E
Where A.object_id = b.object_id
And A.xidusn = c.segment_id
And A.xidusn = d.xidusn
And A.xidslot = d.xidslot
And d.addr = E.taddr

----------------------------------------------------------------------------------------------------Re-Connect
Begin sys.dbms_application_info.set_module('T.O.A.D.', Null); End;

-----------------------------------------------------------------------------------------------------Client Info
SET serveroutput On
Declare
XXX VARCHAR(20);

Begin
DBMS_APPLICATION_INFO.SET_CLIENT_INFO ('TOAD1+1');
DBMS_APPLICATION_INFO.READ_CLIENT_INFO (XXX );
DBMS_OUTPUT.PUT_LINE(XXX);
End;


Select P.spid,
substr(s.osuser,1,10) osuser,
substr(s.username,1,8) username,
substr(s.Program,1,24) Program,
substr(s.client_info,1,60) ClientInfo
From v$session s, v$process P
Where s.paddr=P.addr
And s.osuser Is Not Null
--and spid=4767916
Order By s.osuser



-------------------------------------------------------------------------------------------------SET_SESSION_LONGOPS
Select * From V$SESSION_LONGOPS Where sql_address='070000004185C7E0'

Select *-- sql_id
From v$session
Where Sid =24

Select operation
, options
, object_name
, object_alias
, object_type

Select *
From v$sql_plan --WHERE object_name='XXXX'
Where address = --'070000004185C7E0'
( Select sql_address
From v$session
Where Sid =24)

---------------------------------------------------------------------------------------------------------CBO
Select *--index_name,table_name, blevel, avg_data_blocks_per_key, avg_leaf_blocks_per_key, clustering_factor
From user_indexes Where table_name In ('XXXX','YYYY')

Select *--table_name, blocks
From user_tables Where table_name In ('XXXX','YYYY')

EXEC DBMS_STATS.gather_table_stats('orauser', 'XXXX');
EXEC DBMS_STATS.gather_table_stats('orauser', 'XXXX', estimate_percent => 15);

EXEC DBMS_STATS.gather_index_stats('orauser', 'XXXX');
EXEC DBMS_STATS.gather_index_stats('orauser', 'XXXX', estimate_percent => 15);

--more on Statistics
http://www.oracle-base.com/articles/8i/CostBasedOptimizerAndDatabaseStatistics.php

Select table_name, num_rows From dba_tables Where owner='orauser'
Order By 2 desc

Select * From user_tab_cols

col c1 heading 'Average Waits|forFull| Scan Read I/O' format 9999.999
col c2 heading 'Average Waits|for Index|Read I/O' format 9999.999
col c3 heading 'Percent of| I/O Waits|for Full Scans' format 9.99
col c4 heading 'Percent of| I/O Waits|for Index Scans' format 9.99
col c5 heading 'Starting|Value|for|optimizer|index|cost|adj' format 999

Select
A.average_wait ,
b.average_wait ,
A.total_waits /(A.total_waits + b.total_waits) ,
b.total_waits /(A.total_waits + b.total_waits) ,
b.average_wait , A.average_wait
From
v$system_event A,
v$system_event b
Where
A.event = 'db file scattered read'
And
b.event = 'db file sequential read'

Select * From v$system_event

-----------------------------------------------------------------------------------------------------File Status

Select SEGMENT_NAME, STATUS From dba_rollback_segs

Select * From dba_data_files Order By file_id

Select FILE#,status,Error From v$datafile_header

Select Name , status From v$datafile ;
4
/u04/oradata/f752x539/o1_mf_psindex_2pxzbghr_.dbf

-----------------------------------------------------------------------------------------------------AutoNumber in trigger
Create Or REPLACE Trigger TableName_ID_assign
Before Insert On TableName For Each Row
Begin
Select TableName_IDs.Nextval Into :New.Id From dual;
End;
/

Create Procedure TableName_C( p_ID Out TableName.Id%Type ...) Is
v_NewID TableName.Id%Type;
Begin
Select TableName_IDs.Nextval Into v_NewID From dual;
Insert Into TableName (Id ...) Values (v_NewID ...);
p_ID := v_NewID;
End TableName_C;
/


---------------------------------------------------------------------------------------------------Tables/Index in a TableSpace
Select A.Name,c.Name Tablespace,b.analyzetime analyzed,b.rowcnt ,d.bytes, d.bytes/POWER(2,20) MB,d.bytes/POWER(2,30) GB
From sys.obj$ A, sys.tab$ b, sys.ts$ C,DBA_SEGMENTS d
Where A.Name Like 'XXXX%'
And A.obj#=b.obj#
And b.ts#=c.ts#
And A.Name=d.segment_name
And d.segment_type='TABLE'
And d.tablespace_name=c.Name

Select A.Name,c.Name Tablespace,b.analyzetime analyzed,b.rowcnt ,d.bytes, d.bytes/POWER(2,20) MB,d.bytes/POWER(2,30) GB
From sys.obj$ A, sys.ind$ b, sys.ts$ C,DBA_SEGMENTS d
Where A.obj#=b.obj#
And b.ts#=c.ts#
And A.Name=d.segment_name
And d.segment_type='INDEX'
And d.tablespace_name=c.Name
And A.Name Like 'XXXX%'
Order By Name


Select segment_name,segment_type, bytes/POWER(2,20) MB,bytes/POWER(2,30) GB
From DBA_SEGMENTS Where segment_name Like 'XXXX%'

Drop Table XXXX

Select * From sys.tab$

Select * From DBA_SEGMENTS Where segment_name Like 'XXXX%'

-- total bytes
Select SUM(BYTES) From USER_SEGMENTS Where SEGMENT_TYPE='TABLE' And segment_name=UPPER('XXXX')

--how many MB -GB
Select 1073741824/POWER(2,20) MB, 1073741824/POWER(2,30) GB From dual

--move to a different tablespace
Alter Table XXXX Move Tablespace new_table_space


----------------------------------------------------------------------------------------------bind variable VALUES IN 10g
http://halisway.blogspot.com/search/Label/oracle

First have a look in v$sql to find the SQL query you are looking for, then join the sql_id to v$sql_bind_capture to view the bind variable values for that query.
v$session.sql_hash_value = current cursor SQL, v$session.prev_hash_value = previous SQL. Both can be used to look up the SQL text (HASH_VALUE).

Select sql_id,sql_text From v$sql Where users_executing > 0

Select Name, position, DATATYPE_STRING,WAS_CAPTURED,LAST_CAPTURED,VALUE_STRING
From v$sql_bind_capture Where sql_id='5chvypbvbxtau'

Select sql_text From v$sqlarea Where users_executing > 0

--from SID
Select SQL_TEXT,sql_id From V$SQLTEXT_WITH_NEWLINES Where HASH_VALUE=TO_NUMBER((Select sql_hash_value From v$session Where Sid =459 And Type <> 'BACKGROUND')) Order By PIECE


---------------------------------------------------------------------------------------------------Oracle select Index
-->> USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.

Select * From USER_TAB_COLUMNS Where table_name='XXXXX' And column_name='BUSINESS_UNIT'

--USER_TABLES.NUM_ROWS - IF NUM_DISTINCT = NUM_ROWS THEN USING an INDEX would be preferable TO doing a FULL TABLE SCAN.
--AS the NUM_DISTINCT decreases, the cost OF USING an INDEX increase thereby making the INDEX LESS desirable.

Select * From USER_TABLES Where table_name='XXXXX'

Select Clustering_Factor From USER_INDEXES Where index_name='XXXXX'

# USER_INDEXES.Clustering_Factor - This defines how Ordered The Rows are In The Index. If Clustering_Factor approaches The NUMBER Of Blocks In The Table, The Rows are Ordered. If it approaches The NUMBER Of Rows In The Table, The Rows are randomly Ordered. In such A Case, it Is unlikely that Index entries In The same leaf Block will Point To Rows In The same Data Blocks.

# Decrease The INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make The Cost Of A Full Table Scan cheaper.

----------------------------------------------------------------------------------------------- Descending index
Select owner, index_name, uniqueness, status
,INDEX_TYPE, Temporary, PARTITIONED, Logging, Degree, funcidx_status, join_index
From sys.ALL_INDEXES
Where table_owner = 'orauser'
And table_name = 'XXXXX'
Order By index_name

Select * From sys.ALL_IND_COLUMNS Where index_owner = 'orauser' And index_name = 'XXXXXX' Order By column_position

Select column_expression From sys.ALL_IND_EXPRESSIONS Where index_owner = 'orauser'And index_name = 'XXXXX'And column_position =3

Select
INDEX_NAME "Index Name"
,INDEX_TYPE "Index Type"
,UNIQUENESS "Uniqueness"
,STATUS "Status"
,TABLE_OWNER || '.' || TABLE_NAME "Table"
,TABLE_TYPE "Table Type"
,TABLESPACE_NAME "Tablespace"
,Buffer_Pool "Buffer Pool"
,initcap(partitioned) "Partitioned"
,decode(Temporary, 'N', 'No', 'Yes') "Temporary"
,INI_TRANS "Initial Transactions"
,MAX_TRANS "Max Transactions"
,INITIAL_EXTENT "Initial Extent Size"
,NEXT_EXTENT "Next Extent Size"
,MIN_EXTENTS "Minimum Extents"
,MAX_EXTENTS "Maximum Extents"
,PCT_INCREASE "Percent Increase"
,PCT_FREE "Percent Free"
,Freelists "Freelists"
,FREELIST_GROUPS "Freelist Groups"
,Degree "Degree"
,Instances "Instances"
,LAST_ANALYZED "Last Analyzed"
,BLEVEL "BLevel"
,LEAF_BLOCKS "Leaf Blocks"
,DISTINCT_KEYS "Distinct Keys"
,AVG_LEAF_BLOCKS_PER_KEY "Avg Leaf Blocks Per Key"
,AVG_DATA_BLOCKS_PER_KEY "Avg Data Blocks Per Key"
,Clustering_Factor "Clustering Factor"
,NUM_ROWS "Num Rows"
,SAMPLE_SIZE "Sample Size"
,Generated "Generated"
,decode(JOIN_INDEX, 'NO', 'No', 'Yes') "Join Index"
From SYS.USER_INDEXES
Where INDEX_NAME Like 'XXXXX'



--------------------------------------------------------------------------------------------Session Info
--::::::::::Sql statements still In The Cache that have done A great deal Of Disk I/O And Buffer gets. (Buffer gets show approximately The amount Of CPU Resource used.)
Select * From V$SQLAREA --WHERE module LIKE 'T.O.A.D%'

--::::::::: Shows active (in progress) transactions
Select Sid, serial#,s.status,username, terminal, osuser,
T.start_time, r.Name, (T.used_ublk*8192)/1024 USED_kb, T.used_ublk "ROLLB BLKS",
DECODE(T.Space, 'YES', 'SPACE TX',
DECODE(T.recursive, 'YES', 'RECURSIVE TX',
DECODE(T.noundo, 'YES', 'NO UNDO TX', T.status)
)) status
From sys.v_$transaction T, sys.v_$rollname r, sys.v_$session s
Where T.xidusn = r.usn
And T.ses_addr = s.saddr



Select
substr(s.username,1,18) username,
substr(s.Program,1,15) Program,
decode(s.command,
0,'No Command',
1,'Create Table',
2,'Insert',
3,'Select',
6,'Update',
7,'Delete',
9,'Create Index',
15,'Alter Table',
21,'Create View',
23,'Validate Index',
35,'Alter Database',
39,'Create Tablespace',
41,'Drop Tablespace',
40,'Alter Tablespace',
53,'Drop User',
62,'Analyze Table',
63,'Analyze Index',
s.command||': Other') command
From
v$session s,
v$process P,
v$transaction T,
v$rollstat r,
v$rollname n
Where s.paddr = P.addr
And s.taddr = T.addr (+)
And T.xidusn = r.usn (+)
And r.usn = n.usn (+)
Order By 1
;

----------------------------------------------------------------------------------------------------- reset PW
Select username, Password
From DBA_USERS Where username='XXXX'

Alter USER XXXX Identified By XXXXX

------------------------------------------------------------------------------------------------- how many users in DB, CPU usage
Select rpad(c.Name||':',11)||chr(9)||chr(9)|| rpad(' Current logons='||(to_number(b.sessions_current)-1),20)||chr(10)|| 'cumulative logons='||rpad(substr(A.value,1,12),12)||chr(9)|| 'highwater mark='||b.sessions_highwater||chr(9)
From v$sysstat A, v$license b, v$database c
Where A.Name = 'logons cumulative'

Select substr(sn.Name,1,30) parameter, ss.username ||' ('|| se.Sid ||') ' user_process, se.value, ss.osuser,ss.process,ss.Program,ss.logon_time,ss.client_info,ss.last_call_et
From v$session ss, v$sesstat se, v$statname sn
Where se.statistic# = sn.statistic#
And sn.Name Like '%CPU used by this session%'
And se.Sid=ss.Sid
Order By sn.Name, se.value DESC


---------------------------------------------------------------------------------------------- Installed Packages
Select *--owner, object_type
From dba_objects Where object_name = 'UTL_FILE'


Select * From dba_objects Where OBJECT_TYPE='PACKAGE' Order By 2


Select *--owner, object_type
From dba_objects Where owner='PUBLIC' And OBJECT_TYPE='SYNONYM'
And OBJECT_name In (Select OBJECT_name From dba_objects Where owner='SYS' And OBJECT_TYPE='PACKAGE')


---------------------------------------------------------------------------------------------- Random 1/10 records select
select a.* from table1 a
where dbms_utility.get_hash_value(dump(key_field),0,100)<10


---- create 10000 random strings
select dbms_random.string('x', 10),
   case when level <= 5000 then 1 else 0 end,
   level
from dual
connect by level <= 10000

---------------------------------------------------------------------------------------------- Displaying multiple column values per row
http://www.dba-oracle.com/t_display_multiple_column_values_same_rows.htm


---------------------------------------------------------------------------------------------- parse a string
(orig: http://www.dbforums.com/oracle/998642-pl-sql-splitting-string-into-array.html)

modified form:

SELECT substr(main_string, position_from + 1, decode(position_to, 0, length(main_string),position_to - position_from - 1))
FROM (SELECT main_string,
decode(rownum - 1, 0, 0, instr(main_string, ',', 1, rownum - 1)) position_from,
instr(main_string, ',', 1, rownum) position_to
FROM (SELECT '111,123,aaabbb,555,AAA' main_string
FROM dual)
CONNECT BY LEVEL <= length(main_string))
WHERE position_to > 0 or position_from > 0


-------------------------------------------------------Random # & strings

SELECT dbms_random.value(1,2000), dbms_random.string('U', 10),dbms_random.string('L', 10),dbms_random.string('A', 20) ,
dbms_random.string('X', 20),dbms_random.string('P', 20)
FROM dual

my quick ksh

Numeric test -

1.

nonums="$(echo $answer | sed 's/[0-9]//g')"


if [ ! -z "$nonums" ] ; then
echo "Not an integer value."
fi

2.
echo -n "Pick a number between 1 and 20: "
read answer

nonums="$(echo $answer | sed 's/[0-9]//g')"

if [ ! -z "$nonums" ] ; then
echo "Not an integer value."
exit 0
fi

if [ $answer -lt 10 ] ; then
echo "Your answer is less than ten"
else
echo "Your answer is not less than ten"
fi

3.
case xxx in
+([0-9]))
echo 1
;;
* )
echo 0
;;
esac

4. 
stackflow:
re='^[0-9]+$'
if ! [[ $yournumber =~ $re ]] ; then
   echo "error: Not a number" >&2; exit 1
fi


Parse file Name & Ext -

file=xxxx.yyy

nam=`echo ${file%.*}`
ext=`echo ${file#*.}`



remove CR from EOF -

  • tr '\012' ' '; echo
  • awk '{printf "%s ", $0} END { print "" }'



Ksh Version -

at a ksh prompt - print xxx

$ set|egrep -i ver

$ what /bin/ksh | egrep Version


Email -

simple text body:

  • cat file1 | mailx -s "subject" user@host.com
  • mailx -s "subject" user@host.com < file1
attachments:
  • uuencode file1 file1 | mailx -s "subject" user@host.com
  • (uuencode file1 file1; uuencode file2 file2) | mailx -s "subject" user@host.com
both:
  • (cat file1; uuencode file1 file1;uuencode file2 file2 ) | mailx -s "subject" user@host.com
save attachment in mbox, then uudecode to retrieve the file (see the sketch below).
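Retrieval sketch (message number and path are examples): inside mailx, save the message to a file with "s 3 /tmp/msg3", quit, then decode the uuencoded attachment out of it:

uudecode /tmp/msg3      # recreates the file named on the "begin" line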

sendmail:

#!/bin/ksh
set -xv
# sendmail header function
fn_sendmail()
{
# Build sendmail header.
echo "From: user1@test.com"
echo "To: user2@test2.com"
echo "Reply-to: user3@test3.com, user4@user4"
echo "Subject: ${SUBJECT}"
echo "X-MSMail-Priority: High"
echo "importance: 1"
#echo "Mime-Version: 1.0"
#echo "Content-Type: text/plain"
#echo "Content-Type: multipart/mixed"
# blank line separates the header from the body
echo ""

# message body
echo "Hi XXX,

This is a test email

YYY
"
# uuencode the attachments, add checks if required file extension,
for k in file1 file2
do
uuencode $k $k
done

}

SUBJECT=test_subject

# Call sendmail header and pipe to sendmail.
(fn_sendmail)| sendmail -t -oi
##



My quick unix commands

AIX v5.3

CRON


There are four common causes for cron job commands to behave differently compared to commands typed directly into an interactive shell (a minimal crontab sketch follows the list):

  • Cron provides a limited environment, e.g., a minimal $PATH, with other expected variables missing.
  • Cron invokes /bin/sh by default, whereas you may be using some other shell interactively.
  • Cron treats the % character specially (it is turned into a newline in the command); precede it with "\".
  • The command may behave differently because it doesn't have a terminal available.
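A minimal crontab sketch illustrating the points above (Vixie/Linux cron syntax; paths are examples -- AIX cron does not read variable assignments from the crontab, so there the PATH would be set inside the script itself):

SHELL=/usr/bin/ksh
PATH=/usr/bin:/bin:/usr/local/bin
# % must be escaped, otherwise cron turns it into a newline
0 2 * * * /home/me/bin/nightly.sh > /tmp/nightly_`date +\%Y\%m\%d`.log 2>&1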

FIND
  • find + multiple exec + pipe + redirect:

find . -name *.AET -exec ksh -c 'grep ^"\-\- [0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}.*(PeopleCode)" {}  |uniq|wc -l >> xxx'  \; -exec ksh -c 'grep ^"\-\- [0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}.*(PeopleCode)" {} | wc -l >> yyy' \;

source: https://stackoverflow.com/questions/5119946/find-exec-with-multiple-commands

find accepts multiple -exec portions to the command. For example:

find . -name "*.txt" -exec echo {} \; -exec grep banana {} \;

Note that in this case the second command will only run if the first one returns successfully, as mentioned by @Caleb. If you want both commands to run regardless of their success or failure, you could use this construct:

find . -name "*.txt" \( -exec echo {} \; -o -exec true \; \) -exec grep banana {} \;

  • modified more than 3 days ago: find . -mtime +3
  • modified within the last 2 days: find . -mtime -2

find only in current dir, exclude sub-dirs: find . \( ! -name . -prune \) \( -name "xyz.dat*" -o -name "abc*" \)

find in all sub-dirs: find . -type d -print

find only in 1st level sub-dirs: find . \( ! -name . -prune \) -type d -print

find only in 1st level sub-dirs, folder names matching: find . \( ! -name . -prune \) \( -type d -a \( -name "5AA00" -o -name "5X[0789]00" \) \)


find files > or < a given size:

find . -type f -size +(-)100000c

more - http://www.unix.com/answers-frequently-asked-questions/13863-advanced-complex-uses-find-command.html


On RH Linux 5:

  • find . -type f -cmin -60 # creation time
  • find . -type f -amin -60 # access time
  • find . -type f -mmin -60 # modify time
  • find . -maxdepth 5 -mindepth 3 # sub-folders
  • find . -newermt "2011-01-01" ! -newermt "2011-12-31"  # between dates
* Find based on calendar days - -daystart (https://unix.stackexchange.com/questions/257960/how-do-i-find-files-older-than-1-days-using-mtime/257966)

      -daystart -mtime 0 means today 
-daystart -mtime +0 means before today.

Executing awk on each file found with find, then redirecting the result to a new filename
find -type f -iname "*.txt" -exec awk '{print $1, $2 >(FILENAME "-new")}' {} +
* Find TABs:


1. grep -P "\t" foo.txt
2. grep "$(printf '\t')" foo.txt
3. awk '/\t/'
4. sed -n '/\t/p'
5. grep $'\t' foo.txt 



grep - exclude multiple patterns


1. grep -v PSORA| grep -v "Store Field:"
2. grep -Fv -e PSORA -e "Store Field:"
3. grep -Ev "PSORA|Store Field:" 







Grab file records by line #
awk '{if (NR>=11895 && NR<=11905) print $0}' xxx > yyy

awk 'NR==11895, NR==11905' xxx > yyy
awk '{if (NR==11895 || NR==11905) print $0}' xxx > yyy

Get Environment variables -

64 ways to get environment variables: http://rosettacode.org/wiki/Environment_variables
awk 'BEGIN{print "HOME:"ENVIRON["HOME"],"USER:"ENVIRON["USER"]}'

print N records after some pattern:
awk 'c&&c--;/pattern/{c=N}' file




groups -

find groups a user belongs to: groups user1 user2...
find users in a group: lsgroup -f group
(file: /etc/group)



IFS - change field delimiter

see current value:

$echo "$IFS" | od -b
0000000 040 011 012 012
0000004

$echo "$IFS" | od -c
0000000 \t \n \n
0000004

changing IFS to ","

IFS_SAV=$IFS
IFS=,
line='a,,b,c,d'

for x in $line
do
echo $x
done



ls - get file date time stamp
ls -l = modification time
ls -lu = access time
ls -lc = status change time (ctime)
ls -ld *dirname* = show folder info, no sub-folders, no files
ls -ld */ = show folder info, no sub-folders, no files
du -sh <folder> = total space used by folder



ksh93: ls -l ~(i:a*) 


port -

AIX - This file shows the usage of ports -
/etc/services
/opt/freeware/sbin
lsof -i :port#


nmap -p


netstat -plunt
netstat -tap

(port # may be replaced by its symbolic name in /etc/services, for ex:
commplex-link   5001/tcp                #
commplex-link   5001/udp                #


if so, use lsof -i :<#> or netstat -an | grep # )

nc -z ; echo $?


proctree -

proctree (-a) PID
procfiles -

procfiles (-n) PID






ps -
for user xxx
>> ps -fuxxx
>> ps -fl -uxxx

all jobs by xxx sort by cpu
>> ps -fuxxx | sort +3

all jobs related to PID ??????
>> ps -fL ??????

display environment of another process:

ps eww PID | tr ' ' '\n' | grep ORACLE_SID

check process priority:

ps -eo pid,state,nice,args | less -S
ps -o user,cpu,pcpu,pid,etime,time,comm -p 19234892
ps -lef

change priority:
nice/renice

***** look at process ENV *****
e Displays the environment as well as the parameters to the command, up to a limit of 80 characters.
ew Wraps the display from the e flag one extra line.
eww Wraps the display from the e flag and displays the ENV list until the flag reaches the LINE_MAX value.
ewww Wraps the display from the e flag and displays the ENV list until the flag reaches the INT_MAX value.

==>ps -ef | grep db2sysc
instv9 24060 24059 0 16:33 pts/10 00:00:00 db2sysc 0
instv9 24114 23951 0 16:43 pts/10 00:00:00 grep db2sysc


==> ps ewww 24060
PID TTY STAT TIME COMMAND
24060 pts/10 S 0:00 db2sysc 0 HOME=/home/instv9 KRB5CCNAME=
FILE:/tmp/krb5cc_p23950 PWD=/home/instv9 DB2INSTANCE=instv9 DB2INSTDEF=instv9 DB2LPORT=0 DB2NODE=0
HOSTNAME=xxxxxxx.com LANG=en_US.UTF-8 USER=instv9 CLASSPATH=/home/instv9/sqllib/java/db2java.zip:
/home/instv9/sqllib/java/db2jcc.jar:/home/instv9/sqllib/java/sqlj.zip:/home/instv9/sqllib/function:/home/instv9/

ps wwee       








read -
(break fields on a record/line into variables)

>> move 1st field of each line to end of line (xx=1st field, yy=rest of fields)
while read -r xx yy
do
printf "%s %s\n" "$yy" "$xx"
done < file

>> To read a line and split it into fields, and use "enter ur name:" as a prompt
read word1?"enter ur name: " word2 word3 word4....

>> read more than one value into a variable (break up the list by using for)
$ read x y z; echo $x; echo $y; echo $z
one two three four
$one
$two
$three four

>> show the oldest file
: ls -tr| read xxx; echo $xxx
: ls -tr |&
read -p $xxx
echo $xxx

>> read from a file into variables
exec 3< file
read -u3 var1 var2


sed -
  • print a line with a pattern & the next line
search string = APPSTVCH.BI120-3

sed -n '
/APPSTVCH.BI120-3/ {
N
/\n.*1/ p
}' APPSTVCH_703295.log



  • Handy one-liners for SED: http://www.catonmat.net/blog/sed-one-liners-explained-part-two/

(great site with a # of one-liners-explained http://www.catonmat.net/)

 # print 1 line of context before and after regexp, with line number
# indicating where the regexp occurred
sed -n -e '/regexp/{=;x;1!p;g;$!N;p;D;}' -e h



  • Find text after tag/pattern:
      sed -n -e 's/^.*\(COM Stmt=\)//p'  xxx 
  1.  awk '{for (I=1;I<=NF;I++) if ($I == "FROM") {print $(I+1)};}' file
  2.  grep -oP "FROM\s+\K\w+" file

      -o ==> option for printing only the matching part of the line

      -P ==>  use perl-regexp

      \K ==>  do not print what comes before \K (zero-width look-behind assertion)

      \w ==>  match word characters



      3. awk -F 'FROM' '{print $2}' file | awk '{print $1}'  -- works if only 1 WORD per line



sort -

>> sort by column 4-14 of 1st field:

sort -k 1.4,1.14

>> sort by numeric, 3rd field, delimited by ':'
sort -n -t ':' +2 /etc/passwd

>> second field as the sort key
sort -k 2,2 infile # new style
sort +1 -2 infile # old style

>> reverse sort, contents of infile1 and infile2, placing the output in outfile and using the second character of the second field as the sort key (assuming that the first character of the second field is the field separator):

sort -r -o outfile -k 2.2,2.2 infile1 infile2 # new style key definition used
sort -r -o outfile +1.1 -1.2 infile1 infile2 # old style key definition used

>> sorts the contents of infile1 and infile2 using the second non-blank character of the second field as the sort key:

sort -k 2.2b,2.2b infile1 infile2
sort +1.1b -1.2b infile1 infile2

>> prints the passwd(4) file (user database) sorted by the numeric user ID (the third colon-separated field):

sort -t ':' -k 3,3n /etc/passwd
sort -t ':' +2 -3n /etc/passwd
sort -n -t ':' -k 3,3 /etc/passwd

>> imitate uniq using sort: either of the following commands prints the lines of the already sorted file infile, suppressing all but one occurrence of lines having the same third field:

sort -um -k 3.1,3.0 infile
sort -um +2.0 -3.0 infile

The -n option informs sort to compare the specified field as numbers, not ASCII characters. The r option reverses the order of the sort.

sort -t: +5 -6 +0 -1 /etc/passwd # The output is now sorted by field 6, then by field 1 if necessary.

  • using TAB as delimiter, sort on 5th field: sort -t "`/bin/echo '\t'`" +6 xx
>> sort alphabetically on the 1st field, numerically on the 2nd field, delimited by ',':
sort -t"," -k1,1 -k2n,2 file

>> use a range of hexadecimal escapes in square brackets:

LC_ALL=C sed 's/[\xE0-\xEF]/_/g'


touch -

- change DTTM stamp

03:04:55 a.m. on Jan 2, 1985
>>touch -t 198501020304.55 program.c

use the time stamp of another file
>>touch -r file1 program.c

avoid creating a new file, enter
>>touch -c program


touch: cannot touch xxx: No space left on device
check:
* spaces - df 
* inodes - df -i
* deleted - lsof -nP | grep 'deleted'




viminfo - https://stackoverflow.com/questions/2816719/clear-certain-criteria-from-viminfo-file
to manually edit entries - 

:set viminfo= turns off the auto-saving of info to the .viminfo file. If you don't do this, Vim will overwrite all your changes when you quit.