- paste -d " " - -
- sed 'N;s/\n/ /' yourFile
- xargs -l2
- xargs -n2 -d'\n'
- awk '{key=$0; getline; print key ", " $0;}'
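A quick sanity check of two of the variants above on a throwaway file (the file name is illustrative):

```shell
# Make a 4-line sample file, then join each pair of lines with the
# paste and sed variants from the list above.
printf 'a\nb\nc\nd\n' > /tmp/pairs.txt
paste -d " " - - < /tmp/pairs.txt
sed 'N;s/\n/ /' /tmp/pairs.txt
# both print:
# a b
# c d
```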
Monday, June 1, 2020
Combine 2 lines into 1 line
Tuesday, July 2, 2019
Check Linux Email
source: linux-troubleshoot-outbound-email/
* check MTA:
ls -l /etc/alternatives/mta
If the command returns a link to “/usr/sbin/sendmail.postfix”, your system is configured to use postfix.
If it returns a link to “/usr/sbin/sendmail.sendmail”, your system is configured to use sendmail.
* check Relay:
postfix: grep ^relayhost /etc/postfix/main.cf
sendmail: grep ^DS /etc/mail/sendmail.cf
* mail queue:
mailq
* user mailbox: /var/spool/mail/$USER, /var/mail/$USER
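The MTA check can be wrapped in a small helper. This is a minimal sketch: the symlink targets are the two quoted above, and `mta_flavor` is a made-up name:

```shell
# Map the /etc/alternatives/mta symlink target to an MTA name
# (targets as quoted above; "unknown" covers anything else).
mta_flavor() {
  case "$1" in
    */sendmail.postfix)  echo postfix ;;
    */sendmail.sendmail) echo sendmail ;;
    *)                   echo unknown ;;
  esac
}
# On a real system:
mta_flavor "$(readlink /etc/alternatives/mta 2>/dev/null)"
```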
Thursday, June 27, 2019
How to find the read offset of a file opened by a process
source: https://unix.stackexchange.com/questions/34751/how-to-find-out-the-file-offset-of-an-opened-file
1. $ cat /proc/687705/fdinfo/36
pos: 26088
flags: 0100001
in case of symbolic links: readlink /proc/$PID/fd/$N
2. $ lsof -o file1.st1
COMMAND PID USER FD TYPE DEVICE OFFSET NODE NAME
process 24017 user1 18r REG 253,3 0x14a85000 6330199 file1.st1
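The pos field can be demonstrated on the current shell's own descriptors (Linux only; the /tmp path and fd number 3 are illustrative):

```shell
# Open a 10-byte file on fd 3, consume exactly 5 bytes through a
# duplicated descriptor, then read the offset back from fdinfo.
printf '0123456789' > /tmp/off_demo.txt
exec 3< /tmp/off_demo.txt
dd bs=1 count=5 <&3 > /dev/null 2>&1
pos=$(awk '/^pos:/{print $2}' /proc/$$/fdinfo/3)
echo "$pos"    # prints 5
exec 3<&-
```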
Monday, November 26, 2018
Send HTML email in Linux
There is a lot of Q&A out on the web; these are the best approaches I could find.
On my system, two implementations work:
1. sendmail
(echo "From: sender@aaa.com";
echo "To: receiver@bbb.com";
echo "Reply-To: all@ccc.com";
echo "Subject: HTML test";
echo "Content-Type: text/html";
echo "MIME-Version: 1.0";
echo "";
echo "<b>line1</b>";
echo "line2";
echo "<b>line3</b>";
) | sendmail -t
2. mutt
export REPLYTO="replyTo"
mutt -e 'set content_type="text/html"' -s "test" xxx@yyy.com <$FILE
* cannot get mailx to include mail header info
* REPLYTO can store very long text, but mutt only takes around 225 characters, so if the reply list is long, switch to sendmail.
Monday, October 22, 2018
find folder older than n days and remove
- find folder older than 30 days and remove
## display folders
find . -type d -ctime +30 -exec echo -ne "folder:" {} "\t " \; -exec stat --format=%y {} \; | sort -k 3,3
## remove folders
find . -type d -ctime +30 -exec rm -r {} \;
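A sketch that verifies the age filter on back-dated directories. It uses -mtime with GNU touch -d, because a directory's ctime cannot be set for a test, while the commands above use -ctime; the /tmp names are made up:

```shell
# One back-dated and one fresh directory; only the old one should match.
base=$(mktemp -d)
mkdir "$base/old_dir" "$base/new_dir"
touch -d '40 days ago' "$base/old_dir"
found=$(find "$base" -mindepth 1 -type d -mtime +30)
echo "$found"    # only .../old_dir
```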
Wednesday, October 14, 2015
Access Env of another process
- Read env of another process: http://unix.stackexchange.com/questions/29128/how-to-read-environment-variables-of-a-process
- xargs --null --max-args=1 echo < /proc/<pid>/environ
- ps wwwe <pid> | tr ' ' '\n'
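A self-contained check of the /proc environ trick, using a child shell and a made-up variable name (Linux only):

```shell
# Export FOO into a child sh, then read it back from that child's
# /proc/<pid>/environ; entries are NUL-separated, so tr makes them lines.
FOO=bar sh -c 'tr "\000" "\n" < /proc/$$/environ | grep "^FOO="'
# prints: FOO=bar
```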
- Orphan vs Zombie vs Daemon processes: http://gmarik.info/blog/2012/08/15/orphan-vs-zombie-vs-daemon-processes
- How To Kill Defunct Or Zombie Process: source: http://www.linuxnix.com/how-to-kill-defunct-or-zombie-process/
Tuesday, August 4, 2015
Process killed due to OOM
Original article:
http://unix.stackexchange.com/questions/128642/debug-out-of-memory-with-var-log-messages
" The kernel will have logged a bunch of stuff before this happened, but most of it will probably not be in
/var/log/messages
, depending on how your (r)syslogd is configured. Try:
grep oom /var/log/*
grep total_vm /var/log/*
The former should show up a bunch of times and the latter in only one or two places. That is the file you want to look at. Find the original "Out of memory" line in one of the files that also contains total_vm. Thirty seconds to a minute (could be more, could be less) before that line you'll find something like:
kernel: foobar invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
You should also find a table somewhere between that line and the "Out of memory" line with headers like this:
[ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
This may not tell you much more than you already know, but the fields are:
- pid The process ID.
- uid User ID.
- tgid Thread group ID.
- total_vm Virtual memory use (in 4 kB pages)
- rss Resident memory use (in 4 kB pages)
- nr_ptes Page table entries
- swapents Swap entries
- oom_score_adj Usually 0; a lower number indicates the process will be less likely to die when the OOM killer is invoked.
I am not certain how significant nr_ptes and swapents are, although I believe these are factors in determining who gets killed.
This is not necessarily the process using the most memory, but it very
likely is. For more about the selection process, see here.
Basically, the process that ends up with the highest oom score is
killed -- that's the "score" reported on the "Out of memory" line;
unfortunately the other scores aren't reported but that table provides
some clues in terms of factors. Again, this probably won't do much more than illuminate the obvious: the system ran out of memory and
mysqld
was chosen to die because killing it would release the most resources. This does not necessarily mean mysqld
is doing anything wrong. You can look at the table to see if anything
else went way out of line at the time, but there may not be any clear
culprit: the system can run out of memory simply because you misjudged
or misconfigured the running processes."
> free -m
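The grep triage described above can be rehearsed on a fabricated log file. The contents below are illustrative; only the oom-killer line format comes from the quoted answer:

```shell
# Write a fake log and run the two greps from the quoted answer.
log=/tmp/fake_messages
cat > "$log" <<'EOF'
kernel: foobar invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
kernel: [ 1234]  1000  1234  654321  12345  42  0  0  mysqld
kernel: Out of memory: Kill process 1234 (mysqld) score 567 or sacrifice child
EOF
grep -l oom "$log"        # name of file(s) containing oom lines
grep -c total_vm "$log"   # how many total_vm hits in that file
```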
Wednesday, June 24, 2015
Display Tivoli Job Name next to logs
This ksh function lt (list tivoli) displays the Job Stream + Job names next to the log file listing, so we can see which job created each log. It is based on "ls -ltr" with Job names appended to each log file name.
Tivoli logs are stored in ~maestroX/stdlist/YYYY.MM.DD. They keep the job execution details, but it is hard to tell which job a log file belongs to. This function makes the log files easy to investigate.
Change into a stdlist/YYYY.MM.DD folder and run lt. Output:
-rw-r--r-- 1 user1 uGroup 1543 Jun 24 00:05 O33964244.0004 (JOBSTREAM1.JOB1:0/1)
-rw-r--r-- 1 user1 uGroup 1060 Jun 24 00:05 O33964246.0005 (JOBSTREAM1.JOB2:1/0)
-rw-r--r-- 1 user1 uGroup 1578 Jun 24 00:10 O31059994.0010 (JOBSTREAM2.JOB1:0/4)
-rw-r--r-- 1 user1 uGroup 1543 Jun 24 00:10 O34291814.0009 (JOBSTREAM3.JOB1:0/1)
-rw-r--r-- 1 user1 uGroup 1060 Jun 24 00:15 O35487936.0015 (JOBSTREAM3.JOB2:1/0)
-rw-r--r-- 1 user1 uGroup 1060 Jun 24 00:15 O35389680.0015 (JOBSTREAM2.JOB2:1/10)
-rw-r--r-- 1 user1 uGroup 1058 Jun 24 00:15 O34934840.0015 (JOBSTREAM4.JOB1:1/12)
-rw-r--r-- 1 user1 uGroup 1059 Jun 24 00:15 O34775842.0015 (JOBSTREAM4.JOB1:0/36)
-rw-r--r-- 1 user1 uGroup 1059 Jun 24 00:15 O34775841.0015 (JOBSTREAM5.JOB1:-/-)
- First # after Job Name is the Exit Status. '-' if still running
- 2nd # after Job Name is Elapsed Time. '-' if still running
- grep on a Job Stream to see all Jobs within and processing time
- grep on a Job across dates to see overall performance
linux/ksh:
lt(){ for x in `ls -tr $* | grep -E "O[0-9]*.[0-9]{4}$"`; do echo -e `ls -l $x` "\c" ; grep ^"= JOB" $x | sed "s/[:#\\[,.]/ /g" | awk '{printf "(%s.%s", $4,$8}'; echo `grep '^= Exit Status' $x`|sed 's/ : /:/g'| awk -F: '{printf ": %s/", (length($0) == 0)?"-":$2}'; echo `grep 'Elapsed' $x`|sed 's/ : /:/g'| awk -F: '{printf "%s)\n", (length($0) == 0)?"-":$3}';done; }
linux/bash:
lt(){ for x in `ls -tr $* | grep -E "O[0-9]*.[0-9]{4}$"`; do
echo -e `ls -l $x` "\c" ;grep ^"= JOB" $x | sed "s/[:#\\[,.]/ /g" | awk '{printf "(%s.%s", $4,$8}';
echo `grep '^= Exit Status' $x`|sed 's/ : /:/g'| awk -F: '{printf ": %s/", (length($0) == 0)?"-":$2}'; echo `grep 'Elapsed' $x`|sed 's/ : /:/g'| awk -F: '{printf "%s)\n", (length($0) == 0)?"-":$3}';done;}
aix/ksh:
lt(){ for x in `ls -tr $* | grep -E "O[0-9]*.[0-9]{4}$"`; do
echo `ls -l $x` "\c" ;
grep ^"= JOB" $x | sed "s/[:#\\[,.]/ /g" | awk '{printf "(%s.%s", $4,$8}'; echo `grep '^= Exit Status' $x`|sed 's/ : /:/g'| awk -F: '{printf ": %s/", (length($0) == 0)?"-":$2}'; echo `grep 'Elapsed' $x`|sed 's/ : /:/g'| awk -F: '{printf "%s)\n", (length($0) == 0)?"-":$3}';
done; }
Tuesday, February 10, 2015
Print lines between patterns
Source: www.shellhacks.com/en/Using-SED-and-AWK-to-Print-Lines-Between-Two-Patterns
File:
I Love Linux
***** BEGIN *****
BASH is awesome
BASH is awesome
***** END *****
I Love Linux
- sed
sed -n '/StartPattern/,/EndPattern/p' FileName   # inclusive of both patterns
Option | Description |
---|---|
-n, --quiet, --silent | Suppress automatic printing of pattern space |
p | Print the current pattern space |
sed -n '/BEGIN/,/END/p' info.txt
***** BEGIN *****
BASH is awesome
BASH is awesome
***** END *****
- awk
awk '/StartPattern/,/EndPattern/' FileName
Example :
awk '/BEGIN/,/END/' info.txt
***** BEGIN *****
BASH is awesome
BASH is awesome
***** END *****
Another VERY cool snippet:
source:
http://stackoverflow.com/questions/9476018/split-text-file-into-parts-based-on-a-pattern-taken-from-the-text-file
BEGIN { fn=0 }
NR==1 { next }
NR==2 { delim=$1 }
$1 == delim {
f=sprintf("test%02d.txt",fn++);
print "Creating " f
}
{ print $0 > f }
- initialize output file number
- ignore the first line
- extract the delimiter from the second line
- for every input line whose first token matches the delimiter, set up the output file name
- for all lines, write to the current output file
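A quick run of the splitter in a scratch directory; the input content and DELIM token are made up:

```shell
# The script takes the delimiter token from line 2 and starts a new
# test##.txt at every delimiter line; line 1 is skipped entirely.
dir=$(mktemp -d); cd "$dir"
printf 'skip me\nDELIM\nalpha\nDELIM\nbeta\n' > input.txt
awk 'BEGIN { fn=0 }
     NR==1 { next }
     NR==2 { delim=$1 }
     $1 == delim { f=sprintf("test%02d.txt",fn++); print "Creating " f }
     { print $0 > f }' input.txt
# prints: Creating test00.txt / Creating test01.txt
cat test00.txt
```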
---------------------------------------- another ex:
* file: TEST_TAF PREF: RAC1 RAC2 RAC3 ...... AVAIL: RAC4 (PREF, AVAIL)
> sed 's/.*PREF//;s/AVAIL.*//' yourfile
> sed 's/.*PREF: //;s/ AVAIL.*//;s/  */,/g' yourfile
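Checked against a sample line built from the example record. Note the last substitution needs a one-or-more-spaces pattern (`s/  */,/g`); with `s/ */,/g` the zero-or-more match inserts commas at every position:

```shell
# Keep only the PREF list and comma-separate it.
echo 'TEST_TAF PREF: RAC1 RAC2 RAC3 AVAIL: RAC4' |
  sed 's/.*PREF: //; s/ AVAIL.*//; s/  */,/g'
# prints: RAC1,RAC2,RAC3
```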
* https://nixtip.wordpress.com/2010/10/12/print-lines-between-two-patterns-the-awk-way/
file:
test -3
test -2
test -1
OUTPUT
top 2
bottom 1
left 0
right 0
page 66
END
test 1
test 2
test 3
> awk '/OUTPUT/ {flag=1;next} /END/{flag=0} flag {print}' file
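Run against an abbreviated version of the sample file above (piped via printf here):

```shell
# Lines strictly between OUTPUT and END are printed; the markers are
# excluded (next skips OUTPUT, and flag=0 fires on END before print).
printf 'test -1\nOUTPUT\ntop 2\npage 66\nEND\ntest 1\n' |
  awk '/OUTPUT/ {flag=1;next} /END/{flag=0} flag {print}'
# prints: top 2 / page 66
```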
-------------------------- Another Ex: starting with multiple patterns
sed -n '/\(12:05:43.376\|Begin FS_BP.Init.*Init.OnExecute\)/,/12:05:43.605/p' AE_FS_BP_2740745.trc
More examples ------ https://stackoverflow.com/questions/18185771/extract-nth-line-after-matching-pattern
To extract the Nth line after a matching pattern you want:
awk 'c&&!--c;/pattern/{c=N}' file
e.g.
awk 'c&&!--c;/Revision:/{c=5}' file
would print the 5th line after the text "Revision:".
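A quick check of the Nth-line idiom with N=3 (the sample text is made up):

```shell
# With c=3, the third line after the matching line is printed.
printf 'Revision: 42\na\nb\nc\nd\n' | awk 'c&&!--c;/Revision:/{c=3}'
# prints: c
```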
FYI the following idioms describe how to select a range of records given a specific pattern to match:
a) Print all records from some pattern:
awk '/pattern/{f=1}f' file
b) Print all records after some pattern:
awk 'f;/pattern/{f=1}' file
c) Print the Nth record after some pattern:
awk 'c&&!--c;/pattern/{c=N}' file
d) Print every record except the Nth record after some pattern:
awk 'c&&!--c{next}/pattern/{c=N}1' file
e) Print the N records after some pattern:
awk 'c&&c--;/pattern/{c=N}' file
f) Print every record except the N records after some pattern:
awk 'c&&c--{next}/pattern/{c=N}1' file
g) Print the N records from some pattern:
awk '/pattern/{c=N}c&&c--' file
I changed the variable name from "f" for "found" to "c" for "count" where appropriate as that's more expressive of what the variable actually IS.
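For instance, idiom (e) with N=2 on a tiny made-up input:

```shell
# Print the 2 records after the line matching /pattern/, excluding
# the matching line itself (c is set only after the test c&&c--).
printf 'x\npattern\n1\n2\n3\n' | awk 'c&&c--;/pattern/{c=2}'
# prints: 1 then 2, one per line
```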
Thursday, January 29, 2015
Source: http://hacktux.com/bash/bashrc/bash_profile
An Explanation of .bashrc and .bash_profile
Both the ~/.bashrc and ~/.bash_profile are scripts that might be executed when bash is invoked. The ~/.bashrc file gets executed when you run bash using an interactive shell that is not a login shell. The ~/.bash_profile only gets executed during a login shell. What does this all mean? The paragraphs below explains interactive shells, login shells, .bashrc, .bash_profile and other bash scripts that are executed during login.
Login Shells (.bash_profile)
A login shell is a bash shell that is started with - or --login. The following are examples that will invoke a login shell.
sudo su -
bash --login
ssh user@host
When bash is invoked as a login shell, it executes /etc/profile first, then only the first file that exists and is readable among the remaining three below.
- /etc/profile
- ~/.bash_profile
- ~/.bash_login
- ~/.profile
Purely Interactive Shells (.bashrc)
Interactive shells are those not invoked with -c and whose standard input and output are connected to a terminal. Interactive shells do not need to be login shells. Here are some examples that will invoke an interactive shell that is not a login shell.
sudo su
bash
ssh user@host /path/to/command
In this case of an interactive but non-login shell, only ~/.bashrc is executed. In most cases, the default ~/.bashrc script executes the system's /etc/bashrc.
Be warned that you should never echo output to the screen in a ~/.bashrc file. Otherwise, commands like 'ssh user@host /path/to/command' will echo output unrelated to the command called.
Non-interactive shells
Non-interactive shells do not automatically execute any scripts like ~/.bashrc or ~/.bash_profile. Here are some examples of non-interactive shells.
su user -c /path/to/command
bash -c /path/to/command
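The login vs non-login distinction can be checked directly with bash's login_shell shopt (bash-specific; a login shell may also print profile output first):

```shell
# A plain -c shell is non-login; adding -l makes it a login shell.
bash -c  'shopt -q login_shell && echo login || echo non-login'
bash -lc 'shopt -q login_shell && echo login || echo non-login'
```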