sysadmin – Imaginary Billboards (http://www.imaginarybillboards.com)

Perl one-liner to see who is hogging all the open filehandles
Tue, 22 Mar 2011

Helpful one-liner to fix a problem we ran into the other day.

perl -e 'map{$c{(split(/\s+/))[2]}++} `lsof`;print "$_ $c{$_}\n" for (keys %c);'

The thinking is:

Use lsof to get all the open filehandles, which conveniently also shows who has each one open.

`lsof`

Loop through them, using the backticks as a cheat: in list context, `lsof` returns its output as an array of lines.

map {   } `lsof`;

Splitting on whitespace.  Each iteration of the map{ } puts the current line in $_, and if you don't give split anything to split, it uses $_ too.  Neat.

split(/\s+/)

Since we just care about the count per user, take only the third column (the user) by putting the split in list context and using a slice.

(split(/\s+/))[2]

Now, we just want the count for those users, so we increment a hash with the user name as the key.

$c{ }++

Of course, the split returns the user name, so that gives us the hash key.

$c{(split(/\s+/))[2]}

And increment that.  Unlike Python, for example, you don't have to initialize the key first; you can just increment it.

$c{(split(/\s+/))[2]}++
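(A tiny standalone example of why that works, for anyone new to Perl: incrementing a hash entry that was never initialized just works, because the missing value is treated as 0.)

my %c;
$c{alice}++;        # 'alice' was never initialized; undef is treated as 0, so this is now 1
print $c{alice};    # prints 1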

The map{ } does that increment for every iteration, i.e. for every line in the output of `lsof`.

After that, it's just a matter of printing out the key/value pairs using an easy hash-printing line blatantly stolen from an answer on Stack Overflow.
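For readability, here's the same logic spelled out as a short script (just a sketch: it assumes the usual lsof column layout where the third column is the user, skips the header line, and sorts so the biggest hog comes first):

#!/usr/bin/perl
use strict;
use warnings;

my %count;
for my $line (`lsof`) {
    my $user = (split /\s+/, $line)[2];             # third column of lsof output is USER
    next unless defined $user and $user ne 'USER';  # skip short lines and the header
    $count{$user}++;
}

# print each user and their open-filehandle count, biggest hog first
print "$_ $count{$_}\n" for sort { $count{$b} <=> $count{$a} } keys %count;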

New network – How I find out what's there
Fri, 29 Jan 2010

I switched jobs recently to become sysadmin of a fairly small company.  I think job #1 is to figure out just what is on your new network.  It's kind of important.  This is the dumb little perl script I re-write every time I go someplace new because frankly – it's fun!

#!/usr/bin/perl
use warnings;
use strict;
#this should be run as root, otherwise nmap will probably yell at you

my $net=shift || usage();
#the lazy, lazy regex to get the subnet you're working on...
$net=~s/(\d{1,3}\.\d{1,3}\.\d{1,3}\.)\d+/$1/ || usage();

foreach my $end(0..255)
{
        my $ip  ="$net$end";
        my ($fwd,$rev,$ud,$os) = ("unknown")x4;
        my $nmap  =`nmap -v -O -sT $ip`; #save for later
        my @nmap  =split("\n",$nmap);

        #get forward and reverse DNS
        chomp(my $host =`host $ip`);
        if($host!~m/NXDOMAIN/)
        {
                $fwd=(split(" ",$host))[-1];
                chomp($rev=`host $fwd`);
                $rev=(split(" ",$rev))[-1];
                $rev= "" unless $ip ne $rev; #only display if it doesn't equal the original ip
        }

        $ud = $nmap=~m/Host seems down/?'Down':'Up';
        #get the o/s
        $os=(grep(/Running/,@nmap))[0] || '';
        if($os)
        {
                $os=~s/Running: //;
                $os=substr $os,0,25;
        }
        $fwd=substr $fwd,0,40;
        printf "%-16s%-5s%-28s%-43s%-20s\n",$ip,$ud,$os,$fwd,$rev;
}
sub usage
{
        print "usage: $0 <network>   ex: $0 192.168.0.0\n";
        exit();
}

Example output:

monitor:~ imaginarybillboards$ sudo perl Documents/check_network.pl 192.168.2.0
192.168.2.0   Down                             unknown                                  unknown
192.168.2.1   Up   SonicWALL SonicOS 3.X       firewall.private.blah.com.
192.168.2.2   Down                             switch.private.blah.com.
192.168.2.3   Up   Cisco IOS 12.X              ck-sw0.private.blah.com.
192.168.2.4   Down                             unknown                                  unknown
192.168.2.5   Down                             unknown                                  unknown

And without down hosts (a little more directly useful, perhaps):

monitor:~ imaginarybillboards$ sudo perl Documents/check_network.pl 192.168.2.0 | grep -v Down
192.168.2.102 Up   Apple Mac OS X 10.5.X       monitor.private.blah.com.             192.168.2.105
192.168.2.103 Up   Linux 2.6.X                 cartman.private.blah.com.
192.168.2.104 Up   Linux 2.6.X                 kenny.private.blah.com.
192.168.2.105 Up   Apple Mac OS X 10.5.X       monitor.private.blah.com.
192.168.2.107 Up   Microsoft Windows XP        unknown                                  unknown
192.168.2.108 Up   Apple iPhone OS 1.X|2.X|3   unknown                                  unknown
192.168.2.110 Up   Apple Mac OS X 10.5.X       unknown                                  unknown
192.168.2.112 Up   Apple Mac OS X 10.5.X       unknown                                  unknown

Obviously, I have a bit of work to do with that monitor DNS.  This gives me a decent idea of what's around.  Servers and desktops (and iPhones, apparently) are all mixed on the same network.

Also, once I've (re-)written this, I put it into a cron job so I can keep a running record of what's going on.  Disk space is cheap, and it can't hurt anything.

crontab -l
0 2 * * * /bin/bash -login -c 'perl /Users/chriskaufmann/Documents/check_network.pl 192.168.200.0 > \
    /Users/chriskaufmann/Documents/NetworkReports/`date +\%y-\%m-\%d`'

And then you can just diff them to see when something came onto the network.
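A plain diff between two of the dated report files does the job.  As a small sketch in the same spirit (the script name and report paths here are made up for the example), a few lines of perl can also pull out only the hosts whose Up/Down state changed between two runs:

#!/usr/bin/perl
# usage: perl report_diff.pl NetworkReports/10-01-28 NetworkReports/10-01-29
use strict;
use warnings;

my ($old_file, $new_file) = @ARGV;
my %old = read_report($old_file);
my %new = read_report($new_file);

for my $ip (sort keys %new) {
    my $was = exists $old{$ip} ? $old{$ip} : 'missing';
    print "$ip: $was -> $new{$ip}\n" if $was ne $new{$ip};
}

sub read_report {
    my ($file) = @_;
    open my $fh, '<', $file or die "can't open $file: $!";
    my %state;
    while (<$fh>) {
        my ($ip, $ud) = split;   # first two report columns are the IP and Up/Down
        $state{$ip} = $ud if defined $ud;
    }
    return %state;
}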

Lazy automatic ssh + key distribution
Tue, 26 Jan 2010

I want to ssh to hosts, sometimes as a user, sometimes as root.  I also want to distribute my public ssh key so I don't have to log in anymore.  I want to do it without stacking tons of my keys onto the ends of files, and I want to be lazy about it.  This is the script I use; I put it somewhere in my path as "go" with chmod +x so it's executable.  I can then use it like "go hostname" or "go chris@somehost".

#!/usr/bin/env bash
#this will copy our public key to a remote host and ssh to it.

userhost=$1
keyfile=~/.ssh/id_rsa.pub
authkeyfile='~/.ssh/authorized_keys'

#if no username is passed (like someuser@somehost), use root by default
if [[ ! "$userhost" =~ '@' ]]
  then
    userhost=root@$userhost
fi

#if no ssh public key exists, create one in the default spot
if [ ! -e $keyfile ]
  then
    echo "Creating SSH key in $keyfile"
    ssh-keygen -t rsa -f "${keyfile%.pub}" -q -N '' #keygen wants the private key path; it writes the matching .pub next to it
fi
#now get the key itself into a variable
mypubkey=`cat $keyfile`

#this keeps it to one time needed to enter the password,
#it'll create the .ssh directory with right perms, touch the key file,
#create a backup without our key (no dupes),
#and copy it back
ssh $userhost "mkdir -p .ssh;
  chmod 700 .ssh;
  touch $authkeyfile;
  cp $authkeyfile ${authkeyfile}.bak;
  grep -vF '$mypubkey' ${authkeyfile}.bak > $authkeyfile;
  echo '$mypubkey' >> $authkeyfile"

#and finally, ssh to the host.
ssh $userhost
Perl super-easy parallelization with threadeach
Mon, 08 Dec 2008

I've been thinking about a good way to make perl more parallelizable. The thing that keeps coming to my mind is that it should be so easy that you wouldn't even think about it. Lots of the time in sysadmin-land, you have to do the same thing, completely identically, to a whole bunch of things. Some examples from just the last week at work:

For each thing in a list, connect to its database, get x data, do some analysis on that.

For each server in a list, connect to it and do something. Push a file, get a file, run a command, etc.

For each ip/port in a list, open a socket and listen for x time, then return the results.

So, what to use? I just like the name threadeach(). Normally, in perl, you do this

foreach  my $thing(@list_of_things){... do something }

It’d be nice if you knew this was easily done in parallel, to do it like this:

threadeach my $thing(@list_of_things){function to be performed on each thing}

Right now, you can *sort of* do the same thing with a little work. I’ve got a threadeach module like this I’ve been using.

Threadeach

I whipped up a module for it with three functions in it.

  • threadeach(\&subroutine,@array) #will parallelize, running  (number of cpu cores) threads at a time
  • threadall(\&subroutine,@array) # will parallelize all at once!!!  Kind of crazy but fun actually
  • threadsome(\&subroutine,<num to run>,@array); #will run the passed number of threads at a time

It's also got another trick: it waits for them all to be done and then returns the "return" values in order.  A lot of the time, I do a foreach and print something; in this case I can just return what I'd have printed before and print it all at the end:  print threadeach(\&sub,@things);

I've been using it in my check_network script that looks at things in a given subnet, and it works pretty well.  I just had to change the loop from foreach my $ip(0..255){…} to threadeach(\&…,0..255); sub {…}.  And instead of printing inside, I return the value I would have printed (as stated above).  It's been working really well in this limited case; I have to try it on more things, but I don't see why it wouldn't work fine.  But since this script runs an nmap against each host, it uses a good bit of CPU: I tried using threadall() and it almost hung the machine.  255 nmap processes at once will do that.
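Roughly, the change looks like this (a sketch only: scan_host and the 192.168.200 prefix stand in for whatever the real loop body does, and it assumes the module exports threadeach):

use Threadeach qw(threadeach);

# before: foreach my $end (0..255) { ... print a line per host ... }
# after: return the line instead of printing it, and print everything at the end
print threadeach(\&scan_host, 0 .. 255);

sub scan_host {
    my ($end) = @_;
    my $ip = "192.168.200.$end";
    my $up = `nmap -v -sT $ip` =~ m/Host seems down/ ? 'Down' : 'Up';
    return "$ip $up\n";
}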

Timing

Running original version: sudo time perl check_network.pl 192.168.200.0 -> 524.71 real 76.91 user 26.39 sys

Running threadeach version: sudo time perl threaded_check_network.pl 192.168.200.0 -> 189.95 real 77.31 user 28.48 sys

Roughly 1/3 the time.  Which makes sense: however long any one machine takes, that time gets added onto the total in the original version, but can be overlapped with other work in the parallel version.  In my example, for some reason the .107 box takes several minutes to run (I skipped it in this test), but even not counting that one, some hosts are almost instant (the down boxes) and some take longer.

How it works

Not counting the decision of how many threads to run at a time (which depends on how it's called: it tries to get the number of CPUs on the machine and, if it can't, falls back to an arbitrary number, currently 8), it's fairly straightforward.  Set up an empty hash to map thread ID to list index, an index variable to keep track of where we are in the list, and finally an empty array to store anything being returned.

Main loop:  As long as there are:

  • Threads working
  • Threads done and waiting to return
  • or more things to do

Do:

  • Join any threads that have finished, putting each thread's return value into the slot of the return array corresponding to its original position in the list
  • Launch more threads until there is either nothing left to do or the max number is running
  • When launching a thread, record the current index (its slot in the original array) in a hash keyed by the thread ID
  • Sleep for one second.

And at the end, it returns the @return array.  The @return isn't strictly necessary if it's supposed to be a foreach replacement, but it works really well where it's useful.  The sleep(1); isn't strictly necessary either, but 1- if you're doing a bunch of threads, waiting a second at a time isn't a huge deal, and 2- otherwise it pegs the CPU in a tight while loop checking on thread status.
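To make that description concrete, here's a minimal sketch of how such a module might look using Perl's core threads module.  This is not the exact code from my module, just the shape of it: the package name, the exports, and the Sys::CPU fallback follow the description above, and error handling is left out.

package Threadeach;
use strict;
use warnings;
use threads;
use Exporter 'import';
our @EXPORT_OK = qw(threadeach threadall threadsome);

# Guess a sensible thread count: ask Sys::CPU if it's installed, else fall back to 8.
sub _cpus {
    my $n = eval { require Sys::CPU; Sys::CPU::cpu_count() };
    return $n || 8;
}

sub threadeach { my ($sub, @list) = @_; return threadsome($sub, _cpus(),      @list); }
sub threadall  { my ($sub, @list) = @_; return threadsome($sub, scalar @list, @list); }

sub threadsome {
    my ($sub, $max, @list) = @_;
    my %slot;     # thread id -> index into @list (and @return)
    my @return;   # results, kept in the order of the original list
    my $index = 0;

    # Keep going while there are items left to hand out, threads still
    # running, or finished threads waiting to be joined.
    while ($index < @list or threads->list()) {

        # Collect any finished threads and file their results in the right slot.
        for my $thr (threads->list(threads::joinable)) {
            $return[ $slot{ $thr->tid } ] = $thr->join;
        }

        # Launch more threads until we hit the cap or run out of work.
        while ($index < @list and threads->list(threads::running) < $max) {
            my $thr = threads->create($sub, $list[$index]);
            $slot{ $thr->tid } = $index++;
        }

        sleep 1;   # don't spin the CPU while waiting on threads
    }
    return @return;
}

1;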

In the future…

Figure out how to make it work as a drop-in replacement for foreach; calling it as a function seems so hack-ish.  A better way to decide how many threads to run (if Sys::CPU doesn't work).  I was thinking about using the current year minus 2005, so the number would increase over time.  Or I could just require Sys::CPU…  I could also make sure whatever is in the main block is thread-safe, but should probably just trust the user.  In the meantime, I'm going to use it for a little bit and bang against it on a few systems before throwing it out into the cruel, cold world.

It may also be cool if it can buffer I/O so that for some things it really does act just like foreach, too.

Documentation should be in a wiki
Wed, 10 Oct 2007

Let me repeat that.  Documentation should be in a wiki.

Here is how things are done in my corporate world.  For keeping track of versions, other info, etc., we have a series of Excel files kept either on a shared network drive or in The Worst Software Ever (Lotus Notes).  Which means either:

A)

1- Finding the Excel file link or browsing the shared folders looking for it

2- Opening it and hoping it’s not in use (and thus locked)

2a- If locked, try to remember to make the change later (and forget)

3- Editing the file, trying to find the appropriate area and filling in a lot of non-friendly fields

or

B)

1- Opening TWSE and waiting for it to open.

2- Trying to find the link you saved to where the file is because you can’t search in TWSE

3- Opening a file which involves multiple clicks, editing steps, etc

4- Repeating the steps from above with the actual file.

Compare this with a wiki.

1- Open the wiki

2- Search for the thing you are documenting (or just go to the bookmark)

3- Click “Edit”.

4- Change the page, which can be in any format that makes sense.

Note also that with some customization of open-source wiki software, you could automatically populate a main status page either from the wiki or directly from the thing you're documenting.  This could also be done for versions, etc.  Summation, then: automatic population of things is of course best, but short of that, whatever you're working on should be A) searchable, and B) very easily editable.
