High Frequency Trading – Clearing

Posted by Chris on March 4th, 2010 filed in Trading
1 Comment »

Prop trading shops don’t just get free access to all of the exchanges they’re on.  Seats are expensive, and for what they’re doing, not really necessary either.  They generally get something called Sponsored Access.  A big company will sponsor them on the exchange in return for a fee.  But it’s not just access to the exchange.  If you’re a small trading company, you may have a few tens of millions of dollars in the real world.  So you prove that to, say, Goldman Sachs and they say “Sure, we’ll sponsor you.  And so you can trade more, here’s some leverage to play with too!”.  Of course, they bill you for how much you trade, so it’s in their interest to let you leverage yourself.  So now your measly 8 figures has turned into 10 or more figures of leveraged money.

Look up haircut sometime if you want to know how it all relates to how much you can leverage.
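To make the haircut idea concrete: the haircut is roughly the percentage of a position’s value you have to hold as real capital, so your buying power is about capital divided by the haircut. A toy sketch in Python – the numbers are invented, and real haircuts vary by product and sponsor:

```python
# Hypothetical sketch of how a haircut turns deposited capital into buying
# power. The haircut is the percentage of a position's value you must hold
# as capital, so buying power is roughly capital / haircut.

def buying_power(capital, haircut_pct):
    """Max gross position value supportable at a given haircut percentage."""
    if not 0 < haircut_pct <= 100:
        raise ValueError("haircut must be a percentage in (0, 100]")
    return capital * 100 / haircut_pct

# $50M of real money at a (made-up) 5% haircut supports $1B of gross positions.
print(buying_power(50_000_000, 5))   # -> 1000000000.0
```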

So, your sponsor takes on the risk of you trading somewhat in their name.  At the end of the day, they have a list of what you traded, and you have a list of what you traded.  The folks in mid-office then have to deal with your clearing firm and your sponsor so everyone agrees on just what happened while the traders were out playing all day.  The clearing firm is the go-between for the two parties of the trade.  Basically, they’re a really valuable lubricant in the wheels of the market.  So GS gets the list from your clearing firm, you send what you think you traded, and hopefully you don’t get a call from them asking where the 10,000 shares of Berkshire are in your list.
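That end-of-day agreement step is basically a keyed diff of two trade lists. Here’s a toy Python sketch of the idea – the field layout and sample trades are invented, not any real clearing format:

```python
# Toy sketch of end-of-day reconciliation: compare your trade list with your
# sponsor's, keyed by (symbol, side), and flag any quantity mismatch.
# Field names and sample data are invented for illustration.

from collections import Counter

def totals(trades):
    """Sum quantities per (symbol, side)."""
    agg = Counter()
    for symbol, side, qty in trades:
        agg[(symbol, side)] += qty
    return agg

def breaks(ours, theirs):
    """Return {(symbol, side): (our_qty, their_qty)} wherever the books disagree."""
    a, b = totals(ours), totals(theirs)
    return {k: (a.get(k, 0), b.get(k, 0))
            for k in set(a) | set(b) if a.get(k, 0) != b.get(k, 0)}

ours   = [("AAPL", "B", 200_000), ("BRK.A", "S", 10_000)]
theirs = [("AAPL", "B", 200_000)]   # they're missing the Berkshire shares
print(breaks(ours, theirs))         # -> {('BRK.A', 'S'): (10000, 0)}
```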

Clearing – in practice

It’s fairly simple, actually.  You get a method by which you send your report.  Let’s say FTP.  You upload it with a certain filename format to a server with a given username and password.  Easy.  The hard part is generating the file.  You could have one database, you could have a hundred.  There could be one format, or dozens.  You could have to download and parse text files.  And don’t forget versioning either.  So you download all the executions you made.  Then you have to group them by exchange, symbol, and side, add them up, and average them out.  So your file goes from, say, 200,000 shares of AAPL traded to a few lines like:

AAPL,B,100203, 190.91,blah,blah

You combine all of them together for that particular exchange, in that particular format, and upload it to  your firm.  They compare it with their (much simpler to generate, because they only have one potential input) file and if there’s a problem they email and/or call.  It’s a surprisingly simple program in perl.  Collect your data from all the different sources, put it into a huge hash, print it out to a file, upload it.  Put in error checking and notification.  Do this for every exchange you trade on, at the specified time(s) daily.
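The grouping-and-averaging step above can be sketched in a few lines. The real thing was Perl; this is a toy Python version, and the CSV layout is a stand-in – every sponsor’s actual format differs:

```python
# Toy sketch of the clearing-file step: group the day's executions by
# (exchange, symbol, side), sum the shares, and share-weight the average
# price. The CSV column order here is invented for illustration.

from collections import defaultdict

def clearing_lines(executions, date):
    """executions: iterable of (exchange, symbol, side, qty, price)."""
    agg = defaultdict(lambda: [0, 0.0])     # key -> [total_qty, total_notional]
    for exch, sym, side, qty, price in executions:
        agg[(exch, sym, side)][0] += qty
        agg[(exch, sym, side)][1] += qty * price
    lines = []
    for (exch, sym, side), (qty, notional) in sorted(agg.items()):
        lines.append(f"{sym},{side},{date},{notional / qty:.2f},{qty}")
    return lines

fills = [("NSDQ", "AAPL", "B", 200, 190.00), ("NSDQ", "AAPL", "B", 200, 191.00)]
print(clearing_lines(fills, "100203"))   # -> ['AAPL,B,100203,190.50,400']
```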

Only two important things to note:

If that file is wrong or doesn’t get uploaded – they don’t trade the next day until it’s fixed.

“The next financial crisis will be caused by a divide-by-zero error in someone’s perl script” (Citation needed)


A huge amount of time and money is spent on compliance at these firms.  And at the same time it’s a total afterthought.  The traders rule the roost, and they don’t care about it –  until they can’t trade anymore or their bonus is lower because of a failed audit.  Essentially here’s what it comes down to.  You have to keep track of every trade and every order you make, and keep it forever.  Sure, they say seven years, or five years, but it’s forever.  For one – seven years really is forever in computer terms.  Creating something that’s archivable over that long is essentially creating it forever.  And for two – there’s nothing saying that a year from now they won’t want the records for 10 years, or 15 years.

So you’re keeping track of every order and every trade.  While you’re doing it, you also have to keep track of what your positions were at the time, if at all possible.   Why?  Shorting stocks, for example.  Traders are allowed to short, but it goes against the haircut and reduces the amount they can trade.  If it’s on the easy-to-borrow list, though, it’s a different story.  Anyway, the regulators come in and you have to prove that you weren’t shorting a stock.  Which means you have to know, or be able to derive, your position at that time.  Then you have to prove that at the millisecond you placed the order, it wouldn’t be a short.  Which also means accurate time across all of your systems.  It’s fun, see?
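Deriving a position at a point in time is just replaying the archived fills up to that timestamp. A toy Python sketch of the audit check – the data is invented, and as noted above, the hard part in real life is having trustworthy timestamps at all:

```python
# Toy sketch of the audit requirement: derive a position at an arbitrary
# timestamp by replaying archived fills, then check whether a sell at that
# instant would have been a short. Sample data is invented.

def position_at(fills, symbol, t):
    """fills: (timestamp_ms, symbol, signed_qty). Position just before time t."""
    return sum(q for ts, sym, q in fills if sym == symbol and ts < t)

def would_be_short(fills, symbol, t, sell_qty):
    """True if selling sell_qty at time t would take the position negative."""
    return position_at(fills, symbol, t) - sell_qty < 0

fills = [(1000, "XYZ", 500), (2000, "XYZ", -300)]
print(position_at(fills, "XYZ", 2001))           # -> 200
print(would_be_short(fills, "XYZ", 2001, 300))   # -> True: selling 300 > 200 held
```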

The easy-to-borrow list is exactly what it sounds like.  A list of things that are easy to borrow.  Some stocks are so available to trade – so liquid – that they get on a special list, and shorting them doesn’t really count as short selling!

In addition to proving what and when, you also have to be able to prove who.  So every trader has to have a unique login so you can prove who did what.   If Bob over in the corner is doing something illegal or unethical, you have to be able to prove it was Bob doing it.  That way when he’s caught he’ll just take himself down and not the company.  So you have to record every method by which someone can communicate with evil peoples.  You have to keep (like I said, pretty much forever):

  • Email
  • Chat
  • Phone
  • Tin can and string
  • Any other electronic medium

Funny story – one time the network went down for some reason.  Most places use IP phones now, which use the network.  Just to be safe, we had to call the exchanges to cancel our orders – but no phones!  So we all whip out our cell phones and call that way.  1- You can’t do that since they’re not logged and monitored. and 2- “Does anyone have something other than AT&T?  I can’t call anyone!”

Email has to go through your email servers and be backed up, instant messaging has to go through a proxy that logs all conversation, phones (at least for traders) have to keep recordings, etc etc.  One of my better scripts was actually to do a daily dump of chat logs, bundle them per user, and send it as an email to a special email address.  Killing two birds with one stone!   Regulators will seriously come in and say “we want every communication trader X made from this date to this date.”  And you say “okay” or you get a fine.  They also do spot-checks at least once per year.  “Give us everything you traded from this date to this date.”  And you say “okay” or you get a fine.  Incidentally, if you want to see roughly what the code for chat log retention looked like:

use strict;
use warnings;
use include;

my $ch = 'chat_host';
my $cu = 'chat_user';
my $cp = 'chat_pass';
my @worries = ();
my $to = '[email protected]';
my $today = &include::getdate('YYYY-MM-DD');

my @senders = &include::sql($ch, $cu, $cp,
    "select distinct(senderid) from message_log where date(modified)='$today'")
  or &freakout($!);

foreach my $sender (@senders) {
    my @lines = &include::sql($ch, $cu, $cp,
        "select msg_text from message_log where date(modified)='$today' and senderid='$sender'");
    push(@worries, "No lines for $sender") unless @lines;
    &include::send_email($to,
        join("_", split(" ", $sender)) . '@backuphost.mycompany.com',
        "Chat logs for $today",
        join("\n", @lines))
      or push(@worries, $!);
}

&include::send_email('[email protected]', '[email protected]',
    'Possible problems with chat backups', join("\n", @worries))
  if @worries;

sub freakout {
    my $wtf = shift;
    &include::send_email('[email protected]', '[email protected]',
        "Error with chat logs for $today", $wtf);
    exit 1;
}

It’s simplified, but only because the hard stuff is hidden in my include module I keep around to make my life easier – it was (is) literally this short.  We traded every single day and there was chat (and chatbots) every day, so if there wasn’t anything in the logs there was definitely a problem.  In plain language:

Do some housekeeping (includes, variable setup for hostnames and the like, the date, etc)
Get a list of everyone who sent an IM that day and if there's a problem, send a freakout email with what happened.
For every person in that list, send an email with their name as the "from" address and the chats they sent as the body
  (if there's a problem, add it to the list of worrying things)
If anything is worrying, send an email to that effect to me.

The reason this is so simple is kind of a lucky coincidence.  The chat logs are in one table, the ID is the name of the person, and the text of the log includes the time and who it’s to.  If it didn’t, I’d have to have another line to get the identity of the person – no big deal.  I’d also have to have a temporary array of lines that I’d append the “to”, “time”, and “text” to as one entry, then send the email with the contents of that array instead.

High Frequency Trading – Strategies

Posted by Chris on March 4th, 2010 filed in Trading
Comment now »

Once you have all the access, machines, network, middle office, clearing, etc etc etc, what do you do now?  Well, you need to figure out what to do.  There are two basic things folks do here, one of which is basically a subset of the other.  How do you pay for it?  And what do you do with it?

Market Making and Market Taking

The individual markets want action.  They want you to be able to buy or sell easily on their markets.  Sure, they go for big numbers, too, but what they want is liquidity.  If average Joe on the street can’t go into BillyBob Exchange, LLC and buy 100 shares of That Gargle Interweb thing, but can elsewhere, they’re not going to be an exchange for much longer.

MarketMakers add liquidity to the market

There aren’t always enough people in the market at a given time to make it liquid enough for every single instrument traded on it.  So the exchanges have people called MarketMakers.  Their job is literally that – to make the market.  If you’re a registered marketmaker, you have to be in the market for a given amount of time or get in trouble.   You can be a marketmaker for one symbol or a hundred, it’s whatever you can keep up the action for.  Marketmakers can make an okay living doing this, but it can be risky too.  They live on both sides of the “price” for a symbol.  If something is “worth” some arbitrary amount of money, they will buy it for a little less than that, and sell it for a little more.  (Bid and Ask)  If the market moves too quickly in one direction, there’s a risk that they are making bad trades because they haven’t updated their prices to account for the risk.  More risk means trading wider around the price.  You can also adjust your price to account for your position.  If you’re long something, you may be willing to take slightly less money to unload so you don’t have that position.  In return for this risk – the exchanges “bribe” the marketmakers.  They get a nice discount on their trades.
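That quote-around-the-value-and-skew-for-inventory behavior can be sketched simply. A toy Python version – all the parameters are invented, and real marketmakers model this far more carefully:

```python
# Toy sketch of marketmaker quoting: quote around a fair value, widen with
# risk, and skew the quotes against your inventory so you shed unwanted
# positions. The skew parameter is illustrative, not a real one.

def make_quotes(fair, half_spread, inventory, skew_per_share=0.0001):
    """Return (bid, ask). Long inventory shifts both quotes down to sell out."""
    mid = fair - inventory * skew_per_share
    return (round(mid - half_spread, 2), round(mid + half_spread, 2))

print(make_quotes(10.00, 0.05, 0))     # flat book:  (9.95, 10.05)
print(make_quotes(10.00, 0.05, 200))   # long 200:   (9.93, 10.03) -- keener to sell
```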

While it may cost you $7.95 per trade, a marketmaker can pay 1/1000 of that – or less.  Or nothing.  Or they can get paid to trade.

It’s in the ratio.  Quoting (marketmaking) gets you a rebate; taking liquidity costs you money.  Marketmakers also have an obligation to quote a certain amount, for a certain number of hours per day.  That doesn’t mean they can’t quote extra wide, but then they’re not making money and not making the exchange happy.


Arbitrage

I love arbitrage.   In its most basic definition, it is taking advantage of price differentials.  Say the price of AAPL on the TSX is $201/share, and on Nasdaq it’s $200 a share.  So you buy it on Nasdaq, sell it on the TSX, and pocket the dollar, minus any fees.  My favorite is Berkshire Hathaway (BRK) – it has two classes of stock, A and B.  They’re related – you can exchange A for B.  One class-A share equals 30 class-B shares.  Since most people don’t buy BRK to vote, there is a very, very strong relationship between the two.  So if it’s ever not very, very close to a 30:1 price ratio, there’s a simple arbitrage opportunity.
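The BRK example reduces to a one-line fair-value check. A toy Python sketch, using the 30:1 ratio from the text – the prices and the edge threshold are invented:

```python
# Toy sketch of a two-class arbitrage check: with a fixed conversion ratio
# between share classes, any drift from that ratio is a signal. Prices and
# the threshold are invented; the 30:1 ratio is taken from the post.

def class_arb(price_a, price_b, ratio=30, threshold=0.002):
    """Return 'buy B / sell A', 'buy A / sell B', or None if within threshold."""
    fair_a = price_b * ratio
    edge = (price_a - fair_a) / fair_a
    if edge > threshold:
        return "buy B / sell A"     # A is rich relative to B
    if edge < -threshold:
        return "buy A / sell B"     # A is cheap relative to B
    return None

print(class_arb(3030.0, 100.0))   # A rich by 1% -> 'buy B / sell A'
print(class_arb(3000.0, 100.0))   # in line      -> None
```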

A surprising amount of what the proprietary trading companies do is arbitrage.  Everything from distance (latency, really) based, to ratios like the BRK example, to symbols within a larger context.  For example, you can trade the component stocks of the S&P 500 against a future on the S&P 500 index itself.  Since one is literally made of the other, a price move in one means the other is worth a different amount.

Another good example is pricing options.  Most of the market prices options based on the Black-Scholes formula (roughly speaking).  This means that there’s a pretty standard way of figuring the prices that the market generally gravitates toward.  Those who can program a faster algorithm for computing this can price them faster, and have a pure arbitrage opportunity.  Options are also fun in another way.  You can “create” shares from options.  Wikipedia explains it better than I could, but basically you can buy and sell options in a certain way that lets you simulate having a share of stock.  Or better yet, simulate selling one or selling one short when you can’t really sell it short.
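The “creating shares from options” trick is put-call parity: for European options, C − P = S − K·e^(−rT), so a long call plus a short put behaves like the stock. A small Python sketch with invented numbers:

```python
# Toy sketch of a synthetic stock via put-call parity. For European options
# at the same strike and expiry, C - P = S - K*exp(-rT), so call minus put
# plus the discounted strike implies a stock price. Numbers are invented.

import math

def synthetic_stock_price(call, put, strike, r, t_years):
    """Stock price implied by a call/put pair at the same strike and expiry."""
    return call - put + strike * math.exp(-r * t_years)

# With r = 0 the discounting drops out: C - P + K.
print(synthetic_stock_price(12.0, 2.0, 100.0, 0.0, 0.5))   # -> 110.0
```

If the actual stock trades meaningfully away from the synthetic price, that gap is the arbitrage.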


Correlation

This is basically taking advantage of similarities between companies.  If oil goes up, companies that use lots of oil tend to go down.  Or, a more practical example I like to give: if oil goes up, American car companies tend to go down.  Hand in hand with that, similar companies tend to follow each other also.  GM and Chrysler did for a while.  Big steel companies, when they existed.  Heck, weather futures against crop futures.  Think of a few on your own, it’s fun!  Basically anything you can think of that may have an effect on something else can be priced.  It doesn’t have to be a 1:1 ratio either.  The companies aren’t priced the same, nor is the relationship between them perfect.
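One simple way to trade a relationship like that is to watch the price ratio between the two things and act when it strays from its usual range. A toy Python sketch – the series and threshold are invented, and this ignores everything that makes it hard in practice:

```python
# Toy sketch of a correlation/pairs signal: track the ratio between two
# related instruments and flag when today's ratio looks extreme versus
# history. Assumes a non-constant history (variance > 0). Data is invented.

def ratio_signal(history, px_a, px_b, z_threshold=2.0):
    """history: past a/b price ratios. Flag when the current ratio is extreme."""
    mean = sum(history) / len(history)
    var = sum((r - mean) ** 2 for r in history) / len(history)
    z = (px_a / px_b - mean) / (var ** 0.5)
    if z > z_threshold:
        return "sell A / buy B"     # A rich versus its usual relationship
    if z < -z_threshold:
        return "buy A / sell B"     # A cheap versus its usual relationship
    return None

hist = [2.00, 2.02, 1.98, 2.01, 1.99]
print(ratio_signal(hist, 42.0, 20.0))   # ratio 2.10, way high -> 'sell A / buy B'
```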

In practice – order types

The market open and market close are where almost all of the fun in the markets happens.  Open especially.  You can saturate a 1Gb/s connection to a market just with data feeds.  (Get a bigger connection.)  For perspective, that line that is getting completely overrun with data is probably 500 to 1000 times faster than yours at home.  You put your quotes and orders in before open or right at open and the insanity happens.  Now when the market moves on you, you need to change your quotes.  If the “value” is $10.00 and you’re at $9.95 and $10.05 and it inches higher, pretty soon your asking price is so low that you’re losing money on every trade.  So you update the price to move with the market.  You can have one order in at the beginning of the day and update it 100,000 times throughout the day.  These orders (market or limit) are normally “day” orders, valid until canceled or until the close.  So you can update them, or do a cancel and replace.  These are liquidity *adding* orders.  The marketmaker uses them to get into the market and stay there until filled, at which point they’ll adjust their prices depending on whether they’re long or short and put in a new order.  Why is this *adding* liquidity?  Because it is giving others the opportunity to trade.  You’re just out there saying “Here I am, I’ll trade with you if you want. Here are my prices.”

Immediate or cancel, fill or kill, whatever you want to call it – these are the market *taking* orders.  You see an opportunity for some quick profit that will remove liquidity from the market.  There’s a price mis-match somewhere, someone put in an order you think is “wrong”, there are plenty of reasons.  But you only, only, only want to get it at that price.  If it’s not immediately filled at that price, the order is canceled and nothing happened.  Why does this remove liquidity?  Because unlike the day orders, you’re not actually giving someone else an opportunity to trade, you’re trying to take one away from someone else.   It removes one open order from the market, takes the exchange’s liquidity numbers a little lower, and thus they charge you for the privilege.

Another interesting thing about the markets that one has to account for is what’s called an Iceberg.  Say you have a lot of stock you have to sell.  Putting a giant order out there will change the market – people will see you need to sell a lot and the price will go down accordingly.   Of course, you don’t want the price to go down that much – you want more money for your stock!  The markets let you put something called an iceberg order into the market.  It lets you specify how much of the order the market sees.  Instead of 10,000 or 100,000, maybe you want to show 1,000.  Eventually the market will figure out that there’s a lot of selling going on and price itself accordingly, but you haven’t thrown it for a loop with your giant numbers – it’s gradual.  So you can set your order as an iceberg, and the people on the other side can also place orders for *more* than the displayed size.  So you see an offer in the market for 500 shares @ $10.00, and you think that’s a great price – you can try an order larger than that, and if it’s an iceberg, you’ll trade for the larger quantity.  Neat stuff.
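The iceberg’s visible-slice behavior can be sketched as a tiny generator. A toy Python version – this ignores real-world details like queue priority and randomized display sizes:

```python
# Toy sketch of iceberg mechanics: the book shows only the display size, and
# each time the visible slice fills, another slice is revealed until the
# hidden total runs out.

def iceberg_fills(total, display):
    """Yield the sequence of visible slices an iceberg of `total` shows."""
    remaining = total
    while remaining > 0:
        slice_ = min(display, remaining)
        yield slice_
        remaining -= slice_

print(list(iceberg_fills(10_000, 1_000)))   # ten visible slices of 1,000
print(list(iceberg_fills(2_500, 1_000)))    # -> [1000, 1000, 500]
```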

High Frequency Trading – What It Is

Posted by Chris on March 4th, 2010 filed in Trading
Comment now »

What is High Frequency Trading?  In the most basic sense, it’s trading done completely by computers over networks with humans babysitting them.  Maybe a little history would help?

Trading, a super short history

In the beginning, there was the marketplace.  And it was… primitive.  Multiple places in Europe started different kinds of exchanges to trade things.  Shares in companies, government securities, tulip futures (famously), etc.  For hundreds of years, it was open outcry at exchanges.  People would literally huddle in groups for each thing being traded and make deals in person.  They would write down what happened for the deal on pieces of paper and reconcile at the end of trading.  Phone lines were added elsewhere on the floor, runners would go to their brokers after getting a call from someone else who saw the price on the ticker, etc.  NASDAQ came around and was electronic.  It revolutionized the industry.  Other exchanges slowly came around until now nearly everything is electronic.

The rise of computers

Computers changed quite a few things.  Speed and reliability of execution were improved by an incredible degree.  Costs were lowered.  Barriers to entry removed.

Seats on the CME start at $750,000, for example.  Nasdaq?  A $2,000 application fee and monthly membership fees ($3,500 or so).

Per trade fees are also lower than they used to be.  Combined with networking advances, it’s the perfect recipe for high frequency trading.

Enabling factors

There are a few things that have to happen for this to work.  A firm wanting to get involved in high frequency trading needs kind of a perfect storm to make it in the market.

Fast data feeds from one or more markets.

The feed is a stream of data coming from the exchange listing what is happening.  People are bidding on products, people are offering products for sale, and sales are occurring.  The feed tells you all of them.

Fast connections back to the matching engine.

The matching engine is what it sounds like.  It matches up the buyer with the seller for a given product at a given price and sends that data back out via the feed.

Computer power to actually figure out if there’s a profitable trade to be made.

Companies usually locate their servers as close as physically possible to one or more exchanges.  The datacenters themselves are amazing.

Cheap trading.

(talked about more later).

So this all means?

Computers take in the feed, which is just what is going on in the market, and decide to make trades.  It’s fast.  How fast?  Less than 200 microseconds to make a decision and send it back out is completely reasonable.  Two hundred millionths of a second.  And it has to do a lot of other things.  If you are long (own) too many, you want to make it slightly less likely you’ll buy, so you don’t end up with too many at the end of the day.  If you’re short (have sold more than you have), you have to track all of that.  High frequency trading is, quite simply, trading that’s done by computers, and only incidentally are humans involved.

We (the people) think of the strategies, turn them into parameters the computers can understand, test them, and then let the computers run them with someone watching.  Usually, someone spots (or thinks they spot) some advantageous idea and goes from there.  Some people will write it into the programming itself – meaning that the program that does the trading is re-written to account for the new idea.  Others will let the program modify itself – the people thinking up the ideas will enter programming-language-like code that then gets run by the trading program.

The rest of the posts will be more about specific areas I thought were interesting when I was there – or areas I was involved in a lot.

High frequency trading, explained

Posted by Chris on March 4th, 2010 filed in Trading
Comment now »

  1. What it is
  2. Clearing
  3. Access and the datacenters
  4. Strategies
  5. The arms race
  6. A word on interest rates
  7. Should I work for a trading company?

This is a series of posts about my experience as a sysadmin and programmer in the world of High Frequency Trading.  I spent two years intimately involved in the daily operations and planning of a player in the field.  My programs were (and are to this day) responsible for clearing, backups, logging, statistical analysis, and a hundred other things.  One handled (and still handles) billions of dollars of trades daily.  I’ve been to datacenters, been to sporting events with brokers, and called exchanges to cancel all of our orders.  The people, the environment, the companies, and the competition are all aggressive and brilliant and exhausting.  There’s nothing like it in the world, I’m glad I was there, and I’m glad I’m gone.  The purpose of this is to explain to family and friends what it is I was doing at this secretive company for all that time.  If other people learn a bit about how it works, it’s extra credit.  It’s all as I remember it, having been out for three months – enough time to get a sense of detachment but not so much that it’s all strange again.

New network – How I find out what’s there

Posted by Chris on January 29th, 2010 filed in programming, sysadmin
Comment now »

I switched jobs recently to become sysadmin of a fairly small company.  I think job #1 is to figure out just what is on your new network.  It’s kind of important.  This is the dumb little perl script I re-write every time I go someplace new because frankly – it’s fun!

use warnings;
use strict;
#this should be run as root, otherwise nmap will probably yell at you

my $net = shift || usage();
#the lazy, lazy regex to get the subnet you're working on...
$net =~ s/(\d{1,3}\.\d{1,3}\.\d{1,3}\.)\d+/$1/ || usage();

foreach my $end (0..255) {
        my $ip = "$net$end";
        my ($fwd, $rev, $ud, $os) = ("unknown") x 4;
        my $nmap = `nmap -v -O -sT $ip`; #save for later
        my @nmap = split("\n", $nmap);

        #get forward and reverse DNS
        chomp(my $host = `host $ip`);
        $fwd = (split(" ", $host))[-1];
        chomp($rev = `host $fwd`);
        $rev = (split(" ", $rev))[-1];
        $rev = "" unless $ip ne $rev; #only display if it doesn't equal the original ip

        $ud = $nmap =~ m/Host seems down/ ? 'Down' : 'Up';
        #get the o/s
        $os = (grep(/Running/, @nmap))[0] || '';
        $os =~ s/Running: //;
        $os = substr $os, 0, 25;
        $fwd = substr $fwd, 0, 40;
        printf "%-16s%-5s%-28s%-43s%-20s\n", $ip, $ud, $os, $fwd, $rev;
}

sub usage {
        print "usage: $0 <any-ip-in-the-target-subnet>\n";
        exit 1;
}

Example output:

monitor:~ imaginarybillboards$ sudo perl Documents/check_network.pl
Down                             unknown                                  unknown
Up   SonicWALL SonicOS 3.X       firewall.private.blah.com.
Down                             switch.private.blah.com.
Up   Cisco IOS 12.X              ck-sw0.private.blah.com.
Down                             unknown                                  unknown
Down                             unknown                                  unknown

And without down hosts (a little more directly useful, perhaps):

monitor:~ imaginarybillboards$ sudo perl Documents/check_network.pl | grep -v Down
Up   Apple Mac OS X 10.5.X       monitor.private.blah.com.
Up   Linux 2.6.X                 cartman.private.blah.com.
Up   Linux 2.6.X                 kenny.private.blah.com.
Up   Apple Mac OS X 10.5.X       monitor.private.blah.com.
Up   Microsoft Windows XP        unknown                                  unknown
Up   Apple iPhone OS 1.X|2.X|3   unknown                                  unknown
Up   Apple Mac OS X 10.5.X       unknown                                  unknown
Up   Apple Mac OS X 10.5.X       unknown                                  unknown

Obviously, I have a bit of work to do with that monitor DNS.  This gives me a decent idea of what’s around.  Servers and desktops (and iphones apparently) are all mixed on the same network.

Also, once I’ve (re-)written this, I put it into a cron job so I can keep a running record of what’s going on.  Disk space is cheap, and it can’t hurt anything.

crontab -l
0 2 * * * /bin/bash -login -c 'perl /Users/chriskaufmann/Documents/check_network.pl > \
    /Users/chriskaufmann/Documents/NetworkReports/`date +\%y-\%m-\%d`'

And then you can just diff them to see when something came onto the network.

Lazy automatic ssh + key distribution

Posted by Chris on January 26th, 2010 filed in sysadmin
Comment now »

I want to ssh to hosts, sometimes as a user, sometimes as root.  I also want to distribute my public ssh key so I don’t have to log in anymore.  I want to do it without stacking tons of my keys onto the ends of files, and I want to be lazy about it.  This is the script I use.  I put it somewhere in my path as “go” with chmod +x so it’s executable.  I can then use it like “go hostname” or “go user@hostname”.

#!/usr/bin/env bash
#this will copy our public key to a remote host and ssh to it.

#where our key lives and where the remote side keeps authorized keys
keyfile=~/.ssh/id_rsa.pub
authkeyfile=.ssh/authorized_keys

userhost=$1
#if no username is passed (like user@host), use root by default
if [[ ! "$userhost" =~ '@' ]]; then
    userhost="root@$1"
fi

#if no ssh public key exists, create one in the default spot
if [ ! -e $keyfile ]; then
    echo "Creating SSH key in $keyfile"
    ssh-keygen -t rsa -f ${keyfile%.pub} -q -N ''
fi
#now get the key itself into a variable
mypubkey=`cat $keyfile`

#this keeps it to one time needed to enter the password,
#it'll create the .ssh directory with right perms, touch the key file,
#create a backup without our key (no dupes),
#and copy it back
ssh $userhost "mkdir -p .ssh;
  chmod 700 .ssh;
  touch $authkeyfile;
  cp $authkeyfile ${authkeyfile}.bak;
  grep -v '$mypubkey' ${authkeyfile}.bak > $authkeyfile;
  echo '$mypubkey' >> $authkeyfile"

#and finally, ssh to the host.
ssh $userhost

Print media done right, Monocle.com

Posted by Chris on December 30th, 2009 filed in Uncategorized
Comment now »

I’ve been listening to the “Monocle Weekly” podcast for most of a year now.  From there, I started buying the print edition – when I could find it.  They’re a very new-media company combining all forms of media while doing old-school reporting at the same time.  In-depth interviews with people from around the world on topics they’re experts in, or good at, or have been studying.

For Christmas, I got a six-month subscription.  Why six months?  Because we put a limit on our gift exchange prices and Monocle isn’t cheap.  It shipped from Sweden and apparently took three weeks to get there.  We were in communication with the Monocle people and they got back to us pretty quickly.

That’s not the point of this though.  Today, the package arrived.  It had a big, bubble-wrapped envelope to protect the rest.  Inside the bubble wrap?  A patterned envelope with their logo and a gold sticker with the logo on it also.  Inside *that*?  The magazine itself – on thick, beautiful, glossy paper with an actual, physical rubber band holding a guide inside, plus a thank you note and a catalog.

I’m comparing this with the only two other timely-delivered items I’ve received.  A Maxim sent to someone else – it’s so disposable and interchangeable that no one cares who gets it, and the people it’s sent to don’t care either.  The Sun-Times glues the delivery label to the edge so I actually have to tear the edge of the paper to open the front half.  At least it’s news, and portable.  No one would buy one if it weren’t for public transportation, I think.

I’m excited to read my new issue.

Building useful command lines, step by step

Posted by Chris on September 28th, 2009 filed in sysadmin
Comment now »

1. I tend to have tons of things running at a time and want to watch them out of a corner of my eye.  Since I’m doing some pretty heavy stat stuff that can end up killing a machine’s memory in a short amount of time, I want to watch that too.

Enter ‘watch’.  Here’s how it goes – watch -n<number of seconds to wait before refreshing> <command to do this to>

And here’s my big watch command (which I’ve aliased to watcher).  In this I’m looking at perl and python processes, the amount of free memory and the cpu usage, and any mysql processes going on.

watch -n1 ' ps aux | egrep "(perl|python)" | grep -v grep;uptime;free -m; echo "show processlist" | mysql -uroot | grep -v Sleep'

This shows details on any perl or python processes (without the grep command itself), the load, the memory state, and any non-sleeping mysql processes.  Every second.   I just keep it up in a corner of the monitor.

2. Semi-related – killing a bunch of mysql processes.

Sometimes I’ll fire up a bunch of processes and decide they should go away.  Easy – killall perl works wonders.  But the mysql processes remain – enter another one:

echo "show processlist" | mysql |egrep -v "(Id|processlist)" | awk {'print "kill "$1";"'} | mysql

What it does:  'echo "show processlist" ' just echoes “show processlist” to standard output.  That is then piped to the input of mysql.

#echo "show processlist" | mysql
Id User Host db Command Time State Info
455093 root localhost NULL Query 0 NULL show processlist

Next, it reverse-greps (as I like to call it) for ‘Id’ because I don’t want to see that.

# echo "show processlist" | mysql |grep -v Id
455095 root localhost NULL Query 0 NULL show processlist

Next, it inputs it into a very short awk program – what this does is split it up by spaces, and set each of those to a $<number> variable.  So we print "kill $1;" to standard out – that’ll be the command we want to send to mysql to kill them all. So we end up with:

# echo "show processlist" | mysql |grep -v Id| awk {'print "kill "$1";"'}
kill 455098;

Finally, pipe that into mysql, like so:

# echo "show processlist" | mysql |grep -v Id| awk {'print "kill "$1";"'} | mysql
ERROR 1094 (HY000) at line 1: Unknown thread id: 455101

What happened there?  Well, since I’ve been killing threads for a good two minutes now while working on this shortcut the only thread left is the “show processlist” thread – which ends as soon as the processlist is shown.  Which makes sense.  So cheat and either add another grep -v to get rid of it or  egrep with a simple regex: egrep -v "(Id|processlist)"

#echo "show processlist" | mysql |egrep -v "(Id|processlist)" | awk {'print "kill "$1";"'} | mysql

Yes, both of these could be done in better ways!   But in my defense, part of doing these things is to share them with others and help them be better.  Just like using simple perl is better a lot of the time than really complex code, simple things that are easy to handle are easier to show others and let them modify.  Plus, this is actually how I go about helping someone solve one of these problems so they can do it themselves in the future.

Oh, and now I can just send this link…

If perl is write-only…

Posted by Chris on September 3rd, 2009 filed in programming

Then python is read-only.  Think of it.

Both have a shebang line, and after that import (use) lines.  Perl’s are mostly optional – for sysadmin stuff you’re usually just doing your boilerplate strict and warnings.  Of course, even that is optional.  Technically anyway.  For python, you need to import something to do absolutely anything.   Which is okay – it shows you what is being used.

Then on to the real work.  In perl, you start out with the program.  It’s right there.  If you want to see the logic, just open the file – it’s usually at the top.  Python is the opposite – you have to define your objects and functions higher up in the file before you can use them.  I can’t quite say you have to define them before you use them, because in practice you’re coding along, think “hey, this should be a function”, zoom up a bit to add it, then go back to the logic.  But they still end up defined before they’re used.

So you have your listing of objects and functions somewhere, and the actual program logic somewhere.  But this shows one difference between the two.

Perl cares about doing things.  Python cares about defining things.

The office enabler

Posted by Chris on August 29th, 2009 filed in Uncategorized
1 Comment »

It’s my firm belief that every office has – or desperately needs – an enabler.  Maybe you know the type.  There’s a steady line of people walking over to ask quick questions about tons of random things.  If there’s a problem, they’re (if not fixing it) walking around, watching, listening.  The next time everyone is stumbling around in the dark working on the broken a/c, he’ll say “hey, they hid a light switch in here behind this panel.  That’s better”.  You know the svn repository directory called “include” or “handy” or “access” or something of the sort?  The one with the “example.pl” file showing every cool, handy, awesome trick in the book and how to use the module to do all the irritating things you need to do constantly in one line?  That’s his.  When the new folks come around to meet everyone he gets up and says something to the effect of “Hey, I’m the enabler.  I’ve been here a little while now, and if you need to know anything – just let me know.  Want some candy?”  When the old folks say “Hey, they moved the stapler!” he says “Yeah, it’s in the other room now by the stapling pool.  Let me show you.  Did you meet the new head of stapling?  They also moved the shiny copier over here, so if you need something in higher quality, come use this one.”

His production may be only average or above average, but everyone in the zone around him is way above average.  That’s because when he finds some way to automate the widget approval application process, he gets excited and shares it.  Suddenly widget approvals go from hours a day to seconds (this has actually happened to me) and everyone gets way more widgets out.  Meanwhile he’s been using his newfound time to find a way to get a chat room for the whole team so they don’t have to yell or call all day.

He’ll automate things to make not just his own life easier, but the lives of everyone around him too.

How do you find your enabler?  I think it starts with being nice.  Having too much empathy for everyone else helps too.  Does someone not only order their drink at the Tastes Burnt with a smile and a please, but also pay attention to how the baristas yell the orders to each other, so they don’t have to translate?  That fellow down the row who strangely requested a garbage can for a strategic spot in the middle?  Depends: did you find yourself (along with him) saving time and reducing hassle because there’s one in just the right place?

Why would you want to become one?  Well, there are some drawbacks and some benefits.


  • Less time to get your stuff done
  • You look like a kissup
  • You look like you’re trying to involve yourself everywhere
  • You’ll get lots and lots of questions

On the plus side:

  • It’s genuinely helping people in that tiny way that makes a huge difference
  • The office is a more pleasant place for all involved
  • Your tools get used
  • Your brain gets used
  • That stuff you do for everyone else?   You get to do it for you, too.
  • You don’t have to learn to say “no”
  • You get to do more, and more interesting, things day to day

In short (too late!) – I think there’s someone who acts as the grease that makes any office run more smoothly – and it’s important that they’re there.