
August 17 2012

Wall Street’s robots are not out to get you

Technology is critical to today’s financial markets. It’s also surprisingly controversial. In most industries, increasing technological involvement is progress, not a problem. And yet, people who believe that computers should drive cars suddenly become Luddites when they talk about computers in trading.

There’s widespread public sentiment that technology in finance just screws the “little guy.” Some of that sentiment is due to concern about a few extremely high-profile errors. A lot of it is rooted in generalized mistrust of the entire financial industry. Part of the problem is that media coverage of the issue is depressingly simplistic. Hyperbolic articles about the “rogue robots of Wall Street” insinuate that high-frequency trading (HFT) is evil without saying much else. Very few of those articles explain that HFT is a catchall term that describes a host of different strategies, some of which are extremely beneficial to the public market.

I spent about six years as a trader, using automated systems to make markets and execute arbitrage strategies. From 2004 to 2011, as our algorithms and technology became more sophisticated, it was increasingly rare for a trader to have to enter a manual order. Even in 2004, “manual” meant instructing an assistant to type the order into a terminal; it was still routed to the exchange by a computer. Automating orders reduced the frequency of human “fat finger” errors. It meant that we could adjust our bids and offers in a stock immediately if the broader market moved, which enabled us to post tighter markets. It allowed us to manage risk more efficiently. More subtly, algorithms also reduced the impact of human biases — especially useful when liquidating a position that had turned out badly. Technology made trading firms like us more profitable, but it also benefited the people on the other side of those trades. They got tighter spreads and deeper liquidity.

Many HFT strategies have been around for decades. A common one is exchange arbitrage, which Time magazine recently described in an article entitled “High Frequency Trading: Wall Street’s Doomsday Machine?”:

A high-frequency trader might try to take advantage of minuscule differences in prices between securities offered on different exchanges: ABC stock could be offered for one price in New York and for a slightly higher price in London. With a high-powered computer and an ‘algorithm,’ a trader could buy the cheap stock and sell the expensive one almost simultaneously, making an almost risk-free profit for himself.

It’s a little bit more difficult than that paragraph makes it sound, but the premise is true — computers are great for trades like that. As technology improved, exchange arb went from being largely manual to being run almost entirely via computer, and the market in the same stock across exchanges became substantially more efficient. (And as a result of competition, the strategy is now substantially less profitable for the firms that run it.)
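The core logic of exchange arbitrage can be sketched in a few lines. This is a deliberately simplified illustration, not any firm’s actual strategy: the prices, fee, and function names are all invented, and a real system would also have to handle latency, partial fills, and currency conversion.

```python
# Hypothetical exchange-arbitrage check: buy at the cheaper venue's ask and
# sell at the more expensive venue's bid, but only if the gap covers costs.

def arb_profit_per_share(ask_venue_a, bid_venue_b, fee_per_share):
    """Return the net profit per share of buying at venue A and selling at
    venue B, after paying a fee on each leg; 0.0 if unprofitable."""
    gross = bid_venue_b - ask_venue_a
    net = gross - 2 * fee_per_share  # one fee to buy, one fee to sell
    return net if net > 0 else 0.0

# ABC offered at 50.00 in New York, bid at 50.03 in London, 1-cent fee per leg:
print(arb_profit_per_share(50.00, 50.03, 0.01))  # small positive edge
# If the gap is only 1 cent, fees eat the edge and the trade is skipped:
print(arb_profit_per_share(50.00, 50.01, 0.01))  # 0.0
```

Note that the profit per opportunity is tiny, which is why the strategy depends on speed and volume — and why competition has compressed its returns.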

Market making — posting both a bid and an offer in a security and profiting from the bid-ask spread — is presumably what Knight Capital was doing when it experienced “technical difficulties.” The strategy dates from the time when exchanges were organized around physical trading pits. Those were the bad old days, when there was little transparency and automation, and specialists and brokers could make money ripping off clients who didn’t have access to technology. Market makers act as liquidity providers, and they are an important part of a well-functioning market. Automated trading enables them to manage their orders efficiently and quickly, and helps to reduce risk.
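A bare-bones version of the quoting logic looks something like the sketch below. Everything here is illustrative — the parameters, the inventory-skew heuristic, and the function name are assumptions, and real market makers layer on far more sophisticated risk controls.

```python
# Minimal market-making quote sketch (hypothetical parameters): post a bid
# and an offer around an estimated fair value, shifting both quotes to
# shed inventory when the book gets lopsided.

def make_quotes(fair_value, half_spread, inventory, skew_per_share=0.0001):
    """Return (bid, ask). A long inventory lowers both quotes, making it
    easier to sell and harder to buy more; a short inventory does the
    opposite."""
    skew = inventory * skew_per_share
    bid = fair_value - half_spread - skew
    ask = fair_value + half_spread - skew
    return round(bid, 2), round(ask, 2)

# Flat book: symmetric quotes around $20.00
print(make_quotes(20.00, 0.02, 0))    # (19.98, 20.02)
# Long 500 shares: both quotes shift down to encourage selling
print(make_quotes(20.00, 0.02, 500))  # (19.93, 19.97)
```

The point the article makes follows directly from this loop: when the fair-value estimate moves, an automated system can reprice both quotes in microseconds, which is what lets market makers quote tighter spreads without taking on unmanageable risk.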

So how do those high-profile screw-ups happen? They begin with human error (or, at least, poor judgment). Computerized trading systems can amplify these errors; it would be difficult for a person sending manual orders to simultaneously botch their markets in 148 different companies, as Knight did. But it’s nonsense to make the leap from one brokerage experiencing severe technical difficulties to claiming that automated market-making creates some sort of systemic risk. The way the market handled the Knight fiasco is how markets are supposed to function — stupidly priced orders came in, the market absorbed them, the U.S. Securities and Exchange Commission (SEC) and the exchanges adhered to their rules regarding which trades could be busted (ultimately letting most of the trades stand and resulting in a $440 million loss for Knight).

There are some aspects of HFT that are cause for concern. Certain strategies have exacerbated unfortunate feedback loops. The Flash Crash illustrated that an increase in volume doesn’t necessarily mean an increase in real liquidity. Nanex recently put together a graph (or a “horrifying GIF”) showing the sharply increasing number of quotes transmitted via automated systems across various exchanges. What it shows isn’t actual trades, but it does call attention to a problem called “quote spam.” Algorithms that employ this strategy generate a large number of buy and sell orders that are placed in the market and then are canceled almost instantly. They aren’t real liquidity; the machine placing them has no intention of getting a fill — it’s flooding the market with orders that competitor systems have to process. This activity leads to an increase in short-term volatility and higher trading costs.
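One way a surveillance system might flag this behavior is by looking at a participant’s cancel-to-order ratio: quote spammers cancel nearly everything they send and almost never trade. The check below is a hypothetical illustration — the thresholds and the function name are invented, not a description of any actual exchange’s rules.

```python
# Hypothetical quote-spam heuristic: flag a participant whose orders are
# almost always canceled unfilled. All thresholds are illustrative.

def looks_like_quote_spam(orders_sent, orders_canceled, fills,
                          cancel_ratio_threshold=0.95, min_orders=1000):
    """Return True if activity resembles quote spam: a very high cancel
    ratio combined with almost no executions."""
    if orders_sent < min_orders:
        return False  # too little activity to judge
    cancel_ratio = orders_canceled / orders_sent
    low_fill_rate = fills < orders_sent * 0.01
    return cancel_ratio >= cancel_ratio_threshold and low_fill_rate

# 50,000 orders, 49,800 canceled, only 120 fills -> suspicious
print(looks_like_quote_spam(50_000, 49_800, 120))    # True
# A market maker who cancels often but trades plenty is not flagged
print(looks_like_quote_spam(50_000, 30_000, 9_000))  # False
```

The hard part in practice, as the next paragraph suggests, is picking thresholds that catch spam without penalizing legitimate market makers, who also cancel and replace quotes constantly as prices move.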

The New York Times just ran an interesting article on HFT that included data on the average cost of trading one share of stock. From 2000 to 2010, it dropped from $0.076 to $0.035. Then it appears to have leveled off, and even increased slightly, to $0.038 in 2012. If (as that data suggests) we’ve arrived at the point where the “market efficiency” benefit of HFT is outweighed by the risk of increased volatility or occasional instability, then regulators need to step in. The challenge is determining how to disincentivize destabilizing behavior without negatively impacting genuine liquidity providers. One possibility is to impose a financial transaction tax, possibly based on how long the order remains in the market or on the number of orders sent per second.
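A resting-time-based tax like the one proposed above could take many forms. Here is one hypothetical variant, with every rate and threshold invented for illustration: orders canceled before a minimum resting time pay a small levy on their notional value, while longer-lived orders pay nothing.

```python
# Hypothetical resting-time transaction tax: orders canceled quickly are
# taxed; orders that rest long enough to provide real liquidity are not.
# The 500 ms threshold and 1-basis-point rate are invented examples.

def resting_time_tax(resting_time_ms, notional, min_rest_ms=500, rate=0.0001):
    """Return the tax owed on an order, based on how long it rested."""
    if resting_time_ms >= min_rest_ms:
        return 0.0  # long-lived order: genuine liquidity, untaxed
    return notional * rate

# An order canceled after 50 ms on $100,000 notional owes $10:
print(resting_time_tax(50, 100_000))     # 10.0
# An order that rested two seconds owes nothing:
print(resting_time_tax(2_000, 100_000))  # 0.0
```

The design trade-off is visible even in this toy: the threshold has to be long enough to deter pure quote spam but short enough that market makers can still reprice as conditions change.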

Rethinking regulation and market safeguards in light of new technology is absolutely appropriate. But the state of discourse in the mainstream press — mostly comprised of scare articles about “Wall Street’s terrifying robot invasion” — is unfortunate. Maligning computerized strategies because they are computerized is the wrong way to think about the future of our financial markets.

Photo: ABOVE by Lyfetime, on Flickr


July 19 2012

“It’s impossible for me to die”

Julien Smith believes I won’t let him die.

The subject came up during our interview at Foo Camp 2012 — part of our ongoing foo interview series — in which Smith argued that our brains and innate responses don’t always map to the safety of our modern world:

“We’re in a place where it’s fundamentally almost impossible to die. I could literally — there’s a table in front of me made of glass — I could throw myself onto the table. I could attempt to even cut myself in the face or the throat, and before I did that, all these things would stop me. You would find a way to stop me. It’s impossible for me to die.”

[Discussed at the 5:16 mark in the associated video interview.]

Smith didn’t test his theory, but he makes a good point. The way we respond to the world often doesn’t correspond with the world’s true state. And he’s right about that not-letting-him-die thing; the other people in the room and I would have jumped in had he crashed through a pane of glass. He would then have gone to an emergency room, where the doctors and nurses would usher him through a life-saving process. The whole thing is set up to keep him among the living.

Acknowledging the safety of an environment isn’t something most people do by default. Perhaps we don’t want to tempt fate. Or maybe we’re wired to identify threats even when they’re not present. This disconnect between our ancient physical responses and our modern environments is one of the things Smith explores in his book The Flinch.

“Your body, all that it wants from you is to reproduce as often as possible and die,” Smith said during our interview. “It doesn’t care about anything else. It doesn’t want you to write a book. It doesn’t want you to change the world. It doesn’t even want you to live that long. It doesn’t care … Our brains are running on what a friend of mine would call ‘jungle surplus hardware.’ We want to do things that are totally counter and against what our jungle surplus hardware wants.” [Discussed at 2:00]

In his book, Smith says a flinch is an appropriate and important response to fights, car crashes, and those sorts of things. But flinches also bubble up when we’re starting a new business, getting into a relationship, or considering other risky but non-life-threatening moves. According to Smith, these are the flinches that hold people back.

“Your world has a safety net,” Smith writes in the book. “You aren’t in free fall, and you never will be. You treat mistakes as final, but they almost never are. Pain and scars are a part of the path, but so is getting back up, and getting up is easier than ever.”

There are many people in the world who face daily danger and the prospect of catastrophic outcomes. For them, flinches are essential survival tools. But there are also people who are surrounded by safety and opportunity. As hard as it is for a worrier like me to admit it (I’m writing this on an airplane, so fingers crossed), I’m one of them. A fight-or-flight response would be an overreaction to 99% of the things I encounter on a daily basis.

Now, I’m not about to start a local chapter of anti-flinchers, but I do think Smith has a legitimate point that deserves real consideration. Namely, gut reactions can be wrong.

Real danger and compromised thinking

To be clear, Smith isn’t suggesting we blithely ignore those little voices in the backs of our heads when a real threat is brewing.

“You can’t assume that you’re wrong, and you can’t assume that you’re right,” he said, relaying advice he received from a security expert. “You can just assume that you’re unable to process this decision properly, so step away from it and then decide from another vantage point. If you can do that, you’re fundamentally, every day, going to make better decisions.” [Discussed at 4:10]

I was surprised by this answer. I figured a guy who wrote a book about the detriments of flinches would compare threatening circumstances with other unlikely events, like lightning strikes and lottery wins. But Smith is doing something more thoughtful than rejecting fear outright. He’s working within a framework that challenges assumptions about our physical and mental processes. You can’t trust your brain or your body if you’re incapable of processing the threat. The success of your survival method, whatever it may be, depends on your capabilities. So, what you have to do is know when you’re compromised, get out of there, and then give yourself the opportunity to assess under better circumstances.

Other things from the interview

At the end of the interview I asked Smith about the people and projects he follows. He pointed toward Peter Thiel because he admires people who see different versions of the future. Smith also tracks the audacious moves made by startups, and he looks for ways those same actions and perspectives can be applied in non-startup environments. The goal is to “see if we come up with a better society or a better individual as a result.”

You can see the full interview from Foo Camp in the following video:

Associated photo on home and category pages: Broken Glass on Concrete by shaire productions, on Flickr


