On the third floor of Citigroup’s Manhattan headquarters, at the far end of a trading floor overlooking the Hudson River, Young Kang, Citi’s global head of algorithmic products, leans over a terminal and monitors the progress of a canny and powerful beast named Dagger. Bred and trained in secret by Citi’s financial engineers, Dagger can stalk through more than 20 markets, public and otherwise—hunting for anomalies, buying and selling, prowling through mountains of historical data—all at the behest of Citi’s clients. Amid the trading-floor din, Dagger fulfills its duties in flickering silence, with a speed and acuity no human can match.
“It’s self-learning,” Kang says. “The numbers keep updating, the strategy keeps adjusting itself. It gets smarter.”
And it makes a lot of money. Algorithms like Dagger can exploit the smallest inefficiencies in the market. They can parse trades in millionths of a second. Some species can detect other algos embarking on predictable trading strategies, and ruthlessly adjust their techniques. They’re growing ever more complex, subtle, and sophisticated. And as they become more popular, they’re creating some serious headaches for regulators.
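How might one algorithm "detect" another's predictable strategy? The following is a deliberately simplified, hypothetical sketch (nothing like Citi's actual Dagger, whose methods are secret): it flags a participant that submits orders at nearly fixed intervals, the signature of a naive time-slicing strategy. The function name and tolerance threshold are illustrative assumptions.

```python
# Hedged illustration, NOT a real trading system: one crude way to flag
# another participant's predictable behavior -- an algo that slices a big
# order into child orders at fixed intervals. Real detection methods are
# far more sophisticated; this only checks whether the gaps between
# observed trades are nearly constant.

from statistics import mean, pstdev

def looks_periodic(timestamps, tolerance=0.05):
    """Return True if inter-trade gaps vary by less than `tolerance`
    (as a fraction of the mean gap) -- i.e., the timing is predictable."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # too few observations to call it a pattern
    return pstdev(gaps) < tolerance * mean(gaps)

# An algo trading every 60 seconds on the dot is easy to spot...
print(looks_periodic([0, 60, 120, 180, 240]))   # True
# ...while randomized order timing defeats this simple check.
print(looks_periodic([0, 41, 130, 155, 247]))   # False
```

Once a pattern like this is detected, a predatory algorithm could, in principle, trade ahead of the next expected child order, which is exactly why sophisticated execution algos randomize their timing and sizing.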
By some estimates, algorithms now trigger 70 percent of all trades in U.S. equities. The speed and volume of everyday trading have propelled the market into a new and esoteric dimension, and rendered traders in the pits largely obsolete. Average daily share volume on the New York Stock Exchange increased by 181 percent between 2005 and 2009, while the time required to execute a trade on its electronic systems dropped to 650 microseconds.
Such changes have a lot of people worried, including the Securities and Exchange Commission. It released a wide-ranging paper earlier this year seeking suggestions on how to restructure the entire equity market, and created a Division of Risk, Strategy, and Financial Innovation in part to help monitor new technologies. A market collapse in early May—in which automated-trading systems exacerbated a sell-off that drove the Dow down more than 900 points in less than an hour, before it quickly recovered—gave two worries new public salience: that the proprietors of these algos may not be in full control of their creations, and that the strategies they pursue are, in some cases, fundamentally warping the financial markets.
In January, the NYSE fined Credit Suisse $150,000 for “failing to adequately supervise the development, deployment, and operation of a proprietary algorithm.” The fine was a pittance, but more troubling was that the bank didn’t even know that its malfunctioning algo (which sent hundreds of thousands of cancel-and-replace requests for orders that hadn’t been made) had crippled some of the NYSE’s trading stations until regulators called the next day. This spring, a newsletter from the Federal Reserve Bank of Chicago warned: “Although algorithmic trading errors have occurred, we likely have not yet seen the full breadth, magnitude, and speed with which they can be generated. Furthermore, many such errors may be hidden from public view.”
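The Credit Suisse failure illustrates how a small logic error can become a message storm. Below is a hypothetical, simplified sketch (the class names, the retry logic, and the message cap are all invented for illustration, not drawn from the actual incident): an algorithm treats the exchange's rejection of a cancel-and-replace request for a nonexistent order as a transient error and simply retries, flooding the exchange until a rate limit trips.

```python
# Hypothetical sketch, NOT Credit Suisse's actual system: how a bug can
# generate a flood of cancel-and-replace messages for orders that were
# never accepted in the first place.

class Exchange:
    def __init__(self):
        self.live_orders = set()       # orders the exchange actually holds
        self.messages_received = 0     # total message load inflicted on it

    def cancel_replace(self, order_id):
        self.messages_received += 1
        if order_id not in self.live_orders:
            return "REJECT_UNKNOWN_ORDER"  # the order was never placed
        return "OK"

def buggy_algo(exchange, order_id, max_messages=1000):
    """Bug: a reject is treated as a transient error and retried in a
    tight loop. Only the message cap (a stand-in for a kill switch or
    exchange rate limit) ends the storm."""
    sent = 0
    while sent < max_messages:
        response = exchange.cancel_replace(order_id)
        sent += 1
        if response == "OK":
            break  # never reached: the order doesn't exist
    return sent

exchange = Exchange()
storm = buggy_algo(exchange, order_id="ORD-123")
print(storm)  # 1000 messages fired at the exchange for one phantom order
```

A correct implementation would treat `REJECT_UNKNOWN_ORDER` as terminal and stop immediately; the point of the sketch is that, at machine speed, the gap between those two behaviors is hundreds of thousands of messages before any human notices.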
Bernard Donefer, a finance professor at Baruch College and the author of a study in the most recent Journal of Trading called “Algos Gone Wild,” contends that the speed of these algorithms, and their ability to reach so many markets simultaneously, could turn even a minor coding error into a spiraling disaster. “Another 1987,” he told me, referring to the epic crash caused in part by simpler automated-trading schemes. This view puts Donefer in the minority in the financial community, which tends to have more faith in firms’ internal risk controls. But he thinks that without better regulation, more algo-gone-wild scenarios are inevitable. He notes that while controls at big firms like Citi are generally exemplary, second- and third-tier firms present a graver risk.