Exhortation - Spring 2009

The Terminator Comes to Wall Street


How computer modeling worsened the financial crisis and what we ought to do about it

By Joseph Fuller

March 1, 2009

You’ve seen this story in countless Hollywood science-fiction movies, from The Terminator to WarGames. Scientists develop a sophisticated computer or robot to assure the nation’s security, but something goes wrong and the technology itself mutates into a catastrophic threat. Unfortunately, the U.S. economic system now finds itself crippled by a real-life technology-gone-wrong story line. In this case, the culprit is not a Pentagon fighting machine, but rather the computer-based modeling and trading programs developed for Wall Street over the last quarter century.

Business models—whether they are models for analyzing market trends or running a major auto manufacturer—typically assume that history provides a guide to future outcomes. Such an assumption is usually reliable, but whenever events fall outside historical norms, the results can be catastrophic. Against this background, consider the introduction of computer-based program trading, arguably the most important change in global investing since the founding of the first mutual fund—the Massachusetts Investors Trust—in 1924. Over the past 20 years on Wall Street, computer-based models have gradually replaced human networks of strategists and traders. Quantitative analysts (“Quants”) trained in mathematics and physics have used sophisticated data analytics and modeling skills to evaluate securities and develop portfolio-management theories. The advent of Quants has allowed firms of all stripes to trade ever-larger volumes of securities globally and to extend their activities to new and exotic instruments. In many cases, the computers didn’t just provide advice; they actually executed stock trades. By the end of September 2008, the global stock exchange NYSE Euronext reported that so-called program trading, in which computers execute trades based on programs developed by Quants without specific human intervention, represented almost 17 percent of trades—more than 900 million shares per day.

Since the data that feed these analytical formulas come from the past, the models can have trouble responding to extraordinary or unprecedented events. When credit markets began to seize up in 2007 and the securities markets went into free fall in 2008, the models tried to figure out a suitable response. They had been programmed to avoid volatility by moving out of securities and into cash. Of course, when many models trading hundreds of millions of shares all tried to liquidate investments and move into cash at once, they only deepened the stock declines, leading to further volatility and thus to more selling. The models had not been programmed to understand a scenario in which everyone might try to move to cash at the same time. The effect was like a panicked crowd trying to escape from a burning theater.
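The dynamic is easy to see in a toy simulation (a deliberate simplification: the drawdown triggers, price impact, and model count are invented for illustration, not drawn from any real trading program):

```python
# Toy simulation of the sell-into-decline feedback loop. All parameters
# (number of models, drawdown triggers, per-sale price impact) are invented
# for illustration; this is not any firm's actual trading model.

def simulate(shock=0.03, n_models=50, steps=20, impact=0.001):
    """Apply an initial price shock, then let each model liquidate once its
    drawdown trigger (spread between 2% and 4%) is hit. Every liquidation
    depresses the price further, which can trip still more models."""
    peak = 100.0
    price = peak * (1.0 - shock)               # exogenous initial decline
    triggers = [0.02 + 0.02 * i / n_models for i in range(n_models)]
    in_cash = [False] * n_models
    history = [price]
    for _ in range(steps):
        drawdown = (peak - price) / peak
        for i, trigger in enumerate(triggers):
            if not in_cash[i] and drawdown >= trigger:
                in_cash[i] = True
                price *= 1.0 - impact          # each forced sale depresses the price
        history.append(price)
    return history, sum(in_cash)

history, models_sold = simulate()
```

With a 3 percent initial shock, every one of the 50 simulated models ends up in cash and the total decline more than doubles; with no shock (`simulate(shock=0.0)`), none of them sell at all.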

Any good movie gives its audience clues early in the action that foreshadow subsequent events. For the stock market in the current economic crisis, the 1998 collapse of Long-Term Capital Management (LTCM) was the clue that went unheeded by the Quants. LTCM was a U.S. hedge fund that used high leverage and complex mathematics-based trading strategies to rack up big profits in the mid-1990s. In 1998, a lethal combination of losses on major investments and the Russian financial crisis, coming on the heels of the 1997 Asian financial crisis, put LTCM under immense pressure. As LTCM sold off assets in a frantic effort to cover its debts, its massive selling further depressed the market value of those assets, leading to a classic flight to liquidity, in which concerned investors seek to exit a troubled investment firm’s vehicles, exacerbating the very problem they are trying to escape. The chain reaction doomed the company and shook the capital markets. The Federal Reserve Bank of New York organized a bailout led by commercial banks to prevent LTCM’s problems from infecting the broader market.

The need for a bailout sounds familiar today, but Wall Street and its regulators did not pick up the echo from LTCM as soon as they should have. They failed to perceive how vulnerable trading models such as LTCM’s were to volatility. More important, they ascribed the firm’s collapse to a defective trading strategy rather than to a toxic combination of an aggressive strategy and the high leverage that funded it (in LTCM’s case, $25 in debt for every dollar of invested capital). Indeed, within two years, the Clinton administration had completed the dismantling of the Glass-Steagall Act, eliminating the distinction between investment and commercial banks and allowing investment banks to take on even more leverage to fund their trading strategies. Had regulators seen LTCM as an omen instead of an anomaly, they might have imposed stricter limits on the use of leverage or enacted more exacting reporting requirements for the industry.

Computer models have three inherent problems. The first problem is that those who created the models don’t understand the markets. Modelers are experts in math, computer science, or physics. They are not generally experts in stocks, bonds, markets, or psychology. Modelers like to think of markets as efficient abstractions, but these abstractions can never fully account for the messy and irrational actions that humans take for emotional reasons. Moreover, as we have seen, they construct their models or programs based on a study of historical market data. They test them by showing how well the model would have performed in a given historical situation. Because their programs must have some parameters, modelers necessarily have to exclude unprecedented circumstances like the current simultaneous volatility in global debt, equity, currency, and commodity markets.

The second problem is that managers don’t understand the modelers. Most of the current generation of senior executives on Wall Street lack the technical background to understand the models (or the algorithms that underlie them) that power their own firms’ trading strategies. Because they are unable to speak the same language as the people creating the models, the managers have difficulty framing the questions necessary to comprehend how the models might respond to different situations. The problem here goes beyond comprehension. Even if the executives were Quants, they might well not understand as much as they would like about the programs running their businesses. The models themselves—and particularly the interactions among them—have grown so complex that it may be impossible for any human to fully grasp the types and volumes of derivatives traded in this way or to predict how the models will interact with each other.

The third problem is that the models don’t “understand” each other. Each model executes its own strategy based on its calculus for maximizing value in a given market. But individual models are not able to take into account the role other models play in driving the markets. As a result, each program reacts almost in real time to the actions of other programs, potentially compounding volatility and leading to wild market swings. As we have seen, this happened recently when a set of models analyzing market data led their respective firms to liquidate assets and maximize their cash positions. The cumulative effect intensified the resulting selloff.

In horror and science-fiction movies, the threat rarely dies easily. Barred from entering a police station, the Terminator utters its famous “I’ll be back” line before smashing through the door in a car. How can regulators avoid the model-driven volatility of the 2008 market? It won’t be easy. Computer-based trading programs have become too integrated into the way the stock markets work to simply shut them off. But there are steps that companies and regulators can take to reduce future model-driven volatility.

First, bank executives must manage their company’s models. During the recent crisis, executives and board members of troubled institutions protested that they didn’t fully understand the risks their firms had incurred. The trading positions were so massive and the models that governed them so complex that they defied anyone’s ability to understand them. That explanation should evoke little sympathy now and none in the future. Financial institutions don’t need to elevate math Ph.D.’s to the highest echelons of top management, but they must build the capability to ensure that executives understand the nature of the risks underlying their trading strategies. In short, they need risk managers who can explain in plain English not only the firm’s trading positions, but also the logic of the models driving them. Managers need to be equipped to ask the Quants the right questions: Which past events were used to create the model? What are the five or 10 most important differences between those past events and the current economic situation? How will the model attempt to account for those differences?

Companies must also overhaul their internal-governance systems. First, executives familiar with the day-to-day workings of the market, who understand the human factors that so strongly influence the financial industry, need to oversee the work of modelers. They must assemble people who have sweated through trading highs and lows and have them sit down with the math guys who are building the models. If a financial firm already has such a system in place, it should work on making it better. There’s room for improvement throughout the industry.

Boards of directors should also insist that the compensation system for publicly traded financial institutions be changed. As recent events have proven all too conclusively, strategies that yield short-term gains can lead to intermediate-term disasters. Companies should place substantial percentages of the bonuses of traders and their bosses in bonus “banks” that pay out over multi-year periods. That will help ensure a pay-for-actual-performance culture and instill more discipline in the investing process. Investors in private investment firms should demand the same protections.
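As a sketch of how such a bonus bank might net out over a trading career’s ups and downs (the one-third annual payout rate and the loss-netting rule here are hypothetical, chosen purely for illustration):

```python
# Hypothetical bonus "bank": each year's trading result is credited to (or
# debited from) a deferred account, and only a fraction of the positive
# balance pays out each year. The one-third payout rate and the loss-netting
# rule are illustrative assumptions, not an actual compensation plan.

def bonus_bank(annual_results, payout_fraction=1.0 / 3.0):
    """Return the yearly payouts and the balance left in the bank."""
    balance = 0.0
    payouts = []
    for result in annual_results:
        balance += result                            # losses eat into deferred pay
        paid = max(balance, 0.0) * payout_fraction   # only part pays out each year
        balance -= paid
        payouts.append(paid)
    return payouts, balance

# Two years of apparent gains, then the positions blow up:
payouts, remaining = bonus_bank([300.0, 300.0, -400.0])
```

Under immediate annual payouts, the trader would have pocketed the full 600 before the loss landed; with the bank, only part of the early bonuses pays out, and the year-three loss absorbs the rest of the deferred balance.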

Boards should also devote more time to understanding the trading strategies of the firms they oversee. Audit and risk-management committees should extend their writ to include a review of the logic embedded in the models being employed to trade major positions. It is no longer sufficient for boards to rely on the risk-management department to understand the firm’s overall investment posture. Board members must roll up their sleeves and understand how those positions are constructed and how vulnerable they are to unexpected events.

As regulators and legislators design new regulatory structures to prevent future financial calamities, they must take into account the pervasive role that models play in making global markets work. Models enable great market liquidity and permit firms to achieve a high level of productivity. It will not work for regulators to adopt some Luddite point of view seeking to curb the use of models. Nor will it work for regulators to rely on accurate reporting of an institution’s risk exposure. Those exposures change constantly; retrospective reporting would provide investors only limited insight into the risk positions of major firms. Likewise, legislators cannot demand that managers cease to take imprudent risks if, in fact, the risks are being taken and magnified by the interplay of computers.

However, regulators can hold managers and boards to a higher degree of accountability. The regulators must establish new rules that stipulate the responsibilities of management and boards for overseeing trading risk. Those rules must be framed by experienced financial professionals and reflect the predominance of models in the trading strategies of financial institutions.

They must also find ways to dial down the volatility in markets. Regulators in the United States and in other major markets must revisit the use of market “circuit breakers.” In recent months, circuit breakers that suspended trading when markets fell too fast were employed repeatedly worldwide. As we move to harmonize financial regulations globally, regulators must themselves use models to understand how program trading drives volatility and how to test alternative regulatory mechanisms.
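The breaker mechanism itself is mechanically simple; the hard regulatory question is where to set the thresholds. A minimal sketch, with tiered decline thresholds chosen purely for illustration rather than taken from any actual exchange’s rules:

```python
# Minimal sketch of a tiered market circuit breaker. The decline tiers and
# the halt levels they map to are illustrative assumptions, not the rules
# of any actual exchange.

def halt_level(reference_price, last_price, tiers=(0.07, 0.13, 0.20)):
    """Return 0 for no halt, or the highest tier breached by the decline
    from the session's reference price (higher tiers mean longer halts)."""
    decline = (reference_price - last_price) / reference_price
    level = 0
    for tier in tiers:
        if decline >= tier:
            level += 1
    return level

normal_day = halt_level(1000.0, 950.0)   # a 5% drop trades through
bad_day = halt_level(1000.0, 920.0)      # an 8% drop trips the first breaker
</```

Testing alternative tier settings against historical and simulated crash scenarios is exactly the kind of model-driven analysis regulators themselves would need to do.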

While regulators will not be able to suppress individual models, they can change the rules that drive the behavior of the companies that employ them. For example, they can change the leverage ratios that companies are allowed, in order to reduce the use of aggressive trading strategies. Such a regulation would limit investors’ borrowing to a multiple of their equity, like forcing home buyers to pay at least a certain percentage of the home price as their down payment. Less leverage reduces the capacity of firms to take on risk, thereby reducing the possibility that a few bad investment decisions will compel a firm to sell assets rapidly to cover its debts, roiling the markets.
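The arithmetic behind a leverage cap is worth making concrete. At LTCM’s ratio of $25 in debt per $1 of equity, each dollar of equity controlled $26 of assets, so even a small decline in asset values wiped out the equity entirely (the 5-to-1 cap below is a hypothetical comparison, not a proposed rule):

```python
# Back-of-the-envelope leverage arithmetic. The 25:1 ratio is LTCM's;
# the 5:1 cap is a hypothetical for comparison, not a proposed rule.

def equity_after_decline(debt_per_dollar_of_equity, asset_decline):
    """Equity remaining per $1 of starting equity after assets fall."""
    assets = 1.0 + debt_per_dollar_of_equity   # own capital plus borrowings
    return assets * (1.0 - asset_decline) - debt_per_dollar_of_equity

wiped_out = equity_after_decline(25.0, 0.04)   # 26 * 0.96 - 25 ≈ -0.04
survives = equity_after_decline(5.0, 0.04)     # 6 * 0.96 - 5 ≈ 0.76
```

The same 4 percent decline in asset values that renders the 25-to-1 firm insolvent leaves roughly three-quarters of the capped firm’s equity intact.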

Investment firms will wrangle with the challenge of tying their models to a better understanding of market behaviors. Regulators will struggle to adjust market rules to curtail the explosive effects of existing models. Meanwhile, new types of computer-based trading programs will emerge as the technologies driving them continue to evolve. At the cutting edge of modeling science, researchers are trying to move away from relatively crude rules-based models toward models that approximate the processes of human reason.

Such artificially intelligent models might be able to consider numerous different data streams, interpret them, and look for patterns rather than simply trying to fit market behavior into a fixed algorithm. Given the torrid pace at which both processing power and data storage continue to improve, some of the technological barriers to such advanced models should fall in the coming years.

Given the extent to which the government has helped support the banking system, regulators will have a small window of opportunity in which to influence compensation practices. These regulators may wish to consider the compensation practices for the Quants who build the trading models. If the compensation for modelers depended on the long-term performance of their models, the Quants might very well develop models that prized steady growth over short-term dazzle.

With more oversight and better management at the investment firms, and more intelligent regulation, it’s possible to create an environment in which the Quants and their programs enable liquidity and productivity, with reduced volatility. We don’t need any more real-life technology-gone-wrong scenarios.

Joseph Fuller is a co-founder of Monitor, a global consulting firm. His writing has appeared in The Wall Street Journal, the Financial Times, The Washington Post, Sloan Management Review, and Harvard Business Review.
