Artificial intelligence is considered one of the biggest leaps forward in the history of mankind. While we have long had the capacity to observe our surroundings and to question them, we have not had the power to compute an answer, until now. Artificial intelligence is the key to answering many of the questions we cannot answer ourselves due to our natural and biological limitations.
At the current stage of development, AI is a capable administrator of sorts and can be used in systems that are well understood. It excels at pattern recognition, control, identifying dynamics, and similar tasks. It also has the capacity to learn from its mistakes without much human intervention. This makes it a perfect match for conducting day-to-day activities.
In risk management and financial markets, AI seems a perfect fit. It is used for everything from profiting on arbitrage opportunities to making money off tweets by the President of the United States. Many have come up with ingenious ways of using these tools to profit in the financial markets.
The question this raises, then, is: does AI increase market stability and reduce risk, or does it destabilize markets and increase risk?
In large-scale applications of AI, individual systems are given responsibility for small parts of the overall whole. The aggregate of these small parts could, at some point, make up the entire system. A model case here is risk management. The first step in risk management is the modeling of risk, which involves processing market prices with simple statistical tools. The next is to aggregate the available knowledge on positions held by banks with information on the individuals who make decisions in the system. This leads to the creation of a risk management AI engine with knowledge of risk, positions, and human capital. While the endgame is clear, we still have a long way to go.
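To make the first of those steps concrete, here is a minimal sketch of the kind of "simple statistical tool" a risk engine might apply to market prices: a historical Value-at-Risk estimate. The price series is simulated, not real data, and the whole example is my own illustration rather than anything described in the article.

```python
import numpy as np

# Simulate a toy daily price path (geometric random walk), standing in
# for the market prices a risk engine would actually ingest.
rng = np.random.default_rng(seed=42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))

# Daily simple returns.
returns = np.diff(prices) / prices[:-1]

# 95% one-day historical VaR: the loss exceeded on only 5% of days.
var_95 = -np.percentile(returns, 5)

print(f"95% one-day VaR: {var_95:.2%} of portfolio value")
```

The same engine would then combine estimates like this across positions and desks, which is where the aggregation step described above comes in.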
It isn't difficult to translate regulations and rules into code and system logic to ensure technological oversight of the system, and doing so makes oversight much easier. But the problem isn't one of technology or data; it is political, legal and social in nature. It follows that when micro-level AI units are scaled up to the system level to monitor for stability or instability, the outcomes may be very different. At some level, this could increase systemic risk.
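As a hedged illustration of what "translating a rule into system logic" can look like, consider a position-limit rule checked automatically before an order is accepted. The cap and function names here are hypothetical, chosen only to show the pattern.

```python
# Illustrative regulatory cap on position size, in shares. In practice
# such limits come from the rulebook, not a hard-coded constant.
MAX_POSITION = 10_000

def order_allowed(current_position: int, order_size: int) -> bool:
    """Reject any order that would push the position past the cap."""
    return abs(current_position + order_size) <= MAX_POSITION

print(order_allowed(9_500, 400))   # within the limit
print(order_allowed(9_500, 600))   # would breach the cap
```

Encoding a single rule like this is trivial; the hard part, as the paragraph above notes, is the political, legal and social question of which rules to encode and who answers for the aggregate behavior.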
An increasing number of hedge funds are directed entirely by AI-powered trading engines. These funds are at the vanguard of AI in financial markets. And, like many markets where AI is transforming business as usual, these engines represent innovative new investment products while simultaneously raising new questions. In 2012, Knight Capital, a firm that specializes in executing trades for retail brokers, took $440m in cash losses due to a faulty test of new trading software. Such events aren't one-offs; they have occurred time and again in different markets. Earlier this year, a computer glitch sent shares in dozens of US technology companies, including Apple, Amazon and Microsoft, to the same price one Tuesday morning, apparently wiping billions off their market value. The bug caused many stocks on the Nasdaq exchange to briefly be reported at $123.47 on Bloomberg, Reuters and Google Finance. It was triggered when financial information providers wrongly interpreted a Nasdaq data test as live prices, leading to brief pandemonium on trading floors. Algorithms execute orders based on exactly this kind of data, which poses a huge risk.
Using such models has led to a homogenization of risk, where everyone is monitoring and watching almost the same things. The more similar our perceptions and objectives are, the more systemic risk we create. A slight change in one indicator makes almost everyone act in the same way at high speed. Diverse views and objectives dampen the impact of shocks and act as a countercyclical, stabilizing, systemic-risk-minimizing force.
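The amplification argument above can be sketched numerically. In this toy simulation of my own (not from the article), each trader sells when an indicator falls below a personal threshold: with homogeneous thresholds one small shock triggers everyone simultaneously, while diverse thresholds stagger the response.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_traders = 1000

# A slight negative move in one shared indicator.
indicator = -0.02

# Homogeneous market: everyone watches the same trigger level.
homogeneous = np.full(n_traders, -0.015)

# Diverse market: trigger levels spread over a wide range of views.
diverse = rng.uniform(-0.10, -0.01, n_traders)

# Fraction of traders who sell at the same instant in each market.
sell_homog = np.mean(indicator < homogeneous)
sell_diverse = np.mean(indicator < diverse)

print(f"homogeneous market: {sell_homog:.0%} sell at once")
print(f"diverse market:     {sell_diverse:.0%} sell at once")
```

Under these assumed numbers the homogeneous market dumps everything in one step while only a small minority of the diverse market reacts, which is the dampening effect the paragraph describes.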
There is also a tendency to confuse risk with uncertainty and treat the two as the same. In reality, they are very different. Risk applies when all possible states of the world and their outcomes are known. Uncertainty applies when there is perfect information about neither the possible states nor their outcomes. This is true of events such as political outcomes, which fundamentally cannot be modeled; they are uncertainties rather than risks.
Excessive quantification of such events leads to outcomes which we may not completely understand.
There is a silent arms race emerging in financial markets with the rise of funds driven by AI. Presently, trading algorithms can fake one another out to gain advantages, which the BBC notes is illegal but difficult to prove. They can also predict a slower program’s next moves and then trade accordingly. With firms competing aggressively to get faster trading times, a slower program could create massive functionality gaps. As algorithms become more intelligent and more powerful, the financial industry will require ever-smarter safeguards against exploitation and risk.
While artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions, it is also creating some risks and uncertainties of its own. This leads some to believe that AI might not ultimately be useful in increasing financial stability. This point holds some weight, as there is always the possibility that everyone games the system to take advantage of it and everyone ends up losing instead.
-Contributed by Bhargav Dhakappa