February 23, 2017
Without our noticing, artificial intelligence (AI) already fits into our daily lives in many forms and supports our decision-making. At times it's discussed a bit like blockchain, which we're being promised will solve things like world hunger and human trafficking. However, artificial intelligence is already prevalent practically everywhere in technology, from cars to Google searches, and because this technology is specifically designed for singular tasks, we humans cannot compete with that level of relentless focus.
This relates specifically to what is called Artificial Narrow Intelligence (ANI), designed to perform one sole task meticulously. The next two levels of AI development, Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), are still out of our reach, for now.
This post won’t debate AI, its existence or its development – instead, I want to discuss how rapidly we could see the emergence and adoption of AI-centric policies that could lead to the disproportionate adoption of AI across market segments. In the context of finance, we’re already embracing AI in trading algorithms and complex functions, but the emergence of technologies such as chatbots, raking in data from users across the world, makes the implications of this technology and its advancement far-reaching and transformative.
I’ll also use ‘AI’ and ‘technology’ interchangeably at times because with all its fantastic applications, that’s all it is at the end of the day.
To take a popular anecdote: Tesla’s Autopilot feature has been a common headline lately, using advanced in-car sensors to interpret a barrage of incoming information in order to assist the driver – emphasis on assist at this stage. In 2016 there were a few accidents involving Autopilot, which led the US Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) to launch an investigation into the fatal ones. Reading through the report and findings, the NHTSA did not find any defect in the AI functionality or its intended use, aside from some notes on proper driver guidance and driver alertness (see https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF).
One thing of note in the report is the NHTSA’s finding that driver-assistance tools, such as automatic braking, collision warning and lane assist, could help reduce accidents by 40%.
Take a moment to think about that: the NHTSA, whose mission is to “save lives, prevent injuries, reduce vehicle-related crashes”, has found that driver-assistance tools in cars (AI, or rather ANI, applications) could improve its progress toward that mission by 40%. How long will it be until these kinds of driver-assistance tools are required in all cars, much as seat belts are today?
Further, how long might it be until we ban those dangerous humans from driving – the ones who so often rely on gut and intuition and only follow the rules intermittently?
While an anecdote, I believe we will see similar cases in many sectors and across many applications. There are various organizations whose existence and mandate are predicated on improving a sector’s performance, safety and security. When these bodies see a clear improvement in meeting their mandate by adopting stronger support for AI systems, won’t we see exponential adoption of AI across the board?
Let’s also be clear: oversight of markets has existed for ages, and it is what oversight organizations such as regulators and policymakers were set up to do in the first place. AI is already in use well beyond cars – in air traffic control, military and security applications, as well as the financial markets. What is becoming increasingly important is its ability to offer a better quality of service than an actual person, as the mainstream adoption of a technology-first approach reaches its tipping point.
There’s a common misconception about humans and technology: that one makes mistakes and the other doesn’t. Let’s be absolutely clear: 1) humans make mistakes, 2) technology does not. It stands to reason that a human can design technology to perform a function that contains faults or produces a faulty outcome, but the technology will perform exactly the way it was designed – to a fault.
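As a toy illustration of that distinction (entirely hypothetical, not from any real system), here is a sketch in Python of a function whose designer intended rounding but mistakenly wrote truncation: the mistake is human, yet the machine reproduces it with perfect consistency on every run.

```python
def round_to_cents(amount):
    """Intended to round to the nearest cent, but the designer
    mistakenly truncates instead -- a human error, faithfully
    and deterministically executed by the machine."""
    return int(amount * 100) / 100  # truncation, not rounding

# The faulty design yields the same faulty result on every run:
print(round_to_cents(10.999))  # 10.99, never 11.0
```

The point is not the bug itself but its determinism: unlike a tired human clerk, the code never makes the mistake only sometimes.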
The financial crash was driven by various factors: greed, herd mentality, errors of judgment and plenty of conflicting behavior. From a macro perspective, even though some made money in their activities, it's clear the market overall paid a steep price all over the world.
Various white-collar professions are under pressure to justify their roles and added value. Many repetitive functions – data entry, data mining and pattern seeking – can already be outsourced to technology with better results, and entire professional fields are wondering what comes next. For all the work done in investment banks and law firms, to pick on two, much of it is repetitive process. Down the line, will technology be quicker and better at picking out trends and spotting errors, and so come to play an oversight role in catching mistakes and conflicts? I believe it will.
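As a minimal, hypothetical sketch of the kind of repetitive pattern seeking that can be handed to code (the function name and threshold here are my own invention, purely for illustration), consider flagging transaction amounts that sit unusually far from the mean:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag amounts more than `threshold` sample standard deviations
    from the mean -- a crude stand-in for the automated error and
    conflict spotting described above."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# A single anomalous entry is picked out identically on every run,
# and the check never tires of scanning the next million records:
print(flag_outliers([100, 102, 98, 101, 99, 5000]))  # [5000]
```

A real compliance system would of course use far more sophisticated models, but the tirelessness and consistency are exactly what makes even this crude version competitive with manual review.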
I recently had the chance to have a spirited argument with the CEO of a multinational bank about future headcount. Their position, knowing their organization far better than I do, was that they would be adding tens of thousands of jobs over the next five years. I respectfully disagreed, and argued further that many of those jobs would shift from banking roles to more technology-centric, data-driven ones. I think this is true across the board, and however we feel about it, it won’t change the direction toward greater efficiency.
How long until we reach this tipping point? I believe we are there, with organizations already staking out their positions in the shift. I fundamentally believe organizations can leverage data-centered technology for better decision-making and better results. In markets such as financial services, which have been marred by scandals around transparency, lack of accountability and conflicts of interest, we may be approaching a point where demand builds to enforce technology-based approaches to the most pertinent systemic issues – for example via distributed, immutable ledgers.
If you follow the trends toward more data-driven decision-making and automated workflows, the assumption is that the data is available for analysis and action. This data society, I believe, will be polarized. Imagine a world where financial markets are driven fully by algorithms and people play a supporting role. How long until the people themselves start standing out like sore thumbs?
The polarization of a data-driven society does worry me, as does the (naive) excitement around, and en masse adoption of, new technologies – such as deploying blockchain to end human trafficking, which, by the way, involves establishing a universal identity, and that sounds dangerously dystopian. In many of these initiatives the intentions may be admirable, but the consequences may well be vast, dire and unpredictable at the scale we are talking about.
AI is invisible, and in the short term it will continue to exist and play its part beneath the surface. But as AI develops – and develops itself – its fundamental impact on industries and markets will become more and more apparent. I believe these changes are starting to manifest today, with significant and dramatic consequences that we're likely to fully grasp only in hindsight.
(1) Some sites have reported this as a clear endorsement of Tesla’s Autopilot, which is misreported and incorrect if you read the actual study.