“Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.”
Conceived at a conference at Dartmouth College in 1956, the concept of artificial intelligence (AI) began with this simple, incredibly bold assertion. Herbert Simon, one of the conference attendees, went further in 1965, predicting that machines would “be capable, within 20 years, of doing any work a man can do.” Such pronouncements were commonplace; early adherents of AI did not lack ambition.
AI’s very nature raises important questions, from capability to deployment. Take these two examples: is it even possible to describe every learned activity precisely enough to simulate it? Are perfect simulacra really the objective of AI, or should these tools be able to do whatever humans can do, but differently? Beyond that, AI opens up an entirely new realm of moral, ethical and even existential dilemmas. AI’s momentum has ebbed and flowed, with the scientific community buckling under the weight of that very human element: great expectations. Even after beating chess champion Garry Kasparov, besting Jeopardy! contestants and producing robots capable of box jumps and backflips, the AI Simon predicted remains far from reality.
AI technologies have nonetheless come a long way. If these issues once seemed better suited to engineering laboratories and philosophy theses than to boardrooms, that is certainly not the case today. The concept is firmly in the popular consciousness. Across many industries and within the public sector, AI applications are now seen as change agents and cost reducers, with IBM’s Watson one of the most ubiquitous characters in advertising. For many companies, few investments are made in new processes today without at least contemplating whether AI can assist. Yet the same dilemmas manifest themselves in business contexts, too. What are the risks of delegating responsibility for a person’s tasks to a machine? Can those risks be managed? How vulnerable is the broader business model to AI-driven disruption?
These questions are particularly acute for financial services, which is catching up to the promise of AI after a slow, skeptical start. In some ways, the considerations are the same as for any new technology: AI must be integrated into the stack thoughtfully, and special attention must be paid to pairing it with, or replacing, existing analytics. New compliance processes must be built around it, and the effectiveness of its application must be proven and its performance measured, which in effect means creating reliable data.
But in other ways, adopting AI is more of a greenfield exercise. AI today is an umbrella term for a number of subset technologies, and firms must choose which is suitable, depending on the nature of the task and their internal commitment to the technology. Most financial technologies are also viewed as laissez-faire and plug-and-play; by contrast, some in the industry see ample reason to regulate AI. With that in mind, how can a mutually beneficial outcome be achieved?
Survey research conducted by WatersTechnology in April 2018 engaged respondents at investment banks, asset managers, exchanges, regulators and technology firms to gain a clearer sense of progress in AI’s capital markets adoption. The results reflect an interesting moment for the technology. It is clear that AI is well defined and firmly on the radar, in particular as it relates to specific core functions. Outside of these, however, many admit they are still searching for the right application of AI: they like the square peg but, for the moment, have not managed to fit it into the round hole. This whitepaper takes a look inside this emerging, if qualified, optimism.