In Luxembourg, as in the wider world of finance, the marvels of artificial intelligence are being celebrated, but there is a growing focus on the potential downsides. The risk of fraudulent behaviour and manipulation in financial markets looms large, and both local and international cybersecurity experts warn that today’s hackers may enhance their abilities with AI, all while many organisations continue to neglect basic cybersecurity measures.
Hollywood has brought us an entire film franchise – The Terminator – based on out-of-control artificial intelligence and intelligent cyborgs. Now Netflix has a film called Unknown: Killer Robots. Today’s wary view of AI appears heavily coloured by the technology’s sheer novelty.
But such worries are not confined to Hollywood. A recent survey by Natixis Investment Managers mentioned “a high potential for fraudulent behaviour” and manipulation in connection with AI.
“For financial instruments, it could be problematic,” said PwC Luxembourg advisory partner and cybersecurity expert Koen Maris. “The speed at which AI could start launching transactions and manipulate a stock exchange — that could be dangerous.”
Data manipulation
Maris sketched out how it could happen. The danger, he said, arises “if the dataset is wrong and hackers or the dark side has everything to win by manipulating the data in the dataset.”
“If tomorrow you can convince these investment AI tools or robots that Microsoft is a bad, bad option to buy and that there’s no long-term vision any more, everybody will start dumping it based on the AI information.”
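To make the poisoning scenario concrete, here is a minimal, purely hypothetical Python sketch. The scores, thresholds and attack volume are invented for illustration and are not drawn from Maris’s remarks; the point is that a naive sentiment-driven trading signal can flip once fabricated negative data floods the dataset it averages over.

```python
# Toy illustration of dataset poisoning; not a real trading system.
# All scores and thresholds are hypothetical.
from statistics import mean

def trading_signal(sentiment_scores, buy_threshold=0.2, sell_threshold=-0.2):
    """Map the average sentiment for a stock to a naive buy/hold/sell signal."""
    avg = mean(sentiment_scores)
    if avg >= buy_threshold:
        return "BUY"
    if avg <= sell_threshold:
        return "SELL"
    return "HOLD"

# Genuine sentiment scores gathered for a stock (range -1 to +1).
clean_scores = [0.6, 0.4, 0.5, 0.3]

# An attacker floods the data source with fabricated negative items.
poisoned_scores = clean_scores + [-0.9] * 20

print(trading_signal(clean_scores))     # BUY
print(trading_signal(poisoned_scores))  # SELL: the signal flipped on fake data
```

Note that the model itself is untouched in this toy example: corrupting the data it consumes is enough to change its output.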
Natixis noted, “Much like the concern that AI can be used to manipulate political sentiment and voter behaviour, strategists worry that investor behaviour and market sentiment could be manipulated by bad actors, and 100% of strategists say AI will increase the potential for fraud and scams.”
AI resilience
AI is on its way to being a key topic for investors. “As investors evaluate and select investment managers, it will become more important to test how resilient managers’ investment strategies are in markets that are increasingly shaped by AI, to ensure that their alpha opportunity remains repeatable to explain and add value over the long term,” said Martha Brindle, bfinance’s equity director.
But if there’s one area that sees AI as a real threat, it’s cybersecurity. Maris explained that AI could help as well as harm computer security.
A step behind
“In security, it could help us to detect and improve,” he said. “But don’t forget, that kind of AI is always a step behind the AI that attackers could use.”
“It’s a cat and mouse game that we play. And on the attacking side, it could be very dangerous, because AI could find exploits and not tell anybody and use those to infiltrate your network or your environment,” Maris explained.
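A standard defensive counterpart in this cat-and-mouse game is machine-learning anomaly detection. The sketch below is a hypothetical illustration, not anything described by Maris: the features and parameters are assumptions, and it uses scikit-learn’s IsolationForest to flag network sessions that do not resemble the traffic the model was trained on.

```python
# Minimal sketch of ML-assisted intrusion detection (hypothetical features).
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per session: [bytes transferred, duration in s, failed logins].
normal_traffic = rng.normal(loc=[500, 30, 0], scale=[100, 10, 0.5], size=(500, 3))

# Train on (mostly) benign traffic so the model learns its shape.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A suspicious session: huge transfer, long-lived, many failed logins.
suspicious = np.array([[50_000, 600, 12]])

print(detector.predict(suspicious))  # [-1] means flagged as an anomaly
```

Real deployments are far more involved, but the cat-and-mouse dynamic is visible even here: attackers who know the detector’s training data can try to shape their traffic to look normal.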
“AI has almost grown too big, too fast, and regulating its use may now be beyond what is possible,” said George Ralph, the global managing director and chief risk officer at technology firm RFA. He said such questions have been preoccupying the European Union agency for cybersecurity, Enisa, as well as his firm.
Flip side of the same coin
“For every positive use of AI and how it benefits cybersecurity, there is the flip side of the same coin,” said Ralph, “where someone somewhere is using the same technology for illegal activities.” Ralph, however, said he doubts there is any direct impact from AI on financial markets at this early stage.
PwC’s Maris pointed to the 2016 Cyber Grand Challenge run by the US Defense Advanced Research Projects Agency (Darpa), billed as the “world’s first automated network defense tournament” and featuring computers capable of “reasoning about flaws”.
“They were attacking and they were using attack vectors that nobody knew about,” said Maris. “They were very, very creative. They innovate.”
Data privacy concerns
Deloitte Luxembourg partner and cyber risk leader Stéphane Hurtaud outlined three currently prevalent areas of AI downside, starting with data privacy. Privacy advocates have raised concerns about the data fed into AIs. “We have observed that sometimes financial institutions do not pay attention to the type of data that they provide,” said Hurtaud, adding that this has led to data leaks.
This concern lies behind bans on using AI in some areas, Hurtaud said.
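One basic safeguard against the data-leak risk Hurtaud describes is to redact obvious personal identifiers before any text is sent to an external AI tool. The following is a minimal sketch under assumed patterns; a real data-loss-prevention policy would be far broader than three regular expressions.

```python
# Naive pre-submission redaction sketch; patterns are illustrative only.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"), "[IBAN]"),  # IBAN-like strings
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD]"),            # card-like numbers
]

def scrub(text: str) -> str:
    """Replace obvious personal identifiers before text leaves the institution."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Client jane.doe@example.com, IBAN LU280019400644750000, asked about fees."
print(scrub(prompt))
# Client [EMAIL], IBAN [IBAN], asked about fees.
```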
A second type of threat, Hurtaud explained, is using AI to automate cyber-attacks. “Attackers can use machine learning algorithms to process vast amounts of data to identify vulnerabilities, to bypass existing security measures.” AI can also help automate social engineering attacks, he explained.
Training malware
AI can help bad actors to evade detection systems. “Using machine learning algorithms, attackers can typically use the system to train malware to avoid detection by traditional security software.”
Hurtaud related his experience of talking to corporate boards that have expressed concerns about AI. His message: “For the time being, the priority should remain to get the basic capabilities and measures to protect your organisation,” he said. “Before running, you need to know how to walk.”
Related articles on Investment Officer Luxembourg:
- Lack of qualified staff leads to higher cybersecurity risk
- Artificial intelligence: Esma fears lack of transparency
- AI in asset management? Focus on end of curve
*The image used to illustrate this article was generated by artificial intelligence.