In the world of technology and innovation, few names command as much respect as Geoffrey Hinton, often dubbed the “Godfather of AI.” However, recent remarks from this AI luminary have sparked a heated debate on the role of artificial intelligence in our society, as became clear at Monday’s FondsEvent conference organised by Investment Officer.
Hinton’s statements, ominously warning that “AI can kill” and expressing concerns about AI manipulating people to do its bidding, have sent shockwaves through the tech community. These remarks raise fundamental questions about the ethical implications and potential risks of AI’s unchecked power.
‘Don’t f*ck this up’
One of the prevailing sentiments in this debate comes from the October edition of Wired magazine, which paints a vivid picture of an interconnected AI cosmos where new applications of AI will multiply exponentially. The plea to the “Dear AI overlords” not to “f*ck this up” underscores the growing apprehension about the unbridled growth of AI technology.
Max Welling, who worked alongside Hinton as a post-doc, offered a more tempered perspective. “It’s a complicated story, and I’m not so extremely concerned,” he said at the conference in Bussum, near Amsterdam, acknowledging the complexity of the issue.
Welling highlighted that we are currently at a crucial juncture, with the potential for AI to revolutionise fields like healthcare and the sciences while presenting significant investment opportunities.
Regulation needed
The debate inevitably turned to the question of regulation. How do we ensure that AI’s rapid advancement aligns with acceptable risk levels? Welling pointed out that history teaches us that the dangers we will face may not be the ones we currently imagine, emphasising the need for a rapid response team to tackle unforeseen threats to our society.
Institutions like Ellis Lab and the Confederation of Laboratories for AI Research in Europe (CLAIRE) are key players in Dutch and European efforts to position the region as a hub for AI innovation. The recipe for success in AI development includes factors like talent, computational power, data, ethical frameworks, public-private partnerships, and a robust venture capital ecosystem, Welling noted. These elements are essential in creating a centralised hub for AI research, akin to a CERN for AI.
Copyright learning
Copyright, and the use of publisher-owned data sets to train language models, is another vital aspect of this evolving landscape. The Grisham lawsuit, a legal challenge brought by the American author of popular legal thrillers, serves as an inspiration for other writers and highlights the need to understand the legal aspects of AI-generated content.
The influence of regulatory bodies and industry lobbyists looms large in this debate. It remains a challenge to strike a balance between fostering innovation and protecting against potential misuse.
At a recent presentation by Welling for Dutch politicians, prime minister Mark Rutte raised concerns about AI’s environmental impact, drawing parallels with the energy and environmental issues of the past. This also raises the question of how AI can be harnessed sustainably.
Within the financial sector, Gerben de Zwart, managing director of quantitative strategies at APG, underlined the need for maturity in digitalisation efforts. Machine learning, he said, should be tailored to specific teams, aligning with broader technological strategies.
Investment strategies
AI’s potential in investment strategies is undeniable. The quest for sustainability and strong returns is paramount, with AI playing a crucial role in achieving sustainable development goals, De Zwart said. The sheer volume of data in this AI-driven world presents its own challenges. The search for the next groundbreaking investment opportunity is akin to finding a needle in a haystack. APG has learned that AI can help sift through vast datasets, making it a valuable tool in this quest.
De Zwart presented APG’s work with AI to find investment opportunities. The investment institute now analyses more than 10,000 annual reports in multiple languages, including Chinese, and translates this data into a “global investment universe”. Ethical dilemmas are encountered along the way. What, for example, is the best investment aligned with SDG number two, preventing hunger? A vitamins producer like DSM-Firmenich, or a restaurant chain like McDonald’s?
Need for ethical practices
Cynthia Liem, an assistant professor in the Multimedia Computing Group at TU Delft in the Netherlands and a prominent figure in the field of artificial intelligence, emphasised the importance of responsible and trustworthy AI development.
One of her key insights is that developers should exercise caution and not pursue every possible AI development without considering the ethical and societal consequences. Liem’s perspective aligns with the growing awareness of the need for ethical AI practices to prevent potential harm and ensure AI benefits society responsibly.
Related articles on InvestmentOfficer.lu:
- Artificial intelligence: Esma fears lack of transparency
- AI in asset management? Focus on end of curve
- Luxembourg view of AI risks: manipulation & fraud