- March 3, 2021
- Reading time: 2 minutes
Artificial Intelligence requires Ethics and Trust
The integration of AI into industry and society, and its impact on human lives, call for ethical and legal frameworks that ensure effective governance, advancing AI's social opportunities while mitigating its risks. The development of such frameworks is hampered by an information gap between the creators of AI technology and the policymakers trying to regulate it. Responsible investors participate in this dialogue and develop an understanding of the underlying techniques, principles and fundamental impacts of AI-based systems.
Only by familiarizing themselves with AI and its potential benefits and risks can investors develop sensible strategies that keep AI development within ethical and societal boundaries while leveraging its tremendous potential.
What is it about
Understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time. This is in tension with the prevailing business models, which are built on speed and scale for quick profit. To advance this debate, the World Economic Forum regularly offers a platform to AI experts. One notable article identifies the information gap between the creators of AI technology and the policymakers trying to regulate it.
Why is it important
With increased investment in the development and deployment of AI, investors in technology companies need to identify the ethical considerations relevant to their products. Investors should insist that companies have sound risk-mitigation strategies and can demonstrate that their financial gains do not come at the expense of society. Companies that do so also reduce the risk of reputational damage associated with their use of AI.
The Globalance View
Compared with other corporate sustainability issues, AI is accelerating the need for technology companies to advance conversations about ethics and trust. When assessing their impacts, we look for evidence that technology companies demonstrate ethics literacy. In general, the technical teams behind AI developments are not sufficiently educated about the complexity of human social systems, how their products could negatively impact society, and how to embed ethics in their designs. Globalance also encourages investors, alongside governments and international organizations, to help establish high-level ethical principles for AI development and deployment.
We welcome the fact that some companies are starting to propose their own rules (e.g. Google AI Principles, Microsoft AI Principles). However, to ensure the effective governance of AI, there should be a consistent dialogue between businesses, investors and policymakers to agree on a common set of principles and concrete methodologies for translating them into practice.