Why an AI code of ethics is essential

Like all revolutionary tools, AI is often a double-edged sword: while it offers opportunities and solutions in various fields that were unimaginable just a few years ago, it can also have economic and ethical repercussions, creating inequalities and biases. This is why it is so important to regulate this key tool of our future lives.

Artificial intelligence (AI) now permeates our lives and will become an increasingly essential everyday tool. According to an Ipsos survey conducted in 2023, 49% of respondents believe that products and services using AI have fundamentally changed their daily lives over the past 3-5 years, and the figure rises to 66% when respondents look ahead to the next 3-5 years. This confirms the growing importance AI will have in the near future.

Like all revolutionary tools, however, AI can be a double-edged sword: on the one hand, it offers opportunities and solutions that were unimaginable even just a few years ago across many sectors, from politics, business, and services to commerce, education, communication, public administration, justice, and health; on the other, it could also bring economic and ethical repercussions, giving rise to new inequalities and biases.

Favoring an ethical approach to AI to minimize risks

Giuseppe Corasaniti highlights this issue in his article “The Ethical Challenge of Artificial Intelligence”, published in the Bollettino Generali, which addresses the complexity of artificial intelligence and the need to define a digital code of ethics for algorithms. Such a process, however, involves many technical challenges:

“The main problems are linked to the transparency of algorithms, which are often difficult to reconstruct logically and to understand, both for users and regulators, and even for experienced programmers. This also makes it difficult to assess their real functioning, their real motivations, the real problems that may emerge long after they are first introduced, and the possible consequences, making it necessary to introduce tools for explanation, documentation, and verification.”

Moreover, all algorithms can influence important decisions that affect people’s rights and interests, but also their lives or even their very survival, such as access to credit, health, education, or justice. According to Corasaniti, “Any algorithm, just like any calculation, can reproduce or reinforce inequalities and discrimination that are already present in civil society, or that are created by the methods or data selections on which they are based. (...) An ethical approach to AI is crucial in the design, development, and use of data and algorithms, in order to maximise sustainable value creation and to minimise risks to individuals and society.”

The importance of AI governance

Achieving effective governance of artificial intelligence is therefore one of the most important challenges of our time, and requires mutual learning based on lessons and best practices from different jurisdictions around the world.

The creation of the Global AI Ethics and Governance Observatory, established by UNESCO following its Recommendation on the Ethics of Artificial Intelligence, which was adopted by 193 countries in 2021, is part of this effort. The Observatory aims to provide a comprehensive resource for policymakers, regulators, academics, the private sector, and civil society, in order to find solutions to the most pressing challenges posed by artificial intelligence.

In addition to presenting information on countries’ readiness to adopt artificial intelligence in an ethical and responsible manner, the Observatory also hosts the Artificial Intelligence Ethics and Governance Lab, which gathers contributions, research, toolkits, and good practices on a range of issues related to AI ethics, governance, responsible innovation, standards, institutional capacity, generative AI, and neurotechnologies.

The risk of bias and discrimination

AI ethics is essential because artificial intelligence is designed to augment or replace human intelligence. When technology is designed to replicate human intelligence, however, the same problems that can cloud human judgement can also creep into the technology itself.

AI systems built on biased or inaccurate data can have harmful consequences, especially for underrepresented or marginalised groups and individuals. Furthermore, if artificial intelligence algorithms and machine learning models are developed too quickly, it can become very difficult for engineers and product managers to correct the biases the models have learned.
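To make the idea of learned bias concrete, here is a minimal illustrative sketch (not from the article) of one common fairness check, the demographic parity gap: the difference in how often a model selects candidates from different groups. All data, group labels, and function names below are hypothetical, invented for illustration.

```python
# Illustrative sketch: measuring the "demographic parity gap" of a
# hypothetical screening tool. A gap of 0 means all groups are
# selected at the same rate; larger values indicate disparity.

def selection_rate(predictions, groups, group):
    """Share of positive (1) predictions among members of one group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates across all groups."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical output of a screening tool: 1 = shortlisted, 0 = rejected.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is shortlisted 75% of the time, group "b" only 25%.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# prints "Demographic parity gap: 0.50"
```

Demographic parity is only one of several competing fairness metrics; which one is appropriate depends on the application, which is exactly why an explicit ethical framework is needed during development rather than after deployment.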

Some examples: Lensa AI, ChatGPT, Amazon

In order to mitigate future risks, it is more effective to incorporate a code of ethics during the development process. In December 2022, for example, the Lensa AI app used artificial intelligence to generate cartoon-like profile pictures from ordinary photos of people.

From an ethical standpoint, some people criticised the app for not giving credit or enough compensation to the artists who created the original digital art on which the AI was trained. According to the Washington Post, Lensa was trained on billions of photos from the internet without consent.

Another example is the ChatGPT AI model, which allows users to interact with it by asking questions. ChatGPT was trained on vast amounts of data from the internet and can respond with a poem, Python code, or a suggestion. One ethical dilemma is that people use ChatGPT to win programming contests or to write essays that are not their own. The concerns raised are similar to those seen with Lensa, but with text instead of images.

Another instance is that of Amazon, which in 2018 was criticised for an AI-based recruitment tool that downgraded resumes containing the word ‘women’s’, as in ‘Women’s International Business Society’. In essence, the AI tool discriminated against women and created legal risks for the tech giant.

AI is trained on data extracted from internet searches, photos, comments on social media, online purchases, and more. While this helps to personalise the customer experience, questions remain about the apparent lack of genuine consent for these companies to access our personal data. Some AI models are also very large and require significant amounts of energy to train. While research is underway to develop methods for energy-efficient AI, more could be done to incorporate environmental concerns into AI policy.

Creating more ethical AI

In this context, regulatory frameworks can help ensure that technologies benefit, rather than harm, society. Governments all over the world are beginning to enforce ethical AI guidelines, including how companies should handle legal issues in the event of bias or other damages.

But there’s more: making these resources more accessible and straightforward for users can turn them into valuable allies against misinformation, distortions, and biases. It may seem counterintuitive to use technology to detect unethical behaviour in other forms of technology, but AI tools can be used to determine whether a video, audio clip, or text is fake, and can detect unethical data sources and distortions, provided that training and public awareness are not neglected.