As use of artificial intelligence (AI) applications increases, what are the ethical implications for businesses when AI produces biased or discriminatory decisions or outputs? What does the regulation of AI in 2023 mean for businesses with a poor understanding of AI ethics, and how can you avoid ‘ethics-washing’?

Is ethics-washing the next AI challenge?

Alongside the questionable ethics of using AI to imitate the likeness of real people, for example the famous AI-generated image of the Pope and the associated furore, there are also issues around how developers and businesses should be held to account over their own ethics when creating and utilising AI. So much so that tech leaders are calling on developers to pause AI development whilst the ‘risk to humankind’ is addressed, a risk likely to include the capacity for discriminatory outputs.

The ethical problem with AI – what is ethics-washing?

AI has the potential to make grossly biased decisions based on flawed input data or programming. Businesses could be acquiring pseudo-ethical AI products to satisfy their environmental, social and governance (ESG) targets, only to discover that the claims made about the ethical qualities of the AI design are actually false. Equally, companies could be selling 'AI for good' initiatives whilst also selling surveillance technology to corrupt governments and questionable corporate customers. This is known as 'AI ethics-washing'.

AI discrimination - what is AI bias?

AI bias arises when flawed input data or programming produces systematically unfair outputs. Examples include facial recognition software that performs badly on people with darker skin tones, and voice recognition that fails to understand accents other than British or American. This often stems from a lack of diversity in the teams programming AI and in the input data, which can result in AI that is biased against women, different ethnic groups, and those from disadvantaged backgrounds. Recent research illustrates the point, calling out AI image generators such as DALL-E 2 for producing images of a white man in 97% of results for searches such as ‘CEO’ or ‘director’, and job-searching algorithms that ‘match’ administrative roles to female job-hunters over their male counterparts.

Stanford Fellow Dr Lance Eliot has been vocal on this subject, particularly in response to AI developers and manufacturers making unrealistic claims about the fairness and balance of the decision-making processes employed by their AI products. Ethics-washing aims to calm fears about the biases flowing from AI when it makes choices that could affect people's lives, for example when AI decisions deny credit or access to medical treatment. The sad reality is that AI has the potential to be extremely biased, and vulnerable individuals are often negatively impacted. Unless such biases are manually corrected, we run the risk of historical biases being replicated in the future.
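
To make that risk concrete, the sketch below shows one simple way a business might screen the outputs of an automated decision system for disparate impact: comparing favourable-outcome rates across groups against a reference group. The data, group labels and the 80% (‘four-fifths’) threshold are illustrative assumptions only, not a legal test or a method endorsed by any of the frameworks discussed in this article.

```python
# Minimal sketch of a disparate-impact check on automated decisions.
# The records, group labels and 80% threshold below are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """Return the share of favourable decisions per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        favourable[group] += int(approved)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratios(records, reference_group):
    """Compare each group's selection rate to the reference group's rate."""
    rates = selection_rates(records)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

# Hypothetical example: credit approvals logged by an automated system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

for group, ratio in disparate_impact_ratios(decisions, "group_a").items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```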

How AI developers can avoid ethics-washing and what users should know

AI is on the agenda for EU, UK and US legislators, and whilst there are currently no AI-specific regulatory consequences for unethical use, this is likely to change. In the meantime, developers and users should be mindful of the following influential (albeit legally unenforceable) sources:

UNESCO's Recommendation on the Ethics of Artificial Intelligence

At UNESCO’s 2021 General Conference, 193 Member States adopted the Recommendation on the Ethics of Artificial Intelligence, the very first global standards agreement on AI. It emphasises the importance of the respect, protection and promotion of human rights, benefits for the environment and ecosystems, diversity and inclusiveness, and justice. In light of this, transparency, safety, security, fairness and 'do no harm' all feature in the Recommendation.

Religious leaders

Another highly influential set of AI principles has been developed at a Vatican summit of Jewish, Islamic and Catholic faith leaders. Concerned with the supposed sentient capabilities of AI and, more practically, the effect of AI use on global citizens, the 'Rome Call for AI Ethics', originally signed in 2020 by Microsoft and IBM, focuses on six principles: transparency, inclusion, accountability, impartiality, reliability, and security and privacy.

White House Office of Science and Technology Policy

The US government has set out its own ethics principles in its proposed Blueprint for an AI Bill of Rights. This blueprint aims to develop safe and effective systems, incorporate equitable protections against discrimination, and protect against abusive data practices. It also aims to ensure that those who are impacted by AI use are informed appropriately, and that a human alternative to AI is offered if problems arise.

What is the risk of ethics-washing in AI?

Ethics-washing arises when AI does not abide by these or similar ethical principles, despite claims to the contrary. In such circumstances the claims made would be, if not fraudulent, then at the very least unethical. Whilst the measures set out above are helpful, there are currently no systems capable of testing AI against its ethical claims, either before or after the AI goes to market. As a result, claims are going unchecked.

Whilst data protection and discrimination legislation does not currently refer specifically to AI, there is nevertheless a real risk that AI developers, and complicit users, who fail to follow ethical principles will breach data protection and discrimination laws, and will suffer financial and reputational damage as a result.

Once the new laws come into force, regulatory enforcement is inevitable.

Avoiding AI ethics-washing: recommendations for businesses

  • Keep abreast of imminent UK and EU AI legislation expected to come into force in 2023.
  • Be familiar with current and any impending discrimination laws and review any automated processes that could infringe the rights of marginalised groups and individuals.
  • If your business develops this technology, ensure the input data is from a diverse group of individuals and a range of sources.
  • Review statements made about AI before purchasing it. Is the business (or its supplier) making inflated claims about fairness and balance in AI decision making?
  • Conduct a legal/ethical audit of AI use in your business and consider the extent to which it conforms to the proposed legislation and global principles referred to above. Document this audit (an illustrative sketch of one way to record it follows this list).
  • Develop a checklist of legal compliance for the acquisition and use of future AI products within your business.
  • Ensure that those affected by AI decisions are informed about their rights and are given the option to request a human review of any decisions taken by AI.
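
By way of illustration only, the sketch below shows one way such an audit record might be structured. The field names, example values and principles listed are assumptions made for the purposes of the example, not a prescribed or legally recognised format.

```python
# Illustrative sketch of documenting an AI compliance audit entry as a
# structured record. All field names and example values are hypothetical.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAuditEntry:
    system_name: str                  # the AI product or internal tool audited
    supplier: str                     # vendor or internal team responsible
    purpose: str                      # what decisions the system informs or makes
    audit_date: date
    principles_reviewed: list = field(default_factory=list)  # e.g. transparency, fairness
    human_review_available: bool = False  # can affected individuals request a human review?
    concerns: list = field(default_factory=list)

# Hypothetical example entry.
entry = AIAuditEntry(
    system_name="cv-screening-tool",
    supplier="ExampleVendor Ltd",
    purpose="Shortlisting job applications",
    audit_date=date(2023, 6, 1),
    principles_reviewed=["transparency", "fairness", "do no harm"],
    human_review_available=True,
    concerns=["training data diversity not documented by supplier"],
)
print(entry)
```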

For the latest UK and EU AI legislation, download our recent guide.