What are the Ethics of Artificial Intelligence?

Wednesday, June 12, 2019

Artificial intelligence (AI) is growing in use and importance. It is now widespread in the financial services, life sciences and healthcare, retail and media industries. But are there ethical issues associated with AI? And if so, who is responsible for managing them? These issues were recently discussed in an excellent Deloitte Insights paper, ‘Can AI be ethical? Why enterprises shouldn’t wait for AI regulation’.

Uses for AI include automated weapons, social media interactions, credit and hiring decisions, facial recognition, and automated learning. You might be surprised how much you are already interacting with AI without realising it, including how much your organisation is making use of AI.

The article strikes a good balance between the advantages of AI and the potential risks arising from the ethical judgements built into AI infrastructure. It identifies risks including:

  • Bias and discrimination
  • Lack of transparency
  • Erosion of privacy
  • Poor accountability
  • Workforce displacement and transitions

As noted in the article, “technological progress tends to outpace regulatory changes, and this is certainly true in the field of AI”. So what can organisations do to protect their stakeholders and reputations while fulfilling their ethical commitments? The article proposes the following measures, which are summarised here but are well worth reading in full:

  • Enlist the board, engage stakeholders
  • Leverage technology and process to avoid bias and other risks
  • Build trust through transparency
  • Help alleviate employee anxiety

AI is advancing across enterprises, and it behoves all leaders to ensure their ethical frameworks are keeping pace. The Avondale Business School (ABS) is available to work with you on this or any other aspect of your business. For more details contact Dr Warrick Long at [email protected].