Artificial intelligence

Ethical AI becomes a boardroom issue, but implementation is another matter

The ethical use of AI is an issue no tech company can avoid, but while many talk a good game, so far the delivery has been less impressive, writes Lara Williams.

Responsibility for the ethical implications of artificial intelligence (AI) has shifted from the technology function to the wider business. If ethical AI is not treated as a priority across every part of an organisation, the reputational risks could be significant. Yet while boardroom executives have recognised the importance of ethical AI to their businesses, implementation is presenting some real challenges, according to a study by IBM.


IBM’s AI Ethics in Action report, published in April 2022, revealed a significant shift in who manages AI ethics within organisations: 80% of those responsible now hold non-technical roles such as CEO, a sharp increase from 15% in 2018.


This trend is borne out by the Business Roundtable, a US industry body representing more than 230 CEOs across a variety of industries. In January 2022, it launched its Roadmap for Responsible Artificial Intelligence, designed as a guide for businesses and released alongside a set of policy recommendations calling on the Biden administration to establish AI governance, oversight and regulation while promoting US leadership.


“As these applications are increasingly developed and deployed at scale, societal trust in the use of AI is critical,” said Alfred F Kelly Jr, chairman and CEO of financial services company Visa and chair of the Business Roundtable Technology Committee, in a public statement. “Leaders in business and government must work together to earn and maintain trust in AI by demonstrating responsible AI deployment and oversight. Only then can we realise its full beneficial potential for society,” he added.

How is ethical AI being established?

Policymakers are increasingly weighing in on the establishment of ethical AI standards, with the US Defense Advanced Research Projects Agency (Darpa) and the National Institute of Standards and Technology both conducting research into explainable AI. In April 2021, the European Commission proposed a new legal framework on AI along with a coordinated plan for member states that aims to turn Europe into the global hub for ‘trustworthy’ AI.


However, the will of policymakers and businesses to address the issue has yet to translate into action. IBM’s survey found that while 79% of CEOs were prepared to implement ethical AI practices, fewer than one-quarter of organisations had taken steps to do so. GlobalData thematic research found that many organisations need help navigating ethical issues related to AI, such as privacy laws, unintentional bias and a lack of model transparency, but don’t know where to begin. GlobalData suggests working with a partner that can help companies consider the ethical implications of their AI deployments.


Changing regulations and privacy laws, concerns over unintentional bias in training data, lack of transparency in AI models and the dearth of experience with new use cases are all difficult challenges to address. However, companies that ignore responsible AI principles run massive risks, from losing the support and trust of their investors, customers, employees, candidates, governments and interest groups, to legal liability, according to Ray Eitel-Porter, global lead for responsible AI at consultancy Accenture. “Business leaders need to understand that responsible AI brings many organisational, operational and technical challenges,” he says.


Eitel-Porter warns that leaders must not underestimate the scale and complexity of the change required. Instead, he advises: “Leading from the top, to train both business and technical colleagues in the role they need to play and establish governance and controls to ensure responsible AI is considered by design in all AI systems.”


Building ethical AI by design means establishing a framework that includes bias detection, model traceability, analysis of the impact of regulatory changes, use case analysis and the definition of corporate values. Some approaches also define a set of fairness metrics and apply bias mitigation algorithms.
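To make the idea of fairness metrics concrete, the sketch below computes one widely used measure, the demographic parity difference, alongside the related disparate impact ratio, on synthetic model predictions. The data, group labels and approval rates here are invented for illustration; dedicated fairness toolkits offer far more thorough implementations.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest positive-prediction
    rates across groups; a value near 0 indicates parity."""
    rates = {g: y_pred[sensitive == g].mean() for g in np.unique(sensitive)}
    return max(rates.values()) - min(rates.values()), rates

def disparate_impact_ratio(y_pred, sensitive):
    """Ratio of the lowest to the highest group positive rate.
    The 0.8 'four-fifths' threshold used below is a rule of thumb
    borrowed from US employment-selection guidelines."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return min(rates) / max(rates)

# Synthetic example: binary model decisions (1 = approve) and an
# invented binary sensitive attribute -- illustrative data only.
rng = np.random.default_rng(seed=0)
sensitive = rng.choice(["A", "B"], size=1_000)
# Simulate a model that approves group A somewhat more often.
approve_prob = np.where(sensitive == "A", 0.55, 0.45)
y_pred = (rng.random(1_000) < approve_prob).astype(int)

gap, rates = demographic_parity_difference(y_pred, sensitive)
ratio = disparate_impact_ratio(y_pred, sensitive)
print(f"positive rate by group: {rates}")
print(f"demographic parity difference: {gap:.3f}")
print(f"disparate impact ratio: {ratio:.3f} (flag for review if < 0.8)")
```

In a framework of the kind described above, bias mitigation algorithms would then adjust the training data, the model or its outputs until metrics like these fall within an agreed threshold.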


Ethical AI by design becomes difficult when there is a lack of diversity on AI teams, a key concern as it can lead to biased decisions, discrimination and unfair treatment.


Despite organisations publicly endorsing the common principles of ethical AI, their implemented practices often fall short of these stated values. While 68% of surveyed organisations acknowledge that a diverse and inclusive workplace is important for mitigating AI bias, IBM’s survey found that AI teams are still substantially less diverse than their organisations’ workforces: 5.5 times less inclusive of women, four times less inclusive of LGBTQ+ individuals and 1.7 times less racially inclusive.


Big Tech – including leaders in the field of AI – has long struggled with diversity. Women make up between 29% (Microsoft) and 45% (Amazon) of these companies’ workforces and, perhaps more significantly, hold fewer than one in four technical roles.

Ignoring ethical AI holds reputational risk for business

Google provides one of the most high-profile examples of ethical AI issues adversely affecting business reputation. Its AI research team has undergone a spate of senior departures following criticism of the tech giant’s alleged biases. Google ethical AI researcher Dr Alex Hanna described the company as having a “whiteness problem” in her public resignation letter, posted on Medium on 2 February 2022.


Hanna’s departure followed the controversial exit of her former manager, Timnit Gebru, as technical co-lead of Google’s ethical AI team in December 2020. Gebru was preparing to go public about biases within natural language processing, in research that Google protested did not account for the new bias mitigation processes the company had been working on.


The controversy eventually prompted a public apology on Twitter from Sundar Pichai, CEO of Google parent company Alphabet, and led to nine members of Congress reportedly writing to Google to ask for clarification around Gebru’s termination of employment.


Many view Google’s case as a warning. If one of the most influential companies in the world could not avoid the reputational damage of perceived AI biases, others may not fare so well.


Such examples of ethical AI gone wrong may be driving the idea that trustworthy AI is imperative, but viewed more positively, ethical AI can also be a strategy for competitive advantage. In fact, among European respondents to IBM’s survey, 73% believe ethics is a source of competitive advantage, and more than 60% of these respondents view AI and AI ethics as important in helping their organisations outperform their peers in sustainability, social responsibility, diversity and inclusion.


A new generation of employees is demanding more ethical practices from employers. With the war for talent raging and a post-pandemic re-evaluation by many employees of their work lives, ethical AI could represent another weapon in the employer’s arsenal for attracting and retaining skilled workers.

This article first appeared on Investment Monitor.
