Ahead of the launch of ICAEW’s AI and Ethics CPD module, a series of round tables has highlighted the need for effective mitigations to ensure AI can be used appropriately and effectively.
As use of artificial intelligence (AI) enters the accountancy mainstream, development of ethical principles and guidance for the profession on its use is critical. This was the subject of a series of ICAEW round tables held recently to inform the development of the forthcoming AI and Ethics CPD module, to be launched in November this year.
The events brought together senior academics in accounting, philosophy, ethics and AI; representatives from a range of differently sized firms; members in business and industry; the International Ethics Standards Board for Accountants; the Information Commissioner’s Office (ICO); and members of ICAEW’s Data Analytics Community and Tech Faculty.
A number of common themes emerged. User-friendly interfaces, such as those provided by large language models including ChatGPT, have to some extent democratised the application of AI. However, what potentially holds the profession back is not the technical limits of what these tools can do, but rather what users and early adopters feel comfortable doing with them.
The importance of trust and, in particular, the danger of losing the trust of consumers and stakeholders featured heavily in the discussions. Participants agreed that it is important to distinguish between traditional forms of AI and generative AI, and the specific risks pertaining to each. In relation to generative AI, the concern is that the speed and scale of reach afforded by the technology amplifies any and all potential risks.
Well-known dangers
The broad risks are both numerous and well-recognised:
- lack of user and consumer understanding of the technology and of its potential applications;
- bias in the data on which AI models have been trained and the algorithms they use;
- cultural dissonance and contextually inappropriate value weightings;
- data provenance;
- confidentiality;
- ‘hallucinations’ and inconsistency of output;
- obfuscation of authoritative truth sources and traceability; and
- the hijacking of the technology by bad actors to commit fraud and propagate disinformation.
Then there are the human concerns: automation bias; over-reliance on AI systems leading to a ‘dumbing down’ of professionals and their ability to add value; recruitment and HR issues; and the existential fear of AI ‘making decisions’ that affect our lives and take away accountants’ jobs.
Transparency also emerged as a key concern. Participants noted issues relating to the perceived impenetrability of the AI ‘black box’. Whether that is because developers and vendors are unwilling to explain how decisions are made and outputs created, or because they are simply unable to do so, as the advanced neural networks they use have evolved beyond supervised learning, remains unclear.
Yet it is important to recognise that being able to explain something is not the same as building trust.
However, participants also agreed that risks should not prevent accountants from harnessing the technology effectively or applying AI to novel business use cases – from optimising research, secretariat and internal efficiencies to the use of AI in counter-fraud and audit work.
Rather, the challenge for the accountancy profession is to put in place effective mitigations to ensure that AI can be used appropriately and effectively in the context of a regulated environment.
Participants acknowledged the need to build consensus on issues such as the responsibilities of both suppliers and purchasers of AI models and the importance of raising awareness and understanding of intellectual property and consent to use of data.
Quality control
In addition to training and nudging techniques, it is important to have appropriate processes in place that compensate for potential bias and ensure quality control over AI outputs, including ensuring that the human really does ‘remain in the loop’. Participants also highlighted the importance of appropriate governance frameworks to oversee the implementation of AI within an organisation, and of developing tailored business-use rules that reflect the organisation’s values.
A host of legal and ethical frameworks already exists, including the resources published by the government and the ICO, and the duties and expectations set out in the EU AI Act. However, round table attendees agreed that there would be value in ICAEW creating a bank of ethical-use scenarios, to illustrate how the fundamental principles in the Code of Ethics might apply to the use of AI and to set out how professional accountants should be expected to behave in specific situations.
Using AI is not simply about risk; it remains an issue of ethics. This assertion raises the question: is there an onus on the profession, as part of its public interest remit, to promote the ethical use of AI? This would dovetail with the obligation in Part 2 of the Code, which requires professional accountants to develop and promote an ethical culture in their organisations. Their emphasis on, and training in, professional scepticism and professional judgement make accountants invaluable in this regard.
Great potential
David Gomez, ICAEW’s Senior Lead, Ethics, says: “We are incredibly grateful to all the participants for sharing their expertise and business use applications with us. The potential for the use of AI by the profession is huge and we are keen to work with the profession to develop useful guidance to minimise any potential risks.
“We encourage members to send us actual and potential use cases and ethical dilemmas to help us build a repository of case studies that the profession will find helpful.”
Professor Christopher Cowton, who is developing the AI and Ethics CPD modules, says: “AI has been around for a long time and, at one level, it’s ‘just another tool’, to be adopted and embedded as appropriate. However, its recent rapid advancement in the workplace means that it can now be considered a disruptive technology that brings significant ethical and legal risks.
“It is important that professional accountants are aware of those risks, are able to ask the right questions and know how to develop solutions, applying the fundamental principles from the Code of Ethics with insight and drawing on other resources that are being developed to address the ethics of AI.”
This article was first published by ICAEW at the following URL: https://www.icaew.com/insights/viewpoints-on-the-news/2024/may-2024/ethical-ai-use-the-way-forward