Artificial intelligence opens a world of seemingly endless possibilities, boosting efficiency and quality across business operations. But how can finance professionals identify effective and ethical ways to implement relevant AI solutions?

ICAEW’s Trust and Ethics team have been engaging with experts from multiple domains to explore the promises and pitfalls of artificial intelligence (AI) adoption and shape guidance for the accountancy profession on the ethical use of AI.

At round-table events held over the summer, delegates shared their real-world experiences of developing and using traditional AI and generative AI tools. While participants described a variety of individual use cases, it was agreed that the sector was still working to determine what constituted a good use case. The general-purpose nature of generative AI means that use cases can often be unique, not just to individual firms but even to specific individuals within those organisations.

The first step is to ensure a clear and defined link to your organisational strategy. While it’s easy to get caught up in the idea that ‘everyone is using AI’, succumbing to this fear of missing out could lead to hasty adoption of AI tools that may not be the most appropriate for your needs. Adopting under this pressure, without proper consideration, can result in unintended consequences arising from misunderstood risks, potentially undermining the promised benefits.

To manage risks associated with AI adoption, firms and individuals should consider the following questions at the outset:

  • What specific problem is the use of AI intended to solve?
  • Is AI truly the best solution, or would traditional analytics or process automation be more appropriate? 
  • What are the potential ethical implications of using AI for this purpose?
  • What data will be required, and do we have access to high-quality, unbiased datasets?
  • What is our plan for explaining the AI’s decision-making process to stakeholders if required?

Understanding patterns of use

Before diving into specific use cases, it’s beneficial to identify broader ‘use themes’ that could encompass multiple potential use cases. This approach helps to distinguish between:

  1. Traditional AI tools: generally managed by developers and focused on specific tasks.
  2. Generative AI: featuring user-friendly interfaces that allow finance team members to experiment more readily.

One firm participating in ICAEW’s round tables shared insights from its analysis of hundreds of use cases and identified a range of use themes. It argued that identifying patterns, rather than individual use cases, helps to better understand the ethical challenges posed by AI.

For traditional AI, the use themes identified included:

  • streamlining order processes or stock management; 
  • predictive modelling; 
  • data analysis and forecasting; 
  • anticipating future trends; and 
  • optimisation engines.

Meanwhile, for generative AI tools, broader applications were identified, including:

  • Summarisation: producing abbreviated forms of a given document and summarising reports, including financial reports (a minimal sketch of this pattern follows the list).
  • Transformation: converting specific data into different forms (email, image, text), with the ability to add translation and personalisation features.
  • Analysis and insights: generated from reports.
  • Q&A: chatbots and smart assistants.
  • Deep retrieval: searching for specific information within documents or document sets. For example, querying trade terms from customer contracts or claims in invoice matching.
  • Augmentation: expanding upon existing content, such as autocomplete or synthetic data creation.
  • Net new creation: generating original content based on given prompts. For example, generating scenarios to support contract negotiations.
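
To make the summarisation theme concrete, the sketch below passes a financial report excerpt to a hosted LLM and asks for a short summary. It is illustrative only: it assumes the OpenAI Python client, an API key in the environment and a model named gpt-4o-mini; any comparable LLM service follows the same pattern, and the prompt wording is an assumption rather than a recommended template.

  # Illustrative summarisation sketch; assumes the OpenAI Python client is
  # installed, OPENAI_API_KEY is set and "gpt-4o-mini" is an available model.
  from openai import OpenAI

  client = OpenAI()

  report_excerpt = "Revenue for Q3 rose 4% to £12.3m, driven by ..."  # placeholder text

  response = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[
          {"role": "system", "content": "You summarise financial reports for accountants."},
          {"role": "user", "content": f"Summarise the following report extract in three bullet points:\n\n{report_excerpt}"},
      ],
  )

  print(response.choices[0].message.content)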

Identifying relevant use themes lays the groundwork for thorough risk analysis and supports the design of appropriate risk-mitigating frameworks. This stage allows accountants to leverage their skills in analysis and professional judgement to support their organisation effectively.

As one round-table contributor put it: “This is where the accountancy profession needs to be able to stand back and say: ‘Wait a minute. Yes, we need to put in tools that help us as a company stay competitive, and potentially take the lead, but we’ve got to do so properly.’”

Examples of individual use cases

The round-table participants discussed how they were using AI tools in their day-to-day activities, sharing what was working well for them, but also flagging potential risks. You can read a comprehensive report detailing all the examples discussed, but here we highlight a selection.

Training

Delivering training on complex subjects to graduates and apprentices new to auditing or accounting can be challenging. One participant shared how their organisation was using AI to develop training materials on its audit methodology, particularly to make concepts from auditing standards more accessible.

“We use AI to come up with analogies to explain these quite technical concepts to people from a non-technical background. One example is explaining the differences between inherent risk and control risk within the audit risk model.”

Large language models (LLMs) excel at tailoring information and breaking down difficult concepts when prompted, making them valuable tools for learning. However, it’s crucial to recognise that while AI can enhance the learning process, it should complement, not replace, peer learning and human instruction.
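
By way of illustration, the sketch below shows the kind of prompt involved. The client and model are the same assumptions as in the earlier summarisation sketch, and the prompt wording is ours, not the participant’s; the point is that naming the audience and the two concepts to contrast is what makes the output useful.

  # Sketch of an analogy-generating prompt (wording illustrative, not the
  # firm's actual prompt); assumes the OpenAI client and OPENAI_API_KEY.
  from openai import OpenAI

  client = OpenAI()

  prompt = (
      "Explain the difference between inherent risk and control risk in the "
      "audit risk model to a new graduate with no audit background, using a "
      "single everyday analogy."
  )

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumed model name
      messages=[{"role": "user", "content": prompt}],
  )
  print(response.choices[0].message.content)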

Detection of fraud

Round-table participants from external audit firms and internal audit functions discussed the real-world application of AI in identifying potentially fraudulent activities.

The ability of AI tools to assess large quantities of data allows them to spot unusual patterns of activity. Some auditors are using AI tools for dynamic risk assessment of journals – identifying outliers in the context of the wider population. These tools can detect, for example, when a user posts an amount, or posts at a time, that differs from their usual activity and historic postings.
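
A minimal sketch of this kind of outlier detection is shown below, assuming journal entries are available as a table of user, amount and posting hour. It uses scikit-learn’s IsolationForest, one common unsupervised anomaly-detection technique; the file name, column names and contamination rate are all assumptions, and a real implementation would use richer features and tuned thresholds.

  # Sketch: flagging unusual journal postings with an isolation forest.
  # Assumes journals.csv (hypothetical) has columns: user_id, amount, hour_posted.
  import pandas as pd
  from sklearn.ensemble import IsolationForest

  journals = pd.read_csv("journals.csv")

  # Encode the posting user as a numeric feature alongside amount and time of day.
  features = journals[["amount", "hour_posted"]].copy()
  features["user_code"] = journals["user_id"].astype("category").cat.codes

  model = IsolationForest(contamination=0.01, random_state=0)  # flag ~1% as outliers
  journals["outlier"] = model.fit_predict(features) == -1  # -1 marks anomalies

  print(journals[journals["outlier"]])  # candidates for human review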

Generative AI has the potential to significantly improve this process. Unlike traditional AI, which requires training on large numbers of labelled data points, generative AI can learn from just a few examples provided in the prompt. This capability allows for more flexible and efficient anomaly detection. However, it’s crucial to consider the potential for hallucinations and the need for refinement when using generative AI in this context.

Similarly, both internal and external auditors are using AI to match expense claims against company policies and invoices against delivery notes, flagging anomalies for human review. These tools help identify duplicate invoices, dubious charges on invoices, fraud and misuse of expenses.
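
One simple way to implement this kind of matching is a join followed by rule-based flags, as sketched below. It assumes invoices and delivery notes share a purchase-order number; the file names, column names and 2% tolerance are illustrative.

  # Sketch: matching invoices to delivery notes and flagging anomalies for review.
  # Assumes both CSVs (hypothetical) share a po_number column.
  import pandas as pd

  invoices = pd.read_csv("invoices.csv")      # po_number, invoice_id, amount
  deliveries = pd.read_csv("deliveries.csv")  # po_number, delivered_value

  matched = invoices.merge(deliveries, on="po_number", how="left")

  # Flag duplicates, invoices above the delivered value, and invoices with no delivery.
  matched["duplicate"] = matched.duplicated("invoice_id", keep=False)
  matched["over_delivery"] = matched["amount"] > matched["delivered_value"] * 1.02
  matched["no_delivery"] = matched["delivered_value"].isna()

  for_review = matched[matched[["duplicate", "over_delivery", "no_delivery"]].any(axis=1)]
  print(for_review)  # anomalies go to a human reviewer, not straight to a conclusion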

Contracts and leases 

Generative AI tools are increasingly being applied to process complex unstructured data, pulling out relevant information and allowing users to interact with outputs in a more intuitive, ‘human-like’ manner.

Users can query specific legal documents and overlay the AI’s analysis with the financial statements and relevant accounting standards. This capability enables accountants to efficiently assess proper accounting treatments, such as determining whether a lease is correctly accounted for. The approach significantly reduces time spent on manual review, as practitioners no longer need to scrutinise every individual lease in detail.

AI tools are also being used to generate and review terms in various types of contracts (including procurement, sales and partner contracts). LLMs combined with optical character recognition (OCR) can help individuals to understand what’s in a contract. In some cases, the AI tool can even identify elements requiring accounting treatment that fall outside standard contract terms.
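
The sketch below illustrates the underlying deep-retrieval pattern: contract text that has already been through OCR is passed to an LLM with instructions to return key lease terms as JSON, ready to be compared against the financial statements. The client, model, file name and field names are all assumptions, and the structured output would still need human review.

  # Sketch: pulling structured lease terms out of OCR'd contract text with an LLM.
  # Model, file name and JSON fields are assumptions; outputs need human review.
  import json
  from openai import OpenAI

  client = OpenAI()

  contract_text = open("lease_contract.txt").read()  # hypothetical OCR output

  response = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[{
          "role": "user",
          "content": (
              "From the lease below, return only JSON with keys: lessee, "
              "lease_term_months, annual_payment, renewal_option (true/false). "
              "Use null where a term is not stated.\n\n" + contract_text
          ),
      }],
  )

  terms = json.loads(response.choices[0].message.content)  # may need cleanup in practice
  print(terms)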


