The Rise Of Generative AI


Excitement, Concerns, And The Need For Balance


  • Artificial intelligence (AI), with its remarkable analytical capabilities, is able to detect trends and patterns that might elude the human eye, thus providing richer risk assessments and deeper insights.
  • While AI frees up time spent on tedious tasks, the accountant’s role remains pivotal – accountants are needed to verify AI-generated reports and exercise professional scepticism.
  • With AI lifting the burden of repetitive tasks and other labour-intensive aspects of accounting work, accountants can ascend the value chain and transition into areas that require more strategic thinking and nuanced decision-making.

“ChatGPT is yesterday’s toy.” Attention-grabbing punchlines such as this rippled through LinkedIn feeds, masterfully crafted by digital wordsmiths to ignite a sense of urgency and FOMO (fear of missing out) among readers grappling with the rapid technological development unfolding in front of them.

ChatGPT’s launch in November 2022 was met with unparalleled hype. Consider this: Netflix took 3.5 years; Facebook, 10 months; Spotify, five months; and Instagram, 1.5 months, to amass a million users. In contrast, ChatGPT accomplished the same feat within a week, with its user base soaring to 100 million in just two months.

Since then, generative artificial intelligence (AI)’s capabilities have become known to the general public. Now, it seems a new platform is unveiled every week. In the field of chatbots alone, OpenAI has released the more powerful GPT-4, while Google released Bard and Microsoft released Bing Chat. Text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney, alongside text-to-video systems like Make-A-Video, are ushering in a new era of creative potential.

At its core, generative AI represents a subfield of AI that is capable of creating human-like content. This is possible because generative AI models learn patterns and styles from large human-generated data sets – be they text, images or sound files – and ultimately, are able to produce new content that is strikingly similar to the original sources. This is also how GPT-3.5, the large language model (LLM) that powers ChatGPT, can predict the next word in a text string and generate coherent text effortlessly.
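To make the next-word idea concrete, here is a deliberately simplified sketch: a toy “bigram” model that counts which word most often follows another in a sample sentence, then uses those counts to predict the next word. This is illustrative only; real LLMs like GPT-3.5 use neural networks trained on vast corpora, not simple word counts, and the sample sentence and function names below are our own invention.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most frequently seen after `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus; a real LLM's training data spans billions of words.
corpus = "the auditor checks the report and the manager reads the report"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "report" follows "the" most often here
```

Scaled up by many orders of magnitude, and with counts replaced by learned neural-network weights, this same predict-the-next-word loop is what lets a chatbot generate fluent paragraphs one word at a time.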

As generative AI’s impact ripples through Singapore’s business landscape, subtly reshaping the accountancy sector and touching ISCA members in diverse ways, the experience can be exciting, yet daunting. Most crucially, it necessitates a delicate balance to effectively harness its potential without falling prey to its pitfalls.


Technology titans are well aware of the transformative potential of generative AI and are incorporating its capabilities into their productivity software. Microsoft has enhanced its productivity suite (Word, PowerPoint, and Excel), Outlook and Teams with machine intelligence. Alphabet, the parent company of Google, is doing the same with Google Docs, Sheets and Gmail. The competition for AI dominance is heating up as these companies vie for supremacy.

In the eyes of many technologists, generative AI models represent the next platform shift akin to the advent of the Internet, cloud computing, and smartphones. Failure to adapt to this transformative platform could render tech companies irrelevant.

So, how do these advancements reshape the business landscape? The Stanford AI Index Report 2023 highlights the most prevalent uses of AI by businesses in 2022, including optimising service operations (24%), developing new AI-based products (20%), customer segmentation (19%), customer service analytics (19%), and enhancing existing products with AI capabilities (19%).

In the world of accountancy, giants aren’t sleeping either. They are embracing AI with billion-dollar bear hugs. PricewaterhouseCoopers is pumping a whopping US$1 billion into an endeavour that could redefine its tax, audit and consulting cosmos. Meanwhile, EY is pouring the same amount into an assurance technology platform to harness AI, trusted data fabric and disruptive technologies to power a new generation of data-driven assurance services.

Then there’s Deloitte, which is launching a whole new generative AI practice, aiming to help clients supercharge areas such as fraud detection, supply chain optimisation, and smart factory applications with the power of generative AI and foundation models. KPMG is also all in, with a flurry of generative AI investments and alliances to empower its workforce and boost its client solutions.

Banks are not left behind in this high-stakes AI race. The finish line? Winning new clients, streamlining operations, unravelling complex risks, and reshaping how they interact with clients and staff alike. The 2023 Banking And Finance Outlook by Jones Lang LaSalle illuminates this trend with staggering clarity: by 2025, the world will have spent an additional US$31 billion on AI, with the banking and financial services industry leading this digital charge.

Take DBS Bank, for example. By integrating AI and machine learning into both its customer-facing businesses and its internal functions, such as legal, compliance, and human resources, the bank reaped S$150 million in additional revenue and a neat S$25 million in productivity gains in 2022 alone.

Looking ahead, the development of domain-specific LLMs holds great promise. Imagine finely tuned models trained in specific domains like finance, law, or accounting, delivering precise answers faster than any flip through the pages and empowering users with remarkable resources at their fingertips. Companies like Bloomberg are already pursuing this path with BloombergGPT, tailored for financial information. Early versions of Quran GPT and Bible GPT are also emerging.


Rapid progress in generative AI evokes both excitement and fear. Chief among the concerns is the spectre of job displacement, which is casting a shadow of uncertainty over the future of work. Recent forecasts by Goldman Sachs suggest that the latest wave of AI, represented by platforms like ChatGPT, could potentially automate nearly 300 million full-time jobs worldwide, affecting approximately 18% of global work, with advanced economies bearing the brunt of the impact.

Amid these concerns, let’s spotlight the accountancy profession. AI, in this sphere, serves as a magnifying glass and a compass, spotting risks in mountains of data and pointing auditors to areas needing deeper scrutiny, consequently raising audit quality. It streamlines tasks that were once laborious, such as data input, analysis, and reporting, enhancing productivity and reducing auditing costs.

Furthermore, AI, with its remarkable analytical capabilities, is able to detect trends and patterns that might elude human eyes, thus providing richer risk assessments and deeper insights. AI, in the realm of accountancy, isn’t just about doing things faster or cheaper; it’s also about doing them smarter.

While AI frees up time spent on tedious tasks, the accountant’s role remains pivotal – accountants are needed to verify AI-generated reports and exercise professional scepticism. With AI lifting the burden of repetitive tasks and other labour-intensive aspects of accounting work, accountants can ascend the value chain and transition into areas that require more strategic thinking and nuanced decision-making.

This progression, coupled with the increased productivity brought about by AI, can potentially lead to shorter working hours, better salaries and more intellectually stimulating tasks for accountants. This will enhance accountancy’s attractiveness as a career of choice, which will go a long way towards alleviating the manpower crunch currently faced by the profession.

In a separate context, with AI’s ability to handle extensive data, will it shift the audit goalpost from a “true and fair view” to a more precise “true and correct view” of financial statements? If this shift in paradigm really comes to fruition, it could profoundly affect businesses, significantly bolstering investor confidence in audited accounts and creating unprecedented deterrence against fraud.

History has often shown us that new technologies do not merely replace old jobs; they also create new ones. The Economist has suggested that fears of an AI-induced jobs apocalypse might be premature. Importantly, the integration of generative AI won’t occur overnight. There are also considerable concerns regarding the protection of confidential information, which has led companies like JPMorgan Chase to ban the use of ChatGPT at work.

A more significant issue is reliability. While AI can generate remarkably coherent text, it occasionally “hallucinates”, making authoritative yet false statements. Therefore, any AI-produced output would need to be rigorously verified by qualified professionals such as accountants, lawyers and journalists.

Ultimately, the fusion of human expertise and AI forms an extraordinary partnership, enabling us to delve deeper, uncover hidden insights, and unravel complex real-world problems. This synergy of human ingenuity and AI’s data-driven capabilities is where the true transformative potential lies.


The surge in generative AI applications ushers in a new age of potential benefits and ethical conundrums. The most conspicuous concern is AI’s ability to fuel misinformation, as evidenced by notable incidents such as the deepfake video depicting Ukrainian President Volodymyr Zelenskyy surrendering, and the fake photos of Pope Francis in a stylish, branded white puffer jacket.

Moreover, generative AI models, inherently influenced by their training data, can unwittingly perpetuate the biases present in that data, leading to discriminatory practices, especially in areas like hiring and lending. An emblematic case in point is Amazon’s AI recruitment tool, which was trained on a decade’s worth of resumes submitted mostly by men and consequently developed an inherent bias against women. Simultaneously, the vast array of training data sources, encompassing social media feeds, Internet searches, and copyrighted materials, raises concerns about privacy and copyright infringement related to data scraping for generative AI.

Undoubtedly, ethical considerations stand front and centre in the discussion surrounding generative AI. While the necessity for AI regulation is widely acknowledged, strategies differ. The Economist classifies governments’ approaches to AI regulation into three categories. On one end of the spectrum, Britain and America lean towards a “light-touch” approach, repurposing existing regulations to govern AI systems and focusing on leveraging AI’s potential benefits. In contrast, the European Union proposes a risk classification system for AI uses, mandating rigorous monitoring and disclosure for high-risk uses, like self-driving cars, compared to low-risk uses, such as music recommendations. At the far end of the spectrum, countries like China regulate AI similarly to how medicines are controlled, necessitating product registration and pre-release security reviews.

Amid these differing approaches, Singapore adopts a pragmatic, learning-based approach, opting not to regulate AI yet but to learn from the industry’s AI usage before defining regulatory measures. In alignment with this philosophy, Singapore launched the AI Verify Foundation to promote responsible AI use, enhance AI testing capabilities, and conduct pilot projects with the private sector. These initiatives are designed to provide insights into compliance requirements, guiding future AI governance policies without immediate, stringent regulation.

As with the advent of previous groundbreaking technologies, we are confronted with a delicate balancing act, a careful navigation of the risks and opportunities that lie before us. It is imperative that we approach this newcomer on the block with measured caution, avoiding undue alarmism while purposefully steering towards responsible engagement.

Mindful of this balancing act, ISCA has initiated the Artificial Intelligence for the Accountancy Industry (AI for AI) initiative. By earmarking an initial investment of S$2 million, ISCA is preparing to harness the power of AI to empower the accountancy profession.

The initiative includes spearheading strategic research by tapping into ISCA’s Research Network of universities and industry partners to better comprehend AI’s role and anticipate its future impact on accountancy. It also aims to cultivate a vibrant startup ecosystem for developing AI solutions for the profession and accelerate AI integration into daily practices.

This forward-thinking approach acknowledges the growing influence of AI and aims to prepare the profession for its transformative impact. ISCA, by embracing prudence and proactiveness, seeks to leverage AI as an ally, addressing industry challenges, enhancing services, and paving the way for a future where AI and the profession thrive in harmony.


AI’s potential is both disruptive and abundant as it promises to reshape the way we earn our livelihoods and structure our lives. But as we embrace this transformative technology, we must also strive to understand its inner workings, that is, the mechanisms that propel its power and the boundaries that confine it. Attending conferences, educating ourselves with industry publications, enrolling in online courses, and hands-on experimentation with generative AI tools are all avenues to equip ourselves with the skills and knowledge required to navigate the future adeptly.

In line with this, ISCA has since set up an AI for AI Telegram group chat to foster a vibrant community aimed at stimulating insightful discussions about AI and its impact on the accountancy industry. We warmly invite you to join the conversation to raise questions, exchange ideas and share experiences with fellow forward-looking individuals.

As the saying reminds us, “AI will not replace you. A person using AI will.” It falls upon us, professionals across industries, to equip ourselves with the knowledge and acumen necessary to navigate this uncharted terrain.

Guo Binglian is Research and Insights Manager, and Kew Hon Boon is Head of Strategic Planning and International Relations, ISCA.

This article was first published by ISCA at the following URL: