Nipping progress in the bud


AI has the potential to improve education, increase productivity and transform the way we live. But will regulation pull the handbrake before we even get going?

QUICK TAKE

  • Australia’s Minister for Industry and Science, Ed Husic, has flagged a regime targeted at high-risk uses of AI, where harms could be difficult to reverse, while leaving low-risk uses largely unimpeded.
  • New Zealand may ultimately go the same route; so far nothing much has been signalled.
  • A good default in dynamic markets, where big changes are underway, is not to regulate too quickly.

Unsurprisingly for a diverse entity with 24 official languages, nothing moves quickly in the European Union (EU). It’s reminiscent of Charles de Gaulle’s quip about France: “How can you govern a country which has 246 varieties of cheese?” It took from April 2021 to early February of this year to get the EU Artificial Intelligence Act squared away by all the member countries.

Now that it’s got there, though, the EU finds itself ahead of the pack. Twenty-eight countries (including Australia) turned up last year at Bletchley Park in the UK, the home of Britain’s WWII codebreakers, and agreed that while AI “has the potential to transform and enhance human wellbeing, peace and prosperity”, it “also poses significant risks… and we affirm the necessity and urgency of addressing them”. But only the EU, thus far, has rolled out comprehensive regulation.

One element is an outright ban on some AI-powered activities, including real-time biometric surveillance (such as security cameras linking into databases to identify people) and Chinese-style ‘social scoring’, where your data trail is turned into an indicator of compliance with the scorer’s purposes. Anyone concerned about civil liberties won’t have any issues with that. Another is heavy pre-deployment inspection of ‘high-risk’ AI models – ones relied on to run key bits of infrastructure, for example, or to do sensitive things like hire or assess employees, or set insurance premiums.

The final leg is a relatively hands-off approach to low- or minimal-risk use of AI.

So far, so reasonable, and other countries are likely to buy into something similarly risk-based. In Australia, for example, the Minister for Industry and Science, Ed Husic, said in January that he expected a regime “targeted towards the use of AI in high-risk settings, where harms could be difficult to reverse, while ensuring that the vast majority of low-risk AI use continues to flourish largely unimpeded.” Perhaps New Zealand will ultimately go the same route; so far nothing much has been signalled.

Too much, too soon

There’s clearly some effort being made internationally to tailor regulation to the biggest risks and not to block or chill the potential benefits. That’s something. But I can’t help feeling that this early global push to regulate (other than in relatively behind-the-pack New Zealand) is premature, and weights the risks too heavily relative to the payoffs.

For one thing, the default position should be to make anything that promises productivity gains as welcome as possible: that’s the only way countries get reliably better off in the longer term. AI offers multiple ways to make our lives smarter and better, from the relatively mundane things ChatGPT offers (“Write an email to my boss for an extension to my project deadline”) through to frontier stuff, like advanced drug design (if you need that email, you can sign up to ChatGPT for free).

I’m particularly impressed by the potential for AI to reverse the long decline in educational achievement in Australia and New Zealand. I asked ChatGPT a nerdy economics question (“Give me the pros and cons of DSGE [dynamic stochastic general equilibrium] models” – they’re the fancy economic models that are all the rage these days) and it gave me a balanced, accurate answer. I can see AI helping lots of people to learn more than they otherwise would. At the moment, though, educators seem more worried about threats to the status quo (“How will I tell if ChatGPT wrote their assignments?”) than excited about the possibilities of better ways to teach and assess.

Another good default starting point is not to regulate too quickly in dynamic markets where there are a lot of big changes underway. And we have a prime example to point to. The internet became available to the public in 1993: would we really have experienced the extraordinary, unpredictable, unguided burst of business creativity over the past 30 years that has revolutionised the ways we work, play and interact, if global regulators had set up shop less than two years later? That is roughly where we are with AI now: it’s only 18 months since ChatGPT went public. Wouldn’t it be better to let things play out a bit longer to see what happens?

Clear and present danger?

But what if something calamitous were to happen, such as an AI-designed virus killing us all off? How realistic are doomsday outcomes?

AI Impacts, a collective research project on AI’s likely effects and potential risks, runs a survey of these scenarios. While the risks it catalogues can’t be dismissed, some look overcooked – akin to the sorts of early fears raised about genetic engineering – and some look like risks we could live with.

We can look particularly askance at the alleged risks relating to economic powerlessness in the wake of AI-powered job automation. Yes, in advanced economies, some 60% of jobs are exposed to the potential impact of AI, according to the International Monetary Fund (IMF). But roughly half of those will be impacted in a good way: they’ll be using AI to do more and better stuff. For others, the IMF has stated, “In the most extreme cases, some of these jobs may disappear”, including the drudgier parts of accountancy and economics. Frankly, there’s no “may” about it: they’re goneburgers, just as bank teller jobs were decimated by online banking.

Since the Industrial Revolution, people have been coming up with this mass unemployment scare, and it’s never happened. Progress opens as many doors as it closes: in both Australia and New Zealand we have historically low unemployment rates, despite the bank tellers being laid off.

The right policy answers to increased job churn and the likely increase in AI-linked inequality – Microsoft shareholders, already well off, now own the most valuable company on the planet thanks to AI investments – are active labour market policies (helping people retrain and providing effective income safety nets) and a progressive tax system. The wrong answer is to stand in front of the oncoming truck.

In what may be a first for Acuity, I asked ChatGPT to write my final paragraph.

“Opponents of AI regulation argue that strict rules could stifle innovation, impeding AI’s potential benefits, like efficiency and economic growth. They emphasise the complexity of regulating AI across diverse industries and advocate for industry-led guidelines and ethical frameworks to address concerns, without hindering progress.”

This article was first published by Acuity Magazine at the following URL: https://www.acuitymag.com/technology/are-regulators-planning-to-lay-down-the-law-too-soon-for-ai