
The right approach to AI regulation

22 February 2024

Artificial intelligence (AI) technologies could significantly increase Australia's productivity and drive economic wellbeing. But to gain these benefits, we need the right approach to regulation.

Australia, like most of the developed world, has had poor productivity growth for the last twenty years. AI can boost this growth. By assisting workers, AI can raise labour productivity, leading to higher real wages. By using large, underutilised datasets, AI can help identify risks and improve outcomes in human services like health, education and housing. We are already seeing the benefits, from the AI that drives warehousing and logistics chains, to AI virtual tutors that provide individualised feedback and assistance to students, to diagnostic AI that helps medical professionals.

The transformative potential of artificial intelligence can seem daunting. Certainly, there are risks that require regulation. But calls for new AI-specific regulations are largely misguided.

Most uses of AI are already covered by existing rules and regulations, such as consumer, competition, privacy and anti-discrimination laws. These laws are not perfect, and many have well-recognised limitations that are already being examined by government. AI may raise new challenges for these laws, for example, by making it easier to mislead consumers or by helping businesses collude on prices.

The key, however, is that the laws exist. We generally don't need new laws for AI. Rather, we need to examine existing regulations and better explain how they apply to the uses of AI.

One of Australia's advantages is the strength and expertise of its regulators. They can play a key role, showing how AI is covered by existing rules, issuing guidelines, working with industry to evaluate risks, and running test cases where coverage is unclear. This process will help build trust in AI, as consumers see that they are protected by existing regulations. It also provides clarity for business. AI might be a new tool, but the broad rules of business behaviour are largely unchanged.

In some situations, existing regulations will need amendment to ensure an AI use is covered. Approval processes for vehicles, machinery, or medical equipment will increasingly need to account for AI. And in some cases new regulations will be needed. But this is the end point of the process, not the beginning.

The Productivity Commission has released a report to help guide governments’ approach to AI regulation.

The starting point for any regulation is the real-world use of AI technology. Governments should identify how a specific AI-based technology is being used, or is likely to be used, in the immediate future. This could be based on, for instance, stated intended uses or overseas experience.

Alternative starting points, based on regulating AI model design or banning open-source AI, are wrongheaded. Many AI models can be adapted to a wide range of uses – some risky, some not. Trying to regulate the AI engine will, at best, be ineffective. At worst, it will limit socially desirable development and use.

Many uses of AI technology will create little if any risk. And where potential harm exists, it must be weighed against the benefits of the technology. Governments need to determine whether the identified uses of the AI technology result in heightened risks of serious harm compared to the alternative. The risks of AI should be judged against real-world, human-based alternatives, not a fictitious risk-free world.

If existing regulations adequately address the identified risks, then there is no need for new regulation. Existing regulations may clearly cover the use of AI technology. Alternatively, existing regulations may need clarification, potentially via the courts, or amendment. This means that regulators must be trained and resourced to understand and respond to any risks that come from the use of AI-based technology.

New regulation is only useful if existing regulations, even when clarified, amended or extended, are inadequate. However, regulation is only one element in securing safe, ethical AI use in Australia. Risks of harm can be tempered by social norms, market pressures and coding architecture. New regulations should only be introduced if there is an overall benefit, taking into account the likelihood and consequences of any risk, who can control the risk, and the cost of the regulation. In particular, new regulations should be technology-neutral whenever possible. Rules written for specific technologies will likely be obsolete in a few years’ time.

Finally, Australia will often be an international 'regulation taker'. Other jurisdictions, such as the EU, are designing specific AI regulations. Developers, including those based in Australia, will need to meet these new rules if they want to access some of the world's biggest markets. This means that AI technologies released in Australia, even if developed locally, will often be designed to meet overseas standards.

In contrast, Australia is a market minnow. If Australia adopts idiosyncratic AI-specific rules, then developers may simply bypass our market and go elsewhere. Even local developers may find it uneconomic to maintain an Australian-specific version of an AI technology.

This means that, in those limited situations where AI-specific regulation is needed, the starting point should be overseas rules and standards that might meet Australia's requirements. Mutual recognition will be better than idiosyncratic regulatory independence. It also means that Australia should be active on the world stage. We need to encourage and participate in international standard setting for AI technologies.

The landscape for AI in Australia is still developing. Governments need to ensure they give us the best chance to maximise the productivity gains from AI, while providing safety-nets against adverse outcomes for individuals and business. But existing regulatory protections, not new regulation, are the starting point.

This article was written by Commissioner Stephen King.