Governments around the world are considering how they can – and should – regulate the development and deployment of increasingly powerful and disruptive artificial intelligence (AI) technologies. Australia is no exception. On 1 June 2023, the Australian government announced the release of two papers intended to help ‘ensure the growth of artificial intelligence technologies (AI) in Australia is safe and responsible’. The first of these is the Rapid Response Report: Generative AI, which was prepared by Australia’s National Science and Technology Council at the request of the Minister for Industry and Science, Ed Husic, back in February. The Rapid Response Report assesses the potential risks and opportunities presented by generative AI, and is intended to provide a scientific basis for discussions about the way forward. The second paper is the Safe and Responsible AI in Australia Discussion Paper which, according to the Minister’s media release, ‘canvasses existing regulatory and governance responses in Australia and overseas, identifies potential gaps and proposes several options to strengthen the framework governing the safe and responsible use of AI.’
The discussion paper seeks feedback on how Australia can address the potential risks of AI. It provides an overview of existing domestic and international AI governance and regulation, and identifies potential gaps and additional mechanisms – including regulations, standards, tools, frameworks, principles and business practices – to support the development and adoption of AI. It focuses on ensuring that AI is used safely and responsibly, but does not consider every issue raised by AI, such as the implications of AI for the labour market and skills, national security, or military-specific applications.
Another key area that is expressly excluded from this consultation is intellectual property. That is, in my view, a serious shortcoming. The exclusion appears to presume that IP is somehow separable from the other issues covered by the discussion paper. This presumption is flawed, particularly in relation to business practices. In the contemporary world, IP lies at the heart of many business practices, and the laws and regulations that we make around IP can be the difference between a business practice that is viable and one that is untenable. And not every business practice that might be enabled by IP laws is necessarily desirable or of net benefit to society. If we fail to consider the interplay between IP laws, business practices, and other forms of regulation, then we risk making mistakes that could prove very difficult to undo in the future.
This article is prompted by, but is not primarily about, the Australian consultation process (although I will return to it at the end). It is about how IP rights, and other forms of regulation, could operate to concentrate ever greater power in the hands of the few big tech companies – such as Microsoft (through its partnership with OpenAI), Google and Amazon – that have emerged in recent years as the dominant players in AI and its enabling technologies. Based on recent developments, I believe that the stage is already being set for the implementation of exactly the kinds of laws and regulations that would most benefit these companies, under the guise of protecting innovators, content creators, and the general public against the various threats said to be presented by AI.
A perfect storm is brewing. Onerous regulation of the development, training and deployment of AI systems could combine with IP-based restraints on the use of training data, and on AI outputs, to bake in an advantage for the world’s richest and best-resourced companies. The storm is being fuelled by hype and fearmongering which, even though much of it may be well-intentioned, plays to the interests of big tech.