This space invariably captivates me, as it does so many others who oversee and provide input into government functions.

I usually highlight the impact of AI on the community through the stories of individuals who may be our neighbours, friends or colleagues. Stories like:

  • the social housing tenant whose rent was calculated using an outsourced algorithm not covered by contractual or legal requirements, so the method of calculating the subsidy, and indeed her rent, remained inscrutable and impossible to challenge
  • the retired civil engineer who had a working knowledge of the algorithm used to calculate floodplains and who sought access to the outsourced algorithm via his local council, yet was denied access in a form that would have allowed him to compare calculations
  • the reporter seeking access to information about combustible cladding across multiple local councils, held under a federated information-sharing model from which calculations and decisions could be made – however, because the information was held in discrete components and no single government entity held the complete suite of data she requested, access could be denied.

I propose a recontextualisation of AI. Let’s reimagine AI as fundamentally a tool – one that is within our control. We are at the juncture of needing to make this tool work – safely and lawfully – and the starting point has to be asking how we apply principles to govern our application of such a dynamic tool, to ensure that we uphold legislated rights, sound administrative decision making – which is a key driver of trust in government – and, importantly, our values.

Today, I’ll explore 3 key principles that I believe can guide us towards a future where AI benefits society as a whole. They are:

  1. transparency
  2. regulatory cohesion
  3. regulatory effectiveness.

Like any tool, AI requires careful application to the right setting, with the right safeguards and close monitoring.

Trust and transparency – the first of 3 principles

The benefits of AI will only be fully realised if public trust is established and if we can promote well-founded trust in the application of the tool.

The OECD’s trust survey shows that 46% of Australians trust the federal government. Trust is lower among those experiencing disadvantage, such as financial distress, and among those with lower education levels.

Many of the risks with AI, if not managed appropriately, have the potential to significantly disadvantage individuals and communities, particularly those who are already disadvantaged, and undermine public trust and confidence in government.

This is a critical issue of our time, as trust gives government the social licence to innovate and transform services for mutual benefit and to address complex, long-term challenges. And low trust environments threaten effective democratic governance.

There is a way to go to build trust in AI. Our 2023 Australian Community Attitudes to Privacy Survey found:

  • Australians are cautious about the use of AI to make decisions that might affect them, with 96% saying there should be some conditions in place first.
  • Only 1 in 5 were comfortable with government agencies using AI to make decisions about them.

Evidence-based decision making by government is a strong driver of trust.

Where AI is involved in government decision making, the opacity of these systems can pose significant challenges for agencies seeking to provide a meaningful explanation about how decisions are made.

It is also difficult for individuals to understand or exercise their right to access information, or to seek review of a decision.

The use of AI may also give rise to risks that remain latent, such as a data breach that is only revealed when an individual seeks to assert these rights. Our data breach statistics show that government agencies appear slower to notify data breaches, and this may be because the harm is latent and agencies often learn of it only through notification from impacted individuals.

Many harms are invisible to the public and regulators alike – either because they involve multiple actors in a complex supply chain (sometimes across jurisdictions); are obfuscated by the technology; or because they involve collective rather than individual harms.

In relation to the use of personal information, how do we know if our personal information is being used in an AI system? If our personal information is used, how can we request access to it, or that it be corrected?

Trust begins with transparency.

Government agencies must help citizens to understand how AI systems operate, and especially how they influence decisions that affect them, because without clear explanations individuals may struggle to exercise their rights.

The Freedom of Information Act (FOI Act) enshrines the right of access to information and plays an important role in facilitating transparent government decision making.

The objects of the FOI Act include requiring agencies to publish information; providing a right of access to documents; and increasing scrutiny, discussion, comment and review of the government’s activities.

These objects are essential to building and maintaining the openness, responsiveness and integrity of government agencies, which the OECD has found to be key drivers of trust in public institutions. These objects are also essential to our system of responsive, representative democracy.

We must tell citizens that AI is in use and how it’s being used, and ensure that, when they exercise their right to obtain information under a freedom of information regime, the information is available to them.

Government agencies also have transparency and notice obligations under the Privacy Act, and individuals have the right to apply to government agencies to access their personal information and request corrections to their personal records.

There are various work programs across government looking at how guardrails around AI could be strengthened.

In offering our regulatory expertise to policy makers, the OAIC has been broadly supportive of transparency measures, such as requirements for agencies to publish information about their use of automated decision making and make business rules and algorithms available for independent expert scrutiny.

Transparency measures that will safeguard rights where AI is used include preserving and protecting the existing rights to access information, and augmenting those rights by ensuring that the community has notice of when and how this tool is being used.
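To make this concrete, notice of when and how AI is being used could take the form of a proactively published, machine-readable statement. The sketch below is purely illustrative: the AIUseNotice structure, its field names and the example values are my assumptions, not any existing or prescribed government format.

```python
# Illustrative sketch only: a minimal, machine-readable "AI in use" notice an
# agency might publish proactively. All field names are hypothetical.
from dataclasses import dataclass


@dataclass
class AIUseNotice:
    agency: str                      # which agency operates the system
    system_name: str                 # plain-language name of the AI system
    purpose: str                     # what the system is used for
    role_in_decisions: str           # e.g. "decision support" or "fully automated"
    personal_information_used: bool  # whether personal information is an input
    review_rights: str               # how an affected person can seek review
    policy_url: str                  # where the agency's published policy lives


# A hypothetical notice for the kind of system in the rent subsidy story above.
notice = AIUseNotice(
    agency="Example Housing Agency",
    system_name="Rent Subsidy Calculator",
    purpose="Calculates social housing rent subsidies",
    role_in_decisions="decision support with human review",
    personal_information_used=True,
    review_rights="Internal review on request; FOI and Privacy Act rights apply",
    policy_url="https://example.gov.au/ai-policy",
)
```

A published set of such notices would give citizens a starting point for the access and review rights discussed above, rather than leaving them to discover that AI was involved only after a decision is challenged.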

Regulatory cohesion – the 2nd principle

There needs to be a unified approach to addressing these issues, as fragmented policies risk hindering progress and exposing individuals and communities to harm.

Regulatory cohesion can be achieved by establishing clear, consistent and interoperable obligations and safeguards that apply across all government agencies.

Importantly, these obligations and safeguards must extend to the use of contracted service providers, given that many AI systems used by government are outsourced.

By way of an example, I recently made a submission to the Attorney-General’s Department’s consultation on the use of automated decision making in the delivery of government services – automated decision making that increasingly involves the use of AI.

The OAIC supports the development of a consistent legal framework for the use of AI in government, underpinned by strong and consistent transparency and accountability obligations and safeguards, and effective regulatory oversight.

As I said in my submission, it is important that the automated decision making reforms, and any further AI-related reforms, align with existing protections and obligations – whether those in the FOI Act, the Privacy Act, future Privacy Act reforms or the Australian Government’s proposed mandatory guardrails for high-risk AI.

The development of clear and practical legal frameworks that strengthen legislated rights and protections, while avoiding unnecessary regulatory overlap, will provide certainty and clarity for both citizens and government agencies.

Regulatory effectiveness – the 3rd principle

Robust, cohesive regulatory frameworks alone will not ensure that AI is an effective tool. Regulatory effectiveness will safeguard our application of that tool.

My priority areas to advance the OAIC’s regulatory effectiveness include:

  • A focus on data-driven regulatory action – enhancing our data capabilities to inform regulatory and corporate analysis, prioritisation, action and review. This includes building the OAIC’s technical expertise and considering how we might use AI. Our ability to regulate harms brought about by new technologies will be strengthened if we have first-hand experience of using those technologies ourselves. This will not only be helpful, but necessary, given that exponential changes in technology are likely to give rise to more and more complaints and cases for us to take on.
  • Supporting capability uplift across the Australian Public Service (APS) – working with senior leaders to enhance administrative decision making and information governance practices.
  • Preserving rights in the context of new and emerging technologies – guiding APS adoption and deployment of new technologies for data governance and decision making.
  • Providing international leadership – to inform the advancement of rights globally, with a focus on our region and new technologies like AI. We are uniquely placed in this region – we are neither Europe nor America; we offer a different approach and, importantly, that difference reflects our community values and legislated rights.

Outside the OAIC, there is much more that can be done to enhance regulatory effectiveness. For example:

  • How should regulators and agencies approach building a greater portfolio of technical methods for bringing obscure violations to light and investigating potential harms?
  • Stewardship has been recently introduced as an APS value, confirming the obligations of the service in its custodianship of government information. How does that give rise to the expansion of related obligations, such as the creation of and access to information together with sound governance throughout the lifecycle of government information where AI is used?
  • Once we have fit-for-purpose legal guardrails around AI, how does this translate into mature information governance frameworks to ensure the rules created are not just theoretical, but able to be implemented in practice?
  • How do we promote regulatory effectiveness in the application of this new and dynamic tool?

Conclusion

I couldn’t disentangle the actions necessary to deliver transparency, regulatory cohesion and regulatory effectiveness because synergy is needed to ensure the proper application of AI as a tool. So, here are 8 key features to better support transparency, regulatory cohesion and regulatory effectiveness.

  1. Preserve the right to access information by examining the barriers presented by AI. This includes augmenting the right to access through proactive release – that is, a statement that AI is in use and a general explanation of how it’s used together with published policies so that the community is informed and can challenge decisions.
  2. Ensure provenance of data sources – again this is a transparency measure – where did that data set originate? Is it reliable? Is it the source of an aberration?
  3. Promote administrative decision making capabilities in the APS with a particular focus on decision inputs and reasoning – again retain and advance transparency.
  4. Establish a central register of AI in use by government – there are some moves in Europe and the US in this regard, to better answer the questions of who is using AI, what is in use and how it is being applied.
  5. Introduce, by way of contract, a requirement for real-time notification of harm or unintended consequences in the use of AI systems – not limited to the actual contract with government but extending to the same systems in use elsewhere, a bit like a defective product or product warning notification.
  6. Mandate notification of harms to regulators – require government agencies to notify regulators of harms identified in their use of AI systems, a bit like the mandatory data breach notification scheme (a sketch of what such a notification might contain follows this list).
  7. Ensure a data-informed, collaborative regulatory approach by sharing this intelligence about the prevalence of AI, provenance of data sets and identified and potential harms.
  8. Advocacy – a joined-up regulatory approach will provide an effective advocate for Australian laws and values in the face of a global marketplace where Australian community values and laws are at risk of dilution.
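To make features 5 and 6 more tangible, a harm notification lodged with a regulator might carry fields like those sketched below. This is a minimal illustration under my own assumptions: the AIHarmNotification structure, its field names and the example values are hypothetical, not part of any existing scheme.

```python
# Illustrative sketch only: a minimal record an agency might lodge with a
# regulator under a mandatory AI harm notification scheme, by analogy with
# data breach notification. All names and fields are hypothetical.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIHarmNotification:
    agency: str               # the notifying agency
    system_name: str          # the AI system involved
    vendor: str               # outsourced provider, if any
    date_identified: date     # when the harm or unintended consequence was found
    description: str          # plain-language account of the harm
    affected_cohort: str      # who is affected, including collective harms
    data_provenance: str      # origin of the data sets involved (feature 2)
    remediation: str          # steps taken or planned
    deployed_elsewhere: bool  # whether the same system is in use beyond this contract (feature 5)


# A hypothetical notification for the kind of system in the floodplain story above.
notification = AIHarmNotification(
    agency="Example Council",
    system_name="Floodplain Model",
    vendor="Example Vendor Pty Ltd",
    date_identified=date(2024, 7, 1),
    description="Systematic miscalculation of floodplain boundaries",
    affected_cohort="Residents of affected council areas",
    data_provenance="Council survey data sets; origin and reliability under review",
    remediation="Recalculation and notification of affected residents",
    deployed_elsewhere=True,
)
```

Recording provenance and wider deployment in the same notification is what would allow regulators to share the intelligence contemplated in feature 7.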

Recognising and treating AI as a tool to be safely applied is, in my view, the first step in defining our approach.

The second step is to apply principles – transparency, regulatory cohesion and regulatory effectiveness – that will then enable us to take the third step: applying that tool for a purpose that serves the community and the government that serves that community, and that curtails community harm.

You here today are in the vanguard of the application of AI and you will influence the parameters for the application of this tool and safeguard the long-term interests of our community. I hope I have contributed to your thinking.

I commend Rita and Macquarie University for your efforts to create this platform and bring this community together to explore what is a pressing issue, and I look forward to exploring these concepts further shortly in the Q&A.