Introduction
- The Office of the Australian Information Commissioner (OAIC) welcomes the opportunity to comment on the Proposals paper for introducing mandatory guardrails for AI in high-risk settings (Proposals Paper) released by the Department of Industry, Science and Resources (DISR).
- The OAIC is an independent Commonwealth regulator, established to bring together three functions: privacy functions (protecting the privacy of individuals under the Privacy Act 1988 (Cth) (Privacy Act) and other legislation), freedom of information functions (access to information held by the Commonwealth Government in accordance with the Freedom of Information Act 1982 (Cth) (FOI Act)), and information management functions (as set out in the Australian Information Commissioner Act 2010 (Cth)).
- The Proposals Paper seeks feedback on proposed principles for defining high-risk AI, proposed mandatory guardrails to apply across the AI supply chain and throughout the AI lifecycle, and three regulatory options for implementing the guardrails.
- As the Proposals Paper acknowledges, AI has the potential to benefit the Australian economy and society, by improving efficiency and productivity across a wide range of sectors and enhancing the quality of goods and services for consumers. However, the data-driven nature of AI technologies, which rely on large data sets that often include personal information, can also create new privacy risks, amplify existing risks and lead to serious harms. The OAIC’s 2023 Australian Community Attitudes to Privacy Survey (ACAPS) identified significant community concern with the use of personal information in AI systems, with 43% of Australians considering AI using their personal information to be one of the biggest privacy risks they face today.[1]
- The OAIC welcomes the Government’s aim to strengthen and clarify the laws that apply to AI in order to address the risks and harms from AI, build public trust, and provide greater regulatory certainty. We broadly support the Proposals Paper’s approach to defining high-risk AI, and the content of the proposed mandatory guardrails. The guardrails have a number of synergies with Privacy Act protections, including in relation to transparency, data quality and accuracy, and information security. To the extent the guardrails apply to Australian Government agencies, they will also support greater government transparency and accountability in respect of AI and have positive impacts on access to government-held information and the operation of the FOI Act.
- This submission highlights the key role already played by the OAIC in the regulation of new and emerging technologies such as AI. It is important that in implementing the mandatory guardrails, the Government considers which option will ensure the best outcomes for individuals. Rather than introducing an AI-specific regulator, which has the potential to create regulatory duplication and confusion, the OAIC suggests that a framework approach will help to uplift existing regulatory frameworks and support coordination and consistency across individual regulators.
General position on Proposals Paper
- The OAIC broadly supports the Proposals Paper’s proposed approach to defining high-risk AI and the content of the mandatory guardrails. A human rights-based approach to defining high-risk AI will require organisations to consider adverse impacts on privacy alongside other human rights when determining whether a proposed use or application of AI is high-risk. This approach also recognises the well-publicised risks of bias and discrimination in AI system outputs.
- As the Paper acknowledges, the human rights-based approach to defining high-risk AI aligns with the approach adopted or proposed in other jurisdictions, including Canada and the European Union.[2] As discussed in our previous submission to the Department’s Safe and responsible AI in Australia discussion paper, the OAIC supports the need for robust and interoperable regulatory frameworks to support effective cross-border regulation of AI technologies, recognising that personal information flows across national borders.[3]
- As a general comment, we also support the Proposals Paper’s recognition of the regulatory remits that currently exist in respect of AI, including in relation to privacy, copyright, competition and consumer law, and the need for the mandatory guardrails to complement, rather than replace, these existing frameworks.[4]
- The OAIC’s previous submission to the Department’s Safe and responsible AI in Australia discussion paper pointed to the growing intersections between domestic frameworks relating to data and digital technologies, including privacy, competition and consumer law, and online safety and online content regulation. While there are synergies between these frameworks, there are also variances, given that each regulatory framework is designed to address different economic and societal issues.[5] An effective regulatory approach to AI will require institutional coordination between regulatory bodies across different areas, given the need for complementary expertise.
- To this end, the OAIC has entered into memorandums of understanding (MOUs) with other regulators and is also a member of the Digital Platform Regulators Forum (DP-REG), together with the Australian Competition and Consumer Commission (ACCC), Australian Communications and Media Authority (ACMA) and Office of the eSafety Commissioner. DP-REG has also made a joint submission in response to the Proposals Paper, which highlights the work already being progressed by DP-REG members in respect of AI technologies and how the existing regulatory frameworks and expertise could be applied to address the challenges of AI.
- The OAIC supports the proposed content of the mandatory guardrails, which will set clear expectations regarding testing, transparency, accountability and data governance for high-risk AI systems across the AI supply chain and throughout the AI lifecycle.
- The Proposals Paper acknowledges that the guardrails, which are largely aimed at preventing harms occurring from the development and deployment of AI systems, will need to operate alongside laws to hold organisations accountable and help people to exercise their rights when harm does occur.[6] This highlights the critical role that the existing regulatory frameworks will continue to play to protect individuals from harms arising from high-risk AI.
- We note that there is significant intersection between obligations under the proposed guardrails and existing obligations under the Privacy Act and Australian Privacy Principles (APPs). In particular, Guardrail 3, which provides for the protection of AI systems and implementation of data governance measures to manage data quality and provenance, substantially overlaps with Privacy Act obligations on entities to:
- Take reasonable steps to ensure the personal information they collect, use and disclose is accurate, up-to-date and complete (APP 10); and
- Take reasonable steps to protect the personal information they hold from misuse, interference and loss, as well as unauthorised access, modification or disclosure (APP 11.1).
- There are also areas of strong alignment across the other guardrails. For example, Guardrails 1 and 2, which require organisations to establish and implement accountability and risk management processes, intersect with transparency and governance obligations under APP 1, which requires entities to take reasonable steps to implement practices, procedures and systems to ensure they comply with the APPs and any binding registered APP code, and can deal with related inquiries and complaints. Guardrail 6, which requires organisations to inform end-users regarding AI-enabled decisions, AI interactions and AI-generated content, will overlap with notification obligations under APP 5.
- To the extent the mandatory guardrails will apply to Australian Government agencies deploying high-risk AI, they have the potential to improve government transparency and accountability in respect of AI and enhance access to government-held information, by requiring agencies to take steps such as publishing details of their AI accountability processes and keeping and maintaining records about a high-risk system over its lifecycle. This aligns with other initiatives currently underway to ensure that the Australian Government is an exemplar in best-practice approaches to AI governance and transparency, including the Digital Transformation Agency’s Policy for the responsible use of AI in government and the draft National framework for the assurance of artificial intelligence in government.[7]
- These intersections highlight that, due to the data-driven nature of AI technologies, the OAIC is in many ways the default regulator of AI, with responsibility for responding to many of the harms that flow from the use of AI systems. At the same time, the guardrails cut across a number of different regulatory areas, emphasising the need for regulatory collaboration and cooperation.
- The OAIC notes that the proposed automated decision-making transparency reforms to the Privacy Act, discussed further below, while important measures to enhance the transparency of decision-making involving the use of AI, will apply only in the context of decisions relating to an individual and will not apply more broadly to government decision-making that is informed by AI.[8] The mandatory guardrails play an important role in this regard by requiring government agencies to be transparent about the broader deployment of high-risk AI systems, including through the publication of an accountability process and strategy for regulatory compliance, informing end-users about interactions with AI and AI-generated content, and being transparent with other organisations across the AI supply chain.
- This broader application, extending beyond privacy to information access rights, reinforces the need for a human rights-preserving framework, manifested in guardrails that promote and preserve both of the fundamental human rights overseen by the OAIC.
- The FOI Act enshrines the right of access to information under Article 19 of the Universal Declaration of Human Rights. The objects of the FOI Act include: requiring agencies to publish information; providing a right of access to documents; and increasing scrutiny, discussion, comment and review of the Government’s activities.[9]
- AI presents unique risks to the transparency of government policy and decision making, and mitigation of these risks has been actively addressed in other jurisdictions. In some jurisdictions, a legislated mandate requires notice to the community that AI is in use, both proactively and reactively.[10] The proactive provision of a notice to the community, together with a general statement adequately describing a system’s operation, ensures that the community is aware of the use of AI systems. That foundational protection of the right to access information should be augmented by further detail provided reactively, delivering a more specific explanation of the operation of the system upon request. This more detailed explanation supports a further rights preservation strategy: the right of review. In the government setting, administrative review is integral to our democratic values and rights.
- Large language models derive data from multiple sources to provide an output. In circumstances where the output affects government policy, service provision and decision making, the community expects transparency, accountability and, in some circumstances, avenues for review.
- Explainability and reviewability will be compromised absent the right to know that these systems are in use and the manner in which they operate, including data provenance. Independent decision making within the government sector must also be preserved. Accordingly, safeguards are required to identify inputs and define the impact of outputs on an ultimate decision. These safeguards are fortified by a conscientious preservation of the right to access information.
- The FOI Act expressly recognises that the right to access information is preserved notwithstanding contractual arrangements between government and third-party providers.[11] In circumstances where government is reliant upon third-party providers to advance its digital agenda, preservation of extant rights is paramount. This is particularly evident given the increasing use of technology in strategic sectors such as environment, health, transport, and defence and security.
- In these high-risk sectors, which necessitate the engagement of third-party providers, preservation of information access rights is not secured directly under the FOI Act, unlike the operation of the Privacy Act. In this regard, the ambition pursued within the Proposals Paper, that ‘the Australian Government recognises the importance of leadership in the use of AI and acting as an exemplar in best-practice approaches to AI governance’, may be advanced through a more direct application of information access rights mandating proactive and reactive disclosure of information regarding the use of AI systems in government settings.
- Increasingly, the right to access information is compromised in digital government settings that lack mature information governance frameworks. Consistent conventions for information storage and retrieval, together with developed systems to assure data provenance, are all required to safeguard the right to access information. The development of an AI regulatory framework should be informed by these foundational requirements.
- Additionally, Australia’s digital sovereignty will be compromised unless extant rights and values are preserved and our existing regulatory ecosystem is optimised in the development of new regulatory approaches.
- Option 2, as presented in the Proposals Paper, provides an opportunity to harness existing regulatory controls and optimise their application by bringing them together within a new regulatory framework.[12]
- Option 2 also recognises the need for reform within existing regulatory environments and accommodates that development under an AI regulatory model that provides a consistent set of definitions of high-risk AI and guardrails.
Intersection with privacy law reform
- As the Proposals Paper acknowledges, the current proposals intersect with a broader suite of government actions and reform processes underway, including reforms to the Privacy Act to implement the proposals from the Attorney-General’s Department’s (AGD) Privacy Act Review.
- The Privacy and Other Legislation Amendment Bill 2024 was introduced into Parliament on 12 September 2024 and contains the first tranche of amendments responding to proposals from the Privacy Act Review.[13] Relevantly, the Bill proposes amendments to APP 1 to require entities to include information in their privacy policies about automated decisions that significantly affect the rights or interests of an individual.[14]
- The proposed amendments are aimed at providing individuals with greater transparency about how an entity is handling their personal information, and the ability to take further action in the event of a breach of their privacy.[15] This will complement work being led by AGD to develop a whole-of-government legal framework for the use of automated decision-making (ADM) systems, to ensure that government use of ADM (including ADM systems involving AI) complies with administrative law principles.
- The OAIC anticipates that the second tranche of privacy reforms will implement additional proposals from the Privacy Act Review that will operate to further mitigate and address the privacy risks associated with AI and enhance transparency in the use of AI.
- In particular, the proposal to establish a positive obligation on organisations to collect, use and disclose personal information fairly and reasonably will require entities to proactively consider whether their personal information handling activities are appropriate and set a baseline standard of information handling that is flexible and adaptable as circumstances and technology change.[16] A positive obligation for organisations to handle data fairly and reasonably would give individuals engaging with AI technologies greater confidence that they will be treated fairly, and that—like a safety standard—privacy protection is assured.
- Another key proposal to increase organisational accountability is that all APP entities would be required to complete a privacy impact assessment (PIA) prior to undertaking any ‘high privacy risk activity’.[17] This is likely to intersect with obligations under proposed Guardrail 2 for organisations to establish and implement a risk management process to identify and mitigate risks.
Implementation of mandatory guardrails
- The Proposals Paper canvasses three options for implementing the proposed mandatory guardrails. Of the options presented, the OAIC considers that the framework approach outlined as Option 2 is likely to be the most suitable, as it will strengthen existing regulatory frameworks while avoiding the risk of regulatory duplication and confusion.
- The domain-specific approach of Option 1, which aims to embed the relevant guardrails in existing frameworks, may not provide these frameworks with the uplift needed to effectively address the challenges posed by high-risk AI technologies. As recognised in the Proposals Paper, this approach is likely to lead to a number of regulatory gaps and inconsistencies and would be limited to the current law and enforcement powers of existing regulators.
- The introduction of an AI-specific Act and regulator, as proposed by Option 3, may lead to inefficiencies and risks creating duplication and a lack of regulatory coherence. The regulation of AI will require specialised knowledge of a broad range of existing regulatory systems – for example, Guardrail 3 canvasses obligations that intersect with privacy, copyright and critical infrastructure laws. It is not clear how a standalone AI regulator would be able to operate effectively across these diverse areas, and as the Proposals Paper acknowledges, there is significant risk of overlap. Establishing an AI-specific regulator alongside existing regulatory regimes is also likely to create confusion for the public about where to direct complaints.
- Additionally, given that many high-risk AI technologies are still emerging and the harms and risks are not fully realised, the gaps in existing frameworks are still not well understood. Existing regulators need time to develop their responses to high-risk AI and to understand and clarify the application of existing laws. To prevent a new regulatory model becoming outdated quickly, it may be more appropriate to consider the option of a new AI-specific regulator at a later point in time.
Support for framework approach (Option 2)
- The OAIC is of the view that a framework approach, as set out in Option 2, is the most appropriate model to support a coordinated approach to the regulation of high-risk AI and create the best outcomes for individuals. As the Proposals Paper acknowledges, a framework approach recognises the existing laws that can be used or amended to give effect to the proposed mandatory guardrails and leverages the familiarity that people and businesses have with Australia’s existing regulatory regimes. This approach enables current regulators to bring their particular expertise to the enforcement of the mandatory guardrails.
- Given the centrality of the OAIC’s role in whole-of-economy regulation of AI, we are likely to be best placed to address many of the harms arising from AI technologies. Noting the synergies between the proposed mandatory guardrails and the APPs, discussed earlier in this submission, it may be possible to incorporate many of the requirements in the guardrails either through AI-specific legislative amendments or through the APP codes mechanism under the Privacy Act.[18] APP codes can adapt and particularise the APPs where appropriate, providing greater clarity about obligations where that is warranted by an entity’s particular circumstances.
- The progression of a number of privacy law reform measures, including those discussed above, will further uplift the privacy regulatory framework and enhance the OAIC’s ability to respond effectively to high-risk AI technologies.
- The OAIC considers that the effective implementation of Option 2 may involve a focus on strengthening the underlying regulatory frameworks before introducing a mechanism for enforcement of the guardrails. This would help to ensure that a coherent, effective and efficient model is in place.
- The effective implementation of a framework approach is also likely to require a strong coordinating mechanism that can ensure regulatory coherence. This would include facilitating the coordination and appropriate allocation of individual complaints across regulators, as well as cooperation such as joint enforcement activities, where appropriate. Existing regulatory cooperation forums, including DP-REG and the UK’s Digital Regulation Cooperation Forum (DRCF), provide examples of the impact that such mechanisms can have in responding to the challenges of emerging technologies. In particular, the DRCF, with its model of a dedicated CEO and staff to perform the coordination role, has been able to establish an AI and Digital Hub allowing innovators to seek consolidated advice across the regulatory remits of DRCF member regulators.[19]
Conclusion
- Regardless of the implementation model chosen, strong coordination between regulators will be essential to ensure the proposed mandatory guardrails are effective. Existing regulators will need to be given the necessary tools and resources to progress this work. We welcome further opportunities to engage with DISR, and the opportunity to share our experience across both privacy and access to government information as this important work continues.