24 June 2019

Introduction

The Office of the Australian Information Commissioner (OAIC) welcomes the opportunity to make this submission on the Department of Industry, Innovation and Science (DIIS) and Data61’s discussion paper, Artificial Intelligence: Australia’s Ethics Framework (the discussion paper). The discussion paper provides a timely opportunity to consider the key legal and ethical principles which underpin the use and design of artificial intelligence (AI) technologies in Australia.

The OAIC acknowledges that the continued development and use of AI has the potential to create significant benefits and opportunities for Australian society. However, the use of AI tools, which rely heavily on personal information, may also amplify existing privacy risks. Privacy and data protection must be a central consideration in the design, development and use of AI technologies.

In the digital age, personal information flows across national borders. A global, interoperable system of regulatory oversight, one which increases organisational accountability and empowers individuals to make informed choices about how to manage their privacy, will therefore be key to mitigating the privacy risks accentuated by AI. Any ethical framework needs to accurately reflect privacy regulations in Australia while drawing on domestic and international privacy and related frameworks, policy and guidance to ensure that efforts to guide the use and development of AI tools are aligned.

This submission considers international developments in AI regulation and sets out the current privacy regulatory framework and general privacy principles. The submission concludes by suggesting that further consideration should be given to Australia’s privacy protection framework to ensure it is fit for purpose in the digital age.[1]

International data protection developments on AI

It is important that any ethical framework draws on international developments and guidance on AI, such as the ‘Recommendation of the Council on Artificial Intelligence’, a non-binding instrument developed by the Organisation for Economic Co-operation and Development (OECD) which Australia adhered to on 22 May 2019.[2] The Recommendation sets out general principles which promote the responsible stewardship of trustworthy AI, including that AI systems should be designed to respect privacy rights, and that the privacy risks of AI systems should be continuously assessed.

The importance of data protection regulation and governance in addressing potential risks of AI is also a current area of significant interest for Australian and international data protection regulators. The OAIC draws on the work of other data protection authorities and seeks to ensure global policy alignment where appropriate. DIIS and Data61 may wish to have regard to the following international regulatory developments to further inform the ethics framework:

  • The International Conference of Data Protection and Privacy Commissioners (ICDPPC) adopted a declaration on ethics and data protection in AI in October 2018, which endorsed principles including that AI and machine learning technologies should be designed, developed and used with respect for fundamental human rights, and that unlawful biases or discrimination that may result from the use of data in artificial intelligence should be reduced and mitigated[3]
  • The Privacy Commissioner for Personal Data in Hong Kong released a report titled ‘Ethical Accountability Framework for Hong Kong, China’[4] in October 2018 which considers the ethical processing of data, including in relation to AI tools, and seeks to foster a culture of ethical data governance
  • Singapore released the ‘Proposed Model Artificial Intelligence Governance Framework’[5] in January 2019, aimed at providing detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions
  • The Article 29 Working Party (whose role is now performed by the European Data Protection Board) adopted the ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’[6] on 3 October 2017 (last revised and adopted 6 February 2018), which provide guidance on the regulation of automated individual decision-making and profiling under the EU’s General Data Protection Regulation[7]
  • The High-Level Expert Group on Artificial Intelligence set up by the European Commission issued the ‘Ethics Guidelines for Trustworthy Artificial Intelligence’[8] on 9 April 2019, which propose privacy and data governance as one of seven requirements for trustworthy AI.[9]

Australia’s privacy framework and artificial intelligence

The role of existing data protection regulations

The OAIC regulates the Privacy Act 1988 (Cth) (the Privacy Act), which contains the 13 Australian Privacy Principles (APPs). The APPs are designed to be technology neutral, flexible and principles-based, so that they can adapt to changing and emerging technologies, including AI. The existing framework contemplates that additional regulation may be required and contains mechanisms to adapt the regulatory regime to changing circumstances.[10] For example, the Australian Information Commissioner and Privacy Commissioner has the power to approve and register enforceable ‘APP codes’, or to prepare guidelines which can provide greater particularity around the application of principles-based law.

AI amplifies existing challenges to individuals’ privacy. It is important that personal information used to train AI systems is accurate, is collected and handled in accordance with legal requirements, and is used in ways that align with community expectations. There is a need to ensure that organisations using a range of technologies are accountable for handling personal information appropriately. This may be achieved through increased transparency, building in privacy by design and putting in place an appropriate system of assurance. Such assurance could include third party audit or certification,[11] in addition to regulatory oversight, to increase consumer trust and confidence.[12] Further information on a third-party audit or certification scheme is outlined below.

Ensuring an accurate description of Australia’s privacy framework

The OAIC considers that any ethics framework for AI should be anchored in a comprehensive explanation of how Australia’s privacy regulations apply across the personal information lifecycle. The operation of the Privacy Act is not always accurately reflected in the discussion paper, which risks creating confusion for regulated entities seeking to implement an ethics framework. The OAIC provides additional information below in relation to these issues.

The APPs set out how APP entities must deal with ‘personal information’, which is a defined term under the Privacy Act.[13] Information which is not personal information is not subject to the APPs. To ensure consistency with the Privacy Act, we suggest that future reports be reframed to refer to ‘personal information’ rather than ‘personal data’.

Future reports could also draw out the distinction between ‘personal information’ and ‘sensitive information’. ‘Sensitive information’ is a subset of personal information which is afforded a higher level of protection under the legislation. Examples of sensitive information which may be relevant to AI include genetic information, biometric information and biometric templates.

Additionally, the discussion paper identifies the importance of ensuring the accuracy of datasets, and the potential for discriminatory outcomes if inappropriate or inaccurate personal information is relied on by AI tools. Given that APP entities are required to take reasonable steps to ensure that the personal information they hold is accurate, and to provide a mechanism for individuals to access and correct their personal information, reference could be made to the existing Privacy Act obligations, and to the practical challenges of complying with these obligations when designing or implementing AI tools.

The discussion paper also recognises the importance of the notifiable data breaches scheme under the Privacy Act in protecting individuals’ privacy. The OAIC suggests, however, that future reports be drafted to more closely reflect the circumstances in which an organisation or agency must notify affected individuals of a data breach. Notification is required when the following three criteria are satisfied (an illustrative sketch follows the list):

  • there is unauthorised access to or unauthorised disclosure of personal information, or a loss of personal information, that an entity holds
  • this is likely to result in serious harm to one or more individuals, and
  • the entity has not been able to prevent the likely risk of serious harm with remedial action.
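Read together, the three criteria form a cumulative test. As a purely illustrative aid, the following sketch expresses that structure in Python; all names are hypothetical, and the sketch is not OAIC guidance or a substitute for the assessment required under the Privacy Act:

    # Illustrative sketch only: a simplified reading of the three cumulative
    # criteria for a notifiable data breach. Names are hypothetical; this is
    # not a substitute for the Privacy Act or OAIC guidance.
    from dataclasses import dataclass

    @dataclass
    class BreachAssessment:
        unauthorised_access_disclosure_or_loss: bool  # criterion 1
        likely_serious_harm: bool                     # criterion 2
        harm_prevented_by_remedial_action: bool       # negates criterion 3

    def is_notifiable(a: BreachAssessment) -> bool:
        # All three criteria must be satisfied for the breach to be notifiable.
        return (a.unauthorised_access_disclosure_or_loss
                and a.likely_serious_harm
                and not a.harm_prevented_by_remedial_action)

    # Example: a lost laptop is remotely wiped before access is likely, so the
    # likely risk of serious harm is prevented and notification is not required.
    print(is_notifiable(BreachAssessment(True, True, True)))   # False
    print(is_notifiable(BreachAssessment(True, True, False)))  # True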

The OAIC has released guidance which may be of assistance in ensuring that the discussion paper accurately reflects Australia’s privacy framework:

  • The APP guidelines, which outline the mandatory requirements of the APPs, how the OAIC will interpret the APPs, and matters we may take into account when exercising functions and powers under the Privacy Act[14]
  • The data breach preparation and response guide, which assists Australian Government agencies and private sector organisations to prepare for and respond to data breaches in line with their obligations under the Privacy Act,[15] and
  • The guide to data analytics and the APPs, which addresses some of the privacy challenges that arise when undertaking data analytics activities.[16]

Further information on the role of consent is outlined below.

General privacy principles for further consideration

Securing personal information

The ethics framework appears to focus on ICT system security as the primary measure for protecting privacy. While ICT security is an important privacy protection and a key consideration for any AI system, regulated entities are required under APP 11 to take steps beyond technical security measures in order to protect and ensure the integrity of personal information throughout the information lifecycle. Regulated entities must also take reasonable steps to implement practices, procedures and systems that will ensure compliance with the APPs across the information lifecycle.

Other considerations relevant to ensuring privacy protection include the following (an illustrative sketch follows the list):

  • Collection of personal information – APP entities should collect only the minimum amount of personal information that is reasonably necessary for (or, for federal government agencies, directly related to) their functions or activities
  • Privacy by design – Privacy should be incorporated into practices, procedures, planning, staff training, priorities, project objectives and design processes[17]
  • Assessing the risks – Assessing the security risks to personal information, for example through a privacy impact assessment (PIA), is a key element of a ‘privacy by design’ approach
  • Security of personal information – This may include technical security (such as software security, encryption and network security), access controls (such as identity management and authentication) and physical security
  • Destruction and de-identification – Entities should consider when and how they will destroy or de-identify information no longer needed for a purpose permitted under the APPs.[18]
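To make these considerations concrete, the following hypothetical sketch (in Python) illustrates how collection minimisation and pseudonymisation might be expressed when preparing records to train an AI tool. The field names, the purpose set and the hashing approach are illustrative assumptions only, and pseudonymisation alone will not always render information de-identified for the purposes of the Privacy Act:

    # Illustrative sketch only: minimising collection and pseudonymising a
    # direct identifier before records are used to train a hypothetical AI
    # tool. Field names and the hashing approach are assumptions, not
    # requirements of the APPs.
    import hashlib

    # Only fields reasonably necessary for the stated purpose are retained.
    FIELDS_NEEDED_FOR_PURPOSE = {"postcode", "age_band", "claim_amount"}

    def minimise(record: dict) -> dict:
        # Collection limitation: keep only what the purpose requires.
        return {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_PURPOSE}

    def pseudonymise(raw_id: str, salt: str) -> str:
        # Replace a direct identifier with a salted one-way hash. Note that a
        # pseudonymised record may still be personal information if the
        # individual remains reasonably identifiable from the other fields.
        return hashlib.sha256((salt + raw_id).encode()).hexdigest()[:16]

    record = {
        "name": "Jane Citizen",   # direct identifier, not needed for training
        "customer_id": "C-1042",  # direct identifier, replaced below
        "postcode": "2600",
        "age_band": "35-44",
        "claim_amount": 1250.00,
    }

    training_row = minimise(record)
    training_row["pid"] = pseudonymise(record["customer_id"], salt="example-salt")
    print(training_row)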

For more information on appropriate privacy protections, the OAIC has released guidance on securing personal information.[19] This guide is particularly relevant for APP entities which are required under the Privacy Act to take reasonable steps to protect the personal information they hold from misuse, interference, loss, and from unauthorised access, modification or disclosure. However, it may be useful for other entities as a model for better personal information security practice. The OAIC’s guidelines and resources could be referred to in future reports.

The role of consent

Transparency, choice and control are central themes in the Privacy Act and the APPs, which support individuals to make informed decisions about their personal information while ensuring entities are accountable for how that information is handled. The discussion paper recognises consent as an important part of Australia’s privacy framework which promotes individual choice and control over personal information. Other issues relevant to consent that may warrant further consideration include:

  • A regulated entity may handle personal information in a manner that is inconsistent with the Privacy Act and community expectations (for example, by collecting unnecessary personal information) despite having received consent from the relevant individuals
  • A regulated entity may face challenges in seeking voluntary, informed, current and specific consent in relation to information handling practices using AI tools.

The OAIC has prepared guidance on the meaning of consent under the Privacy Act which you may find useful.[20]

Consent, however, is not required in all circumstances, and may not always be practical to obtain.[21] Other privacy principles therefore also warrant focus when using and developing AI systems.

Data protection regulations, both domestic and international,[22] typically include the following core data protection principles:

  • Collection limitation – An entity may only collect personal information that is reasonably necessary for the entity’s functions or activities. This principle is reflected in APP 3
  • Purpose specification and use limitation – Personal information should be collected for specified purposes, and those purposes should be made known to the individual. Personal information should generally not be used or disclosed for purposes other than the specified purpose of collection. This principle is reflected in APPs 5 and 6
  • Openness and accountability – An entity should be open and transparent about the way it deals with personal information and should be accountable for complying with data protection principles. This principle is reflected in APPs 1 and 5
  • Data quality – Personal information should be relevant, accurate, complete and up-to-date for the purpose for which it is to be used. This principle is reflected in APPs 3 and 10
  • Individual participation – Individuals should have rights to access personal information held about them, and to have inaccurate, out of date, incomplete, irrelevant or misleading personal information corrected. This principle is reflected in APPs 12 and 13.

DIIS and Data61 may benefit from exploring how these principles can address the inherent privacy risks that arise when using or designing AI technology. For example (an illustrative sketch follows these examples):

  • a lack of transparency about the algorithm used in an AI tool could be addressed through increased openness and accountability
  • poor quality data used to train an AI tool, or poor quality predictions of an AI tool, could be addressed through the data quality and individual participation principles.
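As a purely illustrative example of how such principles might be operationalised in an AI pipeline, the sketch below (in Python) gates a proposed use of a dataset against the purpose specified, and notified to individuals, at collection. The purpose register and all names are hypothetical assumptions; actual use and disclosure decisions turn on the APPs rather than a lookup table:

    # Illustrative sketch only: a purpose-limitation gate for a dataset that
    # feeds an AI tool. The purpose register and names are hypothetical; real
    # use and disclosure decisions turn on the APPs, not a lookup table.

    # Purpose recorded, and notified to individuals, at collection time.
    PURPOSE_AT_COLLECTION = {
        "claims_dataset_2019": "assessing insurance claims",
    }

    def use_permitted(dataset: str, proposed_purpose: str) -> bool:
        # Use limitation: permit use only for the purpose specified at
        # collection. (The APPs also permit certain secondary uses, which
        # are omitted here for brevity.)
        return PURPOSE_AT_COLLECTION.get(dataset) == proposed_purpose

    print(use_permitted("claims_dataset_2019", "assessing insurance claims"))  # True
    print(use_permitted("claims_dataset_2019", "training a marketing model"))  # False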

We suggest that DIIS and Data61 consider the factors, other than consent, that enable or prevent the collection, use and disclosure of personal information. This may facilitate a more calibrated discussion of the opportunities and challenges that AI tools may pose to Australia’s privacy framework. The OAIC’s APP guidelines may be of assistance in this regard.[23]

Privacy self-management and organisational accountability

The rights and responsibilities under the Privacy Act support individuals to manage their privacy and ensure that entities are accountable for how they handle and protect personal information. Privacy self-management allows individuals to exercise choice and control by understanding how their personal information is collected, handled, used and disclosed.

To enable meaningful individual privacy self-management, however, it is essential for businesses and government to operate transparently and accountably so that their information handling practices are accessible and understandable. This is particularly important for AI tools which may be difficult for individuals to understand and which may amplify existing privacy risks.

The Privacy Act includes mechanisms aimed at fostering privacy self-management. These include notice requirements for regulated entities when collecting personal information from an individual, as well as obligations on businesses and government agencies to manage personal information in an open and transparent way (for example by having a clearly expressed and up-to-date APP privacy policy).

However, privacy self-management must be supported by organisational accountability and transparency measures. Businesses and federal government agencies are required under APP 1.2 to take reasonable steps to implement practices, procedures and systems that will ensure compliance with the APPs. This supports entities taking a privacy by design approach and being accountable for their handling of personal information.

The Australian Government Agencies Privacy Code[24] particularises the requirements of APP 1.2 for federal government agencies. These include requiring agencies to have a privacy management plan, to appoint a designated privacy officer, to take steps to enhance internal privacy capability (including providing appropriate education and training for staff) and to undertake a PIA for all ‘high privacy risk’ projects.[25] The OAIC considers that federal government initiatives utilising AI technologies that rely on personal information are likely to be ‘high privacy risk’ projects. Organisations may also wish to consider undertaking a PIA when utilising AI technologies.

The discussion paper’s toolkit for ethical AI proposes certification as a tool to support privacy by design and organisational accountability. The OAIC supports the introduction of a third-party audit or certification scheme, which may assist APP entities to meet their obligations under the Privacy Act while providing consumers with evidence-based information about the privacy credentials of entities with which they may engage.[26] This would also shift some of the burden of privacy self-management from individuals to the organisations putting themselves forward for certification. Given the difficulties in explaining the operation of AI outlined in the discussion paper, certification by a trusted third-party auditor with appropriate technical qualifications and expertise may give individuals confidence in the information handling practices of entities using AI tools.

Further review of privacy protections in Australia

The OAIC recognises that international data protection regulations include additional principles beyond those rights and obligations contained in the Privacy Act. In particular, we note that the discussion paper refers to several rights contained in the EU’s General Data Protection Regulation (EU GDPR), including the right to erasure,[27] and rights related to profiling and automated decision-making.[28]

The OAIC appreciates the importance of ensuring that Australia’s privacy protection framework is fit for purpose in the digital age, and we suggest that further consideration be given to the suitability of adopting some EU GDPR rights in the Australian context where gaps are identified in relation to emerging and existing technologies, including AI.[29] Other EU GDPR provisions that may merit further consideration include compulsory data protection impact assessments for certain high-risk processing,[30] the right of an individual to be informed about automated decisions which affect them, and express requirements to implement data protection by design and by default.[31]

Footnotes

[1] See OAIC 2019, Submission to the ACCC Digital Platforms Inquiry preliminary report, page 9 where the OAIC has previously made this recommendation.

[2] Organisation for Economic Co-operation and Development 2019, Recommendation of the Council on Artificial Intelligence.

[3] International Conference of Data Protection and Privacy Commissioners, Declaration on ethics and data protection in artificial intelligence, 40th International Conference of Data Protection and Privacy Commissioners, 23 October 2018, Brussels.

[4] Office of the Privacy Commissioner for Personal Data 2018, Ethical Accountability Framework for Hong Kong, China.

[5] Personal Data Protection Commission Singapore 2019, A Proposed Model Artificial Intelligence Governance Framework.

[6] Article 29 Data Protection Working Party 2018, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679.

[7] These guidelines were adopted by the European Data Protection Board in its first plenary meeting on 25 May 2018.

[8] High-Level Expert Group on Artificial Intelligence 2019, Ethics Guidelines for Trustworthy Artificial Intelligence. This guideline was first released for comment on 18 December 2018 and was issued as final on 9 April 2019.

[9] For additional examples, see the Office of the Victorian Information Commissioner, Artificial intelligence and privacy: Issues paper, June 2018, and Office of the Information Commissioner (UK), Big data, artificial intelligence, machine learning and data protection, 2017.

[10] For example, if a need was identified to make AI algorithms more transparent, this could be achieved by enhancing existing transparency requirements under the Privacy Act (in particular APPs 1 and 5).

[11] See the OAIC 2019, Submission to the ACCC Digital Platforms Inquiry preliminary report, page 5 where the OAIC has previously recommended independent third party certification as a proactive method to increase organisational accountability.

[12] The OAIC considers that it is important that technologies that may impact on the lives of individuals, such as AI, have sufficient community support (‘social licence’). The OAIC suggests that a social licence for AI should be built on several elements including increasing transparency around AI tools, and ensuring that the community trusts AI tools and understands how their personal information is being used.

[13] The Privacy Act defines personal information as information or an opinion about an identified individual, or an individual who is reasonably identifiable. Identity information would be personal information: Privacy Act 1988 (Cth), s 6(1).

[14] OAIC, APP Guidelines.

[15] OAIC, Data breach preparation and response – A guide to managing data breaches in accordance with the Privacy Act 1988 (Cth).

[16] OAIC, Guide to Data Analytics and the Australian Privacy Principles.

[17] For more information on ‘privacy by design’, see the OAIC, Guide to securing personal information.

[18] See OAIC, De-Identification and the Privacy Act.

[19] OAIC, Guide to securing personal information.

[20] OAIC, APP Guidelines – Chapter B: Key Concepts.

[21] Consent is not required in order to collect personal information, unless it is ‘sensitive information’ such as health information. Consent may be provided in order to use or disclose personal information for a secondary purpose, but there are other grounds that can apply.

[22] Many data protection regulations, including the Privacy Act, draw on the principles set out in the Organisation for Economic Co-operation and Development Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980).

[23] OAIC, APP Guidelines.

[24] Privacy (Australian Government Agencies – Governance) APP Code 2017.

[25] For more information, see OAIC, Australian Government Agencies Privacy Code.

[26] See the OAIC 2019, Submission to the ACCC Digital Platforms Inquiry preliminary report, page 5 where the OAIC has previously recommended independent third party certification as a proactive method to increase organisational accountability.

[27] EU GDPR article 17.

[28] EU GDPR articles 13(2)(f), 14(2)(g), 15(1)(h), 22.

[29] See the OAIC 2019, Submission to the ACCC Digital Platforms Inquiry preliminary report, pages 9-10, where the OAIC has previously recommended a review of Australia’s privacy protection framework. We also note in this respect that the European Commission has sought feedback on the application and implementation of the EU GDPR, including the operation of specific articles under this legislation, and participated in a stock-taking exercise on 13 June 2019 with relevant stakeholders.

[30] EU GDPR article 35.

[31] EU GDPR article 25.