
Published 22 May 2024

Read the keynote address prepared for delivery by Privacy Commissioner Carly Kind for the Biometrics Institute Asia-Pacific Conference on Wednesday 22 May 2024.

Introduction

Good morning, thank you for having me.

I have known the Biometrics Institute for some time, and appreciate the invitation to speak.

I am relatively new in the role of Privacy Commissioner but have been working on issues at the intersection of privacy, digital identity and biometrics for some time in the United Kingdom and in the context of international organisations, such as the UN Refugee Agency.

I have seen first-hand how biometrics registration and identity systems can be used to great effect, for example, to assist in the registration of refugees who have had to flee their homes without paper identity documents.

The risks of biometrics

However, I have also observed the range of risks and harms that can arise in the context of the use of biometric systems, and heard first-hand from the public their concerns in this regard.

Prior to taking on the role of Privacy Commissioner, I was the director of the Ada Lovelace Institute, and we undertook a large-scale public deliberation on biometrics technologies.

Because, as we all know, there is something different about biometrics.

They are the only truly immutable form of identification, and thus their loss, misuse or abuse has the most serious consequences for individuals.

They can be deployed in identification in a passive way, which is a significant departure from existing forms of identification that involve an individual’s active participation – handing over an identity document, for example, or entering a username and password. This means they can be used in surveillance apparatuses with or without individuals’ knowledge or consent.

They also have other uses outside of identification, of course, and can be used to categorise an individual for the purpose of analysing behaviour, personality or intention.

These existing and emerging forms of biometrics push at the boundaries of public comfort, particularly against a backdrop of a digital ecosystem fraught with power asymmetries.

Our research shows that Australians are most comfortable with the collection and use of their biometric information in border security and law enforcement contexts.

There is only moderate support for its use by the private sector for safety and security reasons. The main areas of concern for Australians are the use of their biometric information for targeting and profiling for commercial purposes, and requiring it to access non-sensitive services like entertainment.

Around 38% are ‘very’ or ‘somewhat uncomfortable’ when it comes to using biometric analysis to estimate a person’s age, sex, gender or ethnicity.

A further 44% are uncomfortable with biometric analysis being used to predict someone’s behaviour, and 51% are uncomfortable with the use of biometric analysis to identify how a person is feeling or their emotional state.

The use and deployment of biometrics must occur within strong legislative provisions that strike a proper balance between the potential efficiencies biometric technology can deliver; the very real societal concerns about such technology, including its potential to contravene societal values such as freedom of movement and privacy; and the protection of individuals’ personal information.

Intersection with the Privacy Act

In Australia, we have an emerging picture of how biometric technologies can be used consistently with the Privacy Act.

My office has worked on a number of investigations and complaints relevant to biometric technology, and so has an emerging jurisprudence related to the legitimate and illegitimate uses and deployments of this technology.

In particular, the appropriate regulation of biometric identification systems and facial recognition technologies is an area of increasing focus for the OAIC. In this regard, we want to ensure that biometric identification systems and facial recognition technologies are only used for one-to-many matching for commercial purposes in very limited circumstances.

Facial recognition technologies and other automatic biometric identification technologies should only be used when they are reasonably necessary for, and the risks to privacy are proportionate to, the relevant functions or activities. In this context, necessity does not mean mere convenience or desirability. To be legitimately deployed, facial recognition must fall into the ‘need to have’, rather than the ‘nice to have’, bucket.

The OAIC was involved in both the Global Privacy Assembly’s 2020 resolution on facial recognition technology and 2022 resolution on principles and expectations for the appropriate use of personal information in facial recognition technology.

Both reiterate the importance of (among other factors):

  • organisations being able to demonstrate the reasonableness, necessity and proportionality of their use of facial recognition technology
  • transparency about the use of personal information
  • clear and effective accountability mechanisms, including risk mitigation policies.

We have issued a number of determinations in the area of facial recognition, which cumulatively set out some of the OAIC’s expectations on how facial recognition technology should be used. This includes that a privacy by design approach should be used to plan, develop and implement the technology, and that transparency is paramount to ensure individuals are informed about how their personal data is being handled.

The Australian Information Commissioner determined that convenience store group 7-Eleven interfered with customers’ privacy by collecting sensitive biometric information that was not reasonably necessary for its functions and without adequate notice or consent.

7-Eleven collected facial images while surveying customers about their in-store experience; the resulting faceprints were compared with others to exclude survey responses that may not be genuine and to provide a demographic profile of customers who completed the survey.

The Australian Information Commissioner Angelene Falk has previously described this case as an example of using a sledgehammer to crack a nut.

The OAIC found that Clearview AI breached Australians’ privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool. The determination followed a joint investigation by the OAIC and the UK’s Information Commissioner’s Office.

Clearview AI’s facial recognition tool includes a database of billions of images taken from social media platforms and other publicly available websites. The tool allows users to upload a photo of an individual’s face and find other facial images of that person collected from the internet. It then links to where the photos appeared for identification purposes.

The OAIC determination highlighted the lack of transparency around Clearview AI’s collection practices, the monetisation of individuals’ data for a purpose entirely outside reasonable expectations, and the risk of harm to people whose images are included in its database.

The Australian Information Commissioner determined that the Australian Federal Police (AFP) failed to comply with its privacy obligations in using the Clearview AI facial recognition tool. Commissioner Falk found the AFP failed to complete a privacy impact assessment before using the tool, in breach of the Australian Government Agencies Privacy Code, which requires a privacy impact assessment for all high privacy risk projects. The Commissioner also found the AFP failed to take reasonable steps to implement practices, procedures and systems in relation to its use of Clearview AI to ensure it complied with the code.

We also have an ongoing investigation into the personal information handling practices of Bunnings and Kmart, focusing on their use of facial recognition technology. I am hopeful of being able to update the public on the status of that investigation in the coming months.

Looking ahead – Privacy Act reforms

Australia’s privacy legislation, written more than three decades ago, has struggled to keep pace with advancements in technology, such as some of the technologies we are seeing today that involve one-to-many uses of biometric information.

Privacy law reform is urgently needed.

The federal Attorney-General shared earlier this month that at the request of the Prime Minister, he will bring forward legislation in August to overhaul the Privacy Act.

Privacy law reform will raise the standards for consent, bring a larger subset of the Australian economy into scope, and expand the powers of the OAIC to enforce privacy law.

A change that we see as the new keystone of the Australian privacy framework is a ‘fair and reasonable’ test, which would prevent organisations from using consent as a shield for bad privacy practices, and require them to consider a range of factors, including whether the impact on privacy is proportionate to the benefit gained.

Also of note for the biometrics sector are reforms around consent management and data deletion and retention.

Biometric templates and biometric information used for the purpose of automated biometric verification or biometric identification are categories of sensitive information. Given the greater privacy risk associated with sensitive information, organisations generally need to seek express consent from an individual before handling this kind of information.

The proposals to define consent as voluntary, informed, current, specific and unambiguous, and to expressly recognise the ability to withdraw consent in an easily accessible manner, will be highly relevant to the industry.

The government agreed in principle that organisations should be required to establish their own maximum and minimum retention periods for personal information they hold, and specify these retention periods in privacy policies.

Retention periods will need to take into account the type, sensitivity and purpose of the information being retained, as well as organisational needs and any obligations under other legal frameworks.

Data retention periods will be something for biometric providers to carefully consider, particularly given the role biometrics play in verification to access accounts.
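As a rough illustration of what operationalising purpose-based retention might look like, the sketch below checks records against a retention schedule. The field names are hypothetical; the verification and fraud/testing values echo the Digital ID safeguards discussed below, while the authentication period is invented purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical schedule. Real periods must reflect the type, sensitivity
# and purpose of the information, plus obligations under other legal frameworks.
RETENTION_PERIODS: dict[str, timedelta] = {
    "verification": timedelta(0),           # destroy once verification completes
    "authentication": timedelta(days=365),  # retained only with ongoing consent
    "fraud_testing": timedelta(days=14),    # short-lived investigative copies
}

@dataclass
class BiometricRecord:
    purpose: str
    collected_at: datetime

def is_expired(record: BiometricRecord) -> bool:
    """True once a record outlives the retention period for its purpose."""
    age = datetime.now(timezone.utc) - record.collected_at
    return age > RETENTION_PERIODS[record.purpose]

stale = BiometricRecord("fraud_testing",
                        datetime.now(timezone.utc) - timedelta(days=15))
print(is_expired(stale))  # True: past the 14-day window
```

A scheduled deletion job would then remove every record for which is_expired returns True, and the published privacy policy would state the same periods.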

Privacy at the forefront of the Digital ID scheme

While we wait for Privacy Act reforms, we will begin applying a higher legislated standard to biometric information immediately, following the passage of the Digital ID Bill last week.

The OAIC will be the privacy regulator for the Digital ID scheme and will use a range of regulatory powers to ensure that individuals’ privacy is protected when using the system.

The ‘additional privacy safeguards’ in the Digital ID legislation will operate in addition to the general protections under the Privacy Act (or equivalent state or territory privacy legislation). A contravention of these safeguards by any accredited entity will constitute an interference with the privacy of an individual for the purposes of the Privacy Act.

The OAIC’s regulatory role under the Digital ID legislation will include oversight of breaches of the additional privacy safeguards by all accredited entities, including state and territory agencies.

The additional privacy safeguards in the legislation include a number of provisions in relation to biometric information. These provisions put additional restrictions on the collection, use, disclosure, storage and destruction of biometric information by accredited entities.

The Act expressly prohibits the collection, use or disclosure of biometric information for one-to-many matching.

Biometric information can be collected, used and disclosed by accredited entities (where authorised by their accreditation conditions) for the purposes of verifying the identity of an individual and/or authenticating an individual to their Digital ID.

Biometric information collected for the purposes of verifying identity must be destroyed immediately after verification is complete.

Biometric information collected for authentication purposes can be retained where the individual has consented, so that the biometric can be used to authenticate them in future, for example, to log back into a Digital ID account using a face biometric match.

The rules may require that biometrics are stored in an encrypted manner or on the individual’s local device to prevent access to the original image while maintaining the authentication functionality.
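As a minimal sketch of encrypted-at-rest template storage, assuming the widely used Python `cryptography` package: the key handling and the byte-for-byte comparison below are deliberate simplifications, since a real deployment would keep the key in a hardware-backed store (ideally on the individual’s own device) and use a proper biometric matcher.

```python
from cryptography.fernet import Fernet

# Simplification: in practice the key would live in a hardware-backed
# keystore, ideally on the individual's own device, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_template(template: bytes) -> bytes:
    """Encrypt a biometric template so the raw biometric is never at rest."""
    return cipher.encrypt(template)

def authenticate(stored: bytes, probe: bytes) -> bool:
    """Decrypt only transiently for comparison; ciphertext is all that persists."""
    template = cipher.decrypt(stored)
    return template == probe  # stand-in for a real biometric matcher

encrypted = store_template(b"example-template")
print(authenticate(encrypted, b"example-template"))  # True
```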

Only limited secondary uses of biometrics are permitted, including:

  • for fraud investigation and testing (however, a biometric retained for fraud and testing activities must be deleted within 14 days, subject to the authentication purposes mentioned above)
  • disclosure to the individual involved
  • disclosure to law enforcement with a warrant issued by a magistrate, judge or tribunal
  • disclosure to law enforcement with consent for an investigation/prosecution or identity verification
  • retention, use and disclosure for the purposes of undertaking testing in relation to the information.

The accredited entity must take reasonable steps to continuously improve its biometric systems to ensure such systems do not selectively disadvantage or discriminate against any group.
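One plausible way to monitor for that kind of selective disadvantage is to track error rates per demographic group across evaluation runs. The sketch below computes a false non-match rate per group from labelled trials; the record structure is an assumption made for illustration, not a prescribed testing format.

```python
from collections import defaultdict

def false_non_match_rates(trials: list[dict]) -> dict[str, float]:
    """False non-match rate (genuine attempts wrongly rejected) per group.
    Each trial: {'group': str, 'genuine': bool, 'matched': bool}."""
    attempts: dict[str, int] = defaultdict(int)
    misses: dict[str, int] = defaultdict(int)
    for t in trials:
        if t["genuine"]:  # only genuine attempts can be falsely rejected
            attempts[t["group"]] += 1
            if not t["matched"]:
                misses[t["group"]] += 1
    return {g: misses[g] / attempts[g] for g in attempts}

trials = [
    {"group": "A", "genuine": True, "matched": True},
    {"group": "A", "genuine": True, "matched": False},
    {"group": "B", "genuine": True, "matched": True},
]
print(false_non_match_rates(trials))  # {'A': 0.5, 'B': 0.0}
```

A persistent gap between groups would then trigger the continuous improvement obligation, for example through retraining, threshold recalibration or a change of supplier.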

Conclusion

Biometrics present opportunities and challenges for us to consider as a community.

In Australia, we stand on the cusp of some big changes in the privacy space.

I encourage this community to get in front of this now by thinking about how the use of biometrics can embody fairness and reasonableness right from the start.

This can help organisations identify and mitigate risks and think through the ‘should’ as well as the ‘could’.

The reforms will strengthen the power of the OAIC as a regulator, improve protections and rights for individuals, and strengthen the Australian market, ensuring that organisations can innovate in a trusted context.

I am looking forward to working together with industry and government to support this transition.