
Good evening

Thank you for the opportunity to speak to you at the launch of the 10th Deloitte Australian Privacy Index 2024. It is terrific to see this interest in the future of privacy in Australia and how we can prepare for the challenges of tomorrow.

And the theme of a transparent tomorrow is something we can all get behind.

I took up the role of Australian Privacy Commissioner in February this year. I can see that I am not the first Privacy Commissioner to speak at the launch of the Deloitte Index, and very much like our own Australian Community Attitudes to Privacy Survey (ACAPS), the index has become an important tool to understand the state of privacy in Australia.

Interestingly, looking back at an OAIC speech at this event 8 years ago, I was struck by this line: ‘Privacy law is really underpinned by the requirement for transparency and the consumer choice that results.’

There is that word transparency, a theme that is central to this evening’s launch.

That speech also went on to say: ‘Good privacy practice is about the transparency of what is being collected, how it will be used, who it will be shared with, and how it will be protected, which allows our informed choices.’

Those remarks stand the test of time.

But what that speech didn’t discuss were such concepts as artificial intelligence or automated decision making (ADM), concepts that even 8 years ago were mostly the stuff of Black Mirror TV episodes. Today, they increasingly dominate our debate about the future of our digital economy and of our societies.

Prior to taking up my role as Privacy Commissioner, I spent 5 years running the Ada Lovelace Institute, a research institute dedicated to ensuring that data and AI work for people and society. A lot of our research was grounded in public attitudes and public participation research, and we heard time and time again that the key to ensuring trust in AI and data-driven technologies was the accountability and transparency of those using them.

As I’ve stepped into my role at the OAIC, a key priority for me is doing everything in my power to ensure that accountability and transparency are brought to the introduction of these new technologies. To that end, the OAIC has recently released guidance for organisations on how they can comply with their privacy obligations when participating in this world of artificial intelligence, and guidance on how developers can use personal information to train generative AI models.

But more on that later.

Deloitte Privacy Index

Turning back to the Deloitte Privacy Index, I am struck by how many of the findings resonate with what the OAIC has found with our own ACAPS, and how this information can complement our own knowledge, and assist in understanding the contemporary landscape.

And this knowledge is of great value to the organisations who are here tonight, who need to plan for tomorrow.

These findings show that the community are uncomfortable with the amount of information they are being asked to share, that they want to limit the information they share, and that in a significant number of cases it is prompting people to change product providers.

What stands out to me the most, though, are the findings on trust.

As a regulator, the OAIC may sometimes be portrayed as seeking to limit innovation, even though this is far from the truth. We see tremendous opportunities in the digital economy and in seizing the benefits of innovation. We want to be an enabler of innovation.

And that innovation needs to be built on the community’s trust that their data will be used safely. That trust needs to be built on sound privacy principles.

In reading this Privacy Index, one finding really stuck with me.

When consumers were asked what industry sector they most trusted, the most nominated sector was ‘None of the Above’. That was nominated by 30% of respondents, or just under a third. So congratulations go to any representatives of None of the Above who are here tonight.

Next came government on 28%, and then the figures fell away sharply. However, even the figures on trust in government were complicated; government was nominated as among both the most trusted and least trusted sectors. Older people tended to rank government more highly, while younger people took a dimmer view of its trustworthiness.

The bad news on trust does not end there – 50% of respondents are choosing not to buy a product or service because an organisation asked to collect personal information they weren’t comfortable sharing. This was 15 points higher than last year’s 35%.

The index highlights the issue of consumers feeling powerless, and that is a response that resonates with me, for I strongly believe that privacy is about power. No doubt, the significant data breaches we have experienced in recent years – and still experience, as shown in our most recent Notifiable Data Breaches scheme figures – are contributing to this sentiment.

So, the news on trust is not good. Does this sound like the right foundation for building an innovative data economy?

It doesn’t. But there is reason for optimism. The index makes clear that consumers will respond to privacy practices that are clear and transparent, and put a premium on using their personal information safely.

And a key part of that equation is privacy reform. Eighty-eight per cent of consumers believe privacy reforms are crucial, including wanting organisations to handle personal information in ways that are fair, reasonable, and not overly intrusive.

The government has released the first tranche of privacy reform. It is a welcome step, but we are eagerly looking forward to the second tranche, to ensure that all Australian organisations build the highest levels of security into their operations and that the community’s personal information is protected to the maximum extent possible.

Artificial intelligence

Artificial intelligence, along with ADM, is a key focus of the Deloitte report. It shows strong concerns among consumers about decisions being made with inaccurate data, and that a high percentage of consumers are unaware of how their data is used. People also worry about their ability to challenge decisions.

The index research shows that 72% of Australia’s leading brands mention using AI, ADM or other innovative technologies in their latest annual report – a large uptake across the board – but only 4% of companies explain in their privacy policy which decisions affecting consumers are made using AI or ADM.

That does not sound like a basis for building trust.

In our ACAPS results released last year, we found 3 in 5 Australians are uncomfortable with businesses using AI to make decisions about them, and 54% are uncomfortable with government agencies doing the same.

Ninety-six per cent of Australians want some conditions in place before AI is used to make a decision that might affect them, such as the right to have a human review the decision.

It is against this background, and with reference to the volatile debate about generative AI applications like ChatGPT, that the OAIC released our AI guidance recently.

We are increasingly developing guidance to help shape industry behaviour, improve privacy practices and let industry know what we expect. Some of you may also have noted our guidance on tracking pixels, which we released last week.

As a regulator, we continue to increase our focus on the potential harms that can result from technology with high privacy impact.

And addressing privacy risks arising from AI, including the effects of powerful generative AI capabilities being increasingly accessible across the economy, is high among our priorities.

We released 2 new guides for businesses – one focused on the use of commercially available AI products; the second targeted at developers using personal information to train generative AI models.

There is good reason to do this, and I go back to the issue of trust. If we are not able to establish trust in the use of AI, then the potential gains from innovation could be lost, and the community could be harmed.

That means businesses taking a precautionary approach – building in privacy by design at the outset – especially as technology in this area is developing so quickly and is still largely untested.

I urge you to read this guidance carefully. The guidance sets the boundaries of what is and is not an appropriate use of personal information and highlights an approach that is respectful of privacy rights.

Here are the top 5 takeaways we identified for business when using commercially available AI products:

  1. Privacy obligations will apply to any personal information input into an AI system, as well as the output data generated by AI (where it contains personal information).
  2. Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
  3. If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with Australian Privacy Principle (APP) 3.
  4. If personal information is being input into an AI system, APP 6 requires entities to use or disclose the information only for the primary purpose for which it was collected, unless an exception applies.
  5. As a matter of best practice, we recommend that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools.

Our guidance to developers clarifies how the Privacy Act applies to several practices involved in developing and fine-tuning generative AI models.

We are conscious that not all generative AI models will be trained using personal information. Models that do not involve personal information in their development or use will not need to comply with this guidance. But if you are a developer, we have 2 messages.

You should carefully consider whether your AI model will involve the collection, storage, use or disclosure of personal information, either by design or through an overly broad collection of data for training. Do this early in the process so you can mitigate any privacy risks.

Personal information is a broad category, and the risk of data re-identification needs to be considered.

I appreciate that for many of you, there is an interest in how personal information can be de-identified as a way of reducing privacy risk. This is an area with its own complexities and challenges, and it provides further reason to take a precautionary approach.

For example, the guidance is drafted based on the current state of technology and practices in the market. Some technological aspects that are highly relevant in the context of privacy include:

  • Model unlearning does not yet appear to be a robust solution, elevating privacy risks by making it more difficult to remediate privacy harms.
  • De-identified information is increasingly able to be re-identified, creating privacy risks for an organisation where information is taken outside its control.

We welcome any feedback you have on our guidance. We are cognisant of the need to assist organisations to navigate a world where data knows no borders, and we have sought, where possible, to reduce the burden on businesses that operate globally by aligning our work with international guidance on privacy obligations in the context of AI.

Conclusion and privacy reform

As Deloitte has said, there is a risk that customers – and data protection – will be left behind in the race for innovation.

If we want to reap the benefits of artificial intelligence and other technologies, what organisations need to be doing now is building privacy by design into all parts of the AI process.

They, or should I say you, also need to be ready for privacy reform. Act now and avoid reputational damage later. Build that trust now, because it is clear that the community is more attuned to its rights and keen to achieve redress if those rights are not respected.

Ultimately, this can be a great story for business, a great story for government, and a great story for the community, if we get our settings right.

And we hope it will be a great story for Australia’s privacy regulator. We are ready to play our part. Here’s to a transparent tomorrow.

Thank you, and congratulations to Deloitte for this year’s privacy index.