Chandni Gupta is the digital policy director at the Consumer Policy Research Centre (CPRC) – an independent, nonprofit, consumer think-tank, where she leads the research stream on protecting consumers in a digital world.
The idea that you have a choice about how your data is used is a myth – despite the digital economy touting choice like never before.
As consumers, we are expected to wade through long privacy policies written in impenetrable legalese and consent to broad uses of our data, often without a specified time period. Privacy policies are presented on a 'take it or leave it' basis – if you don't like it, you don't get the product. This isn't choice, and it doesn't really protect you from harmful business practices.
Our current privacy laws lean heavily on notification and consent as a primary means of protecting consumers. But in a data-driven world where every facet of a person's digital presence can be collected, analysed and used for commercial gain, is it time to shift the onus back to businesses?
Holding businesses to account for how they use our data
When it comes to countering harm in other sectors, the concepts of 'best interests' and 'duty of care' frequently come into play. Businesses and professionals are expected to do right by the people they serve or those who are affected by their decisions.
As our world becomes more data-driven than ever before, it is only fair that we start expecting businesses operating in the data space to do the same.
The concept of requiring businesses to act in the interest of consumers in the digital space is slowly gaining traction. The New York Privacy Act's data fiduciary obligation requires businesses to act in good faith towards those who have shared their data with them and to place consumers' best interests above those of the business.
In the European Union, a duty of care obligation is being imposed on large technology platforms via the Digital Services Act, and the United Kingdom's proposed Online Safety Bill would impose a duty of care on social media companies.
The concept is not new, but it remains unexplored in Australia.
A best interests duty or a duty of care has the potential to strengthen consumer protections by adding a level of accountability for businesses that is currently lacking.
Imagine the balance of responsibility tilting towards the business instead of consumers being expected to navigate lengthy privacy policies and opt-out options on every website they visit or app they use.
A lack of regulation has allowed businesses to deploy controversial surveillance tools such as facial recognition technology.
Regulation needed to keep consumers safe
In order to keep consumers safe, we need a regulator with the power to halt harmful business practices.
In other sectors, such as finance and product safety, regulators and governments have the power to step in when they see current or emerging harm to consumers.
Such a power would have meant that when Bunnings, The Good Guys and Kmart were found to be using facial recognition technology as a surveillance tool, the regulator could have paused or limited its use while investigating the issue. Instead, we are relying on the good faith of businesses to stop using this controversial technology – many of whom seem to be placing the commercial benefits of data harvesting above the safety and wellbeing of Australians.
As the government considers new data privacy protections for Australians, it's an opportunity to consider notions of safety and care. Safety and care can go hand-in-hand to create a digital economy that is fair, safe and inclusive – a digital economy that supports consumer choices instead of manipulating them, and one that builds trust instead of eroding it.
For more information, visit cprc.org.au/in-whose-interest.