In the United States, 41 states have filed lawsuits against Meta for allegedly causing social media addiction in its young users (those under 18), amid growing concern about the negative effects of social media platforms.
The lawsuits allege that Meta collected data from young users, designed features to promote compulsive use of Facebook and Instagram, and misled the public about the negative effects of those features.
What can we expect to happen next? And are there possible consequences for Australia?
Exploiting whistleblower disclosures
The most significant lawsuit, filed in federal court in California, involves 33 states. The claim is based on violations of state consumer protection statutes, common law principles concerning fraudulent, unfair or unconscionable conduct, and federal privacy law and regulations that specifically protect children (collectively, the Children’s Online Privacy Protection Act, or “COPPA”).
This coordinated action is reminiscent of the class actions brought in the US and UK by Rohingya refugees against Facebook, for its role in facilitating hate speech against their community in Myanmar.
These cases rely in part on former Meta employee Frances Haugen’s 2021 revelations about the role of Facebook’s algorithms in platform harms. Haugen testified that the algorithms deployed on Facebook and Instagram are designed to increase content sharing, and thus revenue, by using data collected from users over many years.
These algorithms play a key role in determining what content viewers are exposed to, how long they consume it, and how likely they are to share it.
According to Haugen, Meta made changes to its algorithms in 2018 to prioritize meaningful social interactions. Those changes, she said, affected the way content was viewed on the news feed, leading to increased sharing of negative content such as hate speech.
Concerns about algorithms and content
The California case is notable for its specific allegations about the strategies used to keep young people interacting with Facebook and Instagram. For example, the plaintiffs elaborate on the impact of the “infinite scroll” feature introduced in 2016.
This feature prevents users from viewing a single post in isolation. Instead, it provides a continuous stream of content with no natural endpoint. Haugen described it as akin to giving users small doses of dopamine: it leaves them wanting more and less likely to exercise self-control.
The plaintiffs in the California case argue that this feature encourages users, especially young users, to use the platforms compulsively – which harms their wellbeing and mental health.
They say the recommendation algorithms used by Meta occasionally present users with harmful material. This includes “content related to eating disorders, violent content, content that promotes negative self-perception and body image issues, (and) bullying content.”
They also highlight features such as “variable reward schedules”, implemented to encourage compulsive use among young people. This causes further physical and mental harm, such as sleep deprivation.
Implications for Australia
In the US, federal law (notably Section 230 of the Communications Decency Act) significantly limits the liability of online intermediaries such as Meta for content shared by users.
In contrast, Australia’s Online Safety Act empowers the eSafety Commissioner to compel social media platforms and other online intermediaries to remove problematic material from circulation. This includes material relating to cyberbullying of children, cyber abuse of adults, image-based abuse and abhorrent violent material.
The Federal Court can impose significant penalties for violations of the Online Safety Act. But the Act does not cover all harmful content on social media, such as some content linked to eating disorders and negative self-image.
Addressing the compulsive use of social media by young users is a different challenge altogether. Some countermeasures are possible. For example, if the fraud allegations in the US are proven, any evidence relating to Australian users could form the basis of a claim against Meta for misleading or deceptive conduct (or false or misleading representations) under the Australian Consumer Law.
Last year, A$60 million in civil penalties was awarded against Google LLC for false or misleading representations made in 2017-2018. A lesser penalty of A$20 million was awarded against two Meta subsidiaries in 2023.
Maximum penalties under the Australian Consumer Law have increased since the Google case, likely in response to the platforms’ deep pockets. Courts can now impose penalties of up to 30% of a platform’s adjusted turnover, or three times the value of the benefit obtained by the wrongdoer.
However, platforms are in a stronger position where the conduct is not misleading, false or deceptive, but merely “manipulative” or “unfair”. For example, the infinite scroll feature is unlikely to be considered misleading or deceptive under Australian law.
Australia also has no direct equivalent of COPPA. And Australia’s unconscionable conduct law requires such a high level of harsh or oppressive behaviour that it is extremely difficult to prove.
A recent unconscionable conduct case brought by a problem gambler, based on the addictive design of electronic poker machines, failed in the Federal Court.
Deficiencies in the current law have contributed to calls for a new prohibition on unfair trading practices. Pressure is also mounting to reform the ineffective and under-enforced Privacy Act.
We need cooperation and innovation
There are still many gaps in the Australian laws needed to protect consumers, especially children, from the harms posed by social media platforms. But domestic law can only go so far in protecting people using a medium that operates (mostly) seamlessly across borders.
As such, international law scholars have proposed more creative approaches in the context of online hate speech. One suggestion was to make platforms accountable for their actions under the laws of the country in which they are based, for enabling crimes that occurred in other jurisdictions.
In a landmark 2021 decision, a United States District Court ordered Facebook to disclose to The Gambia various materials related to hate speech against the Rohingya community in Myanmar.
In doing so, the court strengthened The Gambia’s claims in a pending action before the International Court of Justice. That action alleges the Myanmar government, through its genocidal acts against the Rohingya people, breached its obligations under the Genocide Convention – and that hate speech amplified on Facebook enabled the violence.
As society grapples with the implications of mass data collection and profit-maximizing algorithms, protecting individuals will require international cooperation and a reassessment of legal frameworks.
Kayleen Manwaring, Senior Research Associate, Allens Hub for Technology, Law and Innovation, and Senior Lecturer, School of Private and Commercial Law, UNSW Sydney, and Siddharth Narrain, Lecturer in Law, University of Adelaide
This article is republished from The Conversation under a Creative Commons license. Read the original article.