Almost everyone who is glued to their phone wishes they weren’t, and for many people the problem seems to be starting earlier. Children are gaining access to the internet at younger ages, with an entire generation growing up alongside a rapidly advancing digital landscape that has only intensified with the advent of artificial intelligence. Teens and adults alike can be exposed to extremist content, violence and sexual exploitation online. The internet age has had countless positive impacts on society, but also undeniable negative ones.
Some critics say social media addiction and worsening teen mental health are direct consequences of these platforms’ design. Now they’re taking major tech companies to court — and winning. Among them are the plaintiffs in two tandem lawsuits against Meta, the parent company of Facebook, Instagram and WhatsApp, whose jury trials concluded in favor of the plaintiffs on March 24 and 25. Meta plans to appeal both verdicts.
What complicates the social media trials is that young people face bona fide problems when they log online, and seeking retribution feels natural. However, critics say the short-term gratification of Meta’s losses is a Trojan horse for much more insidious internet policies that may actually do the opposite of protecting children and hand even more market power to Big Tech.
Though support for protecting children online is broadly bipartisan, organizations like the Heritage Foundation have been open about their desire to create a more censored internet. The organization, which was the architect of President Donald Trump’s Project 2025, endorses the popular Kids Online Safety Act because it will “guard against the harms of sexual and transgender content.”
These organizations’ visions of increased internet safety are largely anti-LGBTQ+, anti-immigration and, some argue, serve shadowy agendas beyond children’s mental health. Regardless, people across the political spectrum are flocking to the legislation following the lawsuits, calling them a win against Big Tech and comparing the social media cases to the landmark tobacco class action lawsuits of the 1990s — not just because they attempt to take down a big industry, but because of their specific legal strategy: product liability.
One trial, held in California Superior Court, focused on a single plaintiff, referred to by her initials, KGM. Her trial was a test case in multidistrict litigation against major social media companies. KGM v. Meta et al. centered on the claim that social media addiction in her youth led to depression, body dysmorphia, self-harm and suicidality. KGM’s lawyers employed a novel theory of product liability, which will serve as a bellwether strategy for the many future social media addiction cases.
“The product liability doctrines were built for an offline world where physical items would cause physical injuries, and the plaintiffs are trying to extend that doctrine to an online world where intangible content causes intangible injuries,” Eric Goldman, a law professor at the Santa Clara University School of Law, told Salon in an interview.
The plaintiffs claimed Meta and fellow defendants Alphabet (the parent company of YouTube), ByteDance (owner of TikTok) and Snap (owner of Snapchat), made intentionally addictive products through features like recommended feeds and infinite scroll. KGM’s extensive use of these products, as she and her lawyers argued, caused her mental health to deteriorate.
The other trial, in New Mexico, featured state prosecutors arguing that tech CEOs like Mark Zuckerberg misled users about their products’ safety, failed to enforce their minimum user age and purposely designed algorithms to push sensationalist or harmful content. This case also employed the product liability strategy in part.
Juries sided with the plaintiffs in both cases, ordering Meta and the other defendants to pay KGM $6 million and to cough up $375 million in the more extensive State of New Mexico v. Meta Platforms, Inc. While the $381 million total is a relatively steep figure, Meta’s total revenue last year exceeded $200 billion, making the awards worth just a fraction of a percent to the company.
Nevertheless, the fact that a jury decided to award the plaintiffs is notable, not because of the price tag, but because of a law that usually exempts Big Tech from lawsuits like this.
Meta and any other platform that hosts third-party content (like a blog with comment sections, a review site or a chat forum) are typically protected by a very powerful, yet very brief, provision of the 1996 Communications Decency Act. It dictates that a platform can’t be sued for hosting content created by a third party. Notably, it does not protect platforms from federal criminal violations, intellectual property claims or violations of sex trafficking laws. The shield, known as Section 230, comes from a 30-year-old law and is only 26 words long, yet it is considered the foundation of how the modern internet functions.
“Without Section 230, services like social media become much more concerned about their liability in allowing users to talk to each other, and the principal approach that they would take is to not permit those conversations at all,” Goldman said.
Increased moderation would vastly shrink users’ ability to engage, learn and explore the internet freely, which was one of the express purposes of Section 230. The law states that internet platforms “offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”
“When you think about what the internet has done, it basically removed gatekeepers — you used to have to be very rich, very powerful to have the kind of audience that social media gives every single human being for free,” Ari Cohn, lead counsel of tech policy at the Foundation for Individual Rights and Expression, told Salon. “That is a remarkable development in humanity.”
Many politicians behind the push to sunset Section 230 and pass social media safety bills have explicitly said they hope to restrict young people’s access to “damaging” content like LGBTQ+ communities, abortion resources and sex education, which might explain why right-wing groups threw their weight behind the plaintiffs.
The lawsuits attempted to skirt Section 230 by suing over the platforms’ internal features, not the third-party content posted on them. They argued that Meta’s design and delivery of third-party content is in and of itself problematic, causing social media addiction and putting children at risk of exploitation. Critics of the product liability strategy argue that product design and the content it delivers cannot be separated.
“If every video on YouTube was a beige paint drying on the wall, is anyone getting addicted to that? Is anyone getting harmed by that? Probably not,” Cohn said.
Features like endless scroll and recommendation algorithms rely on the content being served, he argues, so the features in and of themselves cannot cause harm. Goldman says this strategy is simply a workaround to hold platforms accountable for problematic third-party content they publish.
“If the product liability workaround to Section 230 is tenable, then there is no need to revise or reform Section 230 because it’s already a dead letter,” Goldman said. “Plaintiffs will always have the ability to claim that they’re not suing based on third party content, they’re suing on the way third party content was presented.”
Parties from across the aisle want to bypass, severely amend or completely get rid of Section 230, often in the name of protecting young people online. But a smaller group warns of the disastrous effects of hobbling Section 230 — or of allowing lawsuits like those in California and New Mexico that attempt to bypass the law altogether.
“All the people who are celebrating this ‘win for child safety’ might actually be inviting a set of payloads or baggage that they’re not prepared for,” Goldman said.
On its face, getting rid of Section 230 opens up opportunities to hold tech companies accountable for things they may have done wrong — especially since many argue they should be liable for publishing harmful information and content ranging from unrealistic body standards to violent extremist political rhetoric. However, other critics argue this view is oversimplified and that the consequences of deleting Section 230 are much more nuanced and likely harmful to free speech.
Any platform that publishes user-generated content — be that an email, a TikTok video or a restaurant review — would be open to lawsuits for publishing said content, opening the floodgates for litigation. To avoid this, tech companies would need to respond either by banning user-generated posts entirely or by severely moderating them.
“There would be more heavy content moderation because these platforms don’t have immunity and they’re going to be fearful that they’re going to be slapped with a ton of lawsuits,” Sophia Cope, a senior staff attorney with the Electronic Frontier Foundation, told Salon. “That may for some of the smaller media platforms actually bankrupt them.”
The EFF, a nonprofit organization that advocates for free speech and privacy rights online, has long been a defender of Section 230. In one blog post, the organization denies the law is “a shield for Big Tech,” arguing it is, in fact, “essential to protecting individuals’ ability to speak, organize and create online.”
Ending Section 230 doesn’t necessarily mean all platforms being sued would be held liable, but it does mean they would have to engage in lawsuits — hiring lawyers and defending themselves — instead of being dismissed from such suits automatically. With a market value of $1.67 trillion, Meta may be one of the few platforms that can absorb these legal fees without risk of bankruptcy.
In KGM v. Meta, even Snapchat and TikTok settled before going to trial. They admitted no wrongdoing, but the prohibitive cost of going to trial was seen as the greater financial risk, even for platforms that large.
“Even for the big companies who have the money to pay the lawyers, they don’t want to have to deal with a lot of this stuff either, so they’re just going to direct the money in the content moderators and over-moderate, over-censor to reduce the risk that any controversial content is going to subject them to a lawsuit,” Cope said.
Other more niche platforms that allow people to post and interact with one another may not survive constant litigation, even if they try to compensate by investing in extreme moderation. Cory Doctorow, who coined the term “ens**ttification” to describe how, over time, Big Tech purposefully makes its systems worse because of their functional monopolies, argued that gutting Section 230 would only serve to make those monopolies more powerful. In 2021, Meta CEO Mark Zuckerberg himself endorsed sunsetting Section 230, which EFF called “a self-serving and cynical effort to cement the company’s dominance.”
“It’s become a situation where the politicians and the media have convinced the public that the internet is bad, and the censors are loving it. They’re embracing that general skepticism to enact laws that are designed to cause people to not be able to talk with each other,” Goldman said. “The governments win that equation, we as constituents lose and yet constituents are cheering it on because it’s presented under the mantle of protecting kids or beating up the Big Tech giants.”
Many concerned groups and individuals genuinely want social media companies to be more accountable and to help protect children from credible dangers online, but Cope argues there are other ways to address this problem than focusing on Section 230.
“There are all these sorts of other things that we can try potentially through legislation to give users more agency and how their online experience goes without going down this other route where we’re just suing the companies into oblivion,” Cope said.
The EFF instead endorses robust data privacy legislation that better informs users of what information they’re often unintentionally giving away when they agree to Big Tech’s terms and conditions.
“One of the ways to control the online experience is to limit the ability of these platforms to gather data about you, build a profile about you, and then turn around and use that against you,” Cope said. “If you have a privacy law and an ability for people to better control what information the companies are collecting on them, that may actually stem some of this harm.”
However, much of the internet reform legislation on the docket is not concerned with user privacy or data transparency. Looking closer at “common sense” policies to protect children reveals much thornier incentives.
Bills like the 2025 App Store Accountability Act, introduced to the House last year, would require age verification to download certain “age restricted” apps and parental consent for users under 18. This bill and others like it that attempt to establish age restrictions on social media platforms often require the submission of government identification documents, face scans and other data-invasive procedures.
Cohn says policies like age verification severely limit every internet user’s privacy and ability to remain anonymous online. Platforms that use such policies have already been subject to data leaks and extortion.
“This country is founded on anonymous speech and anonymous criticism of the government,” Cohn said. “The idea that we should have to identify ourselves and create this link to our identity that removes our ability to safely and candidly critique power and wealth is crazy to me.”
Looking back at earlier “think of the children” campaigns, many of the same playbooks are in use. Much like during the Satanic Panic of the 1980s, people are consenting to invasive and censorial policies in the name of saving children from content deemed harmful. In modern cases, some of this content is genuinely problematic, like material promoting eating disorders and suicide. But in many cases, saving children from corruption is more about gatekeeping access to information and to platforms that allow young people to express themselves and connect with new communities.
“Everyone is so keyed up about concerns about child safety online, there’s still this general presumption that if the legislature’s trying to crack down on the internet, they’re doing it for the right reasons,” Goldman said. “I would assume that right now any laws that are working their way through the system are done in bad faith for the wrong purposes, and I’ll let the legislators try and convince me otherwise.”
The post Curbing social media to protect kids online could backfire appeared first on Salon.com.