Nov. 8, 2021 — A leaked trove of papers from inside Facebook shows that the social media giant’s internal research uncovered a host of problems on the platform related to public health and other issues, but that the company did virtually nothing about them.
The files were leaked by a whistleblower, former Facebook employee Frances Haugen, who shared tens of thousands of documents with the Securities and Exchange Commission, Congress, and a consortium of news organizations. She has since testified before the Senate Commerce Subcommittee on Consumer Protection and European lawmakers.
Amplifying ‘Anti-Vaxxers’ and Other Misinformation
President Joe Biden caused a stir in July when he said that, thanks to rampant misinformation about the COVID-19 vaccine, social media platforms like Facebook are “killing people — I mean they’re really, look, the only pandemic we have is among the unvaccinated. And they’re killing people.”
While he was forced to walk back the statement, the leaked papers suggest he wasn’t necessarily wrong.
According to the papers, in March — a time when the White House was preparing a $1.5 billion campaign against vaccine misinformation — some Facebook employees thought they had figured out a way to counter those lies on the platform, and at the same time prioritize legitimate sources like the World Health Organization.
“Given these results, I’m assuming we’re hoping to launch ASAP,” an employee wrote.
But Facebook ignored some of the suggestions and executives dragged their heels implementing others. Another proposal, aimed at curtailing anti-vaccine comments, was also ignored.
“Why would you not remove comments? Because engagement is the only thing that matters,” Imran Ahmed, CEO of the Center for Countering Digital Hate, an internet watchdog group, told The Associated Press. “It drives attention and attention equals eyeballs and eyeballs equal ad revenue.”
Facebook’s algorithms — which determine the content you see in your feed — also help to spread misinformation.
“It’s not like the anti-vax contingent was created by Facebook,” says Dean Schillinger, MD, director of the Health Communications Research Program at the University of California, San Francisco. “The algorithm said, ‘OK, let’s find certain people with certain political beliefs and let’s link them to anti-vaxxers,’” amplifying the misinformation. “That is certainly something that’s novel.”
If that weren’t enough, it appears Facebook may have misled Congress about the company’s understanding of how COVID misinformation spread on the platform. In July, two top House Democrats wrote to Facebook CEO Mark Zuckerberg requesting details about how many users had seen COVID misinformation and how much money the company made from those posts.
“At this time, we have nothing to share in response to the questions you have raised, outside of what Mark has said publicly,” the company said in response.
But the leaked papers show that by that point, Facebook’s researchers had run multiple studies on COVID misinformation and produced large internal reports. Employees were able to calculate the number of views garnered by a widely shared piece of misinformation. But the company didn’t acknowledge that to Congress.
Keeping this knowledge secret was a huge missed opportunity to ensure science-backed information reached the general public, says Sherry Pagoto, PhD, director of the UConn Center for mHealth and Social Media.
“We know how misinformation spreads, so how can we think more about disseminating good information?” she says. “They have all kinds of data on the characteristics of messages that go far. How can we use what they know in the field of health communication to come up with a plan?”
In an emailed statement, a spokesperson for Meta (in the midst of the uproar, Facebook announced a new corporate name) said, “There’s no silver bullet to fighting misinformation, which is why we take a comprehensive approach, which includes removing more than 20 million pieces of content that break our COVID misinformation policies, permanently banning thousands of repeat offenders from our services, connecting more than 2 billion people to reliable information about COVID-19 and vaccines, and partnering with independent fact-checkers.”
Ignoring Instagram’s Effect on Vulnerable Teens’ Mental Health
Combating misinformation isn’t the only way Facebook and its subsidiaries could have acted to protect public health. The company was also aware of its negative impact on young people’s mental health, but publicly denied it.
Instagram, which is owned by Facebook, is extremely popular among teenage girls. But the photo-sharing app exposes them repeatedly to images of idealized bodies and faces, which can lead to negative self-comparisons and pressure to look perfect.
Pro-eating disorder content is also widely available on the platform. For years, social science and mental health researchers have been looking at social media’s effect on mental health, particularly for adolescents. Studies have found links between Instagram use and depression, anxiety, low self-esteem, and eating disorders.
The Facebook Papers revealed what Instagram researchers called a “teen mental health deep dive.” And there were serious problems: The internal research showed that the platform made body image issues worse for 1 in 3 teenage girls, and 14% of teenage boys said Instagram made them feel worse about themselves. The data linked use of the app with anxiety and depression. And among teens who reported thoughts of suicide, 6% of American users and 13% of British ones tied that impulse directly to Instagram.
Jean Twenge, PhD, author of iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy–and Completely Unprepared for Adulthood, has been studying social media’s effects on young people for almost a decade.
“I was not surprised that Facebook was finding social media could have significant links to depression and self-harm. The academic research has been showing that for years,” she says. “I was surprised how in-depth their research was into exactly the mindset of teen girls using Instagram. Their research really built on what we already knew.”
As with Facebook’s findings on misinformation, the company publicly downplayed Instagram’s negative effects — including in comments to Congress — and did little to adjust teen users’ experience on the app.
“I think that given what they knew about Instagram and mental health, it certainly would’ve been the right thing to do to make changes to the platform,” Twenge says.
In the same emailed statement, the Meta spokesperson said, “Our research doesn’t conclude that Instagram is inherently bad for teens. While some teens told us Instagram made them feel worse when they were struggling with issues like loneliness, anxiety, and sadness, more teens told us that Instagram made them feel better when experiencing these same issues.”
A Responsibility to the Public Good?
While Facebook users may be surprised to learn how the company regularly put profits ahead of its customers’ health, those who study public health are anything but.
“This is not a problem in any way unique to social media platforms,” Schillinger says.
“Corporate entities frequently pursue policies that engage the public to participate in activities, to purchase or consume products, to implement behaviors that are unhealthy to themselves or others or the planet. … Do you think Facebook is acting differently than any other company in that space?”
That is where regulation comes in. Haugen, the whistleblower, has called for it, as have many lawmakers in the wake of her revelations.
“Large organizations that have influence and access to lots of people need to be accountable to the well-being of that population, just as a matter of principle,” says sociologist Damon Centola, PhD, author of Change: How to Make Big Things Happen.
He likens the explosion of social media to the history of television, which has been regulated in numerous ways for decades.
“I think that provides us with a parallel of social media and the capacity of the medium to influence the population,” he says. “It seems to me that organizations can’t get away with saying they won’t take public welfare into account.”
The so-called Facebook Papers are most damning, some experts say, in light of the company’s defense: that its internal research was intended only for product development and therefore doesn’t prove anything.
That argument disregards the peer-reviewed studies, published in respected journals, that reinforce the findings of Facebook’s internal research. Taken together, the two bodies of evidence leave little room for doubt that something needs to change.
“Think of it like environmental polluting,” Centola says. “Companies can know they’re polluting, but they can also say it didn’t actually matter, it didn’t cause any harm. But then you get the documentation saying no, that has huge effects. That’s when it really does matter.”
Social Media as a Force for Good
But there is one potential upside to the Facebook Papers, according to the experts: They make clear that the company knows a lot about how to spread messages effectively. With enough pressure, Facebook and other social media platforms may begin to apply those insights in a positive direction.
“Facebook should be developing a strong collaboration with trustworthy entities to develop content that’s both true and promotes public health, while also engaging and algorithmically driven,” Schillinger says. “If we can use the platform and the reach and the [artificial intelligence] Facebook has for health-promoting content, the sky’s the limit.”
And efforts like that may be on the horizon.
“We’re focused on building new features to help people struggling with negative social comparison or negative body image,” the Meta spokesperson wrote in the email. “We’re also continuing to look for opportunities to work with more partners to publish independent studies in this area, and we’re working through how we can allow external researchers more access to our data in a way that respects people’s privacy.”
That is not to say Facebook will voluntarily put public health before its need to make money without regulations forcing it to do so.
“I do think Facebook is interested in making their platform better for users. But their first interest is always going to be having as many users as possible spending as much time as possible on the platform,” Twenge says. “Those two desires are often at cross-purposes.”