Transparency. Real Change. How?

Transparency is often described as a first step towards holding social media companies accountable. But what does meaningful transparency look like and how could it spark real change?

January 2021

In the aftermath of the storming of the U.S. Capitol on January 6, 2021, Facebook and Twitter suspended the accounts of Donald Trump after years of rebuffing requests to silence him.

Trump is by no means the only world leader to use social media as a megaphone to incite violence or to spread disinformation. But in this case, public pressure and internal protests in the U.S. were intense, the skirting of community guidelines apparently too risky to brush under the carpet. Rioters had planned the attack openly on social media, and disinformation about the election had spread for months. Some have lauded the decision to suspend the accounts as an example of tech companies finally taking responsibility for the harms perpetrated on their platforms, while others have questioned the power the companies hold over public discourse with so little oversight.

Does it mean platforms will now step up and take greater responsibility? Without more meaningful transparency and accountability around both human and algorithmic decisions by platforms, probably not — and certainly not everywhere in the world. What this one high-profile example actually illustrates, by contrast, is how far platforms are from enabling ongoing transparency and systemic change. In other words, meaningful transparency doesn’t just help us understand what happened — it helps us to understand and even shape what happens next.

For the billions of people who frequent social media platforms (four of the most popular of which belong to Facebook), global crises are mediated via automated systems with opaque inner workings. Evidence from researchers continues to mount that these systems enable harmful content to thrive and make communities more susceptible to disinformation and polarizing content. Yet companies are usually only superficially forthcoming about harms or their policies, even in moments of heightened political tension or violence affecting millions.

Accountability to whom?

On October 20, 2020, during peaceful mass protests against police brutality in Nigeria, security forces opened fire on a crowd in Lagos, killing 15 people and injuring hundreds. Protestors uploaded images of the attacks to Facebook and Instagram with the hashtag #EndSARS, calling for an end to the abusive Special Anti-Robbery Squad of the Nigerian police, but their posts were automatically mislabeled as “false information,” playing into the hands of authorities.

Outrage spread beyond Nigeria’s borders as activists and journalists demanded answers. Facebook apologized, explaining that its moderation algorithms had automatically mistaken the hashtag for pandemic misinformation (because of its similarity to SARS, a respiratory illness). But when asked what the company would do to prevent this from happening again, a Facebook spokesperson had no response.

Nwachukwu Egbunike was one of the first to report on the blackout of the #EndSARS protests in Nigeria, writing for Global Voices. He was unsurprised by the lackluster response from the company to a major crisis for Nigerians and says it further erodes trust. “These platforms have for so long ignored local voices from around the world. It takes so many more stories coming out of places like Nigeria, Uganda, Ethiopia, or South Sudan to get anyone to pay attention,” he says.

According to a former Facebook data scientist and whistleblower, Sophie Zhang, Facebook’s global efforts to tackle “coordinated inauthentic behavior” were often decided by arbitrary processes and directed more readily at spam or “public relations risks” than at governments coordinating disinformation. In a September 2020 memo, initially reported by BuzzFeed News, Zhang wrote, “I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count.”

For researchers monitoring political speech in countries where platforms have not rolled out the security and election integrity measures deployed in the U.S. or U.K., the disparity can be infuriating. Rosemary Ajayi experienced this with Twitter during the 2019 election in Nigeria, as she and a team of eight monitors collected hundreds of tweets spreading election disinformation and hate speech. In March 2020 she said, “There is really no justification for why existing tools that could protect users in Nigeria aren’t made available.” Even after volunteering to advise social media companies on local concerns herself, Ajayi said: “The platforms don’t necessarily understand what the issues are, but then even when they do, I don’t know that they have the will to actually address them.”

More meaningful transparency

Calls for increased transparency from major social media companies have intensified among internet researchers, policymakers, and digital rights advocates as a means to hold companies accountable. As in human rights or environmental advocacy, this is part of a proven playbook for change when dealing with global corporations. For instance, EFF renewed calls in 2020 for “meaningful transparency” in content moderation that shows why and how decisions are made. The European Commission supports a Code of Practice on Disinformation for online platforms. And Ranking Digital Rights [a Mozilla grantee partner] calls for clearer public disclosures on algorithms, human rights, and privacy, tracking companies’ progress over time.

There has been progress on transparency in the tech sector over the years, but most of the voluntary, public-facing measures — such as databases of digital ads or transparency reports detailing government requests — are still not really designed to hold platforms themselves accountable or to enable civil society to shape their practices. Instead, they lend superficial transparency to how others use the platforms. For example, Facebook’s Ad Library can be used to search through ads displayed on the platform. But it does not show how Facebook uses data mining and algorithms to target those ads to users, nor does it provide information about the impact of such ads, which is critical for understanding what leads to discriminatory outcomes.

Worse, researchers say that the ad transparency tools from Facebook and Google (which dominate the digital ad market) are nearly unusable. “I’ve worked with Facebook’s transparency offerings for more than nine months now, and it feels like Facebook never even tried to verify that its transparency offerings actually achieve the promises the company has made,” wrote Paul Duke, a software engineer with the Online Political Transparency Project at NYU’s Tandon School of Engineering, in October 2020. In a blog post, Duke outlined numerous technical issues along with incomplete and inaccurate data. He speculates that a company like Facebook, with ample resources and skilled engineers, simply may not want to solve these technical problems.

After years of complaints from around the world about harms including data leaks, toxic hate speech, online violence, double standards in content moderation, misinformation, unethical data collection practices, harmful content recommendations, and opaque political ad sales, it seems unlikely that large social media companies will self-correct more than they must.

Regulation pending

So far, most of the measures platforms implement to mitigate harm are purely voluntary, insufficient, and carry no real accountability under the law. Andrew Puddephatt of Cedar Partners is the lead author of a global study on platform accountability for the NetGain Partnership, based on interviews with 85 experts in six jurisdictions. He says he has seen broad agreement on platform harms worldwide, but little consensus on what to do about them, practically or in terms of regulation.

Asking governments to regulate social media platforms raises qualms for anyone aware of the pervasive online censorship and surveillance that exist worldwide. But neither is it practical to imagine a court process for every single content takedown or algorithmic decision.

Julie Owono, the executive director of Internet Sans Frontières and a member of Facebook’s new Oversight Board, a body of independent members that review Facebook’s most difficult decisions related to online content, says she believes it is crucial for social media platforms to work towards responsible transparency measures that empower civil society in different regions to affect decisions. “Companies shouldn’t wait to be pressured to provide that informed transparency. It should be a part of their responsible business conduct,” says Owono.

The Oversight Board began accepting appeals in October 2020 and is deciding on its first cases in January 2021. As its first high-profile test, it will take on whether to indefinitely suspend Donald Trump from Facebook and Instagram, driving global interest in what happens next. Announcing the news in January 2021, Facebook’s VP of global affairs, Nick Clegg, wrote, “It would be better if these decisions were made according to frameworks agreed by democratically accountable lawmakers. But in the absence of such laws, there are decisions that we cannot duck.” Facebook has said it will abide by the Oversight Board’s decision, in effect deferring judgment.

Around the world, different legal frameworks are under consideration for regulating social media, including some that focus on design and testing of services before they are ever deployed. But perhaps the most promising new framework is outlined in a draft law of the European Union called the Digital Services Act. It emerged at the end of 2020 and, if passed, would set high standards for assessing and mitigating the risks of “very large online platforms” for both individuals and societies. For instance, platforms would be required to report more transparently with specific obligations towards both regulators and public interest researchers. In this “procedural accountability” framework, platforms could be held responsible for lapses in mitigation of harms or for the misuse of their services by others. They could be required by law to make changes.

It could take years before the Digital Services Act becomes law, but in the meantime it is sure to influence discourse on platform accountability in many other jurisdictions, just as the European Union’s General Data Protection Regulation (GDPR) has influenced privacy laws worldwide in recent years. But even with transparency mandated by regulation, creating meaningful and lasting transparency requires ongoing effort and cooperation from inside and outside companies. With the GDPR (as would be the case with the Digital Services Act), civil society researchers play an outsize role in analyzing and cross-checking how new regulations are enforced to hold companies accountable.

Forcing transparency

That a giant corporation would be secretive about its business is nothing new, but it takes on special flavors in the realm of big tech, where artificial intelligence powers key products and services. If algorithms are the special sauce that makes ad targeting, social media news feeds, and content moderation work at scale, then stockpiles of data are the secret ingredient.

Big tech companies are opaque about many things that typically fall under corporate social responsibility: taxes, environmental impact, harassment, labor rights, military contracts, and more. But in the realm of online systems there are unique opportunities to create (or even force) transparency before, during, and after technologies reach the hands of consumers.

Companies should themselves be developing tools for meaningful transparency and data access for researchers, but until they do, there is a role for researchers and communities working together to make visible the processes that are otherwise kept hidden.

One pioneer of this approach is Surya Mattu, an engineer and journalist who has worked with several investigative journalism organizations in the U.S. to expose harms embedded in algorithms. In 2016, Mattu developed a browser extension with ProPublica to give people direct insight into how Facebook targeted ads to them. A team of ProPublica journalists, including Julia Angwin (now editor-in-chief of The Markup), discovered that Facebook allowed housing advertisers to racially discriminate. It was one of a highly lauded series of reports about algorithms and machine bias. A year later, the team discovered that Facebook had never fixed the problem.

In an interview with Angwin for The Markup’s newsletter in January 2021, Mattu shared this story as an early example of how algorithms are used by the tech industry to “skirt responsibility” for wrongdoing and why “persistent monitoring” is necessary.

“My engineering background enabled me to see in the code, on the platforms, when they ignored what we had shown in our stories. I wanted to highlight it at the level of the tool and the engineering, what their negligence was enabling,” Mattu told Angwin.

In October 2020, Mattu and Angwin launched The Markup’s Citizen Browser, which combines browser data from a representative sample of 1,200 people in the U.S. to expose how disinformation travels across social media platforms. Articles based on this data show how Facebook failed to stop recommending partisan groups and how Biden and Trump voters were exposed to “radically different news coverage” about the storming of the U.S. Capitol.

“As big tech platforms gain more power over our discourse, I believe that persistent monitoring tools like these — while expensive to build — are going to be a key strategy in holding accountable the constantly changing algorithms that govern our lives,” wrote Angwin in the newsletter.

Many others are innovating in this technically challenging field. The Ad Observatory by the NYU Online Political Transparency Project combines data volunteered by thousands of people to force more transparency around political advertising on Facebook. Mozilla’s RegretsReporter (also a browser extension) collects data about how “regrettable” videos are encountered on YouTube.

Such “real-time” transparency mechanisms, which involve thousands — or even millions — of people in privacy-preserving ways, are opening up new pathways for accountability. They are complex to scale across different platforms, languages, and geographies, but by laying bare the facts, they can help guide policymakers on what types of transparency to require for everyone; consumers on what social media conditions to refuse; and developers on what aspects of social media to rebuild.

Unexplored territory

Transparency tends to be the most commonly cited principle in ethical AI guidelines. But it can mean entirely different things to different stakeholders. To nonprofit organizations and data controllers, transparency can mean audits and oversight. To social media users, it can mean seeing explanations of why things are shown to them. To AI developers, it can mean more rigorous documentation practices in the development of machine learning models. To move towards social media systems that are more transparent and accountable to their core, we will need to weave together disparate and meaningful work that is happening across different sectors.

If technology were designed to be transparent from the outset, could it empower people to take action or to shape their digital experience? Could algorithms be trustworthy in ways that could make fewer bad things happen? These are questions that cut to the core of the social media systems people interact with daily. For content moderation in particular, algorithmic transparency is useful for understanding when content moderation hurts, what went wrong, and how things could be better. We are still far away from any of this happening meaningfully at scale.

“We have allowed our public sphere to be controlled by a small number of private companies, and we are now discovering how vulnerable online speech can be,” wrote Ethan Zuckerman in an opinion piece on CNN following Trump’s expulsion from Twitter and Facebook, noting that decisions on what speech to restrict (or not) should not simply be handed off to “algorithms and low-paid moderators.” Zuckerman leads the Institute for Digital Public Infrastructure at the University of Massachusetts at Amherst and is a longtime advocate for civic discourse on the internet. In a new podcast, Reimagining the Internet, he speaks to people with deep insights on how to create better social media experiences. “There is already so much energy focused on fixing the platforms, so my energy is going to be focused on building alternatives,” says Zuckerman.

Working in concert with platforms, regulators, and civic groups, we can inch closer to understanding what systems work for different purposes and make more informed and principled decisions. If transparency is a prerequisite to accountability, we need to keep exploring new ways to open the most opaque functions of social media to the world. As organizers of the New Public Festival wrote in January 2021, the challenge for creators of digital spaces is to uncover “how they might help weave, rather than tear, the social fabric.” With increased transparency about the algorithms, governance, and community dynamics of large platforms, a broader set of stakeholders can engage in more fruitful conversations about strategies for the future.