Over the last five years, a growing number of tech companies have begun issuing transparency reports. Typically, these reports disclose the number of requests a company received from governments for information about its users or for restriction of content accessed through company platforms. Google issued the first such report in 2010, and around 60 tech companies have since followed suit.
Transparency reports have raised important questions about the scope of government requests to companies and about how companies do or don't comply. But they are just one element in a much bigger debate about how tech companies and governments interact—and how user privacy and expression can be put at risk. A project co-chaired by the Center issued a report last Monday on how to improve transparency overall as a form of accountability in this process.
Over the course of the last year, the Center co-chaired a working group on online privacy and transparency established under the auspices of the Freedom Online Coalition (FOC). The FOC is a group of 28 governments working together to advance internet freedom. The working group we co-chaired brought together civil society, company, and government experts to look at risks to human rights at the intersection of tech company and government practice.
We consulted with 15 major multinational tech companies and governments to understand how they manage requests for user information or content restriction, what safeguards they do—or do not—build into the process, and how they decide what to disclose to the public about these interactions. Every day governments make these requests to companies for legitimate national security and law enforcement reasons. But this process can easily be abused to reveal the identity of activists and repress content a government doesn’t like. Being transparent about how governments and companies handle these requests can mitigate the risk that the process is abused to undermine human rights.
Public debate on this topic easily becomes one-dimensional: governments cite vague national security reasons to withhold information, and companies blame governments for prohibiting disclosure. When our group talked with government and company representatives in a candid setting, we uncovered many more—often shared—challenges, ranging from internal resistance to capacity gaps. We also found a disproportionate focus on numbers: while statistics are useful, we need more information on internal policies and processes to judge whether a company or government has institutionalized human rights protections. Our report outlines challenges as well as opportunities for both parties to improve transparency.
Transparency isn’t an end in itself: it’s a means of holding companies and governments accountable for having responsible systems in place to protect human rights. And while this working group focused on government requests, we can’t separate that conversation from corporate transparency in other areas—namely, how companies collect and sell user information for commercial purposes.
This week, Rebecca MacKinnon launched the Ranking Digital Rights Corporate Accountability Index, which evaluates 16 major internet and telecommunications companies on their disclosure of policies and practices that affect users’ free expression and privacy. The research shows huge room for improvement in transparency about privacy protections across both areas. Some companies that disclose information on third-party requests for user information—e.g., from governments—obscure whether and how the company itself shares user information with third parties for commercial gain.
The Center has just launched a major project to shed light on how companies collect, share, and sell your data. Stay tuned for more on online tracking and its implications for individual privacy online.