Are tech companies always neutral? Cloudflare’s latest controversy shows why the answer is no.

Jenna Ruddock is a Fellow of the Technology and Social Change Program, and April Glaser is a Senior Internet Policy Fellow at the Shorenstein Center at Harvard Kennedy School.

Time to Review Internet Infrastructure Policy

Infrastructure rarely makes headlines unless it fails, and internet infrastructure is no exception. But last month, Cloudflare, a popular internet infrastructure company offering everything from domain name support to web security and content delivery, was once again reluctantly dragged into the spotlight. The problem was not a broken pipe or a cyberattack against its network or customers, but Cloudflare's continued protection of a customer site despite overwhelming evidence of ongoing online and offline harassment and abuse by that site's user community. (To avoid amplifying the sites in question or directing readers to their content, this article will not name them.)

Banned. Blocked. Paused. Demonetized. Most of us are familiar with the range of strategies major social media platforms use to moderate online content, and with the confusion and challenges that result from erratic efforts to moderate user-generated content at scale. The Facebooks and YouTubes of the internet have proven ineffective at preventing the online communities they host from engaging in destructive behavior, including incitement to violence. The prospects are even more worrisome for internet infrastructure companies that are not directly in the social media business.

But the stakes are just as high: consider Cloudflare's decision in 2019 to stop serving 8chan, a site known for violent extremism and explicitly white supremacist content. That year, three mass shooters posted hate-filled manifestos to 8chan before opening fire. The shootings killed 75 people, with a total of 141 casualties. Even after the third attack, in El Paso, Texas, Cloudflare initially said it would not stop serving 8chan. Hours later, amid public outrage and negative press, Cloudflare terminated services for the site.

So how should we view online infrastructure companies and their responsibility for harms caused by websites using their services?

Social media platforms built around user-generated content have relatively targeted moderation tools at their disposal, such as flagging or removing questionable posts or banning individual pages. Companies that provide internet infrastructure services such as web hosting or domain name services typically have far fewer options; they are usually limited to blunt actions such as taking down an entire website or blocking an entire domain. Meanwhile, governments are increasingly turning to infrastructure providers such as ISPs to disrupt internet access across entire regions during times of unrest.

For those who want to see companies like Cloudflare stay out of the content moderation game entirely: well, that ship has sailed. Up and down the "stack," internet infrastructure services have repeatedly made unilateral decisions to drop entire sites, and Cloudflare isn't alone. When Cloudflare dropped the neo-Nazi site The Daily Stormer in 2017, so did Google and GoDaddy, which had provided the site's domain registration. But these decisions largely play out away from the public eye, rarely making headlines unless they come on the heels of sustained public outcry. Internet infrastructure companies rarely point to clear, pre-existing guidelines or policies when they act in these situations. The result is a record of ad hoc, reactive decisions so opaque and inconsistent that it is difficult for anyone outside these companies to imagine better approaches to these thorny policy problems.

In a recent blog post, Cloudflare's leadership offered some analogies meant to justify the company's deep reluctance, and at times outright refusal, to part ways with sites that have long track records of harm. The company argues that, as a provider of website security services, Cloudflare is a lot like a fire department, and that refusing to serve a site based on its content would be tantamount to refusing to respond to a fire because the home belongs to someone who lacks "sufficient moral character."

Without picking apart that specific analogy, there are two glaring problems with comparing most internet infrastructure providers to any public service rooted in the community it serves. The first and most obvious is that the vast majority of internet infrastructure providers are for-profit corporations, subject to no comparable system of public oversight and accountability. While these companies may fairly position themselves and their services as valuable, even integral, parts of the internet as a whole, their most concrete obligations ultimately run to their paying customers and, above all, to their owners or shareholders.

But the second, subtler difference lies in how we identify the rights and harms at stake. The provision of infrastructure services is often positioned as the neutral default; only the denial of those services is seen as a political choice. In other words: denying service to sites or forums that promote or are directly linked to violence is easily characterized as potential disenfranchisement, and thus an affront to a "free and open internet." But when a company chooses to continue providing services despite solid evidence that a website is being used to promote hate and abuse, that choice is largely not seen as a threat to the overall health of the internet in the same way. Yet as legal scholar Danielle Citron has pointed out, online abuse itself "endangers freedom of speech," particularly the speech of the "women, minorities and political dissidents" who are disproportionately targeted online.

Infrastructure companies themselves embrace this idea of neutrality, and without the backing of law enforcement or the courts, calls to action from targeted individuals and communities are often dismissed as subjective disagreements over content or politics. Cloudflare's analogy is a case in point: declining to serve a website is compared to refusing to render potentially life-saving emergency assistance, while the harm of persistent, targeted harassment is reduced to a judgment of "moral character." And while companies may prefer to act only in accordance with legal process, shifting the burden entirely onto the legal system ignores the reality that law enforcement agencies and courts have a poor record on online abuse: they frequently dismiss the harms reported by its targets, and often inflict additional damage in the process.

One frequently voiced concern is that refusing to serve bad actors creates a "slippery slope" that ends in refusing to serve anyone, including the marginalized communities often targeted by forums like 8chan. So far, this has not been the case. Although Cloudflare claims that its terminations of 8chan and The Daily Stormer led to a "dramatic increase in authoritarian regimes trying to get us to end our security services for human rights groups," it is unclear whether any of those demands are reflected in the company's transparency reports. Greater transparency across the stack is needed for a well-informed public conversation. But just as important is scrutiny of how and when the "slippery slope" argument is applied. Cloudflare says its latest takedown decision came after an escalating threat, over just 48 hours, led the company to conclude there was an "unprecedented emergency and immediate threat to human life." The slope from "objectionable content" to the harassment, assaults, and mass shootings encouraged by online hate communities appears to be quite slippery, too.

Those concerned with creating a safe and thriving digital world have learned two things from watching the long, circular conversations about social media content moderation. First, there are few, if any, easy answers; this is as true of internet infrastructure services as it is of major social media platforms. Second, problems don't resolve themselves or simply go away; tech companies respond to public outcry and to investigative journalism that makes them look bad. Trying to untangle complex policy questions in moments of crisis isn't working, but neither is continuing to insist that there are any neutral players.

There's no doubt that these ugly corners of the internet will persist, in some form, on some forum. There will always be places online where those determined to inflict harm and sustained abuse can regroup and build new outposts. Combating these harms clearly requires a whole-of-society approach, but internet infrastructure providers are as much a part of this social and online ecosystem as the rest of us. Honest, robust conversations about the real-world consequences of allowing hateful communities to grow online, and about the ways internet infrastructure companies enable them to do so, are the only path to an internet where diverse communities can safely create and thrive.

Jenna Ruddock is a fellow in the Technology and Social Change Program at the Shorenstein Center at Harvard Kennedy School. She is also a documentary photographer and producer based in Washington, D.C.

April Glaser is a Senior Internet Policy Fellow at the Shorenstein Center at Harvard Kennedy School. She previously worked as an investigative reporter at NBC News.