First appeared at Future Tense

In January 2016, the widow of a military contractor killed in an ISIS attack in Jordan sued Twitter in U.S. court. Her claim was simple: By allowing terrorists to radicalize potential recruits, spread propaganda, and fundraise, Twitter shared responsibility for her husband’s death.

There is no question that terrorist organizations, eager to spread their message, have seized on the potential of social media. From Twitter storms praising acts of violence to virtual coaches providing instructions and encouragement to potential recruits, social media provides endless opportunities for groups like ISIS to reach mass audiences quickly at a low cost. In fact, according to one estimate, about 90 percent of terrorist activity online happens on social media.

Does that mean social media bears some of the responsibility when these terrorists attack? Families of victims argue that it does. Since the first claim was filed last year, victims’ families have filed similar lawsuits against social media companies related to the Pulse nightclub shooting in Orlando, Florida; the Brussels airport bombing; the November 2015 Paris attacks; and various Hamas assaults. Recently, relatives of the victims of the 2015 San Bernardino, California, attack filed a similar suit—and the lawsuits show no sign of stopping. But they are starting to take a new legal tack.

The claims are based on federal laws that make it a crime to provide “material support” to terrorists. Terrorist organizations don’t operate in isolation. Their actions are made possible by those who provide property, services, money, weapons, facilities, or personnel. Recognizing this, Congress enacted a set of laws criminalizing the provision of assistance to terrorists. The laws specifically prohibit providing “communications equipment” to terrorists. This is a problem for social media companies. Their platforms offer individuals and organizations the ability to network and communicate, which fits squarely into the definition of material support. It is immaterial that the platforms—like Twitter, Facebook, and YouTube—are not themselves illegal. It doesn’t matter that the companies don’t support the terrorist organizations’ goals. It matters only that they help terrorists achieve their goals.

When Congress enacted these laws, it added a civil provision that would allow victims to sue those who violated the law for money damages. For grieving families in search of accountability and closure, it is practically impossible to sue ISIS, Hamas, or other international terrorist groups directly. On the other hand, it may be easier to locate and recover from those providing support, like the deep-pocketed U.S.-based social media companies.

Plaintiffs in these suits need to show that the platform knew that terrorists were using the service. This is actually relatively easy to prove. The government has publicly identified social media as crucial communication tools used by terrorists, particularly ISIS. Indeed, Twitter itself has acknowledged the issue.

Plaintiffs also need to prove a causal relationship between the material support and terrorist attack. In several of the current cases, plaintiffs have made a compelling connection. The families of the San Bernardino victims argued that the husband and wife shooters were radicalized by ISIS on Facebook, YouTube, and Twitter. If social media had prevented the abuse of their platforms, they say, the killers couldn’t have been radicalized and the attack wouldn’t have taken place. The families of victims of the Pulse nightclub shooting made the same allegation: ISIS used social media to radicalize the killer, Omar Mateen. Mateen watched ISIS jihadist videos online, and he searched for information on the San Bernardino shooters on Facebook.

Yet even where plaintiffs have a strong material support claim, social media companies have a powerful defense. A federal law, Section 230 of the Communications Decency Act, protects websites from civil liability when they engage in the traditional editorial functions of a publisher—such as deciding whether to publish, remove, or edit posts, tweets, or content generated by third parties. For example, if a user writes a defamatory post on Facebook, the person may face liability, but the company doesn’t. Unless the website is actually the party responsible for development of content, it’s off the hook.

This defense led to the failure of some of the early lawsuits. In those cases, plaintiffs argued liability wasn’t premised on the content posted on social media but on the consequences of allowing terrorists to use those services. The courts disagreed (one as recently as last month) and held that allowing ISIS to have accounts was akin to a publishing function, granting immunity under Section 230.

But the new suits—including the one filed in May by relatives of the victims of the San Bernardino shooting—are based on a fresh legal theory. These plaintiffs argue that social media companies are liable not for allowing terrorists to use their platforms but for profiting from that use. Many social media companies—including Facebook, Twitter, and YouTube—draw revenue from advertising. The ads target specific viewers based on the content of the pages they visit. Thus, the new lawsuits argue that social media companies have designed specific algorithms to finely target advertising based on users’ shared data. When it comes to terrorist posts, plaintiffs argue that social media companies don’t just publish content provided by ISIS—they actually profit from selling ads to those who might be most sympathetic to terrorist messages. In other words, plaintiffs argue that targeting ads is not a traditional publishing function that Section 230 immunizes. What’s more, in some cases, this revenue may be shared with terrorists in exchange for placing additional advertisements on a page or video, as under Google’s AdSense program. If the connection between a terrorist’s tweet and an attack intuitively seems too attenuated, what happens when social media profits from that content?

There’s merit to this argument. After all, advertisers don’t choose to have their ads displayed next to particular posts. The social media platform makes that decision and generates a composite page of user-generated content and advertising, based on bits of information known about the end user. And this is where plaintiffs have some strength: Section 230 shouldn’t immunize social media where the allegations are based on content solely authored by defendants. Social media defendants will likely argue that the algorithms facilitating ad placement are neutral tools and should be afforded immunity just like more traditional publishing functions. But this argument inflates Section 230 beyond what it was intended to do—when a defendant generates content, it should not apply.

For their part, advertisers haven’t been pleased when their ads appear next to ISIS videos. Anheuser-Busch, Procter & Gamble, Johnson & Johnson, and other U.S. advertisers have pulled advertising and expressed frustration that their ads have appeared alongside controversial content, including terrorism videos. In Europe, Audi, McDonald’s U.K., L’Oréal, and a slew of other retailers pulled their online ads from Google for the same reason.

Despite awareness of the issue, backlash from advertisers, and these lawsuits, social media companies have not adequately addressed the problem. If plaintiffs get around Section 230—their biggest hurdle—they will be in a position to hold social media companies significantly accountable. Regardless of the outcome, each time a suit is filed, Facebook, YouTube, and Twitter become more proactive about fighting terrorist abuse of their platforms. In the end, whether the plaintiffs win or lose, the legal exposure created for these companies will likely lead to a safer social media landscape for all of us.
