Donate
Shaping the Internet's Future 11 October 2019

Looking the GIFCT in the Mouth

By Andrew Sullivan, President & CEO

The recent meeting of the United Nations General Assembly (UNGA) was notable because of the attention it paid to the climate of the planet Earth. A different set of meetings around the UNGA was about another climate: the one of fear, anger, and violence swirling about the Internet.

It was only last March that a man (there is only one accused) shot dozens of people in a pair of attacks on Muslims at prayer. The shooter streamed the first 17 minutes of his attacks using Facebook Live. The use of an Internet service in this event, combined with general concern about how Internet services are being used for terrorism and violent extremism, resulted in the Christchurch Call.

There is some reason to be optimistic about the Christchurch Call. Rarely have governments worked so decisively or quickly, together, to take on a global social issue. At a side meeting in New York at UNGA, some 30-odd additional countries signed the Call; more than 50 countries have signed on. New Zealand has led this while insisting that governments cannot tackle the issue alone, and has tried to involve everyone – through an Advisory Network – in decisions that are bound to affect us all.

There may nevertheless be reasons to be more cautious than optimistic. The same side meeting announced the “overhaul” of the Global Internet Forum to Counter Terrorism (GIFCT). GIFCT (usually pronounced “gif-cee-tee”, with a hard g) was established by some large social media companies in 2017 in order to cooperate in efforts to address “terrorist content” found on the companies’ services. The definition of “terrorist content” has shifted over time, and is now referred to as “terrorist and violent extremist content” in order to make clear that not all of the targets are members of identifiable terrorist organizations. Regardless of these details, GIFCT sounds like a marvelous idea, and it has the potential to be a worthwhile effort. There is, however, a great deal of work that needs to be done to ensure that GIFCT does not simply become a mechanism for some large companies to lock in their existing advantages.

The first issue with GIFCT is that, while it claims to be an Internet forum, it actually isn’t. It’s a forum for large social media platform companies to share their content filtering techniques in order to find certain kinds of content and remove it from the participating platforms. Social media platforms are not the Internet, and the architectural features that they generally share are not shared by every other service on the Internet. Of course, nobody except terrorists wants terrorist content on social media, so it is probably a good thing for social media companies to collaborate in doing something about that content. Yet most social media services are designed with a central authority that controls the flow of content, and many other services on the Internet are designed to resist such centralization. The techniques that work for one kind of service will not work for all.

In addition, it is not merely that the mechanisms that fit most social media services are a poor fit for other kinds of Internet activity. It’s also the case that the operators of large social media companies have a real interest in blurring the distinction between their services and the Internet. No matter how well-meaning, the companies in question have an interest in supporting regulation that makes their platforms’ architecture a permanent feature of the whole Internet. If governments start adopting regulation of the Internet that favors the GIFCT approach, then the design of social media platforms will become a permanent feature of what we can do with the Internet. That shuts down future innovation, including innovations that might be more resistant to viral violent content in the first place. This is part of what makes the GIFCT restructuring announcement in New York so worrisome: it was announced approvingly by the governments who sponsored the Christchurch Call, as part of their ongoing program.

It is frankly weird that governments appear to be so comfortable with GIFCT, since the restructured organization has settled on a governance model that puts participating companies completely in charge. The four founding members of the GIFCT get permanent board seats, and other participating tech companies may also be on the board that makes all the decisions. Neither civil society, nor the technical community, nor governments, nor platform users, nor anyone else ever gets a say in how the GIFCT will work, what content will be covered, and so on. Such participation is relegated to an advisory committee with membership from government and the wider society but without any obvious teeth. That committee is also supposed to be kept small enough that the real diversity of opinion it can represent is questionable, especially since it is entirely unclear how the committee is supposed to be populated. What is clear is that, in the end, only tech companies will have any ability to influence any decisions of GIFCT. It is correct for President Macron to keep calling this a “new multilateralism”: multilateralism always depends on only certain stakeholders being involved. This new multilateralism basically outsources the solution to the problem of undesirable content to a consortium of industry players, which will inevitably be dominated by the largest companies in the industry because of the resources they can devote to the effort.

Setting aside the miserable governance story, the very approach of GIFCT rests on the principle that stamping out this undesirable content is both possible and efficacious. In fact, there is some reason to suppose that trying to stamp out unwanted messages backfires – that it further radicalizes people already radicalized. So, paradoxically, by weakening protected communications and filtering content, global leaders risk bolstering terrorists rather than deterring them. And of course, even the best filters are imperfect: they miss content that should be filtered, and they filter out content that should not have been filtered. They also have side effects: those working to protect journalists or prosecute war crimes are gradually finding that the evidence they used to rely on is disappearing due to content filters. So, the fundamental activity of GIFCT is at best a half measure on the way to a healthy online environment, and it might actually make things worse. Predictably enough, these problems are all supposed to be solved by the magic dust of Artificial Intelligence, but nobody can say how that will work.

As if that were not enough, the GIFCT has always been controversial in part because it looks like a solution to a problem without actually addressing the roots of that problem. Mere tech fixes to social problems almost never work, and the linked issues of terrorism and violent extremism are unquestionably social problems. The tech fix offered is to try to suppress undesirable content. The social problem appears to be rooted in the ways that current social media platforms both entice and reward users. It is at least possible that terrorist and violent extremist content “goes viral” because of a design feature of platforms. Perhaps their advertising-based operation, which requires user attention, makes them especially good at amplifying horrific content. Yet tackling that issue might have negative effects for the business models of the companies involved in GIFCT – the ones who will appoint all the voting board members of GIFCT. Without a countervailing voice in its governance, there will be no way for GIFCT to take on this issue credibly. There isn’t even a promise that the companies involved will abide by the GIFCT decisions – just that they’ll contribute to it.

Defenders of the new GIFCT organization design would point to the number of (new) working groups, which might be able to make recommendations to do something other than just filter content. But ironically, even here the GIFCT refresh might turn out to hinder as much as it helps. Since GIFCT is being turned into an institution separate from the social media platform companies, access to data that might have been possible while working within a given social media company will now have to be handled like all other data requests from researchers. Due to increased (and desirable!) efforts around privacy by social media companies, such data requests are today harder to satisfy than they used to be. So, even though the new institution will be dominated, if not controlled, by the platform companies, it appears likely to have the data-access disadvantages of being a separate entity.

There is still time to prevent these pitfalls. To do so requires returning to the habits that brought us so many of the benefits of the Internet. Instead of the “new multilateralism,” which has brought us this institution of dubious legitimacy and questionable effectiveness, it is necessary to ensure wide and meaningful consultation with the rest of the Internet. A problem where everyone has a stake is properly addressed through collaboration. This means, in practice, the now-maligned but still-serviceable approach of a multistakeholder institution. To achieve this, the GIFCT advisory panel can be made more useful through meaningful and binding commitments to organizational transparency: make the board of the GIFCT work in public, let us all understand what it is doing, and use the advisory panel to supervise that. This will also probably mean that “transparency reports” that are entirely defined, created, published, and audited by the same organization will need to end, in favor of something at least as robust as modern accounting methods. At the same time, governments need to acknowledge, publicly, that GIFCT, even if it turns out to work in addressing an issue on social media platforms, will never be a perfect solution and will almost certainly be a poor fit for other kinds of technology. And everyone involved needs to state clearly what “works” means.

We need to be alert not only to what we dislike on the Internet. For example, we can certainly prevent the sharing of terrorist and violent extremist content by preventing the sharing of everything. That would be a pyrrhic victory indeed.


Disclaimer: Viewpoints expressed in this post are those of the author and may or may not reflect official Internet Society positions.
