The New Gatekeepers: How proprietary algorithms increasingly determine the news we see

14 March, 2021

 

Andy Lee Roth

 

Algorithms are the new gatekeepers: artificial intelligence programs controlled by Big Tech companies such as Google, Facebook, and Twitter, corporations with no commitment to ethical journalism. More and more, proprietary algorithms rather than newsroom editors determine which news stories circulate widely, raising serious concerns about transparency and accountability in determinations of newsworthiness.

The rise of what is best understood as algorithmic censorship makes the old concept of “gatekeeping” newly relevant, in ways that directly address previous critiques of how we get our news. To illustrate the power of algorithms to control the flow of information, consider what happened to the digital record of an academic conference I attended last year.


YouTube and the Critical Media Literacy Conference of the Americas

In October 2020 I participated in an academic conference focused on media literacy education. The event brought together the field’s leading figures for two days of scholarly panels and discussions. Many of the participants, including those in a session I moderated, raised questions about the impact of Big Tech companies such as Google and Facebook on the future of journalism and criticized how corporate news media—including not only Fox News and MSNBC but also the New York Times and Washington Post—often impose narrow definitions of newsworthiness. In other words, the conference was like many others I’ve attended, except that due to the pandemic we met virtually via Zoom.

After the conference concluded, its organizers uploaded video recordings of the keynote session and more than twenty additional hours of conference presentations to a YouTube channel created to make those sessions available to a wider public.

Project Censored’s State of the Free Press | 2021 surveys “the desolate landscape of corporate news reporting, where powerful forces interlock to restrict the free flow of information…”

Several weeks later, YouTube removed all of the conference videos, without any notification or explanation to the conference organizers. As MintPress News reported, an academic conference at which many participants raised warnings about “the dangers of media censorship” had, ironically, “been censored by YouTube.” Despite the organizers’ subsequent formal appeals, YouTube refused to restore any of the deleted content; it even declined to acknowledge that the content had ever been posted in the first place.

Through my work with Project Censored, a nonprofit news watchdog with a global reputation for opposing news censorship and championing press freedoms, I was familiar with online content filtering. Thinking about YouTube’s power to delete the public video record of an academic conference, without explanation, initially reminded me of the “memory holes” in George Orwell’s Nineteen Eighty-Four. In Orwell’s dystopian novel, memory holes efficiently whisk away for destruction any evidence that might conflict with or undermine the government’s interests, as determined by the Ministry of Truth.

But I also found myself recalling a theory of news production and distribution that enjoyed popularity in the 1950s but has since fallen from favor. I’ve come to understand YouTube’s removal of the conference videos as a new form of gatekeeping, the concept developed by David Manning White and Walter Gieber in the 1950s to explain how newspaper editors determined which stories to publish as news.

The original gatekeeping model

White studied the decisions of a wire editor at a small midwestern newspaper, examining the reasons that the editor, whom White called “Mr. Gates,” gave for selecting or rejecting specific stories for publication. Mr. Gates rejected some stories for practical reasons: “too vague,” “dull writing,” or “too late—no space.” But in 18 of the 423 decisions that White examined, Mr. Gates rejected stories for political reasons, dismissing them as “pure propaganda” or “too red,” for example. White concluded his 1950 article by emphasizing “how highly subjective, how based on the gatekeeper’s own set of experiences, attitudes and expectations the communication of ‘news’ really is.”

In 1956, Walter Gieber conducted a similar study, this time examining the decisions of 16 different wire editors. Gieber’s findings refuted White’s conclusion that gatekeeping was subjective: he found that, independently of one another, the editors made much the same decisions. Gatekeeping was real, but the editors treated story selection as a rote task, and they were most concerned with what Gieber described as “goals of production” and “bureaucratic routine”—not, in other words, with advancing any particular political agenda. More recent studies have reinforced and refined Gieber’s conclusion that professional assessments of “newsworthiness,” not political partisanship, guide news workers’ decisions about which stories to cover.

The gatekeeping model fell out of favor as newer theoretical models—including “framing” and “agenda setting”—seemed to explain more of the news production process. In an influential 1989 article, sociologist Michael Schudson described gatekeeping as “a handy, if not altogether appropriate, metaphor.” The gatekeeping model was problematic, he wrote, because “it leaves ‘information’ sociologically untouched, a pristine material that comes to the gate already prepared.” In that flawed view “news” is preformed, and the gatekeeper “simply decides which pieces of prefabricated news will be allowed through the gate.” Although White and others had noted that “gatekeeping” occurs at multiple stages in the news production process, Schudson’s critique stuck.

With the advent of the Internet, some scholars attempted to revive the gatekeeping model. New studies showed how audiences increasingly act as gatekeepers, deciding which news items to pass along via their own social media accounts. But, overall, gatekeeping seemed even more dated: “The Internet defies the whole notion of a ‘gate’ and challenges the idea that journalists (or anyone else) can or should limit what passes through it,” Jane B. Singer wrote in 2006.

Algorithmic news filtering

Fast forward to the present, and Singer’s optimistic assessment appears more dated than gatekeeping theory itself. The Internet, and social media in particular, now encompass numerous limiting “gates,” fewer and fewer of which are operated by news organizations or by journalists themselves.

Incidents such as YouTube’s wholesale removal of the media literacy conference videos are not isolated; in fact, they are increasingly common as privately owned companies and their media platforms wield ever more power to regulate who speaks online and what types of speech are permissible.

Independent news outlets have documented how Twitter, Facebook, and others have suspended Venezuelan, Iranian, and Syrian accounts and censored content that conflicts with U.S. foreign policy; how the Google News aggregator filters out pro-LGBTQ stories while amplifying homophobic and transphobic voices; and how changes made by Facebook to its news feed have throttled web traffic to progressive news outlets.

Some Big Tech companies’ decisions have made headline news. After the 2020 presidential election, for example, Google, Facebook, YouTube, Twitter, and Instagram restricted the online communications of Donald Trump and his supporters; after the January 6 assault on the Capitol, Google, Apple, and Amazon suspended Parler, the social media platform favored by many of Trump’s supporters.

But decisions to deplatform Donald Trump and suspend Parler differ in two fundamental ways from most other cases of online content regulation by Big Tech companies. First, the instances involving Trump and Parler received widespread news coverage; those decisions became public issues and were debated as such. Second, as that news coverage tacitly conveyed, the decisions to restrict Trump’s online voice and Parler’s networked reach were made by leaders at Google, Facebook, Apple, and Amazon. They were human decisions.


"Thought Police" by Ali Banisadr, oil on linen, 82 x 120 inches (2019). Courtesy of the artist.
“Thought Police” by Ali Banisadr, oil on linen, 82 x 120 inches (2019). Courtesy of the artist.

This last point was not a focus of the resulting news coverage, but it matters a great deal for understanding the stakes in other cases, where the decisions to filter content—in effect, to silence voices and throttle conversations—were made by algorithms rather than humans.

Increasingly, the news we encounter is the product of two forces: the daily routines and professional judgments of journalists, editors, and other news professionals, and the assessments of relevance and appropriateness made by artificial intelligence programs developed and controlled by private, for-profit corporations that do not see themselves as media companies, much less as ones engaged in journalism. When I search for news about “rabbits gone wild” or the Equality Act on Google News, an algorithm employs a variety of confidential criteria to determine which news stories and news sources appear in response to my query. Google News does not produce any news stories of its own, but, like Facebook and other platforms that function as news aggregators, it plays an enormous—and poorly understood—role in determining what news stories many Americans see.
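To see the structure of the problem, consider a deliberately toy sketch, in Python, of how score-based gatekeeping works in principle. Everything here is hypothetical: the signals, weights, and visibility threshold (and the names Story, gate, WEIGHTS, and VISIBILITY_THRESHOLD) are invented for illustration, because the criteria actually used by Google News or Facebook are exactly what those companies keep secret.

```python
# A toy, hypothetical model of score-based news gatekeeping.
# None of these signals or weights come from Google or Facebook;
# their real ranking criteria are proprietary and unpublished.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    source: str
    engagement: float    # assumed click/share signal, scaled 0 to 1
    source_score: float  # assumed private "authority" rating, scaled 0 to 1

# Hidden, unauditable parameters: the crux of the transparency problem.
WEIGHTS = {"engagement": 0.7, "source_score": 0.3}
VISIBILITY_THRESHOLD = 0.5

def gate(stories):
    """Return only the stories the model lets through, ranked by its
    private score; everything below the threshold is silently buried."""
    def score(s):
        return (WEIGHTS["engagement"] * s.engagement
                + WEIGHTS["source_score"] * s.source_score)
    visible = [s for s in stories if score(s) >= VISIBILITY_THRESHOLD]
    return sorted(visible, key=score, reverse=True)

feed = [
    Story("City budget investigation", "local-nonprofit.org", 0.2, 0.4),
    Story("Celebrity rabbit goes viral", "bigportal.com", 0.9, 0.8),
]
for story in gate(feed):
    print(story.headline)  # only the high-scoring story survives the gate
```

In this toy model the investigative story scores 0.26 and falls below the gate, and neither the reader nor the local outlet is ever told; that silence is what distinguishes algorithmic gatekeeping from Mr. Gates spiking wire copy at his desk.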

The new algorithmic gatekeeping

Recall that Schudson criticized the gatekeeping model for “leaving ‘information’ sociologically untouched.” Because news was constructed, not prefabricated, the gatekeeping model failed to address the complexity of the news production process, Schudson contended. That critique, however, no longer applies to the increasingly common circumstances in which corporations such as Google and Facebook, which do not practice journalism themselves, determine what news stories members of the public are most likely to see—and what news topics or news outlets those audiences are unlikely to ever come across, unless they actively seek them out.

In these cases, Google, Facebook, and other social media companies have no hand—or interest—in the production of the stories that their algorithms either promote or bury. Without regard for the basic principles of ethical journalism as recommended by the Society of Professional Journalists—to seek the truth and report it; to minimize harm; to act independently; and to be accountable and transparent—the new gatekeepers claim content neutrality while promoting news stories that often fail glaringly to fulfill even one of the SPJ’s ethical guidelines.

This problem is compounded by the reality that it is impossible for a contemporary version of David Manning White or Walter Gieber to study gatekeeping processes at Google or Facebook: the algorithms engaged in the new gatekeeping are shielded from public scrutiny as proprietary intellectual property. As April Anderson and I have previously reported, a class action suit filed against YouTube in August 2019 by LGBT content creators could “force Google to make its powerful algorithms available for scrutiny.” Google and YouTube have sought to dismiss the case on the grounds that their distribution algorithms are “not content-based.”


"Trust in the Future" by Ali Banisadr, oil on linen, 82 x 120 inches (2017). Courtesy of the artist.
“Trust in the Future” by Ali Banisadr, oil on linen, 82 x 120 inches (2017). Courtesy of the artist.

Algorithms, human agency, and inequalities

To be accountable and transparent is one of the guiding principles for ethical journalism advocated by the Society of Professional Journalists. News gatekeeping conducted by proprietary algorithms runs directly counter to that guideline, posing grave threats to the integrity of journalism and to the prospects of a well-informed public.

Most often, when Google, Facebook, and other Big Tech companies are considered in relation to journalism and the conditions necessary for it to fulfill its fundamental role as the “Fourth Estate,” holding the powerful accountable and informing the public, the focus falls on how Big Tech has appropriated the advertising revenues on which most legacy media outlets depend to stay in business. The rise of algorithmic news gatekeeping should be just as great a concern.

Technologies driven by artificial intelligence reduce the role of human agency in decision making. Advocates of AI often tout this as a selling point: algorithms replace human subjectivity and fallibility with “objective” determinations.

Critical studies of algorithmic bias—including Safiya Umoja Noble’s Algorithms of Oppression, Virginia Eubanks’s Automating Inequality, and Cathy O’Neil’s Weapons of Math Destruction—advise us to be wary of how easy it is to build longstanding human prejudices into “viewpoint neutral” algorithms that, in turn, add new layers to deeply sedimented structural inequalities.

With the new algorithmic gatekeeping of news developing more quickly than public understanding of it, journalists and those concerned with the role of journalism in democracy face multiple threats. We must exert all possible pressure to force corporations such as Google and Facebook to make their algorithms available for third-party scrutiny; at the same time, we must do more to educate the public about this new and subtle wrinkle in the news production process.

Journalists are well positioned to tell this story from first-hand experience, and governmental regulation or pending lawsuits may eventually force Big Tech companies to make their algorithms available for third-party scrutiny. But the stakes are too high to wait on the sidelines for others to solve the problem. So what can we do now in response to algorithmic gatekeeping? I recommend four proactive responses, presented in increasing order of engagement:

·      Avoid using “Google” as a verb, a common habit that tacitly equates a generic online activity with the brand name of a corporation that has done as much as any to multiply what Shoshana Zuboff, author of The Age of Surveillance Capitalism, calls epistemic inequality: a form of power based on the difference between what we can know and what can be known about us.

·      Remember that search engines and social media feeds are not neutral information sources. The algorithms that drive them often serve to reproduce existing inequalities in subtle but powerful ways. Investigate for yourself: pick a topic that interests you and compare the results Google and DuckDuckGo return for the same query (a simple way to script the comparison appears after this list).

·      Connect directly to news organizations that demonstrate firm commitments to ethical journalism, rather than relying on your social media feed for news. Go to the outlet’s website, sign up for its email list or RSS feed (see the second sketch after this list), and subscribe to the outlet’s print edition if there is one. The direct connection removes the social media platform or search engine as an unnecessary and potentially biased intermediary.

·      Call out algorithmic bias when you encounter it. Call it out directly to the entity responsible for it; call it out publicly by letting others know about it.
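To make the second suggestion concrete, here is a minimal Python sketch of the engine-to-engine comparison. It deliberately avoids scraping, which is brittle and restricted by both engines’ terms of service: you paste in the result URLs you actually see for the same query on Google and DuckDuckGo, and the script reports how much the two “gates” agree. The example URLs are placeholders.

```python
# A minimal sketch for comparing search results across engines.
# Paste in the top result URLs you see for the SAME query on Google
# and on DuckDuckGo; collecting them by hand avoids brittle scraping.

from urllib.parse import urlparse

google_results = [
    # hypothetical placeholders; replace with what you actually see
    "https://example-outlet-a.com/story",
    "https://example-outlet-b.com/story",
]
duckduckgo_results = [
    "https://example-outlet-b.com/story",
    "https://example-outlet-c.com/story",
]

def domains(urls):
    """Reduce result URLs to their host names for a fair comparison."""
    return {urlparse(u).netloc.removeprefix("www.") for u in urls}

g, d = domains(google_results), domains(duckduckgo_results)
overlap = g & d
jaccard = len(overlap) / len(g | d) if g | d else 0.0

print(f"Sources shown by both engines: {sorted(overlap)}")
print(f"Only Google surfaced: {sorted(g - d)}")
print(f"Only DuckDuckGo surfaced: {sorted(d - g)}")
print(f"Overlap (Jaccard): {jaccard:.0%}")
```

A low overlap for the same query is a vivid, first-hand demonstration that a gate is operating somewhere between the world’s news and your screen.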
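And for the third suggestion, a sketch of what the direct connection looks like in practice: pulling headlines straight from an outlet’s RSS feed using the widely available third-party feedparser library, with no platform algorithm deciding what you see. The feed URL below is a placeholder; substitute the one published by an outlet you trust.

```python
# Reading an outlet's RSS feed directly, with no algorithmic intermediary.
# Requires the third-party feedparser library: pip install feedparser

import feedparser

FEED_URL = "https://example-news-outlet.org/feed"  # placeholder URL

feed = feedparser.parse(FEED_URL)
print(f"Latest from {feed.feed.get('title', FEED_URL)}:")
for entry in feed.entries[:10]:
    # Every item the outlet published, in the order it published them:
    # no engagement scoring, no hidden visibility threshold.
    print(f"- {entry.title}\n  {entry.link}")
```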

Fortunately, our human brains can employ new information in ways that algorithms cannot. Understanding the influential roles of algorithms on our lives—including how they operate as gatekeepers of the news stories we are most likely to see—allows us to take greater control of our individual online experiences. Based on greater individual awareness and control, we can begin to organize collectively to expose and oppose algorithmic bias and censorship.

 

Andy Lee Roth, PhD, is associate director of Project Censored where he coordinates the Project’s Campus Affiliates Program, a news media research network of several hundred students and faculty at two dozen colleges and universities across North America. He co-edits the Project’s State of the Free Press yearbook series and his work has been published in a number of outlets, including Index on Censorship, In These Times, YES! Magazine, Media, Culture & Society, and the International Journal of Press/Politics.


3 comments

1. Great article. One related issue that comes to mind is the general demise of journalism, with fewer subscribers to magazines like Time and newspapers like the Los Angeles Times. Today, many Americans no longer find value in paying for news and, by extension, for journalists to do their jobs. How does journalism survive in an era when news content is expected to be free? These platforms shape news through their algorithms, but they have also created this new era in which users expect the news for free.
