The New Gatekeepers: How proprietary algorithms increasingly determine the news we see

14 March 2021

“The Gatekeepers” by Ali Banisadr (b. Tehran, 1976; lives and works in New York), oil on linen, 72 x 108 inches (2010). Courtesy of the artist.

Andy Lee Roth

  

Algorithms, artificial intelligence programs controlled by Big Tech companies including Google, Facebook, and Twitter—corporations with no commitment to ethical journalism—are the new gatekeepers. More and more, proprietary algorithms rather than newsroom editors determine which news stories circulate widely, raising serious concerns about transparency and accountability in determinations of newsworthiness.

The rise of what is best understood as algorithmic censorship makes newly relevant the old concept of “gatekeeping” in ways that directly address previous critiques of how we get our news. To illustrate the power of algorithms to control the flow of information, consider the example of what happened to the digital record of an academic conference that I attended last year.


YouTube and the Critical Media Literacy Conference of the Americas

In October 2020 I participated in an academic conference focused on media literacy education. The event brought together the field’s leading figures for two days of scholarly panels and discussions. Many of the participants, including those in a session I moderated, raised questions about the impact of Big Tech companies such as Google and Facebook on the future of journalism and criticized how corporate news media—including not only Fox News and MSNBC but also the New York Times and Washington Post—often impose narrow definitions of newsworthiness. In other words, the conference was like many others I’ve attended, except that due to the pandemic we met virtually via Zoom.

After the conference concluded, its organizers uploaded video recordings of the keynote session and more than twenty additional hours of conference presentations to a YouTube channel created to make those sessions available to a wider public.



Project Censored’s State of the Free Press | 2021 surveys “the desolate landscape of corporate news reporting, where powerful forces interlock to restrict the free flow of information…”

Several weeks later, YouTube removed all of the conference videos without any notification or explanation to the conference organizers. As MintPress News reported, an academic conference at which many participants raised warnings about “the dangers of media censorship” had, ironically, “been censored by YouTube.” Despite the organizers’ subsequent formal appeals, YouTube refused to restore any of the deleted content and declined even to acknowledge that it had ever been posted in the first place.

Through my work with Project Censored, a nonprofit news watchdog with a global reputation for opposing news censorship and championing press freedoms, I was familiar with online content filtering. Thinking about YouTube’s power to delete the public video record of an academic conference, without explanation, initially reminded me of the “memory holes” in George Orwell’s Nineteen Eighty-Four. In Orwell’s dystopian novel, memory holes efficiently whisk away for destruction any evidence that might conflict with or undermine the government’s interests, as determined by the Ministry of Truth.

But I also found myself recalling a theory of news production and distribution that enjoyed popularity in the 1950s but has since fallen from favor. I’ve come to understand YouTube’s removal of the conference videos as (a new form of) gatekeeping—the concept developed by David Manning White and Walter Gieber in the 1950s to explain how newspaper editors determined which stories to publish as news.

The original gatekeeping model

White studied the decisions of a wire editor at a small midwestern newspaper, examining the reasons that the editor, whom White called “Mr. Gates,” gave for selecting or rejecting specific stories for publication. Mr. Gates rejected some stories for practical reasons—“too vague,” “dull writing,” or “too late—no space.” But in 18 of the 423 decisions that White examined, Mr. Gates rejected stories for political reasons, dismissing them as “pure propaganda” or “too red,” for example. White concluded his 1950 article by emphasizing “how highly subjective, how based on the gatekeeper’s own set of experiences, attitudes and expectations the communication of ‘news’ really is.”

In 1956, Walter Gieber conducted a similar study, this time examining the decisions of 16 different wire editors. Gieber’s findings refuted White’s characterization of gatekeeping as subjective. Instead, Gieber found that, independently of one another, the editors made much the same decisions. Gatekeeping was real, but the editors treated story selection as a rote task, and they were most concerned with what Gieber described as “goals of production” and “bureaucratic routine”—not, in other words, with advancing any particular political agenda. More recent studies have reinforced and refined Gieber’s conclusion that professional assessments of “newsworthiness,” not political partisanship, guide news workers’ decisions about which stories to cover.

The gatekeeping model fell out of favor as newer theoretical models—including “framing” and “agenda setting”—seemed to explain more of the news production process. In an influential 1989 article, sociologist Michael Schudson described gatekeeping as “a handy, if not altogether appropriate, metaphor.” The gatekeeping model was problematic, he wrote, because “it leaves ‘information’ sociologically untouched, a pristine material that comes to the gate already prepared.” In that flawed view, “news” is preformed, and the gatekeeper “simply decides which pieces of prefabricated news will be allowed through the gate.” Although White and others had noted that “gatekeeping” occurs at multiple stages in the news production process, Schudson’s critique stuck.

With the advent of the Internet, some scholars attempted to revive the gatekeeping model. New studies showed how audiences increasingly act as gatekeepers, deciding which news items to pass along via their own social media accounts. But, overall, gatekeeping seemed even more dated: “The Internet defies the whole notion of a ‘gate’ and challenges the idea that journalists (or anyone else) can or should limit what passes through it,” Jane B. Singer wrote in 2006.

Algorithmic news filtering

Fast forward to the present, and Singer’s optimistic assessment appears more dated than gatekeeping theory itself. The Internet, and social media in particular, now encompass numerous limiting “gates,” fewer and fewer of which are operated by news organizations or journalists themselves.

Incidents such as YouTube’s wholesale removal of the media literacy conference videos are not isolated; in fact, they are increasingly common as privately owned companies and their media platforms wield ever more power to regulate who speaks online and the types of speech that are permissible.

Independent news outlets have documented how Twitter, Facebook, and others have suspended Venezuelan, Iranian, and Syrian accounts and censored content that conflicts with U.S. foreign policy; how the Google News aggregator filters out pro-LGBTQ stories while amplifying homophobic and transphobic voices; and how changes made by Facebook to its news feed have throttled web traffic to progressive news outlets.

Some Big Tech companies’ decisions have made headline news. After the 2020 presidential election, for example, Google, Facebook, YouTube, Twitter, and Instagram restricted the online communications of Donald Trump and his supporters; after the January 6 assault on the Capitol, Google, Apple, and Amazon suspended Parler, the social media platform favored by many of Trump’s supporters.

But the decisions to deplatform Donald Trump and suspend Parler differ in two fundamental ways from most other cases of online content regulation by Big Tech companies. First, the instances involving Trump and Parler received widespread news coverage; those decisions became public issues and were debated as such. Second, as that news coverage tacitly conveyed, the decisions to restrict Trump’s online voice and Parler’s networked reach were made by leaders at Google, Facebook, Apple, and Amazon. They were human decisions.

“Thought Police” by Ali Banisadr, oil on linen, 82 x 120 inches (2019). Courtesy of the artist.

This last point was not a focus of the resulting news coverage, but it matters a great deal for understanding the stakes in other cases, where the decisions to filter content—in effect, to silence voices and throttle conversations—were made by algorithms rather than humans.

Increasingly, the news we encounter is the product of two sets of judgments: the daily routines and professional judgments of journalists, editors, and other news professionals, and the assessments of relevance and appropriateness made by artificial intelligence programs developed and controlled by private, for-profit corporations that do not see themselves as media companies, much less as organizations engaged in journalism. When I search for news about “rabbits gone wild” or the Equality Act on Google News, an algorithm employs a variety of confidential criteria to determine which news stories and news sources appear in response to my query. Google News does not produce any news stories of its own but, like Facebook and other platforms that function as news aggregators, it plays an enormous—and poorly understood—role in determining which news stories many Americans see.
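To make the point concrete, consider a deliberately simplified sketch of how an aggregator-style ranking might work. Everything here is hypothetical: the signal names, the weights, and the scoring formula are invented for illustration, because the criteria actually used by Google News, Facebook, and other aggregators are confidential—which is precisely the problem.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the real signals and weights used by
# commercial news aggregators are proprietary and undisclosed.

@dataclass
class Story:
    title: str
    source_authority: float  # 0-1, assigned by the platform, invisible to readers
    engagement: float        # 0-1, clicks and shares, favoring already-popular outlets
    recency_hours: float     # hours since publication

# Weights set by the platform; changing them silently changes what counts as "news."
WEIGHTS = {"authority": 0.5, "engagement": 0.4, "freshness": 0.1}

def score(story: Story) -> float:
    freshness = max(0.0, 1.0 - story.recency_hours / 48.0)
    return (WEIGHTS["authority"] * story.source_authority
            + WEIGHTS["engagement"] * story.engagement
            + WEIGHTS["freshness"] * freshness)

def rank(stories: list[Story], top_n: int = 2) -> list[Story]:
    # Stories below the cutoff are simply never shown: the algorithmic "gate."
    return sorted(stories, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    candidates = [
        Story("Independent outlet investigation", 0.2, 0.10, 3),
        Story("Wire service rewrite", 0.9, 0.70, 20),
        Story("Viral celebrity item", 0.5, 0.95, 6),
    ]
    for s in rank(candidates):
        print(f"{score(s):.2f}  {s.title}")
```

With these invented weights, the independent investigation never clears the gate—and no reader ever learns why, or even that a gate was applied.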

The new algorithmic gatekeeping

Recall that Schudson criticized the gatekeeping model for “leaving ‘information’ sociologically untouched.” Because news was constructed, not prefabricated, the gatekeeping model failed to address the complexity of the news production process, Schudson contended. That critique, however, no longer applies to the increasingly common circumstances in which corporations such as Google and Facebook, which do not practice journalism themselves, determine which news stories members of the public are most likely to see—and which news topics or news outlets those audiences are unlikely to ever come across unless they actively seek them out.

In these cases, Google, Facebook, and other social media companies have no hand—or interest—in the production of the stories that their algorithms either promote or bury. Without regard for the basic principles of ethical journalism as recommended by the Society of Professional Journalists—to seek the truth and report it; to minimize harm; to act independently; and to be accountable and transparent—the new gatekeepers claim content neutrality while promoting news stories that often fail glaringly to fulfill even one of the SPJ’s ethical guidelines.

This problem is compounded by the reality that it is impossible for a contemporary version of David Manning White or Walter Gieber to study gatekeeping processes at Google or Facebook: the algorithms engaged in the new gatekeeping are protected from public scrutiny as proprietary intellectual property. As April Anderson and I have previously reported, a class action suit filed against YouTube in August 2019 by LGBT content creators could “force Google to make its powerful algorithms available for scrutiny.” Google and YouTube have sought to dismiss the case on the grounds that their distribution algorithms are “not content-based.”

Algorithms, human agency, and inequalities

“Trust in the Future” by Ali Banisadr, oil on linen, 82 x 120 inches (2017). Courtesy of the artist.

To be accountable and transparent is one of the guiding principles of ethical journalism, as advocated by the Society of Professional Journalists. News gatekeeping conducted by proprietary algorithms cuts directly against this guideline, posing grave threats to the integrity of journalism and the likelihood of a well-informed public.

Most often, when Google, Facebook, and other Big Tech companies are considered in relation to journalism and the conditions necessary for it to fulfill its fundamental role as the “Fourth Estate”—holding the powerful accountable and informing the public—the focus is on how Big Tech has thoroughly appropriated the advertising revenues on which most legacy media outlets depend to stay in business. The rise of algorithmic news gatekeeping should be just as great a concern.

Technologies driven by artificial intelligence reduce the role of human agency in decision making. This is often touted by advocates of AI as a selling point: algorithms replace human subjectivity and fallibility with “objective” determinations.

Critical studies of algorithmic bias—including Safiya Umoja Noble’s Algorithms of Oppression, Virginia Eubanks’s Automating Inequality, and Cathy O’Neil’s Weapons of Math Destruction—advise us to be wary of how easily longstanding human prejudices can be built into “viewpoint neutral” algorithms that, in turn, add new layers to deeply sedimented structural inequalities.
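A toy example, again invented purely for illustration, shows how a rule that sounds viewpoint neutral can compound existing inequality: if a feed simply shows whichever source already gets the most engagement, the gap between established and marginal outlets widens every round.

```python
# Hypothetical sketch of an engagement-only feedback loop; the numbers are invented.
# The "neutral" rule: always show whichever source has received the most clicks so far.

clicks = {"established_outlet": 1000, "marginal_outlet": 10}

def most_clicked(counts: dict[str, int]) -> str:
    return max(counts, key=counts.get)

for round_number in range(1, 6):
    shown = most_clicked(clicks)
    # Whatever gets shown attracts still more clicks, so the gap compounds.
    clicks[shown] = int(clicks[shown] * 1.5)
    print(f"round {round_number}: showing {shown} -> {clicks}")
```

No one programmed a preference for the established outlet, yet under this rule the marginal outlet can never surface.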

With the new algorithmic gatekeeping of news developing more quickly than public understanding of it, journalists and those concerned with the role of journalism in democracy face multiple threats. We must exert all possible pressure to force corporations such as Google and Facebook to make their algorithms available for third-party scrutiny; at the same time, we must do more to educate the public about this new and subtle wrinkle in the news production process.

Journalists are well positioned to tell this story from first-hand experience, and governmental regulation or pending lawsuits may eventually force Big Tech companies to make their algorithms available for third-party scrutiny. But the stakes are too high to wait on the sidelines for others to solve the problem. So what can we do now in response to algorithmic gatekeeping? I recommend four proactive responses, presented in increasing order of engagement:

· Avoid using “Google” as a verb, a common habit that tacitly identifies a generic online activity with the brand name of a corporation that has done as much as any to multiply epistemic inequality—the concept developed by Shoshana Zuboff, author of The Age of Surveillance Capitalism, to describe a form of power based on the difference between what we can know and what can be known about us.

· Remember that search engines and social media feeds are not neutral information sources. The algorithms that drive them often serve to reproduce existing inequalities in subtle but powerful ways. Investigate for yourself: select a topic of interest to you and compare the search results from Google and DuckDuckGo.

· Connect directly to news organizations that display firm commitments to ethical journalism, rather than relying on your social media feed for news. Go to the outlet’s website, sign up for its email list or RSS feed, or subscribe to the outlet’s print edition if there is one. The direct connection removes the social media platform, or search engine, as an unnecessary and potentially biased intermediary. (A brief sketch of reading an RSS feed directly appears after this list.)

· Call out algorithmic bias when you encounter it. Call it out directly to the entity responsible for it; call it out publicly by letting others know about it.
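As a practical illustration of the third suggestion above, the short sketch below reads headlines straight from a newsroom’s own RSS feed using the widely available Python feedparser library, so no platform ranking algorithm sits between you and the outlet’s own ordering of its stories. The feed URL is a placeholder; substitute the feed address of the outlet you choose.

```python
# A minimal sketch: read a news outlet's RSS feed directly,
# bypassing search-engine and social-media ranking algorithms.
# Requires: pip install feedparser
import feedparser

FEED_URL = "https://example.org/feed/"  # placeholder: use your chosen outlet's feed URL

feed = feedparser.parse(FEED_URL)
print(f"Feed: {feed.feed.get('title', FEED_URL)}")
for entry in feed.entries[:10]:
    # The ordering here is the outlet's own, not an aggregator's re-ranking.
    print(f"- {entry.get('title', '(untitled)')}  {entry.get('link', '')}")
```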

Fortunately, our human brains can employ new information in ways that algorithms cannot. Understanding the influential roles that algorithms play in our lives—including how they operate as gatekeepers of the news stories we are most likely to see—allows us to take greater control of our individual online experiences. From that greater individual awareness and control, we can begin to organize collectively to expose and oppose algorithmic bias and censorship.


Andy Lee Roth, PhD, is associate director of Project Censored where he coordinates the Project’s Campus Affiliates Program, a news media research network of several hundred students and faculty at two dozen colleges and universities across North America. He co-edited the Project’s newest yearbook, State of the Free Press 2021 (Seven Stories Press) and his work has been published in a number of outlets, including Index on Censorship, In These Times, YES! Magazine, Media, Culture & Society, and the International Journal of Press/Politics.

