Mainstream media coverage of AI is therefore becoming increasingly one-dimensional: how the technology is discussed, the regulatory landscape it sits in, and whose interests it should prioritize are conversations governed disproportionately by the commercial technology industry and its business interests. Academics, industry leaders, research institutes, companies, and civil society projects have explored how mainstream news perpetuates industry narratives about AI.
You are likely already familiar with some of these narratives, such as “AI will replace creative talent,” “AI will cause mass job loss,” “AI is capable of thinking for itself,” and “AI will diagnose diseases.” This reporting fuels polarizing and reductive narratives that simultaneously overhype the capabilities of AI technologies and dramatize speculative risks, while obscuring the concrete harms present in current AI technologies. These narratives are constructed through a range of mechanisms, one of which is the explicit choice of which experts are cited.
We conducted a content analysis of how the New York Times reports on AI: whose voices and perspectives are most frequently cited in NYT articles, and how does this affect public perception of AI?
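The core of such an analysis is tallying which people and organizations appear across a corpus of articles. As a minimal sketch, the snippet below counts how many articles in a hypothetical mini-corpus mention each name at least once; the article texts and term lists are illustrative stand-ins, not the study's actual data or coding scheme.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for NYT article texts (illustrative only).
articles = [
    "Sam Altman of OpenAI said the model surpassed expectations, while Elon Musk disagreed.",
    "Google and OpenAI announced new systems; Sam Altman commented on safety.",
    "Elon Musk criticized OpenAI; an academic researcher offered a skeptical outside view.",
]

# Terms to tally (in a real study these would come from a formal coding scheme).
people = ["Sam Altman", "Elon Musk"]
orgs = ["OpenAI", "Google", "Anthropic"]

def count_mentions(texts, terms):
    """Count how many texts mention each term at least once."""
    counts = Counter()
    for text in texts:
        for term in terms:
            if term in text:
                counts[term] += 1
    return counts

person_counts = count_mentions(articles, people)
org_counts = count_mentions(articles, orgs)
print(person_counts.most_common())
print(org_counts.most_common())
```

A production pipeline would replace the exact-substring match with named-entity recognition and alias resolution (e.g. "Altman" vs. "Sam Altman"), but the counting logic is the same.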
Some key findings from our content analysis:
- The NYT’s reporting is disproportionately influenced by the perspectives of individuals within the commercial technology industry: those mentioned and quoted most often work at commercial technology organizations. Elon Musk and Sam Altman are the individuals the NYT covers most frequently, and OpenAI and Google are the two organizations mentioned the most.
- Those within the industry are often framed as ‘experts,’ while those from academia, civil society, and elsewhere are framed as ‘outside experts,’ featured less frequently, and typically cast as critics and/or skeptics regardless of the substance of their claims.
- The NYT lacks clear and consistent definitions of specific AI technologies and relies instead on loose definitions and hyperlinks to vague or outdated articles.
- Reporting consistently overlooks the perspectives of voices from every sector except the commercial technology industry, with narratives often shifting to track current news stories about commercial technology organizations.
- Stories rely on a reductive hero-versus-villain trope: Anthropic vs. OpenAI, the United States vs. China, and Humans vs. Machines, fueling polarizing and reductive narratives about AI.
This work contributes to our understanding of how to support the inclusion of more voices and expertise in media coverage as part of our New Protagonist Network (NPN) initiative. We conducted this research on the NYT first, with the intention of scaling our methods to other online publications to better understand the prevailing AI narratives within mainstream media as a whole.
The final section of this report also offers next steps for addressing the lack of nuance, accuracy, and diversity of voices in AI reporting.