Have Journalists Skipped The Ethics Conversation When It Comes To Using AI?
It's even being used to identify story ideas from the minutes of municipal council meetings when time-strapped reporters can't do so themselves.
What's lagging behind all this experimentation are the important conversations about the ethics of using these tools. This disconnect was evident when we interviewed journalists in a mix of newsrooms across Canada from July 2022 to July 2023, and it remains a problem today.
We conducted semi-structured interviews with 13 journalists from 11 Canadian newsrooms. Many of the people we spoke to told us that they had worked at multiple media organizations throughout their careers.
The key findings from our recently published research:

- AI literacy varies within the same newsroom and certainly within the industry as a whole.
- There's agreement that humans play an important role in supervising the use of AI, but there's no agreement on where in the process human journalists must be involved: at the AI tool coding level? Before a piece is published?
- Journalists believe professional practice and industry standards are being followed when using AI in journalism, but there is no agreed-upon “rule book” for how AI should be used.
- There are issues with transparency about how and when AI is being used, both among journalists working in the same newsroom and in terms of what is revealed to audiences about whether the content they are consuming was created using AI tools.
Studies show that Canadian audiences want to know if AI tools are being used in newsrooms, and aren't sure they want to pay for journalism created using AI. (Shutterstock)
Read more: Transparency and trust: How news consumers in Canada want AI to be used in journalism
What journalists told us

Some of what we heard was reassuring. One journalist told us:
At the same time, however, it became clear that many news organizations are still operating in the ethical equivalent of the Wild West.
In many cases, journalists we spoke to described simply following their gut when deciding whether using a particular AI tool for a particular task was ethical. As one of our interviewees put it: “There's a rule book in my head.”
When we asked interviewees how they knew their colleagues at the same publication followed the same ethical code they did when using AI, most could not answer except to imply that their co-workers wouldn't have been hired if they didn't share the same principles. One journalist said:
Getting the ethics of AI right, and being seen to do so, is important because journalism has a growing trust problem and needs to do everything possible to reverse the trend.
Multiple studies have shown that Canadian audiences want to know if AI tools are being used in newsrooms, and they aren't sure if they want to pay for journalism created using AI.
Read more: How audience data is shaping Canadian journalism
AI and news

Audiences, meanwhile, are being fed a steady diet of examples that illustrate how using AI tools to create journalistic work can go very wrong. For instance:
- The Winnipeg Free Press was forced to disavow its AI audio tool because it was mispronouncing the Manitoba premier's name.
- An article in the Los Angeles Times was accused of “softening the image of the Ku Klux Klan.”
- An AI-generated poll attached to a report in The Guardian provoked outrage when it quizzed readers on how a woman featured in the article had died. The poll was created by a Microsoft news aggregator, but The Guardian said it damaged the paper's reputation.
- Sports Illustrated was caught creating fake bylines for AI-generated stories on its website.
Journalists and news organizations are still struggling to arrive at a shared understanding of how to use AI tools. (Shutterstock)
News organizations might think they're being transparent with audiences about how much content is being created using AI, but our research finds the evidence is mixed at best, especially in circumstances where AI generates the content and an editor approves it in the content management system before it is published.
In one memorable Zoom interview, an editor walked us through the AI-generated content in an article posted online, saying that it was clearly identified as AI on the webpage.
However, when they brought up the page, they were shocked to discover there was no indication anywhere that the article had been AI-generated. They said it would be fixed immediately, but when we last checked, the article still said nothing about the AI tool used to create it.
While we were gathering interview data, Canadian newsrooms began releasing guidance through internal emails and public blog posts. It is hard to find any language in publicly accessible policies that refers explicitly to how AI is being used or the ethics surrounding such use. It's also unclear who is involved in conversations about ethical AI use in newsrooms, and who is not.
As one journalist we interviewed put it:
Our research suggests that, in the midst of rapid technological change, journalists and news organizations are still struggling to arrive at a shared understanding of AI tools, their appropriate uses, the limitations of the underlying programming, and best practices that build rather than erode trust.