The world of journalism is moving at Mach speed, with several partnerships being forged around licensing and AI usage in the newsroom.
Last week, OpenAI partnered with Dotdash Meredith, the publisher of People magazine. The deal allows OpenAI to make use of “trusted content” from the publisher in ChatGPT, particularly to help improve the chatbot.
This is only the latest in a series of similar licensing partnerships that OpenAI has forged over the past year. As of now, OpenAI potentially has access to an entire portfolio of articles from Politico, Business Standard, Financial Times, and Associated Press, to name a few.
Meanwhile, partnerships are being established in the other direction, too. Earlier this month, Bloomberg partnered with AppliedXL to use the latter’s AI agents to generate analyses for Bloomberg Terminal users.
Likewise, in a survey conducted by JournalismAI, around 90% of the newsrooms surveyed stated that they use AI, though mainly to fact-check, proofread and generate analyses, with the end results still produced by humans.
Sports Illustrated, on the other hand, has reportedly used AI to generate entire articles, meaning that at least one major news organisation has published AI-generated content outright.
The licensing deals make sense, especially in light of several publications, most notably The New York Times, suing OpenAI and Microsoft over copyright infringement. AI companies now appear to be taking a more cautious approach to generative AI, opting to forge licensing deals preemptively rather than deal with legal issues later on.
However, the use of AI within the newsroom itself is not new – and it’s growing. While only a few years ago, the usage of AI in journalism would have been considered unethical or lazy, opinions are fast changing on how AI could help journalists streamline their processes.
AI Use Then and Now
AppliedXL CEO Francesco Marconi said that the Associated Press has been using AI tools since as early as 2013, making it possibly the first news organisation to deploy them at scale.
In conversation with AIM, Marconi, who had previously worked with AP and the Wall Street Journal, said, “Although there’s a lot of attention now to the AI boom, there have been things that have been in place for a long time, in terms of, even considering the implications, standards, and best practices.”
However, with AI tools advancing faster than ever before, the focus has sharpened significantly. As evidenced by the recent partnerships, news organisations have been scrambling to get ahead in the AI game.
“I think there’s a lot of excitement and many organisations are making investments in new technology. I see people embracing these technologies but also being mindful of the potential risks and issues,” Marconi said.
Several debates are already raging over the ethical considerations of using AI in the industry. Ultimately, though, this depends entirely on how an organisation chooses to use AI and whether it is focused on providing reliable information to its readers or simply cashing in.
Could AI Help Weed Out Bad Faith Actors?
It’s now obvious that AI in journalism isn’t going away. However, on a level playing field, it could be used to figure out who’s on the level and who’s not.
Marconi points out that AppliedXL’s agents work because they focus on highly specific domains and rely on niche, reliable datasets.
When asked whether AI could potentially replace newswires, he said, “I don’t think so because, again, this is applicable for very specific domains. I think news agencies can become way more productive and have more depth, but there’s always going to be coverage that will simply be impossible to replicate.”
This seems to be the general sentiment currently, with AI being used only as a tool for assistance rather than something that can reliably cover all the bases of journalistic integrity.
However, this comes with a caveat that a certain standard of gatekeeping should be met.
Sports Illustrated’s recent debacle, in which it was found to be publishing AI-generated content under fake names and bios, serves as a good reminder of why AI cannot replace a good journalist, and of how a news organisation can use AI irresponsibly.
Recently, CyberMedia managing director Dhaval Gupta said, “We have to imagine a very different newsroom moving forward, where the quality of the story is the most critical element. I think newsrooms need to be the gatekeepers that not only produce good quality content, but in fact, protect against AI-generated content from bad actors.”
Marconi believes that, if done right, AI could usher in a new era of journalism, bringing new roles, responsibilities, and possibilities for the field.
“There’ll be new forms of journalism. There will be new types of analyses that this generation of AI would enable, something not possible previously. I have an optimistic view. There will be new types of stories and things that we will unveil,” he said.
The conversation currently taking place around AI in journalism could help separate the wheat from the chaff, distinguishing those who use AI responsibly from those who do not.
The post Newsrooms Are (Not) Using AI Responsibly appeared first on Analytics India Magazine.