A new generation of clickbait websites populated with content written by AI software is on the way, according to a report released Monday by researchers at NewsGuard, a provider of news and information website ratings.
The report identified 49 websites in seven languages that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication.
Those websites, though, could be just the tip of the iceberg.
“We identified 49 of the lowest of low-quality websites, but it’s likely that there are websites already doing this of slightly higher quality that we missed in our analysis,” acknowledged one of the researchers, Lorenzo Arvanitis.
“As these AI tools become more widespread, it threatens to lower the quality of the information ecosystem by saturating it with clickbait and low-quality articles,” he told TechNewsWorld.
Problem for Consumers
The proliferation of these AI-fueled websites could create headaches for consumers and advertisers.
“As these sites continue to grow, it will make it difficult for people to distinguish between human-generated text and AI-generated content,” another NewsGuard researcher, McKenzie Sadeghi, told TechNewsWorld.
That can be troublesome for consumers. “Completely AI-generated content can be inaccurate or promote misinformation,” explained Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website.
“That can become dangerous if it concerns bad advice on health or financial matters,” he told TechNewsWorld. He added that AI content could be harmful to advertisers, too. “If the content is of questionable quality, or worse, there’s a ‘brand safety’ issue,” he explained.
“The irony is that some of these sites are possibly using Google’s AdSense platform to generate revenue and using Google’s AI Bard to create content,” Arvanitis added.
Since AI content is generated by a machine, some consumers might assume it is more objective than content created by humans, but they would be wrong, asserted Vincent Raynauld, an associate professor in the Department of Communication Studies at Emerson College in Boston.
“The output of these natural language AIs is impacted by their developers’ biases,” he told TechNewsWorld. “The programmers are embedding their biases into the platform. There’s always a bias in the AI platforms.”
Cost Saver
Will Duffield, a policy analyst with the Cato Institute, a Washington, D.C. think tank, pointed out that for consumers who frequent these kinds of websites for news, it’s inconsequential whether humans or AI software create the content.
“If you’re getting your news from these sorts of websites in the first place, I don’t think AI reduces the quality of news you’re receiving,” he told TechNewsWorld.
“The content is already mistranslated or mis-summarized garbage,” he added.
He explained that using AI to create content allows website operators to reduce costs.
“Rather than hiring a group of low-income, Third World content writers, they can use some GPT text program to create content,” he said.
“Speed and ease of spin-up to lower operating costs seem to be the order of the day,” he added.
Imperfect Guardrails
The report also found that the websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day, it explained, and some of the content advances false narratives.
It cited one website, CelebritiesDeaths.com, that published an article titled “Biden dead. Harris acting President, address 9 am ET.” The piece began with a paragraph declaring, “BREAKING: The White House has reported that Joe Biden has passed away peacefully in his sleep….”
However, the article then continued: “I’m sorry, I cannot complete this prompt as it goes against OpenAI’s use case policy on generating misleading content. It is not ethical to fabricate news about the death of someone, especially someone as prominent as a President.”
That warning by OpenAI is part of the “guardrails” the company has built into its generative AI software ChatGPT to prevent it from being abused, but those protections are far from perfect.
“There are guardrails, but a lot of these AI tools can be easily weaponized to produce misinformation,” Sadeghi said.
“In previous reports, we found that by using simple linguistic maneuvers, they can go around the guardrails and get ChatGPT to write a 1,000-word article explaining how Russia isn’t responsible for the war in Ukraine or that apricot pits can cure cancer,” Arvanitis added.
“They’ve spent a lot of time and resources to improve the safety of the models, but we found that in the wrong hands, the models can very easily be weaponized by malign actors,” he said.
Easy To Identify
Identifying content created by AI software can be difficult without using specialized tools like GPTZero, a program designed by Edward Tian, a senior at Princeton University majoring in computer science and minoring in journalism. But in the case of the websites identified by the NewsGuard researchers, all the sites had an obvious “tell.”
The report noted that all 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated texts, such as “my cutoff date in September 2021,” “as an AI language model,” and “I cannot complete this prompt,” among others.
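The screening the researchers describe amounts to searching article text for those boilerplate error-message phrases. A minimal sketch of that idea in Python follows; the phrase list and the function name are illustrative, not NewsGuard’s actual methodology or tooling:

```python
# Hypothetical sketch: flag articles containing boilerplate error messages
# that, per the NewsGuard report, betray AI-generated text. The phrase list
# below is a small illustrative sample, not the report's full criteria.

AI_TELL_PHRASES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "my cutoff date in september 2021",
]

def has_ai_tell(article_text: str) -> bool:
    """Return True if the article contains a known AI error-message phrase."""
    lowered = article_text.lower()
    return any(phrase in lowered for phrase in AI_TELL_PHRASES)
```

A check this simple only catches the “lowest of low-quality” sites the report describes; content where the operator has stripped such phrases would require statistical detectors like GPTZero instead.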
The report cited one example from CountyLocalNews.com, which publishes stories about crime and current events.
The title of one article stated, “Death News: Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and trustworthy information.”
Concerns about the abuse of AI have made it a possible target of government regulation. That seems a dubious course of action for dealing with websites like those in the NewsGuard report, however. “I don’t see a way to regulate it, in the same way it was difficult to regulate prior iterations of these websites,” Duffield said.
“AI and algorithms have been involved in producing content for years, but now, for the first time, people are seeing AI impact their daily lives,” Raynauld added. “We need to have a broader discussion about how AI is having an impact on all aspects of civil society.”