Welcome to NewsGuard's "Reality Check"
A weekly report on how misinformation online is undermining trust—and who’s behind it. Produced by co-CEOs Steven Brill, Gordon Crovitz, and the NewsGuard team.
Welcome to the first edition of NewsGuard’s weekly newsletter. We’re launching “Reality Check” after seeing how much interest there is in our work beyond the business and tech communities that we serve. We hope you enjoy it! Let us know: realitycheck@newsguardtech.com
This week:
‘Entirely AI’-produced podcast plagiarizes mainstream sources
A pro-Russia site delivers the trifecta of deceptive practices
Al Jazeera’s descent into fiction
The birth of birther 2.0
And more…
Subscribe to this newsletter to support our apolitical mission to counter misinformation for readers, brands, and democracies. Today’s newsletter was edited by Jack Brewster and Eric Effron.
1. A Podcast ‘Entirely’ Produced by AI: Hosted by AI, About AI, Using AI to Plagiarize
By Macrina Wang
This was bound to happen, we guess: A podcast boasting that it’s entirely AI-produced has launched … and it has perfected the fine art of AI plagiarism.
"The AI Report," a podcast covering AI and technology news that is hosted by two AI "anchors," recites verbatim news reports from a wide range of outlets, including The New York Times, CBS News, Fox News, and The Hill—all without credit.
Example: On the Dec. 29, 2023, episode, one of the AI anchors read part of a Dec. 27, 2023, Business Insider article word-for-word, stating, “Humanoid robots are already beginning to enter the workforce. Many people may not be too concerned about whether the machines threaten their jobs ...” Business Insider was not credited.
Podcast Playground, the podcast platform that produces the show, did not respond to NewsGuard’s multiple requests for comment.
Big picture: AI models are trained on content available on the internet, including news articles. This podcast goes a step further: it presents stolen stories as its own reporting.
In addition to misinformation, AI has turbocharged plagiarism because, well, robots don’t care about journalism ethics. But do they worry about lawsuits?
Other ways bad actors use AI to plagiarize:
In August 2023, NewsGuard was the first to identify the emergence of content farms using AI to copy and rewrite content from mainstream sources without credit. In these cases, AI’s ability to scramble text just enough makes it even harder to detect when a site copies content from another source.
This could be called plagiarism squared: The original sin is that bots themselves are already trained on data from millions of articles from news sources, without permission. Now the plagiarism-trained machines are plagiarizing more directly.
In December 2023, The New York Times sued OpenAI for ChatGPT’s use of copyrighted material.
2. D.C. Weekly’s AI-Driven Propaganda Machine Wins Disinformation Triple Crown
Disinformation Triple Crown Winner: Here’s a website, recently discovered by NewsGuard with the help of researchers at Clemson University, that is a hall-of-fame standout in the three classic ingredients of the most nefarious disinformation:
Stunningly creative, elaborate false narratives—these folks really work at spinning great tales.
Run by particularly diabolical bad actors—in this case, spreaders of Russian disinformation.
Trailblazing use of generative AI to spread false claims designed to be as divisive as possible.
Keep reading: DCWeekly.org isn’t your typical pro-Kremlin propaganda site. Masquerading as a local site in the U.S. capital, this operation stands out among disinformation campaigns, spreading false claims about Ukrainian officials supposedly purchasing lavish villas and luxury yachts.
Who’s behind it: The site doesn’t disclose its ownership, but Clemson University researchers reviewed IP addresses (unique numerical labels assigned to internet-connected devices) and domain-registration records (official records associated with the registration of a domain name) and found that the site is hosted on a server in Moscow and that it is apparently owned by John Mark Dougan.
Dougan is a former U.S. Marine and Florida police officer who, according to U.S. federal authorities, has previously been involved in Russian disinformation operations. According to U.S. officials, he fled to Moscow in 2016 and was granted asylum after his home was raided by the FBI, which alleged that he leaked confidential information about local officials.
A closer look: DCWeekly.org crafts its articles to create and exploit divisions among Americans. It’s a vivid example of how AI can be strategically used to amplify disinformation. It uses generative AI to rewrite articles from mainstream sources, specifically prompting the technology to focus on rewriting partisan and divisive content based on specific scoring criteria the site established.
For example, AI error messages inadvertently left in the text on the site have included, “Please note: The above article is presented in accordance with the provided context, which favors Republicans and Trump while portraying Democrats and Biden in a negative light.”
Context: AI error messages, which sometimes mistakenly appear in text, can provide clues about how bad actors intended to direct (which is to say, weaponize) a chatbot.
Tricks of our trade: You can spot AI-generated text by looking for error messages yourself. These include phrases commonly found in AI-generated text, such as “Please note,” “as an AI language model,” and “I cannot complete this prompt,” among others.
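The phrase-spotting trick above is easy to automate. Here is a minimal sketch in Python that flags text containing the telltale error phrases the article lists; the function name and phrase list are illustrative, not any tool NewsGuard uses.

```python
# Telltale phrases that often leak into AI-generated text
# (the list here is the article's examples; real detectors use many more).
AI_ERROR_PHRASES = [
    "please note",
    "as an ai language model",
    "i cannot complete this prompt",
]

def find_ai_error_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in AI_ERROR_PHRASES if phrase in lowered]

# Example using the leaked message quoted from DCWeekly.org:
sample = ("Please note: The above article is presented in accordance "
          "with the provided context.")
print(find_ai_error_phrases(sample))  # ['please note']
```

A match is only a clue, not proof: human writers use “please note” too, so hits should prompt a closer look rather than an automatic verdict.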
3. Website on the Way Down: AlJazeera.com
AlJazeera.com’s coverage of the Israel-Hamas war led to a drop in its NewsGuard Trust Score. NewsGuard now warns readers to “proceed with caution.”
(Click here to find out more about NewsGuard Trust Scores and our process for rating websites. You can download NewsGuard’s browser extension, which displays NewsGuard Trust Score icons next to links on search engines, social media feeds, and other platforms by clicking here.)
What happened: The NewsGuard Trust Score of AlJazeera.com, a prominent news source owned and funded by the Qatari government, dropped more than 25 points to 52/100 following a November 2023 NewsGuard assessment of the site.
NewsGuard analysts found that the website’s coverage of the Oct. 17, 2023, explosion at a Gaza hospital left the impression that Israel was clearly responsible, despite the fact that no definitive cause has been established. In fact, Western governments and media reports strongly point to a failed rocket fired from Gaza as the cause. See AlJazeera.com’s coverage here and here.
With a 52/100 score, NewsGuard warns news consumers: “Proceed with Caution: This website generally fails to maintain basic standards of accuracy and accountability.”
Read NewsGuard’s AlJazeera.com Nutrition Label
Why it matters: The question of who caused the Oct. 17 explosion at Al-Ahli Hospital in Gaza stands as one of the most hotly debated issues emerging from the war. Hamas instantly blamed Israel for the attack, and some Western news outlets initially relied on Hamas’ claims in their original reporting.
4. Birther 2.0: How the Nikki Haley Birther Claim Made It to Trump’s Truth Social Account
By Sam Howard
What happened: A false claim that Republican presidential candidate Nikki Haley is ineligible for the office of the presidency has taken off on social media and far-right news sites. And now former President Donald Trump, who championed the Barack Obama birther hoax, has jumped on the bandwagon.
How birther 2.0 spread:
The false claim about Haley apparently first emerged in a Dec. 24, 2023, article by far-right activist and commentator Laura Loomer on her website Loomered.com.
Other fringe sites took it from there. AmGreatness.com (NewsGuard Trust Score: 42/100), TheGatewayPundit.com (Trust Score: 30/100), CreativeDestructionMedia.com (Trust Score: 20/100), USSANews.com (Trust Score: 0/100), BeforeItsNews.com (Trust Score: 0/100), and SurviveTheNews.com (Trust Score: 0/100) all published articles advancing the false claim about Haley’s supposed ineligibility between Dec. 26, 2023, and Jan. 2, 2024. (You can download NewsGuard’s browser extension, which displays NewsGuard Trust Score icons next to links on search engines, social media feeds, and other platforms by clicking here.)
On Truth Social, Trump shared The Gateway Pundit’s coverage, which said that Haley’s parents’ immigration status “disqualifies Haley from presidential or vice-presidential candidacy.”
And? They’re all wrong. The courts have consistently ruled that under the U.S. Constitution, anyone born in the U.S. is considered a natural-born citizen and therefore eligible to be president if they meet the other requirements.
Haley was born in South Carolina in 1972 and is a natural-born citizen, thus eligible to be president.
It makes no difference that Haley’s Indian immigrant parents gained citizenship only in 1978 and 2003, as an article in South Carolina newspaper The State (Trust Score: 100/100) reported they did.
5. Phony ‘News Reports’ Posing as Real News in Israel-Hamas War
Tricks of their trade: Online “news reports” on the Israel-Hamas war may be fakes—even (and maybe especially) if they cite authoritative sources. Bad actors on Instagram and TikTok are using photoshopping tools to generate bogus reports about the war, then presenting these false claims as reporting from trusted news organizations.
What happened: Since Oct. 7, 2023, when Hamas launched its attacks against Israel, NewsGuard analysts have identified eight viral false claims based on fabricated news reports emulating credible media outlets.
Examples include the false claim that The Washington Post published an article in November 2023 headlined, “Weapons supplies from Ukraine to Hamas have tripled over the past month.”
Big picture: The Israel-Hamas war is not the first time this tactic has been used.
Since Russia’s February 2022 invasion of Ukraine, NewsGuard analysts have identified 16 false Russia-Ukraine war claims based on fabricated news reports purporting to come from outlets including Fox News, USA Today, Al Jazeera, and Politico.
6. NewsGuard Commentary: Targeted by Beijing, Taiwan Shows How to Counter Disinformation Without Censorship
By Gordon Crovitz, NewsGuard co-CEO
On a recent trip to Taiwan, I was struck by what was missing from the political campaigns leading to its presidential election: There was little focus on false claims in the news, despite efforts by Beijing to spread disinformation. This was surprising considering that, according to CBS News, Taiwan has been the No. 1 target of foreign disinformation, chiefly from China, for the past decade. The election went off without a hitch, giving a third term to the political party most hostile to Beijing’s Chinese Communist Party.
How it works: Perhaps thanks to Beijing’s energetic efforts to undermine Taiwan’s democracy, the country has had to pioneer unique solutions to the disinformation challenge, with the private sector taking a leading role. When Taiwanese see a false claim in the news, they have an experience unlike any elsewhere: On the leading messaging app in the country, called Line, users often see debunking of claims at the same time they see the original claim.
A chatbot, called Auntie Meiyu, delivers fact checks as part of the Line experience when it spots a reference to a false claim. When a member of a messaging group asks the chatbot about a claim, Line’s machine-learning tool looks for keywords and phrases describing the claim and delivers a fact check that all members of the group see and that can be forwarded to others, according to an explanation in the Rest of World news site.
What it means: Instead of falsehoods spreading across the internet unchallenged unless a news consumer makes the effort to go to a fact-checking site, these claims are countered in the same chat as the user first sees the claim, as part of the messaging experience.
Contrast Line taking responsibility for trustworthy news with the long-standing refusal of Facebook and its WhatsApp, Google and its YouTube, China-owned TikTok, and others to allow their users access to debunking of false claims integrated into their platforms. Users must instead search for fact checks, typically after the hoax has spread. American political scientist Francis Fukuyama has proposed that Silicon Valley platforms should offer such “middleware” solutions to their users. This middleware would provide immediate context and debunking for the misinformation.
Applying the Taiwan lesson: The Taiwanese government developed a “Humor over Rumor” campaign debunking false claims about COVID-19 using a cartoon Shiba Inu, a popular dog in Taiwan, to urge social distancing defined as three dog lengths. But much of the work countering false claims is done outside of government, by the private sector and independent fact-checking groups. Taiwan “has historically been Beijing’s testing ground for information warfare,” according to RAND researchers, making Taiwan’s defenses against Beijing well worth studying and copying by other democracies and the platforms that claim to serve their citizens.
7. And one last thing … Bill Gates and the Dancing Broccoli
What happened: Bill Gates, a favorite target of conspiracy theorists, is now accused of trying to get us to eat lab-grown broccoli. Social media users and unreliable news sites claim the billionaire Microsoft co-founder is secretly plotting to force consumers to eat artificial and genetically modified foods.
A video shared on TikTok, X, Instagram, and Facebook showed a piece of broccoli being injected with a liquid and immediately starting to twitch on its own. Social media users claimed that the video depicted an “artificial vegetable” created in a laboratory by Gates. The clip was shared in languages including Italian, English, French, and Thai.
Actually: The video showcases a piece by artist Russel Cameron originally posted on his TikTok and Instagram accounts as a sample of his work. The Bill & Melinda Gates Foundation said that the claim is false, and that Gates has no connection to the video, Reuters reported.
Watch the video by clicking here or by pressing the play button below.