How to detect AI-generated images

Fact-checking journalist Deepak Adhikari, the editor of Nepal Check, shares how he and his colleagues combat the spread of fake AI images on social media and in other news output.

Image of robot and smartphone by Matt Brown (https://www.flickr.com/photos/londonmatt/) released via Creative Commons BY DEED 2.0

Fact-checking journalist Deepak Adhikari, the editor of Nepal Check, has shared a piece he wrote about the spread of AI-generated images following an earthquake in Nepal in November 2023. The article, below, explains how his organisation and others set about identifying the fake photographs. Deepak hopes the methods he and his team used will be of use to other journalists trying to combat the spread of fake images on social media and in other news output.


Social media is flooded with AI-generated images. Here’s how to detect them

By Deepak Adhikari, editor of Nepal Check

Following the devastating earthquake that struck Jajarkot district in Karnali Province in early November, social media users shared AI-generated images claiming to show the devastation caused.

One photograph showed dozens of houses ruined by the earthquake with people and rescuers walking through the debris. The photo was initially shared by Meme Nepal. It was subsequently used by celebrities, politicians and humanitarian organisations keen to draw attention to the disaster in one of Nepal’s poorest regions.

The image was used by Anil Keshary Shah, a former CEO of Nabil Bank, and Rabindra Mishra, a senior vice-president of the National Democratic Party (see archived versions here and here).

When Nepal Check contacted Meme Nepal in an attempt to find the original source of the photo, they replied that they had found the image on social media.

A screenshot of Arjun Parajuli’s post on Facebook along with a poem lamenting the scene from the image

A month on, AI-generated images supposedly showing the aftermath of the earthquake continued to circulate. On December 14, 2023, Arjun Parajuli, a Nepali poet and founder of Pathshala Nepal, posted a photo claiming to show students studying in the ruins left by the earthquake in Jajarkot. Parajuli, who attached the photo to a poem, had reshared the image from Manish Khadka, who identifies himself as a journalist based in Musikot in Rukum district.

A screenshot of Manish Khadka’s post on Facebook with a caption claiming to show students in Rukum and Jajarkot

Both of these viral, poignant images were fake. They were generated using text-to-image platforms such as Midjourney and DALL-E.

In the digital age it’s easy to manipulate images. With the rise of AI-enabled platforms it’s possible to generate images online quickly and convincingly. AI-generated images have evolved from amusingly odd to realistic. This has created further challenges for fact-checkers who are already inundated with misleading or false information circulating on social media platforms.

Fact-checkers often rely on Google’s reverse image search, a tried and tested tool for checking where an image has appeared before. But Google and other search engines only show photos that have previously been published online.
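Reverse image search can also be run programmatically as part of a fact-checking workflow. The Python sketch below is only a minimal illustration: it builds a Google Lens search-by-URL link for a publicly hosted image and opens it in a browser. The endpoint pattern and the example image address are assumptions and may change; the same check can be done by pasting the image into the search tools by hand.

```python
import urllib.parse
import webbrowser

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google Lens search-by-URL link for a publicly hosted image.

    The endpoint pattern is an assumption based on Google's search-by-image
    feature and may change without notice.
    """
    return "https://lens.google.com/uploadbyurl?url=" + urllib.parse.quote(image_url, safe="")

if __name__ == "__main__":
    # Hypothetical image address -- replace with the picture you want to trace.
    link = reverse_image_search_url("https://example.com/viral-earthquake-photo.jpg")
    print(link)
    webbrowser.open(link)  # open the results page in the default browser
```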

So, how can one ascertain if an image is AI-generated? Currently, there is no tool that can determine this with 100% accuracy.

A screengrab of the result on ISITAI after uploading the viral image to the platform

For example, Nepal Check used Illuminarty.ai and isitai.com to check whether the earthquake images had been generated using AI tools. After an image is uploaded, each platform displays a percentage indicating how likely the image is to have been generated by AI.

A screengrab of the result on Illuminarty after uploading the viral image to the platform
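Neither isitai.com nor Illuminarty publishes a guaranteed, stable public API, so the Python sketch below is only a rough illustration of the general workflow Nepal Check describes: upload an image, read back a likelihood score, and treat that score as one signal among many. The endpoint, field names and response format are invented for the example.

```python
import requests

# Hypothetical endpoint and field names -- invented for illustration only.
DETECTOR_ENDPOINT = "https://example-detector.invalid/api/v1/detect"

def ai_likelihood(image_path: str) -> float:
    """Upload an image and return the detector's AI-likelihood score (0-100)."""
    with open(image_path, "rb") as f:
        response = requests.post(DETECTOR_ENDPOINT, files={"image": f}, timeout=30)
    response.raise_for_status()
    # Assumes the service returns JSON such as {"ai_probability": 87.5}
    return float(response.json()["ai_probability"])

if __name__ == "__main__":
    score = ai_likelihood("viral-earthquake-photo.jpg")
    print(f"Estimated likelihood of AI generation: {score:.1f}%")
    # A high or low score is a clue, not proof -- verify with other methods too.
```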

Nepal Check contacted Kalim Ahmed, a former fact-checker at AltNews. He made the following observations about the image claiming to show the devastation caused by the earthquake in Jajarkot:

  • If you zoom in and take a closer look at the people, they appear deformed, like toys.
  • The rocks and debris at the centre look as if they are straight out of a video game made in the late 90s or early 2000s.
  • In the absence of a foolproof way to determine whether a photo is AI-generated, using observational skills and finding visual clues is the best way to tackle such images.
Examination of an AI image
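One practical way to apply these observations is to crop and enlarge suspicious regions so that hands, faces and debris textures can be inspected pixel by pixel. The Python sketch below uses the Pillow imaging library; the file name and crop coordinates are hypothetical and would need to be adjusted for the image being checked.

```python
from PIL import Image  # pip install pillow

def zoom_region(path: str, box: tuple, scale: int = 4) -> Image.Image:
    """Crop a region of interest and enlarge it so small details
    (hands, faces, debris textures) are easier to inspect."""
    img = Image.open(path)
    region = img.crop(box)  # box = (left, upper, right, lower) in pixels
    width, height = region.size
    return region.resize((width * scale, height * scale), Image.LANCZOS)

if __name__ == "__main__":
    # Hypothetical file name and coordinates -- adjust to the image under review.
    detail = zoom_region("viral-earthquake-photo.jpg", box=(400, 300, 600, 450))
    detail.save("detail-zoom.png")
```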

A healthy dose of scepticism about what you see online (seeing is no longer believing), searching for the source of the content, checking whether any evidence is attached to the claim, and looking for context are powerful ways to separate fact from fiction online.

Further examination of an AI image

In a webinar organised by the News Literacy Project in August 2023, Dan Evon urged users to keep asking questions (is it authentic?). According to him, the surfaces of AI-generated images often seem unusually smooth, which can be a giveaway. “Everything looks a little off,” he said.

Dan suggests looking for visual clues, adding that it is crucial to find out the provenance of the image. Experts caution that the virality of content on social media often stems from its ability to generate outrage or controversy, highlighting the need for careful consideration when encountering emotionally charged material.

In her comprehensive guide on detecting AI-generated images, Tamoa Calzadilla, a fellow at the Reynolds Journalism Institute in the US, encourages users to pay attention to hashtags that may indicate the use of AI in generating the content.

While AI has made significant progress in generating realistic images, it still faces challenges in accurately replicating human features such as eyes and hands. “That’s why it’s important to examine them closely: Do they have five fingers? Are all the contours clear? If they’re holding an object, are they doing so in a normal way?”, Tamoa writes in the guide.

Experts recommend that news media disclose to readers and viewers when AI-generated images are used. Social media users are also advised to be open about how such images were created in order to mitigate the spread of misinformation.

Although the images purporting to depict the earthquake in Jajarkot lack a close-up view of the subjects, upon closer examination it becomes evident that the people in them resemble drawings rather than real humans. Nepal Check also compared the viral AI-generated images with those disseminated by news media. We couldn’t find any such images published in mainstream media in the aftermath of the earthquake.

The role of AI in the newsroom

Three excellent free training resources designed to help newsrooms “learn about the opportunities” and “support and grow all aspects of a news operation” by embracing AI.

Image by 6eo tech https://www.flickr.com/photos/6eotech/ released via Creative Commons CC BY 2.0 DEED

Below are three excellent free training resources designed to help newsrooms “learn about the opportunities” and “support and grow all aspects of a news operation” by embracing AI.

They also include “writing guidelines for the role of AI in the newsroom.” The material has been produced by the AP, the LSE, and NiemanLab.

According to the AP, its course is “based on findings from AP’s research with local U.S. newsrooms and is designed for local news journalists and managers at all levels.”

The AP guide is designed to “Get your newsroom ready to incorporate technologies that include artificial intelligence to support and grow all aspects of your news operation.”

View the AP course.

In the first video in the AP’s course (link above), Jim Kennedy talks about how AP uses AI for “streamlining workflows and freeing journalists to focus on higher-order work” by “removing the grunt work that bogged down the news process every day”. Jim mentions how with some data-heavy journalism, such as sports stats and company financial results, content production increased tenfold.
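To illustrate the kind of data-driven automation Jim describes (this is a general sketch, not a description of AP’s actual system), the Python example below shows how a short templated story might be generated from a structured earnings record. The field names and wording are invented for the example.

```python
from string import Template

# Invented template and field names, for illustration only.
EARNINGS_TEMPLATE = Template(
    "$company reported revenue of $$${revenue}m for $quarter, "
    "${direction} ${change}% from the same period last year."
)

def earnings_story(record: dict) -> str:
    """Turn one structured earnings record into a one-sentence story."""
    direction = "up" if record["change_pct"] >= 0 else "down"
    return EARNINGS_TEMPLATE.substitute(
        company=record["company"],
        revenue=f"{record['revenue_m']:,}",
        quarter=record["quarter"],
        direction=direction,
        change=abs(record["change_pct"]),
    )

if __name__ == "__main__":
    sample = {"company": "Example Corp", "revenue_m": 1250,
              "quarter": "Q3 2023", "change_pct": 8.4}
    print(earnings_story(sample))
    # -> "Example Corp reported revenue of $1,250m for Q3 2023, up 8.4% ..."
```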

The London School of Economics and Political Science (LSE) course is “a guide designed to help news organisations learn about the opportunities offered by AI to support their journalism”.

The LSE says its guide will help news organisations decide how to embrace AI journalism “to make your work more efficient and serve your audiences better”.

View the LSE’s guide.

The NiemanLab has produced “Writing guidelines for the role of AI in the newsroom.” It says the material sets out “the importance of meaningful human involvement and supervision in the use of AI, including through additional editing and factchecking of outputs before publication”.

View the NiemanLab suggested guidelines.

For more background on the development of AI and how it impacts news, you might want to view David Caswell’s presentation on “Generative AI and Automation of Media”.

David Caswell is the founder of StoryFlow Ltd., an innovation consultancy focused on AI workflows in news production. He was formerly an Executive Product Manager at BBC News Labs, focused on AI-based new product initiatives. He previously led product management for machine learning at Tribune Publishing and the Los Angeles Times, and was Director of Product Management for Automated Content Understanding at Yahoo!. David has also researched and published extensively on computational, structured and automated forms of journalism, including as a Fellow at the Reynolds Journalism Institute at the Missouri School of Journalism.
