Claire Wardle – Media Helping Media
https://mediahelpingmedia.org – free journalism and media strategy training resources

Information disorder – mapping the landscape
Published Thu, 08 Aug 2019 – https://mediahelpingmedia.org/advanced/information-disorder-mapping-the-landscape/

Claire Wardle of First Draft News sets out her 13 priority areas for further research into trust and truth in a digital age.

The post Information disorder – mapping the landscape first appeared on Media Helping Media.

Photo by Zainul Yasni on Unsplash

The following article is reproduced courtesy of First Draft News.

Surge of interest in trust and truth

Over the past eighteen months, there has been a surge of interest in trust and truth in a digital age.

There have been hundreds of conferences, reports and papers on the subject.

As our understanding of the space becomes more sophisticated, it’s time to recognize thirteen smaller sub-categories, so we can undertake more targeted research, and convene workshops and conferences on more clearly defined and specific topics.

Here, I suggest thirteen sub-categories where I’m seeing specific initiatives, research or natural alliances.

It’s important to note that all these sub-categories should also be seen through an international lens: it is the one overarching theme that connects all of the following.

The thirteen spaces are:

  1. AI & Manipulation: Researching the ways that AI-generated synthetic media (otherwise known as ‘deepfakes’) will impact society, and developing tools and techniques for identifying and verifying these types of sophisticated manipulated visual imagery.
  2. Closed Online Spaces & Messaging Apps: Researching the patterns of disinformation on private and semi-private spaces online, as well as messaging apps.
  3. Data Harvesting, Ad Tech & Micro-targeting: Researching the connections between data collection and targeted disinformation campaigns.
  4. Fact-Checking & Verification: Investigating claims made by official sources (politicians, think tanks, journalists), and investigating information, images and videos from unofficial sources on the social web.
  5. Identification of Disinformation Content & Tactics: Monitoring, verifying and providing contextual information around specific types of disinformation and the campaigns used to amplify them.
  6. Manufactured Amplification: Understanding techniques for artificially inflating disinformation campaigns, as well as attempts to distort ‘public opinion’, as when manipulating trending topics or purchasing signatures on online petitions.
  7. Media Ecosystems: Understanding how information disorder spreads across platforms and between traditional media (TV, radio and interpersonal communication).
  8. Media Literacy: Researching and evaluating best practices for teaching digital literacy in an age of information disorder.
  9. News Credibility: Developing machine-readable indicators that ensure quality information sources are given priority in social streams and search results.
  10. Polarization: Understanding the impact of polarization on the ways in which information is used, understood and shared.
  11. Policy & Regulation: Investigating the question of ‘regulation’, and ensuring it is based on clear definitions and evidence.
  12. Reporting best practices: Researching and experimenting with best practices for publishing fact-checks or debunks, particularly investigating the concepts of the ‘tipping point’ and ‘strategic silence’ to prevent providing additional oxygen to rumours, false content and amplification tactics.
  13. Trust in Media: Research and initiatives designed to improve trust in the professional media.

Note: This material first appeared on First Draft and has been reproduced here with the author’s consent. 


Information disorder – how to recognise the forms
Published Mon, 09 Jul 2018 – https://mediahelpingmedia.org/advanced/information-disorder-how-to-recognise-the-forms/

Four free-to-download high-resolution graphics created by First Draft News to help explain the different categories, types, elements, and phases of information disorder. They are available for use in publications and presentations.

The post Information disorder – how to recognise the forms first appeared on Media Helping Media.

Image courtesy of Randy Colas on Unsplash

The following article is reproduced courtesy of First Draft News.

Categories, types, elements and phases

The high-resolution graphics below were created to help explain the different categories, types, elements, and phases of information disorder. They are available for use in publications and presentations under a Creative Commons BY-NC-ND 3.0 license. Click the link under each image to download it.

Categories of information disorder

Figure 1: The seven categories of information disorder. Credit: Claire Wardle, 2017. Click here to download high-resolution version.

  1. Satire or parody: No intention to cause harm but has potential to fool.
  2. Misleading content: Misleading use of information to frame an issue or individual.
  3. Imposter content: When genuine sources are impersonated.
  4. Fabricated content: New content is 100% false, designed to deceive and do harm.
  5. False connection: When headlines, visuals, or captions don’t support the content.
  6. False context: When genuine content is shared with false contextual information.
  7. Manipulated content: When genuine information or imagery is manipulated to deceive.
Information graphic courtesy of First Draft News

Types of information disorder

Figure 2: Three types of information disorder. Credit: Claire Wardle & Hossein Derakshan, 2017. Click here to download high-resolution version.

  1. Misinformation: Unintentional mistakes such as inaccurate photo captions, dates, statistics, translations, or when satire is taken seriously.
  2. Disinformation: Fabricated or deliberately manipulated audio-visual content. Intentionally created conspiracy theories or rumours.
  3. Malinformation: Deliberate publication of private information for personal or corporate rather than public interest. Deliberate change of context, date or time of genuine content.
Information graphic courtesy of First Draft News

Elements of information disorder

Figure 3: Three elements of information disorder. Credit: Claire Wardle & Hossein Derakshan, 2017. Click here to download high-resolution version.

  1. Agent: The person or organisation that creates, produces and distributes the message, and their motivation.
  2. Message: The content itself and the form it takes.
  3. Interpreter: The person who receives the message and how they make sense of it.
Information graphic courtesy of First Draft News

Phases of information disorder

Figure 4: Three phases of information disorder. Credit: Claire Wardle & Hossein Derakshan, 2017. Click here to download high-resolution version.

  1. Creation: When the message is created.
  2. (Re) Production: When the message is turned into a media product.
  3. Distribution: When the product is distributed or made public.
Information graphic courtesy of First Draft News

Note: This material first appeared on First Draft and has been reproduced here with the author’s consent. 


Information disorder – the essential glossary
Published Mon, 09 Jul 2018 – https://mediahelpingmedia.org/advanced/information-disorder-the-essential-glossary/

For the policy-makers, technology companies, politicians, journalists, librarians, educators, academics, and civil society organisations all facing the challenges of information disorder, agreeing to a shared vocabulary is essential.

The post Information disorder – the essential glossary first appeared on Media Helping Media.

Image of computer screen by Markus Spiske on Unsplash

The following article is reproduced courtesy of First Draft News.

Definitions and terminology matter

For the policy-makers, technology companies, politicians, journalists, librarians, educators, academics, and civil society organisations all wrestling with the challenges posed by information disorder, agreeing to a shared vocabulary is essential.

This glossary has been compiled with research support from Grace Greason, Joe Kerwin & Nic Dias. You can download a PDF of this glossary which is embedded at the foot of this piece.

An algorithm is a fixed series of steps that a computer performs in order to solve a problem or complete a task. Social media platforms use algorithms to filter and prioritize content for each individual user based on various indicators, such as their viewing behavior and content preferences. Disinformation that is designed to provoke an emotional reaction can flourish in these spaces when algorithms detect that a user is more likely to engage with or react to similar content.¹
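
To make the idea concrete, here is a minimal sketch of engagement-based feed ranking. The scoring weights and post fields are illustrative assumptions, not any platform's real formula; the point is only that content which provokes reactions can outrank content that is merely seen.

```python
# Toy feed-ranking algorithm: a fixed series of steps that scores and
# orders posts. Weights are invented for illustration.

def rank_feed(posts):
    """Order posts by a simple engagement score: reactions and shares
    count for more than passive views, so provocative content rises."""
    def score(post):
        return post["reactions"] * 3 + post["shares"] * 5 + post["views"]
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "calm-report", "views": 1000, "reactions": 10, "shares": 2},
    {"id": "outrage-bait", "views": 400, "reactions": 300, "shares": 120},
]
ranked = rank_feed(feed)
# The low-view but high-reaction post is ranked first.
```

Under these assumed weights, the post with far fewer views still tops the feed because it attracts more reactions and shares, which is the dynamic the definition describes.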

An API, or application programming interface, is a means by which data from one web tool or application can be exchanged with, or received by, another. Many working to examine the source and spread of polluted information depend upon access to social platform APIs, but not all are created equal and the extent of publicly available data varies from platform to platform. Twitter’s open and easy-to-use API has enabled thorough research and investigation of its network, plus the development of mitigation tools such as bot detection systems. However, restrictions on other platforms and a lack of API standardization means it is not yet possible to extend and replicate this work across the social web.
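
As a sketch of what researchers actually work with, the snippet below parses one page of a hypothetical social-platform API response. The endpoint shape, field names and cursor scheme are invented for illustration; every real platform differs, which is exactly the standardization problem noted above.

```python
# Parsing a single page of a hypothetical JSON API response.
# Field names ("data", "next_cursor") are illustrative assumptions.

import json

def parse_page(raw_json):
    """Extract the posts and the cursor pointing at the next page."""
    payload = json.loads(raw_json)
    return payload["data"], payload.get("next_cursor")

response = '{"data": [{"id": "1", "text": "hello"}], "next_cursor": "abc123"}'
posts, cursor = parse_page(response)
# A real client would loop, requesting the next page until no cursor remains.
```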

Artificial intelligence (AI) describes computer programs that are “trained” to solve problems that would normally be difficult for a computer to solve. These programs “learn” from data parsed through them, adapting methods and responses in a way that will maximize accuracy. As disinformation grows in its scope and sophistication, some look to AI as a way to effectively detect and moderate concerning content. AI also contributes to the problem, automating the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently.²

Automation is the process of designing a ‘machine’ to complete a task with little or no human direction. It takes tasks that would be time-consuming for humans to complete and turns them into tasks that are completed quickly and almost effortlessly. For example, it is possible to automate the process of sending a tweet, so a human doesn’t have to actively click ‘publish’. Automation processes are the backbone of techniques used to effectively ‘manufacture’ the amplification of disinformation.
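
A toy sketch of that idea: queueing messages so that no human has to click ‘publish’. The `send_tweet` function here is a stand-in written for this example; nothing below touches a real platform API.

```python
# Toy automation: publish a whole queue of posts with no human action.
# send_tweet is a hypothetical stand-in for a real platform call.

sent = []

def send_tweet(text):
    # Stand-in for a network call to a posting endpoint.
    sent.append(text)

def run_queue(queue):
    """Publish every queued message automatically, returning the count."""
    for text in queue:
        send_tweet(text)
    return len(sent)

run_queue(["post one", "post two", "post three"])
```

Scaled up across thousands of accounts, this is the mechanism that ‘manufactures’ amplification.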

Black hat SEO (search engine optimization) describes aggressive and illicit strategies used to artificially increase a website’s position within a search engine’s results, for example changing the content of a website after it has been ranked. These practices generally violate the given search engine’s terms of service as they drive traffic to a website at the expense of the user’s experience.³

Bots are social media accounts that are operated entirely by computer programs and are designed to generate posts and/or engage with content on a particular platform. In disinformation campaigns, bots can be used to draw attention to misleading narratives, to hijack platforms’ trending lists and to create the illusion of public discussion and support.⁴ Researchers and technologists take different approaches to identifying bots, using algorithms or simpler rules based on number of posts per day.⁵
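
The "simpler rules based on number of posts per day" mentioned above can be sketched in a few lines. The threshold of 50 posts per day is an illustrative assumption; in practice researchers tune such cut-offs and combine them with other signals.

```python
# Rule-based bot flagging: an account posting implausibly often is
# suspicious. The threshold is an invented example value.

BOT_THRESHOLD = 50  # posts per day; illustrative, not a standard figure

def looks_like_bot(total_posts, days_active):
    """Flag an account whose average daily posting rate exceeds the threshold."""
    if days_active <= 0:
        return False
    return total_posts / days_active > BOT_THRESHOLD

# An account with 5,000 posts in 5 days is flagged;
# one with 50 posts in 10 days is not.
```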

A botnet is a collection or network of bots that act in coordination and are typically operated by one person or group. Commercial botnets can include as many as tens of thousands of bots.⁶

Data mining is the process of monitoring large volumes of data by combining tools from statistics and artificial intelligence to recognize useful patterns. Through collecting information about an individual’s activity, disinformation agents have a mechanism by which they can target users on the basis of their posts, likes and browsing history. A common fear among researchers is that, as psychological profiles fed by data mining become more sophisticated, users could be targeted based on how susceptible they are to believing certain false narratives.⁷

Dark ads are advertisements that are only visible to the publisher and their target audience. For example, Facebook allows advertisers to create posts that reach specific users based on their demographic profile, page ‘likes’, and their listed interests, but that are not publicly visible. These types of targeted posts cost money and are therefore considered a form of advertising. Because these posts are only seen by a segment of the audience, they are difficult to monitor or track.⁸

Deepfakes is the term currently being used to describe fabricated media produced using artificial intelligence. By synthesizing different elements of existing video or audio files, AI enables relatively easy methods for creating ‘new’ content, in which individuals appear to speak words and perform actions, which are not based on reality. Although still in their infancy, it is likely we will see examples of this type of synthetic media used more frequently in disinformation campaigns, as these techniques become more sophisticated.⁹

A dormant account is a social media account that has not posted or engaged with other accounts for an extended period of time. In the context of disinformation, this description is used for accounts that may be human- or bot-operated, which remain inactive until they are ‘programmed’ or instructed to perform another task.¹⁰

Doxing or doxxing is the act of publishing private or identifying information about an individual online, without his or her permission. This information can include full names, addresses, phone numbers, photos and more.¹¹ Doxing is an example of malinformation, which is accurate information shared publicly to cause harm.

Disinformation is false information that is deliberately created or disseminated with the express purpose to cause harm. Producers of disinformation typically have political, financial, psychological or social motivations.¹²

Encryption is the process of encoding data so that it can be interpreted only by intended recipients. Many popular messaging services such as WhatsApp encrypt the texts, photos and videos sent between users. This prevents governments from reading the content of intercepted WhatsApp messages.
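
The core idea can be illustrated with a deliberately toy cipher: the same shared key encodes and decodes a message, so only intended recipients can read it. Real messaging apps use vastly stronger schemes than this repeating-key XOR, which is shown here purely for intuition.

```python
# Toy symmetric encryption: XOR with a repeating key. Running the
# same operation twice restores the original message. NOT secure;
# for illustration only.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; applying it again reverses it."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at noon"
key = b"shared-secret"
ciphertext = xor_cipher(message, key)

assert ciphertext != message                    # unreadable in transit
assert xor_cipher(ciphertext, key) == message   # recipient recovers it
```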

Fact-checking (in the context of information disorder) is the process of determining the truthfulness and accuracy of official, published information such as politicians’ statements and news reports.¹³ Fact-checking emerged in the U.S. in the 1990s, as a way of authenticating claims made in political ads airing on television. There are now around 150 fact-checking organizations in the world,¹⁴ and many now also debunk mis- and disinformation from unofficial sources circulating online.

Fake followers are anonymous or imposter social media accounts created to portray false impressions of popularity about another account. Social media users can pay for fake followers as well as fake likes, views and shares to give the appearance of a larger audience. For example, one English-based service offers YouTube users a million “high-quality” views and 50,000 likes for $3,150.¹⁵

Malinformation is genuine information that is shared to cause harm.¹⁶ This includes private or revealing information that is spread to harm a person or reputation.

Manufactured amplification occurs when the reach or spread of information is boosted through artificial means. This includes human and automated manipulation of search engine results and trending lists, and the promotion of certain links or hashtags on social media.¹⁷ There are online price lists for different types of amplification, including prices for generating fake votes and signatures in online polls and petitions, and the cost of downranking specific content from search engine results.¹⁸

The formal definition of the term meme, coined by biologist Richard Dawkins in 1976, is an idea or behavior that spreads person to person throughout a culture by propagating rapidly, and changing over time.¹⁹ The term is now used most frequently to describe captioned photos or GIFs that spread online, and the most effective are humorous or critical of society. They are increasingly being used as powerful vehicles of disinformation.

Misinformation is information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.²⁰

Propaganda is true or false information spread to persuade an audience, but often has a political connotation and is often connected to information produced by governments. It is worth noting that the lines between advertising, publicity and propaganda are often unclear.²¹

Satire is writing that uses literary devices such as ridicule and irony to criticize elements of society. Satire can become misinformation if audiences misinterpret it as fact.²² There is a known trend of disinformation agents labelling content as satire to prevent it from being flagged by fact-checkers.

Scraping is the process of extracting data from a website without the use of an API. It is often used by researchers and computational journalists to monitor mis- and disinformation on different social platforms and forums. Typically, scraping violates a website’s terms of service (i.e., the rules that users agree to in order to use a platform). However, researchers and journalists often justify scraping because of the lack of any other option when trying to investigate and study the impact of algorithms.
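
A minimal sketch of what scraping looks like in practice: pulling post text straight out of a page's HTML rather than through an API. The HTML snippet and the `post` class name are invented for this example; real sites differ, and their terms of service may forbid the practice, as noted above.

```python
# Toy scraper: extract the text of every <p class="post"> element
# from raw HTML, using only the standard library.

from html.parser import HTMLParser

class PostScraper(HTMLParser):
    """Collect the text of paragraphs marked with class="post"."""
    def __init__(self):
        super().__init__()
        self.in_post = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        if tag == "p" and ("class", "post") in attrs:
            self.in_post = True

    def handle_data(self, data):
        if self.in_post:
            self.posts.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_post = False

page = ('<html><p class="post">First rumour</p>'
        '<p>aside</p><p class="post">Second rumour</p></html>')
scraper = PostScraper()
scraper.feed(page)
# scraper.posts now holds the two post texts, skipping the unmarked paragraph.
```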

A sock puppet is an online account that uses a false identity designed specifically to deceive. Sock puppets are used on social platforms to inflate another account’s follower numbers and to spread or amplify false information to a mass audience.²³ The term is considered by some to be synonymous with the term “bot”.

Spam is unsolicited, impersonal online communication, generally used to promote, advertise or scam the audience. Today, it is mostly distributed via email, and algorithms detect, filter and block spam from users’ inboxes. Similar technologies to those implemented in the fight against spam could potentially be used in the context of information disorder, once accepted criteria and indicators have been agreed.
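
As a sketch of the "accepted criteria and indicators" idea, here is a toy keyword-based filter: score a message against known spam signals and block it past a threshold. Real filters use statistical models trained on large corpora; the signal list and threshold below are invented for illustration.

```python
# Toy spam filter: count how many known spam signals a message
# contains and flag it past a threshold. Values are illustrative.

SPAM_SIGNALS = {"free", "winner", "click now", "guaranteed"}

def is_spam(message, threshold=2):
    """Flag messages containing at least `threshold` spam signals."""
    text = message.lower()
    hits = sum(1 for signal in SPAM_SIGNALS if signal in text)
    return hits >= threshold
```

A comparable approach for information disorder would need agreed indicators of disinformation in place of these spam keywords, which is the open problem the paragraph above describes.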

Trolling is the act of deliberately posting offensive or inflammatory content to an online community with the intent of provoking readers or disrupting conversation. Today, the term “troll” is most often used to refer to any person harassing or insulting others online. However, it has also been used to describe human-controlled accounts performing bot-like activities.

A troll farm is a group of individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion. One prominent troll farm was the Russia-based Internet Research Agency that spread inflammatory content online in an attempt to interfere in the U.S. presidential election.²⁴

Verification is the process of determining the authenticity of information posted by unofficial sources online, particularly visual media.²⁵ It emerged as a new skill set for journalists and human rights activists in the late 2000s, most notably in response to the need to verify visual imagery during the ‘Arab Spring’.

A VPN, or virtual private network, is used to encrypt a user’s data and conceal his or her identity and location. This makes it difficult for platforms to know where someone pushing disinformation or purchasing ads is located. It is also sensible to use a VPN when investigating online spaces where disinformation campaigns are being produced.
Download a PDF of this glossary.

1 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
2 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
3 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
4 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
5 Howard, P. N. & K. Bence (2016) Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum, COMPROP Research note, 2016.1, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/06/COMPROP-2016-1.pdf
6 Ignatova, T.V., V.A. Ivichev & F.F. Khusnoiarov (December 2, 2015) Analysis of Blogs, Forums, and Social Networks, Problems of Economic Transition
7 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
8 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
9 Li, Y., M.C. Chang & S. Lyu (June 11, 2018) In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking, Computer Science Department, University at Albany, SUNY
10 Ince, D. (2013) A Dictionary of the Internet (3 ed.), Oxford University Press
11 MacAllister, J. (2017) The Doxing Dilemma: Seeking a Remedy for the Malicious Publication of Personal Information, Fordham Law Review, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5370&context=fl
12 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
13 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking
14 Funke, D. (2018) Report: There are 149 fact-checking projects in 53 countries. That’s a new high, The Poynter Institute, https://www.poynter.org/news/report-there-are-149-fact-checking-projects-53-countries-thats-new-high
15 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
16 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
17 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
18 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
19 Dawkins, R. (1976) The Selfish Gene. Oxford University Press.
20 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
21 Jack, C. (2017) Lexicon of Lies, Data & Society, https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf
22 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
23 Hofileña, C. F. (Oct. 9, 2016) Fake accounts, manufactured reality on social media, Rappler, https://www.rappler.com/newsbreak/investigative/148347-fake-accounts-manufactured-reality-social-media
24 Office of the Director of National Intelligence. (2017). Assessing Russian activities and intentions in recent US elections. Washington, D.C.: National Intelligence Council, https://www.dni.gov/files/documents/ICA_2017_01.pdf.
25 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking

By Claire Wardle, with research support from Grace Greason, Joe Kerwin & Nic Dias.

Note: This material first appeared on First Draft and has been reproduced here with the author’s consent. 

