Fact-checking and adding context
https://mediahelpingmedia.org/basics/fact-checking-and-adding-context/ (published Thu, 01 Dec 2022)

Image by Cathy released via Creative Commons BY-NC 2.0

Journalism is about far more than simply gathering information and passing it on. An essential part of the editorial process is to examine everything we are told to make sure it is factual, and then add context so that any facts that are uncovered are considered alongside existing knowledge.

The author skyfishgoo wrote in a piece about critical thinking that “science and journalism both seek to put facts in context so they become useful to others”. He goes on to say that “science dictates that when a claim is made it is subject to critical review”.

Giving content “a bit of a scrub”

Put simply, he says that all of us have a responsibility to give every new piece of information that comes our way “a bit of a scrub” before passing it on to others.

This is particularly important in terms of producing original journalism and then broadcasting or publishing that material and sharing it on social media.

Once a piece of journalism is in the public domain it will be referenced, quoted, and possibly plagiarised as it becomes part of the global conversation. If that piece of journalism is untrue, then lasting damage will have been done.

But let’s first agree what is meant by the word ‘fact’.

According to the Oxford English Dictionary, a fact is something that is “known or proved to be true”. It is also “information used as evidence or as part of a report or news article”. In legal terms, a fact is “the truth about events as opposed to interpretation”.

And that last definition is interesting, because journalists ‘interpret’ events by adding context – but more on that later. For now, let’s refer to facts that have not yet been fully tested as ‘claims’.

Here are a few tests that should be applied to information that a journalist receives from someone who ‘claims’ that what they are passing on is factual.

The first three tests are about source verification and fact-checking, the fourth is about adding context.

1: Is the source credible?

  • What do we know about the source?
  • What is their motive for sharing the information?
  • Could the source have an agenda about which we are not aware?

To Do: Research the background of the source, their connections, any previous record of sharing information.

2: Has it happened?

  • Could there be a simple explanation?
  • Has your source been misled? If so, by whom?
  • Is there a history of such an event taking place?

To Do: Research the chronology of events. Check your own news organisation's archive. Search the web.

3: Where is the evidence?

  • Is the information available elsewhere?
  • What is the evidence to support the claim?
  • Has that evidence been tested?

To Do: Seek out a second, independent and trusted source.

4: What is the context?

  • What are the implications if the claims are true?
  • How many people are affected and how?
  • Gather data and statistics for comparison purposes.

To Do: Paint the bigger picture, understand the importance of the event in relation to other news stories.
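
The four tests above can be sketched as a reusable checklist. The questions are taken from this section; the data structure and helper function are purely illustrative, not an editorial tool the article prescribes:

```python
# The questions come from the four tests above; the structure is illustrative.
VERIFICATION_CHECKLIST = {
    "1: Is the source credible?": [
        "What do we know about the source?",
        "What is their motive for sharing the information?",
        "Could the source have an agenda about which we are not aware?",
    ],
    "2: Has it happened?": [
        "Could there be a simple explanation?",
        "Has your source been misled? If so, by whom?",
        "Is there a history of such an event taking place?",
    ],
    "3: Where is the evidence?": [
        "Is the information available elsewhere?",
        "What is the evidence to support the claim?",
        "Has that evidence been tested?",
    ],
    "4: What is the context?": [
        "What are the implications if the claims are true?",
        "How many people are affected and how?",
        "What data and statistics allow comparison?",
    ],
}

def unanswered(answers: dict) -> list:
    """Return every checklist question not yet answered satisfactorily."""
    return [
        question
        for questions in VERIFICATION_CHECKLIST.values()
        for question in questions
        if not answers.get(question, False)
    ]
```

A story is only as strong as its weakest unanswered question, which is why the checklist is worth working through in full before publication.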

Those of you who are new to journalism might want to print out the following checklist and put it on the wall in your newsroom as a reminder.

Fact-checking and context graphic by Media Helping Media

If the results of your research make you feel uneasy you might want to drop the story. However, even a claim presented as fact that, on investigation, proves to be untrue could still be a story. It could point to a political, commercial, or social conflict that might require investigation.

Never rule out a possible news story because the initial evidence presented proves to be shaky.

Now let’s look at point four ‘the context’ more closely.

Adding context

One dictionary definition of ‘context’ is: “the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.”

That word, ‘understood’, is important.

The role of a journalist is to enhance understanding. We do that by surrounding proven facts with data, statistics, history, and circumstances that, together, help paint a fuller picture of what has happened.

Think of it this way.

Imagine you are at home watching a series on TV. It’s the final episode of six. Just as the programme is reaching the conclusion there is a knock at the door. It’s a friend you haven’t seen for some time. You welcome them in.

As they walk through the door there is a scream from the lounge. One of the characters in the TV series has discovered the gruesome remains of a body. Your guest is shocked, but fascinated.

You offer to turn the TV off so you can chat, but they are so intrigued by what they saw on the screen that they ask you whether they could watch the programme with you, particularly as it’s reaching its conclusion. They want to know what happens next.

So you pause the programme, put the kettle on, make a cup of tea, and tell your guest about what has happened so far.

You explain who the characters are, what has taken place in previous episodes, how the situation has developed, the relationships between the characters, what clues you have picked up along the way, and how the plot has thickened to reach the point where your guest heard the scream.

And explaining the background proves to be important because your friend thought you must be watching a murder mystery, when, in fact, the series you were watching was a documentary about archaeology. The scream was from an archaeologist who had unexpectedly found mummified remains. It was not a modern-day crime thriller.

Now your guest has the context, so you can watch the end of the final episode together, with your guest informed about the background to the story and better able to understand events.

The same is true with journalism.

A colleague who was working as an intake editor on a news desk remembers receiving a call from an off-duty reporter who had just passed an overturned red double-decker bus on a London street. People were wandering around with blood pouring from wounds. Two camera crews were mobilised, but before they’d even left the building the reporter discovered that it was a film crew making a movie. The story had changed once the reporter had checked his facts and explored the context.

I made a similar mistake when reporting on a fire at an inner city block of flats in Liverpool. I reported live into the 4pm news bulletin saying that residents were trying to salvage what they could from their burning homes. I was wrong. Had I checked my facts, not made assumptions, and taken time to establish the context of events I would have discovered that I was witnessing rioting and looting. You can read about that experience and the lessons learnt here.

The challenge all journalists face is not just to report the news but also to set out the background to an event, and any related events, to help the audience understand the elements of a story they might otherwise find hard to comprehend, or from which they might draw the wrong conclusion.

Perhaps it involves researching and setting out the chronology of events that have led to the current breaking news story. These can be presented as related stories.

You might need to research the backgrounds of the characters involved as you look for any social connections to anyone else involved. These can be presented as profiles.

Essentially, what you are doing is gathering as much information as possible in order to put together the most detailed, in-depth, and informative account of what has happened.

All this illustrates that journalism helps people make sense of the world – not just what’s happening, but why it’s happening. Stories that raise questions without even attempting to address those questions are weak stories.

  • A bridge has collapsed. Why?
  • A racing driver stops his car while leading the race. Why?
  • A politician resigns. Why?

A news story without context can never be completely understood. A news source that is not verified can never be completely trusted. A claim, left unchecked, might not necessarily be a fact. And a news story without fact-checking and context could add more to the cacophony of confusion than to the enhancement of understanding.

If you found this interesting and, perhaps, helpful, you might want to check our other, related training modules.

Accuracy in journalism
The basics of fact-checking
How to identify and deal with fake news
Dealing with disinformation and misinformation
Unconscious bias and its impact on journalism


Information disorder – mapping the landscape
https://mediahelpingmedia.org/advanced/information-disorder-mapping-the-landscape/ (published Thu, 08 Aug 2019)
Claire Wardle of First Draft News sets out her 13 priority areas for further research.

Photo by Zainul Yasni on Unsplash

The following article is reproduced courtesy of First Draft News.
First draft news logo

Surge of interest in trust and truth

Over the past eighteen months, there has been a surge of interest in trust and truth in a digital age.

There have been hundreds of conferences, reports and papers on the subject.

As our understanding of the space becomes more sophisticated, it’s time to recognize thirteen smaller sub-categories, so we can undertake more targeted research, and convene workshops and conferences on more clearly defined and specific topics.

Here, I suggest thirteen sub-categories where I’m seeing specific initiatives, research or natural alliances.

It’s important to note that all these sub-categories should also be seen through an international lens. It is the one overarching theme that connects all of the following.

The thirteen spaces are:

  1. AI & Manipulation: Researching the ways that AI-generated synthetic media (otherwise known as ‘deepfakes’) will impact society, and developing tools, techniques, and tactics for identifying and verifying these types of sophisticated manipulated visual imagery.
  2. Closed Online Spaces & Messaging Apps: Researching the patterns of disinformation on private and semi-private spaces online, as well as messaging apps.
  3. Data Harvesting, Ad Tech & Micro-targeting: Researching the connections between data collection and targeted disinformation campaigns.
  4. Fact-Checking & Verification: Investigating claims made by official sources (politicians, think tanks, journalists), and investigating information, images and videos from unofficial sources on the social web.
  5. Identification of Disinformation Content & Tactics: Monitoring, verifying and providing contextual information around specific types of disinformation and the campaigns used to amplify them.
  6. Manufactured Amplification: Understanding techniques for artificially inflating disinformation campaigns, as well as attempts to distort ‘public opinion’, as when manipulating trending topics or purchasing signatures on online petitions.
  7. Media Ecosystems: Understanding how information disorder spreads across platforms and between traditional media (TV, radio and interpersonal communication).
  8. Media Literacy: Researching and evaluating best practices for teaching digital literacy in an age of information disorder.
  9. News Credibility: Developing machine-readable indicators that ensure quality information sources are given priority in social streams and search results.
  10. Polarization: Understanding the impact of polarization on the ways in which information is used, understood and shared.
  11. Policy & Regulation: Investigating the question of ‘regulation’, and ensuring it is based on clear definitions and evidence.
  12. Reporting best practices: Researching and experimenting with best practices for publishing fact-checks or debunks, particularly investigating the concepts of the ‘tipping point’ and ‘strategic silence’ to prevent providing additional oxygen to rumours, false content and amplification tactics.
  13. Trust in Media: Research and initiatives designed to improve trust in the professional media.

Note: This material first appeared on First Draft and has been reproduced here with the author’s consent. 

Information disorder – the essential glossary
https://mediahelpingmedia.org/advanced/information-disorder-the-essential-glossary/ (published Mon, 09 Jul 2018)

Image of computer screen by Markus Spiske on Unsplash

The following article is reproduced courtesy of First Draft News.
First draft news logo

Definitions and terminology matter

For the policy-makers, technology companies, politicians, journalists, librarians, educators, academics, and civil society organisations all wrestling with the challenges posed by information disorder, agreeing to a shared vocabulary is essential.

This glossary has been compiled with research support from Grace Greason, Joe Kerwin & Nic Dias. You can download a PDF of this glossary which is embedded at the foot of this piece.

An algorithm is a fixed series of steps that a computer performs in order to solve a problem or complete a task. Social media platforms use algorithms to filter and prioritize content for each individual user based on various indicators, such as their viewing behavior and content preferences. Disinformation that is designed to provoke an emotional reaction can flourish in these spaces when algorithms detect that a user is more likely to engage with or react to similar content.¹
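
As an illustration of the entry above, a toy ranking loop shows how engagement signals can reorder a feed. The posts and scoring weights here are invented; real platform algorithms are proprietary and use far more signals than this:

```python
# Invented sample posts; real platforms score many more indicators.
posts = [
    {"id": "a", "likes": 5,  "shares": 1},
    {"id": "b", "likes": 40, "shares": 12},
    {"id": "c", "likes": 9,  "shares": 3},
]

def engagement_score(post: dict) -> int:
    """Toy scoring rule: shares weighted more heavily than likes."""
    return post["likes"] + 3 * post["shares"]

# Sort so the content predicted to provoke the most engagement surfaces first.
feed = sorted(posts, key=engagement_score, reverse=True)
```

Content engineered to provoke a reaction scores well under rules like this, which is why emotionally charged disinformation can flourish in algorithmically ranked feeds.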

An API, or application programming interface, is a means by which data from one web tool or application can be exchanged with, or received by another. Many working to examine the source and spread of polluted information depend upon access to social platform APIs, but not all are created equal and the extent of publicly available data varies from platform to platform. Twitter’s open and easy-to-use API has enabled thorough research and investigation of its network, plus the development of mitigation tools such as bot detection systems. However, restrictions on other platforms and a lack of API standardization means it is not yet possible to extend and replicate this work across the social web.

Artificial intelligence (AI) describes computer programs that are “trained” to solve problems that would normally be difficult for a computer to solve. These programs “learn” from data parsed through them, adapting methods and responses in a way that will maximize accuracy. As disinformation grows in its scope and sophistication, some look to AI as a way to effectively detect and moderate concerning content. AI also contributes to the problem, automating the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently.²

Automation is the process of designing a ‘machine’ to complete a task with little or no human direction. It takes tasks that would be time-consuming for humans to complete and turns them into tasks that are completed quickly and almost effortlessly. For example, it is possible to automate the process of sending a tweet, so a human doesn’t have to actively click ‘publish’. Automation processes are the backbone of techniques used to effectively ‘manufacture’ the amplification of disinformation.

Black hat SEO (search engine optimization) describes aggressive and illicit strategies used to artificially increase a website’s position within a search engine’s results, for example changing the content of a website after it has been ranked. These practices generally violate the given search engine’s terms of service as they drive traffic to a website at the expense of the user’s experience.³

Bots are social media accounts that are operated entirely by computer programs and are designed to generate posts and/or engage with content on a particular platform. In disinformation campaigns, bots can be used to draw attention to misleading narratives, to hijack platforms’ trending lists and to create the illusion of public discussion and support.⁴ Researchers and technologists take different approaches to identifying bots, using algorithms or simpler rules based on number of posts per day.⁵

A botnet is a collection or network of bots that act in coordination and are typically operated by one person or group. Commercial botnets can include as many as tens of thousands of bots.⁶
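
The "simpler rules" for bot identification mentioned above can be sketched in a few lines. The 50-posts-per-day threshold and the function's inputs are illustrative assumptions, not a standard used by any particular platform or researcher:

```python
from datetime import datetime

# Illustrative cut-off: accounts averaging more than this many posts per
# day are flagged for review. The exact number is an assumption.
POSTS_PER_DAY_THRESHOLD = 50

def looks_automated(post_count: int, first_post: datetime, last_post: datetime) -> bool:
    """Flag an account whose average daily posting rate exceeds the threshold."""
    active_days = max((last_post - first_post).days, 1)
    return post_count / active_days > POSTS_PER_DAY_THRESHOLD
```

Rules this simple produce false positives (some enthusiastic humans post heavily), which is why researchers often combine them with algorithmic classifiers.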

Data mining is the process of monitoring large volumes of data by combining tools from statistics and artificial intelligence to recognize useful patterns. Through collecting information about an individual’s activity, disinformation agents have a mechanism by which they can target users on the basis of their posts, likes and browsing history. A common fear among researchers is that, as psychological profiles fed by data mining become more sophisticated, users could be targeted based on how susceptible they are to believing certain false narratives.⁷

Dark ads are advertisements that are only visible to the publisher and their target audience. For example, Facebook allows advertisers to create posts that reach specific users based on their demographic profile, page ‘likes’, and their listed interests, but that are not publicly visible. These types of targeted posts cost money and are therefore considered a form of advertising. Because these posts are only seen by a segment of the audience, they are difficult to monitor or track.⁸

Deepfakes is the term currently being used to describe fabricated media produced using artificial intelligence. By synthesizing different elements of existing video or audio files, AI enables relatively easy methods for creating ‘new’ content, in which individuals appear to speak words and perform actions, which are not based on reality. Although still in their infancy, it is likely we will see examples of this type of synthetic media used more frequently in disinformation campaigns, as these techniques become more sophisticated.⁹

A dormant account is a social media account that has not posted or engaged with other accounts for an extended period of time. In the context of disinformation, this description is used for accounts that may be human- or bot-operated, which remain inactive until they are ‘programmed’ or instructed to perform another task.¹⁰

Doxing or doxxing is the act of publishing private or identifying information about an individual online, without his or her permission. This information can include full names, addresses, phone numbers, photos and more.¹¹ Doxing is an example of malinformation, which is accurate information shared publicly to cause harm.

Disinformation is false information that is deliberately created or disseminated with the express purpose to cause harm. Producers of disinformation typically have political, financial, psychological or social motivations.¹²

Encryption is the process of encoding data so that it can be interpreted only by intended recipients. Many popular messaging services such as WhatsApp encrypt the texts, photos and videos sent between users. This prevents governments from reading the content of intercepted WhatsApp messages.
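
A toy round trip illustrates the idea of encoding data so only key-holders can read it. XOR with a repeating key is emphatically not real cryptography (services such as WhatsApp use vetted end-to-end protocols); it only demonstrates the encrypt/decrypt symmetry:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with a repeating key.
    Applying the same key twice restores the original message."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"meet at noon"
key = b"secret"
ciphertext = xor_cipher(message, key)

assert ciphertext != message                    # unreadable without the key
assert xor_cipher(ciphertext, key) == message   # the recipient recovers it
```

An intercepted message is just the ciphertext bytes, which is why interception alone does not reveal the content.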

Fact-checking (in the context of information disorder) is the process of determining the truthfulness and accuracy of official, published information such as politicians’ statements and news reports.¹³ Fact-checking emerged in the U.S. in the 1990s, as a way of authenticating claims made in political ads airing on television. There are now around 150 fact-checking organizations in the world,¹⁴ and many now also debunk mis- and disinformation from unofficial sources circulating online.

Fake followers are anonymous or imposter social media accounts created to portray false impressions of popularity about another account. Social media users can pay for fake followers as well as fake likes, views and shares to give the appearance of a larger audience. For example, one English-based service offers YouTube users a million “high-quality” views and 50,000 likes for $3,150.¹⁵

Malinformation is genuine information that is shared to cause harm.¹⁶ This includes private or revealing information that is spread to harm a person or reputation.

Manufactured amplification occurs when the reach or spread of information is boosted through artificial means. This includes human and automated manipulation of search engine results and trending lists, and the promotion of certain links or hashtags on social media.¹⁷ There are online price lists for different types of amplification, including prices for generating fake votes and signatures in online polls and petitions, and the cost of downranking specific content from search engine results.¹⁸

The formal definition of the term meme, coined by biologist Richard Dawkins in 1976, is an idea or behavior that spreads person to person throughout a culture by propagating rapidly, and changing over time.¹⁹ The term is now used most frequently to describe captioned photos or GIFs that spread online, and the most effective are humorous or critical of society. They are increasingly being used as powerful vehicles of disinformation.

Misinformation is information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.²⁰

Propaganda is true or false information spread to persuade an audience, but often has a political connotation and is often connected to information produced by governments. It is worth noting that the lines between advertising, publicity and propaganda are often unclear.²¹

Satire is writing that uses literary devices such as ridicule and irony to criticize elements of society. Satire can become misinformation if audiences misinterpret it as fact.²² There is a known trend of disinformation agents labelling content as satire to prevent it from being flagged by fact-checkers.

Scraping is the process of extracting data from a website without the use of an API. It is often used by researchers and computational journalists to monitor mis- and disinformation on different social platforms and forums. Typically, scraping violates a website’s terms of service (i.e., the rules that users agree to in order to use a platform). However, researchers and journalists often justify scraping because of the lack of any other option when trying to investigate and study the impact of algorithms.
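
A minimal scraping sketch, using only Python's standard library, shows the kind of extraction described above. The HTML snippet is invented for illustration, and as the entry notes, a real scraper must also weigh the site's terms of service:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag encountered in raw HTML."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Invented snippet standing in for a fetched page.
page = '<p>See <a href="/claim">the claim</a> and <a href="/rebuttal">the rebuttal</a>.</p>'
extractor = LinkExtractor()
extractor.feed(page)
```

After `feed()` runs, `extractor.links` holds the extracted URLs, ready for the kind of monitoring work researchers and computational journalists describe.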

A sock puppet is an online account that uses a false identity designed specifically to deceive. Sock puppets are used on social platforms to inflate another account’s follower numbers and to spread or amplify false information to a mass audience.²³ The term is considered by some to be synonymous with the term “bot”.

Spam is unsolicited, impersonal online communication, generally used to promote, advertise or scam the audience. Today, it is mostly distributed via email, and algorithms detect, filter and block spam from users’ inboxes. Similar technologies to those implemented in the fight against spam could potentially be used in the context of information disorder, once accepted criteria and indicators have been agreed.
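
A deliberately naive keyword rule illustrates the detect-and-filter idea in the entry above. The marker list is invented; production filters rely on many more signals, such as sender reputation and statistical models:

```python
# Invented marker phrases; real filters learn patterns from vast corpora.
SPAM_MARKERS = {"free money", "act now", "winner"}

def is_spam(message: str) -> bool:
    """Flag a message containing any known spam marker phrase."""
    text = message.lower()
    return any(marker in text for marker in SPAM_MARKERS)
```

The open question the entry raises is whether agreed indicators for information disorder could feed a classifier like this, the way spam indicators already do.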

Trolling is the act of deliberately posting offensive or inflammatory content to an online community with the intent of provoking readers or disrupting conversation. Today, the term “troll” is most often used to refer to any person harassing or insulting others online. However, it has also been used to describe human-controlled accounts performing bot-like activities.

A troll farm is a group of individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion. One prominent troll farm was the Russia-based Internet Research Agency that spread inflammatory content online in an attempt to interfere in the U.S. presidential election.²⁴

Verification is the process of determining the authenticity of information posted by unofficial sources online, particularly visual media.²⁵ It emerged as a new skill set for journalists and human rights activists in the late 2000s, most notably in response to the need to verify visual imagery during the ‘Arab Spring’.

A VPN, or virtual private network, is used to encrypt a user’s data and conceal his or her identity and location. This makes it difficult for platforms to know where someone pushing disinformation or purchasing ads is located. It is also sensible to use a VPN when investigating online spaces where disinformation campaigns are being produced.
Download a PDF of this glossary.

1 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
2 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
3 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
4 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
5 Howard, P. N. & B. Kollanyi (2016) Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum, COMPROP Research note, 2016.1, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/06/COMPROP-2016-1.pdf
6 Ignatova, T.V., V.A. Ivichev & F.F. Khusnoiarov (December 2, 2015) Analysis of Blogs, Forums, and Social Networks, Problems of Economic Transition
7 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
8 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
9 Li, Y., M.C. Chang & S. Lyu (June 11, 2018) In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking, Computer Science Department, University at Albany, SUNY
10 Ince, D. (2013) A Dictionary of the Internet (3 ed.), Oxford University Press
11 MacAllister, J. (2017) The Doxing Dilemma: Seeking a Remedy for the Malicious Publication of Personal Information, Fordham Law Review, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5370&context=fl
12 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
13 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking
14 Funke, D. (2018) Report: There are 149 fact-checking projects in 53 countries. That’s a new high, The Poynter Institute, https://www.poynter.org/news/report-there-are-149-fact-checking-projects-53-countries-thats-new-high
15 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
16 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
17 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
18 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
19 Dawkins, R. (1976) The Selfish Gene. Oxford University Press.
20 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
21 Jack, C. (2017) Lexicon of Lies, Data & Society, https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf
22 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
23 Hofileña, C. F. (Oct. 9, 2016) Fake accounts, manufactured reality on social media, Rappler, https://www.rappler.com/newsbreak/investigative/148347-fake-accounts-manufactured-reality-social-media
24 Office of the Director of National Intelligence. (2017). Assessing Russian activities and intentions in recent US elections. Washington, D.C.: National Intelligence Council, https://www.dni.gov/files/documents/ICA_2017_01.pdf.
25 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking

By Claire Wardle, with research support from Grace Greason, Joe Kerwin & Nic Dias.

Note: This material first appeared on First Draft and has been reproduced here with the author’s consent. 
