Last updated: 01/19/2021 - 10:57
Internet Sources
  • Althuis, Jente ; Haiden, Leonie (Editors). Fake News: A Roadmap / King’s Centre for Strategic Communications (KCSC), NATO Strategic Communications Centre of Excellence (NATO StratCom COE), 2018.
    The term ‘Fake News’ has become the default catchphrase for truth-seekers wishing to label inaccurate reporting, for truth-obscurers spreading malevolent assertions, and for the unprepared who simply want to shut down uncomfortable discussion. The shorthand expression ‘Fake News’ may fit neatly into tweeted messages, but willing amplifiers spread it across all media, traditional and social, without necessarily giving it a meaningful definition.
    This book explores the character, consequences, and challenges of fake news. The twists and turns that connect fake news to related buzzwords and themes are far from straightforward. It uses the image of a map to navigate the complexity of localised events, mounting pressures, and seismic shifts in the political and media landscapes that appear to have converged in recent years.
  • A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation / High-Level Expert Group (HLEG) on Fake News and Online Disinformation, 2018. The European Union’s High-Level Expert Group (HLEG) on Fake News and Disinformation published its final report on 12 March 2018, in which it suggests a definition of the phenomenon and makes a series of recommendations.
  • Bentzen, Naja. Online disinformation and the EU’s response / European Parliament: European Parliamentary Research Service (EPRS), 2017–2018.
    The proliferation of disinformation – including false news posing as factual stories – has become increasingly visible since the start of the conflict in Ukraine. While existing research indicates that a majority of people have difficulty determining when news is not factual, the EU has progressively stepped up its efforts to tackle ‘fake news’ online. Version 1 [April 2017] ; Version 2 [November 2017] ; Version 3 [May 2018]
  • Brattberg, Erik ; Maurer, Tim. Russian Election Interference: Europe’s Counter to Fake News and Cyber Attacks / Carnegie, 2018.
    Russia’s aggressive campaign targeting the 2016 U.S. election revealed not only the extent to which information and communications technologies are being used to undermine democratic processes but also the weaknesses of protection measures. The U.S. government was effectively caught off guard, once again highlighting that such interference presents a rising global threat. Comprehensive strategies and tools are clearly needed as part of a long-term, holistic approach to building resilience, but to be effective, they should be informed by the regular sharing of best practices and lessons learned between countries.
    In reaction to Russia’s disruptive campaigns in Europe and the United States, European governments took steps before and during their 2017 elections to better protect against disinformation campaigns and cyber attacks. Unsurprisingly, an examination of their efforts shows the importance of identifying risks at the local, regional, and national levels and actively engaging political parties and traditional and social media outlets. These lessons and others could provide the basis for a common, analytical framework to assess the different dimensions of risk and guide countries’ preparatory actions.
  • Disinformation and propaganda – impact on the functioning of the rule of law in the EU and its Member States / European Parliament, 2019.
    This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs and requested by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs, assesses the impact of disinformation and strategic political propaganda disseminated through online social media sites. It examines effects on the functioning of the rule of law, democracy and fundamental rights in the EU and its Member States. The study formulates recommendations on how to tackle this threat to human rights, democracy and the rule of law. It specifically addresses the role of social media platform providers in this regard.
  • Fletcher, Richard ; Cornia, Alessio ; Graves, Lucas ; Kleis Nielsen, Rasmus. Measuring the reach of “fake news” and online disinformation in Europe / Reuters Institute : University of Oxford, 2018.
    The purpose of this RISJ factsheet is to provide top-level usage statistics for the most popular sites that independent fact-checkers and other observers have identified as publishers of false news and online disinformation in two European countries: France and Italy. We focus specifically on sites that independent fact-checkers have shown to publish demonstrably false news and information, whether for profit or for ideological/political purposes.
  • Hameleers, Michael ; Van Der Meer, Toni G. ; Brosius, Anna. Feeling “disinformed” lowers compliance with COVID-19 guidelines: evidence from the US, UK, Netherlands and Germany / Misinformation Review, May 2020.
    This study indicates that, during the first phase of the coronavirus (SARS-CoV-2) pandemic in 2020, citizens from the US, UK, Netherlands, and Germany experienced relatively high levels of mis- and disinformation in their general information environment. We asked respondents to indicate the extent to which they perceived information on the coronavirus (SARS-CoV-2 and the disease it causes, COVID-19) as simply inaccurate (misinformation) or intentionally misleading (disinformation). Those who experienced misinformation were willing to seek further information and to comply with official guidelines. Individuals perceiving more disinformation – on the other hand – were less willing to seek additional information and reported lower willingness to comply with official guidelines.
  • Martens, Bertin ; Aguiar, Luis ; Gomez-Herrera, Estrella ; Mueller-Langer, Frank. The digital transformation of news media and the rise of disinformation and fake news – An economic perspective / European Commission, Joint Research Centre, 2018.
    This report contains an overview of the relevant economic research literature on the digital transformation of news markets and the impact on the quality of news. It compares various definitions of fake news, including false news and other types of disinformation, and finds that there is no consensus on this. It presents some survey data on consumer trust and quality perceptions of various sources of online news that indicate relatively high trust in legacy printed and broadcast news publishers and lower trust in algorithm-driven news distribution channels such as aggregators and social media. Still, two thirds of consumers access news via these channels. More analytical empirical evidence on the online consumption of genuine and fake news shows that strong newspaper brands continue to attract large audiences from across the political spectrum for direct access to newspaper websites. Real news consumption on these sites dwarfs fake news consumption. Fake news, however, travels faster and further on social media sites. Algorithm-driven news distribution platforms have reduced market entry costs and widened the market reach for news publishers and readers. At the same time, they separate the role of content editors from that of curators of news distribution. The latter becomes algorithm-driven, often with a view to maximizing traffic and advertising revenue. That weakens the role of trusted editors as quality intermediaries and facilitates the distribution of false and fake news content. It might lead to news market failures. News distribution platforms have recently become aware of the need to correct for these potential failures. Non-regulatory initiatives such as fact-checking, enhanced media literacy and news media codes of conduct can also contribute.
  • Nimmo, Ben. Identifying disinformation: an ABC / Institute for European Studies : Vrije Universiteit Brussel, 2016.
    One of the key challenges in countering information warfare is identifying when it is taking place. The concept of disinformation is widely understood and has been exhaustively defined; however, the currently available definitions do not allow for the operational identification of disinformation in a sufficiently rapid manner to allow for effective countermeasures. This paper argues that the essence of disinformation is the intent to deceive. While such an intent is difficult to prove, it can be inferred by reference to three key criteria, termed the “ABC approach”. These criteria are: the accuracy of factual statements, balance in reporting and the credibility of the sources chosen. This ABC approach is intended to give academics, analysts and policy-makers an operational method to determine whether disinformation has been committed in a given case.
  • Regulating disinformation with artificial intelligence / European Parliamentary Research Service, 2019.
    This study examines the consequences for freedom of expression, pluralism and the functioning of a democratic polity of the increasingly prevalent use of artificial intelligence (AI) in disinformation initiatives. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.
  • Tenove, Chris ; Buffie, Jordan ; McKay, Spencer ; Moscrop, David. Digital Threats to Democratic Elections: How Foreign Actors Use Digital Techniques to Undermine Democracy / Centre for the Study of Democratic Institutions, University of British Columbia, 2018.
    This report addresses key questions about foreign actors’ use of digital communication technologies (DCTs) to interfere in democratic elections. It does so by employing the schema of a cyber-security “threat model.” A threat model asks the following key questions: What in a system is most valued and needs to be secured? What actions could adversaries take to harm the system? Who are potential adversaries, and with what capacities and intentions? What are the system’s key vulnerabilities? What will be the most effective counter-measures to address these threats? The authors of this report draw on existing research to engage these questions and make several observations.
  • Xaudiera, Sergi ; Cardenal, Ana S. Ibuprofen narratives in five European countries during the COVID-19 pandemic / Misinformation Review, July 2020.
    We follow the trajectory on Twitter, across five European countries, of the unverified story about the adverse effects of using ibuprofen to treat the coronavirus disease 2019 (COVID-19). Our findings suggest that the impact of misinformation is massive when credible sources (e.g., elected officials, mainstream media) participate in its propagation; yet they also imply that crisis communication management has a local scope, given the greater reach and impact of regional channels in the spread and countering of misinformation. These patterns reveal both the global and local dynamics involved in the spread of misinformation. However, they are based on Twitter data, which might cast doubt on their generalizability. We discuss these and other limitations of the study, as well as some of their implications for future research, in the closing section of this article.