User:TimRJordan/sandbox/Approaches to Knowledge/2020-21/Seminar group 4/Evidence

False Balance of Evidence in Scientific Media Coverage

False balance

False balance is a type of media bias that occurs when journalists present the arguments of two opposing sides in a debate as equally valid, despite significantly stronger evidence in support of one side.[1] Whereas the most commonly discussed types of media bias, such as coverage bias, gatekeeping bias, and statement bias, arise when journalists neglect the journalistic norm of balance, false balance may actually occur when journalists attempt to adhere to that norm with the intention of avoiding bias. False balance is especially common in science journalism, where evidence is rarely balanced between two opposing opinions. To present an issue as balanced, journalists must therefore exaggerate the amount or validity of evidence on one side, or omit evidence from the other, which contributes to misinformation.[2]

While journalists can contribute to false balance subconsciously, they can also be motivated by their own interests: as the historians of science Naomi Oreskes and Erik M. Conway argue in Merchants of Doubt, giving false balance to the minority view can "keep the controversy alive" in sensational issues such as climate change.[3] As the media are the American public's main source of scientific news and information today,[4] this false balance in scientific media coverage, especially of climate change, creates a significant discrepancy between the scientific consensus and public perception.

False balance in climate change coverage

An example of false balance in the media is the coverage of climate change. Several studies published in peer-reviewed journals show that 97-98 per cent of actively publishing climate scientists hold the view that human activities are the main cause of the past century's global warming,[5] and since the American Association of Petroleum Geologists dropped its dissent in 2007, no national or international scientific body maintains a formal position of dissent from this view.[6]

Despite this overwhelming scientific consensus on the causes of climate change and the strength of evidence supporting it, studies of media content on climate change in America, Europe, and Asia have all found that the views of climate change sceptics are given disproportionate voice in the debate, misleadingly portraying the issue as a scientific controversy with balanced evidence on both sides.[7][8] A survey of 636 articles from top newspapers in the United States (the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal) between 1988 and 2002 found that coverage was almost evenly split between the scientific consensus view and the small group of climate change sceptics.[9]

False balance in COVID-19 pandemic coverage

False balance is also an element of the reporting on the ongoing COVID-19 pandemic, as news outlets have had to quickly reassign journalists with little to no medical, or even scientific, background to cover the news.[10] Christina Pazzanese, a staff writer for the Harvard Gazette, argued that these journalists lack the scientific knowledge to view evidence critically, which causes them to fall back on traditional journalistic norms such as the norm of balance; but "[i]n doing so, they lift up outlier and inaccurate counterarguments and hypotheses, unnecessarily muddying the water",[11] and end up spreading misinformation.

References

  1. Boykoff M, Boykoff J. Balance as bias: global warming and the US prestige press. Global Environmental Change [Internet]. 2004;14(2):125-136. Available from: https://www.sciencedirect.com/science/article/pii/S0959378003000669
  2. Ibid.
  3. Oreskes N, Conway E. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press; 2010.
  4. Funk C, Gottfried J, Mitchell A. How Americans Get Science News and Information [Internet]. Pew Research Center's Journalism Project. 2020 [cited 27 October 2020]. Available from: https://www.journalism.org/2017/09/20/science-news-and-information-today/
  5. J. Cook, et al, "Consensus on consensus: a synthesis of consensus estimates on human-caused global warming," Environmental Research Letters Vol. 11 No. 4, (13 April 2016); DOI:10.1088/1748-9326/11/4/048002
  6. Julie Brigham-Grette; et al. (September 2006). "Petroleum Geologists' Award to Novelist Crichton Is Inappropriate". DOI:10.1029/2006EO360008.
  7. Brüggemann M, Engesser S. Beyond false balance: How interpretive journalism shapes media coverage of climate change. Global Environmental Change [Internet]. 2017 [cited 27 October 2020];42:58-67. Available from: http://www.sciencedirect.com/science/article/pii/S0959378016305209
  8. Boykoff M, Boykoff J. Balance as bias: global warming and the US prestige press. Global Environmental Change [Internet]. 2004;14(2):125-136. Available from: https://www.sciencedirect.com/science/article/pii/S0959378003000669
  9. Ibid.
  10. Buonocore M. Media in the Time of COVID-19 (and Climate Change) - Foresight [Internet]. Foresight. 2020 [cited 10 November 2020]. Available from: https://www.climateforesight.eu/global-policy/media-in-the-time-of-covid-19/
  11. Pazzanese C. Social media used to spread, create COVID-19 falsehoods [Internet]. The Harvard Gazette. 2020 [cited 10 November 2020]. Available from: https://news.harvard.edu/gazette/story/2020/05/social-media-used-to-spread-create-covid-19-falsehoods/

Evidence in Social Anthropology

Studies in the discipline of social anthropology are first grounded in ethnographic fieldwork, which involves developing an intimate understanding of social and cultural phenomena from the perspective of the research participants, through long-term experience in their local context. Such fieldwork is considered primary evidence in social anthropological studies, since it directly records exchanges between local people and the ethnographer. Many anthropologists have emphasised the importance of these immersive experiences for fully understanding another culture, notably L. Abu-Lughod in her study of Bedouin people.[1]

Ethnographic knowledge is used to produce anthropological knowledge. Anthropological research consists of the analysis and interpretation of the anthropologist's ethnographic fieldwork and its comparison with other concepts. Social anthropological texts in this way constitute qualitative secondary evidence, as future work in the discipline can build on them or be compared with them.

Nevertheless, ethnographic evidence is considered neutral and objective compared with anthropological productions, since the author's intersectionality, that is, their intellectual, social and political positionality, inevitably shapes the analysis they make. This positionality acts as a mediation between reality and the written account the author produces.[2] Even when several fieldworks take place within the same community and the same local context, they can lead to different analyses.[3] As R. Rosaldo explained in Culture and Truth (1989), anthropologists' identity backgrounds inevitably influence their ethnographic experience and, consequently, their interpretation of it.

Even though anthropology has been rethought to accept this subjective aspect, anthropological texts can remain evidence as long as the author's positionality is not ignored. Evidence in social anthropology thus raises the notion of reflexivity,[4] which questions the possibility of an impartial study when our own epistemologies are involved.

References

  1. Abu-Lughod L., 1986. Veiled Sentiments: Honor and Poetry in a Bedouin Society. University of California Press, pp. 1-36. Available from: https://www.jstor.org/stable/10.1525/j.ctv1wxrvj
  2. Sisson Runyan A. What Is Intersectionality and Why Is It Important? | AAUP [Internet]. Aaup.org. 2020 [cited 26 October 2020]. Available from: https://www.aaup.org/article/what-intersectionality-and-why-it-important
  3. Rosaldo R. Culture & truth. Boston: Beacon Press; 2008
  4. Malterud K. Qualitative research: standards, challenges, and guidelines. The Lancet. 2001; 358(9280):483-488. Available from: https://www.sciencedirect.com/science/article/abs/pii/S0140673601056276?via%3Dihub

Evidence in Aggression

The most widely used definition of aggression is 'any act that harms another individual who is motivated to avoid such harm'. Importantly, aggression refers to behaviour that is intended to hurt another living being, physically or verbally, and this can be direct or indirect.[1] Evidence in aggression research consists mostly of correlational studies and laboratory studies.

Biological Evidence

Genetic Factors

MAOA-L is a variant of the monoamine oxidase A (MAOA) gene which causes low MAOA activity and has been linked to increased levels of aggression. The MAOA gene governs the production of the MAOA enzyme, which breaks down neurotransmitters so they can be reused. Evidence for this correlation includes a study of an extremely violent Dutch family, whose members had been convicted of rape and attempted murder, which found unusually low MAOA activity in their brains.[2]

Isolating genetic factors has been a challenge for researchers; the difficulty lies in separating genetic from environmental factors. Frazzetto et al. provide evidence for the interaction between these factors: within their sample, only participants who had experienced significant trauma in early life demonstrated a correlation between MAOA-L activity and high levels of aggression. This gene-environment (GxE) interaction is often referred to as diathesis-stress.
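
A hedged illustration of what this diathesis-stress pattern looks like statistically: the simulation below (all numbers invented, not Frazzetto et al.'s data) generates aggression scores that rise only when both the gene variant and early trauma are present, then fits a regression with an interaction term.

```python
# Toy GxE simulation: aggression depends on the *combination* of the
# low-activity variant and early trauma, not on either factor alone.
# All data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
maoa_l = rng.integers(0, 2, n)   # 1 = carries the hypothetical low-activity variant
trauma = rng.integers(0, 2, n)   # 1 = significant early-life trauma
aggression = 2.0 * maoa_l * trauma + rng.normal(0, 1, n)  # interaction only

df = pd.DataFrame({"aggression": aggression, "maoa_l": maoa_l, "trauma": trauma})
model = smf.ols("aggression ~ maoa_l * trauma", data=df).fit()
# Only the maoa_l:trauma interaction coefficient should be large;
# the main effects on their own stay near zero.
print(model.params)
```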

As there is no standardised way of measuring aggression, researchers use a range of methods, including self-reports, teacher and parent reports, and direct observations. This can prove problematic when results differ based on the method used. In a meta-analysis of 51 twin and adoption studies by Rhee and Waldman, genetic factors were found to have a greater effect on aggression in studies that used self-reports rather than teacher or parent reports.[3]

Social Psychological Evidence

The social learning theory

Bandura observed that aggression can be learnt directly, through operant conditioning, and indirectly, through observational learning. He described vicarious reinforcement: a child is reinforced to behave aggressively if they see it work effectively for their models, i.e. when aggressive behaviour results in desirable consequences.[4] Bandura's famous Bobo doll study supports this hypothesis. In the study, children individually watched an adult model behaving aggressively towards a 'Bobo doll'. To create frustration, the participants were not allowed to play with other toys between their observation time and their time with the doll. The results showed that the children mirrored the aggressive behaviour they had seen earlier, without being provoked. Children in another group, who observed a non-aggressive model, showed hardly any aggressive behaviour towards the doll.[5]

In certain cultures, such as the !Kung San of the Kalahari Desert, social norms do not permit aggression, so aggressive behaviour is not reinforced directly or indirectly.[6] Yet even without the presence of a model, !Kung San children still displayed aggressive behaviour.[7] This perhaps suggests there is an instinctive element to aggression that a biological approach might better explain, or that social learning theory applies only to Western cultures, since the theory struggles to interpret this specific finding.

Media Influences on Aggression

The effects of computer games

Evidence for the effects of computer games on aggression includes laboratory experiments, correlational studies and longitudinal studies. Anderson et al. completed a meta-analysis of this evidence and found a correlation between exposure to violent computer games and aggressive behaviour, present in both sexes and across individualist and collectivist cultures.[8]

A correlation, however, is not causation: exposure to violent computer games and aggressive behaviour can be closely associated without one causing the other, and a correlation does not indicate the direction of causality. It is also possible that a third variable, a confounder, is responsible for the aggressive behaviour; a correlation cannot detect this, as it only considers two variables.[9]
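
A toy simulation makes the confounder point concrete: below, a hidden third variable drives both gaming exposure and aggression, so the two correlate strongly even though neither causes the other (all data invented).

```python
# Spurious correlation via a confounder: 'gaming' and 'aggression' share a
# common cause but have no causal link to each other. Invented data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 1000
confounder = rng.normal(0, 1, n)               # hidden third variable
gaming = confounder + rng.normal(0, 1, n)      # driven by the confounder
aggression = confounder + rng.normal(0, 1, n)  # driven by the confounder

r, p = pearsonr(gaming, aggression)
print(f"r = {r:.2f} (p = {p:.2g})")  # sizeable r despite zero causal effect

# Removing the confounder's contribution removes the association:
r_partial, _ = pearsonr(gaming - confounder, aggression - confounder)
print(f"r after removing the confounder = {r_partial:.2f}")  # near zero
```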

References

  1. Richardson D. Everyday Aggression Takes Many Forms. Sage Publications, Inc. on behalf of Association for Psychological Science; 2014.
  2. [Internet]. 2020. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4306065/
  3. Flanagan C, Berry D, Jarvis M, Liddle R. AQA Psychology for A Level Year 2 - Student Book. Cheltenham: Illuminate Publishing; 2016.
  4. Mcleod S. Albert Bandura | Social Learning Theory | Simply Psychology [Internet]. Simplypsychology.org. 2020 [cited 25 October 2020]. Available from: https://www.simplypsychology.org/bandura.html
  5. Nolen J. Bobo doll experiment. Encyclopædia Britannica; 2020.
  6. Draper P. The Learning Environment for Aggression and Anti-Social Behavior among the !Kung. Lincoln: University of Nebraska; 1978.
  7. Christiansen K, Winkler E. Hormonal, anthropometrical, and behavioral correlates of physical aggression in !Kung San men of Namibia. Aggressive Behavior. 1992;18(4):271-280.
  8. Anderson CA, Shibuya A, Ihori N, Swing EL, Bushman BJ, Sakamoto A, et al. Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: a meta-analytic review. Psychological Bulletin [Internet]. 2010 [cited 25 October 2020];136(2):151-173. Available from: https://www.apa.org/pubs/journals/releases/bul-136-2-151.pdf
  9. Ranganathan P, Aggarwal R. Common pitfalls in statistical analysis: The use of correlation techniques. Perspectives in Clinical Research. 2016;7(4):187.

Evidence in Urban Planning

Urban planning is an interdisciplinary field of study which can involve ideas and methods from a diverse range of disciplines, such as engineering, architecture, environmental studies, political science, economics, social sciences, law, health, and history.[1] Its main objective is to improve quality of life by planning out the infrastructure of urban, suburban or rural areas.[2] When making a planning decision, such as choosing the location of a bus stop, an urban planner will consider its effects on education, inequality, public health, access to culture, and pollution, and will try to make the decision that has the most positive overall effect on these issues.

Epistemological problems

However, the diversity within urban planning creates epistemological problems. Since the methods of gathering evidence are so varied, ranging from surveys and interviews to scientific experiments and measurements, the evidence can be of either a quantitative or a qualitative nature, which makes it difficult for scholars to agree on what constitutes valid evidence. For instance, certain scholars called 'New Humanists', such as Friedmann[3] or Goldstein,[4] subscribe to the Frankfurt School's theory of science, which stipulates that the scientific method is used by the upper class to further control the population.[5] They therefore disregard all empirical and quantitative evidence and, according to Marc Los, "emphasize personal knowledge, communication and dialogue as a way of solving society's problems".[5] Other theorists, such as Simon (1969), defend a 'rational paradigm' of planning, which presents the planner as a problem-solver proposing several mathematical models to solve an issue.[5] In this case, the planner is not even considered responsible for determining the issue or engaging with it, only for solving it.
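
As a toy instance of this 'rational paradigm', the sketch below recasts the bus-stop example from the introduction as a bare optimisation problem: among invented candidate sites, pick the one minimising residents' total walking distance. Real planning models are far richer, which is precisely the reduction the New Humanists object to.

```python
# Rational-paradigm toy model: choose the bus stop site that minimises the
# total straight-line walking distance for residents. Coordinates invented.
import math

residents = [(0.2, 1.1), (1.5, 0.3), (2.0, 2.2), (0.8, 1.9), (1.1, 1.0)]
candidates = {"Site A": (1.0, 1.0), "Site B": (2.0, 1.5), "Site C": (0.5, 0.5)}

def total_distance(site):
    sx, sy = site
    return sum(math.hypot(sx - rx, sy - ry) for rx, ry in residents)

best = min(candidates, key=lambda name: total_distance(candidates[name]))
print(best, round(total_distance(candidates[best]), 2))
```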

Moral issues

These epistemological problems, in turn, create moral issues. Planning theories directly influence planners, who can directly or indirectly influence policies and have drastic effects on populations.[5] Disagreements in the way planners evaluate the possible consequences of policies, or assess the success of implemented measures, can lead to harmful measures being taken. According to Jose Richard Aviles, planners can have a 'technocratic' bias, which leads them to undervalue personal experience when searching for evidence and to apply policies that go against the will and interests of a community.[6] To remedy this, Aviles suggests planners make use of concepts borrowed from social work.[6] Marc Los likewise believes that urban planning should draw on concepts from other disciplines, notably the history of science and design theory, to create an 'epistemology of planning'[5] which would bring together the different theories and give more unity to the academic world of planning.

References

  1. McGill University School of Urban Planning. About Urban Planning. [online]. McGill University; 2020. [Accessed 7 November 2020]. Available from: https://www.mcgill.ca/urbanplanning/planning
  2. Branch M. C. Critical Unresolved Problems of Urban Planning Analysis. Journal of the American Institute of Planners [online]. 1978;44(1):47-59. Available from: doi:10.1080/01944367808976877
  3. Friedmann J. Retracking America: a Theory of Transactive Planning. Garden City, NY: Anchor Press; 1973.
  4. Goldstein H. Towards a Critical Theory of Planning. In: Noyelle T., Wilson R. Symposium on Planning Theory. Philadelphia: Dept. of City and Regional Planning, University of Pennsylvania; 1975. p. 24-39.
  5. a b c d e Los M. Some reflexions on epistemology, design and planning theory. In: Dear M., Scott A.J., editors. Urbanization and Urban Planning in Capitalist Society. New York: Methuen; 1981. p. 63-89.
  6. a b Aviles J. R. Viewpoint: Planners as therapists, Cities as clients. Planning Journal [online]. 2020;9:10. [Accessed 7 November 2020]. Available from: https://www.planning.org/planning/2020/oct/intersections-viewpoint/

The Role of Evidence in Archaeology

The discipline of archaeology can be defined as the retrieval and study of qualitative, primary evidence to understand human activity and its history.[1] Archaeological practice relies on evidence,[2] and its interpretation is a central challenge to the discipline, as evidence is often retrieved only long after it was first created. The task falls to the archaeologist, who must analyse the data to validate or refute their own or someone else's thesis. Making this interpretation as objective as possible is one of the key stakes of archaeology. Disagreement about how the process should be carried out has given rise to two distinct movements within archaeology: processual and postprocessual. Processual archaeology is defined by its trust in a scientific method for analysing and interpreting data, whereas postprocessual archaeology views the archaeologist as inevitably biased and unable to achieve complete objectivity, and emphasises reliance on theoretical discussions and criteria when studying evidence.[3]

The ambiguity of evidence in archaeology has prompted scholars to acknowledge that evidence on its own cannot prove or demonstrate any aspect of the human past and its ways of living, and to stress the importance of interpretation in giving meaning to evidence. The status of evidence is conferred on artefacts and ecofacts only through the work and analyses of archaeologists.[4]

Nevertheless, studying artefacts without a method can lead to extreme interpretations, as illustrated by pseudoarchaeology, also known as alternative archaeology. By rejecting scientific analysis, pseudoarchaeologists can give evidence completely different meanings.[5] The status of evidence and its meaning must therefore be handled carefully in archaeology.

References

  1. Wikipedia. Archaeology. Available from: https://en.wikipedia.org/wiki/Archaeology [Accessed 24th October 2020]
  2. Hardesty DL. Goals of Archaeology, Overview. In: Deborah M. Pearsall. (ed.) Encyclopedia of Archaeology. 2008. p. 1414-1416. Available from: https://doi.org/10.1016/B978-012373962-9.00121-7
  3. Hodder I. Interpretive Archaeology and Its Role. American Antiquity. 1991;56(1):7-18. doi:10.2307/280968. Available from: https://www.jstor.org/stable/280968
  4. Brumfiel EM. The Quality of Tribute Cloth: The Place of Evidence in Archaeological Argument. American Antiquity. 1996;61(3):453-462. doi:10.2307/281834. Available from: https://www.jstor.org/stable/281834
  5. Feder KL. Irrationality and Popular Archaeology. American Antiquity. 1984;49(3):525-541. doi:10.2307/280358. Available from: https://www.jstor.org/stable/280358

The flexibility of evidence in advertising

Advertising is a means of marketing communication which is both practised and studied. The use of evidence as a persuasive technique is a major foundation of advertising. Evidence can come in many forms, such as statistics, interviews and research data, and is often used to connote credibility and objectivity. However, history shows that so-called empirical evidence can be deployed with bias to present products in particular, and sometimes deceptive, ways.[1]

Cigarettes

Although tobacco is now commonly known to be carcinogenic and dangerous, corporate denial and propaganda presented cigarettes as healthy, a campaign that was alarmingly successful up until the 1950s and 1960s and contributed to a global lung cancer epidemic.[2] Smoking culture became highly popularised throughout the early 20th century through glamorisation in the media and cheap manufacturing.[3] However, in the 1930s, health concerns around cigarettes began to gain traction in the public eye, and tobacco companies responded proactively by capitalising on the unwavering trust the public placed in doctors.[4][5][6] These companies began to feature doctors (or actors playing them) in their advertising, alongside research that contradicted the rising health worries.

The American Tobacco Company pioneered this trend in the 1920s. Their cigarette brand Lucky Strike put out an advertisement featuring a physician next to the phrase "20,679 physicians say 'Luckies are less irritating'".[7] '20,679 physicians' flourished into a catchphrase of sorts within their advertising in the following years and saw extreme success. RJ Reynolds's "More Doctors" campaign is also one of the most notable: it referred to a 'nationwide campaign' and claimed that "More doctors smoke Camels than any other brand". RJ Reynolds even established a Medical Relations division so that it could cite real evidence on its posters.[8][9]

In the mid-1930s, tobacco advertising shifted again: some companies continued to argue that their cigarettes were comparatively healthier, while others began to claim that their cigarettes were not unhealthy at all. Additionally, physicians themselves were targeted in adverts arguing that if patients were going to keep smoking, they should at least be prescribed healthier brands. These types of advertisements featured in many established medical journals, such as the Journal of the American Medical Association.[10]

However, by the mid-1950s more and more scientific research was establishing links between tobacco smoking and cancer, leading to a decline in doctor-based adverts. In 1964 the Surgeon General's Advisory Committee on Smoking and Health reported that smoking tobacco was significantly linked to diseases such as chronic bronchitis and lung cancer, and by 1967 public health risk notifications were compulsory on packaging.[11]

Advertising has changed significantly since the 1950s, and many of these changes can be attributed to the presentation of evidence in smoking advertising.[12]

References

  1. Deighton, J. (1984). The Interaction of Advertising and Evidence. Journal of Consumer Research, [online] 11(3), pp.763–770. Available at: https://www.jstor.org/stable/2489066?seq=1 [Accessed 26 Oct. 2020].
  2. Leidner, A.J., Shaw, W.D. and Yen, S.T. (2014). An historical perspective on health-risk awareness and unhealthy behaviour: cigarette smoking in the United States 1949-1981. Health Expectations, 18(6), pp.2720–2730. Available at: https://onlinelibrary.wiley.com/doi/full/10.1111/hex.12246 [Accessed 21st October 2020]
  3. Rodrigues, J. (2009). When smoking was cool, cheap, legal and socially acceptable. [online] The Guardian. Available at: https://www.theguardian.com/lifeandstyle/2009/apr/01/tobacco-industry-marketing [Accessed 21st October 2020]
  4. Leidner, A.J., Shaw, W.D. and Yen, S.T. (2014). An historical perspective on health-risk awareness and unhealthy behaviour: cigarette smoking in the United States 1949-1981. Health Expectations, 18(6), pp.2720–2730. Available at: https://onlinelibrary.wiley.com/doi/full/10.1111/hex.12246 [Accessed 21st October 2020]
  5. www.healio.com. (n.d.). Cigarettes were once ‘physician’ tested, approved. [online] Available at: https://www.healio.com/news/hematology-oncology/20120325/cigarettes-were-once-physician-tested-approved [Accessed 21st October 2020]
  6. Hochberg, M.S. (2007). The Doctor’s White Coat: An Historical Perspective. AMA Journal of Ethics, [online] 9(4), pp.310–314. Available at: https://journalofethics.ama-assn.org/article/doctors-white-coat-historical-perspective/2007-04 [Accessed 20th October 2020]
  7. tobacco.stanford.edu. (n.d.). Stanford Research into the Impact of Tobacco Advertising. [online] Available at: http://tobacco.stanford.edu/tobacco_main/images.php?token2=fm_st002.php&token1=fm_img0101.php&theme_file=fm_mt001.php&theme_name=Doctors%20Smoking&subtheme_name=20,679%20Physicians [Accessed 21st October 2020]
  8. www.healio.com. (n.d.). Cigarettes were once ‘physician’ tested, approved. [online] Available at: https://www.healio.com/news/hematology-oncology/20120325/cigarettes-were-once-physician-tested-approved [Accessed 21st October 2020]
  9. tobacco.stanford.edu. (n.d.). Stanford Research into the Impact of Tobacco Advertising. [online] Available at: http://tobacco.stanford.edu/tobacco_main/images.php?token2=fm_st001.php&token1=fm_img0002.php&theme_file=fm_mt001.php&theme_name=Doctors%20Smoking&subtheme_name=More%20Doctors%20Smoke%20Camels [Accessed 21st October 2020]
  10. tobacco.stanford.edu. (n.d.). Stanford Research into the Impact of Tobacco Advertising. [online] Available at: http://tobacco.stanford.edu/tobacco_main/images.php?token2=fm_st174.php&token1=fm_img11867.php&theme_file=fm_mt021.php&theme_name=Targeting%20Doctors&subtheme_name=Advice%20for%20Patients [Accessed 21st October 2020]
  11. www.healio.com. (n.d.). Cigarettes were once ‘physician’ tested, approved. [online] Available at: https://www.healio.com/news/hematology-oncology/20120325/cigarettes-were-once-physician-tested-approved [Accessed 21st October 2020]
  12. World Health Organization (2020). Tobacco. [online] Who.int. Available at: https://www.who.int/news-room/fact-sheets/detail/tobacco [Accessed 21st October 2020]

The Role of Evidence in Analysis of Media Bias

Media bias is 'a portrayal of reality that is significantly and systematically distorted': in partisan media, for example, the favouring of one party over another.[1] This raises a number of theoretical, methodological and political questions.

Perceptions of Media Bias

Three common-sense perceptions of media bias are 'The Liberal Media', in which the media are left-leaning; 'Pluralism and Citizen Empowerment in the News', which claims that the media include a diversity of governmental and nongovernmental sources and that 'consumer demand drives creation of content'; and 'The Bad News Bias', which asserts that journalists overemphasise negativity, working to reinforce official agendas.[2]

A more social scientific understanding of media bias is derived from assessing competing theories. Anthony DiMaggio outlines two main theories. The first, the 'Pro-Government Bias Theory', claims that media content is moulded to the views of political officialdom through source and power indexing. The other, the 'Pro-Business, Hegemonic Bias Theory', has its roots in Marxism and states that 'media corporations promote upper-class business interests at the expense of democratic deliberation.'[2] On this view, corporate ownership of media induces censorship of corporate criticism.

Such bias can work through different mechanisms. According to Groeling, media bias can broadly be categorized into selection bias (in which the choice of what events or information to cover is skewed) and presentation bias (in which the content of covered stories is skewed).

Challenges and Methods of Measuring Media Bias

Groeling identifies two major issues facing researchers attempting to measure media bias. The first is the 'Problem of the Unobserved Population',[1] which refers to the pool of potential stories that were not selected for coverage and of which observers are therefore unaware. Scholars have attempted to overcome this challenge by making assumptions about the composition of that unobserved population. For example, Niven used equivalent cases in which political leaders engaged in comparable behaviour, such as party switching, to study partisan media bias. Such equivalent behaviour should theoretically receive even coverage, so differences in coverage can be attributed to partisanship and media bias. This strategy also highlights another factor hampering the measurement of media bias: the challenge of establishing a baseline to 'define parameters for fair coverage.'[3] Another approach is to control for structural factors. Schiffer, in 2006, controlled for candidate quality, national conditions, market forces and editorial endorsements by newspapers, which eliminated the statistical impact of partisanship on the amount of coverage candidates received. Researchers may also compare coverage across other news organisations or, finally, attempt to recreate the unobserved population by narrowing research to a specific event for which all potential news stories can be viewed and compared with those actually covered.[1]

The second major difficulty facing researchers in this field is 'The Subjectivity Problem': differing perceptions of story content by human coders undermine the reliability of content analysis. Cognitive biases such as confirmation and disconfirmation bias, selective perception, anchoring, attention bias and the clustering illusion all affect evaluations.[1] The most common attempt to overcome this issue is to model a news outlet's behaviour against an exemplar. For example, Gentzkow and Shapiro gathered a dataset 'of all phrases used by members of Congress in the 2005 Congressional Record, and identified those that are used much more frequently by one party than another.'[4] They then compared these results with an analysis of a news outlet's language to determine its congruence with that of either Republicans or Democrats.
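
The toy sketch below illustrates the general logic of this exemplar approach, not Gentzkow and Shapiro's actual procedure: phrases used far more by one party are scored as partisan, and an outlet's text is then scored by which party's phrases it echoes. The corpora here are invented stand-ins for the Congressional Record data.

```python
# Exemplar-based slant scoring, in miniature. Corpora and phrases invented.
from collections import Counter

republican_speeches = "death tax death tax tax relief personal accounts"
democrat_speeches = "estate tax estate tax tax breaks private accounts"
outlet_text = "debate over the death tax and personal accounts continued"

def bigrams(text):
    words = text.split()
    return Counter(zip(words, words[1:]))

rep, dem = bigrams(republican_speeches), bigrams(democrat_speeches)
# Positive score = used more by Republicans, negative = more by Democrats.
partisan = {g: rep[g] - dem[g] for g in set(rep) | set(dem)}

slant = sum(partisan.get(g, 0) * n for g, n in bigrams(outlet_text).items())
print("outlet slant score:", slant)  # positive here: Republican-leaning phrasing
```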

One of the most common methods for quantitatively measuring media bias is content analysis.[5] Researchers define an analysis hypothesis and gather the relevant body of news data, which is then systematically annotated by coders. The frequency of specific words or phrases, the number of articles published on a certain event, and the size and placement of a story are determined. This can be done according to a codebook of rules, known as 'deductive content analysis', or coders can read texts without instructions, known as 'inductive content analysis'. The deductive form tends to produce more statistically sound conclusions, provided the coding rules allow for accurate measurement of the variables in question. Finally, this evidence is used to accept or reject the hypothesis.[6]
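
As a minimal sketch of the deductive form, the snippet below applies an invented codebook of keyword rules to invented article snippets and tallies how many articles fall under each category; real codebooks are far richer and are applied by trained coders.

```python
# Deductive content analysis in miniature: codebook rules -> category counts.
# Codebook and articles are invented examples.
codebook = {
    "economy": {"tax", "jobs", "inflation"},
    "environment": {"climate", "emissions", "warming"},
}
articles = [
    "new tax plan promises jobs",
    "emissions rise as climate talks stall",
    "inflation fears dominate jobs report",
]

counts = {category: 0 for category in codebook}
for article in articles:
    words = set(article.split())
    for category, keywords in codebook.items():
        if words & keywords:  # the article mentions the category at least once
            counts[category] += 1

print(counts)  # {'economy': 2, 'environment': 1}
```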

Comparatively, a qualitative analysis attempts to capture subtle forms of media bias that require human interpretation. This requires manual effort and expertise, limiting the scope of social scientific study of media bias, but it is at the same time a necessary element of developing coding rules, meaning there is a qualitative underpinning even for quantitative methods.[6]

The Role of Technology in Media Bias

The emergence of 'big data' with the advancement of digital technology has created new opportunities for media bias through event and source selection, as journalists must filter through increasingly large pools of available evidence and sources.[7] The study of media bias metrics has historically sat within the social sciences and communication studies, but it is increasingly being adapted to computer and data science, with algorithms and AI allowing researchers to analyse larger datasets and visual biases such as facial expressions.[8] In this way, technology has both amplified the problem of media bias and become a tool for its analysis and measurement.

References

  1. a b c d Groeling, Tim (2013). "Media Bias by the Numbers: Challenges and Opportunities in the Empirical Study of Partisan News". Annual Review of Political Science. 16: 129–151. doi:10.1146/annurev-polisci-040811-115123.
  2. a b DiMaggio, Anthony (2018). The Politics of Persuasion: Economic Policy and Media Bias in the Modern Era. SUNY Press. ISBN 9781438463445. Retrieved 25 October 2020.
  3. Niven, David (2003). "Objective Evidence on Media Bias: Newspaper Coverage of Congressional Party Switchers". Journalism & Mass Communication Quarterly. 80: 311–326. doi:10.1177/107769900308000206.
  4. Gentzkow, Matthew; Shapiro, Jesse M. (2010). "What drives media slant? Evidence from U.S. daily newspapers". Econometrica. 78: 35–71. doi:10.3982/ECTA7195.
  5. Adkins Covert, Tawnya; Wasburn, Philo (2007). "Measuring Media Bias: A Content Analysis of Time and Newsweek Coverage of Domestic Social Issues, 1975-2000". Social Science Quarterly. 88 (3). doi:10.1111/j.1540-6237.2007.00478.x.
  6. a b Hamborg, Felix; Donnay, Karsten; Gipp, Bela (2018). "Automated identification of media bias in news articles: an interdisciplinary literature review". International Journal on Digital Libraries. 20: 391–415. doi:10.1007/s00799-018-0261-y.
  7. Davies, William (9 September 2019). "Why can't we agree on what's true any more?". The Guardian.
  8. Wilner, Tamar (9 January 2018). "We can probably measure media bias. But do we want to?". Columbia Journalism Review.

Manipulating Evidence: "Fake News" and Mail-in Voter Fraud in the US Political System

Recent discourse surrounding the handling and regulation of evidence within political science and digital media argues that the media have blurred the boundaries between fraud committed by voters and fraud committed by governing officials. Allegations of fraud can generate a "Rashomon effect", in which the same event receives contradictory accounts from different observers.[1] Acknowledging the evidence that exposes the truth is useful if it can refute or verify the opposition's proof. Due to the Rashomon effect, opinions can often be shaped by fiction instead of fact.[2]

Fraud can be committed by a single voter or by a larger interested party, but these two very different types of fraud are contained under the same labels of vote, voter and election fraud.[3] The terms "absentee vote" and "mail-in vote" have been a source of great debate, because some believe "absentee vote" specifically denotes voters who are out of state.[4] Until fraud categories and offenders are clearly distinguished, the labels lack meaning and potentially propagate misinformation.[5]

US President Trump has claimed that "mail-in" voting is open to abuse.[6] There have been 1300 confirmed fraudulent voting cases according to the Heritage Election Fraud Database.[7] However, other interested parties argue against this. Although voter fraud was rare prior to the November 2016 elections, this did not make Trump's claim vacuous. The issue with his claim was rather the lack of primary evidence: his term "widespread voter fraud" implies that fraud is viral across the country rather than confined to specific people.[8] A 2017 study by the Brennan Center for Justice states that the rate of mail-in voter fraud is between 0.0003% and 0.0025%.[9] Although this evidence suggests that voter fraud is an anomaly, Trump has assured the public that voting is a rigged system. This creates a disconnection between Trump's "adaptable fiction" and the validity of democracy.[10] In addition to undermining the democratic system, the fear and misperception surrounding coronavirus have exacerbated questions about mail-in validity in 2020.[11]
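
A back-of-envelope conversion of the cited rates shows why the study treats such fraud as an anomaly (illustrative arithmetic only):

```python
# Convert the Brennan Center's percentage rates into ballots per million cast.
low_rate, high_rate = 0.0003 / 100, 0.0025 / 100  # percentages as fractions
per_million = 1_000_000
print(low_rate * per_million, "to", high_rate * per_million)  # 3.0 to 25.0
```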

In an epoch where misinformation is so readily available through social media, allegations and propaganda over democratic voting systems will continue. The public cannot be sure of the evidence provided, due to the ever-changing narratives surrounding issues such as mail-in voter fraud.[12]

References

  1. Lorraine C. Minnite, "What Is Voter Fraud?", The Myth of Voter Fraud, 1st ed., Cornell University Press, Ithaca; London, 2010, pp. 19–36. JSTOR, www.jstor.org/stable/10.7591/j.ctt7zgg1.5. Accessed 22 Oct. 2020.
  2. Wikipedia. Rashomon effect. Available from: https://en.wikipedia.org/wiki/Rashomon_effect
  3. Lorraine C. Minnite, "What Is Voter Fraud?", The Myth of Voter Fraud, 1st ed., Cornell University Press, Ithaca; London, 2010, pp. 19–36. JSTOR, www.jstor.org/stable/10.7591/j.ctt7zgg1.5. Accessed 22 Oct. 2020.
  4. Dictionary.com. Absentee ballot vs. mail-in ballot. Available from: https://www.dictionary.com/e/absentee-ballot-vs-mail-in-ballot/
  5. Lorraine C. Minnite, "What Is Voter Fraud?", The Myth of Voter Fraud, 1st ed., Cornell University Press, Ithaca; London, 2010, pp. 19–36. JSTOR, www.jstor.org/stable/10.7591/j.ctt7zgg1.5. Accessed 22 Oct. 2020.
  6. Reality Check Team, 'US election: Do postal ballots lead to voting fraud?', BBC NEWS, 25 September 2020, https://www.bbc.co.uk/news/world-us-canada-53353404, (last accessed: 22/10/2020)
  7. Hans A. von Spakovsky and Kaitlynn Samalis-Aldrich, "More Examples of Election Fraud Prove the Left Is in Denial", 15 October 2020, https://www.heritage.org/election-integrity/commentary/more-examples-election-fraud-prove-the-left-denial-about-it (last accessed: 22 October 2020)
  8. David Cottrell et al., An exploration of Donald Trump's allegations of massive voter fraud in the 2016 General Election, Electoral Studies, Volume 51, February 2018, pp. 123-142, https://www.sciencedirect.com/science/article/pii/S026137941730166X#fn2 (last accessed: 22 October 2020)
  9. Justin Levitt, The Truth About Voter Fraud, 9 November 2007, https://www.brennancenter.org/our-work/research-reports/truth-about-voter-fraud
  10. Jim Rutenberg, How President Trump’s false claim of voter fraud is being used to disenfranchise Americans, 30 September 2020, https://www.nytimes.com/2020/09/30/magazine/trump-voter-fraud.html
  11. Shapiro, Ilya, and James T. Knight. Election Regulation during the COVID-19 Pandemic. Cato Institute, 2020, www.jstor.org/stable/resrep26205. Accessed 22 Oct. 2020.
  12. Wikipedia. Fake news. Available from: https://en.wikipedia.org/wiki/Fake_news

Evidence in Experimental Philosophy

What is Experimental Philosophy?

Experimental philosophy is a relatively new area of philosophical inquiry that originated from the idea of fusing philosophy with the experimental rigor of the social sciences.[1] It is characterised by its interdisciplinary approach and use of empirical data as its main source of evidence. It is thereby distinguished from traditional philosophical methodology, which revolves around the creation of arguments based on a priori justifications.[1]

Methodology and Focus of the Studies

By taking advantage of the rigor of quantitative research, experimental philosophers conduct studies aimed at understanding the intuitions of ordinary people. The empirical data are usually gathered through series of surveys or interviews, which then form the basis for conceptual analysis of the philosophical questions posed.[1]

Much of the research in experimental philosophy focuses on presenting evidence for the defectiveness and unreliability of traditional philosophical methodology. For example, research by Cameron et al. and by Buckwalter and Stich shows that intuitions are to a great degree sensitive to gender, ethnicity and incidental emotions,[2][3] and therefore should not be regarded as a valid method for addressing philosophical problems.
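
As a hedged sketch of how such variation might be tested, the snippet below runs a chi-squared test of independence on invented survey counts of whether two demographic groups share a given intuition; it is not the actual data or analysis from these studies.

```python
# Chi-squared test: do two groups report the 'same' philosophical intuition
# at different rates? Counts are invented for illustration.
from scipy.stats import chi2_contingency

#            has the intuition, does not
table = [[70, 30],   # group A
         [45, 55]]   # group B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # small p suggests group-dependent intuitions
```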

Other projects in experimental philosophy aim to make progress in the field of epistemology. Researchers such as Gerken, Leslie and Greene have focused on assessing which intuitions are reliable and appropriate in particular domains of philosophy.[1]

Criticisms

The development of experimental philosophy has raised serious disputes among philosophers. Some have questioned whether an interdisciplinary approach and research based on empirical data are the right path for philosophy, and whether they help at all in solving its problems. Cappelen and Deutsch clearly expressed their concerns about the results of experimental studies on intuitions: they disputed their relevance and advocated the traditional methodology which, as they argued, is driven by logical arguments rather than intuitions.[4][5]

In addition, in a series of studies published in 2012, Hamid Seyedsayamdost pointed out that many of the results produced by experimental philosophers fail to meet the reproducibility principle.[6] This finding was independently confirmed by other researchers,[7] which has put the future of experimental philosophy in question.

References

  1. a b c d Knobe, Joshua; Nichols, Shaun (2017), Zalta, Edward N. (ed.), "Experimental Philosophy", The Stanford Encyclopedia of Philosophy (Winter 2017 ed.), Metaphysics Research Lab, Stanford University, retrieved 2020-10-25
  2. Buckwalter, Wesley; Stich, Stephen (2010). "Gender and Philosophical Intuition". SSRN, Rochester, NY. doi:10.2139/ssrn.1683066.
  3. Cameron, C. Daryl; Payne, B. Keith; Knobe, Joshua (2010). "Do Theories of Implicit Race Bias Change Moral Judgments?". Social Justice Research. 23 (4): 272–289. doi:10.1007/s11211-010-0118-z. ISSN 0885-7466.
  4. Cappelen, Herman (2012). Philosophy without Intuitions (1st ed.). Oxford: Oxford University Press. ISBN 978-0-19-964486-5. OCLC 759177673.
  5. Deutsch, Max (2009). "Experimental Philosophy and the Theory of Reference". Mind & Language. 24 (4): 445–466. doi:10.1111/j.1468-0017.2009.01370.x.
  6. Seyedsayamdost, Hamid (2012). "On Gender and Philosophical Intuition: Failure of Replication and Other Negative Results". SSRN Electronic Journal. doi:10.2139/ssrn.2166447. ISSN 1556-5068.
  7. Adleberg, Toni; Thompson, Morgan; Nahmias, Eddy (2015-07-04). "Do men and women have different philosophical intuitions? Further data". Philosophical Psychology. 28 (5): 615–641. doi:10.1080/09515089.2013.878834. ISSN 0951-5089.

Evidence in Cell Biology

From qualitative to quantitative data

Cell biology was founded on electron micrographs,[1] which seemed to be the type of primary evidence that could explain almost everything in a single image. With technological progress, microscopes have not only become more advanced and automated but also digital, greatly increasing the amount of data available.[2] Consequently, the intuition and assumptions of the scientist about mainly qualitative image data were replaced by the need for more quantitative proof.

Stephen Royle, the author of The Digital Cell,[3] argues that what comes out of microscopes should be treated as data rather than images. The need to automate quantitative image analysis stems from the quest for objective, unbiased and reproducible results, which are central to the scientific process.
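
In that spirit, here is a minimal sketch of an objective, reproducible image measurement, assuming numpy and scikit-image are available; the "micrograph" is synthetic, but the same steps (threshold, label, measure) apply to real image data.

```python
# Treating a micrograph as data: threshold a synthetic image with an
# objective criterion (Otsu), then count and measure the bright objects.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(42)
image = rng.normal(0.1, 0.05, (128, 128))  # noisy background
image[30:50, 30:50] += 0.5                 # two synthetic bright 'cells'
image[80:100, 70:95] += 0.5

threshold = filters.threshold_otsu(image)   # reproducible, not chosen by eye
labels = measure.label(image > threshold)   # connected bright regions
regions = measure.regionprops(labels)
print(len(regions), [int(r.area) for r in regions])  # object count and areas
```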

Role of databases

Massive databases, such as BioNumbers, which collects key numbers in molecular and cell biology, have been created to enable quick access to precise data. In the past, the inability to search through numerical evidence slowed progress and contributed to molecular biology being less quantitative than it probably should have been.[4]

However, the amount of data available does not mean that links between different datasets are easily made. Data can serve as evidence only when the ideas are organised and linked.[5] One initiative that seeks to provide a more integrated view of interaction networks, in Homo sapiens and beyond, is ConsensusPathDB.[6]
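
A small sketch of that linking problem, with invented tables: abundance numbers and interaction partners become connected evidence only once they are joined on a shared identifier (real resources such as BioNumbers and ConsensusPathDB expose far richer schemas).

```python
# Joining two toy datasets on a shared gene identifier. Values invented.
import pandas as pd

abundances = pd.DataFrame({"gene": ["TP53", "MYC", "EGFR"],
                           "copies_per_cell": [15000, 8000, 30000]})
interactions = pd.DataFrame({"gene": ["TP53", "EGFR"],
                             "partner": ["MDM2", "GRB2"]})

# Only after the join can abundance and interaction data be read together.
linked = abundances.merge(interactions, on="gene", how="inner")
print(linked)
```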

References

  1. H. Verschueren (1985). "Interference reflection microscopy in cell biology: methodology and applications". Journal of Cell Science. 75: 279–301.
  2. Li, Y.; Chen, L. (2014). "Big biological data: challenges and opportunities". Genomics, Proteomics & Bioinformatics. 12 (5): 187–189. doi:10.1016/j.gpb.2014.10.001.
  3. Royle, Stephen J. (2019-12-01). The Digital Cell: Cell Biology as a Data Science. Cold Spring Harbor Laboratory Press. ISBN 9781621822783.
  4. R. Milo; et al. (2010). "BioNumbers—the database of key numbers in molecular and cell biology". Nucleic Acids Research. 38 (1): D750–D753. doi:10.1093/nar/gkp889.
  5. N. Friedman; et al. (2000). "Using Bayesian Networks to Analyze Expression Data". Journal of Computational Biology. 7 (3–4): 601–620.
  6. A. Kamburov; et al. (2011). "ConsensusPathDB: toward a more complete picture of cell biology". Nucleic Acids Research. 39 (1): D712–D717. doi:10.1093/nar/gkq1156.

The Role of Evidence in Policy Making

What is evidence-based policy?

Evidence-based policy is about helping policy makers make astute and insightful decisions regarding public policies and regulations. It is based on the objective analysis of quality research on a given topic.[1] The practice of evidence-based policy making aims to reduce policy failures caused by gaps between actual conditions and constitutional expectations.[2]

Accessing evidence

Surprisingly, numerous decision-makers rely neither on academic research nor on internal expertise. Instead, they depend on direct advice from a select group of field experts, as well as on easy-to-read sources accessed primarily from the internet. In other words, they prefer simple, conveniently accessible evidence to sources that are more detailed and complex.[3]

Types of evidence

Evidence can be both qualitative and quantitative. Governments use illustrative studies, large-scale analyses and surveys to fine-tune their decisions, as well as case studies to better understand specific cases relevant to policy decisions.[4] Furthermore, decision-makers prefer evidence presented in bullet points, even though this format may prevent in-depth analysis.

Subjectivity in policy making

Even though the information accessed by decision-makers generally maintains good standards of evidence and objectivity, the conclusions drawn are still often subjective. Decision-makers may select studies and content that fit their argument. To better justify their decisions, policy makers sometimes even draw conclusions different from those of the research they consulted.[5] Furthermore, unconscious biases can shape the interpretation of evidence to make it suit one's original opinion on an issue. This may favour a business-as-usual approach and inhibit innovation in policy making.

References

  1. Davies, P. (2004, February 19). Is evidence-based government possible? Paper presented at the 4th Campbell Collaboration Colloquium, Washington, D.C.
  2. Howlett, M. (2009). Policy analytical capacity and evidence-based policy-making: Lessons from Canada. Canadian Public Administration, 52(2), 153-175. DOI: 10.1111/j.1754-7121.2009.00070_1.x
  3. Ritter, A. (2009). How do drug policy makers access research evidence?. International Journal Of Drug Policy, 20(1), 70-75. DOI: 10.1016/j.drugpo.2007.11.017
  4. Orton, L., Lloyd-Williams, F., Taylor-Robinson, D., O'Flaherty, M., & Capewell, S. (2011). The Use of Research Evidence in Public Health Decision Making Processes: Systematic Review. Plos ONE, 6(7). DOI: 10.1371/journal.pone.0021704
  5. Bennett, T., & Holloway, K. (2010). Is UK drug policy evidence based?. International Journal Of Drug Policy, 21(5), 411-417. DOI: 10.1016/j.drugpo.2010.02.004