Research on the effects of violence in mass media
Many social scientists support a correlation between media violence and aggression. However, some scholars argue that media research has methodological problems and that findings are exaggerated (Ferguson & Kilburn, 2009; Freedman, 2002; Pinker, 2002; Savage, 2004).
Complaints about the possible deleterious effects of mass media appear throughout history; even Plato was concerned about the effects of plays on youth. Various media and genres, including dime novels, comic books, jazz, rock and roll, role-playing and computer games, television, films, the internet (by computer or cell phone) and many others, have attracted speculation that consumers of such media may become more aggressive, rebellious or immoral. This has led some scholars to conclude that claims made by some researchers merely fit into a cycle of media-based moral panics (e.g. Gauntlett, 1995; Trend, 2007; Kutner & Olson, 2008). The advent of television prompted research into the effects of this new medium in the 1960s. Much of this research has been guided by social learning theory, developed by Albert Bandura, which suggests that one way human beings learn is by the process of modeling.
Media effects theories
Social learning theory
The findings of this experiment suggest that children tended to model the behavior they witnessed in the video, which has often been taken to imply that children may imitate aggressive behaviors witnessed in media. However, Bandura's experiments have been criticized (e.g. Gauntlett, 1995) on several grounds. First, it is difficult to generalize from aggression toward a Bobo doll (which is intended to be hit) to person-on-person violence. Second, the children may have been motivated simply to please the experimenter rather than to be aggressive; in other words, they may have viewed the videos as instructions rather than as incentives to feel more aggressive. Third, in a later study (1965) Bandura included a condition in which the adult model was punished for hitting the Bobo doll: the experimenter pushed the adult down in the video, hit him with a newspaper and berated him. This actual person-on-person violence decreased aggressive acts in the children, probably due to vicarious reinforcement. These last results indicate that even young children do not automatically imitate aggression, but rather consider its context.
Given that some scholars estimate that children's viewing of violence in media is quite common, concerns about media often follow social learning theoretical approaches.
Social cognitive theory
Moral panic theory
Failure to adequately control experimental conditions when assessing aggressive outcomes between violent and non-violent games (see Adachi & Willoughby, 2010). Traditionally, researchers have selected one violent game and one non-violent game, yet shown little consideration of the potentially different responses to these games as a result of differences in other game characteristics (e.g., level of action, frustration, enjoyment).
Failure to acknowledge the role of social contexts in which media violence is experienced. Theoretical models explaining the influence of violent video game exposure on aggressive attitudes and behaviour give little acknowledgement to the influence of social gaming experiences and contexts on these outcomes. That is, differential outcomes of gaming arise as a result of different social contexts (online versus offline gaming) and the social dynamics involved in social gaming experiences (Kaye & Bryce, 2012). Existing theoretical models assume that the outcomes of gaming are equivalent regardless of these different contexts; this is a key limitation of current theory within media violence research.
Failure to employ standardized, reliable and valid measures of aggression and media violence exposure. Although measurement of psychological variables is always difficult, it is generally accepted that measurement techniques should be standardized, reliable and valid, as demonstrated empirically. However, some scholars argue that the measurement tools involved are often unstandardized, sloppily employed and fail to report reliability coefficients. One example is the "Competitive Reaction Time Test", in which participants believe that they are punishing an opponent for losing a reaction time test by subjecting the opponent to noise blasts or electric shocks. There is no standardized way of employing this task, which may produce dozens of different possible measures of "aggression" from a single participant's data, raising the possibility that authors may select the results that support their conclusions. Without a standardized way of employing and scoring this task, there is no way of knowing whether the results reported are a valid measure of aggression, or were selected from among the possible alternatives simply because they produced positive findings where other alternatives did not. Ferguson and Kilburn, in a paper in the Journal of Pediatrics, found that poorly standardized and validated measures of aggression tend to produce higher effects than well-validated aggression measures.
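The measurement-flexibility concern above can be illustrated with a small simulation. The sketch below is purely hypothetical (the scoring rules, sample sizes and thresholds are invented for illustration, not drawn from any cited study): when two groups are drawn from the same distribution, scoring each participant's noise-blast data in five different ways and counting a study as "positive" if any score reaches significance inflates the false-positive rate above what a single pre-chosen measure would give.

```python
import random
import statistics

random.seed(1)

def summaries(trials):
    """Five plausible but arbitrary ways to score one participant's trials."""
    return {
        "mean": statistics.mean(trials),
        "max": max(trials),
        "first": trials[0],
        "sd": statistics.stdev(trials),
        "high_count": sum(t > 7 for t in trials),
    }

def t_stat(a, b):
    # Welch-style t statistic for two independent samples
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def one_experiment(n=30, trials_per_person=25):
    # Both groups drawn from the same distribution: any "effect" is noise.
    def person():
        return [random.uniform(0, 10) for _ in range(trials_per_person)]
    g1 = [summaries(person()) for _ in range(n)]
    g2 = [summaries(person()) for _ in range(n)]
    # |t| > 2.0 roughly corresponds to p < .05 at these sample sizes
    return {k: abs(t_stat([p[k] for p in g1], [p[k] for p in g2])) > 2.0
            for k in g1[0]}

n_sims = 500
any_hits = mean_hits = 0
for _ in range(n_sims):
    sig = one_experiment()
    any_hits += any(sig.values())   # "positive" if any of the five scores hits
    mean_hits += sig["mean"]        # a single pre-registered score

rate_any = any_hits / n_sims
rate_single = mean_hits / n_sims
print(f"false-positive rate, single pre-chosen measure: {rate_single:.3f}")
print(f"false-positive rate, best of five measures:     {rate_any:.3f}")
```

Because the five scores are computed from the same trials they are correlated, so the inflation is smaller than for five independent tests, but the "best of five" rate still exceeds the nominal rate of a single fixed measure.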
Failure to report negative findings. Some scholars contend that many of the articles reporting positive findings regarding a link between media violence and subsequent aggression, on a closer read, actually have negative or inconclusive results. One example is the experimental portion of Anderson & Dill (2000; with video games), which measures aggression four separate ways (using the unstandardized, unreliable and unvalidated Competitive Reaction Time Test mentioned above) and finds significance for only one of those measures. Had a statistical adjustment known as a Bonferroni correction been properly employed, that single finding also would have been non-significant. This kind of selective reporting differs from the "file drawer" effect, in which journals fail to publish articles with negative findings; rather, it arises when authors find a "mixed bag" of results and discuss only the supportive findings within a single manuscript. Non-reporting of non-significant findings is a problem throughout all areas of science, but may be a particular issue for publicized areas such as media violence.
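The Bonferroni adjustment mentioned above is simple arithmetic: to keep the familywise error rate at a chosen alpha across k tests, each individual test is held to alpha divided by k. A minimal sketch (the p value of .04 is illustrative, not taken from Anderson & Dill):

```python
# Bonferroni correction: with k tests at familywise alpha,
# each individual test must reach alpha / k to count as significant.
alpha = 0.05
k = 4  # four separate aggression measures, as in the example above
corrected = alpha / k
print(corrected)  # 0.0125

# A hypothetical result with p = .04 passes the uncorrected .05
# threshold but fails the corrected one:
p = 0.04
uncorrected_sig = p < alpha      # True
corrected_sig = p < corrected    # False
print(uncorrected_sig, corrected_sig)
```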
Failure to account for "third" variables. Some scholars contend that media violence studies regularly fail to account for other variables such as genetics, personality and exposure to family violence that may explain both why some people become violent and why those same people may choose to expose themselves to violent media. Several recent studies have found that, when factors such as mental health, family environment and personality are controlled, no predictive relationship between either video games or television violence and youth violence remains (Ferguson, San Miguel & Hartley, 2009; Ybarra et al., 2008, Figure 2).
Failure to adequately define "aggression." Experimental measures of aggression have been questioned by critics (Mussen & Rutherford, 1961; Deselms & Altman, 2003), whose main concern has been the external validity of experimental measures of aggression. The validity of the concept of aggression itself, however, is rarely questioned; highly detailed taxonomies of different forms of aggression do exist. Whether or not researchers agree on the particular terminology used to indicate particular sub-types of aggression (i.e. relational versus social aggression), concepts of aggression are always operationally defined in peer-reviewed journals. However, many of these operational definitions of aggression are specifically criticized, and many experimental measures of aggression are questionable (i.e. Mussen & Rutherford, 1961; Berkowitz, 1965; Bushman & Anderson, 2002; Deselms & Altman, 2003). Other studies fail to differentiate between "aggression" aimed at causing harm to another person and "aggressive play", in which two individuals (usually children) may pretend to engage in aggressive behavior but do so consensually for the purpose of mutual enjoyment (Goldstein).
Small "effect" sizes. In the research world, the meaning of "statistical significance" can be ambiguous, and a measure of effect size can aid in its interpretation. In a meta-analysis of 217 studies by Paik and Comstock (1994), effect sizes for experiments were r = .37 and for surveys r = .19, which are small to moderate effects. Most of these studies, however, did not actually measure aggression against another person; Paik and Comstock note that when aggression toward another person, and particularly actual violent crime, is considered, the relationship between media violence and these outcomes is near zero. Effects can vary greatly in size: the effect of eating bananas on your mood could well be "statistically significant" but tiny, almost imperceptible, whereas the effect of a death in the immediate family would also be "statistically significant" but obviously much larger. Media violence studies usually produce very small, transient effects that do not translate into large effects in the real world. Media violence researchers often defend this by stating that many medical studies also produce small effects (although, as Block and Crain, 2007, note, these researchers may have miscalculated the effect sizes from medical research).
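One common way to read the correlations reported by Paik and Comstock is to square them, giving the proportion of variance in the outcome statistically associated with the predictor. This convention is itself debated, but it illustrates why such effects are called "small to moderate"; a quick sketch:

```python
# r squared = proportion of variance explained, one conventional
# (and contested) gauge of an effect's practical size.
results = {}
for label, r in [("experiments", 0.37), ("surveys", 0.19)]:
    results[label] = r * r
    print(f"{label}: r = {r:.2f}, r^2 = {r*r:.4f} ({r*r:.1%} of variance)")
```

On this reading, even the larger experimental correlation leaves the great majority of the variance in aggression associated with factors other than media exposure.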
Media violence rates are not correlated with violent crime rates. One limitation of theories linking media violence to societal violence is that media violence (which appears to have risen consistently since the 1950s) should be correlated with violent crime (which has cycled up and down throughout human history). By discussing only the data from the 1950s through the 1990s, media violence researchers create the illusion that there is a correlation, when in fact there is not. Large spikes in violent crime in the United States occurred without associated media violence spikes during the 1880s (when records were first kept) and the 1930s; the homicide rate in the United States has never been higher than during the 1930s. Similarly, this theory fails to explain why violent crime rates (including among juveniles) fell dramatically in the mid-1990s and have stayed low, during a time when media violence has continued to increase and violent video games were introduced. Lastly, media violence researchers cannot explain why many countries with media violence rates similar to those of the U.S. (such as Norway, Canada and Japan) have much lower violent crime rates. Huesmann & Eron's own cross-national study (which is often cited in support of media violence effects) failed to find a link between television violence and aggressive behavior in most of the countries included in the analysis (including America, and even in studies on American boys).
Media violence on TV is a reflection of the level of violence that occurs in the real world. Many TV programmers argue that their shows just mirror the violence that goes on in the real world. Zev Braun, of CBS, argued in a 1990 debate on the Violence Bill that, "We live in a violent society. Art imitates modes of life, not the other way around: it would be better for Congress to clean that society than to clean that reflection of society."

Culture and media violence
The majority of this research derives from American communication and psychological research. Concerns about the 'effect' of media violence are far less prominent in public and academic discourse in Europe and other parts of the developed world. To a large degree, this is because European and Australian scholars, in particular, recognise that the relationship between media and culture is a great deal more complex than is often conceded by psychological and communications research in North America. There is a recognition that culture is critical to our understanding of these complexities, and that there are no clear causal relations between culture, media, politics and human violence; they simply work in complicated ways through and upon one another through social interactions and history.

A small study published in Royal Society Open Science on 13 March 2019 found that "both fans and non-fans of violent music exhibited a general negativity bias for violent imagery over neutral imagery regardless of the music genres."
Response to criticisms
Regarding the inconclusive nature of some findings, media researchers who argue for causal effects often contend that it is the critics who are misinterpreting or selectively reporting studies (Anderson et al., 2003). It may be that both sides of the debate are highlighting separate findings that are most favorable to their own "cause".
Regarding "third" variables, media violence researchers who argue for causal effects acknowledge that other variables may play a role in aggression (Bushman & Anderson, 2001) and that aggression is due to a confluence of variables. These variables are known as "third variables" and, if found, would probably be mediator variables (which differ from moderator variables). A mediator variable could 'explain away' media violence effects, whereas a moderator variable cannot. For instance, some scholars contend that trait aggressiveness has been demonstrated to moderate media violence effects (Bushman), although in some studies "trait aggression" appears to account for the link between media violence exposure and aggression. Other variables have also been found to moderate media violence effects (Bushman & Geen, 1990). Another issue is the way in which experimental studies deal with potential confounding variables. Researchers use random assignment to attempt to neutralize the effects of what are commonly cited as third variables (i.e. gender, trait aggressiveness, preference for violent media). Because experimental designs employ random assignment to conditions, the effect of such attributive variables on experimental results is assumed to be random (not systematic). However, the same cannot be said for correlational studies, and failure to control for such variables in correlational studies limits their interpretation. Often, something as simple as gender proves capable of "mediating" media violence effects.
Regarding aggression, the problem may have less to do with the definition of aggression than with how aggression is measured in studies, and with how "aggression" and "violent crime" are used interchangeably in the public eye.
Much of the debate on this issue seems to revolve around ambiguity regarding what is considered a "small" effect. Media violence researchers who argue for causal effects contend that effect sizes noted in media violence research are similar to those found in some medical research that is considered important by the medical community (Bushman & Anderson, 2001), although medical research may suffer from some of the same interpretational flaws as social science. This argument has been challenged as based on flawed statistics: Block and Crain (2007) found that social scientists (Bushman & Anderson, 2001) had miscalculated some medical effect sizes. The interpretation of effect size in both medical and social science remains in its infancy.
More recently, media violence researchers who argue for causal effects have acknowledged that societal media consumption and violent crime rates are not well associated, but claim that this is likely due to other variables that are poorly understood. However, this effect remains poorly explained by current media violence theories, and media violence researchers may need to be more careful not to retreat to an unfalsifiable theory – one that cannot be disproven (Freedman, 2002).
Researchers who argue for causal effects counter that the discrepancy between violent acts seen on TV and those in the real world is huge. One study compared the frequency of crimes occurring in the real world with the frequency of crimes occurring in the following reality-based TV programs: America's Most Wanted, Cops, Top Cops, FBI, The Untold Story and American Detective (Oliver, 1994). Crimes were divided into two categories, violent and non-violent: 87% of crimes occurring in the real world are non-violent, whereas only 13% of crimes occurring on TV are. However, this discrepancy between media and real-life crimes may arguably dispute rather than support media effects theories. Some previous research linked boxing matches to homicides, although other researchers consider such linkages to be reminiscent of ecological fallacies (e.g. Freedman, 2002). Much more research is required to actually establish any causal effects.