Political ads have become a fixture of modern political campaigns. Accordingly, political advertising has been studied from a variety of perspectives and methodological approaches. Even a cursory review of political advertising literature is likely to produce evidence of methodological diversity. The majority of studies examining political advertising deploy empirical methods, but researchers have also used rhetorical and interpretive methods (e.g., Gronbeck, 1992; Kates, 1998; Parmelee, Perkins, & Sayre, 2007; Reyes, 2006; Richardson, 2000; Sheckels, 2002). Researchers often use rhetorical and interpretive methods to examine specific themes or case studies, but this research informs much of the empirical research on political advertising as well.
Focus groups are among the most common qualitative methods in research on political advertising. A focus group is a facilitated discussion among a small group of recruited participants about a well-known issue or a study-specific stimulus. Focus groups are particularly well suited for in-depth investigations of how participants perceive certain phenomena. Compared to the traditional survey and experimental methods discussed throughout the rest of this chapter, focus groups allow researchers to avoid oversimplifying the communicative processes that take place within political advertising (Delli Carpini & Williams, 1994). Focus groups are frequently used to better understand how political ads are received by overlooked, time-sensitive, or difficult-to-access populations (e.g., Parmelee, Perkins, & Sayre, 2007; Sparrow & Turner, 2001).
In the following sections, we provide an overview, along with several examples, of popular empirical research methods used in political advertising research. These methods generally fall into three categories – content analysis, experiments and surveys – and we have organized this chapter to reflect them. First, we define content analysis and describe several examples of its use in the literature. Second, we discuss independent and dependent variables used in experimental research, along with the pros and cons of several types of experiments. Third, we review the advantages and disadvantages of survey methods and discuss developments related to the rise of digital media and “big data.” In each of these sections, we review methodological trends and issues in political advertising research.
Before describing different empirical approaches to the study of political advertising, we feel it necessary to first reflect on some fundamental assumptions of empirical research. Empirical approaches to the study of political advertising predominantly reflect assumptions associated with null hypothesis significance testing, a hybrid of Fisher’s significance testing and Neyman-Pearson’s model comparison approach to statistical inference. In contrast to these frequentist approaches, recent advances in software and computational efficiency have made alternative methods, notably Bayesian approaches to statistical inference, increasingly popular. Despite the growth of Bayesian methods, however, frequentist approaches continue to dominate the literature, and they inform the empirical approaches reviewed in this chapter.
Content analysis refers to the systematic categorization, or coding, of one or more communication artifacts via numeric assignment. It is perhaps the most common quantitative method used to explore and describe political advertisements. Unlike historical, critical and interpretive methods, which have also been used to identify themes and analyze political ads (Kaid, 2004), content analysis attempts to quantify distinguishing elements of the content in question. This systematic categorization, often referred to as coding, is applied to each unit of analysis, which the researchers define given the topic and purpose of the study. Each unit of analysis is coded (i.e., counted) in one or more categories as theorized by the researchers. Categories are typically defined prior to data analysis, though some exploratory research creates and uses categories derived from patterns observed during data exploration. Many researchers rely on one or (preferably) more human coders – frequently undergraduate or graduate students – though computerized content analysis has become increasingly common over the past few decades. Scholars using content analysis have employed numerous types of categories, including valence (whether political ads are positive or negative), frames (e.g., whether ads focus on issues or candidate image; Meirick, 2002), emotion, language, verbal style and tone.
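To illustrate the mechanics described above, consider a minimal Python sketch of how coded units might be represented and summarized. The ads, categories and codes here are hypothetical, and the percent agreement check at the end is a common first step when multiple human coders are used, before computing chance-corrected reliability indices.

```python
# A hypothetical illustration of content analysis coding: each unit of
# analysis (one ad) receives one code per category, and the analysis
# tallies how often each code appears.
from collections import Counter

coded_ads = [
    {"id": "ad_01", "valence": "negative", "focus": "issue"},
    {"id": "ad_02", "valence": "positive", "focus": "image"},
    {"id": "ad_03", "valence": "negative", "focus": "issue"},
]

valence_counts = Counter(ad["valence"] for ad in coded_ads)
focus_counts = Counter(ad["focus"] for ad in coded_ads)
print(valence_counts)  # Counter({'negative': 2, 'positive': 1})
print(focus_counts)    # Counter({'issue': 2, 'image': 1})

# With two human coders, simple percent agreement offers a first check
# on coding reliability (hypothetical codes for the same three ads).
coder_a = ["negative", "positive", "negative"]
coder_b = ["negative", "positive", "positive"]
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.2f}")  # 0.67
```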
One systematic content analysis method used to study political candidates’ televised advertisements is videostyle (e.g., Banwart, Bystrom, & Robertson, 2003; Bystrom, Robertson, Banwart, & Kaid, 2004; Johnston & Kaid, 2002). Videostyle as a method (Kaid & Davidson, 1986) is grounded in part in Goffman’s (1959) work on self-presentation and is the study of “the way candidates present themselves to voters through the television medium” (Johnston & Kaid, 2002, p. 285). The broadcast ad in its entirety – whether 15 seconds, 30 seconds, or one minute in length, with most samples consisting of 30-second ads – serves as the unit of analysis. Coders then search for items representing three major categories: verbal content, nonverbal content and production techniques (Kaid & Davidson, 1986).
The first category of videostyle, verbal content, focuses on the “semantic characteristics” of the advertisement’s message (Kaid & Tedesco, 1999). The variables coded, which represent the spot ad’s verbal content, include the positive or negative tone of the message, whether issues or candidate image is discussed and the content of both, language choice, appeals (e.g., ethos, logos, pathos, fear), and the presence of incumbent or challenger strategies. The second category, nonverbal content, examines “visual elements and audio elements that do not have specific semantic meaning” (Kaid & Davidson, 1986, p. 187). Variables such as the candidate’s personal appearance, kinesics, paralanguage, body language and facial expressions are analyzed. The third category covers film/video production techniques. Such techniques are central to the design of an ad, as they aid the delivery of the verbal and nonverbal messages by setting the mood and directing the viewer’s focus. Variables in this category include music, camera angles, special effects, cutting techniques and setting.
Videostyle has been widely used to analyze broadcast advertising in presidential elections (e.g., Bystrom et al., 2004; Johnston & Kaid, 2002) and to compare US and Korean presidential advertising (Tak, Kaid, & Khang, 2007). Scholars have also extended videostyle analysis to account for the influence of gender. Bystrom (1995), for instance, introduced new items identified by scholars as distinct to female and male communication styles, including candidate posture, use of touch, language intensifiers, traits identified with female communication styles (e.g., sensitive/understanding, cooperation with others), and an expanded list of issues analyzed within feminine, masculine and neutral categories.
Another approach to content analysis categorizes political ads using Benoit’s (1999) functional analysis theory. According to functional analysis, political campaign messages can attack (criticism, negative association, etc.), acclaim (praise), and/or defend (explain, respond, etc.). This method has been used to examine broadcast ads during US presidential elections (e.g., Benoit, 1999; Benoit & Glantz, 2015; Henson & Benoit, 2016), to compare web-only ads with broadcast ads from US presidential campaigns (Roberts, 2013), to analyze congressional broadcast ads (Brazeal & Benoit, 2001, 2006), and to compare broadcast advertising in US presidential races with Taiwanese (Wen, Benoit, & Yu, 2004) and Korean (Lee & Benoit, 2004) presidential races.
In much of the content analysis research on campaign advertising, persuasion theory and media effects research guide category selection. Researchers have also incorporated more applied approaches to content analysis. For example, Atkin and Heald (1976) interviewed campaign media directors to identify key communication messages in order to examine voter opinions of those messages.
Content analysis is a flexible method that also extends to effects-focused research. When combined with experimental or observational data, content analysis can be used to make claims about the potential effects of certain political advertising media. In response to the medium-to-small effect sizes commonly found in political communication research and the limitations associated with participant recall, scholars have introduced new techniques designed to increase the validity of this kind of content analysis. For example, researchers have analyzed political ads and then matched the results with levels of exposure in different geographical areas or populations (e.g., Slater, 2016). It is also possible to specify procedures in content analysis that systematically adjust the results of the analysis, which is especially useful when dealing with sampling limitations. For example, Prior (2001) proposed using weighted content analysis to distinguish between ads as aired and ads as watched. In his study, political ad buy data were used as weights in a model examining differences between the major parties in advertising tone.
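As a rough illustration of this weighting logic, consider the following Python sketch. The parties, tone codes and airing counts are fabricated and are not drawn from Prior’s (2001) data; the point is simply that weighting each coded ad by its buy can change the aggregate picture of advertising tone.

```python
# Hypothetical coded ads: (party, tone, airings), where tone is
# 1 = negative and 0 = positive, and airings serve as the weight.
ads = [
    ("D", 1, 1200),
    ("D", 0, 300),
    ("R", 1, 800),
    ("R", 0, 900),
]

for party in ("D", "R"):
    rows = [(tone, w) for p, tone, w in ads if p == party]
    # Unweighted: share of *produced* ads that are negative.
    unweighted = sum(t for t, _ in rows) / len(rows)
    # Weighted: share of *airings* that are negative.
    weighted = sum(t * w for t, w in rows) / sum(w for _, w in rows)
    print(party, f"unweighted: {unweighted:.2f}", f"weighted: {weighted:.2f}")

# Party D airs its negative ad far more often, so its weighted share of
# negativity (0.80) is much higher than its unweighted share (0.50).
```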
Scholars have taken a number of different approaches to examining the effects of political advertising, but experimental methods in particular are favored in the literature. Experiments allow researchers to isolate the unique effects of political advertising, and advancements in software have made it easier to conduct experiments entirely online (Fridkin, Kenney, & Wintersieck, 2015; Iyengar & Vavreck, 2012). Researchers have explored a wide range of possible effects, including effects on voting, candidate image, knowledge, cynicism, polarization and efficacy, among others. Political advertising experiments can be further broken down by study design and stimuli.
Experimental designs in political advertising research include pretest/post-test designs (e.g., Kaid, Postelnicu, Landreville, Yun, & LeGrange, 2007) using control or comparison groups (e.g., Broockman & Green, 2014; Valentino, Hutchings, & Williams, 2004), as well as post-test-only factorial designs (Phillips, Urbany, & Reynolds, 2008). Many of these experiments use political advertisements from real political campaigns (e.g., Kahn & Geer, 1994; Meirick, 2002; Phillips et al., 2008). Participants include general populations, a variety of targeted populations and, most commonly, college students. Although experimental methods offer greater internal validity than cross-sectional surveys, this advantage may not be enough to offset the potential shortcomings of student samples. For example, some research suggests the effects of political advertising may be larger on students (Benoit, Leshner, & Chattopadhyay, 2007). This makes sense, as college students are more likely than non-students to be politically engaged later in life (Klofstad, 2015), and they tend to be more engaged with public affairs than the general population, making them more sensitive to experimental manipulations involving politics. Student samples are thus not necessarily representative. In short, experimental methods avoid some of the problems associated with observational research methods, but results may still be shaped by the nature of the sample.
Experimental methods in political advertising research typically treat political advertisements as independent variables, though some designs examine contingencies that frame political ads – e.g., fact-checking following a negative ad (Fridkin et al., 2015). In many of these experiments, participants are randomly assigned to media containing different political ads or to different levels (or types) of exposure to political ads. For instance, participants could be assigned to watch positive ads or negative ads, or to watch several ads versus only one ad. By using political ads as stimuli, experiments provide direct evaluations of their effects. Researchers have used experiments to compare one or more political ads from radio (e.g., Shapiro & Rieger, 1992), print (e.g., Pinkleton, 1998), television (Garramone, Atkin, Pinkleton, & Cole, 1990; Roddy & Garramone, 1988), or the Internet (Broockman & Green, 2014) with a control group or with different (comparison) ads. When using a control group, researchers will often display non-political entertainment or informational media content in order to control for medium-specific effects. When using comparison groups, researchers will often maximize systematic differences by displaying several examples of each type of ad. For example, one group may be exposed to ten different issue-focused political ads while a second group is exposed to ten different image-focused political ads. Control groups allow researchers to isolate the effects of exposure to political ads; comparison groups allow researchers to isolate the effects of particular types of political ads. A minimal sketch of the random assignment step appears below.
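The following Python snippet deals participants evenly into hypothetical control and comparison conditions. The condition labels are illustrative and not taken from any cited study.

```python
# A minimal sketch of balanced random assignment to conditions.
import random

conditions = ["control", "issue_ads", "image_ads"]  # hypothetical labels
participants = [f"p{i:03d}" for i in range(1, 31)]  # 30 participant IDs

random.seed(42)               # fixed seed so the assignment is reproducible
random.shuffle(participants)  # randomize order, then deal into equal groups
assignment = {
    pid: conditions[i % len(conditions)]
    for i, pid in enumerate(participants)
}

# Each participant then views only the stimuli for their condition;
# randomization is what licenses the causal comparison across groups.
```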
In terms of the stimulus being tested, some experiments focus on individual elements of advertisements – e.g., voice-over announcers (Strach, Zuber, Fowler, Ridout, & Searles, 2015) or background music and images (Brader, 2005) – with some experiments examining audio and video separately. Most lab experiments expose participants to several ads in succession (Kaid et al., 2007) or embed them within other types of programming (Roddy & Garramone, 1988). In other words, researchers almost always expose participants to multiple ads, but, depending on the context and purpose of the study, they may also replicate different physical environments (e.g., private versus public television exposure, or personal versus laboratory computer use) or sandwich political ads between entertainment or news programming, much as exposure to political ads occurs in real life.
Researchers also take multiple approaches to measuring effects, or dependent variables. Common dependent variables include voter turnout, vote choice, political participation, perceptions of candidates, recall and affect, among others. Many studies rely on self-reports from participants, though researchers employ other measurement techniques as well, such as thought-listing (e.g., Phillips et al., 2008; Pinkleton, Um, & Austin, 2002; Schenck-Hamlin, Procter, & Rumsey, 2000; Shen, 2004) or asking participants to report the perceived effects on other people (Cohen & Davis, 1991). More recently, advancements in technology have made methods such as computer-assisted self-interviews and real-time response dials more popular as well (e.g., Iyengar & Simon, 2000; Iyengar & Vavreck, 2012; Schenck-Hamlin et al., 2000). Many of these alternatives are used not only to produce more reliable data but also to validate and improve self-report measurement strategies.
The major advantage of experimental methods is relatively straightforward: experiments give researchers unparalleled ability to ensure that manipulation occurs only on the independent variable of interest. This makes experiments uniquely suited to tests of causality. However, some scholars question the external validity of these kinds of lab experiments (Goldstein & Ridout, 2004). Lab settings allow researchers to maximize control over experiments, but they are not necessarily realistic. This raises several questions. Are advertisements presented in isolation or embedded in other media programming? Does the experiment capture the intensity or frequency of ad exposure that occurs in the real world? These questions and concerns over ecological validity have led many scholars to pursue field experiments (Goldstein & Ridout, 2004). For example, researchers have used field experiments to study exposure to political advertising on the Internet (Broockman & Green, 2014) and on television (Phillips et al., 2008).
In one particularly interesting use of field experiments, researchers worked with campaigns to randomly assign different television ad buys to different television markets during the 2006 gubernatorial campaign in Texas (Gerber, Gimpel, Green, & Shaw, 2011). Although concerns of campaign strategy led to the exclusion of two of the largest markets from random assignment, researchers were still able to manipulate levels of exposure by cooperating with campaigns to systematically vary ad buys by geographical area and population. Given the degree of control their design afforded over the actual election environment, the researchers were able to field short, low-cost surveys of voter populations at relatively high frequencies (e.g., weekly). As this study demonstrates, collecting time series data during large-scale field experiments makes it possible to trace things such as the persistence of advertising effects. Thus, while meaningful insights can still be gained from laboratory experiments, field experiments offer much needed evidence of the effects of political advertising in real media environments.
In addition to experimental methods, literature describing the effects of political advertising comes from survey research as well. Less resource intensive and easier to distribute than experiments, surveys enable the collection of observational data and offer researchers several advantages. Surveys are relatively easy for respondents to access, so respondents can complete them in more natural (and more familiar) settings. Ease of access also makes longitudinal research designs feasible for many researchers. Longitudinal data can be used to track naturally occurring exposure to political ads over time. When longitudinal research collects panel data, or repeated observations of the same respondents over multiple points in time, researchers can isolate the directional effects, or temporal sequences, of political advertisements. For example, Shah et al. (2007) used panel data to analyze the influence of exposure to political advertisements on information-seeking behaviors. In this study, researchers used panel data to examine lagged effects and time-sensitive mediation models – relationships that are not directly testable with cross-sectional data. A sketch of a simple lagged panel model follows.
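To make the logic of a lagged panel model concrete, here is a minimal sketch using pandas and statsmodels. The variable names and the tiny two-wave data set are hypothetical and only loosely echo the exposure-to-information-seeking relationship examined by Shah et al. (2007).

```python
# A minimal sketch of a lagged panel model: wave-1 ad exposure predicting
# wave-2 information seeking, controlling for wave-1 information seeking.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "exposure_w1":  [0, 1, 2, 3, 1, 2, 0, 3, 2, 1],
    "info_seek_w1": [1, 2, 2, 3, 1, 3, 1, 2, 2, 2],
    "info_seek_w2": [1, 2, 3, 4, 2, 3, 1, 4, 3, 2],
})

# Controlling for the lagged outcome addresses the temporal ordering
# that a single cross-sectional survey cannot establish.
model = smf.ols("info_seek_w2 ~ exposure_w1 + info_seek_w1", data=df).fit()
print(model.params)
```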
Although surveys are relatively low-cost and easily accessible for numerous populations, they are not without disadvantages. One notable problem concerns the endogeneity of political ad exposure. Researchers in this area often assume exposure to political advertisements is exogenous (i.e., an independent variable) when, in reality, exposure is likely influenced by media diet, political interest, geographical location and other factors.
The problem of endogeneity potentially confounds all of the identified effects of political advertising. Survey designs that rely on self-report, for example, are likely biased due to endogeneity between respondent recall and political interest. That is, people who are more interested in politics are more likely to remember seeing political ads, while people who are less interested are more likely to forget, or under-report, ad exposure. For another example, consider one of the more reliable effects of political ads found in the literature: the effect of negative ads on negativity toward campaigns. Rather than concluding that negative ads make voters more negative toward campaigns, it could be that people who are less cynical about politics tend to forget (i.e., under-report) the degree to which they were exposed to negative political ads, while people who are more cynical over-report their exposure because they pay more attention when negative political ads air. To address this problem, several studies on media effects have leveraged Nielsen ratings and media tracking technology, but these alternative approaches are not without their own problems. Nielsen estimates are still susceptible to sampling error, and they do not reflect the degree to which people paid attention to the media. Media tracking devices have the potential to record large amounts of rich data, but using tracking devices for research remains largely infeasible due to technological hurdles and prohibitive costs.
In addition to inherent limitations on a researcher’s ability to control for other variables, one of the biggest obstacles in survey research is measuring exposure to political advertising, or the amount of contact with communication messages. Recall problems, in particular, plague the measurement of media exposure (Iyengar & Vavreck, 2012; Prior, 2009, 2013). To minimize participant memory problems, scholars recommend increasing the specificity of response choices (Dilliplane, Goldman, & Mutz, 2013) and including population values or other specific referents (Prior, 2009). Despite these proposed alternatives to traditional measurement, however, research suggests recall problems remain in media effects research (Prior, 2013). Other alternatives, such as measuring proxies for political advertising like campaign spending or ad buys, offer imperfect solutions (Goldstein & Ridout, 2004). For example, several studies have used data on political ad buys as a proxy for political advertising (Goldstein & Ridout, 2004; Prior, 2001) even though ad buys do not translate directly into measurements of ad exposure. Ads are likely more expensive in New York than in Montana, but, as Goldstein and Ridout (2004) note, researchers rarely apply weights to ad buys to account for these differences in baseline costs across markets.
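The following sketch illustrates the kind of baseline-cost weighting Goldstein and Ridout (2004) describe as rarely applied. All dollar figures and per-airing costs are invented for illustration.

```python
# Hypothetical ad-buy totals and market-specific costs per 30-second spot.
buys = {"New York": 500_000, "Montana": 50_000}        # total spending ($)
cost_per_airing = {"New York": 5_000, "Montana": 500}  # baseline cost ($)

for market, spend in buys.items():
    est_airings = spend / cost_per_airing[market]
    print(f"{market}: ${spend:,} -> approx. {est_airings:.0f} airings")

# Raw spending differs tenfold, yet the estimated airings are identical,
# so unweighted spending would badly overstate exposure in New York.
```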
Problems associated with measuring political ad exposure remain, but new types of data offer some potential. The rise of digital media, for example, has made it possible to track when and where political advertisements are played (Fowler & Ridout, 2013). The use of media tracking technology (Goldstein & Ridout, 2004; Iyengar & Vavreck, 2012) appears promising as well. These advancements may suffer from limitations similar to those of traditional observational methods (e.g., ad buys), but they also create new possibilities for researchers. For example, technological advancements in tracking exposure to political advertising make it easier to merge content analysis with survey methods (Iyengar & Simon, 2000). In a study analyzing US Senate races in 1988 and 1992, Iyengar and Simon (2000) were able to replicate a data set of newspapers from that period in only six weeks. Researchers today have unprecedented access to news archives, as nearly all news organizations now organize and maintain their archives electronically.
Technological advances continue to drive innovations in research. Many scholars are currently exploring ways to access the huge amounts of digital data generated by millions of online users. Future studies attempting to track the effects of political advertisements on voters will likely benefit from data made publicly available via application programming interfaces (APIs). APIs are sets of software routines and procedures that allow individuals to request information created or maintained by a source. The New York Times, for example, allows users to request API keys associated with different sections of content. API keys function like a digital password included in API requests sent to the New York Times. APIs enable users to request information in much the same way that browsers make requests of a news organization’s website. API documentation typically describes the procedures that produce a specific call – like entering a specific URL – and response – like the loading of the webpage associated with that URL (a minimal request sketch appears below).

One promising source of online, digital-trace data made available via APIs comes from social media platforms such as Facebook and Twitter. Millions of users, young and old, frequently use these platforms to communicate with others about politics and current events, and many share links, videos and news articles with other users in their networks. This method of message transmission is particularly effective because it also serves a social-validating function. Results from a nationwide Facebook experiment during the 2010 US congressional elections found that exposure to social-validating political messages had a positive effect on voter turnout (Bond et al., 2012). Political campaigns have taken note, and they will likely continue to design ads to maximize their potential shareability on social media sites. In fact, campaigns have already started running ads specifically designed for social media sites like Facebook and Twitter. Facebook’s API offers developers some access to user ad experiences, though much of the data require permission from users via third-party applications. Future research will likely use these applications as a means of data collection, particularly for panel or experimental studies.
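Returning to the New York Times example above, the sketch below shows the basic request-and-response pattern using Python’s requests library. The endpoint follows the Times’ Article Search API, but the query is illustrative and YOUR_API_KEY is a placeholder; readers should consult the current API documentation for exact parameters and response fields.

```python
# A minimal sketch of an authenticated API request.
import requests

url = "https://api.nytimes.com/svc/search/v2/articlesearch.json"
params = {
    "q": "political advertising",  # illustrative search query
    "api-key": "YOUR_API_KEY",     # the key functions like a digital password
}

response = requests.get(url, params=params)  # the "call"
response.raise_for_status()                  # fail loudly on a bad request
data = response.json()                       # the structured "response"
for doc in data["response"]["docs"]:         # field names per NYT docs
    print(doc["headline"]["main"])
```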
Although political campaigns cannot track individual users, Facebook does allow targeted political messaging. At the time of writing, Facebook offers a feature that allows political advertisers to target “political influencers,” or very politically active Facebook users (Lapowsky, 2015). As of 2016, political campaigns can match voter files with Facebook user accounts in order to target certain users with political advertisements (Davies & Yadron, 2016). In other words, campaigns can purchase ads, identify users also found in their voter files, and then track how those users interact with the purchased advertising. As a result, studies that measure ad exposure on Facebook may also be measuring strategic, empirically driven advertising decisions made by campaigns. For example, if over time ad exposure concentrated on rural users, one might reasonably infer that the campaign in question decided to target that group of rural voters. Even without voter files, campaigns can still purchase advertising plans that target certain demographics and even use Facebook’s algorithm to identify liberal versus conservative users (Lapowsky, 2015).
Grounded in frequentist assumptions and informed by rhetorical and interpretive research, empirical research methods can be used in a variety of ways in the study of political advertising. Content analysis can be used to identify trends or claims in political ads. Experimental research can be used to isolate causal relationships – i.e., effects – between political advertising and other political behaviors. Survey research can be used to describe and track these relationships nationwide and/or over time. In this chapter we reviewed the advantages and disadvantages of each of these methodological approaches. Given the shifting nature of media technology and political communication, we can expect certain aspects of these methods, along with their advantages and disadvantages, to change over time. Although we have outlined several future directions for research methods in the study of political advertising, scholars should continue to adapt to the evolving political climate and, most importantly, the ever-changing media environment.