Methodological Approaches

Authored by: Michael W. Kearney , Mary C. Banwart

Routledge Handbook of Political Advertising

Print publication date:  March  2017
Online publication date:  February  2017

Print ISBN: 9781138908307
eBook ISBN: 9781315694504
Adobe ISBN: 9781317439783




Political ads have become a fixture of modern political campaigns. Accordingly, political advertising has been studied from a variety of perspectives and methodological approaches. Even a cursory review of political advertising literature is likely to produce evidence of methodological diversity. The majority of studies examining political advertising deploy empirical methods, but researchers have also used rhetorical and interpretive methods (e.g., Gronbeck, 1992; Kates, 1998; Parmelee, Perkins, & Sayre, 2007; Reyes, 2006; Richardson, 2000; Sheckels, 2002). Researchers often use rhetorical and interpretive methods to examine specific themes or case studies, but this research informs much of the empirical research on political advertising as well.

Use of focus groups is one of the most common methods of qualitative research on political advertising. Focus groups are facilitated and/or organized discussions among small groups of recruited participants about a well-known issue or study-specific stimulus. Focus groups are particularly well suited for in-depth investigations concerning the ways in which participants perceive certain phenomena. Compared to the traditional survey and experimental methods discussed throughout the rest of this chapter, focus groups allow researchers to avoid over-simplifications of the communicative processes that take place within political advertising (Delli Carpini & Williams, 1994). Focus groups are frequently used to better understand how political ads are received by overlooked, time-sensitive, or difficult-to-access populations (e.g., Parmelee, Perkins, & Sayre, 2007; Sparrow & Turner, 2001).

In the following sections, we provide an overview, along with several examples, of the empirical research methods most common in political advertising research. These methods generally fall into three categories – content analysis, experimental and survey – and we have organized this chapter to reflect them. First, we define content analysis and describe several examples of its use in the literature. Second, we discuss the independent and dependent variables used in experimental research, along with the pros and cons of several types of experiments. Third, we review the advantages and disadvantages of survey methods and discuss developments related to the rise of digital media and “big data.” In each of these sections, we review methodological trends and issues in political advertising research.

Empirical Research

Before describing different empirical approaches to the study of political advertising, we feel it necessary to first reflect on some of their fundamental assumptions. Empirical approaches to the study of political advertising predominantly reflect the assumptions of null hypothesis significance testing, a hybrid of Fisher’s significance testing and the Neyman-Pearson model comparison approach to statistical inference. In contrast to these frequentist approaches, recent advances in software and computational efficiency have made alternative methods, i.e., Bayesian approaches to statistical inference, increasingly popular as well. Despite the growth of Bayesian methods, however, frequentist approaches continue to dominate the literature, and it is these empirical approaches to the study of political advertising on which this chapter focuses.

Content Analysis Research

Content analysis refers to the systematic categorization, or coding, of one or more communication artifacts via numeric assignment. It is perhaps the most common quantitative method used to explore and describe political advertisements. Unlike historical, critical and interpretive methods, which have also been used to identify themes and analyze political ads (Kaid, 2004), content analysis attempts to quantify distinguishing elements of the content in question. This systematic categorization, often referred to as coding, is applied to each unit of analysis, which is defined by the researchers given the topic and purpose of the study. Each unit of analysis is coded (i.e., counted) in one or more categories as theorized by the researchers. Categories are typically defined prior to data analysis, though some exploratory research creates and uses categories derived from patterns observed in the data exploration process. Many researchers rely on one or (preferably) more human coders – frequently undergraduate or graduate students – though computerized content analysis has become increasingly common over the past few decades. Scholars using content analysis methods have employed numerous types of categories, including valence (whether political ads are positive or negative), frames (e.g., whether ads focus on issues or candidate image; Meirick, 2002), emotion, language, verbal style and tone.
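The coding logic described above can be sketched in a few lines. The following is a minimal illustration, with entirely hypothetical valence codes from two coders, of how agreement between coders might be checked with a simple percent-agreement figure and a chance-corrected statistic (Cohen’s kappa); published coding schemes use far richer category systems, and this is only one of several reliability statistics in use.

```python
from collections import Counter

# Hypothetical valence codes (1 = positive, 0 = negative) assigned by two
# independent coders to the same ten ads (the ad is the unit of analysis).
coder_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
coder_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

n = len(coder_a)

# Simple percent agreement: the proportion of units coded identically.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Cohen's kappa corrects observed agreement for chance agreement,
# estimated from each coder's marginal category proportions.
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(coder_a) | set(coder_b))
kappa = (observed - expected) / (1 - expected)

print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

With these invented codes, the two coders agree on eight of ten ads, and kappa is lower than raw agreement because both coders split their codes roughly evenly, making chance agreement substantial.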

One of the systematic content analysis methods by which political candidates’ televised advertisements have been studied is videostyle (e.g., Banwart, Bystrom, & Robertson, 2003; Bystrom, Robertson, Banwart, & Kaid, 2004; Johnston & Kaid, 2002). Videostyle as a method (Kaid & Davidson, 1986) is grounded in part in Goffman’s (1959) work on self-presentation and is the study of “the way candidates present themselves to voters through the television medium” (Johnston & Kaid, 2002, p. 285). The broadcast ad in its entirety – whether 15 seconds, 30 seconds, or one minute in length, with 30-second ads making up most samples – serves as the unit of analysis. Coders then search for items that represent three major categories of study: verbal content, nonverbal content and production techniques (Kaid & Davidson, 1986).

The first category of videostyle, verbal content, focuses on the “semantic characteristics” of the advertisement’s message (Kaid & Tedesco, 1999). The variables coded, which represent the spot ad’s verbal content, include the positive or negative tone of the message, whether issues or candidate image is discussed and the content of the issues and candidate images presented, language choice, appeals (e.g., ethos, logos, pathos, fear), and the presence of incumbent or challenger strategies. The second category, nonverbal content, examines “visual elements and audio elements that do not have specific semantic meaning” (Kaid & Davidson, 1986, p. 187). Variables such as the candidate’s personal appearance, kinesics, paralanguage, body language and facial expressions are analyzed. The third category of analysis is film/video production techniques. Such techniques are central to the design of an ad as they aid in the delivery of the verbal and nonverbal messages by setting the mood and directing the focus of the viewer. Variables in this category include music, camera angles, special effects, cutting techniques and setting.

Videostyle has been widely used to analyze broadcast advertising in presidential elections (e.g., Bystrom et al., 2004; Johnston & Kaid, 2002) and to compare US and Korean presidential advertising (Tak, Kaid, & Khang, 2007). Scholars have also advanced videostyle analysis to account for the influence of gender. Bystrom (1995), for instance, introduced new items identified by scholars as distinct to female and male communication styles, including candidate posture, use of touch, language intensifiers, traits identified with female communication styles (i.e., sensitive/understanding, cooperation with others, etc.), and an expanded list of issues analyzed within feminine, masculine and neutral categories.

Another approach to content analysis categorizes political ads using Benoit’s (1999) functional analysis theory. According to functional analysis, political campaign messages can attack (criticism, negative association, etc.), acclaim (praise), and/or defend (explain, respond, etc.). This method has been used to examine broadcast ads during US presidential elections (e.g., Benoit, 1999; Benoit & Glantz, 2015; Henson & Benoit, 2016), to compare web-only ads with broadcast ads from US presidential campaigns (Roberts, 2013), to examine congressional broadcast ads (Brazeal & Benoit, 2001, 2006), and to compare broadcast advertising in US presidential races with that in Taiwanese (Wen, Benoit, & Yu, 2004) and Korean (Lee & Benoit, 2004) presidential races.

In many of the content analysis approaches to studying campaign advertising, persuasion theory and media effects research guide category selections. Researchers have also incorporated more applied approaches to content analysis. For example, Atkin and Heald (1976) interviewed campaign media directors to identify the key communication messages, which were then used to examine voter opinions.

Content analysis is a flexible method that also extends to effects-focused research. When combined with experimental or observational data, content analysis can be used to make claims about the potential effects of certain political advertising media. In response to the medium-to-small effect sizes commonly found in political communication research and the limitations associated with participant recall, scholars have introduced new techniques designed to increase the validity of this kind of content analysis. For example, researchers have analyzed political ads and then matched the results with levels of exposure in different geographical areas or populations (e.g., Slater, 2016). It is also possible to specify procedures in content analysis that systematically adjust the results of the analysis. This is especially useful when dealing with sampling limitations. For example, Prior (2001) proposed using weighted content analysis in order to distinguish between aired and watched political ads. In his study, political ad buy data were used as weights in a model examining differences between the major parties in advertising tone.
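The intuition behind weighted content analysis can be illustrated with a toy computation. The sketch below uses entirely invented ads and airing counts, and is not a reproduction of Prior’s (2001) model; it only shows how weighting coded tone by ad-buy data shifts an estimate from ads-as-produced to ads-as-aired.

```python
# Hypothetical content-analysis results for six ads: a coded tone indicator
# (1 = negative) and an airing count taken from ad-buy data, used as a weight.
# All numbers are illustrative only.
ads = [
    {"party": "D", "negative": 1, "airings": 120},
    {"party": "D", "negative": 0, "airings": 480},
    {"party": "D", "negative": 1, "airings": 60},
    {"party": "R", "negative": 1, "airings": 300},
    {"party": "R", "negative": 0, "airings": 90},
    {"party": "R", "negative": 1, "airings": 210},
]

for party in ("D", "R"):
    spots = [a for a in ads if a["party"] == party]
    # Unweighted share: each produced ad counts once.
    produced = sum(a["negative"] for a in spots) / len(spots)
    # Weighted share: each ad counts once per airing, approximating the
    # tone of advertising as actually aired.
    aired = (sum(a["negative"] * a["airings"] for a in spots)
             / sum(a["airings"] for a in spots))
    print(f"{party}: produced = {produced:.2f}, aired = {aired:.2f}")
```

In this toy example, both parties produced the same share of negative ads, but the airing-weighted shares diverge sharply, which is precisely the kind of difference an unweighted analysis would miss.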

Experimental Research

Scholars have taken a number of different approaches to examining the effects of political advertising, but experimental methods, in particular, are favored in the literature. Experiments allow researchers to examine the unique effects of political advertising, and advancements in software have made it easier to conduct experiments entirely online (Fridkin, Kenney, & Wintersieck, 2015; Iyengar & Vavreck, 2012). Researchers have explored a wide range of possible effects, including voting, candidate image, knowledge, cynicism, polarization and efficacy, among others. Political advertising experiments can be further broken down by study design and stimuli.

Experimental designs in political advertising research include pretest/post-test designs (e.g., Kaid, Postelnicu, Landreville, Yun, & LeGrange, 2007) using control or comparison groups (e.g., Broockman & Green, 2014; Valentino, Hutchings, & Williams, 2004), as well as post-test only, factorial designs (Phillips, Urbany, & Reynolds, 2008). Many of these experiments use political advertisements from real political campaigns (e.g., Kahn & Geer, 1994; Meirick, 2002; Phillips et al., 2008). Participants in experiments include general populations, a variety of targeted populations and, most commonly, college students. Although experimental methods offer greater internal validity than cross-sectional surveys, that advantage may not be enough to correct for the potential shortcomings of student samples. College students tend to be more engaged with public affairs than the general population, making them more sensitive to experimental manipulations involving politics; indeed, some research suggests the effects of political advertising may be larger on students (Benoit, Leshner, & Chattopadhyay, 2007). This is consistent with evidence that college students are more likely than non-college students to be politically engaged later in life (Klofstad, 2015). Student samples are thus not necessarily representative. In short, experimental methods avoid some of the problems associated with observational research methods, but results may still be shaped by the nature of the sample.

Experimental methods in political advertising research typically treat political advertisements as independent variables, though some derivatives examine contingencies that frame political ads – e.g., fact-checking following a negative ad (Fridkin et al., 2015). In many of these experiments, participants are randomly assigned to media containing different political ads or to different levels (or types) of exposure to political ads. For instance, participants could be assigned to groups that watch positive ads or negative ads, or to groups that watch several ads or only one ad. By using political ads as stimuli, experiments provide direct evaluations of their effects. Researchers have used experiments to compare one or more political ads from radio (e.g., Shapiro & Rieger, 1992), print (e.g., Pinkleton, 1998), television (Garramone, Atkin, Pinkleton, & Cole, 1990; Roddy & Garramone, 1988), or the Internet (Broockman & Green, 2014) with a control group or with different (comparison) ads. When using a control group, researchers often display non-political entertainment or informational media content in order to control for medium-specific effects. When using comparison groups, researchers often maximize systematic differences by displaying several examples of each type of ad. For example, one group may be exposed to ten different issue-focused political ads while a second group is exposed to ten different image-focused political ads. Control groups allow researchers to isolate the effects of exposure to political ads; comparison groups allow researchers to isolate the effects of types of political ads.
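The random assignment step described above can be sketched in a few lines. The condition labels and participant IDs below are hypothetical; real designs would layer on consent, blocking on covariates, and so on.

```python
import random

# Hypothetical between-subjects design: participants are randomly assigned
# to a control condition (non-political content) or one of two comparison
# conditions exposing them to different types of political ads.
conditions = ["control", "issue_ads", "image_ads"]
participants = [f"P{i:03d}" for i in range(90)]

rng = random.Random(42)   # fixed seed so the assignment is reproducible
rng.shuffle(participants)

# Balanced assignment by striding through the shuffled list: equal group
# sizes guard against the chance imbalance that per-person coin flips
# can produce in small samples.
groups = {c: participants[i::3] for i, c in enumerate(conditions)}

for c in conditions:
    print(c, len(groups[c]))
```

Because the shuffle, not the stride, carries the randomness, every participant has the same probability of landing in each condition while group sizes stay exactly equal.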

In terms of the stimulus being tested, some experiments focus on individual elements of advertisements – e.g., voice-over announcers (Strach, Zuber, Fowler, Ridout, & Searles, 2015), background music and images (Brader, 2005), etc. – with some experiments examining audio and video separately. Most lab experiments expose participants to several ads in succession (Kaid et al., 2007) or embed them within other types of programming (Roddy & Garramone, 1988). In other words, researchers almost always expose participants to multiple ads, but, depending on the context and purpose of the study, they may also replicate different physical environments (e.g., private versus public television exposure, or personal versus laboratory computer use) or sandwich political ads between entertainment or news programming – much as exposure to political ads occurs in real life.

Researchers also take multiple approaches to measuring the effects, or dependent variables. Common dependent variables include voter turnout, vote choice, political participation, perceptions of candidates, recall, affect and others. Many studies rely on self-reports from participants, though researchers employ other measurement techniques as well, such as thought-listing techniques (e.g., Phillips et al., 2008; Pinkleton, Um, & Austin, 2002; Schenck-Hamlin, Procter, & Rumsey, 2000; Shen, 2004) or asking participants to report on the perceived effects on other people (Cohen & Davis, 1991). More recently, advancements in technology have made other methods such as computer-assisted self-interviews and dials more popular as well (e.g., Iyengar & Simon, 2000; Iyengar & Vavreck, 2012; Schenck-Hamlin et al., 2000). Many of these alternatives to self-report are used not only to produce more reliable data but also to improve the reliability of self-report measurement strategies.

The major advantage of using experimental methods is relatively straightforward. Experiments give researchers unparalleled ability to ensure that manipulation only occurs on the independent variable of interest. This makes experiments uniquely suited for tests of causality. However, some scholars consider the external validity of these kinds of lab experiments to be in question (Goldstein & Ridout, 2004). Lab settings allow researchers to maximize control over experiments, but they are not necessarily realistic. This raises several questions. Are advertisements presented in isolation or embedded in other media programming? Does the experiment capture the intensity or frequency of ad exposure that happens in the real world? These questions and concerns over ecological validity have caused many scholars to pursue field experiments (Goldstein & Ridout, 2004). For example, researchers have used field experiments to examine exposure to political advertising on the Internet (Broockman & Green, 2014) and on television (Phillips et al., 2008).

In one particularly interesting use of field experiments, researchers worked with campaigns to randomly assign different television ad buys to different television markets during the 2006 gubernatorial campaign in Texas (Gerber, Gimpel, Green, & Shaw, 2011). Although concerns of campaign strategy led to the exclusion of two of the largest markets from random assignment, researchers were still able to manipulate levels of exposure by cooperating with campaigns to systematically vary ad buys by geographical area and population. Given the degree of control and influence their design had on the actual election, the researchers were able to administer short, low-cost surveys of voter populations at relatively high frequencies (e.g., weekly). As this study demonstrates, the collection of time series data during large-scale field experiments makes it possible to trace things such as the persistence of advertising effects. Thus, while meaningful insights can still be gained from laboratory experiments, field experiments offer much needed evidence of the effects of political advertising in real media environments.

Survey Research

In addition to experimental methods, literature describing the effects of political advertising comes from survey research as well. Less resource intensive and easier to distribute than experiments, surveys enable the collection of observational data. Survey research methods offer researchers several advantages. Surveys are relatively easy for respondents to access, so respondents can complete them in more natural (and more familiar) settings. Ease of access even makes it feasible for many researchers to use longitudinal research designs. Longitudinal data can be used to track naturally occurring exposure to political ads over time. When longitudinal research consists of collecting panel data, or repeated observations over multiple points in time, researchers can isolate the directional effects, or temporal sequences, of political advertisements. For example, Shah et al. (2007) used panel data to analyze the influence of exposure to political advertisements on information-seeking behaviors, examining lagged effects and time-sensitive mediation models – relationships that are not directly testable when analyzing cross-sectional data.
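The logic of a lagged panel analysis can be sketched with a toy bivariate example. The numbers below are invented and the model is deliberately minimal (a single closed-form OLS slope, no controls); published work such as Shah et al. (2007) uses far more elaborate models. The point is only that panel data let the predictor be measured at an earlier wave than the outcome, fixing the temporal sequence.

```python
# Hypothetical two-wave panel: ad exposure measured at wave 1 predicting
# information-seeking behavior measured at wave 2 (illustrative values only).
exposure_w1 = [0, 1, 2, 2, 3, 4, 4, 5, 6, 7]
seeking_w2 = [1.0, 1.2, 1.9, 2.1, 2.4, 3.1, 2.9, 3.6, 4.2, 4.8]

n = len(exposure_w1)
mx = sum(exposure_w1) / n
my = sum(seeking_w2) / n

# Bivariate OLS in closed form: slope = cov(x, y) / var(x).
slope = (sum((x - mx) * (y - my) for x, y in zip(exposure_w1, seeking_w2))
         / sum((x - mx) ** 2 for x in exposure_w1))
intercept = my - slope * mx

print(f"seeking_w2 ≈ {intercept:.2f} + {slope:.2f} * exposure_w1")
```

With cross-sectional data the same slope could as easily reflect seeking driving exposure; measuring exposure a wave earlier is what licenses the lagged, directional reading.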

Although surveys are relatively low-cost and easily accessible for numerous populations, they are not without disadvantages. One notable problem with survey methods concerns the endogeneity of political ad exposure. Researchers in this area often assume exposure to political advertisements is exogenous (i.e., an independent variable) when, in reality, exposure is likely influenced by media diet, political interest, geographical location, etc.

The problem of endogeneity potentially confounds all of the identified effects of political advertising. Survey designs that rely on self-report, for example, are likely biased due to endogeneity between respondent recall and political interest. That is, people who are more interested in politics are more likely to remember seeing political ads, while people who are less interested are more likely to forget, or under-report, ad exposure. For another example, consider one of the more reliable effects of political ads found in the literature: the effect of negative ads on negativity toward campaigns. Rather than negative ads making voters more negative toward campaigns, it could be that people who are less cynical about politics tend to forget (i.e., under-report) the degree to which they were exposed to negative political ads, while people who are more cynical over-report their exposure to negative ads because they pay more attention when negative ads air. To address this problem, several studies of media effects have leveraged Nielsen ratings and media tracking technology, but these alternative approaches are not without their own problems. Nielsen estimates are still susceptible to sampling error, and they do not reflect the degree to which people paid attention to the media. Media tracking devices have the potential to record large amounts of rich data, but their use for research purposes is largely not feasible due to technological hurdles and prohibitive costs.

In addition to inherent limitations on a researcher’s ability to control for other variables, one of the biggest obstacles in survey research is measuring exposure to political advertising – that is, the amount of contact with its messages. Recall problems, in particular, plague the measurement of media exposure (Iyengar & Vavreck, 2012; Prior, 2009, 2013). To minimize participant memory problems, scholars recommend increasing the specificity of response choices (Dilliplane, Goldman, & Mutz, 2013) and including population values or other specific referents (Prior, 2009). Despite these proposed alternatives to traditional measurement, however, research suggests recall problems remain in media effects research (Prior, 2013). Other proposed alternatives, such as measuring proxies for political advertising like campaign spending or ad buys, offer imperfect solutions (Goldstein & Ridout, 2004). For example, several studies have used data on political ad buys as a proxy for political advertising (Goldstein & Ridout, 2004; Prior, 2001) even though ad buys do not translate directly into measurements of ad exposure. Ads are likely more expensive in New York than they are in Montana, but, as Goldstein and Ridout (2004) note, researchers rarely weight ad buys to account for these differences in baseline costs across markets.

Problems with measuring political ad exposure remain, but new types of data offer some potential. The rise of digital media, for example, has made it possible to track when and where political advertisements are played (Fowler & Ridout, 2013). The use of media tracking technology (Goldstein & Ridout, 2004; Iyengar & Vavreck, 2012) appears promising as well. These advancements may suffer from limitations similar to those of traditional observational methods (e.g., ad buys), but they also create new possibilities for researchers. For example, technological advancements in tracking political advertising exposure make it easier to merge content analysis with survey methods (Iyengar & Simon, 2000). In a study analyzing US Senate races in 1988 and 1992, Iyengar and Simon (2000) were able to replicate a data set of newspapers from that period in only six weeks, and researchers today have unprecedented access to news archives as nearly all news organizations now organize and maintain their archives electronically.

Technological advances continue to drive innovations in research. Many scholars are currently exploring ways to access the huge amounts of digital data generated by millions of online users. Future studies attempting to track the effects of political advertisements on voters will likely benefit from data made publicly available via application programming interfaces (APIs). APIs are sets of software routines and procedures that allow individuals to request information created or maintained by a source. The New York Times, for example, allows users to request API keys associated with different sections of content. API keys function like a digital password in API requests sent to the New York Times. APIs enable users to make requests for information in much the same way that browsers make requests of a news organization’s website; API documentation typically describes the procedures that produce a specific call – like entering a specific URL – and response – like the loading of the webpage associated with that URL. One promising source of online, digital-trace data made available via APIs comes from social media platforms such as Facebook and Twitter. Millions of users, young and old, frequently use these platforms to communicate with others about politics and current events, and many share links, videos and news articles with other users in their networks. This method of message transmission is particularly effective because it also serves a socially validating function. Results from a nationwide Facebook experiment during the 2010 US congressional elections found that exposure to socially validating political messages had a positive effect on voter turnout (Bond et al., 2012). Political campaigns have taken note, and they will likely continue to design ads to maximize their potential shareability on social media sites.
In fact, campaigns have already started running ads specifically designed for social media sites like Facebook and Twitter. Facebook’s API offers developers some access to user ad experiences, though much of the data require permission from users via third-party applications. Future research will likely use these applications as a means of data collection, particularly for panel or experimental studies.
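The call-and-response pattern described above can be sketched as follows. The endpoint path, parameter names and key are all hypothetical stand-ins (providers such as the New York Times document their own), but the shape – a parameterized URL carrying an API key, returning structured data rather than a rendered page – is typical.

```python
from urllib.parse import urlencode

# Illustrative only: this endpoint and these parameter names are invented
# placeholders for whatever a real provider's API documentation specifies.
BASE = "https://api.example.com/search/v2/articlesearch.json"
API_KEY = "YOUR-KEY-HERE"  # obtained by registering with the provider

params = {
    "q": "political advertising",
    "begin_date": "20161001",
    "end_date": "20161108",
    "api-key": API_KEY,  # the key authenticates the request, like a password
}

# The request is just a URL - the same kind of request a browser makes.
# Fetching it (e.g., with urllib.request.urlopen) would return structured
# JSON rather than a rendered webpage.
url = f"{BASE}?{urlencode(params)}"
print(url)
```

A researcher would typically loop such requests over dates or search terms and store the JSON responses for later content analysis or merging with survey data.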

Although political campaigns cannot track individual users, Facebook does allow targeted political messaging. At the time of writing, Facebook offers a feature that allows political advertisers to target “political influencers,” or very politically active Facebook users (Lapowsky, 2015). As of 2016, political campaigns can match voter files with Facebook user accounts in order to target certain users with political advertisements (Davies & Yadron, 2016). In other words, campaigns can purchase ads, identify users also found in their voter files, and then track how those users interact with the purchased advertising. As a result, studies that measure ad exposure on Facebook may also be measuring strategic, empirically driven advertising decisions made by campaigns. For example, if, over time, ad exposure concentrated among rural users, one might reasonably infer that the campaign in question decided to specifically target rural voters. Even without voter files, campaigns can still purchase advertising plans that target certain demographics and even use Facebook’s algorithm to identify liberal versus conservative users (Lapowsky, 2015).

Summary and Conclusion

Grounded in frequentist assumptions and informed by rhetorical and interpretive research, empirical research methods can be used in a variety of ways in the study of political advertising. Content analysis can be used to identify trends or claims in political ads. Experimental research can be used to isolate causal relationships – i.e., effects – related to political advertising and other political behaviors. Survey research can be used to describe and track these relationships nationwide and/or over time. In this chapter we reviewed the advantages and disadvantages of each of these methodological approaches. Given the shifting nature of media technology and political communication, we can expect certain aspects of these methods, along with their advantages and disadvantages, to change over time. Although we have outlined several future directions for research methods in the study of political advertising, scholars should continue to adapt to the evolving political climate and, most importantly, the ever-changing media environment.


Atkin, C. , & Heald, G. (1976). Effects of political advertising. Public Opinion Quarterly, 40(2), 216–228.
Banwart, M. C. , Bystrom, D. G. , & Robertson, T. (2003). From the primary to the general election: A comparative analysis of candidate media coverage in mixed-gender 2000 races for governor and US Senate. American Behavioral Scientist, 46(5), 658–676.
Benoit, W. L. (1999). Seeing spots: A functional analysis of presidential television advertisements, 1952–1996. Westport, CT: Greenwood Publishing Group.
Benoit, W. L. , & Glantz, M. (2015). A functional analysis of 2008 general election presidential TV spots. Speaker & Gavel, 49(1), 2.
Benoit, W. L. , Leshner, G. M. , & Chattopadhyay, S. (2007). A meta-analysis of political advertising. Human Communication. Retrieved from
Bond, R. M. , Fariss, C. J. , Jones, J. J. , Kramer, A. D. , Marlow, C. , Settle, J. E. , & Fowler, J. H. (2012). A 61-million-person experiment in social influence and political mobilization. Nature, 489(7415), 295–298.
Brader, T. (2005). Striking a responsive chord: How political ads motivate and persuade voters by appealing to emotions. American Journal of Political Science, 49(2), 388–405. doi:10.2307/3647684.
Brazeal, L. M. , & Benoit, W. L. (2001). A functional analysis of congressional television spots, 1986–2000. Communication Quarterly, 49, 436–454. doi:10.1080/01463370109385640.
Brazeal, L. M. , & Benoit, W. L. (2006). A functional analysis of congressional television spots, 1980–2004. Communication Studies, 57, 401–420. doi:10.1080/10510970600945972.
Broockman, D. E. , & Green, D. P. (2014). Do online advertisements increase political candidates’ name recognition or favorability? Evidence from randomized field experiments. Political Behavior, 36(2), 263–289.
Bystrom, D. G. (1995). Candidate gender and the presentation of self: The videostyles of men and women in US Senate campaigns. Unpublished doctoral dissertation, University of Oklahoma.
Bystrom, D. G. , Robertson, T. , Banwart, M. C. , & Kaid, L. L. (Eds.). (2004). Gender and candidate communication: Videostyle, webstyle, newstyle. New York, NY: Routledge.
Cohen, J. , & Davis, R. G. (1991). Third-person effects and the differential impact in negative political advertising. Journalism & Mass Communication Quarterly, 68(4), 680–688. doi:10.1177/107769909106800409.
Davies, H. , & Yadron, D. (2016, January 28). How Facebook tracks and profits from voters in a $10bn US election. Retrieved from
Delli Carpini, M. X. , & Williams, B. (1994). The method is the message: Focus groups as a method of social, psychological, and political inquiry. In M. X. Delli-Carpini , L. Huddy , & R. Y. Shapiro (Eds.), Research in micropolitics: New directions in political psychology (Vol. 4, pp. 57–85). Greenwich, CT: JAI Press.
Dilliplane, S. , Goldman, S. K. , & Mutz, D. C. (2013). Televised exposure to politics: New measures for a fragmented media environment. American Journal of Political Science, 57(1), 236–248.
Fowler, E. F. , & Ridout, T. N. (2013). Negative, angry, and ubiquitous: Political advertising in 2012. In The Forum (Vol. 10, pp. 51–61). Retrieved from
Fridkin, K. , Kenney, P. J. , & Wintersieck, A. (2015). Liar, liar, pants on fire: How fact-checking influences citizens’ reactions to negative advertising. Political Communication, 32(1), 127–151.
Garramone, G. M. , Atkin, C. K. , Pinkleton, B. E. , & Cole, R. T. (1990). Effects of negative political advertising on the political process. Journal of Broadcasting & Electronic Media, 34(3), 299–311. doi:10.1080/08838159009386744.
Gerber, A. S. , Gimpel, J. G. , Green, D. P. , & Shaw, D. R. (2011). How large and long-lasting are the persuasive effects of televised campaign ads? Results from a randomized field experiment. American Political Science Review, 105(1), 135–150.
Goffman, E. (1959). The presentation of self in everyday life. Garden City, NY: Anchor.
Goldstein, K. , & Ridout, T. N. (2004). Measuring the effects of televised political advertising in the United States. Annual Review of Political Science, 7, 205–226.
Gronbeck, B. E. (1992). Negative narratives in 1988 presidential campaign ads. Quarterly Journal of Speech, 78(3), 333–346.
Henson, J. R. , & Benoit, W. L. (2016). Because I said so: A functional theory analysis of evidence in political TV spots. Speaker & Gavel, 47(1), 2.
Iyengar, S. , & Simon, A. F. (2000). New perspectives and evidence on political communication and campaign effects. Annual Review of Psychology, 51(1), 149–169.
Iyengar, S. , & Vavreck, L. (2012). Online panels and the future of political communication research. In The Sage handbook of political communication (pp. 225–240). Thousand Oaks, CA: Sage.
Johnston, A. , & Kaid, L. L. (2002). Image ads and issue ads in US presidential advertising: Using videostyle to explore stylistic differences in televised political ads from 1952 to 2000. Journal of Communication, 52, 281–300. doi:10.1111/j.1460-2466.2002.tb02545.x.
Kahn, K. F. , & Geer, J. G. (1994). Creating impressions: An experimental investigation of political advertising on television. Political Behavior, 16(1), 93–116.
Kaid, L. L. (2004). Political advertising. In L. L. Kaid (Ed.), Handbook of political communication research (pp. 155–202). Mahwah, NJ: Lawrence Erlbaum.
Kaid, L. L. , & Davidson, D. K. (1986). Elements of videostyle: Candidate presentation through television advertising. In L. L. Kaid , D. Nimmo , & K. R. Sanders (Eds.), New perspectives on political advertising (pp. 184–209). Carbondale, IL: Southern Illinois University Press.
Kaid, L. L. , Postelnicu, M. , Landreville, K. , Yun, H. J. , & LeGrange, A. G. (2007). The effects of political advertising on young voters. American Behavioral Scientist, 50(9), 1137–1151.
Kaid, L. L. , & Tedesco, J. C. (1999). Tracking voter reactions to television advertising. In L. L. Kaid & D. G. Bystrom (Eds.), The electronic election: Perspectives on the 1996 campaign communication (pp. 233–246). Mahwah, NJ: Lawrence Erlbaum.
Kates, S. (1998). A qualitative exploration into voters’ ethical perceptions of political advertising: Discourse, disinformation, and moral boundaries. Journal of Business Ethics, 17(16), 1871–1885.
Klofstad, C. A. (2015). Exposure to political discussion in college is associated with higher rates of political participation over time. Political Communication, 32(2), 292–309.
Lapowsky, I. (2015, November 4). Facebook now lets candidates target political fanatics. Retrieved from
Lee, C. , & Benoit, W. L. (2004). A functional analysis of presidential television spots: A comparison of Korean and American ads. Communication Quarterly, 52(1), 68–79.
Meirick, P. (2002). Cognitive responses to negative and comparative political advertising. Journal of Advertising, 31(1), 49–62.
Parmelee, J. H. , Perkins, S. C. , & Sayre, J. J. (2007). “What about people our age?” Applying qualitative and quantitative methods to uncover how political ads alienate college students. Journal of Mixed Methods Research, 1(2), 183–199.
Phillips, J. M. , Urbany, J. E. , & Reynolds, T. J. (2008). Confirmation and the effects of valenced political advertising: A field experiment. Journal of Consumer Research, 34(6), 794–806.
Pinkleton, B. (1998). Effects of print comparative political advertising on political decision-making and participation. Journal of Communication, 48(4), 24–36. doi:10.1111/j.1460-2466.1998.tb02768.x.
Pinkleton, B. E. , Um, N.-H. , & Austin, E. W. (2002). An exploration of the effects of negative political advertising on political decision making. Journal of Advertising, 31(1), 13–25.
Prior, M. (2001). Weighted content analysis of political advertisements. Political Communication, 18(3), 335–345.
Prior, M. (2009). Improving media effects research through better measurement of news exposure. The Journal of Politics, 71(3), 893–908. doi:10.1017/s0022381609090781.
Prior, M. (2013). The challenge of measuring media exposure: Reply to Dilliplane, Goldman, and Mutz. Political Communication, 30(4), 620–634.
Reyes, G. M. (2006). The Swift Boat Veterans for Truth, the politics of realism, and the manipulation of Vietnam remembrance in the 2004 presidential election. Rhetoric & Public Affairs, 9(4), 571–600.
Richardson, G. W. (2000). Pulp politics: Popular culture and political advertising. Rhetoric & Public Affairs, 3(4), 603–626.
Roberts, C. (2013). A functional analysis comparison of web-only advertisements and traditional television advertisements from the 2004 and 2008 presidential campaigns. Journalism & Mass Communication Quarterly, 90, 23–38. doi:10.1177/1077699012468741.
Roddy, B. L. , & Garramone, G. M. (1988). Appeals and strategies of negative political advertising. Journal of Broadcasting & Electronic Media, 32(4), 415–427. doi:10.1080/08838158809386713.
Schenck-Hamlin, W. , Procter, D. , & Rumsey, D. (2000). The influence of negative advertising frames on political cynicism and politician accountability. Human Communication Research, 26(1), 53–74. doi:10.1111/j.1468-2958.2000.tb00749.x.
Shah, D. V. , Cho, J. , Nah, S. , Gotlieb, M. R. , Hwang, H. , Lee, N. J. , … & McLeod, D. M. (2007). Campaign ads, online messaging, and participation: Extending the communication mediation model. Journal of Communication, 57(4), 676–703.
Shapiro, M. A. , & Rieger, R. H. (1992). Comparing positive and negative political advertising on radio. Journalism & Mass Communication Quarterly, 69(1), 135–145. doi:10.1177/107769909206900111.
Sheckels, T. F. (2002). Narrative coherence and antecedent ethos in the rhetoric of attack advertising: A case study of the Glendening vs. Sauerbrey campaign. Rhetoric & Public Affairs, 5(3), 459–481.
Shen, F. (2004). Chronic accessibility and individual cognitions: Examining the effects of message frames in political advertisements. Journal of Communication, 54(1), 123–137.
Slater, M. D. (2016). Combining content analysis and assessment of exposure through self-report, spatial, or temporal variation in media effects research. Communication Methods and Measures, 10(2–3), 173–175.
Sparrow, N. , & Turner, J. (2001). The permanent campaign: The integration of market research techniques in developing strategies in a more uncertain political climate. European Journal of Marketing, 35, 984–1002.
Strach, P. , Zuber, K. , Fowler, E. F. , Ridout, T. N. , & Searles, K. (2015). In a different voice? Explaining the use of men and women as voice-over announcers in political advertising. Political Communication, 32(2), 183–205. doi:10.1080/10584609.2014.914614.
Tak, J. , Kaid, L. L. , & Khang, H. (2007). The reflection of cultural parameters on videostyles of televised political spots in the US and Korea. Asian Journal of Communication, 17, 58–77. doi:10.1080/01292980601114570.
Valentino, N. A. , Hutchings, V. L. , & Williams, D. (2004). The impact of political advertising on knowledge, internet information seeking, and candidate preference. Journal of Communication, 54(2), 337–354. doi:10.1111/j.1460-2466.2004.tb02632.x.
Wen, W. , Benoit, W. L. , & Yu, T. (2004). A functional analysis of the 2000 Taiwanese and US presidential spots. Asian Journal of Communication, 14, 140–155. doi:10.1080/0129298042000256785.