Marketing Papers
Copyright (c) 2015 University of Pennsylvania. All rights reserved.
http://repository.upenn.edu/marketing_papers
Recent documents in Marketing Papers
Feed updated: Fri, 24 Jul 2015 12:28:26 PDT

Evidence on the Effects of Mandatory Disclaimers in Advertising (with reply to commentators: Should We Put a Price on Free Speech?)
http://repository.upenn.edu/marketing_papers/179
Posted: Thu, 03 May 2012 12:34:09 PDT
We found no evidence that consumers benefit from government-mandated disclaimers in advertising. Experiments and common experience show that admonishments to change or avoid behaviors often have effects opposite to those intended. We found 18 experimental studies that provided evidence relevant to mandatory disclaimers. Mandated messages increased confusion in all of them, and were ineffective or harmful in the 15 studies that examined perceptions, attitudes, or decisions. We conducted an experiment on the effects of a government-mandated disclaimer for a Florida court case. Two advertisements for dentists offering implant dentistry were shown to 317 subjects. One advertiser had implant dentistry credentials. Subjects exposed to the disclaimer more often recommended the advertiser who lacked credentials. Women and less-educated subjects were particularly prone to this error. In addition, subjects drew false and damaging inferences about the credentialed dentist.
Kesten Green et al. (Marketing; Social Responsibility in Management)

Forecasting Principles
http://repository.upenn.edu/marketing_papers/178
Posted: Thu, 09 Feb 2012 09:28:23 PST
J. Scott Armstrong et al. (Forecasting)

Role Thinking: Standing in Other People's Shoes to Forecast Decisions in Conflicts
http://repository.upenn.edu/marketing_papers/177
Posted: Thu, 09 Feb 2012 09:28:20 PST
To forecast decisions in conflict situations, experts are often advised to figuratively stand in the other person's shoes. We refer to this as "role thinking" because, in practice, the advice is to think about how other protagonists will view the situation in order to predict their decisions. We tested the effect of role thinking on forecast accuracy. We obtained 101 role-thinking forecasts of the decisions that would be made in nine diverse conflicts from 27 Naval postgraduate students (experts) and 107 role-thinking forecasts from 103 second-year organizational behavior students (novices). The accuracy of the novices' forecasts was 33% and the experts' 31%; both were little different from chance (guessing), which was 28%. The lack of improvement in accuracy from role thinking strengthens the finding from earlier research that it is not sufficient to think hard about a situation in order to predict the decisions groups of people will make when they are in conflict. It is useful instead to ask groups of role players to simulate the situation. When groups of novice participants adopted the roles of protagonists in the aforementioned nine conflicts and interacted with each other, their group decisions predicted the actual decisions with an accuracy of 60%.
Kesten Green et al. (Forecasting)

Comparing Face-to-Face Meetings, Nominal Groups, Delphi and Prediction Markets on an Estimation Task
http://repository.upenn.edu/marketing_papers/176
Posted: Thu, 09 Feb 2012 09:28:17 PST
We conducted laboratory experiments to analyze the accuracy of three structured approaches (nominal groups, Delphi, and prediction markets) compared to traditional face-to-face meetings (FTF). We recruited 227 participants (11 groups per method) who had to solve a quantitative judgment task that did not involve distributed knowledge. This task consisted of ten factual questions, which required percentage estimates. While, overall, we did not find statistically significant differences in accuracy between the four methods, the results differed somewhat at the individual question level. Delphi was as accurate as FTF for eight questions and outperformed FTF for two questions. By comparison, prediction markets were unable to outperform FTF for any of the ten questions and were inferior for three questions. The relative performance of nominal groups and FTF was mixed, and differences were small. We also compared the results from the three structured approaches to prior individual estimates and staticized groups. The three structured approaches were more accurate than participants' prior individual estimates. Delphi was also more accurate than staticized groups. Nominal groups and prediction markets provided little additional value compared to a simple average of forecasts. In addition, we examined participants' perceptions of the group and the group process. Participants rated personal communication more favorably than computer-mediated interaction. Group interaction in FTF and nominal groups was perceived as highly cooperative and effective. Prediction markets were rated least favorably: prediction market participants were least satisfied with the group process and perceived their method as most difficult.
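A "staticized group," used above as a comparison baseline, is simply the average of members' independent estimates, with no interaction. The minimal Python sketch below shows that baseline computation; all estimates and the true value are invented for illustration.

```python
# Staticized group: the group forecast is the mean of the members'
# independent individual estimates, with no interaction.

def staticized_forecast(estimates: list[float]) -> float:
    """Average the members' prior individual estimates."""
    return sum(estimates) / len(estimates)

# Hypothetical factual question with a true answer of 42 percent.
true_value = 42.0
individual_estimates = [30.0, 55.0, 40.0, 48.0, 35.0]  # five group members

group_estimate = staticized_forecast(individual_estimates)
print(f"Staticized group estimate: {group_estimate:.1f}%")
print(f"Absolute error: {abs(group_estimate - true_value):.1f} points")
# A structured method such as Delphi is judged by whether its final group
# estimate beats this simple average.
```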
Andreas Graefe et al. (Forecasting)

Moneyball: Message for Managers
http://repository.upenn.edu/marketing_papers/175
Posted: Tue, 07 Feb 2012 11:15:39 PST
Michael Lewis's book Moneyball, and the film based on it, provide valuable advice for people involved with the selection and retention of employees. However, judging from some reviews, there is confusion about the message in Moneyball. I describe the problem and the Moneyball solutions here. The solutions are valuable for personnel decisions in any large organization.
J. Scott Armstrong

Long-Range Forecasting for a Consumer Durable in an International Market
http://repository.upenn.edu/marketing_papers/174
Posted: Thu, 08 Dec 2011 09:25:52 PST
There has been a substantial amount of interest recently in long-range planning. One necessary component of the long-range plan is the long-range forecast. In contrast to the emphasis on the planning process, however, little attention has been given to forecasting. This study considers the problem of long-range forecasting in a situation which is of growing importance — forecasting sales for international markets.

Many researchers appear to operate under the impression that causal models (i.e., models based on an analysis of underlying factors) lead to more accurate sales forecasts than those provided by naive models (i.e., projections based on historical sales data only). A survey of the research literature led to the conclusion that this confidence in causal models is virtually unsupported. One can hardly criticize firms, then, for relying primarily upon naive models for sales forecasting, since these models are simpler and less expensive than causal models.

This study was based on the hypothesis that causal models are superior to naive models in certain situations. The key element of these situations is that there are "large changes." Long-range sales forecasting usually involves such large changes, and there are many reasons to expect that long-range forecasting for international markets is a situation in which substantial changes will occur (e.g., the Kennedy Round tariff cuts and the formation of common markets).

A causal model was developed to provide long-range forecasts of the international market for still cameras. This model provided unconditional forecasts of unit camera sales by country for year t + n on the basis of (1) knowledge about camera sales in year t and (2) predicted changes in four causal variables from year t to t + n. These four causal variables were, in order of importance, per capita income, price of cameras, number of potential buyers, and quality of cameras.

The predictive ability of the causal model was superior to that of a naive model purporting to represent current practice. Each model was used to provide backcasts of 1954 camera sales in 17 countries on the basis of data from 1967 to 1960 only. The mean absolute percentage error for the causal model was 23%, substantially lower than that of the naive model. This result was statistically significant (α = .05); but, more importantly, it appeared to have great practical significance. An evaluation, based on very conservative subjective estimates, indicated that such an improvement in accuracy would have a present value in excess of one percent of a typical firm's yearly sales volume.
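To make the accuracy comparison concrete, the sketch below computes the mean absolute percentage error (MAPE) of backcasts against actual sales. All figures are invented for illustration and do not reproduce the study's data.

```python
# MAPE comparison of a causal model versus a naive extrapolation.

def mape(actuals: list[float], forecasts: list[float]) -> float:
    """Mean absolute percentage error, in percent."""
    errors = [abs(f - a) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

# Hypothetical unit camera sales by country (thousands).
actual_sales     = [120, 80, 45, 200]
naive_backcasts  = [170, 55, 70, 260]  # projection from historical sales alone
causal_backcasts = [135, 72, 52, 185]  # model using income, price, buyers, quality

print(f"Naive model MAPE:  {mape(actual_sales, naive_backcasts):.0f}%")
print(f"Causal model MAPE: {mape(actual_sales, causal_backcasts):.0f}%")
```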

Further support for the use of the causal model was obtained by noting that the standard errors of the estimated relationships were low (evidence of reliability), that the estimates of causal relationships from different measurement models were in rather close agreement (evidence of construct validity), and that the causal model performed well in another situation where predictions were provided for 1960-65 camera sales in 11 "new" countries (evidence of concurrent validity).

The causal relationships were initially specified by a subjective analysis. Various parts of the causal model were then updated by use of a number of measurement models, including an analysis of differences among sales rates for 30 countries, of differences among changes in the sales rates from 1961 to 1965 for 21 countries, and of differences among six income categories from United States household survey data. This updating led to a modest, though valuable, gain, reducing the mean absolute percentage error of the 1954 backcast to the 23% mentioned above.

Additional benefits associated with the development of the causal model included the ability to evaluate large changes in the market; to estimate current sales where trade and production figures are inadequate; to evaluate alternative assumptions about the future rapidly and cheaply; and to identify markets which have not been fully exploited.

In summary, the study argues that the development of better long-range forecasting models is an important problem; describes the development of causal models; and demonstrates the superiority of causal models over naive models in a case involving long-range forecasting for international markets.

J. Scott Armstrong (Forecasting; Marketing; Strategic Planning)

Illusions in Regression Analysis
http://repository.upenn.edu/marketing_papers/173
Posted: Tue, 29 Nov 2011 11:28:26 PST
Soyer and Hogarth’s article, “The Illusion of Predictability,” shows that diagnostic statistics that are commonly provided with regression analysis lead to confusion, reduced accuracy, and overconfidence. Even highly competent researchers are subject to these problems. This overview examines the Soyer-Hogarth findings in light of prior research on illusions associated with regression analysis. It also summarizes solutions that have been proposed over the past century. These solutions would enhance the value of regression analysis.
J. Scott Armstrong (Forecasting; Marketing; Social Responsibility in Management; Strategic Planning; Applied Statistics)

Long-Range Forecasting For International Markets: The Use of Causal Models
http://repository.upenn.edu/marketing_papers/172
Posted: Wed, 02 Nov 2011 06:52:06 PDT
Many researchers appear to operate under the impression that causal models lead to more accurate forecasts than those provided by naive models (or “projections”). This study was based on the premise that causal models lead to better forecasts than do naive models in certain situations. The key element of these situations is that there are “large changes.” One situation where large changes might be expected is that of long-range forecasting—and, in particular, long-range forecasting for international markets. Recent improvements in the quality and availability of international data have substantially reduced the cost of developing causal models in this situation. A study of camera markets in seventeen countries indicated that the margin of superiority of causal models over naive models is of great practical importance.
J. Scott Armstrong (Forecasting)

What is the Appropriate Public-Policy Response to Uncertainty?
http://repository.upenn.edu/marketing_papers/171
Posted: Thu, 02 Jun 2011 07:51:46 PDT
J. Scott Armstrong et al. (Forecasting)

The Ombudsman: Verification of Citations: Fawlty Towers of Knowledge?
http://repository.upenn.edu/marketing_papers/170
Posted: Thu, 02 Jun 2011 07:51:42 PDT
The prevalence of faulty citations impedes the growth of scientific knowledge. Faulty citations include omissions of relevant papers, incorrect references, and quotation errors that misreport findings. We discuss key studies in these areas. We then examine citations to "Estimating nonresponse bias in mail surveys," one of the most frequently cited papers from the Journal of Marketing Research, to illustrate these issues. This paper is especially useful in testing for quotation errors because it provides specific operational recommendations on adjusting for nonresponse bias; therefore, it allows us to determine whether the citing papers properly used the findings. By any number of measures, those doing survey research fail to cite this paper and, presumably, make inadequate adjustments for nonresponse bias. Furthermore, even when the paper was cited, 49 of the 50 studies that we examined reported its findings improperly. The inappropriate use of statistical-significance testing led researchers to conclude that nonresponse bias was not present in 76 percent of the studies in our sample. Only one of the studies in the sample made any adjustment for it. Judging from the original paper, we estimate that the study researchers should have predicted nonresponse bias and adjusted for 148 variables. In this case, the faulty citations seem to have arisen either because the authors did not read the original paper or because they did not fully understand its implications. To address the problem of omissions, we recommend that journals include a section on their websites to list all relevant papers that have been overlooked and show how the omitted paper relates to the published paper. In general, authors should routinely verify the accuracy of their sources by reading the cited papers. For substantive findings, they should attempt to contact the authors for confirmation or clarification of the results and methods. This would also provide them with the opportunity to enquire about other relevant references. Journal editors should require that authors sign statements that they have read the cited papers and, when appropriate, have attempted to verify the citations.
Malcolm Wright et al. (Forecasting; Marketing)

Democracy Does Not Make Good Science: On Reforming Review Procedures for Management Science Journals
http://repository.upenn.edu/marketing_papers/169
Posted: Thu, 02 Jun 2011 07:51:39 PDT
J. Scott Armstrong (Marketing)

Global Warming: Forecasts by Scientists versus Scientific Forecasts
http://repository.upenn.edu/marketing_papers/168
Posted: Tue, 31 May 2011 13:07:22 PDT
In 2007, the Intergovernmental Panel on Climate Change's Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide, we asked: Are these forecasts a good basis for developing public policy? Our answer is "no." To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, and (3) the effects of feasible alternative policies. Proper forecasts of all three are necessary for rational policy making. The IPCC WG1 Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references in the 1056-page Report to the primary sources of information on forecasting methods, despite the fact that these are conveniently available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC's WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts' predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
Kesten C. Green et al. (Forecasting)

Using Quasi-Experimental Data to Develop Empirical Generalizations for Persuasive Advertising
http://repository.upenn.edu/marketing_papers/167
Posted: Tue, 31 May 2011 13:07:19 PDT
This paper argues that "quasi-experimental data" provide a valid and relatively low-cost approach toward developing empirical generalizations (EGs). These data are obtained from studies in which some key variables have been controlled in the design. These EGs are described as normative statements, i.e., "evidence-based principles." Using data from 240 pairs of print advertisements from five editions of the Which Ad Pulled Best series, the authors analyzed 56 of the advertising principles (listed) from Persuasive Advertising by J. Scott Armstrong (New York: Palgrave Macmillan, forthcoming). These data controlled for target market, product, size of the advertisement, media, and, in half the cases, for the brand. The advertisements differed, however, in elements such as illustrations, headlines, and text. The findings from the quasi-experimental analyses were consistent with field experiments for all seven principles where such comparisons were possible. Furthermore, for 26 principles they unanimously corroborated the available laboratory experiments, as well as the meta-analyses for seven principles. In short, the quasi-experimental findings always agreed with experimental findings, even though the quasi-experimental analyses, and some of the experimental analyses, involved small samples and often used different criteria. From an issue of JAR devoted to 'empirical generalisations': the papers were first presented at a conference at the Wharton School, University of Pennsylvania, in December 2008.
J. Scott Armstrong et al. (Marketing)

Structured analogies for forecasting
http://repository.upenn.edu/marketing_papers/166
Posted: Thu, 26 May 2011 09:01:26 PDT
People often use analogies when forecasting, but in an unstructured manner. We propose a structured judgmental procedure whereby experts list analogies, rate their similarity to the target, and match outcomes with possible target outcomes. An administrator would then derive a forecast from the information. When predicting decisions made in eight conflict situations, unaided experts' forecasts were little better than chance, at 32% accurate. In contrast, 46% of structured-analogies forecasts were accurate. Among experts who were able to think of two or more analogies and who had direct experience with their closest analogy, 60% of forecasts were accurate. Collaboration did not help.
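The procedure lends itself to a simple aggregation sketch. In the Python sketch below, each entry is an expert-supplied analogy with a similarity rating and the target outcome its own outcome best matches; the administrator derives the forecast by a similarity-weighted vote. This aggregation rule is one plausible reading of "derive a forecast from the information," not necessarily the paper's exact protocol.

```python
# Similarity-weighted aggregation of experts' analogies (illustrative only).
from collections import defaultdict

def structured_analogies_forecast(analogies: list[tuple[float, str]]) -> str:
    """analogies: (similarity rating, matched target outcome) pairs."""
    weights: dict[str, float] = defaultdict(float)
    for similarity, outcome in analogies:
        weights[outcome] += similarity
    return max(weights, key=weights.get)

# Hypothetical input for a conflict with possible outcomes "A", "B", "C".
expert_analogies = [
    (8.0, "A"),  # highly similar analogy whose outcome matched option A
    (5.0, "B"),
    (7.0, "A"),
    (3.0, "C"),
]
print(structured_analogies_forecast(expert_analogies))  # -> "A"
```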
Kesten Green et al. (Forecasting)

Research With Built-in Replication: Comment and Further Suggestions for Replication Research
http://repository.upenn.edu/marketing_papers/165
Posted: Thu, 26 May 2011 07:50:25 PDT
Heiner Evanschitzky et al. (Scientific Methods and Peer Review)

Forecasting
http://repository.upenn.edu/marketing_papers/164
Posted: Thu, 26 May 2011 07:50:22 PDT
Andreas Graefe et al. (Forecasting)

Conditions Under Which Index Models Are Useful
http://repository.upenn.edu/marketing_papers/163
Posted: Thu, 26 May 2011 07:50:19 PDT
This paper summarizes the key conditions under which the index method is valuable for forecasting and describes the procedures one should use when developing index models. The paper also addresses the specific concern that the bio-index might select inferior candidates when used as an aid to nominating candidates. Political decision-makers should not use the bio-index as a stand-alone method but should combine forecasts from a variety of different methods that draw upon different information, as in the sketch below.
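Here is a minimal Python sketch of that combining advice: an unweighted average of vote-share forecasts from several methods. All forecast values are invented for illustration.

```python
# Combine forecasts from different methods by simple (unweighted) averaging.

forecasts = {  # hypothetical two-party vote share for the incumbent (%)
    "bio_index": 52.0,
    "polls": 49.5,
    "prediction_market": 51.0,
    "econometric_model": 50.5,
}

combined = sum(forecasts.values()) / len(forecasts)
print(f"Combined forecast: {combined:.2f}% of the two-party vote")
```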
Andreas Graefe et al.

Predicting Elections from Biographical Information about Candidates: A Test of the Index Method
http://repository.upenn.edu/marketing_papers/162
Posted: Thu, 26 May 2011 07:50:16 PDT
We used 59 biographical variables to create a "bio-index" for forecasting U.S. presidential elections. The bio-index method counts the number of variables for which a candidate rates favourably, and the forecast is that the candidate with the highest score will win the popular vote. The bio-index relies on different information and includes more variables than traditional econometric election-forecasting models. The method can be used in combination with simple linear regression to estimate a relationship between the index score of the candidate of the incumbent party and his share of the popular vote. The study tested the model for the 29 U.S. presidential elections from 1896 to 2008. The model's forecasts, calculated by cross-validation, correctly predicted the popular vote winner for 27 of the 29 elections; this performance compares favourably to forecasts from polls (15 out of 19), prediction markets (22 out of 26), and three econometric models (12 to 13 out of 15 to 16). Out-of-sample forecasts of the two-party popular vote for the four elections from 1996 to 2008 yielded a forecast error almost as low as the best of seven econometric models. The model can help parties to select the candidates running for office, and it can help to improve the accuracy of election forecasting, especially for longer-term forecasts.
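The scoring rule itself is simple to illustrate. In this hedged Python sketch, each candidate is scored by counting the biographical variables on which they rate favourably; the variables shown are invented examples, not the paper's 59-item list.

```python
# Bio-index scoring: count favourable biographical variables per candidate.

def bio_index(candidate: dict[str, bool]) -> int:
    """Number of variables on which the candidate rates favourably."""
    return sum(candidate.values())

candidate_a = {"married": True, "military_service": True,
               "former_governor": False, "published_author": True}
candidate_b = {"married": True, "military_service": False,
               "former_governor": True, "published_author": False}

scores = {"A": bio_index(candidate_a), "B": bio_index(candidate_b)}
predicted_winner = max(scores, key=scores.get)
print(f"Scores: {scores} -> predicted popular-vote winner: {predicted_winner}")
# The paper additionally regresses the incumbent-party candidate's two-party
# vote share on his index score to obtain a quantitative forecast.
```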
J. Scott Armstrong et al. (Forecasting)

Predicting Elections from the Most Important Issue: A Test of the Take-the-Best Heuristic
http://repository.upenn.edu/marketing_papers/161
Posted: Thu, 26 May 2011 07:50:13 PDT
We used the take-the-best heuristic to develop a model to forecast the popular two-party vote shares in U.S. presidential elections. The model draws upon information about how voters expect the candidates to deal with the most important issue facing the country. We used cross-validation to calculate a total of 1,000 out-of-sample forecasts, one for each of the last 100 days of the ten U.S. presidential elections from 1972 to 2008. Ninety-seven percent of the forecasts correctly predicted the winner of the popular vote. The model forecasts were competitive compared to forecasts from methods that incorporate substantially more information (e.g., econometric models and the Iowa Electronic Markets). The purpose of the model is to provide fast advice on which issues candidates should stress in their campaign.
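The take-the-best idea, as applied here, is that a single highly valid cue decides the forecast. The following Python sketch uses one cue, which candidate voters expect to handle the most important issue better, to predict the winner; the poll numbers are invented, and translating the cue into a vote-share forecast (e.g., via regression on past elections) is left out.

```python
# Take-the-best with a single cue: perceived handling of the most
# important issue facing the country.

def ttb_predicted_winner(issue_support: dict[str, float]) -> str:
    """issue_support: share of voters who expect each candidate to deal
    better with the most important issue."""
    return max(issue_support, key=issue_support.get)

# Hypothetical poll on the most important issue this cycle.
support = {"incumbent": 54.0, "challenger": 46.0}
print(ttb_predicted_winner(support))  # -> "incumbent"
```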
J. Scott Armstrong et al. (Forecasting)

Validity of Climate Change Forecasting for Public Policy Decision Making
http://repository.upenn.edu/marketing_papers/160
Posted: Thu, 26 May 2011 07:50:09 PDT
Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantively more accurate than those from the relevant benchmark method. Inspection of global temperature data suggests that temperature is subject to irregular variations on all relevant time scales and that variations during the late 1900s were not unusual. In such a situation, a "no change" extrapolation is an appropriate benchmark forecasting method. We used the U.K. Met Office Hadley Centre's annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers; for example, mean absolute errors for 20- and 50-year horizons were 0.18°C and 0.24°C. We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change's 1992 linear projection of long-term warming at a rate of 0.03°C per year. The small sample of errors from ex ante projections at 0.03°C per year for 1992 through 2008 was practically indistinguishable from the benchmark errors. Validation for long-term forecasting, however, requires a much longer horizon. Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth—the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
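The benchmark evaluation is easy to reproduce in outline. In the Python sketch below, the no-change forecast for horizon h is simply the current value, and accuracy is the mean absolute error over the series; the anomaly series is fabricated for illustration, whereas the paper used Hadley Centre annual data for 1850-2007.

```python
# Mean absolute error of the "no change" benchmark at a given horizon.

def no_change_mae(series: list[float], horizon: int) -> float:
    """MAE of forecasting series[t + horizon] with series[t]."""
    errors = [abs(series[t + horizon] - series[t])
              for t in range(len(series) - horizon)]
    return sum(errors) / len(errors)

# Hypothetical annual temperature anomalies (degrees C).
anomalies = [0.00, 0.03, -0.02, 0.05, 0.10, 0.04, 0.12, 0.08,
             0.15, 0.11, 0.18, 0.14, 0.20, 0.17, 0.22]

for h in (1, 5, 10):
    print(f"{h:>2}-year horizon: MAE = {no_change_mae(anomalies, h):.3f} C")
```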