Jing Gong

Associate Dean for Research / John D. DeVries Endowed Chair in Business / Professor of Information Systems

Ivy College of Business, Iowa State University

Friday, February 22, 2019

10:30 AM – noon

Speakman Hall Suite 200

Abstract

As the world “gets smaller” and more people engage in cross-cultural communication, their ability to separate truth from deception becomes critically important. Yet doing so is challenging. While deceptive communication has been studied for decades, some areas remain poorly understood. In particular, two areas that could benefit from further research are the effects of cultural differences and of communication media on deception and its detection. Building on developments in theories of deception and deception detection, we examine the question: How do differences in culture between senders and receivers affect deception detection, especially when the deceptive communication occurs across different media? To address this question, we created stimulus materials from recorded interviews featuring participants from the United States, Spain, and India. Three stimulus sets were created, one each in American English, Spanish, and Indian English, each consisting of 32 interview snippets. Half of the snippets were honest and half were dishonest. Each snippet was presented in one of four media: full audio-visual, video only, audio only, and text only. Veracity judges were recruited from the same three countries as the interview participants to independently observe and evaluate the communication both within their own culture and across the other cultures. We found evidence that different combinations of culture and medium affected the accuracy of deception detection.

Researchers running randomized controlled trial (RCT) experiments often subgroup or condition on auxiliary variables that are not the randomized treatment variables. There are many good reasons to condition on auxiliary variables (also referred to as control variables or covariates) in randomized experiments. In particular, designing and conducting RCTs is costly to researchers and subjects. It is therefore important to derive greater value from the RCTs that are conducted: measuring not just the average treatment effect (ATE), but also gaining more nuanced insights about the underlying theoretical mechanisms and generalizing the inferences. Unfortunately, there are many confusing and even contradictory guidelines on the use of subgroups or auxiliary variables in RCTs. For example, the common wisdom is that post-treatment variables (i.e., those realized after the treatment) should not be conditioned on. However, such variables can provide valuable information and can in many cases be properly utilized. Using causal diagrams and a few simple rules based on Judea Pearl’s causal diagramming framework, we explain how researchers can leverage covariates without biasing their causal inferences. We provide guidelines for using subgroups and auxiliary variables in randomized experiments, focusing on some well-known digital experiments featured in the Information Systems literature.
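
The intuition behind the post-treatment caution can be shown with a small simulation (a minimal sketch with a toy data-generating process, not the speaker's analysis): in a randomized experiment, adjusting for a pre-treatment covariate leaves the average treatment effect estimate unbiased, while adjusting for a post-treatment mediator recovers only the direct effect.

```python
# Toy simulation (illustrative only) of why conditioning on a post-treatment
# variable can bias a randomized experiment's treatment-effect estimate,
# while conditioning on a pre-treatment covariate cannot.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                          # pre-treatment covariate
t = rng.integers(0, 2, size=n).astype(float)    # randomized treatment
m = 1.0 * t + rng.normal(size=n)                # post-treatment mediator (T -> M)
y = 2.0 * t + 1.5 * m + 0.5 * x + rng.normal(size=n)  # total effect of T = 2 + 1.5 = 3.5

def ols_coef_on_t(*controls):
    """Coefficient on T from an OLS regression of Y on T plus the given controls."""
    X = np.column_stack([np.ones(n), t, *controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("Y ~ T            :", round(ols_coef_on_t(), 3))    # ~3.5, unbiased ATE
print("Y ~ T + X (pre)  :", round(ols_coef_on_t(x), 3))   # ~3.5, unbiased, lower variance
print("Y ~ T + M (post) :", round(ols_coef_on_t(m), 3))   # ~2.0, only the direct effect
```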

In many predictive tasks where human intelligence is needed to label training instances, online crowdsourcing markets have emerged as promising platforms for large-scale, cost-effective labeling. However, these platforms also introduce challenges that must be addressed for these opportunities to materialize. In particular, it has been shown that different trade-offs between the payment offered to labelers and the quality of labeling arise at different times, possibly as a result of different market conditions and even the nature of the tasks themselves. Because the underlying mechanism giving rise to these trade-offs is not well understood, for any given labeling task and at any given time, it is not known which labeling payments to offer in the market so as to produce accurate models cost-effectively. Effective and robust methods for dealing with these challenges are essential to enable a growing reliance on these promising and increasingly popular labor markets for large-scale labeling. In this talk I will first present a new data science problem, Adaptive Labeling Payment (ALP): how to learn and sequentially adapt the payment offered to crowd labelers before they undertake a labeling task, so as to achieve a given (machine learning) predictive model performance cost-effectively. I will then present our approach to addressing the problem and a rich set of results demonstrating its performance under a variety of market settings. We also show that the method is highly versatile: it can acquire more labels of lower quality (and cost) under some market conditions, while pursuing fewer, higher-quality labels in other settings. Overall, our method yields significant cost savings and robust performance; as such, it can be used as a benchmark for future mechanisms that determine cost-effective payments.
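
As a rough illustration only (not the ALP method presented in the talk), the sketch below adapts the payment offered to labelers with a simple epsilon-greedy bandit; the candidate payments, the simulated market response in collect_labels, and the quality-per-dollar reward are all assumptions made for the example.

```python
# Generic sketch of sequentially adapting the payment offered to crowd labelers.
# An epsilon-greedy bandit chooses among candidate payment levels, rewarding
# levels that yield accurate labels per dollar. All numbers are illustrative.
import random

PAYMENTS = [0.05, 0.10, 0.20, 0.40]   # candidate per-label payments (assumed values)
EPSILON = 0.1

def collect_labels(payment, batch_size=50):
    """Hypothetical stand-in for posting a batch of tasks at `payment` and
    measuring the fraction of labels agreeing with gold-standard answers."""
    base_quality = 0.6 + 0.8 * payment              # toy market response
    return min(0.95, base_quality + random.gauss(0, 0.03))

value = {p: 0.0 for p in PAYMENTS}    # running estimate of quality per dollar
counts = {p: 0 for p in PAYMENTS}

for _ in range(200):
    if random.random() < EPSILON:
        p = random.choice(PAYMENTS)                     # explore
    else:
        p = max(PAYMENTS, key=lambda q: value[q])       # exploit best payment so far
    quality = collect_labels(p)
    reward = quality / p                                # accurate labels per dollar spent
    counts[p] += 1
    value[p] += (reward - value[p]) / counts[p]         # incremental mean update

best = max(PAYMENTS, key=lambda q: value[q])
print(f"Estimated best payment per label: ${best:.2f}")
```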

Bio

Maytal Saar-Tsechansky is an Associate Professor of Information, Risk and Operations Management at the McCombs School of Business, The University of Texas at Austin, and a co-founder of Sweetch, a mobile health startup. Her research focuses on developing machine learning (ML) and artificial intelligence (AI) methods to improve decision-making and to benefit people, organizations, and society. Most of her work aims to augment ML and AI by bringing to bear the real-world problems these methods inform and the context in which learning itself occurs, with the goal of dealing effectively with the constraints and taking advantage of the opportunities presented in these environments. Her research integrates business, machine learning, and artificial intelligence, and she has addressed challenges in domains including health care, the smart electricity grid, fraud detection, finance, and emerging forms of work such as online labor markets. Maytal received her Ph.D. from New York University’s Stern School of Business. Her research has been published in the Journal of Finance, Management Science, Information Systems Research, the Journal of Machine Learning Research, and the Machine Learning Journal, among other venues. Maytal’s research has been supported by both government and industry, including the National Science Foundation, SAP, and the Israeli Science Ministry. In recent years she has served on the editorial boards of the Machine Learning Journal, Information Systems Research (ISR), the INFORMS Journal on Computing, and Decision Sciences, and she is a frequent program committee member at premier machine learning, data mining, artificial intelligence, and Information Systems conferences. At McCombs, Maytal has developed and taught popular applied machine learning and data mining courses tailored to business students.

Fake news on social media has received much media attention and many experts believe it influenced the 2016 US Presidential election and the 2016 Brexit vote. More than 60% of Americans consume news on social media, and 84% believe they can detect fake news. But can they? We studied the ability of experienced social media users to detect fake news, and how seeing news headlines – both real and fake – influenced their cognition. Only 18% of subjects could detect fake news better than chance; 82% of users could have made better judgments by flipping a coin. We found that confirmation bias dominates, with users essentially unable to distinguish real news from fake news, and that cognition is driven by how well a news headline aligns with the user’s prior political beliefs.

We conducted a series of studies examining different ways in which the social media user interface could be designed, including how news headlines are presented, and the effects of quality ratings. These different interface designs had different effects on the extent to which users believed social media stories, and how likely they were to read, like, comment on and share the stories.

A for Effort? Using the Crowd to Identify Moral Hazard in NYC Restaurant Hygiene Inspections

by

Anandasivam Gopal

Dean’s Professor of Information Systems

Robert H. Smith School of Business, University of Maryland

Friday, October 5, 2018

10:30 AM – noon

Fred Fox Boardroom (Alter 378)

Abstract

From an upset stomach to a life-threatening foodborne illness, getting sick is all too common after eating in restaurants. While health inspection programs are designed to protect consumers, such inspections typically occur at wide intervals, allowing restaurant hygiene to go unmonitored in the interim periods. Information provided in online reviews can be used in these interim periods to gauge restaurant hygiene. In this paper, we provide evidence that information from online reviews can be effectively used to identify hygiene violations in restaurants, even after a restaurant has been inspected and certified. We use data from restaurant hygiene inspections in New York City, from the launch of the city’s inspection program in 2010 through 2016, and combine these data with online reviews for the same set of restaurants. Using supervised machine learning techniques, we then create a hygiene dictionary specifically crafted to identify hygiene-related concerns and use it to identify systematic instances of moral hazard, wherein restaurants with positive hygiene inspection scores regress in their hygiene maintenance within 90 days of receiving those scores. To the extent that social media provides some visibility into the hygiene practices of restaurants, we argue that the effects of information asymmetry that lead to moral hazard may be partially mitigated in this context. Based on our work, we also provide strategies for how cities and policy-makers may design effective restaurant inspection programs through a combination of traditional inspections and the appropriate use of social media.
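
As a toy illustration of the general idea (not the paper's supervised-learning approach or its actual hygiene dictionary), the sketch below scores review text against a small, assumed hygiene lexicon and flags reviews that mention hygiene-related terms.

```python
# Illustrative sketch: flag restaurant reviews that contain hygiene-related terms.
# The lexicon and sample reviews are made up for the example.
import re

HYGIENE_TERMS = {"dirty", "filthy", "roach", "cockroach", "rat", "mice",
                 "sick", "food poisoning", "unsanitary", "hair", "mold"}

def hygiene_score(review_text):
    """Fraction of lexicon terms found in a review (0 = no hygiene signal)."""
    text = review_text.lower()
    hits = sum(1 for term in HYGIENE_TERMS
               if re.search(r"\b" + re.escape(term) + r"\b", text))
    return hits / len(HYGIENE_TERMS)

reviews = [
    "Great pasta, friendly staff, spotless dining room.",
    "Saw a roach near the counter and the bathroom was filthy.",
]
for r in reviews:
    flag = "FLAG" if hygiene_score(r) > 0 else "ok"
    print(f"{flag:>4}  {r}")
```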

Where You Live Matters: The Impact of Local Financial Market Competition in Managing Online Peer-To-Peer Loans

by

Mohammad Saifur Rahman

Associate Professor of Management

Krannert School of Management, Purdue University

Friday, September 28, 2018

10:30 AM – noon

Speakman Hall Suite 200

Abstract

Internet-related technologies have fundamentally changed many industries, and in the age of financial technology (FinTech), a widely discussed question is whether the local financial market structure still matters. Unlike traditional retail financial institutions, which are predominantly territorial, FinTech products, in particular peer-to-peer (P2P) lending platforms, give borrowers from across the country equal access to funds, removing the typical geographic restrictions on borrowing options. However, if P2P lending platforms are not immune to competition from local financial institutions, and borrowers ultimately gain from the strategic interactions between local financial institutions and P2P platforms, then where a borrower lives might continue to matter. Consequently, we study the impact of local financial market structure on borrowers’ personal loan management decisions (to prepay or to default) on the two leading P2P lending platforms, Lending Club and Prosper. We find consistently, across the two platforms, that an online borrower from a more competitive market is more likely to prepay and less likely to default. Additionally, this study offers novel insights regarding the extent and nature of the substitution between traditional financial institutions and their online, potentially disruptive, alternatives. We also utilize machine learning techniques that capitalize on the rich granularity of the data set to create a pseudo-experimental design and further validate the underlying mechanism behind our results. Going beyond P2P lending, these findings suggest that borrowers benefit disproportionately from local lending institutions, depending on their geographic location. We discuss managerial, practical, and policy implications for the burgeoning P2P lending industry as well as other crowd-based markets.

Like many other industries, the global health sector is engaged in significant digital transformation. Given the major investments involved and the major consequences for numerous stakeholders, evaluations are important. However, many studies have critiqued both the quality of evaluations in practice and the quality of evaluation research. The persistent lack of progress in this field has led researchers to ask deeper questions about what is actually occurring when teams attempt to measure the benefits of digital transformation. This translational research essay explores how Institutional Theory offers a useful lens for understanding the complexities of evaluation and provides insights for improving research and practice. In particular, we show how Institutional Theory can explain numerous behaviors observed in the literature and in our own case study. We also show how Institutional Theory can benefit from insights drawn from evaluation work. Motivated by these opportunities, we suggest a research agenda through which practitioners and researchers can improve work in this area.

Bio

Andrew Burton-Jones is a Professor of Business Information Systems at the UQ Business School, University of Queensland. He holds a Bachelor of Commerce (Honours) and a Master of Information Systems from the University of Queensland and a Ph.D. from Georgia State University. He is a Senior Editor of MIS Quarterly and has served on the editorial boards of MIS Quarterly, Information Systems Research, the Journal of the Association for Information Systems, Information & Organization, and other journals. He has also served as Program Co-Chair for AMCIS and PACIS, and has received several awards for his research, teaching, and service. He conducts research on systems analysis and design, the effective use of information systems, and conceptual/methodological issues. Prior to his academic career, he was a senior consultant in a Big 4 accounting/consulting firm.

Professor of Information Systems and the Leonard N. Stern Professor of Business

NYU Stern School of Business

Friday, May 4, 2018

10:30 AM – noon

Speakman Hall Suite 200

Abstract

In this paper, we propose a novel method called RevGAN to generate user reviews using a combination of a Hierarchical AutoEncoder (hAE) and a Conditional GAN (cGAN). We describe the proposed method and empirically demonstrate that it significantly outperforms several important benchmarks on the Amazon Review Dataset, and that the reviews it generates are empirically indistinguishable from organic user reviews.
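
For readers unfamiliar with the cGAN component, the sketch below shows a generic conditional GAN over fixed-size review embeddings conditioned on a star rating; it is a simplified, assumed stand-in rather than the RevGAN architecture, and the hierarchical autoencoder that would produce and decode those embeddings is not shown.

```python
# Generic conditional-GAN sketch (illustrative only, not RevGAN): a generator and
# discriminator over fixed-size review embeddings, conditioned on a star rating.
import torch
import torch.nn as nn

LATENT_DIM, EMBED_DIM, NUM_RATINGS = 64, 128, 5

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rating_emb = nn.Embedding(NUM_RATINGS, 16)
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 16, 256), nn.ReLU(),
                                 nn.Linear(256, EMBED_DIM))
    def forward(self, z, rating):
        return self.net(torch.cat([z, self.rating_emb(rating)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rating_emb = nn.Embedding(NUM_RATINGS, 16)
        self.net = nn.Sequential(nn.Linear(EMBED_DIM + 16, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, e, rating):
        return self.net(torch.cat([e, self.rating_emb(rating)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_embeddings, ratings):
    """One adversarial update; real_embeddings would come from the (assumed) autoencoder."""
    batch = real_embeddings.size(0)
    fake = G(torch.randn(batch, LATENT_DIM), ratings)
    # Discriminator: separate real from generated embeddings for the same ratings
    d_loss = bce(D(real_embeddings, ratings), torch.ones(batch, 1)) + \
             bce(D(fake.detach(), ratings), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator
    g_loss = bce(D(fake, ratings), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Demo with random stand-in embeddings (in practice these come from the autoencoder)
d_l, g_l = train_step(torch.randn(32, EMBED_DIM), torch.randint(0, NUM_RATINGS, (32,)))
print(f"d_loss={d_l:.3f}  g_loss={g_l:.3f}")
```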

We examine the membership-based free shipping (MFS) program offered by some online marketplaces, in which a retail platform bears the shipping costs for purchases made by members who have paid an upfront fee, while non-members bear the shipping costs themselves. We show that the membership fee collected by the platform does not cover the cost of shipping members’ purchases during the membership period. While it may appear from this finding that the MFS program benefits members and hurts the platform, we show that the MFS program actually benefits the platform when the shipping cost is below a threshold value, and that this threshold is increasing in the commission rate the platform earns from third-party sellers. However, the gain from the MFS program is not necessarily decreasing in the shipping cost. The MFS program always hurts non-members; it may even hurt members. The demand-enhancement, price-increasing, and negative-externality effects of the MFS program explain these results. Our findings imply that judging the success of the MFS program for either the platform or members solely on the basis of the membership fee and shipping cost is misleading, and that the MFS program is most attractive to the platform when the shipping cost is neither too low nor too high. Finally, society can be worse off under the MFS program, because the program may stimulate demand from some low-valuation, high-misfit-cost members who would not make a purchase in its absence, and the surplus enjoyed by these members is offset by the shipping cost borne by the platform.

Seeing the Forest and the Trees: A Meta-Analysis of the Antecedents to Information Security Policy Compliance

by

John D’Arcy

Associate Professor of MIS

Lerner College of Business and Economics, University of Delaware

Friday, April 6, 2018

10:30 AM – noon

Speakman Hall Suite 200

Abstract

A rich stream of research has identified numerous antecedents to employee compliance (and non-compliance) with information security policies. However, the number of competing theoretical perspectives and inconsistencies in the reported findings have hampered efforts to attain a clear understanding of what truly drives this behavior. To address this theoretical stalemate and build toward a consensus on the key antecedents of employees’ security policy compliance in different contexts, we conducted a meta-analysis of the relevant literature. Drawing on 84 quantitative studies of security policy compliance, we classified 299 independent variables into 17 distinct categories and analyzed each category’s relationship with security policy compliance, including an analysis of possible domain-specific moderators. We augmented our meta-analytic assessment of the bivariate relationships between the independent variables and security policy compliance with a relative weight analysis that accounted for several construct intercorrelations. Collectively, our results suggest that much of the security policy compliance literature is plagued by suboptimal theoretical framing. Our findings can facilitate more refined theory-building efforts in this research domain and serve as a guide for practitioners managing policy compliance initiatives.
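
For readers unfamiliar with how such pooled effects are computed, the sketch below shows a generic random-effects meta-analysis of correlations (Fisher-z transform with a DerSimonian-Laird estimate of between-study variance); the study correlations and sample sizes are made up for illustration and are not drawn from the 84 studies analyzed.

```python
# Generic random-effects meta-analysis of correlations (illustrative only).
import numpy as np

def random_effects_meta(rs, ns):
    """Pool study correlations rs (with sample sizes ns) via Fisher-z and
    a DerSimonian-Laird estimate of between-study variance."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)               # Fisher z per study
    v = 1.0 / (ns - 3.0)             # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)   # between-study variance
    w_star = 1.0 / (v + tau2)
    z_re = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return np.tanh(z_re), np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)

# Toy example: correlations between one antecedent and compliance from five studies
mean_r, lo, hi = random_effects_meta(rs=[0.25, 0.31, 0.18, 0.40, 0.22],
                                     ns=[120, 210, 95, 300, 150])
print(f"Pooled r = {mean_r:.2f}  (95% CI {lo:.2f} to {hi:.2f})")
```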