The authors studied 2,096 research articles from nine well-established management journals (covering all tiers of journal quality), focusing on the reporting of nonresponse analysis. In this study, they found that:

Studies with lower response rates and studies with executive samples are more likely to report nonresponse analyses.

Survey studies that are more likely to include a nonresponse analysis share these characteristics: (i) studies in higher tier journals; (ii) studies in journals with shorter review times; (iii) studies in journals with higher rejection rates; and (iv) studies in journals with fewer citations.

Other interesting points of discussion in this paper:

Although greater response rates lower the probability of nonresponse bias, studies with high response rates may still suffer from nonresponse bias if there are substantial differences between respondents and nonrespondents on important variables.

It takes a response rate of 85% to conclude that nonresponse error is not a threat; therefore, researchers should provide both empirical and theoretical evidence refuting nonresponse bias whenever the response rate is less than 85%.[1]

The quality of a nonresponse analysis depends on these issues: (i) how many variables are compared; (ii) the convergence of the findings; (iii) the relevance of those variables; (iv) the nature of the comparisons; (v) the nature of the population; and (vi) the statistical tests used. [Note: In a previous entry, there is a ‘quality rating’ for each individual nonresponse bias technique. It is a very interesting point to consider when we plan a nonresponse bias analysis. Click here for quick reference].

[1] This argument is the opposite of what Rogelberg & Stanton (2007) suggested [Note: look at my previous entry here to see what Rogelberg & Stanton (2007) said about the requirement to analyze nonresponse bias].

Previously I thought (and had been taught) that when our survey response rate is considerably high – say, 85% or 90% – we don’t have to worry about the probability of nonresponse bias[1] (we are supposedly at ‘rest’ from the duty to detect and estimate the extent of nonresponse bias). Only after reading this paper did I realize that my previous belief was wrong, totally wrong! This paper argues that “…researchers should conduct a nonresponse bias impact assessment, regardless of how high a response rate is achieved” and that they have to provide “good information about presence, magnitude, and direction of nonresponse bias” in the research report.

Needless to say, all researchers dream of achieving a 100% response rate in their survey studies. In reality, however, it is almost impossible to achieve. Many strategies and techniques have been developed to increase the response rate, but none of them can guarantee the total absence of nonresponse. Listed below are the well-known techniques suggested in the literature:

Pre-notify participants

Publicize the survey

Design carefully

Provide incentives

Manage survey length

Use reminder notes

Provide response opportunities

Monitor survey response

Establish survey importance

Foster survey commitment

Provide survey feedback

The core contribution of this paper is the full list of the nonresponse bias impact assessment strategy (N-BIAS), which consists of “a series of techniques that when used in combination, provide evidence about a study’s susceptibility to bias and its external validity”. Listed below are the nine techniques in the N-BIAS method:

Demonstrate Generalizability – researchers triangulate their findings by replicating studies with different methods. Findings that are consistent across the replication studies demonstrate an absence of nonresponse bias. [Quality Rating: ****][2]

Passive Nonresponse analysis – researchers include questions (in the survey) that tap into factors related to passive nonresponse (such as workload, busyness, etc.) and later examine the extent to which these factors are related to the key variables of the study. If they detect a systematic pattern, then the survey results are susceptible to bias. [Quality Rating: ***]
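A minimal sketch of how the passive-nonresponse check might look in practice. All data here are invented for illustration: a hypothetical ‘busyness’ item is correlated with a hypothetical key variable, and a large correlation flags susceptibility to bias (the 0.30 cut-off is my own illustrative choice, not from the paper):

```python
# Passive-nonresponse sketch: correlate a survey item tapping a passive-
# nonresponse factor (hypothetical "busyness" ratings) with a key study
# variable. A sizable correlation suggests susceptibility to bias.
from statistics import mean, stdev

busyness = [2, 5, 3, 6, 4, 7, 1, 5, 6, 3]          # hypothetical 1-7 ratings
job_satisfaction = [6, 3, 5, 2, 4, 2, 7, 3, 2, 5]  # hypothetical key variable

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(busyness, job_satisfaction)
print(f"r = {r:.2f}")
if abs(r) > 0.30:  # illustrative cut-off, not from the paper
    print("systematic pattern detected -> susceptible to nonresponse bias")
```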

Interest Level analysis – researchers include questions (in the survey) that may indicate the respondents’ interest in topics related to the study and later examine the extent to which their interests are related to the key variables of the study. If they detect a systematic pattern, then the survey results are susceptible to bias. [Quality Rating: ***]

Archival analysis – this technique can be done only if an archival database is available, so that the researchers can compare respondents to nonrespondents on some variables (variables that are directly related to actual responses on the research topics). If the researchers observe archival differences, and those differences show a systematic pattern relative to the responses on the survey topics, the survey results are susceptible to bias. [Quality Rating: ***]
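One possible shape of the archival comparison, assuming a categorical archival variable is available for the whole sampling frame. The department counts below are entirely invented; the check is a standard chi-square test of homogeneity on respondents vs. nonrespondents:

```python
# Archival-analysis sketch: compare respondents and nonrespondents on a
# categorical archival variable (hypothetical department counts) using a
# chi-square test of homogeneity.
respondents    = {"sales": 60, "engineering": 30, "hr": 10}
nonrespondents = {"sales": 30, "engineering": 55, "hr": 15}

def chi_square(obs_a, obs_b):
    """Chi-square statistic for a 2 x k table of group-by-category counts."""
    total_a, total_b = sum(obs_a.values()), sum(obs_b.values())
    grand = total_a + total_b
    chi2 = 0.0
    for k in obs_a:
        col = obs_a[k] + obs_b[k]
        for obs, tot in ((obs_a[k], total_a), (obs_b[k], total_b)):
            exp = col * tot / grand           # expected count under homogeneity
            chi2 += (obs - exp) ** 2 / exp
    return chi2

chi2 = chi_square(respondents, nonrespondents)
print(f"chi-square = {chi2:.1f} (df = 2)")  # .05 critical value at df=2 is 5.99
```

With these made-up counts the statistic far exceeds the critical value, which in a real study would flag a systematic respondent/nonrespondent difference worth investigating.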

Follow-up approach – the researchers conduct another survey, but this time the sample is selected randomly from the nonrespondents (those who did not participate in the actual survey). The researchers then compare respondents and nonrespondents on the key variables of the study. If differences occur, then the survey results are susceptible to bias. However, if the follow-up survey uses a different method of solicitation and data collection, researchers should ensure that any observed differences are not due to method effects. [Quality Rating: **]
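A hedged sketch of the follow-up comparison with invented scores: a Welch t statistic compares the original respondents against the follow-up sample of initial nonrespondents on one key variable. The normal approximation to the p-value is only adequate for reasonably large samples; a real analysis would use the t distribution:

```python
# Follow-up-approach sketch: Welch t comparison of respondents vs. a
# follow-up sample of initial nonrespondents on a hypothetical key variable.
from math import erf, sqrt
from statistics import mean, variance

respondents    = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]  # hypothetical scores
nonrespondents = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7]  # follow-up sample

def welch_t(a, b):
    """Welch t statistic for two independent samples with unequal variances."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(respondents, nonrespondents)
p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))  # two-sided, normal approx.
print(f"t = {t:.2f}, p ~ {p:.4f}")
if p < 0.05:
    print("groups differ -> results susceptible to nonresponse bias")
```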

Active Nonresponse analysis – researchers conduct interviews with randomly selected members of the population to roughly estimate the number of active nonrespondents (those who are explicitly reluctant to participate in a survey during the solicitation stage). If the proportion of active nonrespondents is greater than 15% of the total number of interviewees, then the survey results are susceptible to bias. [Quality Rating: **]

Worst Case Resistance – using a data simulation technique, researchers explore ‘how resistant their actual data set is to worst-case responses from nonrespondents’. They need to show that the proportion of nonrespondents that ‘would have to exhibit the opposite pattern of responding to adversely influence sample results’ is implausibly large. [Quality Rating: **]
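My reading of this technique, sketched with invented numbers: assume every nonrespondent would have answered at the opposite extreme of the scale, then find how large the nonrespondent share would have to be before the pooled mean flips the substantive conclusion. If that share is implausibly large, the data set is resistant to worst-case nonresponse:

```python
# Worst-case-resistance sketch: grow the pool of worst-case (opposite-extreme)
# nonrespondents until the pooled mean crosses the scale midpoint, then report
# the nonrespondent share needed to overturn the conclusion.
observed_mean = 4.2   # hypothetical sample mean on a 1-5 scale
midpoint = 3.0        # conclusion holds while the pooled mean stays above this
worst_case = 1.0      # scale minimum: the "opposite pattern" response
n_respondents = 200   # hypothetical achieved sample size

for n_missing in range(0, 1001):
    pooled = (observed_mean * n_respondents + worst_case * n_missing) \
             / (n_respondents + n_missing)
    if pooled <= midpoint:
        break

share = n_missing / (n_respondents + n_missing)
print(f"conclusion flips once worst-case nonrespondents reach {share:.0%} of the sample")
```

With these numbers it would take 120 uniformly worst-case nonrespondents (37.5% of the pooled sample) to overturn the result, which is the kind of figure a researcher would then argue is or is not plausible.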

Wave analysis – researchers divide respondents into two groups: (i) early respondents and (ii) late respondents. The researchers then compare the two groups on the key variables of the study. If differences occur, then the survey results are susceptible to bias. [Quality Rating: *]
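A small sketch of wave analysis with made-up data: respondents are split (e.g., at the median return date) into early and late waves, and each key variable is compared across waves with a standardized mean difference (Cohen’s d with a pooled SD, assuming equal group sizes). The 0.5 flagging threshold is an illustrative convention, not from the paper:

```python
# Wave-analysis sketch: compare early vs. late respondent waves on each key
# variable via Cohen's d; a large |d| flags a potential nonresponse problem.
from math import sqrt
from statistics import mean, variance

early = {"commitment": [4.0, 4.2, 3.9, 4.1, 4.3], "tenure": [5, 7, 6, 8, 6]}
late  = {"commitment": [3.1, 3.3, 3.0, 3.4, 3.2], "tenure": [5, 6, 7, 6, 7]}

def cohens_d(a, b):
    """Standardized mean difference with pooled SD (equal group sizes)."""
    pooled_sd = sqrt((variance(a) + variance(b)) / 2)
    return (mean(a) - mean(b)) / pooled_sd

for var in early:
    d = cohens_d(early[var], late[var])
    flag = "check for bias" if abs(d) > 0.5 else "no large difference"
    print(f"{var}: d = {d:.2f} ({flag})")
```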

Benchmarking analysis – researchers collect data using a well-established measure (with known norm properties) and later compare the data obtained with the published norms. If the comparison reveals any systematic differences, then the survey results are susceptible to bias. [Quality Rating: *]
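An illustrative benchmarking check, where both the sample scores and the published norm value are invented: the sample mean on the established instrument is tested against the norm with a one-sample z-style statistic (a real analysis with a small n would use a t test instead):

```python
# Benchmarking sketch: test the sample mean on an established measure against
# its (hypothetical) published norm with a one-sample z-style statistic.
from math import sqrt
from statistics import mean, stdev

sample = [3.2, 3.5, 3.1, 3.4, 3.6, 3.0, 3.3, 3.5, 3.2, 3.4]  # hypothetical scores
norm_mean = 3.8  # hypothetical published norm for the instrument

z = (mean(sample) - norm_mean) / (stdev(sample) / sqrt(len(sample)))
print(f"z = {z:.2f}")
if abs(z) > 1.96:  # conventional .05 two-sided threshold
    print("systematic difference from the norm -> susceptible to bias")
```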

[1] In Werner, Praxedes and Kim (2007) [Note: I’ll post my notes from this paper to this weblog later], researchers are required to provide “both empirical and theoretical evidence refuting nonresponse bias whenever the response rate is less than 85%”. [Note: I’m now collecting more evidence from the literature on nonresponse bias analysis to gain more insight into the related issues. I’d be very glad if any visitors could give some input on this matter.]

[2] The ‘quality rating’ indicates the quality of each N-BIAS technique. It was qualitatively assessed by the authors on the basis of the conclusiveness of the evidence provided by the individual technique.