The increasing popularity of human capital analytics has HR professionals contending with more statistics than they bargained for when they entered the field. Is HR equipped to capitalize on the burgeoning human capital analytical research?

My sense, based on questions received from HR practitioners and fellow consultants, is that the research is getting slightly ahead of HR’s ability to absorb it, let alone act upon it. It may not be politic to say this, but we need to close this gap through a better appreciation of how the research is conducted and how much credence to place in the results.

We can’t shy away from the fact that this boils down to getting a better handle on statistics.

At more and more human capital analytics conferences, I’ve noticed that audiences are reluctant to ask presenters about the statistical models underpinning their analyses and recommendations. Instead, there appears to be an eagerness to accept the results without a healthy dose of skepticism. The discussion invariably moves toward relating the results to individuals’ anecdotes.

Audiences are reluctant to ask “troublesome” questions for a number of reasons. These include respect for the presenter’s credentials; not wanting to spoil the wondrous evidence-based story being related; hesitance at side-tracking the discussion onto a boring (i.e., statistical) tangent; avoiding being seen as “geeky” for asking what are viewed as “nerdy” questions; and fear of being viewed as mathematically challenged.

However, many of these fears are unfounded and everyone, audience and presenters alike, would be better served if people asked more such questions. Other audience members will likely be grateful for the question. The response will provide greater clarity and allow everyone to get more out of the presentation. And researchers and presenters need to be kept on their toes and accountable for their results and recommendations.

It is not necessarily a bad thing that the bulk of research presented at HR conferences does not meet academic standards of rigor. After all, practitioners want practical results, not theoretical discussions. However, the flip side is that you sometimes have to take the HR research presentations with a grain of salt.

Let’s review some aspects of HR research in terms of where it falls short of academic rigor. The purpose is not to cast HR research in a poor light, but rather to appreciate the degree to which we ought to feel comfortable acting on the results. HR research need not fulfill all the requirements in full measure, but at least we can ascertain how far off the mark we might be and therefore how many grains of salt to add.

I’ve included some related questions that conference participants might want to ask presenters in the spirit of enriching the discussion and debate when HR research is presented.

1. The analyses have typically not been peer-reviewed. This means that there usually has not been an independent and authoritative quality check on the data, methodology and conclusions. Even with the loftiest credentials, it is possible to omit important considerations, make errors in model estimation or misinterpret results. Review by qualified independent third parties ensures professional standards and avoids potential embarrassment for the researcher.

Questions to ask include: “Is this the first time your results have been presented? What sorts of feedback have you received on your research to date? Were you able to address all their concerns? Have you considered submitting this research to a peer-reviewed journal?”

2. Researchers are not obliged to allow their analyses to be replicated. The ability to replicate results is a scientific standard that requires complete transparency and encourages rigor (does anyone remember the hullabaloo over cold fusion back in 1989, when no one could replicate the scientists’ amazing results?). HR data are proprietary, confidential and sensitive, so there may not be a way around this one. However, recent inquiries into the impact of HR policies on business outcomes often use aggregated, anonymous data, and researchers should share data sets when possible.

Questions to ask include: “Can you share your data set so that I can do some further analysis? If you can’t share your data, could I send you some hypotheses to be tested? Are you aware of results similar to yours from other researchers using comparable data?”

3. Results are typically presented selectively. You seldom hear about the negative results or the research that supports a contrary point of view. Even secondary research in HR, which examines primary research from various sources, neglects to tell the whole story. It would be helpful to get a summary of the current state of research on the topic.

Questions to ask include: “Can you tell us about dead-ends your research may have led to? What are some of the contrary findings in the broader research into this topic?”

4. Correlation is sometimes packaged as causation. Even when the usual disclaimer “correlation does not imply causation” is made, it is hard for the audience not to leave with the impression that an outcome is directly caused by an action. For example, we can’t be sure whether employee engagement causes positive business results or positive business results drive employee engagement. However, investment in employee engagement is typically encouraged on the basis of its impact on business results.

A question to ask is: “How comfortable are you in claiming that there is causation, not just correlation here?”
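The engagement example above can be made concrete with a small simulation. In this hypothetical sketch, an unobserved factor (call it overall “company health”) drives both engagement scores and business results; the two end up strongly correlated even though neither causes the other. All names and numbers here are illustrative assumptions, not real HR data.

```python
# Hypothetical sketch: a lurking "company health" factor drives both
# engagement and business results, producing a strong correlation
# between them with no direct causal link in either direction.
import numpy as np

rng = np.random.default_rng(0)
n = 500
company_health = rng.normal(size=n)                       # unobserved confounder
engagement = company_health + rng.normal(scale=0.5, size=n)
results = company_health + rng.normal(scale=0.5, size=n)  # not caused by engagement

r = np.corrcoef(engagement, results)[0, 1]
print(f"engagement-results correlation: {r:.2f}")         # strong, yet non-causal
```

A presenter who finds a correlation like this has not shown that investing in engagement will move business results; that claim needs a design (or at least an argument) that rules out confounders like the one simulated here.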

5. Multi-variate effects are not adequately treated or explained. Too much research in HR looks at one independent variable at a time. However, the world is not two-dimensional, and multi-variate analysis is more appropriate. When looking at the simultaneous impact of many independent variables, you have to be careful in interpreting each independent variable’s effect on the dependent variable.

Questions to ask include: “How are you controlling for the influence of other relevant factors? What are some of the other relevant factors that you were unable to find data on to include in your model?”
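The danger of looking at one variable at a time can be sketched with a classic omitted-variable example. In this hypothetical setup, training hours appear to predict performance, but tenure drives both; once tenure is included as a control, the estimated training effect shrinks to its true (small) value. The variable names and effect sizes are invented for illustration.

```python
# Hypothetical sketch: omitted-variable bias. Tenure drives both
# training hours and performance, so a one-variable model overstates
# the effect of training; adding tenure as a control corrects it.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
tenure = rng.normal(size=n)
training = 0.8 * tenure + rng.normal(scale=0.6, size=n)
performance = 1.0 * tenure + 0.1 * training + rng.normal(scale=0.5, size=n)

# Naive model: performance ~ training (tenure omitted)
X1 = np.column_stack([np.ones(n), training])
b_naive = np.linalg.lstsq(X1, performance, rcond=None)[0]

# Controlled model: performance ~ training + tenure
X2 = np.column_stack([np.ones(n), training, tenure])
b_ctrl = np.linalg.lstsq(X2, performance, rcond=None)[0]

print(f"training effect, naive:      {b_naive[1]:.2f}")  # inflated
print(f"training effect, controlled: {b_ctrl[1]:.2f}")   # near the true 0.1
```

This is exactly what the “controlling for other relevant factors” question probes: a single-variable chart can look compelling while attributing to one driver an effect that mostly belongs to another.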

6. The exhibits seldom contain the information necessary to judge how much credence to place in the results. Measures of statistical significance such as p-values, t-tests, F-tests and their ilk might not resonate with many participants, but including them at least provides full disclosure and gives anyone looking at the charts at a later date the full picture.

A question to ask is: “Can you please talk about the statistical significance of your results and whether you think these results can be generalized?”
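For readers who want intuition for what a p-value in an exhibit actually measures, here is a hedged sketch using a permutation test on two invented groups. The group labels, sample sizes and score distributions are assumptions for illustration only; the point is that the p-value estimates how often a difference this large would arise by chance alone.

```python
# Hypothetical sketch: a permutation test for a difference in mean
# engagement scores between two illustrative cohorts. The p-value is
# the share of random relabelings producing a difference at least as
# large as the one observed.
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal(loc=3.8, scale=0.6, size=40)  # e.g., trained cohort
group_b = rng.normal(loc=3.5, scale=0.6, size=40)  # e.g., control cohort
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_perm = 5000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                             # random relabeling
    diff = pooled[:40].mean() - pooled[40:].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm

print(f"observed difference: {observed:.2f}, p-value: {p_value:.3f}")
```

A small p-value says the gap is unlikely to be pure noise; it says nothing about whether the gap is large enough to matter, which is a separate question worth asking.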

7. Crucial information about the sample is often glossed over. The size and nature of the sample as well as the sampling methodology have an important bearing on the results. Biases stemming from sample size and sample selection need to be taken into consideration when evaluating the results of the analysis. I’ve raised this issue in a previous post.

Questions to ask include: “Can you please tell us more about your sample? Why did you use this sample? How did you select the participants? Was there any randomization in the selection? Could sample selection be driving your results?”
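One of the sampling pitfalls behind these questions, survivorship bias, can be shown in a few lines. In this hypothetical sketch, unhappy employees are assumed to be more likely to leave, so a survey of current (surviving) employees overstates average satisfaction. The selection mechanism and scale are invented for illustration.

```python
# Hypothetical sketch: survivorship bias. If unhappy employees are
# likelier to leave, surveying only those who stayed overstates the
# true average satisfaction of the full workforce.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
satisfaction = rng.normal(loc=3.0, scale=1.0, size=n)      # rough 1-5 scale
# Assumed mechanism: probability of staying rises with satisfaction.
p_stay = 1 / (1 + np.exp(-(satisfaction - 3.0)))
stayed = rng.random(n) < p_stay

print(f"true mean satisfaction:  {satisfaction.mean():.2f}")
print(f"mean among stayers only: {satisfaction[stayed].mean():.2f}")  # biased upward
```

The same logic applies to surveys answered only by volunteers, exit interviews given only by some leavers, and any other sample where who responds is related to what is being measured.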

8. Adequate context for the research is not provided. A good research product summarizes some of the historical literature and main findings – whether they support the results or not. The context and the researcher’s motivation for examining the topic help us to appreciate the specific approaches used and objectively evaluate the conclusions reached.

Questions to ask include: “Can you please talk about your motivation for exploring this topic? Is there a gap in the general research on this topic?”

9. References are not provided. Unless the research is presented in a peer-reviewed journal, it is unlikely that any sort of bibliography is provided. It is important to know what material the researcher has leveraged and how many of the ideas in the research are original. A list of references offers a number of advantages. It tells us at a glance the depth of secondary research; it alerts us to any biases the researcher might have based on what sources are mentioned; it gives us some comfort that homework has been done; and it provides a reading list for those interested in learning more about the topic.

Questions to ask include: “Can you share a list of references – i.e., articles, websites or books – that you used in your research? Can you suggest a few items for those of us interested in learning more about this topic? Which one article or book would you recommend we read to complement your presentation? Is your idea new?”

10. Results are over-generalized. Research results need to be qualified heavily, since they are often very specific results that hold in specific situations. The results need to be qualified on the basis of sample selection, sample size, modeling assumptions, statistical significance, etc. This can become quite a mouthful when communicating the results, and it is understandable why the focus is on the result, not the qualification. However, we might avoid some mistakes if we check in exactly what circumstances we can typically expect the results to hold.

Questions to ask include: “That’s a very strong statement; do you need to qualify that in any way? Are there situations in which you would not expect your result to hold?”