by Scout, Director, Network for LGBT Health Equity
A project of The Fenway Institute in Boston, MA

SCIENCEBABBLE ALERT – This is a meeting for scientists; despite my efforts, some of this may get technical.

411 on the issue

Probability sampling = recruiting a group of people for your research where everyone in the full population of interest has a known chance of being selected (e.g., a random draw). The statistics then support drawing conclusions about the full population based on the info from this random subgroup. (Like if 50% of your probability sample of LGBT people parachute, you can confidently say about 50% of all LGBT people parachute.)

Non-probability sampling = any non-random sample of people. (Like if you do a survey at pride, it’s a non-probability sample.) Unfortunately, the statistics then don’t support generalizing these findings to the full population, because bias might have snuck in. (Like, maybe pride participants aren’t as closeted as other LGBT people, so even if 50% of your sample are in LGBT parachuting clubs, you can’t say 50% of all LGBT people are.)
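For the statistically curious, the parachuting example can be sketched as a quick simulation. All the numbers here are made up purely for illustration: we assume a hypothetical population where 10% parachute, and where parachuters are more likely to show up at pride. A random (probability) sample recovers the true rate; the pride (convenience) sample overshoots it.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people; 10% parachute overall.
# Assumption (made up): parachuters are 3x as likely to attend pride.
population = []
for _ in range(100_000):
    parachutes = random.random() < 0.10
    attends_pride = random.random() < (0.60 if parachutes else 0.20)
    population.append((parachutes, attends_pride))

# Probability sample: every person has an equal chance of selection.
prob_sample = random.sample(population, 1_000)

# Non-probability (convenience) sample: survey only the people at pride.
pride_goers = [p for p in population if p[1]]
conv_sample = random.sample(pride_goers, 1_000)

def rate(sample):
    """Fraction of a sample who parachute."""
    return sum(p[0] for p in sample) / len(sample)

print(f"True population rate: {rate(population):.1%}")
print(f"Probability sample:   {rate(prob_sample):.1%}")  # close to the true rate
print(f"Convenience sample:   {rate(conv_sample):.1%}")  # biased upward
```

The convenience sample isn't wrong about pride-goers; it's wrong as a stand-in for the whole population, which is exactly the generalization problem the panel is discussing.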

Why’s this a big issue? Probability sample data is the gold-standard for drawing conclusions, but we have much less of this for LGBT people, mostly because LGBT measures aren’t included on the monster federal surveys that are the big probability studies.

Panel Members:

Dan Kasprzyk, Ph.D., Vice President of NORC (which I realize is so well known as one of the 2 fanciest survey shops that his bio doesn’t even say what NORC stands for… so just know, NORC = surveys)

Melissa Clark, Ph.D., Brown University Department of Community Health

Margaret Rosario, Ph.D.

Jeffrey Parsons, Ph.D., Hunter College

The Panel

Dr. Kasprzyk led the panel off by talking about some of his interesting experiences as part of the Institute of Medicine committee for the recent LGBT report. He emphasized that the choice of probability or non-probability might really not be as important as the reporting and impact of any well-designed study, regardless of the methods chosen. Then he moved on to the federal surveys. “If the federal gov’t added LGBT measures to the American Community Survey, then allowed oversampling, that alone would allow the community to target populations, whether it’s regional, city, rural, you name it, and we’d be much better off. But we have to go beyond NHANES, you have to get on other surveys, NHIS and especially the Labor Force Survey would be very valuable.” He emphasized how important it was to get measures on these large full-probability surveys, “because otherwise you remain invisible.”

“Probability data is very important, it is the gold standard, in Washington, that’s what people are going to listen to. I think the real advancement in healthcare policy comes from really pushing hard with the federal government to have these questions on those surveys, and that point cannot be diminished. I think it’s really important that we actually stay focused on the federal government and become part of that health policy debate.” Dr. Kasprzyk

Dr. Clark followed (that’s Melissa to you and me) and led off by echoing all of Dr. Kasprzyk’s points. She said, “That’s usually how I end every talk I give about sexual minorities, I say ‘please help us get these questions added.'” She talked about her experience at Brown University and how hard she’s been working to get the non-LGBT researchers to include LGBT measures. Through this effort, she’s managed to take one of the IOM report recommendations and institutionalize it: “Now when there’s a new study, people have to either include sexual minorities or explain why they are not.” Kudos to Melissa, let’s hope NIH follows suit!

Next up was Margaret Rosario. She warned us that while probability samples are important, most of our real explanatory data will come from non-probability samples: because they are so much cheaper, they have more latitude to go much deeper into issues, explore causal models, etc. For her, the bottom line is that either approach can be useful; it’s often an issue of cost. If we have the chance to do the higher-cost full-probability samples, excellent; if not, let’s just do excellent non-probability studies. Lastly, she also weighed in on the importance of getting LGBT measures on the large surveys: “For the probability studies, please please, whatever we can do to get questions on there, to be able to identify the population as best we can, we should definitely do that.”

The panel was rounded out by Jeff Parsons. He talked about how it always seems there’s a flavor of the day at NIH, the newest rage for sampling, some of which are just never really viable in the field. “You can’t just count every 9th person who goes in the bar and pull them for the study, it doesn’t work.” Tonda Hughes from UIC echoed that sentiment, noting that one popular method, Respondent Driven Sampling, has never worked for her in samples of women.

As the discussion opened up to audience comments, there was an interesting suggestion from Jim McNally, a director at ICPSR at the University of Michigan (the Inter-university Consortium for Political and Social Research, probably the largest data library in the country): “We recommend people work to create a small strong full probability sample and then ask the same questions you have on the federal surveys. That way you have policy strength to compare to the federal questions.”