Doug Bassett Talks Data Interpretation with Front Line Genomics

December 10, 2014

Data interpretation was a big topic of discussion at this year’s ASHG meeting, especially as it relates to translational research and clinical applications. The program was packed with population studies, cancer studies, and other massive-scale hunts for causal variants driving biological change. We were excited to catch up with many of our existing customers to hear what they are doing; we also met many new researchers who told us about their challenges in finding meaningful variation in ever-larger data sets.

Doug Bassett, QIAGEN Bioinformatics CSO & CTO, was among the many team members in San Diego. Front Line Genomics caught up with Doug to get his perspective on addressing the many challenges and exciting opportunities in interpreting next-generation sequencing data. Listen to the full interview here.

“Interpretation is really all about leveraging the content that came before, all the translational research and clinical studies that have been done, and putting the genome into that context, into that framework,” said Bassett. “So that when you see a particular mutation, or constellation of mutations, in a given patient, and one or more of those have been observed before, or affect a pathway that has been linked to disease before, you can very quickly identify that association and do the right thing, whether that’s identifying a novel causal variant for disease, a novel driver variant of cancer, or the right treatment for a given patient.”
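The matching logic Bassett describes, checking each patient mutation against prior observations and disease-linked pathways, can be sketched in a few lines. This is purely illustrative toy code; the variant identifiers, pathway names, and associations below are invented and do not reflect QIAGEN's actual data or schema.

```python
# Toy curated knowledge: variants observed before, and pathways linked to disease.
# All entries are hypothetical examples for illustration only.
KNOWN_VARIANTS = {
    "BRAF:V600E": "previously observed melanoma driver",
}
DISEASE_PATHWAYS = {
    "MAPK signaling": {"BRAF", "KRAS", "NRAS"},
}

def interpret(patient_variants):
    """Return (variant, association) pairs for variants with prior evidence."""
    findings = []
    for variant in patient_variants:
        gene = variant.split(":")[0]
        if variant in KNOWN_VARIANTS:
            # The exact mutation has been observed before
            findings.append((variant, KNOWN_VARIANTS[variant]))
        else:
            # Otherwise, check whether the gene sits in a disease-linked pathway
            for pathway, genes in DISEASE_PATHWAYS.items():
                if gene in genes:
                    findings.append((variant, f"affects disease-linked pathway: {pathway}"))
    return findings

print(interpret(["BRAF:V600E", "KRAS:G12D", "TTN:A123T"]))
```

Here the known BRAF:V600E variant is matched directly, the novel KRAS variant is flagged through its pathway, and the variant with no prior evidence is skipped, mirroring the "quickly identify that association" step in the quote.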

Helping clinicians and translational researchers look at their data in context and get to answers faster is a major priority at QIAGEN Bioinformatics. Over the past two decades, for example, we have built the Ingenuity Knowledge Base as a horizontally and vertically structured database that pulls in relevant scientific and medical information and describes it consistently, making the data interoperable and computable so you can interpret your results.

The Ingenuity Knowledge Base draws on scientific journals, publicly available molecular content databases, textbooks, and more. Our expert curators manually review top-tier scientific literature, pulling out key details to ensure that data is captured with full context; information is gleaned from the entire paper, including figures. Once curated, data is integrated into the Ingenuity Knowledge Base using our proprietary ontology to ensure that information is represented consistently. The integration process also structures data so you can query, visualize, and compute across it.
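The value of that final structuring step can be sketched with a toy example: once findings curated from different papers are normalized into one consistent schema, a single query spans all sources. The schema, ontology terms, and records below are invented for illustration and are not the actual Ingenuity Knowledge Base representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    subject: str    # normalized gene identifier
    relation: str   # controlled-vocabulary relation term
    obj: str        # normalized disease, pathway, or drug term
    source: str     # provenance recorded by the curator (hypothetical)

# Findings curated from different (invented) papers, all in one schema
CURATED = [
    Finding("EGFR", "upregulated_in", "lung adenocarcinoma", "paper A, Fig. 2"),
    Finding("EGFR", "targeted_by", "erlotinib", "paper B"),
    Finding("TP53", "mutated_in", "lung adenocarcinoma", "paper C"),
]

def query(relation=None, obj=None):
    """Filter curated findings by any combination of fields."""
    return [f for f in CURATED
            if (relation is None or f.relation == relation)
            and (obj is None or f.obj == obj)]

# Because every record uses the same normalized terms, one query
# retrieves evidence curated from multiple papers and figures at once
for f in query(obj="lung adenocarcinoma"):
    print(f.subject, f.relation, f.source)
```

The point of the sketch is the design choice, not the code: consistent, ontology-backed terms are what let findings extracted from thousands of separate papers be queried and computed over as a single resource.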