Clinical Research Networks: Building the Foundation for Health Care Transformation

Meeting Summary

Welcome and Introduction

Barbara Alving, M.D.
Director, National Center for Research Resources (NCRR), National Institutes of Health (NIH)

Dr. Barbara Alving provided a brief welcome and introduction to the final meeting of the Roadmap Clinical Research Investigators. She spoke about the original goal of the Roadmap's National Electronic Clinical Trials and Research (NECTAR) network: promoting and expanding common informatics platforms so that complex research programs can conduct clinical studies and trials more effectively. The transition of NECTAR allowed for the development of the Clinical and Translational Science Awards (CTSA) and the expansion into community outreach, training, and informatics. Dr. Alving commented that the Clinical Research Network website is linked to the CTSA website to inform others of how the work is being done. In her concluding remarks, Dr. Alving noted that the work that evolved from NECTAR is setting a higher standard for the work of the CTSAs and will continue to play a vital role in future research.

Dr. Silverstein introduced the topic by presenting the technological challenges of moving from interfaces to service-oriented architectures; of moving data to warehouses or to federated systems in which data remain close to their source; of standards, particularly controlled vocabularies and shared information models; and of collaboration systems that permit researchers to work on common sources of information. He elaborated on the primacy and challenge of context and on what may be lost, or become misleading, in the interchange. He illustrated the point with examples: the differences between inpatient and outpatient data; the ability of data federation to recover additional, unanticipated context when needed by going back to the source; and the importance of "bottom-up" standard data elements and metadata that support understanding of provenance. He summarized the socio-technological challenges that exist within systems and policies: the conflicting needs for standards and for flexibility, collecting data with a purpose, local legal considerations, and virtual organizations.

Dr. Christopher G. Chute presented observations about interoperability and standards requirements. He observed that the research world may achieve interoperability more quickly than the clinical data world. As biomedicine and interoperability requirements expand, conducting research as a community emerges as a practical business case. Building this case will require the appropriate mix of institutional oversight (top-down effort) and project need (bottom-up effort). Such an effort will engage the faculty and the community. caBIG has initiated this approach in cancer research and has stated that it is willing to make the bioinformatics grid available to non-cancer researchers.

Dr. Lee Green described the Michigan Clinical Research Collaboratory (MCRC). Its structure is a federated model in which an honest broker operates between data sources, each of which keeps its own data; the honest broker's function is to transfer data between the sources. The systems involved were the Center for the Advancement of Clinical Research, now MICHR, the university's CTSA institute; BMC2, a statewide cardiac catheterization registry sponsored by the Blue Cross Blue Shield organization, which collects data on acute coronary events for which people undergo emergent catheterization at participating hospitals; a depression disease management system; and ClinfoTracker, a clinical reminder and disease registry system for primary care practices, which was available to members of the Great Lakes Research Into Practice Network (GRIN), a primary care network.

Velos eResearch, the clinical trial support system housed at the university, received data from the federated systems based on research consent as identified by the honest broker. The honest broker translated information between systems and confirmed identity but did not store patient data; it served only to exchange data between the systems as appropriate. The system was built on the HL7 Version 2.3, LOINC, SNOMED, ICPC-2e, ICD-9/10, CPT-4, and XML messaging standards.
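The honest-broker pattern described above can be sketched in a few lines of Python. This is purely illustrative, not the MCRC implementation: the class, the salt-based pseudonym scheme, and the message shape are all invented for the sketch. The essential property it demonstrates is that the broker links a patient's records across systems without itself retaining any clinical data.

```python
# Illustrative honest-broker sketch (hypothetical; not the MCRC system).
# The broker maps site-local patient IDs to a study-wide pseudonym and
# forwards messages between systems; no clinical data is retained.
import hashlib

class HonestBroker:
    def __init__(self, salt: str):
        self._salt = salt  # secret held only by the broker

    def pseudonym(self, site: str, local_id: str) -> str:
        # Deterministic: the same patient at the same site always maps to
        # the same study ID, but the mapping cannot be reversed without
        # the broker's secret salt.
        digest = hashlib.sha256(f"{self._salt}:{site}:{local_id}".encode())
        return digest.hexdigest()[:16]

    def forward(self, site: str, local_id: str, payload: dict) -> dict:
        # Re-key the message under the pseudonym; the local identifier
        # is dropped before the message leaves the broker.
        return {"study_id": self.pseudonym(site, local_id), "data": payload}

broker = HonestBroker(salt="demo-secret")
msg = broker.forward("clinic_a", "MRN-0042", {"loinc": "2345-7", "value": 5.4})
```

In a real deployment the payload would be an HL7 v2.3 or XML message rather than a dictionary, and the ID mapping would typically be a managed linkage table rather than a hash, but the division of responsibility is the same.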

The challenges encountered included the heterogeneity of technology; the silo culture of research centers; systems designed neither around standards nor to interoperate; policy barriers and bureaucratic burdens arising from a system designed for single-site research; and the fact that clinical practices, as businesses, are typically poorly understood in academe.

The lessons learned focused on acknowledging and understanding the complexity of messaging, transport, and encoding standards, and of machine- versus human-readable encodings. "Standards" were often interpreted differently, but most significantly, informatics integration paled in comparison to human collaboration. Ultimately, engaging the various groups in the adoption of common standards provided the greatest challenge and the greatest benefit. The final result, increased enrollment, was achieved through intensive human and informatics collaboration.

Dr. Kevin Peterson described the purpose of the electronic Primary Care Research Network (ePCRN) architecture, which enhances research participation by enabling the identification of eligible subjects from electronic health records. It manages clinical research in the community and integrates the community provider with the academic clinical research enterprise. The ePCRN creates a gateway to the local clinic, which in turn can use that gateway to develop a number of different local applications, such as quality improvement applications or research tools. The ePCRN delivers information to that gateway so that new findings can be moved out and introduced into the local clinic. This enables clinics to identify patients who are eligible for clinical trials and to run those trials using a different architecture. A grid architecture, using the Globus and OSG frameworks, makes all of these clinics look like a single virtual database; the clinics can be accessed through the research portal and are administered by regional research network directors. Dr. Peterson concluded by directing audience members to the ePCRN website for further information.
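The federated-query idea behind such a grid of clinic gateways can be illustrated with a toy sketch. The function names and record shapes below are hypothetical, and a real grid would use secure service calls rather than in-process function calls; the point is only that each clinic evaluates eligibility locally and returns an aggregate count, so patient-level data never leaves the practice.

```python
# Hypothetical sketch of federated eligibility counting across clinic
# gateways (illustrative only; not the ePCRN/Globus implementation).
from typing import Callable

Patient = dict
Criteria = Callable[[Patient], bool]

def local_count(records: list, eligible: Criteria) -> int:
    # Runs inside the clinic gateway; raw records stay local.
    return sum(1 for r in records if eligible(r))

def federated_count(clinics: dict, eligible: Criteria) -> dict:
    # The research portal sees only per-clinic aggregate counts.
    return {name: local_count(recs, eligible) for name, recs in clinics.items()}

clinics = {
    "clinic_a": [{"age": 55, "dx": "asthma"}, {"age": 30, "dx": "copd"}],
    "clinic_b": [{"age": 61, "dx": "asthma"}],
}
counts = federated_count(clinics, lambda p: p["dx"] == "asthma" and p["age"] >= 50)
# counts == {"clinic_a": 1, "clinic_b": 1}
```

Once counts confirm feasibility, recruitment itself proceeds inside each clinic, which preserves the local control and consent processes the presenters emphasized.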

Dr. Landis' comments began with an acknowledgment that three NCRR funding mechanisms (SIG, HEI, and the BAA CRN Roadmap Program) enabled the Penn Clinical Research Computing Unit (CRCU) to secure the technology (hardware, software, and licensing) necessary to implement the Roadmap initiative at Penn. The CRCU partnered with Oracle Corporation and used Oracle Clinical (OC) software as the backbone of the initiative.

Six Penn investigators were carefully chosen to participate in a pilot project that, utilizing the National Cancer Institute (NCI) caBIG approach to data structure, was designed to bring "institutional value" to diverse, informal datasets. The investigators (and data) chosen spanned a wide range of specialties, including infectious diseases, immunology, endocrinology, and cardiology. Tools such as the NCI-sponsored OC Global Library were used to analyze the datasets in the CRCU's OC environment. New Case Report Forms (CRFs) were developed utilizing Common Data Elements (CDEs) from the global library; newly developed CDEs were inserted into Penn's OC Library for re-use.

The pilot developed a library containing dozens of new CRFs and hundreds of new CDEs. In addition, 35 CRFs and more than 330 CDEs were re-used from the global library. Development time was reduced with each successive trial, the size and diversity of the global library's clinical content increased, and data were brought into alignment with CDISC data standards.
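The reuse pattern behind these numbers can be sketched abstractly. The toy class and field names below are invented for illustration and do not represent the Oracle Clinical library or the caDSR API; they show only why a CRF assembled from a shared element library gets cheaper to build with each successive trial.

```python
# Toy sketch of common-data-element (CDE) reuse (hypothetical names;
# not the OC Global Library or caDSR interface).
class CDELibrary:
    def __init__(self):
        self._cdes = {}   # element name -> definition
        self.reused = 0
        self.created = 0

    def get_or_create(self, name: str, definition: dict) -> dict:
        # Prefer an existing library element; register a new one otherwise.
        if name in self._cdes:
            self.reused += 1
            return self._cdes[name]
        self.created += 1
        self._cdes[name] = definition
        return definition

def build_crf(library: CDELibrary, fields: dict) -> dict:
    # A case report form is assembled from library-managed elements,
    # so later trials inherit the same definitions.
    return {name: library.get_or_create(name, d) for name, d in fields.items()}

lib = CDELibrary()
build_crf(lib, {"systolic_bp": {"unit": "mmHg"}, "hba1c": {"unit": "%"}})
build_crf(lib, {"systolic_bp": {"unit": "mmHg"}, "ldl": {"unit": "mg/dL"}})
# The second form reuses systolic_bp: lib.reused == 1, lib.created == 3
```

The same mechanism is what aligns data across trials: two studies that pull `systolic_bp` from the library necessarily record it with the same definition and units.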

Dr. Landis concluded that the CRCU was able to use this opportunity to put a framework in place to provide tools (where previously there were none) for many investigators conducting a wide variety of studies. The growing global library can now be accessed and utilized, and forward progress can be maintained.

Dr. Silverstein opened the discussion for questions.

Select Questions and Answers

Question - Dr. Silverstein to Dr. Landis: With these new report forms and elements, you're building elements that are useful in other things. Can they then be used in organizations that maybe don't have Oracle Clinical? How interoperable are those hundreds of hours of work?

Answer - Dr. Landis: We have very limited resources to move it to a more general area in terms of interoperability. They have been developed in the CDISC format, and we are putting them into the Penn OC Library. We are willing to share them back to the caBIG Library, which would immediately make them available through caDSR to anyone who is using the caBIG library. With the proper refinements, where we would do formal certification of each variable and load it back into the central library at NIH, it would certainly, within that framework, be reusable. We haven't taken that step yet.

Question - Dr. James Kahn to the Panelists: You spent a lot of time focusing on the elements and forms, but one of the challenges might be: how do you know that the data being put in are accurate, and what steps have you taken to go back and look at the accuracy of the data that get submitted?

Answer - Dr. Landis: In terms of the curation process, in our early pilot projects we did standard query, data-cleaning kinds of work with the investigator's research group. The last project, in ophthalmology, has a very extensive set of validation rules built into it and a lot of on-site data querying that takes place instantaneously in the data entry phase of the project. As much as possible, we have the on-site research staff verify, at study launch, the most critical data elements for that kind of validation. In terms of curating the variables and moving them into the library to share with the world, we haven't gotten to that step yet.

Answer - Dr. Green: We had a similar approach in the MCRC. The data we collected ourselves were quality-checked at the field level by the research associate who was collecting them, and the curation of those data was our responsibility. But an enormous fraction of the data in our research data set came from the federated data entities. In our federated model, ultimate responsibility for curating those data rested with the sources of the data.

Answer - Dr. Peterson: In the primary care environment, data quality is a process question rather than a technical question. It's a training process, a staffing process, and a model for how you structure your research organization within a given clinical practice.

Question - Dr. Kahn to Dr. Chute: You talked about business models. Could you talk about how the two internet and software giants' approaches to health data elements are going to influence what we're doing?

Answer - Dr. Chute: Regarding the Google PHR and Microsoft HealthVault, both organizations have specified that they want to be customer-driven and, as a consequence, are not imposing data standards or interoperability requirements. A lot of health providers are starting to push back against that and say that if we are supposed to populate these PHRs from our EMRs, it would be very elegant if the data we put in could be gotten out again in a way we can use in our environment. There's tension there, because the primary customer is the American health consumer, and the goal is to make the data human-readable. I think they'll win in the end, and the impact on practical data standards and interoperability will be less than it might have been.

Answer - Dr. Silverstein: An important distinction in those efforts is the tool versus the data. In an ideal circumstance, with progress on standards and related work, it won't matter much whether a particular piece of information is contained within the electronic health record of a major hospital or health organization, in a physician's office system, or in the hands of individual patients. But if we move forward very quickly in making poorly standardized data widely available, the tools will march much faster than the data standardization, and we'll get what we've gotten with the Internet: all sorts of data are out there, and one has to search and then chip away to find what one wants. There's no tragedy in the tools and the data moving at different rates, but we don't arrive at nirvana until both have matured sufficiently.

Question - Ms. Shumei Sun, Virginia Commonwealth University: The National Center for Health Statistics has a large data dictionary and standardized measurements from a survey it has been conducting since 1960. Why aren't these standardized measurements used in the clinical trial network and community?

Answer - Dr. Chute: Within the CDC, a new National Center for Public Health Informatics has been established with the explicit goal of advancing interoperability within public health, and within NCHS in particular, so that it interoperates with the emerging biomedical community.

Integrating and Expanding Research Networks to Transform the Clinical Research Enterprise

Dr. Krensky began by reviewing the history of the Roadmap Program, where it is going, and its relationship with the CTSAs. The NIH Roadmap is, by design, a dynamic program intended to have revolving areas of emphasis. Dr. Krensky noted that Re-engineering the Clinical Research Enterprise was one of the original key themes. The vision for the initiative was to integrate clinical research networks; the idea behind the National Electronic Clinical Trials and Research (NECTAR) network was an electronic clinical trials and research network that would link existing networks and ensure that patients, physicians, and scientists would form communities of research, paving the way for the CTSAs. The Roadmap has now evolved into 14 initiatives, and all of the concepts that were part of this Clinical Research Network are fundamental to the Clinical and Translational Science Awards (CTSA) Consortium. Dr. Krensky noted that although the CTSAs are focused on academic institutions, the CTSA Program's goals include interacting and forging new partnerships with private and public health care organizations and enhancing community engagement. The CTSAs build homes for clinical and translational science and emphasize internal and inter-institutional, nationwide interoperable informatics.

Dr. Krensky concluded by outlining the next steps, which include publicizing the Clinical Research Network's successes and lessons learned and promoting and supporting the utilization of the CRN infrastructure in the CTSAs and other emerging research networks.

CASE STUDIES 2: INTEGRATIVE INFORMATICS IN SUPPORT OF TRANSLATIONAL RESEARCH

Dr. Jon White and Dr. Edwina Barnett developed a hypothetical case for presentation to the panel members. For the purpose of the panel discussion, Dr. White instructed the panel that the focus of the case was to be on the process of how information tools and systems enable researchers to do their work and mesh with the clinical systems where they are located.

Phase 2 - clinical effectiveness: the advance of tested ideas from clinical science, including tools such as health services research, effectiveness research, and comparative research;

Phase 3 - clinical effectiveness to systemic effectiveness. This involves tools of measurement, accountability, scaling and spread, implementation and system redesign, learning networks, and research beyond the academic center.

Dr. Barnett presented a hypothetical research case study about a rapidly developing lethal viral infection resistant to all known therapies, and the challenges faced by institutional research disease networks of clinicians and investigators who wanted access to databases containing viral resistance data, as well as by private-practice community physicians who were also participating in the study. Technical and analytic tools were developed to import viral resistance data directly from clinical laboratories into the electronic medical record at the clinical sites. A universal registry and reminder system was also developed to integrate research recruitment and study management. A Virtual Data Warehouse (VDW) was re-used to map legacy data to common data structures and to support frequent extraction and aggregation of health plan data. In addition, this federated database had the advantage of adopting common coding schemes, and it facilitated compliance with IRB and HIPAA regulations.

Highlighted in the case study were technical and human-factor issues related to importing clinical laboratory data into electronic medical records and integrating the sites' systems with the research network. Specific technological challenges included secure collection and transfer of data, standardized reporting, timely updating of databases, scalability, maintenance, system sustainability, and quality. Human-factor challenges included workflow design and reusable management tools, staff availability, cost, user acceptance of technical tools, and the involvement of all stakeholders in the translational process.

An interactive discussion was held, and the panelists commented on some of these issues as they pertained to their specific Roadmap projects. Addressed were the challenges of working with different constituencies of investigators and clinicians, and the attention that must be given early on to minimize disruptions to the research process from both groups; investigators' interest in patient-specific outcomes and results versus population-based outcomes and results; and the observation that complicated technological issues are easier to solve than human-related issues. Also discussed were issues related to regulatory obstacles, data stewardship, emerging technologies, reduction of burden through information systems, and the communication and information exchange barriers that exist among technical staff, investigative staff, and clinical staff.

Dr. Kohane presented the topic "Transforming the Clinical Research Enterprise." He discussed a wide variety of topics, including the efficiencies of clinical research on common diseases and large populations versus rare diseases and small populations, which result in controlled, uniform data collection or in uncontrolled, heterogeneous data collection methods, respectively. He noted that academic medical centers must be prepared for changes in the health information economy regarding access to personal health records.

Dr. Kohane noted that information gained from genetic tests may or may not be useful to clinicians providing patient care if it does not provide reliable information about risk and follow-up for patients. However, such information, along with the associated follow-up medical testing and incidental findings, is of great interest to insurance companies.

Dr. Kohane challenged the group to reconsider the assumption that medical interventions must be validated by a randomized controlled trial. He urged researchers to challenge the standard clinical research modus operandi and seek ways to combine data sets (even those that are apparently not interoperable), thereby efficiently asking questions and finding useful answers. He related this topic to high throughput genotyping and provided an example of a research study that used genetic variants and clinical factors to predict responsiveness in an emergency department asthma study.

Dr. Kohane concluded by advising researchers to monitor the information that is readily available, and to use it to make decisions that can be implemented in healthcare systems. In summary, he recommended lightweight integration, participation by early adopters and early examination of research results for researchers and nimble ecosystems for institutions in order to inform and enhance public health.

CASE STUDIES 3: REDUCING BARRIERS TO RESEARCH

Moderators:

John Hickner, MD, MSc, Professor and Vice Chair of Family Medicine, University of Chicago

Loretta Jones focused on the issue of relationship-building for research. She noted that there are three kinds of community engagement that do not seem to be addressed: 1) personal engagement, or one-on-one individual engagement; 2) group engagement, or working with a group of clinics; and 3) engagement of the university bureaucracy, or university-community relationships.

She noted that when she looked at the issues addressed in the case studies, the red flag she saw was that the significance of building and sustaining relationships was missing. She suggested that investigators begin the development of research relationships at the earliest stage of a project.

Dr. Carol Dukes-Hamilton addressed barriers and challenges that the Tuberculosis (TB) clinical trials encountered in working with health departments and public health clinics. Sites that were successfully enrolling patients and sites that were not were evaluated, and focus groups were conducted to identify areas of success and failure and to develop interventions. Not surprisingly, they found that clinic sites with high patient enrollment had good communication between the clinic staff and the study coordinator. Study coordinators were active in explaining the importance and purpose of the project and the enrollment process, and the clinic staff understood the project and felt that they were part of the research team. In the low-enrolling sites, they found that the clinic nurses lacked knowledge about the purpose and importance of the study, there was little motivation to enroll patients, and the staff did not feel like part of a research team. The intervention they developed was to have the study coordinators call and talk frequently with the clinic nurses and work toward helping them feel part of the research team. Over time, they found that this approach was successful, and patient enrollment at these sites greatly improved.

Dr. Dukes-Hamilton also spoke about clinics in which the public health leadership was not supportive of the TB research efforts. She noted that this was a barrier they were not able to overcome; they could not effect change with the leadership in this regard.

Dr. Stephen Johnson spoke about his project, which examined the workflow of clinical research coordinators in the community practice setting. He classified the components of the clinical coordinator's work as micro-level, which involves the tasks they carry out to implement clinical research, and macro-level, which relates to organizational structures.

At the micro-level, the nature of the work requires involvement with multiple stakeholders (patients, sponsors, investigators, clinicians not involved in research, community leaders, etc.). Coordinators use multiple media and artifacts (fax, phone, computers, in-person meetings, paper) rather than a single input or output channel. Another important characteristic of their work environment is that it is highly interruptive: tasks, once initiated, can be repeatedly interrupted before completion. The information required to conduct the research process is scattered across the different media, artifacts, and people involved in the process.

At the macro level, organizationally, there is a tremendous cultural split between patient care and research: two different cultures coexist in the same space, involving some of the same people, yet their activities remain completely unintegrated. There is competition for very limited resources and a lack of infrastructure to get the work done. The organizational (macro) structures create the work processes and the coordinator role, which carries much responsibility but no authority to affect the process. Dr. Johnson noted that the information technology available in this environment is, for the most part, focused on collecting data rather than supporting processes, and it frequently adds to the complexity, as coordinators often work with multiple studies and separate, unintegrated computer systems.

Dr. Johnson's project developed a basic software system that tracked completed research tasks and clinic visits so that sites could be reimbursed more efficiently for their work. This intervention proved successful: the workflow and morale of the research coordinators greatly improved. Dr. Johnson concluded by stating that the process of diffusing technology into practice involves aligning work activities with the organizational structure.

Dr. Morris presented comments on his project's focus: the clinician-patient encounter, the process of clinician decision making, and the tools that enable clinical trials to be done well and their results to be moved into practice. His project used electronic protocols to provide decision support for the management of blood glucose and intravenous insulin in critically ill intensive care patients. The team demonstrated that the tool produced replicable behavior in multiple institutions with different cultural environments, and this work was replicated in the treatment of both adults and children. The tools have also been reused in the broader clinical practice environment. The protocol adapts to changing patient states and is accepted by clinicians.

In the discussion period, Loretta Jones noted that issues related to teaching communities about the process of obtaining IRB clearance must be addressed by universities if they are going to have community co-investigators involved in their research projects. She gave examples of highly effective methods she uses in her work with communities. The panelists and audience provided comments and opinions on issues related to central IRBs and engaging IRBs as stakeholders.

Dr. Myers began his remarks by instructing the panel to focus their thinking on how clinical research networks actually transform clinical practice within the community. He stated that translational research means asking the right questions to get the right answers, answers that change the way clinicians practice and improve the care they provide to their patients and the larger community. If we are to improve the value of health care and health services research, we need to improve the ways we translate research findings into clinical practice. Dr. Myers posed the following questions for the panel: What kinds of research techniques and expertise are needed to do translational research? What is the responsibility of a clinical research network, and of the researchers who work in it, in considering translational needs along the continuum of the research they do?

Dr. Baty spoke of his involvement in the Michigan Clinical Research Collaboratory (MCRC) and described the initial efforts that he and the research technical team made to collect the patient data needed to develop an electronic medical record and data usable for targeting possible depression in patients.

The panelists responded to the questions posed by Dr. Myers, remarking on the lessons learned from their research projects and how these may add to the translational process.

Nancy Dianis described the work of the Inventory and Evaluation of Clinical Research Networks (IECRN) project conducted by Westat. An important part of the IECRN project was to examine network operational issues related to management, governance, and regulatory matters, and to identify the barriers to, and facilitators of, successful research network operations.

Dr. Williams spoke of the partnership formed among the research community, the clinician community, and the patient community, and how this relationship is needed in the translational process of returning results to the community. He identified what he viewed as pragmatic and ethical reasons. The pragmatic reasons related to the voluntary nature of participation by both clinicians and patients and the need to give them some benefit from their participation. The ethical reasons related to researchers' conduct of research in the community: it is increasingly less acceptable for researchers to engage in research simply to obtain data.

Dr. Harrington described an informational "Lessons Learned" process conducted at the completion of clinical studies. He also described the involvement of clinician specialty groups in critiquing the protocol and discussing its application in the practice setting.

Sarah Green spoke of the alignment needed between the clinical goals and the research goals and finding ways to bridge these two cultures.

Dr. Myers asked the panelists to share their experiences in trying to connect research questions as a network with the interest of a practice group for the purpose of quality improvement.

Dr. Williams remarked on the fine line between quality improvement work and effectiveness research. He suggested that research projects targeting topics relevant to clinicians and the community have a higher likelihood of being accepted if successful. It was noted, however, that this approach can create problems for research generalizability.

Dr. Harrington noted that identifying the research questions or knowledge gaps that need further research may be more readily facilitated by large practice-based registries, such as the national heart catheterization database maintained by the American College of Cardiology. Such registries provide the opportunity to determine whether a clinical trial is needed to follow up an observational finding.

The panel session concluded with an interchange of comments between the panelists and audience on issues related to dissemination and translation of knowledge into practice, cultural change, and better integration of research into clinical practice.

CLOSING REMARKS FROM DR. BARBARA ALVING

Dr. Alving concluded the meeting by summarizing the contributions made by the speakers, moderators, and panelists and by thanking all who participated in this event. Dr. Alving encouraged the Roadmap Clinical Research Investigators to continue their efforts through the publication of their work.