CP Split™ refers to the unique way our disruptive technology splits content from presentation to simplify, speed and secure information transmission and reporting. We discuss how this technology works to support an interoperable architecture that saves money, time and hassle, while promoting collaborative knowledge-building and decision-making.

Step 1: A provider, through the node's UI, requests patient data from other providers in order to create a composite EHR. The provider enters the patient’s identification data and the nature of the data requested (e.g., episode-of-care time frames and data related to particular types of health problems). The node then sends the request, via a desktop e-mail client, Web services (SOAP, RESTful), or other means, to the nodes of other providers to which it subscribes who also treat the patient. The node sending the request therefore takes the subscriber role and the other nodes take publisher roles.
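The request in Step 1 might be structured as sketched below. The field names and the `build_patient_data_request` helper are illustrative assumptions; the text does not specify the CP Split message schema.

```python
import json

def build_patient_data_request(patient_id, time_frame, problem_types, subscriber_node_id):
    """Build a hypothetical subscriber request message.

    All field names here are illustrative; the actual CP Split
    request schema is not specified in the text.
    """
    return {
        "subscriber_node": subscriber_node_id,       # identifies the requesting node
        "patient_id": patient_id,                    # patient identification data
        "episode_time_frame": time_frame,            # episode-of-care time frame
        "problem_types": problem_types,              # targeted health problems
        "response_format": "content-file",           # requested data format
    }

request = build_patient_data_request(
    "patient-123",
    {"from": "2023-01-01", "to": "2023-12-31"},
    ["diabetes", "hypertension"],
    "node-A",
)
# Serialize for transport (e-mail, SOAP, REST, or other means).
payload = json.dumps(request)
```

The same serialized payload could travel over any of the transports the text mentions, since the message content is independent of the channel.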

Step 2: When the publishers’ nodes receive the request, their publisher functionality is automatically invoked, which validates the request to verify it came from an authorized subscriber node. The publisher nodes then query their databases to obtain the requested patient data.
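Step 2's validate-then-query behavior can be sketched as follows. The in-memory subscriber registry and record store are stand-ins for a real authorization mechanism and database, and all names are assumptions.

```python
# Hypothetical registry of subscriber nodes this publisher accepts.
AUTHORIZED_SUBSCRIBERS = {"node-A", "node-B"}

# Stand-in for the publisher's local patient database,
# keyed by (patient_id, problem_type).
PATIENT_DB = {
    ("patient-123", "diabetes"): [{"date": "2023-03-04", "a1c": 7.2}],
}

def handle_request(request):
    """Validate the requester, then query local records (a sketch)."""
    # Reject requests that did not come from an authorized subscriber node.
    if request["subscriber_node"] not in AUTHORIZED_SUBSCRIBERS:
        raise PermissionError("unauthorized subscriber node")
    # Query local records matching the requested problem types.
    results = []
    for problem in request["problem_types"]:
        results.extend(PATIENT_DB.get((request["patient_id"], problem), []))
    return results

records = handle_request({
    "subscriber_node": "node-A",
    "patient_id": "patient-123",
    "problem_types": ["diabetes"],
})
```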

Step 3: Depending on the data format requirements of the subscriber node, which are specified by instructions in the request, each publisher node automatically converts/transforms the patient data into the required format and semantics.
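The per-request format conversion in Step 3 might look like the sketch below. Only two toy formats are shown; real nodes would presumably support standard healthcare formats, which are not modeled here.

```python
import json

def to_content(record, target_format):
    """Convert a record into the subscriber's requested format.

    A minimal sketch with two illustrative formats; the formats a
    real CP Split node supports are an assumption.
    """
    if target_format == "json":
        return json.dumps(record)
    if target_format == "csv":
        # One header row and one value row, columns in sorted key order.
        keys = sorted(record)
        return ",".join(keys) + "\n" + ",".join(str(record[k]) for k in keys)
    raise ValueError(f"unsupported format: {target_format}")

as_json = to_content({"date": "2023-03-04", "a1c": 7.2}, "json")
as_csv = to_content({"date": "2023-03-04", "a1c": 7.2}, "csv")
```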

Step 4: Each publisher node packages the converted data into Content Files and encrypts them.

Step 5: The publisher nodes then automatically send the Content Files to the subscriber.

Step 6: Upon receipt of the Content Files, the subscriber node’s subscriber functionality is automatically invoked, which begins by verifying that the received payload is from authorized nodes, followed by Content File decryption. The CP Split process then automatically builds an EHR by incorporating the contents of each Content File received. Since the files may arrive at different times with different delays, the EHR is generated piece by piece, and the node keeps track of what data have been received, who sent them, and what data have not yet arrived. When all the data have been received and processed, the node alerts the provider that the composite EHR is complete.
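The piece-by-piece assembly and tracking described in Step 6 can be sketched as a small class. Sender verification and decryption are omitted here; the class and method names are assumptions.

```python
class CompositeEHR:
    """Track Content Files arriving asynchronously from publisher nodes.

    A minimal sketch: a real node would also verify each sender's
    authorization and decrypt each Content File before incorporating it.
    """

    def __init__(self, expected_senders):
        self.pending = set(expected_senders)   # nodes not yet heard from
        self.sections = {}                     # sender -> received content

    def receive(self, sender, content):
        """Incorporate one Content File; return True when the EHR is complete."""
        if sender in self.pending:
            self.pending.discard(sender)
            self.sections[sender] = content
        return self.is_complete()

    def is_complete(self):
        return not self.pending

ehr = CompositeEHR({"node-B", "node-C"})
first_done = ehr.receive("node-B", {"labs": ["a1c 7.2"]})    # still waiting on node-C
second_done = ehr.receive("node-C", {"meds": ["metformin"]})  # all pieces arrived
```

When `receive` first returns `True`, the node would alert the provider that the composite EHR is complete.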

Step 7: The provider has the option of having the node automatically save the EHR as a file on his/her hard drive and/or send the EHR contents to his/her local database for storage, either as a blob or as separate pieces of data.
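The two storage options in Step 7, a single file/blob versus separate pieces in a local database, might be sketched as below. The file name, table layout, and use of SQLite are assumptions for illustration.

```python
import json
import pathlib
import sqlite3
import tempfile

def save_ehr(ehr_sections, directory):
    """Save the composite EHR both as one blob file and as separate rows.

    A sketch only: the storage schema a real node uses is an assumption.
    """
    directory = pathlib.Path(directory)

    # Option 1: the whole EHR as a single file (blob) on disk.
    blob_path = directory / "composite_ehr.json"
    blob_path.write_text(json.dumps(ehr_sections))

    # Option 2: each section as a separate row in a local database.
    db = sqlite3.connect(directory / "ehr.db")
    db.execute("CREATE TABLE IF NOT EXISTS ehr_sections (sender TEXT, content TEXT)")
    db.executemany(
        "INSERT INTO ehr_sections VALUES (?, ?)",
        [(sender, json.dumps(content)) for sender, content in ehr_sections.items()],
    )
    db.commit()
    db.close()
    return blob_path

with tempfile.TemporaryDirectory() as tmp:
    saved = save_ehr({"node-B": {"labs": ["a1c 7.2"]}}, tmp)
    blob = json.loads(saved.read_text())
```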

Step 8: If decision support tools are in use, the node, in response to the provider’s instructions, automatically sends the EHR data to those tools via their APIs and launches them.

Step 9: The node then presents (renders) the EHR for screen viewing and/or printing via its presentation functionality. The provider may also modify, annotate, or add to the EHR’s contents and have those changes saved to his/her local hard drive or database.
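Step 9 reflects the core content/presentation split: the same EHR content can be rendered differently for screen viewing or printing by swapping the presentation layer. The sketch below illustrates the idea with two toy presentation functions; all names are assumptions.

```python
def render(content, presentation):
    """Render EHR content through a separate, swappable presentation layer."""
    return presentation(content)

def screen_presentation(content):
    # Hypothetical on-screen rendering as an HTML table.
    rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in content.items())
    return f"<table>{rows}</table>"

def print_presentation(content):
    # Hypothetical print rendering as plain text lines.
    return "\n".join(f"{k}: {v}" for k, v in content.items())

content = {"Patient": "patient-123", "A1c": 7.2}
html = render(content, screen_presentation)   # same content, screen format
text = render(content, print_presentation)    # same content, print format
```

Because content and presentation are decoupled, provider annotations (Step 9) modify only the content; every presentation of it stays in sync.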

The graphic below depicts how the nodes submit data to and access data from national data warehouses for biosurveillance, research and curriculum development.


Step 1: As patient data are entered into the provider organization’s database, the node scans the data for certain targeted information, as defined in a data-scan file.

Step 2: When targeted data are found, the node’s publisher functionality is automatically invoked and sends those data, stripped of identifiers, to a national biosurveillance data warehouse and/or to a national research data warehouse.
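The scan-and-publish behavior of Steps 1 and 2 can be sketched as below. The targeted codes stand in for the data-scan file, and the identifier list is a simplification; real de-identification would follow a regulatory standard not modeled here.

```python
# Stand-in for the data-scan file's targeted information.
TARGETED_CODES = {"influenza", "measles"}

# Simplified list of identifier fields to strip before publishing.
IDENTIFIER_FIELDS = {"patient_id", "name", "ssn", "address"}

def scan_and_publish(record):
    """Return a de-identified copy of a record if it contains targeted data.

    A sketch: real de-identification would follow a standard such as
    HIPAA Safe Harbor, which this illustration does not implement.
    """
    if record.get("diagnosis") not in TARGETED_CODES:
        return None  # nothing targeted; nothing to publish
    # Strip identifiers, keeping only the clinically relevant fields.
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

published = scan_and_publish({
    "patient_id": "patient-123",
    "name": "Jane Doe",
    "diagnosis": "influenza",
    "onset": "2024-02-10",
    "zip3": "021",
})
```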

Step 3: Data sent to the national biosurveillance data warehouse are retrieved automatically by the warehouse node’s subscriber functionality. The node then automatically stores them in the warehouse.

Step 4: After the data are analyzed, the node may publish significant findings to other nodes, e.g., to inform government officials and providers of an apparent outbreak.

Step 5: Data in the warehouse may also be accessed directly through queries by the nodes of authorized persons.

Step 6: Data sent to the national research data warehouse are retrieved automatically by its node’s subscriber functionality. The node then automatically stores the data in the warehouse.

Step 7: After the data are analyzed, the node may be instructed to use its publisher functionality to send results to other nodes; e.g., a researcher might request to be notified when new data concerning a particular illness have been added to the warehouse.

Step 8: As with the biosurveillance warehouse, the data in the research warehouse may also be accessed directly through queries by the nodes of authorized persons.

Step 9: These data may, for example, be used to support development of clinical guidelines/pathways, for post-market drug surveillance, and for curriculum development. If so, the node may be used to disseminate this information to other nodes.

The processes described above lay out use cases in which providers in the “field” (i.e., treating patients in their practices) and researchers in the lab send diagnostic, treatment and outcomes data — stripped of patient identifiers — to central data warehouses where the aggregated data are available for analysis by research organizations and by professors and their students in universities.

These research results, along with replication studies, would be reviewed and interpreted by objective (non-profit, independent) organizations, and any valid findings would be incorporated into decision support tools by software vendors. Providers, patients and others would be able to use these tools to help them make healthcare decisions.

Outcomes and variance data, which reflect the results/efficacy of these decision tools and providers’ willingness to follow the recommendations/guidelines, would also be fed into the data warehouses. The participating researchers, educators and providers would be able to access, analyze and discuss these data. They could also develop evidence-based decision models, which software vendors could incorporate into their decision-support tools. In addition, collaboration within and between organizations would be facilitated so that stakeholders could exchange the models, discuss and evaluate the assumptions built into the models, and assess the models’ efficacy; this process would lead to modifications that improve decisions and outcomes over time. The data could also be used for biosurveillance. See the diagram below for a simplified depiction of this process.