As far as timing goes, CMS said it disagrees that the API functionality cannot be implemented successfully by 2018 “as the technology is already in widespread use in other industries and API functions already exist in the health IT industry.”

All of this should be a boon for the FHIR (Fast Healthcare Interoperability Resources) standard development community and the Argonaut Project, which is working on API-related standards, as well as for the broader community of mobile app and personal health record developers. With barriers to patient access to their data coming down, patients will finally be able to create their own portals, separate from any health system, and share that data with whomever they want.

This is good news for everyone.

If we truly want to solve problems that require access to information where and how we need it, we must provide interoperability. This means not only that the data must be available in a format that is understood and supported by common applications, but also that the method of discovering and accessing that data must be understood and supported.

FHIR® (clinical data) is built on the right web technologies and design methods, as is DICOMweb™ (imaging data). With these APIs, we can discover and access the necessary patient information and make it available in any care setting we need.
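To make this concrete: both FHIR and DICOMweb are ordinary HTTP APIs, so discovery and access come down to well-known URL patterns and JSON payloads. A minimal sketch follows; the base URLs are placeholders, the helper functions are illustrative, and the Patient fragment is shaped like (not copied from) a real FHIR resource.

```python
import json

# Placeholder endpoints -- any conformant server would work the same way.
FHIR_BASE = "https://fhir.example.org"
DICOMWEB_BASE = "https://pacs.example.org/dicomweb"

def fhir_patient_search_url(family: str, birthdate: str) -> str:
    """FHIR RESTful search: GET [base]/Patient?family=...&birthdate=..."""
    return f"{FHIR_BASE}/Patient?family={family}&birthdate={birthdate}"

def qido_studies_url(patient_id: str) -> str:
    """DICOMweb QIDO-RS study query: GET [base]/studies?PatientID=..."""
    return f"{DICOMWEB_BASE}/studies?PatientID={patient_id}"

# A FHIR resource is plain JSON, so any web or mobile stack can consume it.
patient = json.loads("""
{"resourceType": "Patient",
 "name": [{"family": "Chalmers", "given": ["Peter"]}],
 "birthDate": "1974-12-25"}
""")
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
```

The point is that nothing here requires healthcare-specific tooling: standard HTTP clients and JSON parsers, already ubiquitous in web and mobile development, are enough.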

And these APIs will create the foundation of data liquidity to spark an explosion of innovation of applications—including traditional departmental and enterprise ones, but also web and mobile ones.

Without clearly defined, supported, and accessible APIs, we in healthcare had no hope of achieving the kind of system-wide change required. We have no more excuses now.

2 thoughts on “Article – First Look at Stage 3: CMS Sticks to Its Guns on APIs, Patient Engagement”

It’s great that they are pushing APIs; however, any API could technically meet this requirement today. I’m sure FHIR/DICOMweb will be the ultimate winners in this space, but my recent experience working with healthcare institutions shows that:

1. The information institutions will share will be the bare minimum. If CMS does not state what information must be shareable, then, much like the 60% imaging requirement, it is very difficult to get institutions to commit to more than CMS recommends. This creates a lower-quality clinical experience, as it doesn’t tell the whole story. Hopefully CMS will come back with more detail on the exact data elements that need to be accessible through APIs, and ensure that 100% of certain types of data are shared.

2. Without standards for the way data is documented, we risk releasing information that, while structured properly to the API standard, may not be correctly interpreted by the third party because of how that information was documented (hand typing, mismatched proprietary descriptions, vague definitions of content without supporting information, etc.). While ICD-10 is helping with some of that, I think standardization of content across organizations needs to be an important focus going forward, to ensure that the data we are sharing is consumable, and not just accessible.

3. Vendors’ implementations of API standards are highly variable in experience and performance. It is just like a modality vendor implementing DICOM for the first time: nothing says they have a functional DICOM interface, just that they have one. In most RFP processes I see customers asking for DICOM conformance statements that are self-authored by the vendor and do not receive the scrutiny necessary to ensure that the vendor delivering the API actually conforms to the standard, or has been tested against it. IHE is helping by taking FHIR and DICOMweb to the Connectathons, and customers can ask for an IHE integration statement, but there is currently no standard for ensuring that a vendor provides adequate performance and can handle this new query-based, “pull” API world. This is especially true in the DICOM world, where the metadata requested through these APIs may not be in the PACS database; you can query that API all you want, but it may time out before returning data, and even when it does return, the result may be incomplete. Obviously not all systems are created equal, but there is something to be said for choosing API providers that can deliver versus those that merely make claims.
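The completeness concern in point 3 can be checked mechanically. A QIDO-RS response returns each matching study as a JSON object keyed by DICOM tag, so a client can verify that the attributes it depends on actually came back rather than trusting a self-authored conformance statement. A sketch, with an illustrative (not normative) required-tag list:

```python
# QIDO-RS encodes each attribute as {"tag": {"vr": ..., "Value": [...]}}.
# These three tags are real DICOM attributes; treating them as "required"
# is an assumption for the sake of the example.
REQUIRED_TAGS = {
    "0020000D": "StudyInstanceUID",
    "00080020": "StudyDate",
    "00100020": "PatientID",
}

def missing_attributes(study: dict) -> list:
    """Return names of required attributes that are absent or empty."""
    return [name for tag, name in REQUIRED_TAGS.items()
            if not study.get(tag, {}).get("Value")]

# A response missing StudyDate would be flagged instead of silently accepted:
study = {
    "0020000D": {"vr": "UI", "Value": ["1.2.840.113619.2.1"]},
    "00100020": {"vr": "LO", "Value": ["PAT001"]},
}
```

Running this kind of check during a pilot, against real PACS data, surfaces the empty-metadata problem before it reaches production.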

Just sharing a few thoughts. This new development from CMS is a great step forward, and these concerns can be mitigated with the right approach.

You are correct: an API and protocol alone are not sufficient. The content has to be defined and, ideally, coded to standard items (e.g., values from a defined ontology).
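A toy illustration of the structured-versus-coded distinction: a field can be well-formed JSON yet still carry uncoded free text. Binding it to a value set makes it machine-interpretable. The two LOINC codes below are real; the tiny value set and checker are purely illustrative.

```python
# Illustrative value set: two real LOINC codes for vital-sign observations.
VITAL_SIGN_VALUE_SET = {
    "29463-7": "Body weight",
    "8302-2": "Body height",
}

def is_coded(entry: dict) -> bool:
    """Accept only entries coded from the agreed ontology and value set."""
    return (entry.get("system") == "http://loinc.org"
            and entry.get("code") in VITAL_SIGN_VALUE_SET)

coded = {"system": "http://loinc.org", "code": "29463-7"}
free_text = {"text": "pt weight approx 70kg"}  # structured JSON, but not coded
```

The second entry would pass many API-level validations while remaining useless to a downstream system, which is exactly the gap between accessible and consumable.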

Assessing transaction performance and gaps in content provided by an API (including DICOM) has always been a challenge. One can design clients that validate the returned data, and even measure and log the request/response times. Pilots are a good way to vet actual integration issues, especially with newly defined APIs and newly developed application implementations.
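The validate-and-log idea above can be sketched in a few lines: wrap any fetch callable, time it, and run a content check over the result. The fetch function and validator here are stand-ins for a real HTTP client and a real content rule.

```python
import time

def timed_fetch(fetch, validate, log: list):
    """Call fetch(), measure elapsed time, validate the result, and log both."""
    start = time.perf_counter()
    result = fetch()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    log.append({"elapsed_ms": elapsed_ms, "valid": validate(result)})
    return result

# Stand-in for an API call returning a FHIR-shaped resource:
log = []
result = timed_fetch(
    fetch=lambda: {"resourceType": "Patient", "id": "123"},
    validate=lambda r: r.get("resourceType") == "Patient",
    log=log,
)
```

Collected over a pilot, such logs give concrete evidence of an API provider's performance and completeness instead of vendor claims.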

I think that when dealing with large-scale problems like healthcare information interoperability, you need to tackle fundamentals and work your way up the ladder. So: get everything digital, then get as much of it as possible managed by proven applications with staff who can support them, then make sure there is a usable API, and then focus on content quality (consistency, completeness, etc.). Most healthcare organizations are somewhere between steps 2 and 3 for patient records.