Monday, February 28, 2011

I started a discussion on building blocks last week. There are still some ideas kicking around that need more exploring around the concept of Patient Discovery. The concept is important enough that the NwHIN Specification Factory devoted a specification (pdf) to it.

That specification makes some assumptions:

The initiating NHIO has identified other NHIOs likely to hold a specific patient’s information. Patient Discovery requests are not intended to be broadcast across the NHIN.

The initiating NHIO provides patient-specific demographic data for use by the responding NHIO to evaluate whether the patient is known to the responding NHIO.

If the responding NHIO makes a match, it provides the set of demographics locally associated with the matched person to the initiating NHIO, which can either trust the candidate match or use the returned patient demographics to evaluate it.

The specification allows NHIOs to independently make identity correlations based on their own algorithm and not that of another NHIO.

The assumptions become the basis for creating a specification allowing the discovery of patient data at other NHIOs (I'll use their acronym; just think of it as an HIE). But assumption number 1 is problematic. It assumes that there is an easy way to identify the other NHIOs, and there probably is, but the way to go about it isn't described.

Let me recap what we know:
Required: Patient Demographics, Name, Address, Gender and Birthdate
Highly Likely: Other Patient Identifiers
Possible: NwHIN Direct PHR address for the patient or their healthcare provider, or other identifiers for the same (e.g., from an NHIO Patient Identity Card).

This is presumed to be known by the "Initiating NHIO", and so can be used in that first step of identifying other NHIOs that are likely to hold the patient's information.

Let us assume that there is a "Home NHIO" for the patient, and that this Home NHIO has several responsibilities. One of these responsibilities is to ensure that there is an XCPD end-point that can be used for notifications and queries. Another is to maintain a registration record in the NwHIN UDDI registry that points to that end-point. (There's a spec [pdf] for that also). There's one more responsibility, but we'll get to that in a minute.

There is one key operation: locating the Home NHIO:
If a Patient Identifier is specified and that is an NHIO Patient Identity, obtain the Home NHIO identity from the NHIO Patient Identity. That could be algorithmic, or printed as one of the many identifiers on an NHIO Patient Identity Card. This could even be reported as the NHIE Home Community ID. Prescription drug benefit cards already contain an identifier that indicates where to send the drug benefit transactions. We could do the same for NHIO Patient Identities.

If a Patient Identifier is specified as an NwHIN Direct address, use the health domain name as the NHIO Identifier.

Look up the NHIE XCPD endpoint using the NHIO Identity. This is where you would send the XCPD query and notification messages.

Once you have the Home NHIE(s) (there could be several), perform the XCPD query against them.

Payers could get into the act too. All they'd need to do would be to expose an XCPD end-point, and register it in the NwHIN UDDI registry.

What if you didn't have an NHIO Identifier for the patient? Well, how about trying to use the patient's address to resolve the possible NHIOs? That could work. All you'd need is the State. Of course if they lived in California or New York, you might still have 10-20 NHIOs to deal with, but you might be able to resolve that better if you have some regional data to work with, and that is still much better than 200-300 NHIOs to query.
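The lookup steps above can be sketched as follows. This is only a sketch; the function and registry shapes are illustrative, not from any NwHIN specification:

```javascript
// Derive candidate Home NHIO identifiers from what we know about the
// patient, using the most specific source first.
function candidateHomeNhios(patient, registry) {
  // 1. An NHIO-issued patient identity carries its issuer: obtain the
  //    Home NHIO identity from the NHIO Patient Identity itself.
  if (patient.nhioPatientId) {
    return [patient.nhioPatientId.homeCommunityId];
  }
  // 2. An NwHIN Direct address: the health domain name serves as the
  //    NHIO identifier.
  if (patient.directAddress) {
    return [patient.directAddress.split('@')[1]];
  }
  // 3. Fall back to geography: every NHIO registered for the patient's
  //    state (narrowed by regional data where the registry supports it).
  return registry.byState(patient.address.state);
}
```

With a candidate list in hand, the next step is the UDDI lookup of each NHIE's XCPD endpoint, and then the XCPD query itself.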

A couple of points of protocol are needed:

If you aren't the Home NHIO for the patient, yet records get stored in your NHIE, notify the appropriate Home NHIO(s) of the update so that they can keep up with who has records for the patient.

If you are the Home NHIO, maintain a record of the Home Community ID for the notifications you've received, and return them as appropriate to queriers of information.
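Under the hood, the Home NHIO's side of this protocol amounts to record-location bookkeeping. A minimal sketch (the names are illustrative, and the real exchange would be XCPD messaging rather than an in-memory map):

```javascript
// A Home NHIO remembers which home community IDs hold records for each
// patient, based on notifications from other NHIEs, and returns them
// to queriers.
class HomeNhio {
  constructor() {
    this.recordLocations = new Map(); // patientId -> Set of community IDs
  }
  // Called when another NHIE tells us it has stored records for a patient.
  notify(patientId, homeCommunityId) {
    if (!this.recordLocations.has(patientId)) {
      this.recordLocations.set(patientId, new Set());
    }
    this.recordLocations.get(patientId).add(homeCommunityId);
  }
  // Answer a discovery query: where does this patient have records?
  locations(patientId) {
    return [...(this.recordLocations.get(patientId) || [])];
  }
}
```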

So, now we need to update the NwHIN UDDI specification to support storing end-points for the XCPD service. Wait, that's already done. Ok, so we need to assign identifiers to home communities. Nope, that's done too. So, what else do we need to do?

Last night I finished re-reading Carpe Diem, which is a Science Fiction Action/Adventure/Romance novel. That's quite a combination of topics to put into one book. You might even wonder where to find it in a book store, but the answer to that is easy. You look under Science Fiction, because that is the principal classification used to find it.

The idea of a "principal classification" is what is behind the XDS Document Class code metadata attribute. Not too long ago, on a call that I was on, a question came up about how to code the document type for a CDA document. The questioner also wanted to understand the relationship of document types to the XDS Document Class Code.

There are a couple of principles for classifying clinical documents (CDA) that the HL7 Structured Documents workgroup and IHE have come to generally agree upon.

The preferred coding system for classification is LOINC®. HL7's Structured Documents workgroup picked LOINC in CDA Release 1.0, and has been working with LOINC for several years on document classification. IHE sticks with the HL7 recommendation.

Precoordinated codes are NOT the preferred way of coding a document. That's because CDA contains a number of different components in the CDA Header that can be used to classify a clinical document: among them, the type of service performed, the role and specialty of the author, and the type of facility where care took place.

Composing all of these into a document type code might be OK if you could navigate all these facets of the classification system in LOINC, but you cannot do that at the moment. A perfect example of this is to look at the Attending Physician Hospital Admission History and Physical Note. That's a type of initial assessment and also a type of H&P note (two different items higher up in the hierarchy of services performed), performed by a physician in the attending role at a hospital. Now, what if I wanted to use that same document specification to admit a patient to a long term care facility? I wouldn't be able to because of the specificity of the document type.

HL7 tends to pick a preferred code and allow the others with guidance that the other facets must not disagree. For example, you cannot use the Attending Physician Hospital Admission H&P and then show that it was authored by a nurse, or that the healthcare facility was something other than a hospital. IHE just picks the one least restrictive code to use and avoids the issue altogether. That code in LOINC most commonly refers to the type of service only (e.g., Admission H&P).

If you collect up all the types of services and make a short list of services, say less than 50 or so items long, then you have what IHE called document class. That way, types could be detailed and could be used to support more specific searches, but document class would support the "drop-down" list of things to query for, e.g., ED records, L&D records, Lab reports, imaging studies, H&P notes, consult notes, procedure notes, et cetera.

The question comes up, then, of how one could locate the {facility type} {specialty} {service} note using XDS. Actually, the question was more detailed than that; I've just made it into a meta-question for illustration. In XDS you have all of these different facets in the metadata, and you can combine them either directly in your query, or in filtering steps after the query is finished to show only the relevant data. Combining class with the author specialty or facility type or other criteria could get you to the Cardiology Procedure note, or ophthalmology imaging study.
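Post-query filtering over those facets can be sketched like this. The field names are illustrative stand-ins for the XDS classCode, practiceSettingCode, and healthcareFacilityTypeCode attributes, not the real metadata slot names:

```javascript
// Filter a set of document entries returned by a query, keeping only
// those whose metadata facets match every criterion supplied.
function findDocuments(entries, criteria) {
  return entries.filter(e =>
    Object.entries(criteria).every(([facet, value]) => e[facet] === value));
}
```

Combining class with specialty, for example, narrows the result to the Cardiology Procedure note: `findDocuments(results, { classCode: 'Procedure note', specialty: 'Cardiology' })`.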

Using those criteria, I can readily find documents based on the type of service performed, which seems to be the most critical classification axis, just like I can find more of that author's books in the Science Fiction section of the bookstore. Or if need be, I can search by classification, author, specialty, et cetera.

Saturday: I headed over to the booth around 8:00. It's still a NOTwork, instead of a Network, but most people are here. I start handing out fun ribbons, and decide that I'm going to be a rock star this week. After about four hours, I install the latest version of the software in 15 minutes, and I'm done. I hang out for another 4 hours before heading back to the hotel. Off to the EHRA Meeting and Dinner, where I and others receive recognition from the EHRA. Listened to Doug Fridsma talk about the S&I Framework. Wrote a post on the Year in Review for Doug.

Sunday: It's off to the Interoperability Workshop where I'm giving the spiel on IHE Building blocks for interoperability. Head back over to the showcase to see how things are going. It's all going well. Catch up with Joyce Sensmeier and Robin Raiford and we try to make plans for a photo shoot on the bike I'm going to rent this week. Then it's over to the GE Booth briefly and back to the showcase. Around 5, I head to the opening reception briefly, where I run into about 3 people who've read the blog, and a number of recovering HITSP-ites. Head off to the GE Kickoff dinner. Got the question on "What is a tweetup?" correct. Then it's back to the HIMSS reception to pick up the Sushi crowd for dinner at Seito Sushi. Excellent meal, great prices, great sake list, and fresh wasabi; at least 4 out of 5 on our ranking table.

Monday: Opening day. Meet up for breakfast with the CDS Consortium. Have an EHRA meeting, and then off to Eagle Rider to pick up my transport for the next two days. Use my tweetup question award to get some cheap eye protection. Back at the Interoperability Showcase booth by 10:30. Meetup with Ceasar Torres (@HIMSS on Twitter), who finds out I have a bike for a couple of days. He comes up with the idea of rolling it into the Social Media Center. Catch up with Arien Malec and we make plans to meet up later on S&I Framework stuff. Say hello to Doug again. Lose Joyce and Robin, so no photo shoot today. Tomorrow is about recognition, so I write a post on what David Tao has done in the Interoperability space.

Tuesday: A crazy day, packed with stuff. The day starts off with me learning that Ceasar made it happen via a tweet by Brian Ahier, and better yet, he's made the link to the missed photo op with Joyce on the bike. Then I'm off to an EHRA meeting with Arien Malec, from which we come out with some good ideas. In general, the whole notion of Direct is about simple messaging for simple uses as part of the glide path to EHR and HIE where you can do more complex things. Back to the showcase, where I finish lining up details with Ceasar. Dr. Blumenthal is at the Showcase, so it's a pretty packed place. I head over to find Sandy (who runs the showcase for HIMSS) and as I walk up, Farzad Mostashari reaches out to shake my hand and shouts "Motorcycle guy!" OK, I know he's listening at least. I shake hands with Dr. Blumenthal also and they head out. At 1:30 we roll the bike in, and quickly take shots of Joyce and Robin on it. The tweetup goes quite well. I catch up with Wes Rishel and we talk about greenCDA for about 10 minutes before he has to run off to another meeting. Then it's back over to the Social Media Center where I give a presentation on metrics and blogging. You can find more detailed information from that presentation here. Then it was off to Universal's Islands of Adventure for the GE Tweetup. We did the Spiderman ride and the Incredible Hulk. Some people even went back for a second run on the Hulk, which, after a day of having only Mountain Dew for lunch and coffee for breakfast, was too much for me.

Wednesday: Whoever thought it was a good idea to pack my calendar with back-to-back-to-back meetings at Hall E, Hall A, Hall E, Hall C, Hall A and back to Hall E needs to be instructed on how long it takes to walk a mile. At least GE fed me breakfast this morning before I dropped the bike back off at the rental center and headed back to the showcase for the Standards Geeks tweetup. Quite a bit going on today. Arien and I talk about some of his concerns on XCA and XCPD. The day ended with an EHRA meeting with the ONC-ATCBs and Carol Bean from ONC (she's certifiably cool). Not much to be said about the permanent certification program, but some good dialog on the ATCB program and as much as we know about stage 2. NIST is no longer the only one writing test procedures (Mitre is apparently working on Quality and there are others). ONC published what is needed to create test procedures in the Federal Register, so now I have to hunt down that document. I mention that we need some consistent labeling for the ONC-ATCB program so that physicians know what to look for (a message also conveyed to Doug at the EHRA dinner), and Carol acknowledges the miss. They'll be looking into that issue. I also point out to the ATCBs that they need to be engaged in the S&I Framework process, just like everyone else. QA does, after all, need to sign off on specifications, right? I raise that same point later to some folks at Deloitte who are coordinating the S&I Framework activities. Then it's off to another tweetup with @dirkstanley @unclenate @2healthguru @MatthewBrowning @jeffbrandt. I write this piece for Arien to address some of his concerns.

Thursday: I am so ready to go home. I get a few minutes' breather to post photos, and take some time to catch up with what's going on in the Direct, Exchange and Connect portions of the ONC presence in the showcase. I did a short interview with the HIMSS media folks on the showcase that may or may not get used (I need to work on that issue), then off to the airport. What should have been an uneventful trip home gets challenging. First, one of the multiply redundant components used for navigation goes down and we have to land at BWI to change planes. Fortunately, a replacement plane is ready for us, and we just get off, line up, and start getting back on in just a few minutes. Then, after I land in Boston and get into the cab to get home, someone T-bones it. Fortunately we were all moving at 5 mph, and all the damage is cosmetic, but it's another delay.

Friday: I think I'll take the rest of the day off. I've spent the last 11 days straight working, and the last 7 away from home. I need a brain break.

Thursday, February 24, 2011

On Sunday I taught a workshop that used a Lego theme to explain how IHE has the building blocks to create many different things. Of course, the images rolled around in my head all week, and I realized what's really right about that analogy. Anyone who is familiar with Lego Technic parts realizes that there are a limited number of different connector types, and that what really makes a design is NOT the connectors, but rather the shape of what the connectors are attached to.

The Chopper below illustrates the point. You can find several pieces that have similar connectors on them, but if the shape of the piece where the connectors are were different, well, it just wouldn't be as cool a chopper as this one.

Red Technic Chopper Motorcycle by Mike Cummings

But all IHE tells you is the shape of the connectors, not the shapes of the pieces they are attached to. That's what makes Lego (and IHE) so cool. The shapes of the connecting components are standardized, but the rest of the piece can be custom made to fit. Sometimes this makes it difficult to understand how to use an IHE profile, because it really is the shape of the pieces that inspires the use case and the profile, be it an EHR, HIE, RIS, PACS, Workstation, et cetera.

So what happens when we describe only the connectors is that sometimes the shape of the body to which they are attached is lost, or we don't understand how it works. That's what happened today while I was talking to Arien Malec about his concerns about the scalability of the NwHIN Exchange using IHE's Cross Community Architecture.

The XCA profile builds on the XDS query capabilities, and defines a transaction that looks almost exactly like an IHE XDS Stored Query transaction. The only difference is that it has one additional requirement regarding the home community identifier that must be included in the metadata. The whole point of this transaction is to support fan-out of an XDS query from a gateway to other HIEs that might have information on a patient.

So, imagine that there's an HIE in Florida and one in Boston, and I'm the patient in Orlando. I'm a visitor, and they need to get my records from Boston. So, they query the gateway in Orlando, the gateway queries the HIE in Boston, and voilà, they have my data. So far, so good. But the scalability problem that Arien is worried about has to do with the number of HIEs that query will need to fan out to in order to find me. If they don't know to ask in Boston, and we have 100 or more regional HIEs, that query could go to every HIE in the nation, and only 1 out of 100 would return any data for me. That's a LOT of wasted CPU bandwidth.

So, how does the XCA gateway in Orlando know who to ask? Enter into the picture the Cross Community Patient Discovery profile. This is to PIX/PDQ as XCA is to XDS. You can issue a query for patient demographics and get back a result listing the patient IDs and their home community IDs. So, the XCA gateway issues an XCPD query, and it returns the home community IDs where we know that patient has data.

This seems like a shell game, because we've just transferred the magic about figuring out who to fan out from XCA to XCPD, right? Not really.

One possible solution makes the HIE serving the patient's medical home responsible for keeping track of the care locations from which the patient received care, and makes all care locations that treat the patient responsible for telling that HIE to associate the patient with their home community ID. This is one of the possible implementation mechanisms that could be supported by XCPD. There are others, but I can't give away ALL the good stuff.

So, XCA without XCPD means potentially 100 queries to fan out, most of which would be expected to fail. With XCPD, there is one query to perform, and then, based on the results of that query, a query to each holder of records for that patient.
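The arithmetic is easy to sketch. The numbers and names here are illustrative; the point is the query count, not the protocol details:

```javascript
// Blind XCA fan-out: one document query per community in the nation.
function queriesWithoutXcpd(allCommunities) {
  return allCommunities.length;
}

// With XCPD: one patient-discovery query, then one document query per
// community that actually holds records for the patient.
function queriesWithXcpd(recordHolders) {
  return 1 + recordHolders.length;
}
```

With 100 regional HIEs and a patient whose records live in one of them, that's 100 queries without XCPD versus 2 with it.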

Wednesday, February 23, 2011

Healthcare Social Media is all the rage. While I'm not a healthcare professional, I do have a few tips to offer those who are thinking about reaching out with Social Media and Blogging. Earlier today I gave a short presentation at the Social Media Center at HIMSS. For those of you who missed it, here is a partial repeat of what I covered, with extras that I just couldn't fit in 30 minutes.

I gained a real appreciation for the use of metrics when the company that I worked for a decade ago decided that it was going to become ISO 9001 Certified, and also apply the Software Engineering Institute's Capability Maturity Model. So back when I started this blog, I immediately hooked it into Google Analytics.
Like all the tools I use, it is free. Google can tell you quite a bit about what people are reading, and you can watch the trends and link them to events to give you some clues. Here's a picture of the top part of the dashboard that shows page visit trends over time:

When I started this blog, I was writing a couple of times a month, then quickly moved to around two times a week. In June of 2009, about a year later, I hooked the blog up to Twitter using http://twitterfeed.com/. Every time I posted, the post now went to Twitter. Pretty quickly there was a small but noticeable bump up in readership. In July, and more so in August of that year, I made a concerted effort to post daily. It was hard at first, but I found ways to repurpose what I was doing for other things, which made it much easier. More content roughly doubled, and then tripled, readership.

Do you see that sharp spike in the middle of the chart? That was due to two posts, but mostly this one: Recognition. The Ad Hoc Harleys have since become some of my most popular posts, and that post broke prior single-day and single-week records by a clear margin. It was helped by a viral e-mail push. I hit as many e-mail lists as I could with that post, and at that time, that meant HL7, IHE and HITSP mailing lists. I don't do that often, but for really important stuff I do. At least 40 of Andrew's co-workers saw that post.

How do I know? Google told me. If I look at the stats on that particular post (View content by title), and then list by network Service Provider, I can often get a good deal of information about who is reading a particular post:

My own particular favorite sites that have hit my blog are these two:

I'm pretty sure I know what posts prompted these page views once I tracked down who works in this office, and the dates that they hit the blog. If you click on the individual service provider, it shows a graph of when the hits occur:

When you click on the Map Overlay button in the Overview report, you get a bigger map. Any colored areas are those where you have readers. My goal is to someday see this map filled in.

Many of the features of Google Analytics are also available in Blogger or Blogger In Draft, if you use that as your blogging platform. In Wisconsin, I have one hit in one particular city that I know belongs to John Moehrke before he turned scripting off in his browser. I know he reads me more often than that.

What Blogger does is a little bit different. It tracks hits at the server, which provides more accurate page counts but is somewhat less full-featured than Google Analytics. In my particular space, I'm seeing about 50% more hits through Blogger than Google Analytics can track. John gets different results because his readers are often more security conscious.

There's one little tidbit extra for Blogger users. When Google tracks the titles of your pages in Blogger, the root page of your blog is given just the blog title. So http://motorcycleguy.blogspot.com/ has the title "Healthcare Standards", but the page for a single post is "Healthcare Standards: Title of the Post". But I believe that most people hitting the root of the blog are there for the first post. So, I have a bit of script that changes the title of that page, which means that Google tracks the title of the root page differently depending upon which post was first up.

There are two pieces of script code you need to add to your page's HTML. In Blogger, you would go to Design | Edit HTML, and then click on the "Expand Widget Templates" checkbox.

The first piece goes in just after the body element, and declares a variable that gets used later.

The second comes right before the div using the post-header class. It checks to see whether the variable has already been changed, and if not, fixes up the title. Just remember to change "Healthcare Standards:" to the name of your blog.
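In outline, the two pieces look something like this (a simplified sketch of the idea; the flag and function names are illustrative, and in a real Blogger template each piece would live in its own script tag):

```javascript
// Piece 1, placed just after the body element: a flag so that the
// title only gets rewritten once.
var titleFixed = false;

// Piece 2, placed just before the post-header div: prepend the blog
// name (change "Healthcare Standards:" to yours) to the first post's
// title, so that Google Analytics reports the root page under that
// post's title. In the template this would set document.title; the
// document object is passed in here so the logic is testable.
function fixTitle(doc, postTitle) {
  if (!titleFixed) {
    doc.title = 'Healthcare Standards: ' + postTitle;
    titleFixed = true;
  }
  return doc.title;
}
```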

This will change the title that Google Analytics uses to report on your blog when you use the Content by Title View, but won't have any impact on how it reports the URL for the page the reader hit. That's a handy way to distinguish how they got to the page.

There's a ton more you can do with Google Analytics. Read the documentation and play around.

The best Search Engine Optimization tip that I've learned from it is that simple names for blogs and posts are the best. This blog is called Healthcare Standards. Guess what is in the top spot in Google for that query? Look also at the Meaningful Use Standards Summary. I learned this when I realized how well my post on Clinical Decision Support was doing over the course of a year. Now I consciously think about titles. Sometimes I'll forgo a clever title for an easy to find one.

One last free tool that I'll mention is Google Web Master Tools. That tool gives you great information about links to your blog, searches (most people find me using "motorcycle guy blog" as the query), and page rank.

Now I know you are all expecting a report out on the Interoperability Showcase, and I promise one tomorrow, but today I have other things in mind.

Saturday night I attended the EHR Association annual dinner at HIMSS for the first time (usually I'm completely tied up at the Interoperability Showcase, but this year I wasn't). This year the EHR Association recognized various of its members for some of their contributions to the association. As Carl Dvorak, chair of the association, put it, it seems a little bit like the high school club giving awards to its members. I was pleased to be recognized with an award for interoperability, along with several other association members recognized for work in interoperability and other areas (more on that in a later post).

It got me to thinking a little bit about what it means to give an award, because of course, I do give one from time to time, and the requirements are pretty obscure for how I award it. There's no set time line for how this award is given out, no nominations committee, and no unbiased judging. I admit to it being completely arbitrary, and of course, it really only comes with bragging rights and a pretty picture you can print out, unlike the EHRA awards, which you can hang on your wall.

If you haven't already guessed it, it's time for another Ad Hoc Harley. This particular award is going to someone who has been very quietly, but intelligently, speaking on a number of interoperability projects going on around our industry. Back in HITSP days, this person was my first choice for who to represent the Care Management and Health Records committee to the HITSP equivalent of the HL7 Architecture Review Board. Recently, in the CDA Consolidation work, he's taken quite a bit of data from IHE, HL7 and HITSP specifications and shared it in a way that is reminiscent of one of Robin's Eggs. In a roomful of loud, forceful and often argumentative speakers, this particular person remains calm, cool and collected; he quietly pokes the holes that need poking in our best laid plans, and just as quietly offers solutions.

His most recent works are a reflection of his harmonious and well thought out nature, which is a dead giveaway if you've been paying attention. For his efforts, I award the next Ad Hoc Harley to:

This certifies that

David Tao of Siemens

Has hereby been recognized for outstanding contributions to the forwarding of Healthcare Standardization

Congratulations David, welcome to the blogosphere, and I hope to hear more from you in HL7, IHE and the S&I Framework initiatives. We need more like you...

P.S. This is the second time I've given the award to what some might consider a competitor. If you walk down to the Interoperability Showcase in Hall E, you'll find that, unlike everywhere else this week, there are only collaborators in this venue.

Monday, February 21, 2011

I love teaching. It's one of the most rewarding experiences in my job. This morning (yesterday actually), I joined with Lee Jones, Bob Yencha, and Didi Davis to talk about how you can put the pieces together to build interoperable solutions.

Lee kicked us off with some great slides about what he calls the "Meaningful Frenzy". There are several poignant slides of a pair of Sumo wrestlers that really drive home some of the challenges we all have.

I followed up with a description of what a use case is, and talked about many of the use cases for which we have already built interoperability solutions, which we call profiles in IHE. After that, we walked through "Use Case 3", which describes an emergency room visit for a sore wrist by a 70-year-old woman. Then the fun began. I'm used to walking around the room to engage students, but the mikes were all wired, so I was stuck behind the podium. But for the workshop portion of my talk, I needed to move. Well, I can project pretty well, so I skipped the mike and started talking. The tech guys decided they needed to have me miked anyway (so they could record), so they got me a wired shirt mike with a 40' cord. Moving around on stage and back to the floor, I got tangled up a few times.

Once I was wired, we picked out the important pieces of our use case:

Identifying the patient

Obtaining Consent to Share information.

Referring the patient to a specialist (a cardiologist in this case).

Accessing information from the patient's prior visits.

Reporting the results and sharing them with the patient's GP.

Some of the IHE profiles that we talked about included PIX/PDQ [1], BPPC [2], XDS-MS, DSUB and MPQ [3], XDS.b and other content profiles.

I can understand why it's difficult for people to put the profiles together. All the profiles do is deal with transactions for a specific use case, to solve a single problem. What people need to do is think creatively.

Here's a little experiment: Take the seven profiles that I listed above, and pick two. What cool thing can you do with those two together? Let's look at PIX and BPPC. PIX is how you send information from a Patient Identity Source (say a registration system) to a Patient Identity Cross Reference Manager (e.g., an MPI). Now, suppose that registration is done through a kiosk, and there's a checkbox to get consents to share information and the patient's signature. So, to create a new patient in the HIE, you can create an HL7 Version 2 ADT message, and send it from the Kiosk to the MPI. Now, the MPI can look at a couple of fields in that message to determine what the patient consented to, and it can create a Basic Patient Privacy Consents document which it will register in the HIE. So, just by putting these two profiles together, we've solved an interesting problem (I've seen a variation of this used in at least one HIE).
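Here's a sketch of that kiosk flow. The segments are heavily abbreviated and the field positions are illustrative only; a real HL7 Version 2 ADT message has many more required fields, and a real BPPC document is a CDA document, not a bare object:

```javascript
// Kiosk registration: build an HL7 v2 ADT feed for the MPI (the PIX
// Patient Identity Feed), carrying a consent flag the MPI can inspect
// to decide whether to create and register a BPPC consent document.
function kioskRegistration(patient, consentChecked) {
  const adt = [
    'MSH|^~\\&|KIOSK|CLINIC|MPI|HIE|20110221||ADT^A04',
    `PID|||${patient.id}||${patient.name}`,
    // Consent flag carried in a field of the message:
    `CON||${consentChecked ? 'Y' : 'N'}`,
  ].join('\r');
  // The MPI's side: a consent indication becomes a BPPC document to
  // register in the HIE (policy name here is a made-up placeholder).
  const bppc = consentChecked
    ? { patientId: patient.id, policy: 'share-for-treatment' }
    : null;
  return { adt, bppc };
}
```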

We did the same thing with the seven profiles I listed. But the challenge for most people is putting these things together to make it work. You have to be creative, and you have to know what the shapes do. Once you know what the shapes do, you can create an endless variety of solutions. IHE is to Health IT as Lego is to young engineers.

The rest of the presentation, which included details on another set of building blocks (templates in CDA), and steps beyond that can be found here.

Keith

P.S. If you are looking for me at HIMSS, you should be able to find me in the Interoperability Workshop at the Help Desk. But I'll be late tomorrow. Have to pick up the motorcycle first...

Saturday, February 19, 2011

Regular readers of this blog know that I organize my "standards" year around HIMSS and the IHE Interoperability Showcase. The showcase is the culmination of a year's efforts in IHE made real.
So, tonight is New Year's Eve, as it were. This evening I attended the EHRA annual meeting and dinner at HIMSS (my first time, though I've been a long-time EHRA member). ONC's Doug Fridsma was the keynote speaker, and he asked everyone at the meeting to e-mail him their updates on what of significance has happened in standards this year. As usual, I'm doing two things at once, and using that task to feed the blog monster as well.

There are a number of events that were important to me personally this year:

I finished writing The CDA™ Book (yes, CDA and CCD are now HL7 trademarks, and will soon be registered).

HL7 has taken on some interesting new work and thinking in simplifying its standards.

SNOMED CT and LOINC were also recognized in Meaningful Use regulation.

From an International Perspective:

HL7 CDA moved to "Entering the Plateau of Productivity" according to Gartner.

IHE moved to "Climbing the Slope" according to Gartner. The XDS map received over 100,000 hits in a year, showing the popularity of this profile, and interest in where it has been adopted (I'll bet you cannot point to any other resource that tracks any other HIE interoperability specification the way this map does).

HL7 and ISO finished a joint ballot on the ISO/HL7 Data types, a set of standard data types for healthcare.

So, that's my year in review. If you want to see the outcomes, have a look at the Interoperability Showcase at HIMSS11 this year. Quite a bit of what I described above is being demonstrated there, and most of it is in production. If you want details, you can probably find me around the help desk this week.

-- Keith

P.S. If you cannot find me, it might be that you aren't looking for me in a suit. It does happen, usually around this time of year. Jacob Reider kindly provides proof:

What Bill does explain quite well are the challenges of Information Retrieval as it relates to the practice of medicine and medical research, and vice versa. As I said, this was, even to one who is well versed in IR topics (having worked in IR and linguistics for 10 years), quite dense content, and full of useful information and references. Quite a bit of the book focuses on the work of the National Library of Medicine, including PubMed and MEDLINE.

What I found most disappointing in the book was the rather scarce coverage of Information Retrieval as it applies to the electronic health record, but I should not be surprised. That lack is not Bill's fault. It is up to our generation to apply IR technology to the EHR, just as it was up to prior generations to apply them to electronic text and the web. Chapter 9 delves into some of the issues of Information Extraction from medical records, something I spent about 4 years working on in a prior job. Coverage is a little bit thin in this area, but then again, some of the products that do this today in a very small way (tagging utterances in text) are only just now emerging into the medical marketplace, nearly a decade after I started working on them.

Bill is very well versed in his topic. Lotka's law (see page 49 of his book) seems to hold if you look at the rather extensive (64 pages) list of references that he includes in the back. While I would certainly expect some bias towards his own publications, only Anonymous seems to be more prolific in this space.

Bill also writes a blog. You can find him at Informatics Professor. I usually find his posts to be worth reading and tweeting about.

- Keith

P.S. In the interest of full disclosure, Bill gave me this copy, just as I will be giving him a copy of my book.

Thursday, February 17, 2011

I've recently been reading (re-reading in fact), Crystal Dragon by Sharon Lee and Steve Miller. The book is about the founding of Clan Korval; principals of which play leading roles in their Liaden Universe series of Science Fiction love stories. In the book, there is a sequence in which the leading characters must infiltrate a culture of academics to retrieve something of importance. In this particular culture, one advances in stature by proving one's work. The notion of proof, however, is nothing more than a knife fight, held with razor sharp "truth blades". When one's proof is challenged, the prover and the challenger fight in a battle to the death, slicing each other to ribbons in the process. The winner's work receives acclaim, and the loser's work is subsequently erased from history.

While this is a work of fiction, I have to wonder: has either of the authors ever attended a standards meeting?

Wednesday, February 16, 2011

This was in my inbox a few weeks back, and I thought I'd share it with you because some of the links are useful, but then I never posted it. Well, here it is, if a few weeks late...

Thank you for attending the IHE N.A. Connectathon Conference 2011.

On behalf of IHE USA, thank you for participation at the IHE N.A. Connectathon Conference on Tuesday, January 18, 2011. The conference attracted over 150 attendees to Chicago for an important convergence of thought leadership in the health IT industry. We hope you enjoyed the educational programming and networking.

As each of you witnessed during the guided tour of the IHE N.A. Connectathon testing area, the Conference is just one important element during the rigorous five-day testing marathon. As a result, we would like to share both the Conference highlights and final Connectathon statistics. Please visit the IHE USA website or use the direct links provided below to view the highlights from the week.

If you have any additional questions or would like to become involved in IHE, the IHE Global Connectathons or HIMSS Interoperability Showcases, please contact secretary@ihe.net. We will be glad to speak with you individually.

Fred Trotter wrote a post about the Patient Centered Health Internet back in December. In it, he describes what he calls the IHE model of exchange, and then describes the NWHIN model of exchange which uses IHE profiles. He also combines this model with a number of committees who "believe they are in charge of interoperability". I commented on this post, and @ePatientDave asked for more information (even though he spelled my handle wrong), so here it is.

These profiles are building blocks upon which one can develop a number of architectures. The NWHIN architecture that Fred describes is one such architecture. It uses XCA, XUA, BPPC, XDS and PIX/PDQ or XCPD to federate HIEs at various levels. This is akin to the distributed DNS architecture of the internet, or the architecture used to federate organizational directories using LDAP. It is, like the internet, complex, fault tolerant, but subject to multiple points of local failure.

The NHIN Direct architecture that he describes is what IHE Patient Care Coordination considered as one possible architecture when we created the IHE XPHR (Exchange of Personal Health Records) content profile. The Direct Project includes references to both XDM and XDR in it, and can be used to exchange the HITSP C32 specification. XDR is an alternate messaging backbone for Direct which can be entered through an SMTP gateway. XDM is the specification that should be used with Direct when sending multiple documents in one message.

The IHE XPHR profile is a specification built over the HL7 CCD, and upon which the HITSP C32 is built. That profile is intended to support the exchange of information between the physician and the patient, putting the patient at the center of care.

IHE supports both models Fred presented; we build the lego blocks, and someone else figures out how to put them into an architecture for health information exchange. We design these lego blocks to be reusable and to support a number of different configurations, so there is no single IHE model. XDS was designed to support national, regional and local or institutional architectures by tying together health information systems containing clinical information with a single index. XDR is designed for point to point communication. XCA is designed to connect indices from multiple sources. PIX/PDQ is designed to be an interface to a Master Patient Index, which could be at the national, regional, local or institutional level. XCPD is designed to support location of patients across identity boundaries at any of several levels. BPPC is designed to support the recording of patient consents to policies established at any level. Because we address requirements at multiple levels, a wide variety of architectures is supported.

If you want to assign responsibility for the NWHIN architecture, please don't assume it was IHE. You'll have to look elsewhere. I'd start with the committee responsible for it.

I've been exchanging a few e-mails with friends in Australia familiar with openEHR. In a recent exchange they described some important ideas that I'd like to share:

In a comparison of two different standards used for the same purposes (their focus was openEHR and HL7 RIM, but this applies to any two standards):

The overlap, or intersection where both agree, denotes a least upper bound on that which is required to be standardized for representing information.

The superset shows where different approaches to the same issue are possible. I'd be a bit more specific and say that the set difference (the superset minus the intersection) shows where variety is possible.

Applying these ideas to openEHR and CDA will take some time for analysis, however, applying this thinking to CDA and CCR:

The intersection between the two is CCD, and since CCD is an implementation of CCR, it wholly subsumes CCR. The superset is CDA.
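The intersection/superset idea above can be illustrated with simple set operations. This is a rough sketch only; the "concepts" below are invented placeholders, not actual content models of CCR or CDA.

```python
# Illustrative sketch: modeling the intersection/superset idea with
# Python sets of hypothetical "concepts" (placeholders, not real models).
ccr_concepts = {"problems", "medications", "allergies", "results"}
cda_concepts = {"problems", "medications", "allergies", "results",
                "narrative_text", "arbitrary_sections"}

# The intersection is what both standards agree on -- the CCD analogy.
ccd_analog = cda_concepts & ccr_concepts
assert ccd_analog == ccr_concepts  # in this sketch, CCD wholly subsumes CCR

# The set difference is where one standard permits variety the other lacks.
cda_only = cda_concepts - ccr_concepts
print(sorted(cda_only))  # ['arbitrary_sections', 'narrative_text']
```

Applying the same analysis to openEHR and the RIM would mean populating these sets from the real models, which is the hard part.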

Monday, February 14, 2011

Wes Rishel recently gave some advice to ONC on what they should do with respect to the PCAST report. In it, he largely agrees with me, or I with him. There are a few points of departure, which I'd like to address:

Documents vs. Molecules

The arcanity of CDA.

The dreaded OID.

The unwillingness of HL7 to do more than experiment.

Getting agreement on important molecules vs. documents.

More than 200 products have been ONC-ATCB certified. Based on reports from others, I would estimate that at least 80% of those support generating a CCD (and therefore CDA). Viewing IHE Connectathon results, more than 50 companies around the world have demonstrated the ability to generate a CDA, and as evidenced by IHE Conformance statements, nearly 40 of those companies are shipping product that does so (and many have more than one product).

Arcane? Maybe, BUT also doable and shipping.

Should we go back to the drawing board and do it better? My answer to this question depends on what your time frame is. If it is for Stage 2, the answer is pretty clear: NO. If you want to get it right, and take in the experiences of others, you need to take the time to figure this out and to build consensus. If you want this to get started, great. Just don't tie it to an arbitrarily tight deadline.

Documents vs. Molecules
Actually, I don't disagree with Wes at all on this topic. I just wanted to point out a few things from my own experiences in a former job, creating a system similar to what PCAST envisions. The importance of context is something we built into that system. It extracted problems, medications, allergies and procedures from dictated text, coded the problems, and saved the links back to the documentation with the data stored in the database. That database was successfully used to identify patients taking Vioxx when that recall was announced.

The IHE Query for Existing Data profile envisions just such an extraction. In fact, it uses very similar XML as found in CDA, and when data is extracted from documents, requires that references to the source documents be provided with the data. Does IHE support the PCAST vision? Yes, it does.

The Arcanity of CDA
Any critique of arcanity should start by addressing the specific issues, not just a general claim of difficulty. So, to add to Wes's point, here is what contributes to the arcanity:

The lack of readily available educational materials and instruction (which is why I wrote The CDA Book).

An insistence upon rigor in modeling information correctly, and the complexity of representing rigorously modeled information in a generalized XML implementation technology.

Counterintuitive uses of XML, including the use of attributes (moodCode and classCode) to define semantics in the model, rather than element names [brought about by the parsing limitations of XML].

The onion peeling problem, which is really a question of publication format rather than a problem directly related to the standard.

The use of the dreaded OID.

Lack of Educational Materials and Instruction
One of the problems in healthcare IT is that it is a "small" vertical market when compared to the much larger IT market. While it didn't take me long to find a publisher for The CDA Book, what I heard from traditional publishers in the IT space was that it was "small potatoes" compared to things like HTML, XSL, XSLT, et cetera. I'm not going to get rich off that title, nor did I expect to, but neither is my publisher (although I expect we will both do better than he projects).

It's also not the kind of thing that used to show up frequently in informatics classes, although I do see that changing (and have taught a few classes on the topic). I expect by this time next year that there will be a few classes that spend at least a few weeks, and maybe an entire semester on the topic.

Rigor in Modeling
I don't want to see the perpetuation of the abuse of OBX and ORU that we've seen in HL7 Version 2. This is after all possibly my mother you are caring for. That being said, a rigorous model that creates difficult XML presents other opportunities for things to get messed up in implementation, and makes it hard to see what is being said. There should be a way to have both. That is what the standards experts need to do. GreenCDA is a way forward that supports both, and other ways might also be developed.

Counterintuitive XML
In the HL7 XML ITS, the XML element names don't carry semantics. The moodCode and classCode attributes do. Once you learn to get over that obstacle, much of the "complexity" is readily understood. There are two different projects in HL7 to make that easier to comprehend. The first of these is the RIM ITS, which uses semantically meaningful names, but still puts some information into classCode and moodCode. The other is greenCDA which goes for "molecular" names of things. I'm not convinced that either has the XML right yet, but both are headed in the right direction, making the XML more readily accessible to engineers.
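The contrast between the two styles can be seen in a small, simplified sketch. These fragments are illustrative only, not complete or valid CDA; the greenCDA element names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified fragments (not valid CDA) showing
# where the semantics live in each style.
rim_style = """<observation classCode="OBS" moodCode="EVN">
  <code code="233604007" codeSystem="2.16.840.1.113883.6.96"/>
</observation>"""

green_style = """<problem>
  <problemCode code="233604007" codeSystem="2.16.840.1.113883.6.96"/>
</problem>"""

rim = ET.fromstring(rim_style)
# In the HL7 XML ITS, the element name is generic; the classCode and
# moodCode attributes say "this is an observation that is an event".
print(rim.tag, rim.get("classCode"), rim.get("moodCode"))  # observation OBS EVN

# In the greenCDA approach, the element name itself carries the
# "molecular" meaning, which engineers find more accessible.
green = ET.fromstring(green_style)
print(green.tag)  # problem
```

The information content is the same in both; what differs is whether a generic reader can discover the semantics from element names alone.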

Onion Peeling
This is really a problem of how the standard is published. There is a push from ONC to use model based development tools and to publish not just the standard, but also the data used to create the models. One of the important challenges here is addressing how to manage the various intellectual property issues across all of the organizations that want to use this data. That's no longer a problem I can claim to be beyond my ability to address (it is an issue that the HL7 Board is focused on). The CDA Consolidation Project being worked on by HL7, IHE and ONC is making substantial progress in this direction.

The Dreaded OID
I am reminded of the opening of "The Prisoner". Q: Who is Number One? A: You are Number Six. The ambiguity of identification requires some mechanism to uniquely and perpetually provide the context that ensures identifiers and codes are correctly interpreted. HL7 chose to use OIDs, which are remarkably simple to generate, distribute and manage. What is hard about them is remembering them, but the same can also be said about codes. But, we don't talk about the "dreaded code". There seems to be a disconnect here. OIDs solve a problem that needs to be solved, just like codes do.

One need only recall that several large national programs have been shut down because identifiers were not correctly dealt with. Criticizing without proposing a solution is not constructive. If you have a better solution to this problem, I'd love to hear it. I can personally think of a few, but they add complexity through indirection, and I'm not sure that helps either.
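A minimal sketch of the problem OIDs solve: HL7's II (instance identifier) data type pairs an OID "root", giving global context, with a local "extension". The roots below use the 2.999 example arc and are illustrative only.

```python
# Sketch of the HL7 II (instance identifier) idea: an OID "root" gives
# global context, and an "extension" carries the local identifier.
# The 2.999 arc is reserved for examples; these are not registered OIDs.
from collections import namedtuple

II = namedtuple("II", ["root", "extension"])

mrn = II(root="2.999.1.2.3", extension="12345")    # MRN at hospital A
other = II(root="2.999.4.5.6", extension="12345")  # same local number, hospital B

# Identical local numbers do not collide, because the root disambiguates.
assert mrn != other
print(f"{mrn.root}^{mrn.extension}")  # 2.999.1.2.3^12345
```

Any alternative scheme (URIs, registries) must solve the same disambiguation problem; the indirection just moves elsewhere.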

The Unwillingness of HL7 to do more than experiment.
Progress starts with experimentation, moving beyond the theoretical with the recognition that more information may be needed. The concept of a "Draft Standard for Trial Use" embodies this principle. We say try it out and tell us what you think. That's not an unwillingness to do more than experiment, it's simply a recognition that attempts to theorize the right way to do something are futile beyond a certain point. Experience is needed, so go ahead, get some and tell us what you learned. If you'd like some approval beyond that, you might try describing your experiment, but um, do me a favor. Make sure that you realize that you are experimenting, and that you do have consenting patients participating in your study.

Getting Agreement on Important Molecules vs. Documents
Getting agreement on the molecules is a required step when one creates a document specification. Having done this more than two dozen times in three different organizations, what I can tell you is that we already have a considerable collection (nearly 500) of molecules that have already been agreed upon by multiple organizations, standards experts, clinicians, et cetera.

In developing the HL7 CCD Specification, the IHE PCC Technical Framework, various other HL7 implementation guides, and the HITSP C83 specification, we used the principle of creating a library of building blocks (molecules). Documents are composed of sections, which are composed of various entries. The entries are the same across documents (in HITSP, a problem entry in a C32 is the same as a problem entry in a C84 or C48, and that same principle applies across other specifications). In fact, the proliferation of the IHE PCC Technical Framework across Europe and Asia has not been through the documents, but rather the sections and entries that appear within it.
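The library-of-molecules principle can be sketched as document definitions that reference shared entry templates by identifier. The template and section names below are invented placeholders, not actual HITSP or IHE template identifiers.

```python
# Hypothetical sketch of the "library of molecules" idea: multiple
# document types reference the same entry template. Names are
# illustrative, not real HITSP/IHE template identifiers.
PROBLEM_ENTRY = "problem-entry-template"

documents = {
    "C32": {"sections": {"problems": [PROBLEM_ENTRY]}},
    "C48": {"sections": {"problems": [PROBLEM_ENTRY]}},
    "C84": {"sections": {"problems": [PROBLEM_ENTRY]}},
}

# Every document that has a problems section reuses the same entry,
# so agreement on the molecule is reached once and reused everywhere.
entries = {doc: spec["sections"]["problems"][0]
           for doc, spec in documents.items()}
assert len(set(entries.values())) == 1
```

The point is that the hard consensus work happens once, at the entry level, and each new document specification composes from the existing library.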

Getting agreement on what the molecules are is NOT the problem. The problem is making that information readily available. I get just a bit tired of rebuilding the wheel. In this century, it is round, filled with air and made of rubber. I'd like to move on to building the next race car.

IHE Domain Overview and Presentations: Gain a quick twenty-minute overview of each domain and the work being developed today! All presentations are held in Hall E, Exhibit Booth #7347 in HIMSS Interoperability Showcase Theaters A or B. See listing below.

IHE Global Deployment Committees Overview & Presentations: IHE has established over twenty Global Deployment Committees. Each committee is based in a given nation or region to promote the appropriate use of IHE Technical Frameworks within their respective areas. For example, IHE USA promotes the effective use of IHE's work with the Office of the National Coordinator (ONC). All presentations below will be held in HIMSS Live Exchange Theater (Theater C) of the Interoperability Showcase.

Thursday, February 10, 2011

The following position statement was approved today by Structured Documents WG:

greenCDA wire format position statement

The enthusiastic response to development of greenCDA -- a simplified XML for CDA templates -- is driving rapid experimentation and has raised the question of how greenCDA fits into the larger ecosystem of clinical information systems. This trial use and experimentation will help us understand how going green affects ease of use for data capture, management and analysis, when it might be an appropriate wire format for CDA, if there are significant limits on expressivity and where the cost and benefits may lie.

Today, greenCDA is an HL7 Implementation Guide, and one of several projects (including microITS, hData, and others) aimed at simplifying HL7 implementations.

We encourage a broad range of experimentation across different use cases and environments. HL7 welcomes the trial use and the opportunity to review the opportunities, costs and benefits of going green across the spectrum of implementation and looks forward to a robust and informative discussion with all stakeholders leading to acceleration of the development and adoption of interoperable clinical information systems.

Wednesday, February 9, 2011

IHE IT Infrastructure is defining an integration profile called Cross Enterprise Document Workflow (shortened to XDW). This is truly infrastructure. There is no clinical content, and the workflow management component is very thin on management. Really it provides tracking of the workflow state of various tasks, and provides for a mechanism to update state. Any associated clinical content that works with this profile would be developed by a clinical domain (e.g., radiology, cardiology, patient care coordination, labs, et cetera).

Each step in the workflow is described by a short set of metadata and the workflow also has a set of metadata. All of the steps (and their metadata) in the workflow are recorded in a single document.

We (PCC) spent a bit of time in IT Infrastructure today to outline some of the requirements for the step and workflow metadata, and how this could work with the existing XDS infrastructure without change. One of the key principles of this profile is that applications using it would manage workflows, rather than having them be enforced by a workflow management engine. It doesn't preclude the use of such an agent once we get into the specific clinical workflows, but it does provide a vastly simpler scope that makes the profile broadly useful.
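The idea of a single document tracking task state, with applications (rather than an engine) recording state changes, can be sketched roughly as follows. The field names and status values here are invented for illustration; they are not the profile's actual metadata.

```python
# Rough sketch of the XDW idea described above: one workflow document
# holds metadata for each task, and applications update task state
# directly rather than a central engine enforcing transitions.
# Field names and status values are illustrative, not the profile's.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner: str
    status: str = "READY"  # e.g. READY -> IN_PROGRESS -> COMPLETED

@dataclass
class WorkflowDocument:
    workflow_id: str
    patient_id: str
    tasks: list = field(default_factory=list)

    def update_task(self, name: str, status: str) -> None:
        # An application records a state change; nothing enforces order.
        for task in self.tasks:
            if task.name == name:
                task.status = status

wf = WorkflowDocument("wf-001", "patient-123",
                      [Task("referral", "Dr. A"), Task("consult", "Dr. B")])
wf.update_task("referral", "COMPLETED")
print([(t.name, t.status) for t in wf.tasks])
# [('referral', 'COMPLETED'), ('consult', 'READY')]
```

Because the document only tracks state, a clinical domain can later layer whatever workflow rules it needs on top, which is what keeps the profile's scope simple.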

My next step is to do a little investigation into existing standards for describing workflows and steps, to see if there are any good candidates to include in this profile for representation of the workflow.