Pages

Monday, September 25, 2017

This morning I received a long, not necessarily relevant announcement to an email list I don't remember subscribing to, followed by 30 replies. The replies are all from relatively educated people, many of whom know better, and are summarized below for your reading amusement:

R1: Please remove me from this list

R2: Hi R2, R3 did not send this to you ...

R3: I am not R2

R4: Please respond to the person/s directly and not send a reply to all

R5: Please remove me from all future emails concerning this program

R6: I find reply all useful when unsure who the admin is.

R7: Must you use "reply to all"

R8: Meme "Reply All"

R9: For God's sake everybody -- quit hitting 'reply all' ...

R10: Please remove me as well.

R11: The same here.

R12: This is officially ridiculous. Can everyone stop replying to all these emails?

R13: Same

R14: I don’t know what this email is either and I certainly did not send it out. Please remove me as well.

R15: Hitting reply on the original message only sends the message to the person who sent the email which should be the admin of the list.

R8: Good luck, R3! Keep me posted on the outcome.

R17: Please remove me from your list...

R8: Who's on first?

R20: You guys realize by replying all and asking people to stop replying all that you're just part of the problem, right?...

R21: I just became an Ohio State fan…

R22: I don’t know why I am on this list, so please remove me as well, whoever the admin is.

R23: And good Lord, people, there’s a contact email in the body of the original message:______

Although I must say this has been highly entertaining and a big improvement over the typical Monday.

R24: Please remove me from this list.

R25: Please remove me from this list. Thank you!

R26: Dear whomever, I already have <degree>. I need <job>...

R27:

R28: Me too (in reply to me too).

R29: It appears the original email came from ____. Please direct your request to her alone...

R30: Sorry R?, but hitting reply to all just fills our inboxes with garbage.

... and still going ...

P.S. My e-mail is simply going to point to this blog post and ask everyone to comment here.

Monday, September 18, 2017

Sometimes, to see if two things are similar, you have to ignore some of the finer details. When applications dynamically generate CDA or FHIR output, a lot of details are necessary, but you cannot always control all the values. So, you need to ignore the detail to see the important things. Is there a problem here? Ignore the suits, look at the guns.

Creating unit tests against a baseline XML can be difficult because of detail. What you can do in these cases is remove the stuff that doesn't matter, and enforce some rigor on other stuff in ways you control rather than your XML parser, transformer or generation infrastructure.

The stylesheet below is an example of just such a tool. If you run it over your CDA document, it will do a few things:

Remove some content (such as the document id and effective time) which are usually unique and dynamically determined.

Clean up ID attributes such that every ID attribute is numbered in document order in the format ID-n (ID-1, ID-2, and so on).

Ensure that internal references to those ID attributes still point to the thing that they originally did.

This stylesheet uses the identity transformation with some little tweaks to "clean up" the things we don't care to compare. It's a pretty simple tool so I won't go into great detail about how to use it.
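For those who would rather see the idea than the XSLT, the same cleanup can be sketched in Python with the standard library. This is a rough sketch, not the stylesheet itself: the conventions it assumes (ID attributes, #-prefixed reference values, volatile `id` and `effectiveTime` elements) come from CDA, but the toy markup in the usage note is mine.

```python
import xml.etree.ElementTree as ET

def normalize(xml_text):
    """Strip volatile content and renumber ID attributes so two
    dynamically generated documents can be compared directly."""
    root = ET.fromstring(xml_text)
    # 1. Blank out elements whose values are unique per generation.
    for volatile in ("id", "effectiveTime"):
        for el in root.iter(volatile):
            el.attrib.clear()
    # 2. Renumber ID attributes in document order as ID-1, ID-2, ...
    mapping = {}
    for el in root.iter():
        old = el.get("ID")
        if old is not None:
            mapping[old] = "ID-%d" % (len(mapping) + 1)
            el.set("ID", mapping[old])
    # 3. Repoint internal references (value="#oldid") at the new IDs.
    for el in root.iter():
        ref = el.get("value")
        if ref and ref.startswith("#") and ref[1:] in mapping:
            el.set("value", "#" + mapping[ref[1:]])
    return ET.tostring(root, encoding="unicode")
```

Run two generated documents through this and a plain string (or line-by-line) comparison becomes meaningful, because the unique bits have been removed or made deterministic.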

Wednesday, September 13, 2017

Every year in September, HL7 has its "Plenary" session. This is a half day where we hear from folks outside of the working groups on important topics related to what we do.

This year we heard from Matt Might, whom I now would christen Matt the Mighty for his Super-Dad precision medicine powers. Either that, or as close in real life as one could come to a Doctor McCoy.

You really have to hear him tell the whole story because A) He is an awesome story teller, and B) there's simply so much more depth to it.

The long and short of it, though, is that not only does he help figure out how to identify a rare (n=1?) disease, develop a diagnostic test for it, and identify other possible sufferers, but he also finds a treatment (not a complete cure, but one addressing some effects) among already FDA-approved substances (lucking out on an OTC drug), develops model legislation that his state passes to allow "Right to Try" use of medications in these cases, and builds a process by which other n=1 disease patients can benefit from it, starting with his own son.

That's Mighty powerful application of precision medicine (pun fully intended). If you weren't here, I'm sorry you missed it, and urge you to listen to him speak elsewhere.

Thursday, September 7, 2017

One of the things we've seen from early warnings about Hurricane Irma is a significant increase in airline fares from some airlines. Some of this, I'm sure, is due to automated, demand-based pricing algorithms, which may very well run with little or no human intervention.

That got me to thinking about how demand driven pricing AND demand driven reimbursement could have an interesting impact on prices for healthcare services IF it were possible to apply them more interactively and faster.

In the battle of algorithms, the organization with the best data would most likely win. I see four facets to that evaluation of "Best": Breadth, Expression, Savvy, and Treatment (see what I did there?).

Breadth: More, bigger data is better.

Expression: If your data is organized in a way that makes correlations more obvious, then you can gain an advantage.

Savvy: If you know how A relates to B, you also gain an advantage. Organization is related to comprehension.

Treatment: Can you execute? Does the data sing to you, or do you have to filter signal from a vast collection of white noise?

In the 5P model of healthcare system stakeholders, Polity (Government), Payer, Provider, Patient, and Proprietor (Employers):

Who has the largest breadth of data? The smallest?

Who has the best expression of data? The worst?

Who has the greatest savvy for the data? The least?

Who will be most able to treat the data to their best advantage? The least?

It seems pretty clear that the patient has the short end of the stick on most of this, except perhaps on their "personal" collection of data.

Payers are probably in better shape than others with regard to breadth, followed closely by Polity. The reason I say that is because government data is dispersed ... the left hand and the right hand can barely touch in some places. Providers rarely have the breadth unless they begin to take on the Payer role as well (e.g., Kaiser, Intermountain, et cetera).

Providers have a better chance of having better expression, being able to tie treatment to condition in more detail, and have some chance at understanding outcomes as well.

It's not clear that employers are THAT much better off than patients, although frankly I honestly don't know how much information they really have.

Treatment is where it all comes together, and right now in the US, it seems that nobody has yet found the right treatment ...

Wednesday, September 6, 2017

HL7 balloting just closed this last hour. Here's my recap of what I looked at, how I felt about it, and where I think each ballot will wind up, from worst to best. Note: My star ratings aren't just about the quality of the material; it's a complex formula involving the quality of the material, the likelihood of it being implemented, the potential value to end users, and the phase of the moon on the first Monday in the third week of August in the year the material was balloted.

This had a total of six artifacts on the ballot. Together they get 1 star for being able to pass muster to go to ballot. As a family of specifications, this collection of material looks like it was written by a dozen different people across multiple workgroups with three different processes. What is sad here is that the core group of people who have been working on this material for some time (including me) is the same across much of this work, and it all comes out of the same place. VMR was always an ugly stepchild in HL7, and these specifications don't make it much better. Don't lose hope though, because QUICK and CQL are significant improvements, and the FHIR-based clinical decision support work such as CDS Hooks is much more promising. All appear to have achieved quorum and seem likely to pass once through reconciliation.

Release 2: Functional Profile; Work and Health, Release 1 - US Realm (PI ID: 1202) ⋆⋆
Yet another functional model. Decent stuff if that is what excites you. I find functional models boring mostly because they aren't being used as intended where it matters. Pretty likely to pass.

HL7 Version 2.9 Messaging Standard (PI ID: 773) ⋆⋆
The last? of a dying breed of standard. Maybe? Please? Not enough votes to pass yet, but could happen after reconciliation (which is where V2 usually passes).

Another duo, missing the overweight architectural structure of VMR, but certainly adequate for what it is trying to accomplish. The question I have here is about its relevance. Except in inpatient settings, I find the notion of a pharmacist care plan for a patient to be of very little value at this stage. In fact, we need more attention on care planning in the ambulatory setting.

These are for comment only ballots and the voting reflects it. While not likely to "pass", the comment only status guarantees that these will go back through another cycle. Based on the voting, the material needs it.

HL7 continues to ballot its own processes. What makes this one funny is that this particular ballot comes out of a workgroup in the Technical and Support Services steering division, which previously rejected another group in that division's balloting of a document because T3SD (their acronym) doesn't do ballots (BTW: that's a completely inadequate summary of what really happened; some day, if you buy me a beer, I'll get _ and _ to tell you the story. Better yet, buy them beers).

It's a decent document, and likely to "pass".

HL7 CDA® R2 Implementation Guide: International Patient Summary, Release 1 (PI ID: 1087) ⋆⋆⋆
I could get more excited about this particular piece of work if it weren't for the fact that it's all about getting treatment internationally, rather than being an international standard that would eliminate some of the need to deal with cross border issues. But, it's the former rather than the latter, so only three stars. A lot of the work spends time dealing with all the tiny little details about making everyone happy on every end instead of getting someone to make some decent decisions that enable true international coordination.

This one is tight, will likely pass in reconciliation, and is getting a lot of international eyes on it. It's good stuff.

This is a useful addition to what we can do today with Advance Directives, and a great example of how to deal with backwards compatibility right, and they almost nailed it perfectly (my one negative comment on this item is a fine point).

Not a lead-pipe cinch but surely the issues in this one will be resolved during reconciliation.

HL7 Cross-Paradigm Specification: Allergy and Intolerance Substance Value Set(s) Definition, Release 1 (PI ID: 1272) ⋆⋆⋆⋆⋆
ABOUT. DAMN. TIME. An allergy value set we can all use. Nuf said.
The interesting back story here is who is voting negative (who cares) about this. It looks like a lot of VA/DOD interoperability is going to get decided through standards. I'm pretty certain this stuff is going to get worked out, which has tremendous value to the rest of us.

HL7 FHIR® IG: SMART Application Launch Framework, Release 1 (PI ID: 1341) ⋆⋆⋆⋆⋆
I spent the most time commenting on this one. I'm looking forward to seeing this published as an HL7 standard and to getting some overall improvements to what I've been implementing for the past year or so.

There's definitely some good feedback on this ballot (which means likely to take a while in reconciliation), even though it seems very likely to pass.

HL7 Clinical Document Architecture, Release 2.1 (PI ID: 1150) ⋆⋆⋆⋆⋆
This was the surprise of the lot for me. I expected to be bored, having said CDA is Dead not quite four years ago. I was, pleasantly so. There was only one contentious issue for me (the new support added for tables in tables). They got to four stars by making sure all the issues we've encountered over the past decade and more were addressed. They got an extra star by making it easy to find what had changed in the content since CDA R2. All in all, a pleasant surprise. CDA R2 still reigns supreme, but I think CDA R2.1 might very well become regent until CDA on FHIR is of age.
Oh yeah. It passed, so very likely to go normative, which will make discussions about the standard in the next round of certification VERY interesting.

Thursday, August 31, 2017

Originally found in my personal inbox from a software developer still using a Commodore 64. I thought I'd share it today.

If you're reading this, then you are already part of a chain that goes back to the early 1980s. Early in the morning on June 4th in 1982, software engineer Dwayne Harris sat down to write a BLISS module for the then-new VMS operating system. Little did he know, but a radioactive bug had crawled into his VAX 11/785 prototype, shorted a power supply capacitor and opened a worm-hole into another dimension.

A small type-2 semi-demonic entity emerged from this dimension and took up residence in the VMS source repository. Fragments of the semi-demonic entity's consciousness were also embedded in Dave Cutler's subconscious (thus explaining the Windows NT video driver interface).

Every 2^20 seconds, a secret society of software engineers gathers in an unused USENET news group to ritually banish this semi-demonic entity. Things have been going fine, but the old guard is retiring and moving on to other projects. We are in desperate need of new software engineers to carry on the work of this once mighty society of software engineers.

If we fail to achieve a quorum of 0x13 participants in the banishment ritual, the semi-demonic entity will be released and any number of modern plagues will fall upon the online public.

In 1995, we started our ritual late and Internet Explorer was released upon the world. Only through fast action was complete disaster averted and MS Bob coaxed back into a vault underneath Stanford University.

Because of the recent influx of former redditors into the remains of the USENET backbone systems, we can no longer perform our rituals. As an alternative, we have developed this chain letter.

At EXACTLY 7:48:12PM PDT 22 July 2015 (10:48:12PM EDT) and every 2^20 seconds afterwards, we ask you to email a copy of this letter to five software engineers in your address book. The flux of mystical representational energy through MAE WEST and MAE EAST should be sufficient to ward off the evil that now faces us.

Remember, for this spell to work, you must be a software engineer and send the email to other software engineers.

Wednesday, August 30, 2017

How do you find a problem that was occurring during a particular time span? This is relevant if you are doing a search for Conditions (problems) that are active within a particular time period for something like a quality measure or clinical decision support rule. As I've previously discussed here, temporal searching is subtle.

So, suppose you have a time period with start and end points, and you want to find those conditions which were happening in that time period. There are only two rules you need to care about:

You can rule out anything where the onset was after the end of the time period.

You can rule out anything where abatement was before the time period started.

What's left? In the following analysis, I'm ignoring "things that happen at the boundary points". For the sake of argument, we'll assume that time is infinitely divisible and that no two things occur at "exactly the same time". Obviously we quantize time, and boundary conditions are inevitable. But they aren't IMPORTANT to this discussion.

Things that had an onset within the time period, or before the time period started.

For those items that had an onset within the time period, clearly it's in the time period!

For those items that had an onset before the time period started, one of three things must have occurred:

The problem abated before the time period started (which is ruled out by rule #2 above).

The problem abated during the time period, in which case it clearly was occurring within the time period for some point in that period.

The problem abated after the time period ended, in which case, the time period is wholly contained within the period in which the problem is active, and therefore was occurring during the time period.

Things that abated within the time period, or after the time period ended.

For those items with an abatement within the time period, they are clearly within the time period.

For those items abated after the time period ended, one of three things must have occurred.

The problem onset was after the time period ended, in which case it is ruled out by rule #1 above.

The problem had an onset during the time period, in which case it clearly was occurring within the time period for some point in that period.

The problem onset was before the time period started, in which case, the time period is wholly contained within the period in which the problem is active, and therefore was occurring during the time period.

So your FHIR query is Condition?onset=le$end&abatement=ge$start

Done. Simple ... err yeah, I'm going to stand by that.

Keith

P.S. Yeah, so easy I had to come back and reverse the le/ge above. Duh.
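The two rules reduce to a single overlap test. Here's a sketch (the function names are mine, not FHIR's; a missing abatement is treated as "still active"):

```python
from datetime import date

def overlaps(onset, abatement, start, end):
    """A condition was active at some point in [start, end] iff
    onset <= end and abatement >= start (the two rules above).
    abatement=None means the condition has not abated."""
    if onset > end:                 # rule 1: onset after the period ends
        return False
    if abatement is not None and abatement < start:   # rule 2: abated first
        return False
    return True

def condition_query(start, end):
    """Build the equivalent FHIR search string from the same two rules."""
    return "Condition?onset=le%s&abatement=ge%s" % (
        end.isoformat(), start.isoformat())
```

Note how the boundary cases discussed above (onset before the period, abatement after it) all fall out of the same two comparisons; there is no need to enumerate the containment cases separately.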

Tuesday, August 29, 2017

One of the things that I really enjoy about my job is when I get to play with something particularly challenging, and as a result come away from the experience with a better understanding of how things work, or a better process model.

Oftentimes, code gets away from us as developers (the same is true of standards). If you've ever had one of those situations where, as an engineer, you found yourself in the position of having developed a piece of software from the middle out, you know what I mean.

Middle out solutions are where you have a particular problem, and basic principles are simply too basic to provide much help ... and details are sometimes rather nebulous. I just need to fix this one problem with ... fill in the blank. And so you find a way to fix that one problem. Except that later you find an odd ball exception that doesn't quite fit. And then there's another issue in the same space.

After a while you find you have this odd mess of code that just doesn't quite work because you came at things the wrong way. And then some thread comes unwoven and it stops working altogether ... at least for that thing you cared about right now. That thing somehow was important enough (unlike the rest of the work) to make you take a step back and try a different approach.

Somewhere along the line you took the lenses and flipped them around so that now you can see the forest instead of the trees, or vice versa. And now that strange jumble of code begins to make sense all over again, fitted together in a different way, to your new model of understanding.

Monday, August 28, 2017

Someone asked me for an OID today. I have an OID root (or seven), and needed to assign a new OID in one space to represent a particular namespace. The details aren't important.

I considered several choices. One of them was someoid.0 and the other was someoid.2 (since someoid.1 was already assigned). While, had I been assigning these OIDs in a meaningful order, it would have made sense to have this OID sort before the someoid.1 I was already using, I chose to assign it to someoid.2 instead, even though someoid.0 is perfectly legal.

Why? Because not everyone understands that an OID can contain a singular 0 digit in one of its positions. And choosing an OID that some might argue with is just going to create a headache for me later, where I'm going to have to explain the rules about OIDs to them. I can avoid that by simply choosing a different OID. Not only have I avoided a future support call, but I've also avoided a potential issue where someone else's incorrect interpretation of a standard could cause me or my customers problems somewhere down the line.

It would be nice if standards skipped the tricky bits, but we know they don't. So, when you have a choice, think about your end-user's experience, and keep it simple. Not every decision you make will let you do that, but for those that do, simply make it a point to think about it. You'll be glad you did.
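For the curious, a zero arc really is syntactically legal. A minimal dotted-decimal check, as a sketch (not a full ASN.1 validator; it only enforces numeric arcs, no leading zeros on multi-digit arcs, and a first arc of 0, 1 or 2):

```python
import re

def is_valid_oid(oid):
    """Rough dotted-decimal OID syntax check."""
    if not re.fullmatch(r"\d+(\.\d+)+", oid):
        return False
    arcs = oid.split(".")
    # Multi-digit arcs must not have leading zeros; a lone "0" is fine.
    if any(len(a) > 1 and a[0] == "0" for a in arcs):
        return False
    return int(arcs[0]) in (0, 1, 2)
```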

Friday, August 25, 2017

Don't get me wrong, Block Chain is cool technology, but it is probably NOT the next big disruptor in healthcare. It's certainly a hammer in search of a nail, but there are so many fasteners in healthcare that we are working with that simply aren't nails.

Fundamentally, Block Chain is a way to securely trace (validate) transactions. For digital currency, the notion of transaction is fairly simple, I exchange with you some quantity of stuff ... Bitcoin for example. The block chain becomes the evidence of transfer of the stuff. It's a public ledger of "exchanges". The value add of the block chain is that it becomes a way to verify transactions.
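To make "public ledger of exchanges" concrete, here is a toy hash chain in Python: the tamper-evidence core of a block chain, with none of the mining or consensus machinery.

```python
import hashlib
import json

def add_block(chain, transaction):
    """Append a block whose hash covers the transaction AND the
    previous block's hash, so rewriting history is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"tx": transaction, "prev": prev_hash},
                         sort_keys=True)
    chain.append({"tx": transaction, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"tx": block["tx"], "prev": prev_hash},
                             sort_keys=True)
        if (block["prev"] != prev_hash or
                block["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = block["hash"]
    return True
```

Altering any past transaction invalidates every hash after it, which is exactly the verification value the ledger provides, and, as argued below, exactly the property that sits awkwardly with private health data.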

1. The Unit of Exchange is Different

What's the transaction unit in healthcare? In my world, it is knowledge related, rather than monetarily related. The smallest units of knowledge are akin to data types, a medication (code), a condition (code), a lab result (code and value), a procedure (code), an order, an attachment, an address. Larger units are like FHIR resources, associating data together into meaningful assertions.

2. The Scale of the Problem is Different

Today, there are about 200,000 Bitcoin transactions a day. If we look at the unit of exchange I mentioned above, a typical CCDA document embodies something on the order of 100 knowledge units. Let's say there are 150,000 physicians in the US, and each one sees 20 patients a day. Multiply 150,000 x 20 x 100 = 300 million transactions per day. To put that number in perspective, Amazon sold about 36 million items on Cyber Monday in 2013.
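As a sanity check on the arithmetic (the physician, visit, and knowledge-unit counts are the rough guesses above, not measured figures):

```python
physicians = 150_000
visits_per_day = 20
knowledge_units_per_visit = 100   # rough content of a typical CCDA

tx_per_day = physicians * visits_per_day * knowledge_units_per_visit
bitcoin_tx_per_day = 200_000

print(tx_per_day)                       # 300,000,000 per day
print(tx_per_day // bitcoin_tx_per_day) # ~1500x Bitcoin's daily volume
```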

3. Transactions are Private

When the unit of exchange is an association of an individual (the patient) with a problem, medication or allergy, asserted by another individual (the provider), it's not the same as when the exchange is of a disclosed public quantity of stuff between two pseudonymous addresses. Public ledgers, even with some level of protection behind them, still contain a persistent record of all transactions. After an assertion is made, the effects are pretty permanent, including any damage done, all future assertions to the contrary notwithstanding. Ask any patient who's ever been falsely accused of drug-seeking behavior.

4. The Fundamental Problem is Different

The challenge in health IT is not "verification" of knowledge exchanges (transactions), but rather, "enabling" knowledge exchanges between two parties. With block chain, the question of where to go to "get the ledger" isn't an issue. In healthcare today, it is.

Block chain is cool tech, no doubt. Surely there is a use for it in healthcare. But also, it isn't the answer to every problem, nor specifically the answer to the "Interoperability" problem. Though right now, you can be assured that it is effectively a free square in your next Interoperability buzzword bingo session.

Thursday, August 24, 2017

Are we there yet? The short answer, as I quoted from a speaker earlier last week, is: "There is no done with this stuff". The longer answer comes below.

If you are as old as I am, you remember having to have a case full of WordPerfect printer drivers, Centronics and serial cables, and you might even have had a serial breakout box to help you work out problems setting up printers. Been there, done that.

What's happened since then? Well, first we standardized port configurations based on the "IBM PC Standard". Except that then we had to move to 9 pin serial cables. And then USB. And today, wireless. Drivers were first distributed on disk, then diskette, then CD. And now you can download them from the manufacturer, or your operating system will do that for you.

If you happen to have a printer that isn't supported, well, if it supports a standard like Postscript, we've got a default driver for that, and for PCL printers, and several dot matrix protocols. So, today you can buy a printer, turn it on, autoconfigure it, and it just works, right? Mac users had it a bit easier, but they still went from the old-style Mac universal cables to USB to ...

I upgraded my network infrastructure the other day, and come to find out my inkjet printer that had been working JUST fine on all the computers in the house, and iPhones and iPads, no longer worked on my various Apple devices. I tracked it down to a compatibility issue between new features of my WiFi router and my old printer. As a consumer, my expectations of interoperability were definitely NOT met.

Which brings us back to my main point. The expectation of users with regard to interoperability still isn't being met, even if the situation is improving. It took us twenty some years to get from where we were then to where we are now, and some configurations still aren't "Plug and Play" with respect to printing.

To figure out how to measure where we are with regard to interoperability, we first need to figure out what it is we want to measure. And then we need to figure out how to measure the distance to that goal. When "where we want to go" is an obscure location, figuring out how far we have to go is a huge challenge.

Let's assume we want "Plug and Play" interoperability. What does that actually mean? We probably want to start with a basic platform and set of capabilities. You have to define that, first functionally, and then in detail so that it can be implemented. Then we have to talk about how things are going to connect to each other. Connecting things (even wirelessly) is hard to do right. Just ask anyone who's ever failed to connect their Bluetooth headset to their cell phone. Do you have any clue how much firmware (software embedded in hardware) and software is necessary to do that right? We've actually gotten that down to a commodity item at this stage.

If we look at the evolution of interoperability in hardware spaces such as the above, we can see a progression up the chain of interoperability.

1. Making a connection between components.
This is a progression from wires and switches to programmable interfaces to systems that can automate configuration of a collection of components.
2. Securing a connection over the same.
This is a progression from internal physical security, to technical implementations of electronic security, to better technical implementations, with progressions advancing as technology makes security both easier and harder depending on who owns it.
3. Authenticating/authorizing interconnected components.
We start from just establishing identities, to doing so securely, and from complex manual configurations, to more user friendly configurations, and finally to policy based acceptance. At some point, some human still has to make a decision, but that's getting easier and easier to accomplish.
4. Integrating via common APIs or protocols.
Granularities start out at a gross level (e.g., a CDA document) and get more refined as time goes by and communication speed and response times get better, driving from data (a set of bits) to functional (a function to produce a set of bits to understand) and back to data again (finer-grained data) and algorithms (functional instructions, again, on how to produce data). This is a never-ending cycle.
5. Adapting to capabilities of connected components.
This starts at the level of try and see if it works and respond gracefully to errors, to declaration of optional feature sets, to negotiations between connected components about how they will work together.
6. Discovering things that one can connect to.
We first start by making a list for a component, then by pointing components to lists of things, then by pointing components to places where they can find pointers to lists, and finally, by broadcast protocols where basically, all you need to know is how to look around your environment. Generally, there will always need to be a first place to look though (it might be a radio bandwidth, a multicast address, or a search engine location)
7. Intelligently interconnecting to the environment one is in.
The final destination. We don't know what this really looks like for the most part.

Where we want to go is that final stage, and arguably, that's what we have finally begun to reach with the end user experience of installing a printer (with some bobbles). There are still some hardware limitations on Bluetooth devices because those are mostly small things, but even that has reached stage 6. For healthcare, we are somewhere around stage 4 with FHIR. CDS Hooks is arguably stage 5. Directories and networks like Carequality or Commonwell or the Surescripts RLS will be progress towards stage 6.

The progression up this stack takes time, and the more complex the system, the longer it takes. Consider that printers, headsets and even cell phones and laptops aren't enterprise-class computing systems. The IT industry in general is making progress, but we aren't at a stage yet where enterprise-level ERP, CRM and FMS systems are much further along than level 5 or 6, even in multi-million dollar industries. The enterprise-level EHR, RCM and EDI systems used in similar-sized businesses are moving a bit slower (a classic issue in HCIT).

So, back to measurement. "Are we there yet?" has a context. If your goal is to get to stage 7, be prepared to wait a while and continue to be frustrated. In 2010, my family drove nearly 5000 miles to get sushi. There were plenty of stops along the way, and getting to each was exciting. If you want to have fun along the journey, identify the way points, and make the point that this IS your NEXT destination. Otherwise, sushi is still a very long way off.

IHE Quality, Research and Public Health Technical Framework Supplements Published for Trial Implementation.


The IHE Quality, Research and Public Health (QRPH) Technical Committee has published the following supplements for Trial Implementation as of August 18, 2017:

New Supplements

Family Planning Version 2 (FPv2) - Rev. 1.1

Mobile Retrieve Form for Data Capture (mRFD) - Rev. 1.1

Updated Supplements

Aggregate Data Exchange (ADX) Rev. 2.1

Birth and Fetal Death Reporting-Enhanced (BFDR-E) - Rev. 2.1

Retrieve Process for Execution (RPE) - Rev. 4.1

Vital Records Death Reporting (VRDR) - Rev. 3.1

The profiles contained within the above documents may be available for testing at subsequent IHE Connectathons. The documents are available for download at http://ihe.net/Technical_Frameworks. Comments on these and all QRPH documents are welcome at any time and can be submitted at QRPH Public Comments.

Tuesday, August 22, 2017

So originally I thought this problem was an interaction between QoS (quality of service) capabilities of my new DLINK 890 Router and my old HP 7520 three-in-one printer because when I turned off QoS (resetting my router), the problem got resolved. Actually, all that did was cut off connections to the printer and force the printer to retry getting a network connection, which it did. And then it did the thing where it worked for a little while and so I thought the problem was truly solved.

But, in fact it wasn't, as proven to me by my children printing out homework (or failing to do so). I finally traced it back to a new router capability whereby the router could do some fancy dancing to get twice the bandwidth on the 2.4 GHz channel with smarter hardware. My printer is old and completely functional, but not so smart. Fortunately, I could turn off this feature just for the 2.4 GHz band, which the printer needs but nothing else in the house really uses, and now everything seems to be working again.

How does this relate to healthcare standards? It goes back to Asynchronous bilateral cutover -- aka backwards compatibility mode. My new router has a mode in which it works compatibly with old stuff, and a mode in which it simply leaves that stuff behind. The default setting is to remain compatible, but of course I knew better and messed things up for a while.

After reading through various and sundry message forums for both my router and my printer, I found nothing that would help me identify or cure this problem. Pure slogging and some knowledge of protocols and interfaces were all that really helped. In the end, I turned off 20/40 MHz coexistence, set the channel width to 20 MHz (the original standard), and now my printer connects and seems to work just fine on the new router.

What does that mean in the realm of implementing healthcare IT standards?

Backwards compatibility is good. Testing is better. The 20/40 MHz coexistence feature is supposed to detect when 20 MHz equipment is in use and configure the router to talk to it, but it doesn't quite work with the hardware I'm using.

Negotiating interface levels is good, but if you didn't design an interface to negotiate in the first place, you are likely to have problems. Consider HTTP 1.0 vs. 1.1, TLS 1.0 and later releases, et cetera. New protocols should be able to downgrade.
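As an illustration of that principle, here's a minimal Python sketch using the standard-library ssl module (the helper name is mine, not from any spec): the client negotiates only within an explicit, tested version range instead of silently falling back to something older.

```python
import ssl

# Hypothetical helper: pin the TLS versions a client will negotiate, rather
# than relying on silent downgrade to a version that was never tested.
def make_client_context(min_version=ssl.TLSVersion.TLSv1_2,
                        max_version=ssl.TLSVersion.TLSv1_3):
    """Build an SSL context that negotiates only within a known version range."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = min_version
    ctx.maximum_version = max_version
    return ctx

ctx = make_client_context()
# A handshake with this context fails fast against a peer that only speaks
# TLS 1.0, instead of quietly downgrading -- deterministic, testable behavior.
```

The same design choice applies to any interface: if version negotiation (and a deliberate floor for it) isn't designed in from the start, you end up debugging mystery failures like my printer's.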

Make it possible for systems to have deterministic behavior controlled by a human. That way, when all else fails, an SME can tell the system exactly what to do. This is basically what I had to do for my printer, and for what I'm doing, is a completely satisfactory solution.

Friday, August 18, 2017

Lisa Nelson [a self-described CDA SME, Wife, Mother and designated daughter of two octogenarians] gave quite a fanciful skit at the ONC meeting last week. In it, she pretended to be interrupted by her cell phone, and had conversations with her youngest child [who needed a physical sent for camp], her husband [who was sick while travelling], and her eldest child [one of the grandparents had fallen and was in the hospital]. Throughout, her response was the same: get their Direct address and I'll send ... and then some follow-up on what to do next [which was mostly not to worry, because she had things under control]. That skit, she says, is her dream.

I believe in dreams. An audience member sitting next to me said in an aside: "I'm not sure many providers know or could even find out what their Direct Address was."

I decided to test this out, because I don't actually know the answer to that question for my provider. Nor does he, and NOT for lack of trying. My secure message to him was quite simple (I've somewhat redacted his responses to preserve his privacy ... my data I feel free to share, but not his):

Do you have a direct address that could be used by other providers to send you data (e.g., a CCDA)? What would I tell them when they ask for it?

Very Same Day, 4.5 hours later.

Hello Keith,

I have never been asked these technical questions in past and I am not sure. However, I have sent a message to our [Vendor] team to let me know how this works and I will let you know accordingly . I have not heard back from them as yet. I certainly know there are non [my hcp organization] providers who already have a link and they transfer the records directly to us electronically. [other hcp organization] hospital is one of them.

Best.

/S/

Following Day

Hello Keith,

I have left message with our [Vendor] team again and I have still not heard form our [Vendor] team for an answer. I am away from tomorrow for the next week. Certainly there is a electronic link for transferring records because, what I have seen is that [various] Hospitals send me some of the hospital records directly. I believe they have a link, not sure if it is CCDA. I think the [Vendor] team can establish a link if there is not already one in place with the provider you are mentioning.

Thanks for your patience.

Best.

/S/

I responded thanking him for his diligence and let him know my request wasn't urgent. I then sent an e-mail off to the Medical Director for Informatics to track it down from the other end.

My point here, though, is NOT to fault my healthcare provider. When we designed Direct Messaging, we included in the addressing scheme the notion that patients would be empowered by it. Anyone could have a Direct Address, and it would be a secure way for all stakeholders in the HealthIT ecosystem to exchange information.

But it's not, and there are several different explanations I've considered for why that might be:

Unintermediated electronic communication between patients and their physicians is avoided by policy due to HIPAA and ...

Provider to provider communication is OK (I can get just about any provider to fax records with a phone call to another provider ... but fax them to my HOME number? God forbid, and HIPAA forfend.)

Now you and I know that the HIPAA boogeyman here SHOULDN'T exist. But it does. And because of it and years of prior policy, there's a challenge.

Other challenges include patient matching and trust. When a provider gets a Direct message from another provider about a patient, they implicitly trust the source, and are willing to match the patient with the data in the message. But when patients start communicating via unintermediated electronic means, well, the information goes through a different set of filters. The first step, then, is to be sure that one understands WHO the source of the communication is. Did it come from, and was it intended to be about, "my patient"?

So, handing out providers' Direct addresses to patients seems, culturally, to be a bad idea, because you cannot actually know how they'll use them.

The answer here is to flip the addressing scheme, I think. My Dr's Direct address for me should be myuserid+routing@hcpdomain. When I give that address out and someone uses it, the message, when received, can be securely accessed by hcpdomain, which can route it internally as appropriate based on what I used for the routing tag. So, if my userid were mg, and I set up routing for my PCP to my doctor, his address would be mg+pcp@hcpdomain.

We don't have to change the Direct Specification to support this; it's already baked into the specification. Patient matching is built in, because mg@hcpdomain is my identity as known to my healthcare provider.

It's not his "direct address", but rather "my direct address" for him.
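A minimal sketch of how a receiving organization might parse and route such a flipped address. The function, route table, and domain are purely illustrative (none of this is in the Direct specification; plus-sign subaddressing is simply a common e-mail convention that Direct's SMTP underpinnings already tolerate):

```python
# Illustrative only: parse a "flipped" Direct address of the form
# user+route@domain, where the receiving org routes on the tag.
def parse_direct_address(address):
    """Split user+route@domain into (user, route, domain); route may be absent."""
    local, _, domain = address.partition("@")
    user, _, route = local.partition("+")
    return user, route or None, domain

# Hypothetical internal routing table at the receiving organization:
ROUTES = {"pcp": "Primary Care Physician inbox",
          "records": "Medical records department"}

user, route, domain = parse_direct_address("mg+pcp@hcpdomain.example")
destination = ROUTES.get(route, "default patient inbox")
```

Because the local part (mg) is the patient's identity as known to hcpdomain, the receiving system gets patient matching for free; the tag only decides where inside the organization the message lands.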

Let's make it easy for patients and doctors to figure this stuff out. The Direct Project was supposed to be the on-ramp to the health exchange super-highway. What good is an on-ramp if patients cannot find it?

-- Keith

P.S. I had another post planned for the day, but the communication from my provider led me to rethinking Direct addressing, and I thought it relevant to the topics already discussed this week.

P.P.S. and an update for the win: I asked someone who would know at my HCP's organization when I wrote the post, and was given the answer within 12 hours. Unencrypted e-mail is SO effective (and completely legitimate for me and my HCP to use, as I have given permission for that form of communication).

Thursday, August 17, 2017

The reality of the world is that we don't all develop out of an office in Lake Wobegon. Not all developers are above average. Healthcare is challenging; we already know that. By passing that challenge on to developers, we are simply transferring risk to the developer. Making the hard stuff easy mitigates the risk, and makes it less likely for developers to do it wrong.

I talked about this earlier this week at ONC's Beyond Boundaries meeting, in the context of "Fit for Purpose" Standards. The human mind has a remarkable capacity to find and take the easiest path. We need to design our standards with that in mind, and use those human factors IN the standard itself.

Good standards tell developers what to do to do the right thing.

Better standards tell developers how to tell if they are doing the right thing.
Great standards make it easy for developers to do the right thing.

Good standards are testable.
Better standards have tests readily available.
Great standards are computably testable*.

I often hear the complaint (about CDA documents) that: I just want to see ...

At the ONC meeting on Tuesday, a provider remarked that in order to understand what was happening for a patient, a provider had read through 1,800 pages of CCDA documents. This was accompanied by the statement that they understood the standard.

I do (and did) protest. If providers are reading that many pages, then the only thing understood about the standard is the word "Document", and understanding of the application of standards to interoperability in general is also lacking. Just as in medicine where there is no singular magic pill to make a patient healthy, there isn't just one standard to apply to the various problems associated with interoperability.

CDA documents are snapshots in time of the data associated with a patient care event, containing the data elements found in the dozen and a half elements defined by the Common Clinical Data Set. The CCD is supposed to contain the relevant and pertinent data, but we know that what is relevant and pertinent to one provider isn't necessarily so to another. Even so, it's how the data is presented to the end user (the provider) that is the problem, not the standard that gets the data from one provider to another.

Consider multiple ways to address this issue that have all been worked in other standards efforts:

Consolidate data from multiple documents into a reasonable longitudinal view that reconciles information from across multiple sources of data. There are OTHER standards that explain how to do this (e.g., the IHE Reconciliation Profile). CCDA is about moving the data, and just like the web, you have to apply other standards to solve other problems.

Use an XSL Stylesheet to make the data easier to read and arrange according to provider preferences. HL7 and ONC ran a CDA Rendering challenge that produced a number of freely available open source solutions. CDA is about communication of data. It is up to applications to make it usable. CDA isn't a standard for display, or a standard for application function. It's a standard for communication.

Allow providers to incorporate the data as it becomes available. If you implement workflows that support a 360 Closed Loop referral/consultation process, and enable incorporation of the data into the EHR when it becomes available, you avoid trying to manage and consume multiple documents in "one swell FUPE*".
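The second approach above is normally done with an XSLT stylesheet against the CDA XML. As a stand-in, here's an illustrative Python sketch (the sample XML and the preferred ordering are invented) showing the underlying principle: presentation is an application-layer concern, cleanly separable from the wire format.

```python
import xml.etree.ElementTree as ET

# Toy stand-in for a clinical document; real CDA rendering would use an XSLT
# stylesheet (see the HL7/ONC CDA Rendering challenge solutions).
SAMPLE = """<document>
  <section><title>Allergies</title><text>No known allergies</text></section>
  <section><title>Medications</title><text>Lisinopril 10 mg daily</text></section>
</document>"""

def render_sections(xml_text, preferred_order=("Medications", "Allergies")):
    """Re-order and render sections per provider preference, without touching the data."""
    root = ET.fromstring(xml_text)
    sections = {s.findtext("title"): s.findtext("text") for s in root.iter("section")}
    return "\n".join(f"<h2>{t}</h2><p>{sections[t]}</p>"
                     for t in preferred_order if t in sections)

html = render_sections(SAMPLE)
```

The point of the sketch: the document on the wire never changes; two providers with different preferences just apply different transforms to it.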

FHIR isn't going to magically change the challenge of viewing "all the data", but it is going to change the approach used by folks (and that will be the subject of a future post).

-- Keith

* That's not a misspelling, but rather an acronym standing for Fowled-Up Process Execution

Tuesday, August 15, 2017

Information Blocking: The glass is either half empty or half full with regard to interop progress, depending upon where you stand to benefit. The best data is two-plus years old, and it's hard to know whether it is even relevant. Progress is happening, and the best presentations showed the upward trends. We need more and better data, with real-time measures. John Everson had the best presentation and backup for his assertions about progress.

Vocabulary (Semantics): If it isn't critical to the care provided, it is not as important. Most of our discussion was around SNOMED CT, LOINC and RxNorm (aka SOLOR). ICD? CPT? Not necessarily relevant to clinical care. VSAC is good.

Interoperability Networks and Infrastructure: I find it telling that among the six of the eight "nationwide networks" participating in the discussion, half are making progress on connecting to each other (Carequality, Commonwell and Surescripts) and working towards the one-network/multiple-carriers model, while the others are not quite there yet.

APIs: Perhaps the most boring and exciting panel yet, in that everyone agreed that SMART on FHIR is the way to go. Beyond that, some would focus attention differently. Essentially, the battle about the standard is over; the contest seems to be about who might pick up the implementation guide honorable mention, and the front runner (the Argonaut Project) wasn't even on the panel ... (though Micky was the moderator of the next one). FWIW, SMART is out for ballot on the standards track in HL7 this cycle.

Third Party Uses: Standards we know (IHE, HL7 V2, V3, FHIR) = Cool, FHIR = Very Cool. We R doing kewl stuff. Noted complete absence of X12N from the discussion, even in payer channels where that might be natural to consider.

Some final notes: Tomorrow I am on the starting panel, Fit for Purpose, and then we will hear from dietary/nutritional, LTC, and behavioral health providers in a session titled "Across the Continuum", with wrap-up from a panel moderated by John Halamka and including Clem McDonald, Walter Sujansky and Aneesh Chopra. I didn't see either John or Aneesh today, but both Clem and Walter made their presence known.

Think about the fact that the application on your phone and the same application on your wife's phone, even though they are talking to the same endpoint, don't necessarily know about each other, and so if using Dynamic Registration probably get assigned separate client_id and client_secret values.

==== Upto here ===

From the above excerpt, it seems to indicate that each instance of App would get a separate Client ID/Secret. Is that really true? Our understanding is that the client ID/Secret is generated at the “App” level, same as any manual or self-service registration process would do; and once done, the same ID (and secret if Confidential App) will be used by each instance of the App. So, in a sense, dynamic registration is no different than manual registration other than the obvious fact that this is done via API.

Here’s a link to the SMART team’s web page that describes both the manual and dynamic approach.
There's nothing in the above that allows me to trust the application registration content in any way.

Anyone can use this same data to register an application. The application itself runs on the patient's device. If giving the same data gives back the same client_id and client_secret, then these items really aren't secret. Anyone can get the redirect_uri, the initiate_login_uri, the logo_uri and the client_name. How does one get these? Build a FHIR endpoint and ask the application to register itself against your endpoint. It will gleefully tell you what it uses to register itself.

Knowing what an application uses to register itself, one can easily determine what client_id and client_secret is assigned to that application.

This can be done by building an application that follows the dynamic registration protocol using the values extracted via the exploit I explain in #1 above.
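To make the concern concrete, here's a sketch of an OAuth 2.0 Dynamic Client Registration (RFC 7591) request body. The app name and URLs are hypothetical; the point is that every field is public metadata that anyone who has observed the app can replay.

```python
import json

# Sketch of an RFC 7591 registration request body. Nothing here
# cryptographically binds the registration to a responsible party.
registration_request = {
    "client_name": "Example Patient App",                # hypothetical app
    "redirect_uris": ["https://app.example.com/cb"],     # hypothetical URLs
    "initiate_login_uri": "https://app.example.com/login",
    "logo_uri": "https://app.example.com/logo.png",
    "token_endpoint_auth_method": "client_secret_basic",
}
body = json.dumps(registration_request)

# Anyone who has captured these values (e.g., by standing up a FHIR endpoint
# and letting the app register against it) can POST the same body and obtain
# credentials the server will treat as belonging to "the same" application.
```

RFC 7591 does define optional software statements (signed JWTs asserting the metadata) to address exactly this gap, but that's where the PKI and trust-bundle infrastructure discussed below comes in.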

There's nothing linking this registration back to a responsible party EXCEPT for an https: URL or three.

I have 30 or 40 HTTPS URLs I control, and all I needed was an email address to create one. I think that providers need to take some precautions to safeguard healthcare data, and that means there needs to be some way to trace it back to a responsible entity. That's a little bit more than what is required to create an HTTPS URL. Also, not all redirect or login URIs would be on the Interwebs.

Use of token_endpoint_auth_method assumes public apps are supported (applications that cannot manage a client_secret), and that client_secret won't be used.

I'd prefer to put a "somebody else's problem field" around application identity management if I could. Otherwise, I have to deal with a lot more infrastructure than I want, but some identity providers barf at this today (you know who you are). Either I have to use an identity provider that supports public apps, or I can restrict support to private apps. The latter seems preferable given the infrastructure needed to manage application identities.

In the example given, there's no reason why this application needs to be public, given that its redirect and login URIs are "on the web".

So, client_id and client_secret, if created and maintained on a per-registration basis, need something extra to ensure that you can safely consolidate multiple registrations into a single client_id and client_secret for any application registering with the same information. Otherwise, to have these values actually be an ID and a secret that MEAN anything, you have to do more work. What good is a secret if everyone can know it?

That's where I get into the "too much infrastructure" problem, because now we start introducing stuff like PKI to be able to verify publishers of software statements, or external trust bundles (a la DirectTrust or NATE or ...) and ...

Dynamic registration might be the way forward, but it doesn't appear to be ready yet for prime time.

Friday, August 11, 2017

Some weeks I'm just slow. I probably spent a week trying to figure out how to deal with Client registration for SMART on FHIR applications. One of the things I discovered in the process after I did the math:

Think about the number of different applications that patients could use. Oh, maybe 100 if we are lucky. Most of us will use one or two. I know from previous experience that the average number will be around 1.1 (at least at first).

Think about the number of patients a provider sees in a year. Say they have a panel of 2,000-3,000 patients; they probably see somewhere between 1/4 and 1/2 of these annually. Call it an even 1,000.

Think about the fact that the application on your phone and the same application on your wife's phone, even though they are talking to the same endpoint, don't necessarily know about each other, and so if using Dynamic Registration probably get assigned separate client_id and client_secret values.

Now, look at a 100-provider practice, and understand that some of the providers in that practice see the same patients that other providers do. So 100 * 1000 * 50% (for overlap) = 50,000 patients seen in that practice (NOTE: these are rough numbers; what is important is the magnitude).

Now, just think about the fact that if 50% of those patients each use an application, and each one requires its own client_id and client_secret, you now have 25,000 client_id and client_secret values for what, maybe 100 applications. And if any of those applications are busted (meaning they register more often than they need to because someone missed something in the spec), you could have some applications registering daily or hourly or whatever the expiration time is on the authorization token, and then that number could really balloon.

Is this realistic? Is it valuable? Is it worth having to manage? Would it not be better to have 100 client secrets and client ids which you could then manage if necessary?
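The back-of-envelope arithmetic above can be written out explicitly; these are the post's rough magnitudes, not real measurements:

```python
# Rough magnitudes from the post -- an estimate, not measured data.
providers = 100
patients_seen_per_provider = 1000
overlap = 0.5     # fraction of patients shared between providers in the practice
app_usage = 0.5   # fraction of patients using some application
apps = 100        # distinct applications actually in circulation

patients = int(providers * patients_seen_per_provider * overlap)
per_instance_credentials = int(patients * app_usage)

# 25,000 per-instance (client_id, client_secret) pairs to manage with dynamic
# registration, versus ~100 per-app credentials with conventional registration.
```

And that 25,000 figure is the floor: a busted app that re-registers on every token expiry multiplies it further.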

I know ONC is promoting use of Dynamic Client Registration, but I look at the cost of doing that, and I am quite certain that there really is a better way. It's interesting that Twitter, Facebook, Google and many other API publishers outside of healthcare haven't been using OAuth 2.0 Dynamic Application Registration.

Thursday, August 10, 2017

One of the challenges facing ONC for more than the past decade has been measuring interoperability (yes, ONC has been around for that long). One of the responsibilities of the National Coordinator in the original order creating the office back in 2004 and continuing on to this day is to "include measurable outcome goals".

These are some things that one might consider measuring with regard to Interoperability:

Reductions in costs of care attributable to presence or absence of Interoperability

Application support for Interoperability Standards and APIs

Transmissions of Interoperable Information

Speed of Uptake of Interoperable Solutions

However, not all of these are "Outcomes". They represent three different kinds of measures, and then something special. Cost reduction is clearly an outcome. Application support is a capability measurement. Transmissions of information is a process measurement. Speed of uptake I'll talk more about at the end of the post.

There really is no definitive or easy way to put a $ figure on the ROI of interoperability in healthcare without spending a great deal of time. It takes a well-designed study which can show before and after effects of an interoperability-based intervention. And it isn't clear how much of the $ saving could be attributed to technology vs. other process changes that use that technology. Yet, among the above, that's in fact the only outcome measure listed.

Application support for standards and APIs is a capability measure. We can definitively show that applications have more interoperability capabilities than they ever have before. But we don't have good evidence linking that capability to outcomes.

Transmissions of interoperable information is something where we are actually measuring a process. For example, in ePrescribing, we are measuring the number of prescriptions sent electronically. We also have some good studies linking that process to improved outcomes, but it's not a direct measure of outcomes.

Finally, the speed of uptake. This is an interesting measure. It shows something about capability, in that it demonstrates availability of a particular capability. But it also demonstrates an outcome, one related to ease of use. If we look at the ease of uptake and flexibility of HL7 Version 2, HL7 CDA, HL7 Version 3, IHE XD*, and HL7 FHIR, we can get a pretty good understanding of the complexity of the various standards.

As we embark next week on discussions about fitness for purpose, this is what I would consider to be an "outcome measure" for success in standards selection. Ease of uptake directly relates to products with interoperable capabilities, interoperable processes that can be delivered upon, and real cost reductions in implementation of Health IT solutions.

As we look at "Fitness for Purpose", I think we need to consider "adoptability" as one of the key metrics to consider. That means that the standard needs to be readily available, easily understood and implemented. It's a tall order for a standard to meet, and hard to tell how to get there, but I can say this: You'll know it when you see it, and it doesn't happen by accident or intention alone, but more like a bit of both.