Pages

Friday, January 30, 2015

Thursday is when the rubber meets the road. By Thursday night, you've passed all the easy stuff, but if you are like the people sitting around me, you are trying to make sure that you can get that one last test started. Because if you don't start it by Thursday to get it into the queue, you'll be dropping that profile (it's a Connectathon policy so that monitors aren't swamped with profile tests that won't be worth checking on Friday).

A lot of new code gets written on Thursday as the key priorities have all been hit, and people are reaching for their stretch goals.

One of the things that is new at Connectathon as of 2015 is the number of people who have finished the hard stuff already. Many vendors have already finished their XDS, XCA and ATNA tests. CDA viewing is pretty much done. One profile (Family Planning) had all vendors finished with it by Wednesday, including a couple of late adds from the Connectathon floor.

There are two factors that I think contribute to this. One is that the testing infrastructure is improving year over year, which makes a lot of the testing go quicker. Gazelle has improved to the point that we experienced only a few capacity problems over the week. There is still room for improvement here, and also for automating some of the verification processes, but compared to where we were ten years ago, this is a monumental change. The network is no longer spelled with two o's, and even though we had a bobble with a switch at our table, it was readily resolved.

The second change is the vendor community interest in interoperability. I see a lot of newbies here this year, which would have led me to expect more people struggling. But there's also a lot of people here who know the ropes who have helped them along. And the volume of information that is available about how to implement the profiles, and the public domain implementations (many of which have been tested here over several years) also make it a lot easier.

Some folks still tell me that Connectathon won't scale, but in fact it has. The model has also been adopted by a number of regional projects for preparing for real world implementations (we call these projectathons). It's like the old concern about the cable network being used for Internet access. Detractors said the cable infrastructure wouldn't scale either, but it certainly appears to have done the job. Connectathons will continue to be an important interoperability event, simply because there's nothing else like it where you can get so much done in so little time.

Thursday, January 29, 2015

By Wednesday you should have talked to a Connectathon monitor at least a few times. Let me tell you about these folks, many of whom are close friends.

Almost all of the people who act as Connectathon monitors are volunteers. Most of them aren't getting paid this week to be here. I know a guy who takes his vacation time to do this every year because he has fun with it. A lot of my consultant friends show up here instead of billing hours this week. There are only a few whose day job qualifies them to be monitors who also get paid by their employers to show up (that has to do with IHE rules about vendor neutrality). And all of them I know would do it for the love of the work anyway, and are still giving up time from friends and family to be here.

The monitors are here to help you pass. If you succeed, they succeed. If they fail one of the tests you submitted, it is almost certainly because you or one of your testing partners missed something in the specifications. If you start with that assumption, you will be right most of the time. I'd bet that 95% of the problems discovered on the Connectathon floor could be readily fixed if everyone implemented one process: RTFM. Read everything that you can, the specifications, the test plan, and the daily announcements. Check the FAQ pages on the IHE Wiki.

And if you have read what you can, and the spec or the test still isn't clear, start by asking others questions. Talk to others who may know more than you do. There are plenty of people on the test floor who've been around for a few years. If you and your test partners cannot agree on how the spec reads, read it again, and ask around. If you still can't resolve it, it's time to ask a monitor for their interpretation.

Be prepared to wait; they may have a queue of people in front of you, but they will follow up, and they will offer good advice. I find it is better to ask questions rather than complaining. It gets you a lot further. For example, ask where you are in the validation queue, rather than complaining that your tests aren't being validated. Ask who could help, rather than assuming they have the time to walk you through it (because they don't). Ask who might be good to test with; they usually have good advice here (they know who does good work).

When there is a question about interpretation, start with explaining that you are confused about what something in the spec means, and be prepared to show them the line in question. Show that you've done your homework; they will appreciate it. You might also ask them for advice about who to talk to for information on the profile that you are having challenges with if you cannot find someone yourself. Many of the profile editors are on the Connectathon floor including some monitors.

Connectathon is the ultimate stress environment; we are often called upon here to do in hours what we are usually given days or weeks to accomplish. It's the perfect environment to practice how you'd work with a customer in a stressful situation. A guaranteed way to fail at Connectathon is to be disagreeable to the monitors. It's not that they'll fail you just because you are being a pain (because they won't); it's that by being agreeable, you might be able to get that extra minute of their time or guidance that will help you pass.

If this is your first Connectathon, I guarantee, by the time you are done, you will have made friends with several of the monitors. When you make your plans to come back next year, remember your friends, they'll be here again too.

Tuesday, January 27, 2015

Tuesday's activities at Connectathon are where real interoperability starts to happen. Monday is about figuring out your network connections, getting set up, and starting to execute on your no-peer tests. On Tuesday you really start working with peer systems.

A couple of things could happen today which would impact your work.

You might discover that a peer you have been working with abandoned a test you had started with them to work on another problem. This is a persistent problem at Connectathon, because much of the work is interrupt driven, and your stack (or their stack) can often overflow. It's a good idea to keep a list of things you haven't finished, and check up on the peers you are working with to ensure that you and they are still on track to finish the test. Never expect someone else to complete a test without checking on it. It's a mutual responsibility.

The only way to fail (or succeed) fast, is to get the work checked out fast. Don't forget to mark any tests you've completed as being ready for validation. I can't tell you how many times I've heard someone groan because they discovered hours or even days after finishing a test that it didn't get validated because nobody marked it as being ready for that step. The validation queues get longer as the week goes on, so finish that step as soon as you can. Keep working after the monitors go home if you need to, but get that in the queue today if at all possible.

Next, you may very well discover a blocking problem. This usually happens when you fail your first or second verification attempt on a profile. You need to quickly analyze those failed verifications. I've seen teams which learned on Thursday or Friday that a test they had completed failed validation, and so there was a mad scramble to fix whatever the problem was in time to get the missing test validated. So remember to keep track of the tests you've done and make sure they are getting validated. If you wait until Thursday, it could be too late.

When fixing a blocking problem, you will often stop work on tests that won't be valid. Abort them so you don't have to wade through the clutter to find the results you really need. And tell your test partners why you are doing that. And for those tests you simply put on hold, make sure to go back and complete them.

You may discover today that something needs to be rebuilt (and pray that it is today rather than tomorrow). If you don't have the code and compiler locally, make sure that your teams back home have a way to get you an update quickly. Don't count on the Connectathon network to enable you to access a 300 MB download quickly and easily. Consider what is happening on this network: hundreds of engineers testing messages. Sometimes a screen refresh to an external website can take several minutes, even for a 140 byte tweet. FedEx might just be faster...

Late this afternoon, you will probably discover your first ATNA bug in your new HTTP or other stack. Check the ATNA FAQ for some of the best advice gathered by myself and others over the past decade. You should already have Wireshark installed.

Yesterday at the IHE Connectathon I got pulled in to help a developer who had been brought in at the last minute to replace someone else who couldn't make it. He was unfamiliar with the tools and the process, so I showed him how to make a plan to succeed.

The first thing you do for your product is prioritize your work. Since profile actors embody the chunks of functionality being tested, they become what you prioritize. Some profile actors are critical because they are going into shipping product, or could be of use at several customer sites, or will be demonstrated this year at the HIMSS Interoperability showcase. These have to get tested.

Some profile actors are nice to have. These might be used in future releases, or could be important to a single customer, or you are thinking about demonstrating this profile actor at HIMSS, but haven't committed to that yet. The nice to have stuff should get tested, but not if it prevents the critical things from getting done.

Finally, there are your stretch goals, or what I call gravy testing. These are the profile actors that got added to your plate last week by your product manager to support some hare-brained scheme you just heard about, or that you took on because some attractive showcase leader walked up to you and asked if you support ___, you said anything less convincing and firm than no, and as a result you found yourself signed up to test a profile you heard about five minutes ago. The important thing about these is to remember NOT to let somebody else's priorities become yours simply because you haven't assigned any of your own.

Once you've assigned priorities for actors, now you can prioritize tests. If you are doing anything requiring Consistent Time (CT), or Audit Trail and Node Authentication (ATNA), do your CT test first. Then you should look for tests you could have prepared for before you got here (e.g., generating a CDA document to upload for verification). The monitors expect these first, and it's always good to be nice to the monitors. Ideally, this is on a stick, and you just upload your samples (you can sometimes do this before you even get the system being tested unboxed).

Do your tests basically in this order: No Peer Tests first, Peer to Peer tests second, workflow tests (supporting multiple peers) last. Don't worry if you take some out of order.

Things not to do: Do not do options testing on a profile actor before nice to have tests are done unless the option is truly critical. Options fit into the gravy. The only exception to this is when multiple options are provided in a profile, and at least one must be supported. Fine, pick your one and call that critical (if the profile actor is critical) or nice to have.

Also in the gravy category are tests with your own products. These might also fit into the nice to have category for you, but the reality is, you shouldn't need IHE to set up a Connectathon for you to test with your own products. Do it if you can, but only after you are safely past the critical and nice to have stuff.

After you've made your plan, you have one more job to do. Go through the tests which are critical and find out who your potential partners can be. Get their table numbers. Now, go walk the floor (after you've completed your CT test and uploaded your samples), and introduce yourself to them. Let them know you'll be working together this week.

Ok, so that's Monday (a day late). I'll try to get Tuesday done just a bit earlier. What can I say, I had a critical issue to address yesterday, my blog is just gravy.

Thursday, January 22, 2015

This afternoon the first official meeting of the Healthcare Standards Integration workgroup was convened at the HL7 Working Group meeting. A roomful of nearly 30 people met to review feedback on FHIR resources that had already been jointly developed by IHE and HL7.

This is a very significant event in the collaboration between HL7 and IHE. I've been (for the past five years or so) the HL7-appointed liaison to IHE, and had struggled to get the two organizations to coordinate their activities better. Others in HL7 and IHE had expressed similar frustrations. About six months ago this came to a head, and IHE offered HL7 an opportunity to coordinate, and proposed some ways in which we could work together more closely.

The new work group is presently addressing some of its growing backlog of FHIR ballot comments, and will shortly begin working on developing their mission and charter.

I very much look forward to this new mechanism for collaboration between IHE and HL7, as it gives us a way to connect formally, rather than through the various informal mechanisms that we have tried in the past.

Already we have reduced the backlog significantly for the FHIR issues.

I'm sure you'll see more details on the official press releases coming from IHE and HL7, but those won't be coming out for a few days at the very least.

Wednesday, January 21, 2015

Cecil Lynch posed this challenge to me in the bar, and of course it being late, and my birthday (which meant I wasn't paying for beers today), I couldn't find the answer immediately. But I posed the following question to the FHIR Implementers list on Skype:

"I seem to recall a discussion here earlier where someone described how you could issue a query to request conditions that matched a value set expansion. Anyone remember what that looked like?"

Lloyd McKenzie had the answer not three minutes later (just after Cecil left the bar).

I had promised Cecil a blog post on the answer to his question here, so here it is.

His question: How can I find a FHIR Observation resource where Patient = John Smith, and Observation.code = Culture, and Observation.value is in a Value Set describing all codes for Salmonella.

If you read the recommended link and find the second occurrence of :in on the page it says:

in

the search parameter is a URI (relative or absolute) that identifies a value set, and the search parameter tests whether the coding is in the specified value set. The reference may be literal (to an address where the value set can be found) or logical (a reference to ValueSet.identifier)

So, the query is:

GET [base]/Observation?subject.name=John+Smith&name=System|Culture&value:in=ValueSetReference

And what this means is find me the observation whose subject has the name of John Smith, where the type of observation is Culture from some code system identified by System, which appears in the Value Set resource that can be retrieved from ValueSetReference, where in this particular case, the Value Set resource would describe somehow all codes for Salmonella.

Now, the beauty of this is that the FHIR server performing the query doesn't actually have to expand the value set more than once to perform this or any other query against the value set. And if the value set resource is resident on the same server as the observation, it may not even need to fully expand the resource to perform the search; it only needs to execute an algorithm that determines whether the code in a candidate resource could be in that expansion. There are a couple of ways the server could do that, including precomputing a DFA for matching all codes in the value set.
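To make that concrete, here is a minimal sketch of the expand-once, test-many approach. The function names, the record layout, and the example SNOMED CT code are all illustrative assumptions on my part, not part of any real FHIR server or library:

```python
# Hypothetical sketch: pre-expand a ValueSet once into a set of
# (system, code) pairs, then test candidate Observation resources
# against that set cheaply during a search. All names and structures
# here are illustrative, not from an actual FHIR implementation.

def expand_valueset(valueset_json):
    """Flatten a ValueSet expansion into a set of (system, code) pairs."""
    contains = valueset_json.get("expansion", {}).get("contains", [])
    return {(c["system"], c["code"]) for c in contains}

def matches_in(observation, expanded_codes):
    """True if any coding on the observation's value is in the expansion."""
    codings = observation.get("valueCodeableConcept", {}).get("coding", [])
    return any((c.get("system"), c.get("code")) in expanded_codes
               for c in codings)

# Expand once, reuse for every candidate resource in the search.
salmonella = expand_valueset({
    "expansion": {"contains": [
        {"system": "http://snomed.info/sct", "code": "27268008"},
    ]}
})
obs = {"valueCodeableConcept": {"coding": [
    {"system": "http://snomed.info/sct", "code": "27268008"}]}}
print(matches_in(obs, salmonella))  # True
```

A set lookup like this is the simplest membership test; the DFA approach mentioned above is an optimization of the same idea for very large or intensionally defined value sets.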

I was told that if I could show that this was possible in FHIR, Cecil would become a believer.

Now, I'm already a believer in FHIR, but this exercise demonstrated for me once again why, because within 10 minutes of issuing my query, I had the answer, and it was documented right here all along.

Tuesday, January 20, 2015

One of the challenges with Meaningful Use has to do with the way that it started; first with one kind of document, the CCD (using the HITSP C32 specification), later migrating to CCDA which supported multiple document types. Meaningful Use never gave any guidance on which CCDA document types to use for different purposes, and in fact, defined content based on a combination of document types found in CCDA. As a result, most folks are still using CCD, which really is designed to be a general summary of care. But physicians generate other kinds of documents during routine care, such as a history and physical note, or a consult note, or an imaging report.

Because the 2011 edition required use of CCD, but these weren't related to what physicians were generating, the documents were automatically generated. And since they were automatically generated, there's no physician in the loop to determine what is relevant and pertinent. Vendors have built interfaces to make it possible for physicians to select relevant and pertinent content, but that takes time in the physician workflow. So vendors get told to automate the generation process, and depending on how they do that, the result is often less than ideal.

By about 65 pages.

I've talked about this problem previously, and at the time, it seemed to me that the solution should have been obvious: Use the physician's existing documentation process to select what is pertinent and relevant.

But the evolution of Meaningful Use from a document that physicians don't generate to a collection of documents which they might use assumed a level of integration with physician workflows that simply wasn't allowed for by Meaningful Use timelines. Basically the setup of clinical documentation workflows is something that is usually done during an initial EHR installation. It isn't redone when the system gets upgraded, because that is usually a non-essential disruption for the provider. But that is what it would take to use the new document types. Now, that process will likely occur over time as hospitals and providers update their systems to improve their workflows, but in the interim, it leaves automatically generated CCD 1.1 as the documents that get exchanged for Meaningful Use.

Now we get into the disconnect. Most EHR developers that I've talked to know that clinicians are the best judge of what content is pertinent and relevant to exchange. This results in the choice of what is relevant being set up as system configuration parameters. Lawyers and HIT administrators (and sometimes physicians) tend to err on the side of caution when trying to figure out what should be sent to a receiving system in this configuration.

Which results in a 70 page "summary" of the patient data, which is at least 65 pages too long for anyone to read (and probably closer to 68 pages too long for the average physician). As a result, when these documents get created and sent, a physician will look at them once or twice, and then decide they aren't useful and never look at them again.

How do we fix this? I ran into a similar challenge over the last year, and the answer that I came up with then was to talk to clinicians about what the right rules were for limiting the data that would be provided. There were three sets of limits: time based, event based, and state based.

Information about active issues needs to be presented regardless of time (assuming appropriate management of active and resolved in the provider's workflow). This would include problems (conditions), allergies, and any current (active) medication.

Event based limits are based on recent care, e.g., activities done in the last encounter. So any problem or allergy marked as resolved in the last encounter would also show up so that you could see recent changes in patient state. Additionally, any diagnostic tests and/or results related to that last ambulatory visit would also be included (inpatient stays need a slightly different approach because of volume). Finally, the most recent vital signs from the encounter should be reported.

Time based limits include dealing with stuff that has happened in recent history. For this, for an adult patient, you probably want the last year's immunization history. You likely want to know about any problems whether they were active or resolved in the patient's recent history (we wound up equating recent history to the last month).

There are certain exceptions that may need to be added, for example, for pediatric immunizations you might extend history to a longer period, but in an age dependent way.
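The three kinds of limits above can be sketched as a simple relevance filter. This is only an illustration of the rules as I've described them; the record layout, field names, and the 30-day "recent history" window are assumptions for the example, not anything drawn from a real EHR or CCDA library:

```python
from datetime import date, timedelta

# Hypothetical relevance filter combining the three limits described
# above: state based (active items), event based (last encounter), and
# time based (recent history, here equated to the last month).

def is_relevant(item, last_encounter_id, today):
    # State based: anything still active is always relevant.
    if item.get("status") == "active":
        return True
    # Event based: anything tied to the most recent encounter.
    if item.get("encounter_id") == last_encounter_id:
        return True
    # Time based: anything resolved within recent history (last month).
    resolved = item.get("resolved_on")
    if resolved and (today - resolved) <= timedelta(days=30):
        return True
    return False

problems = [
    {"name": "Hypertension", "status": "active"},
    {"name": "Sprained ankle", "status": "resolved",
     "resolved_on": date(2015, 1, 10), "encounter_id": "enc-41"},
    {"name": "Old fracture", "status": "resolved",
     "resolved_on": date(2009, 3, 2), "encounter_id": "enc-7"},
]
summary = [p["name"] for p in problems
           if is_relevant(p, "enc-41", date(2015, 1, 20))]
print(summary)  # ['Hypertension', 'Sprained ankle']
```

In a real system, each of these cutoffs would be a configurable default that the clinician can override, per the advice below.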

HL7 Structured Documents has agreed to begin working on a project in which we will work with clinicians, and reach out to various medical professional societies to help define what constitutes a reasonable set of limits for filtering relevant data. I'm thinking this would be an informative document which HL7 would publish and which we would also promote through the various professional societies collaborating on this project.

So, I think I was both wrong and right in my original post on this topic. It is impertinent for a developer or a system to decide what is relevant, but, it is possible, with the participation of clinicians, to develop guidance that system implementers could use to configure the filter of what should be considered relevant. And my advice to system designers is that while you might want to supply a good set of defaults, you should always let the clinician override what the system selects.

Monday, January 19, 2015

I've been spending a lot of time figuring out how to use BPMN to document Workflow Definition profiles using the IHE XDW Profile. It's become a new series which you can follow using either the BPMN or Workflow labels, and I'll update this post periodically as I add more.

Workflow Automation: In which I realize that after looking at a few workflow profiles there's got to be a way to automate this.

Building IHE XDW Workflows: In which I show some of what I've already done previously and start to explore use of BPMN for documenting the workflow.

The image below is an example representation of the IHE Cross-Enterprise Basic eReferral Workflow profile in BPMN 2.0 notation, which I generated Friday based on the work discussed here so far.

Referral Requester

At the start of its process the Referral Requester begins by executing the "Request Referral" subprocess. When the subprocess starts, this participant may, but is not required to, create the XDW document containing this XDW task in the CREATED state to indicate that work has started on creating a referral for the patient. The eReferral request is created and is an output of the user task [this user task is NOT recorded in the XDW Workflow document -- there should be a way to indicate that somehow, but I haven't figured that part out yet]. At that point the eReferral request document [and other relevant data, but again I haven't figured out how to represent that yet either] are used as inputs to the Schedule Referral CREATED message. This message notation is the way that we show that the scheduling task must be created and recorded in the XDW document [the XBeR profile doesn't actually have this requirement -- there's some work to discuss here]. Finally the participant updates the XDW Document with the Request Referral Task in the COMPLETED state.

Referral Scheduler

The Referral Scheduler basically waits until a Schedule Referral Task is CREATED. Arguably, according to the XBeR profile, this task is actually triggered by the completion of the Referral Request, and I think I could simply notate it that way, without having to deal with the Schedule Referral CREATED event. However, from a best practice perspective, a Workflow Monitor will want to track task status change events, and the times between CREATED, READY, IN_PROGRESS and COMPLETED are all important and have well-understood semantics. So even though CREATED on the Schedule Referral task could be implied by a COMPLETED on the Request Referral, it seems best to me to make such dependencies explicit. I think the way I did it last Friday may not be right though.

Now, this Perform Referral subprocess really cannot continue further without the patient's participation, so I created a merge where a message [really the patient request to schedule the referral] needs to also occur before the workflow can continue. At that stage, the Schedule Referral task is supposed to [according to the profile] go to the IN_PROGRESS state, which we show with the signal event. Then the patient is scheduled. Note that the data flow shows the eReferral request from the incoming message being used in the Create Appointment user task. The next task, Perform Referral, is created, and then the Schedule Referral is marked as COMPLETED.

This diagram shows what happens if a patient does not schedule the referral after sufficient time has passed, which results in a failure of the Schedule Referral task, and in turn a closure of this workflow with the referral workflow itself failing to complete in a normal state (we use the Error state here as the final form).

Perform Referral

Alright, I leave the description of this one as an exercise for the reader. There's really very little in here that you haven't already seen. Tell me if you cannot figure it out.

Friday, January 16, 2015

Did your parents ever ask you that question when you were young? Usually in an exasperated voice because you had done something (that at least they thought was) stupid. In my family, we have at least two usual responses to that: "Thinking? What makes you think I was thinking." That's an admission that yes, it was a stupid thing to do. The other response is "It seemed like a good idea at the time." That response usually means that we had a good reason for doing what we did, or at least we thought we did. Often though, the reason itself is lost to the recesses of memory. Sometimes there's an important nuance that occasionally returns to us later and reminds us why it was actually a good idea.

Recently, in working through the BPMN appendix for IHE PCC, I've started to write down why I think something is required or recommended. I've talked about this before (in What were we trying to do?). For every requirement I've been trying to provide at least a one sentence explanation for the requirement. Sometimes it's obvious: (e.g., Of course you want to give this a name), and I struggle to write the explanation of the obvious, but do it anyway. Sometimes what I think is obvious isn't, at least to someone who is encountering this stuff for the first time. Other times, it isn't so obvious why you need such a thing, or why you chose that way to do it. In these cases, I can really see where this adds value, and not just for other readers. Already I have 20 or so pages of text, and I'm going back through it about every other day. Those little notes are gold when I get to something I wrote a few weeks back and am trying to recall why I did that. Here's an example:

1. The targetNamespace attribute in the <definitions> element shall define a unique namespace for the workflow specification following the rules for creating a formatCode for an IHE Content profile. A target namespace is necessary because it can be used to create unique identifiers for elements contained within the workflow specification and relate them to what occurs in an instance of an XDW workflow document. Use of formatCode rules is somewhat arbitrary, but they already exist, they serve a similar purpose (avoiding name collisions) across IHE profiles, and they provide a URN which allows for dereferencing.

As we work on standards, I'd love to see more of this sort of explanatory text in both HL7 and IHE specifications. It might make ballot reconciliation or public comment and post publication maintenance a lot easier. It should certainly make implementation easier for the end users of these specifications.

Wednesday, January 14, 2015

If you are like most people, you worry much more about your kids not getting a balanced breakfast than you do about them getting brain tumors. At least that's the case for me. And if I had to worry about the latter problem, I'd probably not be worrying quite so much about the former, because the magnitude (cost, effort, impact, however you want to measure it) of the latter problem is so much bigger than the magnitude of a single instance of the former (failing to get a balanced breakfast). But when you multiply that smaller problem by the thousands of times that it occurs, it could still have just as big an impact. Even so, these aren't equations that balance out prettily like those examples in physics or math texts.

This is one of the reasons that I've catapulted myself right into the middle of the workflow muddle in Health IT. IHE's XDW profile has a lot to offer here for both sides: the provider doing a simple task thousands of times, and those crazy-expensive collaborations like tumor boards, where having multiple expensive specialists wasting even a minute of time is worth paying attention to.

What I'm trying to do right now is figure out how to make both of these different kinds of workflows benefit from precise descriptions. Yesterday I explained how to integrate messages into a workflow; today, I'm going to discuss some of what I've discovered about event handling and signaling.

In BPMN, a signal is "like a message", except that it is broadcast to anyone who is interested in it. Rather than having a single sender and receiver (as in a message), a signal has a single thrower, and many possible catchers. One of the things I'm using signals for in the IHE representation of an XDW workflow in BPMN is an XDW workflow task state change. The XDW task I represent as a BPMN activity (specifically a sub-process). Within that subprocess, each state change that forces an update of the XDW document in the XDS repository "signals" the change. The event itself is named after the task and the final state. Start events are "caught" by the task which the event is named for. End events (either COMPLETED or FAILED) are thrown by the task for which the event is named. I use the error event to represent the FAILED case, and the normal end event in BPMN to represent the usual COMPLETED condition. Intermediate events for READY or IN_PROGRESS can be caught or thrown by anyone. The "thrower" is responsible for recording the new task state in the XDW document. The catcher simply waits until the task state is reached before doing anything else.
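The thrower/catcher semantics above can be illustrated with a toy broadcast bus. This is only an analogy for the BPMN signal concept, not real BPMN or XDW machinery; every class and name here is hypothetical:

```python
# Toy sketch of the BPMN message vs. signal distinction: a message has
# one sender and one receiver, while a signal has one thrower and is
# broadcast to every interested catcher. All names are illustrative.

class SignalBus:
    def __init__(self):
        self.catchers = {}          # signal name -> list of callbacks

    def catch(self, name, callback):
        """Register interest in a named signal (a BPMN catch event)."""
        self.catchers.setdefault(name, []).append(callback)

    def throw(self, name, payload=None):
        """Broadcast a named signal (a BPMN throw event). Per the text,
        the thrower is the one responsible for recording the new task
        state in the XDW document before throwing."""
        for cb in self.catchers.get(name, []):
            cb(payload)

bus = SignalBus()
log = []
# Two independent participants both wait on the same task state change.
bus.catch("ScheduleReferral.CREATED", lambda p: log.append(("monitor", p)))
bus.catch("ScheduleReferral.CREATED", lambda p: log.append(("scheduler", p)))
bus.throw("ScheduleReferral.CREATED", {"task": "Schedule Referral"})
print(len(log))  # 2 -- one thrown signal, seen by both catchers
```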

There's more I have to figure out in these cases, because I also have to figure out data flow with these signals, and also tie the signals into the IHE DSUB profile. That's for later posts.

Tuesday, January 13, 2015

Usually, a message recipient is ignorant of the workflow that resulted in the message being sent. The same message may be sent as part of multiple workflows, in multiple places within a single workflow, even multiple times within a single task. I think we need to fix that, because there are a variety of different ways services could be optimized or improved if we only knew what workflow was being done.

One of my present challenges in IHE is in defining how to describe an XDW Workflow Definition in BPMN. Mostly this involves mapping what has already been done in text in various workflow definition profiles to the appropriate BPMN representation. But there are some things that you can do in BPMN that we haven't addressed in IHE Workflow definitions yet, probably because we really didn't have a clue as to how useful they might be.

Two areas of present concern are Messages and Signals (I'll talk about signals in a later post). Now, in BPMN, a message is simply that. It could be used to represent an HL7 Version 2 message, an HL7 Version 3 message, a FHIR request, a SOAP message, a DICOM service request, whatever. You name it. What I want to be able to do is associate a specific message with a workflow. Now, the metadata stored in the workflow instance captures that association, but not in any way that is searchable. In other words, you need to know the workflow instance before you can determine the association with the message; what I'm trying to find, given only the message, is the workflow instance.

Ideally, we'd all have workflow oriented systems, and this wouldn't be a problem. Each message would be linked back to a workflow and we'd know without much effort where and how that was done. But the reality is different. Some systems have and deal with defined workflows. Others don't. Some messages can occur as part of more than one workflow, or result from more than one task in a single workflow, or could occur multiple times in a single task. As the message receiver, figuring out where you are in the workflow can be very valuable. At the very least it might provide you with some opportunities to optimize the work of the message based on the workflow (for example, you might cache patient data when that patient is involved in an active workflow).

So, what I'm proposing is that the sender of a message acting as a result of a workflow add the message identifier to the metadata for the document describing the workflow instance, when that document is updated to describe the task that sent the message. The other piece of this would be to add that message identifier to the <taskEvent> element in the history. That way, once you've received a message, you could quickly and easily find the workflow and task instance which generated it.
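Sketched in XML, the proposal might look something like this. XDW task histories are based on OASIS WS-HumanTask, but the element names here are approximate, and the `<messageId>` extension element and its value are invented for illustration; nothing like it exists in the XDW schema today:

```xml
<!-- Hedged sketch: an XDW <taskEvent> extended with the identifier of the
     message that the task sent. The messageId element is a hypothetical
     extension, not part of any published schema. -->
<taskEvent>
  <id>2</id>
  <eventTime>2015-01-13T10:15:00Z</eventTime>
  <eventType>complete</eventType>
  <status>COMPLETED</status>
  <!-- proposed addition: the identifier of the message this task event
       generated (e.g., an HL7 v2 MSH-10 value or a SOAP message ID) -->
  <messageId>urn:uuid:7d1f7c2e-9a44-4bde-8f2d-2c6e5a1b9e10</messageId>
</taskEvent>
```

A receiver holding that message identifier could then query for the workflow document whose metadata (or history) contains it.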

Now, I don't advocate that services being performed should vary in what they do based on the workflow that they are part of. That would result in numerous private agreements inside services with different workflows. But it would allow for careful optimizations which could improve overall performance within and across workflows, and at the very least, would allow for information capture about the relationship of various workflows to activities affecting system integration. Imagine being able to say: 85% of XYZ messages received are the result of ABC workflow. What would that do to your ability to optimize?

I have some things to do to prepare to teach an online edition of the Standards and Interoperability class, more on that as it evolves.

As the year starts off, this seems like a light load, and from an external perspective, it is compared to my usual activities. However, I expect the HL7/IHE Joint workgroup efforts will be consuming a great deal of my time once it gets going. There's a ton of outstanding things that it could take on.

I'm no longer on the HL7 Board, and did not get elected to be the HL7 Chair, so I have at least a year of not being involved on the board. I'm not sure what I'm going to do there. I'm still involved in several board appointed committees and planning on continuing in those roles. The real question is whether I'm going to run for the Board again, and if so, in what role, and when.

Wednesday, January 7, 2015

Consider McDonald's, Wendy's, and Burger King, and your favorite sit-down restaurant, and your favorite diner, and your favorite sushi place. Do they all do the same thing? Basically. But their processes are very different. Would a nice sit-down restaurant benefit from adopting processes from Burger King? Probably not. And vice versa. And leave my favorite sushi place's processes alone.

What in those processes can and should be standardized? And where does the value-add come in? In all the discussion of the importance of workflow, one thing that we must realize is that very little workflow in this world is ever standardized. Even Amazon and Barnes and Noble have different workflows, and let's not even add eBay into the discussion. At best we might standardize on certain names for products and methods of preparation, and how food is permitted to be treated (e.g., storage temperatures). And even names are challenging. Ever order your steak Chicago-blue and get a charred mess?

Let us consider even the simplest healthcare workflow: Identifying the patient.

Do you:

Obtain a patient identifier from some official document (e.g., a driver's license) and then look them up in some global database to get their demographics?

Get their demographics from them directly, and then verify their identity?

Verify their identity with one document, and get their healthcare identifier from another?

Or are the two linked together somehow?

Do you need more than one identifier for them?

After that, do you:

Collect their co-pay, and if so, how do you determine how much it is?

Bill their insurer (or the government, or some combination of them and others)?

Collect their co-payment after billing?

How do you determine who pays and in what order?

Many of these very simple workflows vary, not just at the international level, but even within the same practice depending on the patient, and for a single patient depending on their age, payers and other stuff, even the type of practice they are visiting.

Could we simplify this? Possibly. Even probably for the really small stuff. But once we get into bigger stuff, it isn't clear. Part of the reason for this has to do with inference based on process and workflow. Once a particular workflow step has been completed, there are a lot of assumptions and inferences that can be made about what has already been done. For example, in many places, the admission/registration process includes updates to the allergy list. And so, a person familiar with that organization's workflow can rest assured that allergies have been update. But elsewhere, they cannot, and so even though the patient has been admitted, the allergy list cannot be assumed to be up to date.

Could we simplify this? Possibly. Even probably for the really small stuff. But once we get into bigger stuff, it isn't clear. Part of the reason for this has to do with inference based on process and workflow. Once a particular workflow step has been completed, there are a lot of assumptions and inferences that can be made about what has already been done. For example, in many places, the admission/registration process includes updates to the allergy list. And so, a person familiar with that organization's workflow can rest assured that allergies have been updated. But elsewhere, they cannot, and so even though the patient has been admitted, the allergy list cannot be assumed to be up to date.

What most people really mean when they say that workflow is important is that flexibility is important: when your product (or mine, or anyone else's) is integrated with something else, the two can work together, regardless of whatever workflow the other assumed.

Tuesday, January 6, 2015

One of the differences between IHE and HL7 is in how the two organizations approach the dynamic interaction model. IHE has actors which exchange information in transactions. HL7 (Version 2 and 3) has application roles and interactions which exchange information using messages. But the traditional focus is a bit different in the two organizations. In V3, very little attention is given to storyboards and application roles. Most of the attention goes to the information models, the various *IM diagrams that are generated. In IHE, about 50% of the technical detail in a profile is related to the information model -- constraints on data. The other 50% (sometimes even more) is on required behaviors. It isn't enough to just receive the message; the receiver has to do something with (operate on) the information it received from elsewhere.

This material is covered in the HL7 specifications, it just doesn't have as much an impact on how the specification is written. And within different domains, these generalizations don't always apply. For example, most of PCC's early work was about data constraints (templates). And some HL7 domains focus very much on dynamic behaviours.

This is the real key to interoperability: being able to operate with information received from outside your system. I think it should really be called outeroperability.

Monday, January 5, 2015

The <collaboration> element in BPMN would represent the Workflow Profile. That contains <participant> elements which would represent the Workflow Participants. So far that makes sense. What isn't clear here is <messageFlow>, because XDW's notion of communication is through input and output documents, rather than explicit messages, but maybe there's a mapping here.

Each <participant> has a name, a description (in documentation), and references some processes. Essentially, we are putting the workflow participants in the collaboration as swim-lanes. Again, this part makes sense to me. I've seen at least one workflow where participantMultiplicity is useful (an order placer for a read of an imaging study in which several order fillers could respond).
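A minimal sketch of that mapping in BPMN 2.0 XML might look like the following. The workflow, participant, and process names are illustrative inventions, not from a published profile, and the multiplicity bounds are just an example of "several fillers could respond":

```xml
<!-- Hedged sketch: an XDW Workflow Profile as a BPMN <collaboration>,
     with Workflow Participants as <participant> elements. All names here
     are hypothetical. -->
<collaboration id="ImagingReadWorkflow" name="Imaging Read Workflow Profile">
  <documentation>Workflow Profile for requesting a read of an imaging study.</documentation>
  <participant id="placer" name="Order Placer" processRef="placerProcess"/>
  <!-- several order fillers could respond to a single placer -->
  <participant id="filler" name="Order Filler" processRef="fillerProcess">
    <participantMultiplicity minimum="1" maximum="5"/>
  </participant>
</collaboration>
```

What this sketch deliberately leaves out is `<messageFlow>` between the participants, since XDW communicates through input and output documents rather than explicit messages.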

There's a few places where some routing logic needs to be explained, and where triggers and timeouts need to be addressed. I'm looking for task transitions to be triggers for other workflow tasks.

I'm not entirely clear on the distinction between the terms task (in Human Task) and process (in BPMN), because task seems like it could be smaller or larger depending on context.

One of the places where it seems like there would need to be some refinement in ITI is in the DSUB profile, because I'd like to support DSUB style notifications of workflow participants when their created/owned tasks have been transitioned from one state to another (each task has an owner and a creator -- when the task has its state changed, both the owner and the creator may want to know about it).

I think these might be some additional DSUB-like queries, which might be addressed by the Workflow Monitor participant.
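To make the notification idea concrete, here is a rough sketch of a DSUB-style Subscribe in WS-BaseNotification form. The consumer address is made up, and the topic expression for workflow task state changes is entirely hypothetical; DSUB as published filters on document metadata, not task state:

```xml
<!-- Hedged sketch, assuming OASIS WS-BaseNotification as DSUB does.
     The topic "WorkflowTaskStateChange" is invented for illustration. -->
<wsnt:Subscribe xmlns:wsnt="http://docs.oasis-open.org/wsn/b-2"
                xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsnt:ConsumerReference>
    <!-- where the task owner or creator wants notifications delivered -->
    <wsa:Address>https://owner.example.org/notifications</wsa:Address>
  </wsnt:ConsumerReference>
  <wsnt:Filter>
    <!-- hypothetical topic: fire when a task I own or created changes state -->
    <wsnt:TopicExpression Dialect="http://docs.oasis-open.org/wsn/t-1/TopicExpression/Simple">
      WorkflowTaskStateChange
    </wsnt:TopicExpression>
  </wsnt:Filter>
</wsnt:Subscribe>
```

Something like this is the kind of refinement to DSUB (or a DSUB-like query) that a Workflow Monitor participant might support.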

Friday, January 2, 2015

In looking back over 2014, and looking forward to 2015, two things stand out: uncertainty and change. As I look into my crystal ball at the beginning of this year, I see more fog than I have seen in many years.

The US national program is at a time of transition, from the carrot to the stick. Incentives are no more. Penalties kick in this year. Will that work?

At the same time, many hope that the incoming congress will pass new laws changing the way the Meaningful Use program works. Will that happen?

The Meaningful Use Stage 3 proposed rule, which many had projected to drop on December 23rd as usual, is still waiting in the wings. Clearly it is no longer business as usual at ONC. What's in the proposed rule?

ONC now hosts at most three of the original people staffing it under ARRA. Few really have a clue what is going to happen here either. What will ONC look like when it grows up?

FHIR will soon be launching its second DSTU. The first pilots of FHIR using DSTU 1 will soon be appearing. Will it work, or won't it?

IHE and HL7 will soon have a joint workgroup. There is a lot of opportunity here to bring together these two organizations which have had an on-again off-again love-like-hate-envy-love relationship. Will it be successful?

On a personal note, I am now half-way through my degree program according to hours needed to graduate, and so far carrying a very credible GPA. I'm looking forward to teaching my first semester long class later this year on Interoperability and Standards. Will I become one of those ivory tower academics that I so despised earlier in my career? (I certainly hope not, but ...)

For next year, I have to plan out what I will be doing, and how my new role(s) will be evolving. As to what that will look like?

So, wishing you a joyful new year, and as for all the rest ... I dunno.