
Friday, December 18, 2009

HIT Standards Committee - 12/18/09

The HIT Standards Committee met on Friday, December 18, 2009. The meeting materials from the ONC website and the rough draft transcript of the meeting are below. Also be sure to check out the FACA blog and the Health IT Buzz blog for the latest updates.

Judy Sparrow – Office of the National Coordinator – Executive Director

Good morning, everybody, and welcome to the eighth meeting of the HIT Standards Committee … operates in public, but over the telephone and Web only. Members of the committee, if you could please remember to mute your telephone line when not in use, and also please remember to identify yourselves, not only for your fellow members of the committee, but for members of the public who are listening in. And the public will have an opportunity at the close of the meeting to make comments. With that, I will conduct a roll call now. Jonathan Perlin?

Jonathan Perlin – Hospital Corporation of America – CMO & President

Good morning.

Judy Sparrow – Office of the National Coordinator – Executive Director

John Halamka?

Sharon Terry? Karen Trudel? And James Walker will not be participating in this meeting. With that, I’ll turn it over to Dr. Blumenthal for some comments.

David Blumenthal – Department of HHS – National Coordinator for Health IT

Thank you all for being here. This is, I guess, a novel way, not novel in the technological sense, but I think it’s the first, if I’m not mistaken, the first completely virtual standards committee meeting. And you all are - I think we’re all going to be pleased that we didn’t have it actually in person given what’s likely to happen weather-wise here in Washington. But thank you for being on the line.

We are kind of in a pause mode here waiting for our major regulations to be issued, which should come very soon. Then we will, of course, undoubtedly receive a fair amount of comment on the work you’ve done and the work the policy committee has done, and that will, I think, sort of put our agendas back in the highly energized mode that we were in until very recently. So take a deep breath over the holidays, because I suspect you’ll be coming back to a lot of interesting discussion of the work you’ve done and that we have done here at ONC.

In the meantime, I know that you all, your groups, are continuing to look into the future and to work on issues that remain to be addressed. I don’t want to take more of your time, or more of the two Johns’ time, but thanks again for being here, and we will be talking, I am sure, in the near future about how to react to the comments that our previous work will elicit.

Jonathan Perlin – Hospital Corporation of America – CMO & President

Thank you so much, John and David. Good morning, everybody. This is Jon Perlin, and I want to begin simply at this time of year by recognizing and thanking each of you for your work, and especially to the members of the public, the broader community who have been with us the entire time. I can’t overstate how much we appreciate your feedback.

Of course, as Dr. Blumenthal just mentioned, we go into a period where we’ll receive additional feedback, predicated on the conditions of the HITECH Act, which require the Secretary to publish the recommendations in the Federal Register. We look forward to that dialog as well. So I think David was absolutely right in encouraging us to take that deep breath over the holidays, because it will be a busy time.

But the business, certainly for those of us who wear other hats as practitioners in the field, extends not only to reacting to comment, but to really moving forward on the point of the exercise, which is to improve the safety, the effectiveness, and the value of connected healthcare through the use of these tools. I for one really want to thank each of you and the Office of the National Coordinator for helping to provide some emergent clarity in direction. It just makes life in the real world, I believe, a lot easier.

Towards that end, today we will hear from each of the workgroups, as we always do, to really begin to discuss in the next phase not just what the standards are, but really build on this theme on surmounting barriers and effectively implementing in the various areas that are part of the charge for us in standards development. That dialog and its work product, in support of implementation, are extremely useful, and I know that many people are counting on this guidance to make sure that their steps are as wide and as fast as is possible.

With that in mind, I don’t want to take any more of the time as we tee up the specific discussions: on privacy and security, the real central underpinnings of clinical operations; on the input that will guide us in implementation from the implementation workgroup, and the work that Aneesh and…. And then an area that I think is so exciting because, in a sense, for those of us of a certain age who have lived through a personal computer revolution and thought, gee, that was cool, only to realize that the real power, the real opportunity, came with connectedness: an update and introduction to some work that Farzad Mostashari and Doug Fridsma are doing in and around the Nationwide Health Information Network, or NHIN. I very much look forward to that because, in the year ahead, we need to amplify not only what one can do in a particular environment, but how those environments relate, to make sure that the more desirable attributes of healthcare are available wherever and whenever a patient might need care.

With that, let me turn to my co-chair, John Halamka. John will be guiding us through the discussion of each of the workgroups this morning.

John Halamka – Harvard Medical School – Chief Information Officer

Great. Thank you so much, Jonathan, and I just want to echo my thanks as well. It’s been a remarkable year. I think of 2009 as the year that everything changed. We have had just an incredible amount of work, and it has been a sprint from May until the present, so I also echo David’s notion that this is a bit of a hiatus as we await all the regulations. Recharge your batteries and spend time with your families. We will have a lot of work ahead in 2010 as those comments come in, and we have to polish the work that we’ve done. That polishing will be done with the guidance of the implementation workgroup. We have a set of guiding principles that we’ve articulated: keeping it simple, engineering for the little guy, reducing barriers, and making sure that we provide all the tools and education necessary to accelerate the work ahead.

As Jonathan introduced the day, we’ll start with privacy and security, so that we’ll take a deep dive, so that everybody on the call can really understand the state of what we’ve chosen for 2011 and some small refinements that Dixie and her team have done, and then we’ll hear a summary from her about the security issues meeting where we really identified for many stakeholders in the community what are some of the pressing issues. What are some of the implementation barriers? What are the challenges that we should focus on?

Let me turn it over then to Dixie and, if he has joined, Steve Findlay, and we look forward to your remarks.

Dixie Baker – Science Applications International Corporation – CTO

Thank you, John. While we were making our recommendations, and since that time, I received quite a bit of feedback from this committee, as well as from the policy committee and, frankly, the community at large indicating to me that people don’t really understand what we’re recommending and how these pieces fit together. So Steve and I thought it would be a good time for us to just step back, and I’m going to try to explain what our recommendations really mean, and I hope I’m able to communicate this in plain English, and I would welcome your feedback from this attempt.

The three slides at the beginning of this deck I gave at the policy committee earlier this week, and I got positive feedback there, so I’m hoping that this will help people feel more comfortable with the recommendations: they’re not esoteric recommendations. In fact, we’ve recommended standards that most of us on this call use practically on a day-to-day basis and really may not be aware of it.

The first slide, please. This is to acknowledge the work of all the people on our working group, and as the two Johns have indicated, I certainly appreciate all of the time and effort that the working group members have put into this.

On the next slide, the second point I’m going to cover today is one change that we’ve made since the last meeting. Then the last topic we’re going to address is a working group summary of what we heard at the hearing that was held on November the 19th.

Next slide: First of all, what are the standards, and how do we anticipate they’re being used? Everybody bandies about the terms standards and meaningful use, and it’s not always clear what’s used for what purpose, so I wanted to clarify that our recommendations of standards, certification criteria, and implementation guidance are intended for use in certifying products. They’re intended to indicate the kind of security and privacy functionality that a product should be required to have in order to get certified. Then how those capabilities are actually used within a particular healthcare environment will be based on a number of factors, such as the organization’s size and complexity, the IT capabilities it may have, the technical infrastructure and, above all, the risks and vulnerabilities and its overall approach to risk management. And, finally, what available resources it may have. As John Halamka mentioned before from the implementation workgroup, these standards, certification criteria, and capabilities that will be inherent in the product need to be applicable and usable by the little guy, as well as the large healthcare organization.

Then secondly, the standards and certification criteria are intended to help assure that the product Dr. Jones has purchased has the technical capabilities that the organization will need to, number one, comply with HIPAA and ARRA and, number two, be ready and eligible for meaningful use: ready to perform the transactions and exchanges that will be required to report to CMS, quality measures being one example.

Next slide: So I have three slides here that, as you can see, are intended to try to demystify our recommendations. The primary standards that we’ve recommended above everything else, and certainly based directly on the meaningful use measures that were handed to us by the policy committee, are the HIPAA Security Rule, which contains a number of standards as well as implementation specifications, and secondly, the HIPAA Privacy Rule. So above everything else, the two main standards are the HIPAA Security Rule and the HIPAA Privacy Rule.

In the left-hand column of these three slides, you’ll see the requirements that are contained in the standards (they’re labeled as standards within the Privacy Rule and the Security Rule). So everything in the left-hand column is required by one of the HIPAA rules. Then what we did was look at the HITSP body of work and the standards inherent in it, and we identified additional standards that would be helpful in meeting the standards that are inherent in the HIPAA Security Rule and Privacy Rule. Those supporting standards are the ones identified on the right side. So the full recommendations are the sum of the standards on the left side and the standards on the right side.

Beginning here, the first one: the HIPAA Security Rule and Privacy Rule also include those requirements contained in the ARRA law that have not yet been translated into regulation. The first requirement is to obtain proof that users and systems are who they claim to be. This is what we mean when we say authenticate identity. So when I log in, the system asks me who I am. I say Dixie Baker, and then I provide some proof of who I am. That is generally either something I am, something I know, or something I have. In many cases, it’s a password, something I know. It might be my fingerprint. It could be a PKI certificate. So it could be a number of things, but the basic requirement in HIPAA is that every person who uses the system, and every entity a system connects with, needs to authenticate itself before being allowed to access protected resources.
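To make the "something you know" factor concrete, here is a minimal sketch of password-based authentication in Python. The function names and the iteration count are illustrative choices, not anything drawn from the HIPAA rule or the committee's recommendations:

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a salt and a slow, salted hash of the password -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash from the claimed secret and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = enroll("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, stored))  # True
print(authenticate("wrong guess", salt, stored))                   # False
```

The point is simply that the system stores proof material rather than the secret itself, and checks the claimed identity against it at login.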

The supporting standard for that, in addition to what’s contained in HIPAA itself, is what every one of us uses every day. When we use the Web and go to Amazon.com to buy something, when they’re ready to send across our credit card, you’ll see in the lower right-hand corner a little lock, and it’s locked. What locks it is a protocol called Transport Layer Security, or TLS, and it does several things. The first thing it does is authenticate one or both ends of that exchange. The choice as to whether one end or both ends are authenticated is really up to the implementer, but TLS does provide the capability for the two ends to authenticate to each other.
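That lock-icon behavior is visible in Python's standard ssl module. This sketch only inspects a default client context rather than opening a real connection, and the commented file names are hypothetical:

```python
import ssl

# A default client-side TLS context requires and verifies the server's
# certificate, which is how the server end of the exchange is authenticated.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Whether one end or both ends authenticate is the implementer's choice; for
# mutual (two-way) TLS the client would also present its own certificate:
# context.load_cert_chain("client.pem", "client.key")  # hypothetical file names
```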

The second HIPAA/ARRA requirement is to control access to information, and the Security Rule contains implementation specifications that support that overall HIPAA standard. The third is to encrypt and decrypt information. When we wanted a standard for how you encrypt and decrypt information, the HIPAA Security Rule doesn’t say. So we looked to NIST, the National Institute of Standards and Technology, and asked: what does NIST recommend as the encryption standard today, and what is widely used? That standard is the Advanced Encryption Standard, or AES, and so that’s what we recommended as the encryption standard you would use to encrypt either data at rest, meaning data stored on a disk, USB drive, CD, or whatever, or data in transmission.

The fourth one is to create an audit trail. Most of us who know and do work for healthcare organizations certainly know that the time stamps on audit records are critical to establishing a complete audit of what happened in a system. So the supporting standard we recommended was the integration profile called Consistent Time, developed by Integrating the Healthcare Enterprise (IHE). The IHE Consistent Time integration profile uses the time synchronization standards that are widely used across the Internet, the Network Time Protocol (NTP) and the Simple Network Time Protocol (SNTP), so those are, again, standards that are used every day in many organizations of many sizes.
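For a sense of what NTP/SNTP actually does on the wire, here is a hedged sketch of an SNTP-style (RFC 4330) client request and timestamp conversion in Python. It builds and parses packets but deliberately stops short of any network I/O; the server name in the comment is only an example:

```python
import struct

NTP_UNIX_DELTA = 2_208_988_800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_request_packet() -> bytes:
    """Build a 48-byte SNTP client request: LI=0, version 3, mode 3 (client)."""
    return bytes([0x1B]) + bytes(47)

def transmit_time(response: bytes) -> float:
    """Read the server's transmit timestamp (bytes 40-47) as Unix time."""
    seconds, fraction = struct.unpack("!II", response[40:48])
    return seconds - NTP_UNIX_DELTA + fraction / 2**32

# A real client would send the packet over UDP port 123 to a time server
# (for example pool.ntp.org) and set its clock from the parsed reply,
# compensating for round-trip delay.
```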

The second recommendation was the IHE Audit Trail and Node Authentication (ATNA) integration profile. This profile is really intended to provide guidance on how one would exchange audit messages between systems, so it will become more important as we move ahead toward maintaining an accounting of disclosures between organizations.

Next slide, please. The next requirement is to detect unauthorized changes in content. One of the most common ways to do this is to use something called a cryptographic hash function. That is simply a mathematical function that takes a body of text, whether it be a file or a message, and it doesn’t encrypt that content or protect its privacy or anything like that.

What it does is run an algorithm over that body of text and come up with a number such that if anything in the content is changed, the number will change. The number doesn’t say what changed, when it was changed, or whether it was changed intentionally or by a hacker. It simply says something has changed. We looked to the NIST standards for these cryptographic hash functions, and our recommendation was the Secure Hash Algorithm (SHA) family that NIST recommends.
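The change-detection property described here is easy to demonstrate with Python's standard hashlib, using SHA-256, one of the NIST-recommended SHA family. The record text is invented for illustration:

```python
import hashlib

record = b"Patient: Jones, A. | Rx: amoxicillin 500 mg"
digest = hashlib.sha256(record).hexdigest()

# Changing even one character of the content produces a different digest.
tampered = b"Patient: Jones, A. | Rx: amoxicillin 900 mg"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: the change is detected
print(hashlib.sha256(record).hexdigest() == digest)    # True: unchanged content verifies
```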

The other guidance document we recommended was an American Society for Testing and Materials (ASTM) standard for implementing electronic signatures. Now, electronic signatures are not the same as digital signatures. The term electronic signature covers anything from a signature transducer that just captures your handwritten signature, to a real PKI digital signature, so that was the guidance document we included.

The sixth HIPAA requirement is to protect the confidentiality and integrity of information transmitted over networks, including but not limited to the Web. Again, we looked to what NIST recommends for doing this. NIST recommends using TLS, the Transport Layer Security we discussed before, with the AES encryption standard we’ve discussed before and the SHA hash function standard we’ve also discussed before.

We also included here a service collaboration that HITSP developed, and this is just guidance on sharing information: a basic service collaboration describing how entities exchange information. Then the final items under protecting the integrity of information transmitted over networks are two standards used to provide directory services, to be able to find resources on the network, such as the doctor you need to exchange information with. Those are very common Internet standards: the Domain Name System (DNS), which is used to translate a name like HHS.gov into a numeric address, and the Lightweight Directory Access Protocol (LDAP), which is the commonly used standard for directory services. The NHIN working group of the policy committee has identified the need for directory services for the NHIN, and those two standards will be key to directory services.
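The DNS half of that can be exercised with a single call from Python's standard library. This sketch resolves a local name rather than a real provider directory; an LDAP query against such a directory would need a third-party client library, which is outside the standard library:

```python
import socket

# DNS translates a human-readable name into a numeric address, the same
# machinery that would turn a name like HHS.gov into an IP address.
address = socket.gethostbyname("localhost")
print(address)  # typically 127.0.0.1
```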

The next slide, please. This is the final one of these tutorial slides. The seventh HIPAA requirement is to electronically record individual consumer consent, and we simply recommended the implementation specifications contained in the Privacy Rule itself. The eighth, to provide an electronic copy of an individual’s electronic health record, is actually an ARRA requirement. The Privacy Rule itself requires that one provide the consumer a copy of their record, but doesn’t require that it be electronic; ARRA says it must be electronic. So we recommended here a HITSP capability as guidance. This capability is simply guidance on recording unstructured content on removable media, whether a USB drive or CD or whatever, and it also contains guidance on sending unstructured information from one system to another. In this case, it could be from an EHR to a PHR.

The ninth requirement was the capability to de-identify information, and the HIPAA Privacy Rule contains detailed implementation specifications on how to do that. Then, finally, there is the requirement to be able to re-identify information if necessary. That is most commonly used in public health. When information is sent to public health agencies, it’s often de-identified, but the health provider organization that sends it needs to retain the capability to re-identify it should the public health department come back and say, we’ve detected an outbreak, and we need to get back to these patients: patients 16, 18, and 24 need to be notified that they need to get a vaccination or whatever. So what we recommended there was an ISO pseudonymization standard as guidance. The process I just described, de-identifying information but retaining a tag in it so that it can be re-identified, is commonly known as pseudonymization, so those are the standards we’ve recommended.
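The de-identify/re-identify round trip can be sketched in a few lines of Python. This is only a toy illustration of the idea of pseudonymization, not the ISO standard's actual scheme; the field names and tag format are invented:

```python
import secrets

class Pseudonymizer:
    """De-identify records but keep a sender-side map so they can be re-identified."""
    def __init__(self):
        self._forward = {}   # real id -> pseudonym tag
        self._reverse = {}   # pseudonym tag -> real id

    def de_identify(self, record: dict) -> dict:
        real_id = record["patient_id"]
        if real_id not in self._forward:
            tag = secrets.token_hex(8)          # random, meaningless outside the sender
            self._forward[real_id] = tag
            self._reverse[tag] = real_id
        out = dict(record)
        out["patient_id"] = self._forward[real_id]
        return out

    def re_identify(self, pseudonym: str) -> str:
        return self._reverse[pseudonym]

p = Pseudonymizer()
sent = p.de_identify({"patient_id": "MRN-00123", "dx": "influenza"})
# The public health agency sees only the tag; the sender can map it back.
print(p.re_identify(sent["patient_id"]))  # MRN-00123
```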

If I haven’t succeeded in clarifying this, and I sincerely am trying to do so, or if you have any further needs for clarification, I would hope you’d contact me, and I’d be happy to go further on these recommendations.

The next slide, please. There we go. Okay. Now I’m going to jump back into the weeds. Our working group discovered a potential problem with the standard we had recommended for protecting the integrity of data. This is the SHA standard, the cryptographic hash function standard that I mentioned.

We had recommended excluding the algorithm called SHA-1, Secure Hash Algorithm 1. The reason we excluded it was that there is NIST guidance stating that federal agencies may not use SHA-1 after 2010 for digital signatures and certain other applications, although it does allow the use of SHA-1 for protecting data integrity. What we put in our initial recommendations was: use the other algorithms, but not SHA-1. The latest update of the NIST standard, FIPS PUB 180-3, still includes the SHA-1 algorithm, and we discovered that SHA-1 is still very widely used for protecting data integrity, so that was the challenge.

Given that it is widely used, and given that the NIST recommendation does allow its continued use for hash functions, just not for digital signatures, we changed the recommendations to update to the latest version and to allow the use of the SHA-1 algorithm for Web integrity, in other words, for use in Transport Layer Security, the little lock in the bottom right-hand corner. So we changed it to explicitly allow its use for that purpose, but to require the use of one of the other algorithms for protecting the integrity of data at rest. There’s a handout that Judy sent out with the materials for this meeting that highlights exactly the wording of our recommended changes.
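The practical difference between the algorithms is visible in Python's hashlib, which exposes both SHA-1 and the SHA-2 family. This simply compares digest sizes, showing the shorter 160-bit SHA-1 value whose weakening collision resistance prompted the NIST guidance:

```python
import hashlib

data = b"discharge summary v1"
print(len(hashlib.sha1(data).digest()) * 8)    # 160 (SHA-1 digest bits)
print(len(hashlib.sha256(data).digest()) * 8)  # 256 (SHA-256 digest bits)
```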

Next slide: Moving on to our security hearing. The hearing was held on November the 19th, and most of you participated. We had four panels: system stability and reliability; cyber security; data theft, loss, and misuse; and building trust. Overall, standing back from it, most of the comments made that day really related to topic four, building trust. But I want to go over the key points that we gleaned not only from the overall hearing itself, but also from each of these four panels.

Next slide: We got the same message that Aneesh’s implementation panel communicated. We heard time and time again to keep it simple. Translating that into real life, it’s not that we can’t recommend standards that contain complexity, but rather that we need to abstract out the complexity. In other words, we need to create standards-based components that are easy to implement and easy to use, so that if they contain complexity, the implementer doesn’t really have to deal with it. For example, the AES encryption standard is complex, but if you create a component that uses AES such that a developer can implement it without really having to get into the details of the algorithm itself, you’ve abstracted out the complexity.

They said bake security into products, so we can buy products knowing that the product has the capabilities we need. Several mentioned the need for a security toolkit, and they did use that word, especially for small practices. Time and time again, they said implement defense in depth, or layered security: layered policies as well as layered mechanisms, so if one mechanism fails, there’s something else to catch it. In other words, we might want the lock in the lower right-hand corner at the application level to provide security, but we may want additional security under that at the network layer.

They told us what we already knew: that the days of tightly controlled perimeters are long gone, and that we need to address distributed, mobile, wireless, and virtual resources, as well as computers embedded in biomedical devices. I think this is a big change from the approach security has taken for years, where we’ve talked about firewalls at the perimeter of the organization. It’s not that you no longer need firewalls, but you also have to consider communications that jump over firewalls, such as wireless communications, and certainly cloud computing and virtualization, those kinds of things. Then, finally, we heard that we need to measure security outcomes: to measure the effectiveness of the security mechanisms and the security policies that we put in place.

Next, the first panel was on system stability and reliability, having to do with threats to, and measures of, the availability of information and services and the reliability of systems. They told us that many existing clinical products lack the functionality needed to support best practices in security. They told us that the systems embedded in biomedical devices regulated by the FDA are a huge problem. The reason is that computers are embedded in these regulated biomedical devices, and while they’re often managed by vendors, they can’t be modified. The FDA says you can’t modify a device without letting us know and having the modifications approved, so routine updates to the operating systems can’t be made, and routine updates to the virus signatures can’t be made. They presented a number of examples, and several testifiers repeated that this was a problem for them.

We also learned that the least critical systems are often the ones that are compromised and set up as a back door for hackers to access more important systems. The HIPAA Security Rule requires that organizations conduct an analysis to identify their most critical resources, but we learned that those may not be the only systems that need to be protected. In fact, they may not even be the most important to protect, because the way into the critical systems is often through systems that may not be on that list.

Next slide, please. The second panel was on cyber security, and the first testifier reported the results of the 2009 HIMSS security survey. The most amazing thing we learned was that fewer than half, 47%, of large healthcare provider organizations conduct annual risk assessments. Now, an annual risk assessment is a HIPAA requirement, so if we believe this number, about half of the large organizations aren’t HIPAA compliant. Fifty-eight percent have no security personnel, and HIPAA, again, the Security Rule, requires that someone be named to be in charge of security.

Fifty percent reported, and I think I recorded this wrong here, it’s not 3% overall, but that they spend less than 3% of their IT budget on security.

The second thing they reported was that we need to continually monitor and measure the effectiveness of security policies. We heard that the mechanisms used in security today are not necessarily backed by evidence of the protection they actually provide. One of the examples provided was password rules, like those requiring passwords to be at least eight characters long and to include capital letters, lowercase letters, numbers, special characters, etc. That rule was really put in place when computers were a lot slower than they are today, and it may not be all that valuable in today’s world.
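The computers-got-faster argument can be made concrete with a little entropy arithmetic. This sketch simply computes bits of entropy for randomly chosen passwords under assumed character-set sizes; the sizes are illustrative, not from the hearing:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a password drawn uniformly at random from a charset."""
    return length * math.log2(charset_size)

# An 8-character password over all 94 printable-ASCII characters:
print(round(entropy_bits(94, 8), 1))   # 52.4 bits
# A 16-character password over lowercase letters only:
print(round(entropy_bits(26, 16), 1))  # 75.2 bits
```

By this measure a longer password over a small alphabet can out-score a short complex one, which is one reason the old eight-character complexity rule is questioned as hardware gets faster.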

Next slide, please. The third panel was on data theft, loss, and misuse. We heard that portable devices and wireless access present major vulnerabilities. I think we knew that before the hearings, but we heard it reiterated by those who testified. We learned that Web 2.0 social technologies and cloud computing present new avenues for data loss. Web 2.0 social technologies are things like Facebook and YouTube, those collaborative and social technologies.

Audit logs might not be sufficient to detect misuse of information. This isn’t surprising either, because information is often misused by someone who has legitimate access to it but uses it for other purposes. Role-based security is important, but roles vary across institutions, so creating a common policy and a common set of roles would be very, very challenging.

Then the final slide: the final panel was on building trust, and we heard that security and privacy are foundational to EHR adoption. I think we knew that as well, but again, it was good to hear it reiterated by those who testified. Cyber crime is on the rise, and increasingly health data are being targeted.

We heard that security plays a major role in protecting patient safety, as well as patient privacy. I think most of our efforts have been directed toward the protection of protected health information, or identifiable patient information. But protecting the integrity of data is important to assure the accuracy of patient records and the quality of care a patient is provided, and in fact it is important to protecting patient safety as well. The example given was protecting the integrity of clinical guidelines.

There were several areas where people said we need what I call here baseline policies and standards. What I mean by baseline is a level of protection that an organization can assume when it starts to exchange information with another organization. We need some basic policies and standards so that before I exchange information with another organization, I know something about how people are authorized within that organization, something about the level of authentication they perform, that is, how strongly they prove that a person is who they claim to be, their access control, and, finally, their auditing. We also heard that statistical profiling is important, particularly in the realm of detecting misuse. And that’s the end of my presentation.

John Halamka – Harvard Medical School – Chief Information Officer

Great. Dixie, thanks so much. We want to open it to questions from the committee. We don’t have a raise-your-placard function electronically today, so: questions from the committee on either the details of what we’ve approved so far and its revisions, or on the security hearings that Dixie just described.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

This is Wes.

John Halamka – Harvard Medical School – Chief Information Officer

Go right ahead, Wes.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

I have a couple of questions. Most of them follow a common theme. By the way, Dixie, I just want to say how much I appreciated how clear your tutorial was, and extremely helpful. It blew away a lot of fog for me.

From time to time, in discussing a specific recommendation, particularly a right-column recommendation, you made a point to emphasize how commonly it was used. Other times, you didn’t, and I didn’t know whether that was because everybody already knows it’s common, or it’s not common, or you were just managing your time, but I want to ask a couple of specific questions like that.

Dixie Baker – Science Applications International Corporation – CTO

Actually, that’s a good example because that happens to be one that I did some investigation about, as you, John, and several of you know. I did some independent polling of people out there, IT people in particular, the ones who are really in charge of synchronizing systems, and nobody I talked to uses the Consistent Time integration profile. But NTP is very, very commonly used, yes.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

So, but as I understand it, our requirement is to use both. Is that right?

Okay. For ATNA, the way I read it, and I read it closely and want to make sure I wasn’t over-reading it: systems are not required to create logs in ATNA format. They’re required to present logs for comparison in ATNA format. Is that right?

This is John Halamka. Just to give you some additional color on that: ATNA is really a communication standard to a registry. The idea is that you may have dozens of different systems that log in all kinds of different ways, but they use the syslog function of an operating system to collect certain data elements in a standardized fashion, which can then be exchanged.

We had a lot of debate about this. Do you need a technical standard for auditing, or should you just have a policy that you can send an audit from place-to-place, even if it’s in a PDF format or something? What we heard in the security hearings was that audit exchange is actually incredibly important for cross-organizational trust. The ability to say, I need to see an audit of what actually was done. And so the notion that there would be at least a standardized set of data elements and a mechanism to communicate it between organizations seemed important. I think, Wes, to your point, we aren’t requiring – and, Dixie, please add to this – that every existing legacy system be retooled so that it somehow natively uses ATNA inside the application.
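The audit-exchange idea John describes, many systems logging natively but exporting a standardized set of audit data elements for cross-organizational review, can be sketched roughly as follows. This is a hypothetical illustration: the element and attribute names are invented for the example and are not the actual ATNA audit message schema.

```python
# Illustrative sketch of exchanging a standardized set of audit data
# elements between organizations. Element and attribute names here are
# hypothetical, NOT the real ATNA audit message schema.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_audit_message(event_code, user_id, patient_id, outcome="success"):
    """Package one audit event as a small XML message suitable for
    forwarding (e.g., via syslog) to a shared audit repository."""
    root = ET.Element("AuditMessage")
    ET.SubElement(root, "EventIdentification",
                  code=event_code, outcome=outcome,
                  timestamp=datetime.now(timezone.utc).isoformat())
    ET.SubElement(root, "ActiveParticipant", userID=user_id)
    ET.SubElement(root, "ParticipantObject", patientID=patient_id)
    return ET.tostring(root, encoding="unicode")

msg = build_audit_message("record-access", "dr.jones", "MRN-0042")
```

The point of the sketch is the one made above: each system keeps logging however it likes internally, and only the small standardized message crosses organizational boundaries.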

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

Just to restate what I think I heard from both of you together: for an EHR to be certified, it will have to be able to conform to the ATNA protocol for sharing its audit log. It seems, and here I’ll speculate, it seems likely that vendors could find an open source or marketed package to create an adapter between their log format and ATNA, as long as they’re collecting the right data. That’s my speculation, but you might say whether it sounds right or not.

Well, ATNA uses – I don’t want to get too far … it uses the ASPM standard for specifying the data elements that are collected in an audit. I won’t say it’s universally used, but it certainly is a widely recognized standard for the data elements it collects. As for exchanging, I really can’t speak to the availability of the components you’re suggesting. I don’t know, but that certainly could be an approach, because John is right: ATNA addresses the exchange of audit messages with a common audit repository, and, quite frankly, an audit repository is not a HIPAA requirement. On the other hand, as organizations move out, especially to fulfill the ARRA requirement of accounting for disclosures, they will need something like that.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

I won’t debate that with you. Personally, I don’t think that an audit log is how you account for disclosures, but….

What I’m trying to find is a way to enjoy ATNA without being scared of it. And it would make me much more comfortable to know that it can be easily externalized and handled by a third party package, whether such package exists today or not.

That’s a good comment, and something we should clarify; we discussed this very early on. Not only ATNA but just about any of the security requirements could be provided by a third party. If they bring in a third-party product, for example Active Directory, to do authentication, and they make that part of the certification to show that it fulfills these requirements, there’s certainly nothing wrong with using an integrated component.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

Some specifications admit to that more easily than others, and that was really what I was trying to uncover. In the interest of time, I’m going to just make a comment and … the last question, and give up the floor here. I think what we heard on the issue of biomedical devices and the failure of vendors to provide timely operating system patches is not that the FDA requires this. In fact, that is a fallacy that the biomedical device manufacturers perpetuate. We’ve investigated that at Gartner, and that’s certainly our conclusion; it’s the conclusion of HIMSS as well. I just wanted to make sure there was no misunderstanding to the effect that the FDA requires recertification to issue a security patch to a biomedical device.

And you described, and I’m working from notes here without being able to look at the presentation, but you described the use of HITSP 112, I think, in one of your recommendations.

Actually, it’s a requirement that is intended for general exchange of health information between organizations. I believe it’s intended to be foundational to XDS and XCA and all the others. And I don’t believe it’s widely used.

John Halamka – Harvard Medical School – Chief Information Officer

It’s XDS, XDR, XDM, and XCA, and certainly there are multiple implementations of it. I don’t have statistics precisely, Dixie, as to how many organizations are using any of those standards today, but we could certainly, for the committee, try to get from the folks at IHE both an international and national statement about that.

M

Although I think it’s easy to say, John, that it’s under 10% in the United States, probably under 5%.

There is a statement, yes, of how SOAP and Web services architectures can be used to exchange data, and recall that what we said as a committee was that either these SOAP approaches or RESTful approaches were fine.

This is David. I think that the thing, as I recall from 112, and I don’t have it in front of me, is that it does presume that the mode of sharing is XDS oriented, so it packages the data on the device, even if the device isn’t participating in an XDS registry repository setup in the XDS format, and that is a requirement that very few people currently meet.

John Halamka – Harvard Medical School – Chief Information Officer

It’s XDS, XDR, so it’s just a mechanism of transmission, and it is agnostic about how, for example, the actual registries, if you needed a registry, would be built.

Right, and it goes beyond standard mime types, for example. It has additional constraints that would not be common in, for example, e-mailed messages that have encrypted content in them and so forth, so it does go beyond what people commonly do today because it anticipates the need for interchange through an XDS repository at some point.

M

John, you’re making the assertion, and I want this really clearly stated because I’m not convinced it’s accurate, that those protocols are agnostic to the architectural organization. I don’t believe they are, and I hear you saying they are.

I didn’t quite hear John say that, although it might be possible to imply it from what he said.

M

I heard … content only.

John Halamka – Harvard Medical School – Chief Information Officer

No. The service collaboration is a transmission vehicle that enables using XDS, XDR, XDM, or XCA for sharing documents, whether the architecture requires a repository of document metadata or point-to-point communication between two organizations using a SOAP approach or a thumb drive. The content that is sent could be a variety of packages, meaning a CCD or CDA package could be sent by this mechanism, or other content. So the intent of a service collaboration is to be a transmission mechanism, as opposed to a capability, which is more the package itself.
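The transmission-versus-content separation described here can be pictured as a metadata envelope around an opaque payload. A minimal sketch, with field names that are invented for illustration (loosely inspired by document-entry metadata, not taken from the XDS specification):

```python
# Hypothetical sketch of separating the transmission envelope (metadata
# the transport cares about) from the content package (CCD, CDA, PDF, ...),
# which the transport treats as opaque bytes. Field names are invented
# for illustration, not taken from the XDS specification.
def make_submission(payload: bytes, mime_type: str, patient_id: str) -> dict:
    return {
        "metadata": {            # what a registry/repository would index
            "patientId": patient_id,
            "mimeType": mime_type,
            "size": len(payload),
        },
        "document": payload,     # never inspected by the transport layer
    }

sub = make_submission(b"%PDF-1.4 ...", "application/pdf", "MRN-0042")
```

Because the transport only ever reads the metadata, the same envelope can carry a CCD, a CDA document, or a scanned PDF without the transmission mechanism changing.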

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

I think our next option to clarify that is to see what comments come in on the regulation and have a discussion on the basis of those comments. Is that reasonable?

John Halamka – Harvard Medical School – Chief Information Officer

I think the committee said this is not one of those either/or situations. We know the federal health architecture, the FHA Connect approach, has used this service collaboration, but we think there are emerging RESTful approaches for pushing data, which we’ll hear about in the NHIN discussion later in this meeting, that are very valid, and we would want to embrace both possibilities.

And just two final comments. One is that optionality in a specification for certification has a choice of becoming a tiger or a baboon, in the sense that either you have to do both to be certified, or you now have the situation that two certified systems don’t work together, so I would hope for some clarification on that point. My last comment is just to share an evolution in my view of what it means to be widely used. A year ago, I would have said that the fact that 25 or 50 or N vendors that participate in IHE have gone through connectathons, and may even have gone through a tradeshow to show that they have code that implements a profile, represents widespread usage, or at least the potential to rapidly roll out widespread usage.

I still think that does strengthen the potential, but I have become aware of the difference between vendors having code running in a special version built for the tradeshow, and having that rolled out to their clients, with their delivery teams familiar with how to configure certificates to make the protocol work and a whole bunch of other things that, in fact, took a lot of time during the NHIN trials. So I would say we need to note how many vendors have tested successfully in connectathons for a given protocol. We need to be aware of that, but we can’t take it as direct evidence of the same degree of readiness as something that’s actually been used in production in a lot of places for a long time.

I think that can be said for any demo. I mean, you know, connect-a-thon is a demo.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

No, I credit IHE here: there are two parts. There is a connectathon, and there is a demo. When you get to the demo, it’s a demo. You may find, in a given case, God forbid, that it only works if the patient’s name is … Smith or something, because trade shows are trade shows. But the connectathon that precedes it represents a genuine collaboration among a lot of parties on figuring out how to make it work, and it has criteria for deciding whether it’s working or not. So that’s better than a demo, but short of implementation as a mark of maturity, and I think IHE deserves full credit for having achieved that.

John Halamka – Harvard Medical School – Chief Information Officer

Other comments from folks?

Carol Diamond – Markle Foundation – Managing Director, Health

Yes. This is Carol Diamond. I was going to ask questions about exactly the three things that were just … rather than going over those issues again, I would just reiterate that for ATNA, for … time, and for 112, I think we really have to ask ourselves: if these are not widely deployed, or they make assumptions about a registry-based exchange occurring down the road, are they the highest priority right now, given that the standards we really want to make sure are in place are the ones that are going to facilitate achieving meaningful use?

I also, in that vein, wanted to reiterate a comment, a takeaway that we had in that hearing that I didn’t really hear mentioned today, and it really was a meta-comment. That comment was that security alone is kind of an endless discussion, and that the industry was really, in several ways, asking for policy guidance. In fact, they said: tell us the condition we need to meet and the outcome you want to have, not necessarily the technical specification, because we run the risk of specifying yesterday’s technology.

And I just want to reiterate the importance for all of these issues to have policy guidance, clear policy guidance and expectations articulated because, especially for some of these more technical requirements, while they are directionally correct, they may exceed what technically needs to be done in order to achieve the basic policy. I just think we have to ask ourselves that question again and again.

I totally agree with you. We’ve specified, as I mentioned in other meetings. We’ve specified these standards or recommended these standards in the absence of policy, and I’m encouraged that the policy committee has finally created a privacy and security workgroup because we definitely need somebody to step up and say here are some foundational policies.

Carol Diamond – Markle Foundation – Managing Director, Health

Yes, and I think, on that issue, Dixie, I really wonder even where we have some of these standards. I’m amazed that we have to specify network time protocol, but why are we going beyond those basics in some of these cases? I have the same question about the other standards that were raised. We can suggest that there needs to be a policy created, an outcome that has to be achieved, an objective that an organization has to meet, but the standard is a whole other matter.

Yes. Yes, well, I think we’ll be given the opportunity to revisit those as we review the regulation that’s issued, but I think your points are very well taken.

John Halamka – Harvard Medical School – Chief Information Officer

Carol, these are the exact points that we’ve had many debates about in the workgroup, and we go back and forth. On the one hand we say, you know, everybody uses NTP or SNTP. Pretty much every operating system in the world does, and so stating that NTP should be used to ensure that audit trails are recorded in roughly accurate time isn’t really a burden. Where we then get murky is when we say audit trails are good. Should we specify a policy naming the data elements…?

Carol Diamond – Markle Foundation – Managing Director, Health

Right.

John Halamka – Harvard Medical School – Chief Information Officer

And so we’ve gone back and forth on this, and so certainly I think this is something that is going to be great for the committee to discuss what specificity meets the requirements of meaningful use and helping accelerate implementation once we see the regulations in two weeks.

I really think it’s not … that was part of the trail toward NTP. We were asked to start with HITSP standards and to use standards that have been endorsed by the ONC, and the standard contained in the HITSP CT construct was IHE CT. So, more than anything, it’s probably a remnant of the analysis that got us to NTP and SNTP.

John Halamka – Harvard Medical School – Chief Information Officer

Right. I think we could certainly look at that in some detail, but I think it’s just an implementation guide on NTP and SNTP. It’s not anything much more than that.

It seems, with NTP being so widely used, and especially since Dixie also said that people aren’t using this consistent time profile, that … I don’t know. Why we need to go beyond NTP is not clear to me. I worry that every one of these nice-to-haves adds up. That’s all.

John Halamka – Harvard Medical School – Chief Information Officer

Dixie, one interesting question, and we’re getting into the weeds, is that NTP has a series of servers you could point to: from the atomic clock connected to a computer, to computers connected to that computer, to computers connected to those, and so on. So when you say, use NTP, what’s the source of truth? This can actually get slightly ugly, because there can be variance between the various computers that serve as sources of truth, and therefore you could get audit trails that don’t reflect precisely what happened. So, again, the committee has to decide how specific we’re going to be. Do we say you must go to stratum one servers that are connected to atomic clocks for your NTP synchronization? It’s no big deal to implement; it’s just a policy statement.
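For concreteness, the clock synchronization being discussed rests on a simple piece of arithmetic: an NTP/SNTP client records four timestamps in a request/response exchange and estimates its offset from the server. A minimal sketch of that standard offset formula, with hypothetical timestamp values:

```python
def ntp_offset(t0: float, t1: float, t2: float, t3: float) -> float:
    """Standard (S)NTP clock-offset estimate.
    t0 = client transmit time, t1 = server receive time,
    t2 = server transmit time, t3 = client receive time.
    A positive result means the client clock is behind the server."""
    return ((t1 - t0) + (t2 - t3)) / 2.0

# Hypothetical example: client clock about 5 s behind the server,
# with a few milliseconds of network round trip.
offset = ntp_offset(100.000, 105.002, 105.003, 100.004)
```

The stratum question raised above is about where t1 and t2 come from: a stratum one server takes them directly from a reference clock, while each additional hop away from that clock adds its own variance to the estimate.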

Yes, and that gets back to a discussion we’ve had recently that isn’t really reflected in these because, like I say, it’s recent: what level of variance can be tolerated when you’re doing an audit trail or accounting for disclosures across organizations? That’s really the “standard” that somebody needs to set. But again, that’s a policy decision.

Yes, I just wanted to emphasize a couple of points, because I think there’s perhaps some confusion or misunderstanding about some of the things being defined here. There are some base standards that are being called upon, named, and recommended as the adopted standards. Then there are some guidance documents, implementation guides, guidance documents, however we want to call them, that help explain how those base standards are to be implemented.

One example of that is the HITSP service collaboration 112, which has already been discussed. Another example, of course, is the one we were just talking about, the IHE consistent time profile, which calls on NTP and SNTP, or the IHE ATNA profile. All of these are guidance documents that call upon base standards, and I think it is upon us to help the industry understand not just which standard is to be used, but how to use it, in the sense of how to technically implement it. Not to define policy, as in the policy that will be needed to answer policy questions, but to help the industry understand how to technically implement some of these base standards. I think that’s a distinction we should make, and perhaps we should even clarify in the documentation which are the actual base standards being called upon and which are the implementation guidance documents that will help technically implement those standards.

Cris Ross – MinuteClinic – CIO

This is Cris Ross. I’d like to go back to what Carol was talking about and earlier questions. I admire the work that the workgroup has done here; this is incredibly technical stuff. But if you’re debating, going back and forth, I think one of the important things, at the top of your slide 12, is that many existing clinical products lack the functionality needed to support security best practices. The real question is going to be: what is achievable that is minimally sufficient to meet the intent of all of the legislation in front of us? I would urge the same point Carol made, that the nice-to-have stuff may in fact get in the way of progress.

On top of that, I had one question as well under the section demystifying 2011 recommendations. The recommendation about providing a capability to create an electronic copy of an individual’s electronic health record makes reference to HITSP capability 120. I don’t know if this is a question for the security group or for HITSP or whomever, but one of the things we talked about was separating transport and security from document types, and HITSP capability 120 is particular with respect to document type: an HL-7 document type is the only thing that is supported.

If the purpose of this is to support exchange with outside entities like health information exchanges, which might include Microsoft HealthVault or Google Health: HealthVault does support the CDA standard, but today Google Health does not. So this would either require that exchanges implement a whole new document type or not. I guess my question is, how do we get to the issue we raised before, which was, I think, a consensus that we should separate transport and security from document type and document content?

You know, Dixie, I looked at this because I was curious about that, and Section 323 says this capability requires the exchange of documents in the HL-7 clinical document architecture format.

John Halamka – Harvard Medical School – Chief Information Officer

What it provides is the metadata in a CDA wrapper, and so it’s for unstructured data that could be free text, PDF, TIFF, CCD, whatever, but it does require the metadata in a standard CDA header format. That is correct.

Cris Ross – MinuteClinic – CIO

So I guess that just comes down to the question of whether the header constitutes the document itself, which I know gets into technical arcana, but I think part of what we had reached some consensus on was to try to move that boundary as far as possible, to separate document type from security and transport.

I think you could debate whether requiring the header means that the document needs to be in that type. I’ll just leave that comment to others to….

John Halamka – Harvard Medical School – Chief Information Officer

Jamie, are you on the call? You could speak to this, because you’ve worked with it. On the clinical operations side we’ve said that, for 2011, there’s a lot of variability we would accept in content, including TIFF, PDF, CCR, CCD, whatever, but the intent was that you wanted to wrap these things in at least some metadata that would be….

That’s exactly right. That is the recommendation for communicating summary records, after-visit summaries, and different kinds of documents to patients and PHRs: when using the unstructured alternatives, they should have some consistent metadata.
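The wrapping being confirmed here, unstructured content carried inside a document with a small standardized metadata header, might look roughly like this. The element names are illustrative only and do not constitute a conformant HL-7 CDA header:

```python
# Hypothetical sketch of the "metadata wrapper" idea: unstructured content
# (PDF, TIFF, free text, ...) carried as base64 inside a document with a
# small standardized header. Element names are illustrative only, NOT a
# conformant HL-7 CDA header.
import base64
import xml.etree.ElementTree as ET

def wrap_unstructured(payload: bytes, media_type: str,
                      patient_id: str, title: str) -> str:
    doc = ET.Element("Document")
    header = ET.SubElement(doc, "header")
    ET.SubElement(header, "title").text = title
    ET.SubElement(header, "recordTarget", patientID=patient_id)
    body = ET.SubElement(doc, "nonXMLBody")
    text = ET.SubElement(body, "text",
                         mediaType=media_type, representation="B64")
    text.text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(doc, encoding="unicode")

wrapped = wrap_unstructured(b"%PDF-1.4 ...", "application/pdf",
                            "MRN-0042", "After-visit summary")
```

A receiver can read the header to learn what the document is and who it is about, without having to understand the wrapped payload format itself.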

Carol Diamond – Markle Foundation – Managing Director, Health

This is Carol. This issue of the metadata having some of these requirements, but not the actual package, sorry, the content, I think is a slippery slope. It’s the same issue, John, that we discussed on the ebXML and the XER standard. I just think we should seriously consider this.

I think that, since we put together our standards, we have heard testimony from the implementation hearing, and we heard it in the security hearing: simple is good. Make it simple. Make it simple. Quite frankly, I think it would behoove us to revisit all of our recommendations from the perspective of: is this as simple as we can make it, and will this work for the little guy? I think that’s a legitimate activity for us to undertake.

John Halamka – Harvard Medical School – Chief Information Officer

I think what would be great – none of us know, other than David, if he’s still on the phone, exactly what the interim final rule says.

Because, of course, we’ve been a federal advisory committee, and so, exactly as you describe, Dixie, our next body of work is this: we’ll see what the interim final rule says in the next few weeks, we’ll glean comments, and then we will look at the comments from all stakeholders and ask what we can do to refine our recommendations. This debate on consistent time and auditing is one that we’ve gone back and forth on. The notion of how we ensure separation of transmission and content is now a guiding principle, so let’s make sure we’ve done that appropriately.

This is David McCallie. Just to reiterate the keep-it-simple point: start simple is maybe a better phrase, and we have a naturally stepped process ahead of us that’s spread out over five years or more. Starting simple and then getting more complex once we’ve mastered the simple start seems a sound way to proceed. I think some of these goals we have in mind we all want as the end-game goal; the question is whether we should start with some stepping-stones in that direction to get as many people engaged as possible. Obviously, that’s a high-level, philosophical approach, but in this case, since we already have a guaranteed set of steps that we’re going to have to meet, we should take advantage of that.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

This is Wes. If I could just elaborate on that a tad: speaking as a recovering techie, I would say the sure way to know that you won’t have to change something is to design it all up front, and a sure way to know that what you are going to do works is to design as little as possible up front. There has been a sort of wave of recognition of the latter philosophy in the last few years that may not have been there when … was defined, when Web services were defined, and so forth, going back. What I would be afraid of is if we only looked at the first year and didn’t at least have a vision for how what we do in an early step can expand to later steps. I am concerned, on the other hand, that trying to get to the five-year solution at the start may introduce a lot of new things at once.

John Halamka – Harvard Medical School – Chief Information Officer

In general, Wes, we’ve said 2013 is really the time that we’re going to start seeing a lot of interoperability, and that there’s a glide path to get there, and that Jamie’s group in particular has been very careful to make sure that for each of the content standards that we keep that in mind. That is, we know where the world is today. We know where we want to get. We better have a clear glide path between here and there, so you’re right.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

It might be moot to ask since we’re all planning to spend Christmas Eve reading the regulations, but aren’t there interoperability requirements in 2011?

I understand that there’s being able to receive structured lab data, being able to send a patient summary, being able to do e-prescribing, and being able to send quality data, although we don’t know yet whether that’s protected health information or not.

John Halamka – Harvard Medical School – Chief Information Officer

Right, and that's a reasonable summary of 2011. For lab, HL-7 2.5.1 with implementation guidance on UCUM and SNOMED compendiums; for e-prescribing, NCPDP SCRIPT 10.x; for clinical quality, a set of data types in a CDA format that has been described, with some options of using some PQRI XML constructs; and for clinical summaries, as we already talked about, PDF, TIFF, text, and CCD as possibilities, with moving to a CDA document a longer-term requirement for 2013. Jamie, anything you want to add to what I’ve just said?

That's what came out of Jamie’s group, and what Dixie has tried to do in her presentation is describe the minimal security constructs; one hopes that when you look at 2011, there isn’t anything too burdensome in what she’s described. I think what we’ve just heard in today’s discussion, though, is that we need to make sure we’re clear on consistent time and how to make it as simple as possible, on what we’re doing with audit, and on ensuring separation of transmission and content so that we don’t constrain innovation. That is, Dixie’s group is really about transmission from place to place, not what you’re transmitting.

M

Understood.

John Halamka – Harvard Medical School – Chief Information Officer

Again, just to reiterate, it has always been a specific recommendation to ONC that both the SOAP approaches that have been articulated by HITSP and IHE and the RESTful approaches that have been discussed in many quarters are acceptable, so we hope that the genius of the "and" will work for everybody.

This is Walter again. I just wanted to add that I think a lot of this is really going to come down, at the end of the day, to the wording of the regulations; not just in terms of specifically naming things, but how they are named. In other words, what we were trying to do was identify the specifics that rules can point to, to say things like provide or demonstrate the capability to do this or to do that. The decision to implement some of those capabilities is part of the policy question for which the policy group will need to provide some of the framework. But in many respects, all these standards are really the places that regulations and directives point to, in the wording of an EHR being capable of using this encryption algorithm or that type of consistent time. I think that’s the key element that will be looked at in terms of specificity in the regulations.

I’m not sure how you’re using the word ‘implement’, but I’m more in Wes’ camp on this. The standards we’re recommending are intended to be certification criteria. If you’re a vendor building a system to have certified, or even building a system for your enterprise to have certified, and there’s a standard in the certification criteria, you have no choice: you implement it. Now, whether an organization then uses that implemented capability, that’s where the decision comes in, but to certify a product, the criteria for certification should be very, very clear and very straightforward.

No, the implementation of those capabilities. In other words, the EHR tools have the ability to demonstrate that they support a particular standard, and the decision to use that capability rests with the user of that EHR product.

So I guess it almost sounds like you two are violently agreeing with each other. The question, I think, is whether there is any verification as part of the HITECH Act that the decision to implement is consistent with the recommendations for the products.

We have no way of demonstrating that they will. We’d like to believe that it’s easier for them to meet these needs using our approach than to implement one thing to get certified and do something else, but we don’t actually know that. I would say that the area where we need to be most careful is in those security standards, or those format standards, that are required to be in place for interoperability. We have to be sure that there are not so many options that two different standard systems don’t interoperate. But that’s something we can look at going forward.

John Halamka – Harvard Medical School – Chief Information Officer

This is what I’ve always said: ideally you get to one, but maybe it’s two. You just don’t want 20, because every time you say "this and this," it means the vendor has to implement both.

Right. Well, a very rich discussion so far, and I think we’ve got our work ahead of us when we get to the interim final rule and the comments begin to roll in. We’ve targeted today a couple of areas to revisit: have we done exactly the right thing on consistent time, and have we done exactly the right thing on audit trails with regard to technology policy and technology standards? As we talked about on the audit trail side, and we’ve debated this before, you might state that here are seven data elements that need to be captured and, on request, exchanged, and that becomes a policy statement. I guess you could call it a kind of standard, in the sense that you are specifying at least the nature of the data elements to be captured, but not specifically how they’re stored in a registry or repository, or how they’re communicated.

These are the kinds of things I’m sure we will discuss at our next meeting, along with the issues we’ve brought up today about the service collaboration 112 and capability 120: making sure there’s clean separation of the transmission vehicle from the content, and making sure that we keep it simple and engineer for the little guy. We know we have many stakeholders, and we have to make sure that both the little guy with point-to-point exchange and the federal health architecture with FIPS and FISMA exist in our ecosystem, and achieving good engineering for both is something we have to balance carefully.

Well, let me close out this discussion, Dixie, and then move on to our clinical operations workgroup taskforce on vocabulary. Just as people have talked about the importance of transmission and security and privacy, there’s unanimity that it is important that we have vocabulary controls and that we, over time, because we know this is a journey that’s going to take multiple years, have the capacity to record structured data, especially for quality measures, because the more structured data we have, the more possibilities exist for providing decision support and feedback to clinicians and for supporting healthcare reform. Jamie, please present what your work in the taskforce has produced thus far.

Sure. Thank you. This is going to be actually not so much an update on what has been done so far, as it is going to be an outlook on our plans for 2010 because, in reality, we’ve just gotten organized.

Next slide, please. So we did hold, and it’s hard to believe it’s been a month already, but we did hold a kickoff meeting of the vocabulary taskforce. We’ll start regular meetings in January, and we’re planning to meet the day before each of the standards committee meetings.

What you see here is our charge from ONC: to identify the gaps and issues related to vocabulary and then to make recommendations. If we go to the next page, you can see the membership. We’ve invited members of all of the SDOs, the standards development organizations whose vocabulary standards have been recommended by the standards committee. We’ve invited all of the standards committee members. And we have, of course, a few ONC members on the committee.

Next slide, please. What I wanted to start to focus on is what we’ve heard already and what we reviewed in our kickoff meeting from the standards committee. Of course, we heard input from the implementation workgroup on the need to create publicly available vocabularies and code sets, to ensure that they’re easily accessible and have very straightforward updates, and the implementation workgroup discussions also noted some licensing issues. An example I’ll point out is the medication cross-maps, where RxNorm has some cross-maps to proprietary vocabularies, and using those cross-maps may require licensing.

We also got input from the clinical operations workgroup, which noted a number of different gaps, both in the availability of cross-maps and in the definition and publication of subsets and value sets. I will take a minute to differentiate between subsets and value sets. A value set is a particular list of codes or concepts that describes the universe of vocabulary for a particular purpose, whereas a subset is more a convenience for implementers, such as a list of the most frequently used lab tests. We wouldn’t want to constrain lab results to only that list, because then what happens if you have a test that’s not in that list, and the same goes for problems. So, for example, value sets are being defined for the quality measures, whereas subsets are more convenience and guidance for implementers. A number of those gaps were noted throughout the clinical operations recommendations.
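The value set versus subset distinction drawn here can be sketched in a few lines of code. This is an illustrative sketch only; all codes below are invented placeholders, not real LOINC or SNOMED identifiers.

```python
# Illustrative sketch of the value-set vs. subset distinction.
# All codes are invented placeholders, not real vocabulary codes.

# A value set: the complete universe of codes valid for one purpose,
# e.g. the codes that count toward a particular quality measure.
MEASURE_VALUE_SET = {"C001", "C002", "C003"}

# A subset: a convenience list for implementers, e.g. the most
# frequently used lab tests. It guides interfaces but does not
# constrain which results are acceptable.
FREQUENT_LAB_TESTS = ["L100", "L101", "L102"]

def counts_toward_measure(code: str) -> bool:
    """Membership in the value set is definitive for the measure."""
    return code in MEASURE_VALUE_SET

def accept_lab_result(code: str) -> bool:
    """Any code is accepted; the subset never restricts results."""
    return True

def is_common_test(code: str) -> bool:
    """Subset membership is purely informational."""
    return code in FREQUENT_LAB_TESTS
```

The point of the sketch is that `accept_lab_result` ignores the subset entirely: a rare test outside the frequent-use list is still a valid result, while the value set is a hard boundary for its purpose.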

We also, in that workgroup, noted gaps in terms of the need for processes to govern the creation, maintenance, and selection of vocabulary subsets and value sets, and for their publication, making them easily available and so forth. We also noted gaps in binding value sets to particular content exchange standards where that’s necessary. We also heard from the quality workgroup that there are needs in the vocabulary area for value set determination for each of the quality measures, for the processes around the selection, management, and promulgation of those value sets, and for coordination of these processes with the external or third-party quality measure stewards.

If we go to the next slide, please, now I’m going to hand it over to Betsy. We’re going to do a bit of a tag team here to talk about some of the process gaps and issues that we discussed in our kickoff.

Betsy Humphreys – National Library of Medicine – Deputy Director

I think one of the things that was evident to me from the discussion on the first vocabulary taskforce call is that one of our first tasks is going to be to make sure that everyone on the taskforce is talking about the same thing when they talk about governance, infrastructure, licensing, and value set and subset processes, because I think most of us realize that multiple definitions of all of these terms were being discussed simultaneously when we were starting out.

There are obvious issues here in terms of determining who gets to make changes to what on what schedule, and, if you want to make changes, whom you are supposed to deal with. Some of the governance issues for certain things of concern to the vocabulary taskforce are pretty well laid out in the law, and others, of course, are not. Then there’s the issue of infrastructure, and we’re dealing with at least two kinds: infrastructure that will make it easier for those who are creating subsets and value sets to do so in an effective and efficient way that allows for updating and versioning, as the requirements may vary by the different types of things being created; and infrastructure that makes it very easy for people to pick up and go, and understand what they are supposed to be using for what purpose. We obviously need to worry about both of those things.

We need to pin down exactly which licensing issues are the biggest pinch, and one of the things the vocabulary taskforce is going to have to do is work on priorities for all of these things. There are many tasks to be done, but some will be much more valuable and immediately useful for helping people achieve meaningful use, and we want to get on those. We don’t want to do them in such a way that we are designing something that only works in the short term and is not a stepping stone toward a broader, good infrastructure for the future. So there’s definitely a lot of education and communication to do, and one of the issues I think we need to deal with here, and I’m not sure this came up on the call, is how we can make effective use of the extension centers to help people with this activity.

Then there are a lot of coordination and dissemination issues that arise from the fact that there are multiple parties issuing different vocabularies and classifications that have to be used by the same community. Schedules for some of them are set by law in terms of when updates are issued, and the laws don’t agree on when everything should be issued, so there are a lot of interesting issues that have to be addressed here. As one who would like to see much greater coordination among these activities than is currently the case, I personally feel that the legislation and meaningful use and so forth offer us an opportunity to achieve greater coordination and simplicity here. I think some external force is necessary, and maybe we have it now.

Next slide, please. Again, these are some of the specific issues: the cross-maps, the RxNorm business that Jamie referred to, issues about testing, and where we are with a lot of this. One of the things that was evident is that we do need to bear down with the people who made some of these comments and make sure that we clearly understand exactly where they see the problem, so that, as the taskforce moves ahead, we’re actually focusing on the problems that people really need fixed and not assuming that we know what those are. At its current level of specificity, some of this isn’t specific enough for us to move ahead in a sensible way, but we know who the commenters are, so we can obviously follow up and get more information.

Great. Thanks. And so in terms of our planned approach to this, certainly as Betsy said, we need to determine priorities in these areas. We really need to focus, I think, initially on both process definition and level setting on those areas. And then, as we move into hearings, we want to understand what would be most helpful to implementers.

One of the things that has really come to light in our initial discussions is that the smaller provider implementers in particular are mostly not very close to meaningful implementation of controlled vocabularies, so we want a structure for gathering input that can really tell us what would be most helpful, and we may in fact need to demonstrate different models in order to get good input. We also need to be able to describe what the processes are. So we’ve discussed an approach where each of our meetings, probably after the first one in January, would include both a public input or testimony section and panels with invited experts on particular issues, as we move through our list, to get input from all sides.

Next slide, please. So as I mentioned, we do plan to have monthly meetings right before each of the standards committee meetings, and we’re in fact working on some background documents in preparation for the January meeting. One of the things we also need to look at, which we didn’t really get to in this presentation, is the requirements, as John Halamka was saying earlier: what are the requirements for 2011 versus 2015? What’s needed immediately versus in the long term, and how do we really differentiate between those?

One of the areas where we did discuss that in our kickoff meeting was in terms of some of the platform requirements where there are, for example, a very small number of subsets and value sets that need to be easily available to implementers for 2011, but as that number of subsets and value sets grows over time, what’s the platform that will make that manageable and easily accessible that may not be needed in 2011, but that may be needed by 2015? So those are some of the issues that we also plan to tackle. That’s the end of our prepared presentation. I’d love to take questions and discussion.

John Halamka – Harvard Medical School – Chief Information Officer

Questions from the committee on vocabulary and all the issues that Jamie and Betsy have outlined?

Nancy Orvis – U.S. Department of Defense (Health Affairs) – Chief

Jamie, this is Nancy Orvis from DoD, and Betsy. Thank you for a really good summary of how this needs to start. I think the idea of really defining the process for narrowing these vocabularies down for use is extremely important, given some of the prototypes that we in DoD and the VA did a few years ago. One of the key things about implementing controlled vocabularies is that the organizations on the sending and receiving sides have to agree to take their organization’s content and map it to the reference terminology in the same sequence. The issue is that if one side says, I pick this hierarchy, then hierarchy D and …, and the other one does hierarchies A, B, C in order to find the right concepts, you come up with different identifiers. That may be a little esoteric for today’s discussion, but we would be happy to rejoin this and help figure out how we can get some testimony on some of these lessons learned on implementing vocabularies. But again, thank you for laying out a very good schedule and structure for these meetings.

David McCallie – Cerner Corporation – Vice President of Medical Informatics

This is David McCallie. Jamie and Betsy, my thanks as well for taking on this extremely important and difficult task. I think, in some ways, this is more important than many of the other things that we have stubbed our toes on around consistent time versus network time and things like that, so this really should get a lot of attention.

My question to you is what your strategy is, and you may not have the answers yet, for addressing the proprietary nomenclatures. The one I’m thinking of most specifically is the CPT code set, which has a tremendous amount of influence on the way our systems are designed. Obviously everyone knows that, but if the end goal is to get paid for doing something, then that drives backwards up into the system and influences everything from the way order sets are designed, to orderables, to data repositories; even the names and subsets of lab results all kind of back up from the CPT code. I’m curious as to what your strategy is for including the AMA in that process….

Even though we did not include CPT as one of the specific recommendations out of any of the workgroups of the standards committee, we have discussed adding the AMA to the taskforce for exactly that purpose. We’ve also noted that the exact set of issues you’ve talked about in terms of CPT licensing is on our list of things to prioritize, analyze, and make recommendations on. Beyond that, we don’t have more specific plans.

David McCallie – Cerner Corporation – Vice President of Medical Informatics

The issue that we run into occasionally is that it’s a great goal to have a kind of rational, clinical orientation to, say, something like an order for a radiology procedure. But if the department only gets paid when those procedures map cleanly to CPT codes, the CPT code wins that debate. That’s changing somewhat, but it’s been a struggle in our implementations, so I would urge you to at least consider that as you move forward.

Yes. This is Walter Suarez. Just a couple of comments, and this is actually similar to the things we discussed with privacy and security. One thing is that we have defined the standard vocabularies, the recommendations for the standards for vocabularies, so we have the actual codes, if you will. But a different thing, and perhaps where some of the complexities become real, is the implementation, the harmonized implementation of those vocabularies. Again, just as with privacy and security, one thing is the standard itself, the base standard; another thing is the implementation guidance.

With HIPAA, going back to what happened then, the standards were separated into a standard definition and then an implementation specification, which was basically the guidance for implementing the actual standard. Here, what I see with respect to vocabularies is the importance of two very critical elements. One is the harmonization of implementation guides for the use of the actual standards. The second is the crosswalks or maps or general equivalence maps, however one wants to call them, that let the vocabulary standards be mapped back to earlier versions, mapped across to other previously used vocabulary nomenclatures, and mapped forward to newer versions. So, besides of course looking at the elements of governance and infrastructure and licensing, is the intent to help define and develop, or promote the development of, these harmonized implementation guides on how to use these code sets, and to develop standardized, sort of “official,” cross-maps or crosswalks?

Yes, I think that’s exactly within our scope, to the extent that those things have been identified within the scope of meaningful use.
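The backward, crosswise, and forward mappings Walter describes reduce, at their simplest, to lookup tables. The sketch below is hypothetical; the codes are invented and do not come from RxNorm or any real nomenclature.

```python
from typing import Optional

# Hypothetical cross-map between a legacy nomenclature and a standard
# vocabulary; all codes are invented for illustration.
LEGACY_TO_STANDARD = {
    "OLD-123": "STD-0001",
    "OLD-456": "STD-0002",
}
# Derive the reverse map for mapping back to the earlier nomenclature.
STANDARD_TO_LEGACY = {std: old for old, std in LEGACY_TO_STANDARD.items()}

def map_forward(legacy_code: str) -> Optional[str]:
    """Map an earlier code forward to the standard vocabulary.
    Returns None where no crosswalk entry exists, which is exactly
    the kind of gap the taskforce has noted."""
    return LEGACY_TO_STANDARD.get(legacy_code)

def map_back(standard_code: str) -> Optional[str]:
    """Map a standard code back to the earlier nomenclature."""
    return STANDARD_TO_LEGACY.get(standard_code)
```

A real crosswalk also has to handle one-to-many and inexact matches, which is part of why a standardized, “official” map is being asked for.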

John Halamka – Harvard Medical School – Chief Information Officer

Any other final comments on the topic of vocabularies?

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

This is Farzad. Two comments. One, I want to underscore what Walter said about the importance of implementation, and also mention that I think part of this would be tools: not just usable subsets that are more implementable or address industry concerns, but also tools like a SNOMED browser or whatever. That could be part of a recommendation that helps people use the terminologies more effectively.

Right, exactly. In fact, that’s why we have tooling for implementers right on one of our slides, with SNOMED as the example for a query tool and things of that nature. That’s exactly one of the things on our list.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Exactly. And then I wanted to ask Walter, given his public health background, and maybe I missed it in the earlier discussion, whether the immunization registries and CVX codes need to be involved in this terminology discussion.

Yes, this is Walter again. Yes, I think it will be very valuable to include that terminology, and perhaps even to go farther in identifying some of the additional terminology used within public health. Certainly immunizations is one of them, and it will be important to highlight it as a specific item within the vocabulary standards.

John Halamka – Harvard Medical School – Chief Information Officer

Was there another comment? I thought I heard….

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

Yes. This is Wes. I wanted to ask about the relationship of this to certification, particularly when it comes to subsets. The way subsets were described, a subset is an implementation recommendation. I’m thinking of lab values for 2011: there are, you know, 20,000, give or take 10,000, lab codes, and there are 700 to 900 that constitute the vast, vast majority of all lab results. As I understand it, it’s likely that those 700 to 900 would be a subset that is an implementation recommendation.

If that is true, what is the definition of a certified EHR? Is it one that accepts the 700, maps them into whatever codes it wants to use internally for those concepts, and accepts the others as NOS? Or, because this is only an implementation recommendation, does the certification requirement become the entire LOINC code set? Then, finally, I want to ask what leverage we have over the labs with regard to sending this data in these formats with these codes.

I think a couple of things. These are great questions that certainly deserve more discussion among the full committee once we see what the rules say; we ought to discuss it in that context. But the recommendations made by the committee, for example with the lab implementation guide, minimally require the use of particular subsets of LOINC and SNOMED: a SNOMED organisms table and the 95% most frequently used routine lab test names. Because that implementation guide is part of the recommendations, I think the use of those particular subsets would become a certification requirement. Other parts of the committee’s recommendations are as broad as saying you should use LOINC for lab tests and SNOMED for problems, so it’s not restricted to that subset. But it would seem to me, and this is just personal opinion, so I don’t know what’s going to be in the rule, that the use of those particular subsets would be part of the certification requirement.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

Thanks.
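One possible behavior behind the certified-EHR question Wes raised, accept the common subset and map it to internal concepts, pass everything else through as NOS, might look like the sketch below. The internal concept names are invented, and this is an illustration of the idea, not a statement of what certification will actually require.

```python
# Sketch of one possible EHR ingestion behavior for lab codes:
# map codes in the common subset to internal concepts, and accept
# everything else tagged as NOS rather than rejecting it.
# The internal concept names are invented for illustration.
COMMON_SUBSET_TO_INTERNAL = {
    "2345-7": "GLUCOSE_SERUM",    # illustrative mapping
    "718-7": "HEMOGLOBIN_BLOOD",  # illustrative mapping
}

def ingest_lab_code(code: str) -> str:
    """Return the internal concept for subset members; otherwise
    preserve the incoming code under an NOS tag so nothing is lost."""
    return COMMON_SUBSET_TO_INTERNAL.get(code, "NOS:" + code)
```

The alternative Wes poses, certifying against the entire LOINC code set, would replace the small mapping table with support for the full vocabulary.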

John Halamka – Harvard Medical School – Chief Information Officer

Certainly this whole issue of the lab compendium and the right subset is a very active topic; … McDonald has been working on it, and there are HITSP activities, and of course this vocabulary taskforce will make sure that we have an appropriate compendium to make laboratory interfacing as simple as possible in this country, because that will significantly reduce barriers.

Yes, and not meaning to go off track, but to point out something in John’s comment just now: my understanding is that we should expect to see an updated frequency analysis of the most commonly used lab tests, which leads right back to our discussion of process for maintenance. The current recommendations are based on essentially the last available data on that frequency, but we’re about to get new data, so what’s the process for managing that update?

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

H1N1 is going to jump right up there, huh? A question, though: I understand that the policy committee passed some resolution about labs after hearing testimony from labs on Wednesday. I wasn’t there and haven’t seen the transcript, but is there anything impending, any process, whereby we’re getting agreement from the labs to conform to these codes?

John Halamka – Harvard Medical School – Chief Information Officer

Certainly Jamie has talked to Micky Tripathi and to Paul Egerman, and a variety of conversations have taken place about the policy committee activities. I can summarize: the policy activities seem to be completely in line with the standards committee recommendations, and I have not heard specifics on how we ensure lab conformance, other than that, if meaningful use does require the use of these standards, you would think the labs would be motivated, because the clinicians, the customers of their products, will need them in a specific format.

Yes, and so I would just echo what John said. I think there’s very close alignment and no disagreement between the policy committee and the standards committee recommendations in terms of what the standards are, compliance, and so forth. But then, in a separate area, the policy committee workgroup went outside the standards area to make recommendations on what compliance levers might be used by the department to ensure compliance by labs and providers alike with these kinds of recommendations for labs. But that doesn’t really have to do with the standards per se.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

Thanks.

Nancy Orvis – U.S. Department of Defense (Health Affairs) – Chief

Jamie, this is Nancy Orvis. That’s the incentive. I would suggest we look at incentivization as well as compliance, because we’ve had interest in this topic from laboratory machine manufacturers. One thought that has gone around for a number of years is that, as health organizations put out purchasing contracts for these things, they could give preference to those vendors who already have the LOINC codes embedded in the….

And that’s exactly, I think, part of one of those recommendations that was given to the policy committee earlier this week.

Nancy Orvis – U.S. Department of Defense (Health Affairs) – Chief

Good. I mean, that's a very critical one. My impression is that that would be kind of a new topic for some people in the vendor industry. They hadn’t been focusing on this, but they have a couple of key personnel who finally are trying to educate that community, and that may be what it is as well.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

I just look forward to our finding the lever and tweaking it by 2011.

John Halamka – Harvard Medical School – Chief Information Officer

Thanks very much, Jamie and Betsy, for that. Now, Aneesh, have you joined us on the call?

Aneesh Chopra – White House – CTO

I have.

John Halamka – Harvard Medical School – Chief Information Officer

Wonderful. You have the floor to give us an update on the implementation workgroup and other comments you may have.

Aneesh Chopra – White House – CTO

Thank you so much, John. Actually, this will be brief, because I’m eager to hear from Farzad and company on the NHIN work. As you all know, we have concluded our 30-day sprint to gather feedback, and that process went reasonably well. Two things I’d like to reflect to the group. Number one, we intend to get back on track, if you will, in January with all the conversation about how we prepare for 2013, building off the learnings coming out of the implementation workgroup. We’ve been dormant since the hearing, but we’ll reengage early in January as we all collectively move the work forward. Obviously there’ll be some immediate work to do as a group on the IFR, but as we prepare for 2013, we hope these principles and the ongoing work in this regard will help.

Number two: I did want to report back that, along with Dixie, I had the honor of presenting some of our findings at the Policy Committee meeting earlier this week. The meeting was, I believe, successful in doing a few things, first and foremost reiterating the desire on the Policy Committee side to work closely with us, and a real alignment around some of the findings as being useful not only in our standards work, but also as they think through some of their policy activities. Also, hats off to Cris Ross and the blog community and Judy on the hearing we held. They were very keen on how to incorporate some of those open government principles in the policy work moving forward, so we think this method by which we’ve gone about gathering input may be one that is replicated elsewhere in this ecosystem.

So those are just the two items I wanted to share briefly. We certainly welcome any particular thoughts or concerns, but I’m presuming here we want to get to the fun work of Farzad and Fridsma.

M

Any questions for Aneesh from the committee? Okay. Well, great. Aneesh, I think you’re going to be wandering over to the National Academy of Sciences, so I’ll probably see you in an hour or so. Very good.

Aneesh Chopra – White House – CTO

Yes, sir. Good-bye.

M

Thank you, Aneesh. So, Farzad, let us turn to your NHIN Workgroup. Tell us your thinking and what are some of the steps you envision to get us to a Nationwide Health Information Network.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Hello. So I’m talking on behalf of myself and Fridsma, whom you all know and love as the true standards aficionado. The workgroup is chaired by David Lansky and Danny Weitzner, who is now at the Department of Commerce at the NTIA. We have a lot of great members on this workgroup, maybe more from the Standards Committee than from the Policy Committee, even though this was officially created as a workgroup of the Policy Committee: Christine Bechtel; John Blair; Neil Calman; Jim Borland; Carol Diamond; Collan Evans; Tim Cromwell from the VA; Jonah Frohlich; Leslie Harris; Arien Malec; Marc Overhage; Mark Probst; Wes Rishel; and Micky Tripathi. These are people coming from very different perspectives, bringing a ton of highly relevant experience to bear on the question of how we can create a set of recommendations for a policy and technical framework that allows the Internet to be used for the secure and standards-based exchange of health information. Some of the key requirements are that it be open to all and foster innovation.

The goal is for the workgroup to provide our initial set of recommendations to the Policy Committee in January. I’m sure there will be a lot of implications for the Standards Committee stemming from that.

One of the things that we did, and I think Doug called this, based on the artificial intelligence world, a backward chaining process, is to work backwards from what you want to do specifically and concretely. I can’t overstate the significance of one very basic premise, assumption, or goal here: that the NHIN should help a motivated provider achieve meaningful use in 2011 and 2012. That pretty simple assumption has, I think, provided a lot of focus and has had some implications for what we do.

The meaningful use recommendations of the Policy Committee, and I want to be clear that I’m not talking out of school here; we’re only talking about the meaningful use recommendations of the Policy Committee, involve health information exchange. There are different aspects to that: provider-to-provider, lab-to-provider, provider-to-pharmacy, and provider-to-patient. But the initial set of recommendations for 2011 really doesn’t require a patient index, in other words, the ability to say, I don’t know who holds this information; I’m going to look up a patient and, through a record locator, find out where all of those records are, then go and ask those holders and pull that data back in. That is, I think, because the Policy Committee recognized that those capabilities were unlikely to be available nationwide in 2011 or 2012, so that was not part of the 2011 and 2012 recommendation.

So then the question becomes what foundational element we can set in place today that will support these simpler forms of exchange and accelerate the information exchange that’s already happening, that we know is feasible, but that is not as widespread as we would like. So we looked at the foundational NHIN components. What is in the NHIN today? This audience knows more about the NHIN than probably anybody, but still, there’s been a lot of uncertainty around what exactly the NHIN is. We’ll go into a fun presentation of that shortly, but basically it’s the vocabulary standards; the document and messaging standards; directories and certificates, which we’ve highlighted, and I’ll come back to why; delivery protocols; authentication and security; and trust relationships. If the NHIN is the answer to the question, what is all of the stuff you need to exchange information over the Internet, this is the current list. I understand this is not as technical as many here would have it, but it’s at my level.

Next set of slides: Actually, we can skip forward two, three slides. All right. Actually, we don’t need to read this. One more. Okay. Now I’m going to keep saying click, click, because I don’t control this. Let’s start clicking.

This is thanks to Doug Fridsma for doing something so simple that I can understand it. Click. So the goal is to exchange information from one organization to another. Click. The first thing that we need is for people to have agreement on what they’re going to get from one end to the other. We’ve had the ... in German to make the point that you can talk however you want inside your house, according to the NHIN construct and, I believe, the Policy Committee recommendations around certification, but once you go outside the house you should agree on a language, and not just a language: this group also believes passionately in documented, national standards.

Click. The next piece, in today’s paper world: if you want to send some information from one organization to another, you might look someone up in a directory to find out not just the correct spelling of their name, or that, say, you’re looking for a cardiologist who takes Aetna, but also how to get the information to them. How do I route this to them? What’s their phone number, their fax number, their address?
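The directory step in this paper-world analogy, looking someone up to learn both who they are and how to reach them, can be sketched roughly as below. All entries and endpoints are invented for illustration.

```python
from typing import Optional

# Toy provider directory: each entry carries identity attributes and
# a routing address. Everything here is invented for illustration.
DIRECTORY = {
    "dr-jones": {
        "name": "Dr. Jones",
        "specialty": "cardiology",
        "endpoint": "https://example-hie.invalid/deliver/dr-jones",
    },
}

def lookup_route(provider_id: str) -> Optional[str]:
    """Return the routing address for a known provider, or None."""
    entry = DIRECTORY.get(provider_id)
    return entry["endpoint"] if entry else None
```

The key point is that the directory answers two questions at once: identity ("is this the cardiologist I mean?") and routing ("where do I deliver the message?").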

Click. Then the other thing that you do in the paper world today is you put your John Hancock on the bottom of the page or sign it across the seal of the envelope in the old fashioned way. That is kind of what we have today for authentication.

Click. You put it in the post. The postal truck carries it over. There’s that delivery protocol.

Click. It goes into the mailbox, and, of course, it’s not the physical security afforded by the U.S. Postal Service regulation mailbox that provides the security. It is the entire set of laws and regulations, and the fact that it’s a felony to interfere with the U.S. mail, that provides the security we feel today in sending information through the U.S. Postal Service.

Click. Then, importantly, there’s something that currently goes on in the brain of the person who receives this piece of information, or maybe a request for information: the thinking, interpreting this contact in the context of their relationship with the organization or person on the other end. The trust relationships.

Next slide. So, in answer to the question, click, what’s the NHIN? Click. It’s basically this stuff. It’s the standards and the protocols, and it does go over the Internet. It’s the directories and certificates, which are the key external elements in the current NHIN implementation, the services you need to make this whole thing run between organizations. And then, finally, it’s making some of the trust and security aspects explicit and almost computable. I obviously don’t know all the details here, but things like the SAML assertions and the DURSA, the Data Use and Reciprocal Support Agreement, make the trust relationship explicit, so that you could have “a policy engine” within the walls of the organization that takes that packet and then, in an automated way, determines whether releasing that information meets the state regulations, the patient’s preferences, and the policies of the organization itself.
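The “policy engine” idea mentioned here, an automated check that a disclosure satisfies state regulation, patient preference, and organizational policy before release, reduces to something like the following sketch. The rule structure is invented; real engines evaluate far richer policy languages than three booleans.

```python
# Minimal sketch of a disclosure policy engine: release only when
# every applicable policy layer agrees. The three inputs stand in
# for far richer real-world policy evaluation (state law, patient
# consent directives, organizational release policy).
def may_release(state_permits: bool,
                patient_consented: bool,
                org_policy_allows: bool) -> bool:
    """All layers must agree before information is released."""
    return state_permits and patient_consented and org_policy_allows
```

The design point is that the check is conjunctive: any single layer, law, patient, or organization, can veto the release.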

But suffice it to say that this is, all of this stuff, is what’s in the current approach to the NHIN, which is where we’re seeking input from a wide variety of stakeholders by saying, “Hey, come take a look at the NHIN. We think it’s pretty cool and we think it has some real value,” as you try to plug in different networks that are emerging, which don’t have to be just health information exchanges or RHIOs, as traditionally defined. It can also include potentially EHR vendor networks, who want to exchange information with each other or with the VA or with the DoD or with SSA and so forth.

So we are having more and more people looking at this full stack of services and standards and protocols involved with the very sophisticated information exchange that’s enabled by it and just last week I think the VA and Kaiser have signed DURSAs and this is going into production, limited production mode, but production mode with those large, capable organizations with the rich and complex standards and services that it affords.

Next slide. Back to meaningful use 2011 and 2012 though, there are some simpler exchanges though that maybe require only a subset of those, of all of the above, right? A subset of the services maybe, a subset of the protocols, maybe a subset that maybe has different implications or context for the trust and maybe even security, although maybe not on that side. But fundamentally the core thing, and this is an insight that Wes and Dave McCallie on this call have taken to a very intriguing place; maybe you can really, really strip down a lot of the requirements.

Now, if we stay within the NHIN framework here, it’s not as stripped down as David or Wes have taken this. There is something, some common services in the middle, but for those common services, instead of each person maintaining their own e-mail directory that can go out of date and isn’t authoritative, maybe there would be some centralized or federated directory service outside of the organization that is available and could serve this simpler exchange. To give a concrete example of this, what does SureScripts do today? Well, they do a lot of things that add value, but in terms of the simple, secure routing piece of it, they offer a list of all of the pharmacies. That’s a good, authoritative list from NCPDP that has attached to it how to route information to that pharmacy, where you can look them up over their network, and they offer security assurances over the encryption of that message. They offer the standards and conformance to those standards. Then when the message gets to the provider and the pharmacy on the other end, they have a list of prescribers that is authoritative from SureScripts to look up to see, “Yes, I can make sure that this person is who they say they are and that they are indeed authorized to be a prescriber.”

The authentication, interestingly, we heard at testimony is done through intermediaries, third party intermediaries of the EHR vendors, so it’s really the business relationship between the prescriber and the EHR vendor and then from the EHR vendor to SureScripts that creates that chain of trust and authentication. They don’t have kind of PKI at the very edge to make this all spin, but there are these basic assertions and services.
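The authoritative-directory pattern described above can be sketched very simply: one directory resolves a participant to a routing endpoint, and a separate authoritative list answers “is this party really authorized?” The directory contents, identifiers and URL below are made up for illustration; they are not SureScripts’ actual data structures.

```python
# Illustrative sketch of routing against authoritative directories.
# A central directory maps a pharmacy to its delivery endpoint; a second
# authoritative list confirms prescriber authorization. All values are
# hypothetical placeholders.

PHARMACY_DIRECTORY = {
    "main-street-pharmacy": {"endpoint": "https://rx.example.net/inbox"},
}

AUTHORIZED_PRESCRIBERS = {
    "dr-jones": {"npi": "1234567890"},
}

def route_prescription(pharmacy_id, prescriber_id):
    """Resolve the delivery endpoint, refusing unknown parties."""
    pharmacy = PHARMACY_DIRECTORY.get(pharmacy_id)
    if pharmacy is None:
        raise LookupError(f"unknown pharmacy: {pharmacy_id}")
    if prescriber_id not in AUTHORIZED_PRESCRIBERS:
        raise PermissionError(f"not an authorized prescriber: {prescriber_id}")
    return pharmacy["endpoint"]
```

The design choice being illustrated is that neither end of the exchange maintains its own copy of the directory; both rely on a shared, authoritative source, which is what keeps the lookups from going stale.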

One can imagine that similar basic services, the directories and maybe certificates, could ensure the mutual authentication and encryption happen either at the server level, at the organization level, or all of the way out to the edge at the provider level. Those fundamental building blocks could also provide a means of communicating securely over the Internet for lab-to-provider or provider-to-provider exchange as well.
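The server-level mutual authentication mentioned above is, in practice, mutual (two-way) TLS: each side presents a certificate that the other side verifies against a trusted authority. A minimal sketch using Python’s standard `ssl` module follows; the file paths are placeholders, and this is one common way to configure it, not a prescribed NHIN mechanism.

```python
# Sketch of mutual TLS configuration with Python's standard ssl module.
# Both sides load their own certificate and a CA bundle; the server
# additionally requires that the client present a certificate.
import ssl

def make_server_context(certfile, keyfile, ca_file):
    """Server context that requires and verifies a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    ctx.load_verify_locations(cafile=ca_file)  # trusted CA(s)
    ctx.verify_mode = ssl.CERT_REQUIRED        # client must present a cert
    return ctx

def make_client_context(certfile, keyfile, ca_file):
    """Client context that presents its own certificate and checks the server's."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

Whether the certificates live at the server, the organization, or the individual provider is exactly the deployment question raised in the discussion; the code is the same either way, only the certificate issuance changes.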

Next slide. Let’s be clear; make no mistake about it; we are not saying all you need is push. I’m not sure push is even the right term for this, but we’re quite explicit about acknowledging the central goal here is what can we do today that is going to meet the needs of today and tomorrow. Again, these are the Policy Committee recommendations for information exchange. I’m not saying anything anyone doesn’t know here. No news here. But these are the kinds of things the Policy Committee said looking ahead we’re going to need to do as a country. Some of this is not going to be met by the simplest form of the interoperability and the secure routing that we just talked about, but certainly, we want to build it in such a way that the simple building blocks build towards and support the more sophisticated exchange as we’re seeing happening even today between advanced organizations, who can handle the complexity and the richness of the services and agreements required.

Next slide. So, another key consideration for the workgroup in doing this is how do we reinforce and ensure that there is a solid trust fabric? It doesn’t always have to be the same pieces that provide you with that trust assurance, depending on the different kinds of exchange, the different context for the exchange, the different purposes of exchange and so forth.

Another key consideration is what the government can do to ensure that one of these other services, authentication, is available, and different models for authentication, as I mentioned, like PKI. I know there are lots of shudders from anyone who’s been around this field for any duration of time about the complexities of doing PKI on a large scale. We’ll hear about those in the hearings from providers, but also considering the alternatives, similar to what I just described, of, for example, the EHR vendors or other credentialing organizations or hospitals or someone else being kind of the source of the authentication and the chain of trust, or server-to-server TLS or whatever other stuff that I don’t really understand, to be honest.

The key thing though is what can we do today. That’s the mission. We really want to move forward in a very get-to-operations-and-execution mode, but not do it in such a way that it takes us off track or diverts us. What is, as someone said, the “no regret” or irreducible core of stuff that under a variety of future scenarios will be deemed as having been useful, and to clarify the role of government. The NHIN is not a box in the middle that people plug into. What is the appropriate role for government, both federal and state? I know that there are lots of states who are going to be eagerly interested in knowing what the expectations are for them in terms of their role in establishing the infrastructure for information exchange. And finally, to do all of this in a way that enables broad participation across the full spectrum of players, both large and small. I think that’s it.

Next slide. Is that it? Yes, that’s it.

M

Good. Farzad, thank you so much. I think you’ve really encapsulated a lot of the themes of this morning. That is, we want to engineer for the little guy, but we want to make sure whatever we do works for both large and small organizations, that it’s implementable today, but also that we set a foundation for the future. So many of the themes we’ve heard this morning are about trying to find that delicate balance, and we really look forward to working in conjunction with the Policy Committee because, as you said, we need to get it done. We need to move forward. We need to take action and take the steps that are going to help meaningful use for 2011.

This is Stan Huff. Very nice presentation. To just extend what you said, I think there’s a very direct connection between this kind of exchange and things that we talked about this morning. I would point out basically that interoperable exchange of this kind requires the terminology activities, and I would also add in some modeling activities, to make the content truly interoperable. At the level we’re at today we’ve said we’re going to use this message standard and we’re going to use these terminologies, and that’s good, but it leaves a lot of things unspecified or under-specified if you truly want to get to an interoperable state for this kind of exchange. So we’re at a level now where data is being exchanged either for human readability or some partial automated processing of the information. I hope we’re starting on a journey where we can increase the computer-processable exchange of information, and that requires a lot of the things that we talked about this morning in terms of creating value sets that are tied to specific elements in the messages. The value of a lot of what we provide, or can provide, is in fact a shareable set of logical models of how information is constructed and the terminology that’s used with those structures to make an interoperable infrastructure.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Absolutely. I would never disagree with that. I would point out though that the mission of the NHIN Workgroup that we’ve asked them to focus on in the near term is what’s the right architecture and services and so forth for the secure transmission of stuff over the Internet, recognizing that it’s this group and others who are going to be really important in terms of making sure of the vocabulary and messaging standards that then ride on top of that transport.

Yes. I agree completely. I mean that’s the right order and I think it builds on some of the things that Wes has said. In one sense it’s saying these things ought to be general across not just the medical industry, but all of Internet commerce, and ride on as many of those capabilities and standards as exist outside of healthcare at sort of the transport and communication levels and security levels. Then the part that medicine sort of adds to that, in fact, is what is the logical structure and associated terminology for the payload in these messages. That’s where it becomes medicine specific, and you’re proceeding in exactly the right order ...

To get all of that communication set up. It just lays the correct foundation then for these next steps related to modeling and terminology, so excellent.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Yes. Thank you.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

This is Wes. Are we in the comment period now?

M

We are in the comment period. Please go ahead.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

So I’m going to do the politic thing and say something that might be taken as the word “but,” but I’ll call it “and.” And I fully support what Stan is saying about our need to get to, over a period of time, an expansion of the amount of data that we can represent in a structured format. I’d particularly like to emphasize that we are talking about expansion over time. I don’t know that we ever reach semantic interoperability; I think semantic interoperability is like grace in religion; you aspire to it more and more over time. You don’t ever get there.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

I will say though, Wes, there will be data standards issues around things like directories that make sure that directories are interoperable with each other and things like that.

Wes Rishel – Gartner, Inc. – Vice President & Distinguished Analyst

Oh, yes. Yes. I’m just responding to what Stan said. The concern I have, and a lot of people recognize this as a soapbox I jump on pretty regularly, is how we find a way to pull the underlying EHR systems along to the level of complexity that we want to transmit. In Stan’s case I know that Intermountain Healthcare has got a great system. Many of the systems that we see in doctors’ offices, frankly, have the ability to define specific data and capture it in a concrete way essentially by modifying the dialogues that physicians use to document care or orders or whatever, but they don’t have a generic approach to this, and we will be able to create the standards for communicating this information a lot faster than we’ll be able to pull the industry along. My concern is that we let Intermountain Healthcare communicate with whoever they can in a standard way using these standards, but not require the federally qualified community health center in southern Mississippi to create templates that are more than they need for giving care and those specific areas of documentation that we know we want for other uses; that creates a problem I call incremental interoperability. I think it’s fully solvable, but we have to continually temper, not temper, but structure our approach to sending standard data so that we allow systems to communicate at the level at which they can create data.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

I think, to use your own phrase, I would violently agree with that. We want to do this incrementally. We don’t want perfect to get in the way of good. I think we want to look at where we can take early gains and get value, never losing sight of the goal, with the recognition that many of the values we can ultimately get, in substantial cost reductions in healthcare, will only come if we reach a new level of structured, coded exchange. But I want to agree. I think we have to focus, and there is great value in just sending things that people can read, because we don’t do that efficiently today. Then we do the things that we already mentioned earlier today; we focus and say, “Gee, it ought to be doable sometime in the next five years that people can exchange those 300 to 700 lab tests using a standard code. That should be achievable.” It might take us longer to design things that allow us to accurately capture in a coded way signs, symptoms, clinical course, the very complicated things whose language, by comparison, would be poetry next to billing lists or something. I mean they’re very complex in terms of representation, so yes. Let’s do what we can; start where we are; get value out of that, but not lose sight of what we can do by incremental increases in sophistication and interoperable representation.

M

I’ll violently agree with your violently agreeing and say and we ought to enable those institutions that are working together at this highly sophisticated level to be designing and trying ways of representing that data over the wire over the facilities of the NHIN even though we’re not expecting to achieve interoperability at the most sophisticated levels until they’ve gotten all of the arrows in the back and pulled them out and sewed up the wounds and made it work.

Christopher Chute – Mayo Clinic – VC Data Gov. & Health IT Standards

This is Chris Chute, and if we’re talking soapboxes and violent agreement, I want to join, because I certainly agree that semantics are an important component to ... The more substantive comment, I think, is if we are going to pursue the NHIN, and I’m a great fan of the NHIN and the notion of common service directories and common methods for interchange: might we, over time, broaden its scope to include not only service directories and user directories and message directories and the like, but also content directories, in a sense where terminology services, or access to, as Stan was saying, use-case-specific value sets derived from work that Betsy and Jamie’s workgroup are going to be pursuing, could be made manifest through the NHIN in a way that any organization anywhere could readily and easily access this kind of content, on a run-time basis, a browsing basis, a review basis, with issues of synchronization and update to maintain currency with national standards essentially managed as a component of the NHIN infrastructure?

Thank you, Chris. I think what you suggest is really kind of an important part of developing not only the technical components to operationalize this, but also providing essentially that semantic framework that we need to take people who have a need to exchange data all of the way through the process of identifying what are the value sets that are needed, what are the building blocks in terms of the components and packages that need to be exchanged, and then operationalizing those in a way that will support the exchange of information. So I think you’re right on in terms of recognizing the importance of that. I think the infrastructure for supporting those semantics really has to run the gamut from the point of identifying the need to exchange and the value sets around that all of the way down to the operational details, being able to access them in some fashion to exchange that information and making sure everybody is using the same sets of values and the same kinds of terminologies.

This is Dixie Baker. I have a comment. First of all, I think that this is a nice vision of the NHIN from a very user perspective, so I think that’s valuable, especially in selling the whole concept. This is what it would do, but, no, I mean and, looking at it from the perspective of the security engineer, I think that there are several concepts that need to be factored into this message, if you will.

First of all, there are three very basic tenets in security. One of them was mentioned by several of our testifiers, which was the defense-in-depth concept. In fact, Farzad, you’ve mentioned this in terms of levels. We have to think about security from the user perspective, at the application level, the server, and also organizations, especially when we’re thinking about authentication, but also securing between them.

The second basic tenet is that the lower in the stack you implement security, the harder it is to bypass, so I would hope that the NHIN is not just message security, but that it really does address security in depth.

The third thing, and this relates to defense in depth as well, is that we need to implement security so that it’s easiest to do the right thing. I think doing the right thing means that we should minimize the number of decisions that an individual has to make in order to make it work. We can’t really assume that everybody is going to do the right thing, and we can’t assume that every system is going to operate as it’s supposed to. That just is not going to happen. So I think as we move forward, when you mention the need for a trust fabric, we really have to start by figuring out how to secure the network itself and then build layers of security upon that; not that it needs to be complex. Again, we need to minimize the complexity and remove the decision-making from the individual as much as possible.

I don’t think that my doctor wants to have to think, every time she sends me an e-mail, about what she needs to do to protect it. She should just be able to send me my information and know that it’s going to be protected.
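That “make the right thing the easy thing” principle can be illustrated in code: the send API simply has no unprotected path, so the sender never makes a security decision. The cipher below is a deliberately toy stand-in (XOR with a shared key) purely to keep the sketch self-contained; a real system would use TLS or a vetted authenticated cipher, never this.

```python
# Sketch of a secure-by-default messaging API: encryption is applied
# unconditionally inside send_message, so the user has no plaintext
# option and no decision to make. _toy_encrypt is a placeholder cipher
# for illustration only (XOR is its own inverse); do not use in practice.

def _toy_encrypt(payload: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def send_message(payload: bytes, key: bytes) -> bytes:
    """The only send path; protection is applied unconditionally."""
    return _toy_encrypt(payload, key)

def receive_message(wire: bytes, key: bytes) -> bytes:
    """Recover the payload; XOR with the same key inverts the toy cipher."""
    return _toy_encrypt(wire, key)
```

The design point is in the API shape, not the cipher: because there is no `send_plaintext` function at all, doing the right thing is the only thing the sender can do.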

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Very interesting. You know, we had had some discussions, again, non-technical, about security at the network level, like IPv6 type of stuff, at a deeper level rather than at the message level, so I don’t know if the feasibility of doing that should be something we include.

Yes. I think IPv6 may be a stretch, but I think IPsec certainly has been widely implemented. In fact, some of our testifiers said they’d already implemented it at their HIE. That’s very feasible with IPv4. It’s very feasible, and I think that we really need to do that in order to get the defense in depth that will make people trust the NHIN.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Interesting.

Kevin Hutchinson – Prematics, Inc. – CEO

Farzad, it’s Kevin Hutchinson. A quick question: You made a comment about the HIEs and their role in the NHIN. Of course, the government is getting ready to distribute a large amount of money for the continued development of these HIEs at the local level. We are talking about provider-to-provider, provider-to-pharmacy, provider-to-lab as part of this NHIN, and most of that is going to be done at the local community level for exchange of information. We talked about the NHIN not necessarily being just simply the connection of those HIEs, but instead ...

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

But it also is.

Kevin Hutchinson – Prematics, Inc. – CEO

It is also, right. So where do you see the work that we’re doing on the NHIN as a standards group applying to the HIEs as they begin their work, and to some that are already far along in production today?

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

Well, I think I have two thoughts on that. One is that the NHIN, as it’s been described to me, could be almost fractal, in that you can use some of the basic tools and approaches not just for kind of ... communication, but they could also serve as a model, with the services if you want, to create local information exchange. I think there are already a couple of examples of that that other people who know more about it can talk about. So I think that should be something that people who are developing local information exchanges should seriously consider and look at, and not just the NHIN specifications, but also the tools, like Connect, the open source software tools that are available.

The other part of it is that, as I said, this is also. This is not instead of. If you can support the full stack, if you have the sophistication and you have the mission to support the sophisticated information exchange and the full stack of services, including record locators and the ability to sign a DURSA on behalf of the organization we strongly encourage that. We want to add to the NHIN cooperative that’s happening today. So I think that would be encouraged.

M

Great. Well, a very, very rich discussion on this topic. Judy Sparrow, I know we do have to reserve some time for public comment and so I want to make sure we do that, but clearly, I think what we’ve heard today is resounding messages that there are guiding principles, that we need to serve a variety of stakeholders and that we will be getting the interim final rule hopefully very soon. Public comment to follow and rich discussion from this committee, especially now with some of the new inputs from the Policy Committee. Farzad, I very much thank you for this. I think it’s given us all some great food for thought.

Jonathan Perlin – Hospital Corporation of America – CMO & President

John, this is Jon. Let me echo your comments of thanks. I think we all have a holiday gift in the forthcoming rule, but I have been remiss in my duties as Chair. I forgot to ask for approval of the minutes at the beginning. For those members of the Committee who have had a chance to review them, let’s take a moment, and if anyone has any comments, amplifications, modifications, amendments, please say so now.

Farzad Mostashari – NYC DH&MHH – Assistant Commissioner

This is Farzad. I have to log off. If there are any questions, I’m sorry I won’t be able to take them from the public. I’m sorry.

Jonathan Perlin – Hospital Corporation of America – CMO & President

Thank you. Okay. Hearing no amendments to the minutes then we’ll assume consensus on those. John, unless you have anything else we’ll turn to Judy for public comment.

Okay. Great. Thank you. The operator will queue up any public comment, but as on the screen, if you’re already connected just press *1. If you’re not, dial 1-877-705-6006. Operator, can you tell me, is there anybody on the line for a comment?

Cris Ross – MinuteClinic – CIO

While people are queuing up, Judy, I’ll just mention that text comments that are sent in will not be read aloud in the context of the live meeting, but will be added to the public record.

I’m David Tao from Siemens Healthcare and also a volunteer for HITSP and CCHIT, so thank you for your updates and for your desire to strive for simplicity and not let the perfect get in the way of the good. As Dr. Blumenthal mentioned, we’re in a pause state waiting for all three regulations. Now, the problem is right now people don’t know what simple or good mean to ONC and CMS or whether those organizations even accept those premises of simplicity and good enough until the regulations are published. I would say, speaking for my company and I’m sure many others, many people have been on the edge of their seats. Some are planning, have been planning to work the last two weeks of the year to start reviewing the regulations from ONC and CMS so they could start refining their development and implementation plans to ensure that meaningful use is achieved.

But recently in the HIT Buzz Blog and Dr. Halamka’s blog and some other places, dates like early 2010 have been mentioned for those regulations, whereas the previous expectation was definitely this month. So we strongly urge ONC and CMS to make clear their intention for the timing of these publications, especially if they really aren’t going to be released in December. We ask that ONC and CMS save people the time and angst by telling them whether they can have their holidays back. We’d like a public statement either resetting the date expectations or publicly reaffirming the dates if they are still in December for each regulation, since I recognize they may not all be on the same date. One way to do that might be to update the HIT Buzz Blog with a statement to this effect. Thank you very much for listening.

Well, in the interest of travel and holiday, I’ll be very short. Again, our thanks to each of you for the hard work during this year. I know this has been really nothing less than a second job, perhaps for some as much time this year it was a first job and your work is greatly appreciated. I think we all share not only a passion, but an understanding that this is really a vehicle to take us from healthcare, which is inadequate in terms of its use of information to healthcare that can be informed ... terrific ... to hear the discussion of NHIN as the ... they really realize that world. So many thanks. Happy, healthy holidays and all great things in the new year. Get your rest because, as David Blumenthal apprised us, we have more work to do. John, anything on your end?

John Halamka – Harvard Medical School – Chief Information Officer

Yes. Thank you, everybody. Please enjoy the holidays. I really look forward to reconvening next year and working through a number of the issues we described today. Have a wonderful holiday and a wonderful day.

Participants

Thank you.

Public Comments

1. In regards to data back-up: I would like to hear more about how, once all of this data is placed into all the new EMR implementations in private practice and hospitals, it will be protected from loss. Currently HIPAA requires covered entities to back up data, but most do not do this, and those that do, do so in-house; ergo, if the office is lost due to disaster, so is the backed-up data. Others use cold storage for tape and optical disk, but this does not comply with the intended use and spirit of the NHIN, specifically exchanging data over the Internet and time-sensitive access to medical data during a disaster.

2. From the discussion, there is offered the notion that trust services/standards are sufficient if engaged at the point of exchange. In reality, trust is a function of how information is captured, authenticated, attested as to accuracy and completeness, retained, audited and traceable from its point of origination to the point of exchange. Otherwise, exchange is at best "garbage in, garbage out".

3. Sounds like the HIT Standards Committee is overly sensitive to vendors' interests, while generally ignoring protections for patients', consumers' and providers' interests and information, specifically end-to-end trust assurance.

4. For an EHR, what would require me to implement ATNA? In general, I am unlikely to exchange audit messages with any other system.

5. "Security and privacy are foundational to EHR adoption". Absolutely, but must be considered WITHIN system architectures not simply applied at back-end message/record exchange.

6. 5-6 are also point-to-point schemes which have little to do with requirements for persistent EHR records. Unfortunately this is a rehash of 20-year-old exchange methods that are entirely insufficient for persistent EHRs.

7. 1-3 standards cover point-to-point exchange of transient messages. 4 is again auditing of exchange of transient messages (not end-to-end audit of persistent records, from point of record origination to each ultimate point of record use and retention).