The center is MedStar’s applied research arm for patient safety and usability. MedStar is the Mid-Atlantic area’s largest healthcare nonprofit, operating 10 major hospitals as well as dozens of urgent care centers, rehab facilities and medical groups.

MedStar set up the center five years ago as part of its Institute for Innovation. The Institute is an in-house organization of several centers that conduct research, analysis, development and education. In addition to human factors, the Institute turns MedStar staff’s ideas into commercial products, conducts professional education, encourages healthy lifestyles and develops in-house software products.

The Human Factors Center’s work concentrates on medical devices as well as on creating new processes and procedures. The center’s 30-person staff includes physicians, nurses, engineers, product designers, and patient safety, usability and human factors specialists. The center focuses both on MedStar and on improving the nation’s healthcare system, with grants and contracts from AHRQ, ONC, CMS and others, as well as from many device manufacturers.

Dr. Raj Ratwani

Dr. Ratwani: Dr. Ratwani’s extensive publications were one reason I sought this interview. I met with him in his office in the old Intelsat building, along with Rachel Wynn, the center’s postdoctoral fellow. We covered several topics, from the center’s purpose to ONC’s Meaningful Use (MU) program to the center’s examination of adverse event reporting systems.

Center’s Purpose: I started by asking him what he considered the center’s main focus. He sees the center’s mission as helping those who deliver services work more productively by reducing their distractions and errors. He said that while the center examines software systems, devices take up the lion’s share of its time from a usability perspective.

The center works on these issues in several ways. Sometimes they just observe how users carry out a task. Other times, they may use specialized equipment such as eye tracking systems. Regardless, their aim is to help users reduce errors and increase accuracy. He noted how distractions can cause errors even when a user is doing something familiar. If a distraction occurs in the middle of a task, the user can forget they’ve already done a step and will needlessly repeat it. This not only takes time, but can also lead to cascading errors.

Impact: I asked him how they work with the various medical centers and about their track record. Being in-house, he said, they have the advantage of formal ties to MedStar’s clinicians. However, he said their successes were a mixed bag. Even when there is no doubt about a change’s efficacy, its acceptance can depend on a variety of budget, logistic and personal factors.

EHR Certification: I then turned to the center’s studies of ONC’s MU vendor product certification. Under his direction, the center sent a team to eleven major EHR vendors to examine how they did their testing. Though they interviewed vendor staffs, they were unable to observe actual testing. Within that constraint, they still found great variability in vendors’ approaches. That is, even though ONC allowed vendors to choose their own definition of user-centered design, vendors often strayed even from these self-defined standards.

MU Program: I then asked his opinion of the MU program. He said he thought that the $40 billion spent drove EHR adoption for financial, not clinical, reasons. He would have preferred a more careful approach. The MU1 and MU2 programs weren’t evidence based. He said the program’s criteria needed more pilot and clinical studies, and that interoperability and usability should have been more prominent.

Adverse Events: Our conversation then turned to the center’s approach to adverse events, that is, instances involving patient safety. Ratwani is proud of a change he helped implement in MedStar’s process. Many institutions take a blame-game approach, berating and shaming those involved. MedStar treats them as teaching moments. The object is to determine root causes and how to implement change. Taking a no-fault approach promotes open, candid discussions without staff fearing repercussions.

I finally asked him about his studies applying natural language processing to adverse event reports. His publications in this area analyze the free-text sections of adverse reporting systems. He told me they often found major themes in the report texts that the systems didn’t note. As a follow-on, he described their project to manage and present the text from these systems. He explained that even though these systems capture free text, the text is so voluminous that users have a difficult time putting it to use.

My thanks to Dr. Ratwani and his staff for arranging the interview and their patience in explaining their work.

____________________________________

A word about DC’s old Intelsat building that houses the Institute. Normally, I wouldn’t comment on an office building. If you’ve seen one, etc., etc. Not so here. Built in the 1980s, it’s an example of futurist, or as I prefer to call it, sci-fi architecture, and then some. The building has 14 interconnected “pods” with a façade meant to look like, well, a gargantuan satellite.

Intelsat Building

To reach an office, you go down long, open walkways suspended above an atrium. It’s all otherworldly. You wouldn’t be terribly surprised if Princess Leia rounded a corner. It’s not on the usual tourist routes, and you can’t just walk in, but if you can wangle it, it’s worth a visit.

Intelsat Building Interior


Recently, John hosted a #HITsm chat on using technology to fight physician burnout. The topic’s certainly timely, and it got me wondering just what physician burnout is. Now, the simple answer is fatigue. However, when I started to look around for studies and insights, I realized that burnout is neither easily defined nor well understood.

Job burnout is a special type of job stress — a state of physical, emotional or mental exhaustion combined with doubts about your competence and the value of your work.

So, it is fatigue plus self-doubt. Well, that’s for starters. Burnout has its own literature niche, and psychologists have taken several different cracks at a more definitive definition without reaching consensus, other than that it’s a form of depression, which doesn’t have to be work related.

Unsurprisingly, burnout is not in the DSM-5. It’s this lack of a clinical definition that makes it easy to use burnout, like catsup, to cover a host of issues. I think this is exactly why we have so many references to physician or EHR burnout: you can use burnout to cover whatever you want.

It’s easy to find articles citing EHRs and burnout. For example, a year ago April, The Hospitalist headlined, “Research Shows Link Between EHR and Physician Burnout.” The article then flatly says, “The EHR has been identified as a major contributor to physician burnout.” However, it never cites a study to back this up.

If you track back through its references, you’ll wind up at a 2013 AMA study, “Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy.” Developed by the Rand Corporation, it’s an extensive study of physician job satisfaction. Unfortunately for those who cite it for EHR and burnout, it never links the two. In fact, the study never discusses the two together.

Not surprisingly, burnout has found its way into marketing. For example, DataMatrix says:

Physician burnout can be described as a public health crisis especially with the substantial increase over the last couple of years. The consequences are significant and affect the healthcare system by affecting the quality of care, health care costs and patient safety.

Their solution, of course, is to buy their transcription services. What’s happened here is that physician work-life dissatisfaction has been smushed together with burnout, which does a disservice to both. For example, Medscape recently published a study on burnout that asked physicians about their experience. Interestingly, the choices it gave, such as low income and too many difficult patients (difficult being undefined), are all over the place.

Their findings were far more nuanced than many others’. EHRs played a role, but so did long hours. They found:

EHR proficiency training has been associated with improved job satisfaction and work-life balance.14 While increasing EHR proficiency may help, there are many potential reasons for physicians to spend after-hours on the EHR, including time management issues, inadequate clinic staffing, patient complexity, lack of scribes, challenges in mastering automatic dictation systems, cosigning resident notes, messaging, and preparing records for the next day. All of these issues and their impact on burnout and work-life balance are potential areas for future research.

There’s a need to back off the burnout rhetoric. Burnout’s overused and underdefined. It’s a label for what may be any number of underlying issues. Subsuming these into one general, glitzy term that lacks a clinical definition trivializes serious problems. The next time you see something labeled physician or EHR burnout, you might just ask yourself: what is that, again?


One of the givens of EHR life is that users, especially physicians, spend excessive time keying data into EHRs. The implication is that much of this keyboarding is due to excessive data demands, poor usability or general app cussedness. There’s no end of studies supporting this. For example, a recent study from the University of Wisconsin-Madison’s Department of Family Medicine and Community Health, published in the Annals of Family Medicine, found that:

Primary care physicians spend more than one-half of their workday, nearly 6 hours, interacting with the EHR during and after clinic hours.

The study broke out the time spent on various tasks and found, unsurprisingly, that documentation and chart review took up almost half of it.

Figure 1. Percent Physician’s Time on EHR

This study is unique among those looking at practitioners and EHRs. Its authors note:

Although others have suggested work task categories for primary care,13 ours is the first taxonomy proposed to capture routine clinical work in EHR systems.

They also make the point that they captured physician EHR use, not total time spent with patients. Other studies have reached similar conclusions. The consensus is that there’s too much time spent keyboarding and not enough spent one-on-one with the patient. So, what can be done? Here, I think, are the choices:

1. Do Nothing. Accept the current state as the new normal.

2. Scribes. Have a scribe enter details while the physician examines the patient.

3. Make EHRs Easier. Improve EHR usability.

4. Make EHRs Smarter. Use AI, chiefly NLP dictation systems.

5. Offload to Patients. Use patient apps to input data, rather than physician keyboarding.

Examining the Alternatives

1. Do Nothing. Making no change in either the systems or practitioners’ approach means accepting the current state as the new normal. It doesn’t mean that no changes will occur; rather, they will continue at an incremental, perhaps glacial, pace. What this says more broadly is that the focus on keyboarding, per se, is wrong. The question is not what’s going in so much as what’s coming out compared to the old, manual systems. For example, when PCs first became office standards, the amount of keyboarding vs. pen-and-paper notation soared. PCs produced great increases in both the volume and quality of office work, and this quickly became the new norm. That hasn’t happened with EHRs. There’s an assumption that the old days were better. Doing nothing acknowledges that you can’t go back. Instead, it takes a stoic approach and assumes things will get better eventually, so just hang in there.

2. Scribes. The idea of using a scribe is simple. As a doctor examines a patient, the scribe enters the details. Scribes allow the physician to off load the keyboarding to someone with medical knowledge who understands their documentation style. There is no question that scribes can decrease physician keyboarding. This approach is gaining in popularity and is marketed by various medical societies and scribe services companies.

However, using scribes brings a host of questions. How are they implemented? I think the most important question is how a scribe fits into a system’s workflow. For example, how does an attending review a scribe’s notes to determine they convey the attending’s clinical findings? The attending is the responsible party, and anything that degrades or muddies that oversight is a danger to patient safety. Then there are questions of patient privacy, and just how passive an actor is a scribe?

If you’re looking for dispositive answers, you’ll have to wait. There are many studies showing scribes improve physician productivity, but few about the quality of the product.

3. Make EHRs Easier. Improving EHR usability is the holy grail of health IT and about as hard to find. ONC’s usability failings are well known and ongoing, but it isn’t alone. Vendors know that usability is something they can claim without having to prove. That doesn’t mean that usability and its good friend productivity aren’t important or grossly overdue. As AHRQ recently found:

In a review of EHR safety and usability, investigators found that the switch from paper records to EHRs led to decreases in medication errors, improved guideline adherence, and (after initial implementation) enhanced safety attitudes and job satisfaction among physicians. However, the investigators found a number of problems as well.

These included usability issues, such as poor information display, complicated screen sequences and navigation, and mismatch between user workflow in the EHR and clinical workflow. The latter problems resulted in interruptions and distraction, which can contribute to medical error.

Additional safety hazards included data entry errors created by the use of copy-forward, copy-and-paste, and electronic signatures, lack of clarity in sources and date of information presented, alert fatigue, and other usability problems that can contribute to error. Similar findings were reported in a review of nurses’ experiences with EHR use, which highlighted the altered workflow and communication patterns created by the implementation of EHRs.

Improving EHR usability is not a metaphysical undertaking. What’s wrong and what works have been known for years. What’s lacking is the regulatory and corporate will to act. If all EHRs had to demonstrate their practical usability, users would rejoice. Your best bet here may be to become active in your EHR vendor’s user group. You may not get direct relief, but you’ll have a place, albeit a small one, at the table. Otherwise, given vendor and regulatory resistance to usability improvements, you’re better off pushing for a new EHR or writing your own EHR front end.

4. Make EHRs Smarter. If Watson can outsmart Ken Jennings, can’t artificial intelligence make EHRs smarter? As one of my old friends used to tell our city council, “The answer is a qualified yes and a qualified no.”

AI takes many, many forms, and EHRs can and do use it. Primarily, these are dictation/transcription assistant systems based on Natural Language Processing (NLP): sort of scribes without bodies. An NLP system takes a text stream, either live or from a recording, parses it and puts it in the EHR in its proper place. These systems combine the freedom of dictation with AI’s ability to create clinical notes. That allows, the theory maintains, a user to maintain patient contact while creating the note, thus solving the keyboarding dilemma.

The best-known NLP system is Nuance’s Dragon Medical One. Several EHR vendors have integrated Dragon or similar systems into their offerings. As with most complex, technical systems, though, NLP implementation requires a full-scale tech effort. Potential barriers include implementation or training shortcuts, workflow integration and staff commitment. NLP’s ability to quickly gather and place information is a given. What’s not so certain is its cost effectiveness or product quality. In those respects, it’s similar to scribes and subject to much the same scrutiny.
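To make the parse-and-place idea concrete, here’s a toy Python sketch that routes a dictated sentence to a note section by keyword overlap. The section names and keyword lists are invented for illustration; real engines such as Dragon use trained language models, not lookup tables like this.

```python
# Toy sketch: route a dictated sentence to a clinical note section by
# keyword overlap. Sections and keywords are invented for illustration;
# real NLP engines use trained language models.
SECTION_KEYWORDS = {
    "medications": {"mg", "daily", "prescribed", "refill"},
    "allergies": {"allergic", "allergy", "reaction"},
    "assessment": {"diagnosis", "impression", "likely"},
}

def route_to_section(sentence: str) -> str:
    """Return the section whose keywords best overlap the sentence."""
    words = set(sentence.lower().replace(".", "").split())
    scores = {sec: len(words & kws) for sec, kws in SECTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "history"  # fallback bucket

print(route_to_section("Patient is allergic to penicillin"))  # allergies
```

Even this trivial router shows why implementation matters: the hard work is in the vocabulary and the ambiguous cases, which is exactly where training shortcuts hurt.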

One interesting and wholly unexpected NLP result occurred in a study by University of Washington researchers. The study group used VGEENS, an Android NLP dictation app that captures notes at bedside. Here’s what startled the researchers:

…Intern and resident physicians were averse to creating notes using VGEENS. When asked why this is, their answers were that they have not had experience with dictation and are reluctant to learn a new skill during their busy clinical rotations. They also commented that they are very familiar with creating notes using typing, templates, and copy paste.

The researchers forgot that medical dictation is just that, a skill, and doesn’t come without training and practice. It’s a skill of older generations; keyboarding is today’s given.

5. Offload to Patients. I hadn’t thought of this one until I saw an article in the Harvard Business Review. In a wide-ranging review, the authors see physicians as victims of medical overconsumption and information overload:

In our recent studies of how patients responded to the introduction of a portal allowing them to e-mail health concerns to their care team, we found that the e-mail system that was expected to substitute for face-to-face visits actually increased them. Once patients began using the portal, many started sharing health updates and personal news with their care teams.

One of their solutions is to offload data collection and monitoring to patient apps:

Mightn’t we delegate some of the screening work to patients themselves? Empowering customers with easy-to-use tools transformed the tax reporting and travel industries. While we don’t expect patients to select what blood-pressure medications to be on, we probably can offload considerable amounts of the monitoring and perhaps even some of the treatment adjustment to them. Diabetes has long been managed this way, using forms of self-care that have advanced as self-monitoring technology has improved.

This may be where we’re going; however, it ignores the already crowded app field. Moreover, every app seems to have its own data protocol. Health apps are a good way to capture and incorporate health data, and they may be a good way to offload physicians’ keyboarding, but right now health apps are a tower of protocol Babel. This solution is as practical as saying the way to curb double-entering data in EHRs is to just make them interoperable.
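To see the protocol Babel in miniature, here’s a minimal sketch that normalizes blood-pressure readings from two fictional apps into one common shape. The payload field names, units and date formats are invented for illustration; the point is that someone has to write and maintain an adapter like this for every app.

```python
# Hypothetical payloads from two fictional health apps, each with its own
# field names and date format -- protocol Babel in miniature.
app_a = {"sys_bp": 128, "dia_bp": 82, "taken": "2018-03-01"}   # ISO date
app_b = {"bloodPressure": "130/85", "date": "01-03-2018"}      # DD-MM-YYYY

def normalize_a(p):
    """Map app A's fields onto a single common schema."""
    return {"systolic": p["sys_bp"], "diastolic": p["dia_bp"], "date": p["taken"]}

def normalize_b(p):
    """Map app B's combined string fields onto the same schema."""
    sys_, dia = p["bloodPressure"].split("/")
    day, month, year = p["date"].split("-")
    return {"systolic": int(sys_), "diastolic": int(dia),
            "date": f"{year}-{month}-{day}"}

# One adapter per app: tolerable for two apps, unworkable for hundreds.
readings = [normalize_a(app_a), normalize_b(app_b)]
```

Multiply the adapters by hundreds of apps, each with its own quirks, and the practical barrier to "just offload it to patients" becomes clear.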

What’s an EHR User to Do?

Each current approach to reducing keyboarding has problems, but none of them is fatal. I think physician keyboarding is a problem, and that it is subject to amelioration, if not solution.

… In reality, much of this extra work is a result of expanded documentation and quality measure requirements, security needs, and staffing changes. As the healthcare industry shifts its focus to value-based reimbursement and doing more with less, physician work is increasing. That work often takes place in the EHR, but it isn’t caused by the EHR’s existence.

Blaming the EHR without optimizing its use won’t solve the problem. Instead, we should take a holistic view of the issues causing provider burnout and use the system to create efficiencies, as it’s designed to do.

The good news is that optimizing the EHR is very doable. There are many things that can be done to make it easier for providers to complete tasks in the EHR, and thereby lower the time spent in the system.

Broadly speaking, these opportunities fall into two categories.

First, many organizations have not implemented all the time-saving features that EHR vendors have created. There are features that dramatically lower the time required to complete EHR tasks for common, simple visits (for instance, upper respiratory infections). We rarely see organizations that have implemented these features at the time of our assessments, and we’re now working with many to implement them.

In addition, individual providers are often not taking advantage of features that could save them time. When we look at provider-level data, we typically see fewer than half of providers using speed and personalization features, such as features that let them rapidly reply to messages. These features could save 20 to 30 minutes a day on their own, but we see fewer than 50 percent of providers using them.

Optimization helps physicians use the EHR the way it was intended – in real-time, alongside patient care, to drive better care, fewer mistakes, and higher engagement. Ultimately, we envision a care environment where the EHR isn’t separate from patient care, but rather another tool to provide it.

What does that mean for scribes or NLP? Recognize they are not panaceas, but tools. The field is constantly changing. Any effort to address keyboarding should look at a range of independent studies to identify each tool’s strengths and pitfalls. Note not only the major findings, but also what skills, apps, etc., they required. Then, recognize the level of effort a good implementation always requires. Finally, as UW’s researchers found, surprises are always lurking in major shake-ups.


Over the years, writers on blogs such as EHRandHIPAA have vented their frustration with lousy EHR usability and interoperability problems. Usability has shown no real progress, unless you count all the studies showing that its shortcomings cost both time and money, drive users nuts and endanger patient lives.

The last administration’s usability approach confused motion with progress with a slew of roadmaps, meetings and committees. Its policies kowtowed to vendors. The current regime has gone them one better with a sort of faith-based approach: they believe they can improve usability as long as it doesn’t involve screens or workflow. Interoperability has seen progress, mostly bottom up, but there is still no national solution. Patient matching requires equal parts data, technique and clairvoyance.

I think the solution to these chronic problems isn’t technical, but political. That is, vendors and ONC need to have their feet put to the fire. Otherwise, in another year or five or ten, we’ll be going over the same ground again and again with the same results. That is, interop will move ever so slowly, and usability will fade even further from sight, if that’s possible.

So, who could bring about this change? The one group that has no organized voice: users. Administrators, hospitals, docs, nurses and vendors have their lobbyists and associations, not to mention telemed, app and device makers. EHR users, however, cut across each of these groups without being particularly influential in any. Some groups raise these issues, but in their own context, not for users in general. This means no one speaks for common, day-in, day-out EHR users. They’re never at the table. They have no voice. That’s not to say there aren’t any EHR user groups. There are scads, but vendors run almost all of them.

What’s needed is a national association that represents EHR users’ interests. Until users organize and earn a place alongside vendors and the others, these issues won’t move. Creating such a group won’t be easy. Users are widely dispersed and play many different roles. Then there is money: users can’t afford to pony up the way vendors can. An EHR user group or association could take many forms, and I don’t pretend to know which would work best. All I can do is say this:

EHR Users Unite! You Have Nothing to Lose, But Your Frustrations!


Following a 2015 Congressional directive, CMS is abandoning its Social Security-based Medicare ID for a new, randomly generated one. The new cards will start hitting beneficiaries’ mailboxes in April, with everyone covered a year later.

The old ID is an SSN plus one letter; the letter indicates whether you are a beneficiary, child, widow, etc. The new ID will have both letters and numbers. It is wholly random and drops the coding for beneficiary status. Fortunately, it will exclude the letters S, L, O, I, B and Z, which can look like numbers. You can see the new ID’s details here.
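As a rough illustration of the character-set change, the sketch below generates a random ID that avoids those confusable letters. To be clear, this is not CMS’s official MBI layout, which fixes which positions hold letters and which hold digits; it only demonstrates the idea of a random ID drawn from a restricted alphabet.

```python
import random
import string

# Letters the new ID excludes (per the article) because they can be
# mistaken for numbers: S, L, O, I, B, Z.
EXCLUDED = set("SLOIBZ")
ALLOWED = [c for c in string.ascii_uppercase if c not in EXCLUDED] + list(string.digits)

def mock_medicare_id(length=11):
    """Generate an illustrative random ID from the restricted alphabet.

    NOT the official MBI layout (which fixes which positions are letters
    and which are digits); this only shows the character-set idea.
    """
    return "".join(random.choice(ALLOWED) for _ in range(length))
```

The restricted alphabet is a small but telling design choice: it trades a few bits of randomness for far fewer transcription errors at the front desk.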

New Medicare ID Card

Claimants will have until 2020 to adopt the new IDs, but that’s not the half of it. For the HIT world, this means many difficult, expensive and time-consuming changes. CMS sees this as a change in how it tracks claims. However, its impact may make HIT managers wish for the calm and quiet days of Y2K. That’s because adopting the new number for claims is just the start. Their systems use the Medicare ID as a key field for just about everything they do involving Medicare. This means they’ll not only have to crosswalk to the new number, but their systems will also have to look back at what was done under the old one.

Ideally, beneficiaries will only have to know their new number. Realistically, every practice they see over the next several years will want both IDs. This will add one more iteration to patient matching, which is daunting enough.

With MACRA, Congress made a strong case, for both privacy and security reasons, for Medicare no longer relying on SSNs. Where it failed was in seeing this only as a CMS problem and not as an HIT problem with many twists and turns.


HIT is a small ship in the large IT sea. Whether we like it or not, whatever stirs IT will rock HIT’s boat, to stretch an analogy. Sometimes it’s a tidal change in how we do business: dial-up modems, for example, gave way to high-speed lines, revolutionizing all they touched.

Sometimes these revolutions (to switch analogies) are much welcomed and undeniable. No one is going back to MS-DOS or parallel-interfaced printers. Sometimes, though, IT gets caught up in cultural revolutions (CRs) that eventually fade and disappear, but take a toll before they’re done and gone.

Chinese Cultural Revolution Poster. Source: chineseposters.net

By cultural revolutions I don’t mean the extremes of Chairman Mao’s creation, with Red Guards who destroyed everything and everyone in their path. We’re far kinder and gentler than that. The CRs I’m talking about are organizational or technical fads noted for their promoters’ evangelical zeal. Heavily promoted, they soak up organizational time and effort, often with little to show for it.

To be sure, IT’s not the only organizational sphere with fads. DOD’s Planning, Programming and Budgeting System (PPBS) is a famous 1960s example. It promised an almost mechanical solution to DOD’s major logistical, operational and performance review problems. It didn’t deliver. Little changed. That doesn’t mean PPBS didn’t have some practical aspects, or that it didn’t leave behind some improvements. However, little justified its overblown hype and massive organizational disruption.

IT and HIT have had their share. Six Sigma, CMMI and ISO 9000 quickly come to mind; I would add XML and Big Data. Advocates pushed these in the name of curing many woes or reaching new heights through a new way of thinking or doing. However, CRs almost always just put old beer in new bottles.

Spotting a Cultural Revolution

Each day brings something new in IT/HIT. Here are some ways to determine whether what you’re facing is a fad:

Advocates. Who’s promoting it? Who certified them and what did that entail?

Analogues. Who’s implemented the CR and can you speak to them freely?

Client Demand. What do your clients think? Do they want you to adopt the new ways?

Effort. What effort will it take to adopt the CR? What are the opportunity costs?

Jargon. Do the advocates speak in terms you know, or do they promote a whole new language you’ll have to master?

Organizational Fit. How well does the CR fit into your current way of doing things?

Payoff. What are the CR’s specific, definable advantages?

Segments. Does the CR give you a menu of choices or is it an all or nothing approach?

Sponsors. Who’s the CR’s author? Is it a standards organization, a movement by knowledgeable users or a self-referencing group?

CRs aren’t a simple matter of useful or not. Sometimes even fads can bring a useful approach wrapped in hyperbole. For example, XML advocates claimed it would change everything; after that promotional tide receded, XML became just another tool. The challenge, then, is seeing whether the current CR really offers anything new and what it really is.


HIT is a relatively small world that generates no end of notices, promotions and commentaries. You can usually skim them, pick out what’s new or different and move on. Recently, though, I’ve run into two articles that deserve a slow, savored reading: Politico’s Arthur Allen’s history of VistA, the VA’s homegrown EHR, and Julia Adler-Milstein’s take on interoperability’s hard times.

VistA: An Old Soldier That May Just Fade Away – Maybe

The VA’s EHR is not only older than just about any other EHR, it’s older than just about any app you’ve used in the last ten years. It started when Jimmy Carter was in his first presidential year, in a world of mainframes running TSO and 3270 terminals. Punch cards still abounded and dial-up modems were rare. Even then, there were doctors and programmers who wanted to move vets’ hard-copy files into a more usable, shareable form.

Arthur Allen recounts their efforts, often clandestine, in tracking VistA’s history. It’s not only a history of one EHR and how it has fallen in and out of favor; it’s also a history of how personal computing has grown, evolved and changed. VistA is still a user favorite, but its accumulated problems, often as much political as technical, may mean it will finally meet its end, or maybe not. In any event, Allen has written an effective, well-researched piece of technological history.

Adler-Milstein: Interoperability’s Not for the Faint of Heart

Adler-Milstein, a University of Michigan associate professor of health management and policy, has two things going for her: she knows her stuff, and she writes clear, direct prose. It’s a powerful and sadly rare combination.

In this case, she probes the seemingly simple issue of HIE interoperability, or the lack thereof. She first looks at the history of EHR adoption, noting that MU1 took a pass on interoperability. This was a critical error, because it:

[A]llowed EHR systems to be designed and adopted in ways that did not take HIE into account, and there were no market forces to fill the void.

When stage two, with HIE, came along, it meant retrofitting thousands of systems. We’ve been playing catch-up, if that, ever since.

Her major point is simple: it’s in everyone’s interest to find ways of making interoperability work, and that means abandoning fault finding and figuring out what can work.


In that random scrap heap I refer to as my memory, I’ve compiled several items not worthy of a full post, but that keep nagging me for a mention. Here are the ones that’ve surfaced:

Patient Matching. Ideally, your doc should be able to pull your records from another system like pulling cash from an ATM. The hang-up is patient matching, which is record sharing’s last-mile problem. Patients don’t have a unique identifier, which means that to make sure your records are really yours, your doctor’s practice has to use several cumbersome workarounds.

The 21st Century Cures Act calls for GAO to study ONC’s approach to patient matching and determine whether there’s a need for a standard set of data elements, etc. With luck, GAO will cut to the chase and address the need for a national patient ID.
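To see why matching without a unique ID is cumbersome, here’s a minimal sketch of one common workaround: a probabilistic match on normalized demographics. The record fields and threshold are illustrative; production master-patient-index software weighs many more fields and edge cases than this.

```python
from difflib import SequenceMatcher

def demo_match(rec_a, rec_b, threshold=0.85):
    """Crude probabilistic match on birth date plus normalized name.

    Illustrative only: real matching engines also weigh address, sex,
    phone, SSN fragments, and handle typos in the date itself.
    """
    if rec_a["dob"] != rec_b["dob"]:      # exact gate on birth date
        return False
    # Normalize names: lowercase, drop commas, sort tokens so that
    # "Smith, John Q" and "John Q Smith" compare equal.
    name_a = sorted(rec_a["name"].lower().replace(",", " ").split())
    name_b = sorted(rec_b["name"].lower().replace(",", " ").split())
    score = SequenceMatcher(None, " ".join(name_a), " ".join(name_b)).ratio()
    return score >= threshold

a = {"name": "Smith, John Q", "dob": "1950-01-02"}
b = {"name": "John Q Smith", "dob": "1950-01-02"}
print(demo_match(a, b))  # True
```

Every practice tunes heuristics like these slightly differently, which is exactly why a single national ID keeps coming up as the cleaner answer.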

fEMR. In 2014, I noted Team fEMR, which developed an open source EHR for medical teams working on short-term, often crisis, projects. I’m pleased to report that the project and its leaders, Sarah Diane Draugelis and Kevin Zurek, are going strong and recently got a grant from the Pollination Project. Bravo.

What’s What. I live in DC, read the Washington Post daily, etc., but if I want to know what’s up with HIT in Congress, my first source is Politico’s Morning EHealth. Recommended.

Practice Fusion. Five years ago, I wrote a post that was my note to PF about why I couldn’t be one of their consultants anymore. Since then the post has garnered almost 30,000 hits and just keeps going. As pleased as I am at its longevity, I think it’s only fair to say that it’s pretty long in the tooth, so read it with that in mind.

Ancestry Health. A year ago September, I wrote about Ancestry.com's beta site Ancestry Health. It lets you document your parents', grandparents' and your own medical histories, which can be quite helpful. It also promised to use your family's depersonalized data for medical research. As an example, I set up King Agamemnon's family tree. The site, I've found, is still in beta, which I assume means it's not going anywhere. Too bad. It's a thoughtful and useful idea. I also do enjoy getting their occasional "Dear Agamemnon" emails.

Jibo. I’d love to see an AI personal assistant for PCPs, etc., to bring up related information during exams, capture new data, make appointments and prepare scripts. One AI solution that looked promising was Jibo. The bad news is that it keeps missing its beta ship date. However, investors are closing in on $100 million. Stay tuned.

A few weeks ago, I was having a bad dream. Everything was turning black. It was hard to breathe and moving was equally labored. It wasn't a dream. I woke up and found myself working hard to inhale. Getting out of bed took determination.

I managed to get to our hallway and call my wife. She called 911 and DC's paramedics soon had me on my way to MedStar's Washington Hospital Center's ER. They stabilized me and soon determined I wasn't having a heart attack, but a heart block. That is, the nerve bundles that told my heart when to contract weren't on the job.

A cardiology consult sent me to the Center's Cardiac Electrophysiology Suite (EP Clinic), which specializes in arrhythmias. They ran an ECG, took a quick history and determined that the block wasn't due to any meds, Lyme disease, etc. Determining I needed a pacemaker, they made me next in line for the procedure.

Afterwards, my next stop was the cardiac surgery floor. Up till then, my care was by closely functioning teams. After that, while I certainly wasn’t neglected, it was clear I went from an acute problem to the mundane. And with that change in status, the hospital system’s attention to detail deteriorated.

This decline led me to a simple realization. Hospitals, at least in my experience, are much like Ulysses Grant: stalwart in crisis, but hard pressed by the mundane. That is, the more critical matters became in the Civil War, the calmer and more determined Grant grew. As President, however, the mundane dogged him and defied his grasp.

Here’re the muffed, mundane things I encountered in my one overnight stay:

Meds. I take six meds, none exotic. Despite my wife's and my efforts, the Center's system could not get their names or dosages straight. Compounding that, I was told not to take my own because the hospital would supply them. It could neither find all of them nor get straight when I took them. I took my own.

Food. I’d not eaten when I came in, which was good for the procedure. After it, the EP Clinic fed me a sandwich and put in food orders. Those orders quickly turned into Nothing by Mouth, which stubbornly remained despite nurses’ efforts to alter it. Lunch finally showed up, late, as I was leaving.

Alarm Fatigue. At three AM, I needed help doing something trivial, but necessary. I pressed the signaling button and a nurse answered who could not hear me due to a bad mike. She turned off the alert. I clicked it on again. Apparently, the nurses have to deal with false signals and have learned to ignore them. After several rounds, I stumbled to the Nurses’ Station and got help.

Labs. While working up my history, the EP Clinic took blood and sent for several tests. Most came back quickly, but a few headed for parts unknown. No one could find out what happened to them.

Discharge. The EP Clinic gave me a set of instructions. A nurse practitioner came by and gave me a somewhat different version. When we got home, my wife called the EP Clinic about the conflict and got a third version.

EHR. The Hospital Center is Washington's largest hospital. My PCP is at the George Washington University's Medical Faculty Associates. Each is highly visible and well regarded, and they have several relationships. The Center was supposed to send my discharge data, via fax, to my PCP at GW. It didn't. I scanned the documents in and emailed them to my PCP.

In the last five years, I've had similar experiences in two other hospitals. They do great jobs dealing with immediate and pressing problems, but their systems are often asleep doing the routine.

I’ve found two major issues at work:

Incomplete HIT. While these hospitals have implemented EHRs, they've left many functions, big and small, on paper or on isolated devices. This creates a hybrid system with undefined or poorly defined workflows. There simply isn't one fully functional system; rather, there are several of them. This means that when the hospital staff wants to find something, first they'll look in a computer. Failing that, they'll scour clipboards for the elusive fact. It's like they have a car with a five speed transmission, but only first and second gear are automatic.

Isolated Actors. Outside critical functions, tasks are carried out by individuals, not teams. That is, they often act in isolation from those before or after them. This means issues are looked at from only one perspective at a time. This sets the stage for mistakes, omissions and misunderstandings. A shared task list, a common EHR function, could end this isolation.
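A shared task list is a simple enough idea to sketch. The class and field names below are illustrative, but the point is the same one made above: every role sees the same queue, so a handoff is never invisible to the next actor.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """One care task visible to every member of the team."""
    description: str
    owner: Optional[str] = None
    done: bool = False

class SharedTaskList:
    def __init__(self):
        self.tasks = []

    def add(self, description):
        self.tasks.append(Task(description))

    def claim(self, index, owner):
        # Anyone on the team can take ownership of an open task
        self.tasks[index].owner = owner

    def complete(self, index):
        self.tasks[index].done = True

    def open_tasks(self):
        # The whole team sees what still needs doing
        return [t.description for t in self.tasks if not t.done]

board = SharedTaskList()
board.add("Reconcile home med list")
board.add("Confirm discharge instructions")
board.claim(0, "nurse_practitioner")
board.complete(0)
print(board.open_tasks())
```

Nothing about this is technically hard; the barrier is getting every actor to work from the one list.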

The Hospital Center is deservedly well regarded. Its heart practice is its special point of pride. However, its failure to fully implement HIT is ironic. That's because MedStar's National Center for Human Factors in Healthcare isn't far from the Hospital.

The problems I encountered aren't critical, but they are troublesome and can easily lead to serious, even life endangering, problems. Most egregious is the failure to fully implement HIT. This creates a confusing, poorly coordinated system, which may be worse than no HIT at all.

Thanks to a Biotronik Eluna 8 DR-T pacemaker that sits below my clavicle, I'm now a thing on the internet of things. What my new gizmo does, other than keeping me ticking, is collect data and send it to a cell device sitting on my nightstand.

Once a day, the cell uploads my data to Biotronik's Home Monitoring website, where my cardiologist can see what's going on. If something needs prompt attention, the system sends alerts. Now, this is a one way system. My cardiologist can't program my pacemaker via the net. Doing that requires being near Biotronik's Renamic inductive system. That means I can't be hacked like Yahoo email.

The pacemaker collects and sends two kinds of data. The first set shows the unit’s functioning and tells a cardiologist how the unit is programmed and predicts its battery life, etc. The second set measures heart functioning. For example, the system generates a continuous EKG. Here’s the heart related set:

Atrial Burden per day

Atrial Paced Rhythm (ApVs)

Atrial Tachy Episodes (36 out of 48 criteria)

AV-Sequences

Complete Paced Rhythm (ApVp)

Conducted Rhythm (AsVp)

Counter on AT/AF detections per day

Duration of Mode Switches

High Ventricular Rate Counters

Intrinsic Rhythm (AsVs)

Mode Switching

Number of Mode Switches

Ongoing Atrial Episode Time

Ventricular Arrhythmia

Considering the pacemaker’s small size, the amount of information it produces is remarkable. What’s good about this system is that its data are available 24/7 on the web.

The bad news is Biotronik systems don’t directly talk to EHRs. Rather, Renamic uses EHR DataSynch, a batch system that complies with IEEE 11073-10103, a standard for implantable devices. EHR DataSynch creates an XML file and ships it along with PDFs to an EHR via a USB key or Bluetooth. However, Renamic doesn’t support LANs. When the EHR receives the file, it places the data in their requisite locations. The company also offers customized interfaces through third party vendors.

For a clinician using the website or Renamic, data access isn't an issue. However, access can be problematic in an EHR. In that case, the Biotronik data may or may not be kept in the same place or in the same format as other cardiology data. Also, batch files may not be transferred in a timely fashion.

Biotronik’s pacemaker, by all accounts, is an excellent unit and I certainly am glad to have it. However, within the EHR universe, it’s one more non-interoperable device. It takes good advantage of the internet for its patients and their specialists, but stops short of making its critical data readily available. In Biotronik’s defense, their XML system is agnostic, that is, it’s one that almost any EHR vendor can use. Also, the lack of a widely accepted electronic protocol for interfacing EHRs is hardly Biotronik’s fault. However, it is surprising that Biotronik does not market specific, real time interfaces for the products major EHRs.

Note: This was first published on emrandehr.com

A member of our extended family is a nurse practitioner. Recently, we talked about her practice providing care for several homebound, older patients. She tracks their health with her employer’s proprietary EHR, which she quickly compared to a half dozen others she’s used. If you want a good, quick EHR eval, ask a nurse.

What concerned her most, beyond usability, etc., was piecing together their medical records. She didn't have an interoperability problem; she had several of them. Most of her patients had moved from their old home to Florida, leaving a mixed trail of practitioners, hospitals, clinics, etc. She has to plow through paper and electronic files to put together a working record. She worries about being blindsided by important omissions or doctors who hold onto records for fear of losing patients.

Interop Problems: Not Just Your Doc and Hospital

She is not alone. Our remarkably decentralized healthcare system generates these glitches, omissions, ironies and hang ups with amazing speed. However, when we talk about interoperability, we focus mainly on hospital to hospital or PCP to PCP relations. Doing so doesn't fully cover the subject. For example, others who provide care include:

College Health Systems

Pharmacy and Lab Systems

Public Health Clinics

Travel and other Specialty Clinics

Urgent Care Clinics

Visiting Nurses

Walk in Clinics, etc., etc.

They may or may not pass their records back to a main provider, if there is one. When they do, it's usually by fax, making the recipient key in the data. None of this is a particularly new story. Indeed, the AHA did a study of interoperability that nails interoperability's barriers:

Hospitals have tried to overcome interoperability barriers through the use of interfaces and HIEs but they are, at best, costly workarounds and, at worst, mechanisms that will never get the country to true interoperability. While standards are part of the solution, they are still not specified enough to make them truly work. Clearly, much work remains, including steps by the federal government to support advances in interoperability. Until that happens, patients across the country will be shortchanged from the benefits of truly connected care.

We’ve Tried Standards, We’ve Tried Matching, Now, Let’s Try Money

So, what do we do? Do we hope for some technical panacea that makes these problems seem like dial up modems? Perhaps. We could also put our hopes in the industry suddenly adopting an interop standard. Again, perhaps.

I think the answer lies not in technology or standards, but in paying for interop successes. For a long time, I've mulled over a conversation I had with Chandresh Shah at John's first conference. I'd lamented to him that buying a Coke at a Las Vegas CVS brought up my DC buying record. Why couldn't we have EHR systems like that? Chandresh instantly answered that CVS had an economic incentive to follow me, but my medical records didn't. He was right. There's no money to follow, as it were.

That leads to this question: why not redirect some MU funds and pay for interoperability? Would providers make interop, that is data exchange, CCDs, etc., work if they were paid? For example, what if we paid them $50 for each of their first 500 transfers and $25 for each of their first 500 receptions? This, of course, would need rules. I'm well aware of the human ability to game just about anything from soda machines to state lotteries.
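The arithmetic of that hypothetical scheme is worth spelling out, if only to show how the caps limit the government's exposure per provider:

```python
def interop_incentive(transfers, receptions,
                      transfer_rate=50, reception_rate=25, cap=500):
    """Payment under the post's hypothetical scheme: pay per successful
    exchange, counting only the first `cap` of each kind."""
    return (transfer_rate * min(transfers, cap)
            + reception_rate * min(receptions, cap))

# A small practice well under the caps
print(interop_incentive(120, 80))
# A large provider that hits both caps
print(interop_incentive(900, 600))
```

At those rates, the most any one provider could collect is $37,500, which is modest next to what MU paid per provider for EHR adoption.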

If pay incentives were tried, they’d have to start slowly and in several different settings, but start they should. Progress, such as it is, is far too slow and isn’t getting us much of anywhere. My nurse practitioner’s patients can’t wait forever.

When HHS released ONC's proposed FY2017 budget last winter, almost all attention focused on one part, a $22 million increase for interoperability. While the increase is notable, I think ONC's full $82 million budget deserves some attention.

ONC’s FY2017 Spending Plan.

Table I summarizes ONC's plan for Fiscal Year 2017, which runs from October 1, 2016 through September 30, 2017. The first thing to note is that ONC's funding would change from general budget funds, known as Budget Authority or BA, to Public Health Service Evaluation funds. HHS' Secretary may allocate up to 2.1 percent of HHS' funds to these PHS funds. This change would not alter Congress' funding role, but apparently signals HHS' desire to put ONC fully in the public health sector.

Table I

ONC FY2017 Budget

What the ONC Budget Shows and What it Doesn’t

ONC's budget follows the standard federal budget presentation format. That is, it lists, by program, how many people and how much money are allocated. In this table, each fiscal year, beginning with FY2015, shows the staffing level and then spending.

Staffing is shown in FTEs, that is, full time equivalent positions. For example, if two persons each work 20 hours of a 40 hour week, together they are equivalent to one full time person or FTE.

Spending definitions for each fiscal year are a little different. Here's how that works:

FY2015 – What actually was spent or how many actually were hired

FY2016 – The spending and hiring Congress set for ONC for the current year.

FY2017 – The spending and hiring in the President’s request to Congress for next year.

If you're looking to see how well or how poorly ONC does its planning, you won't see it here. As with most government budgets, federal or otherwise, you never see a comparison of plans v how they really did. For example, FY2015 was the last complete fiscal year. ONC's budget doesn't have a column showing its FY2015 budget and, next to it, what it actually did. If it did, you could see how well or how poorly it followed its plan.

You can't see the amounts budgeted for FY2015 in ONC's budget, except for the total. However, if you look at the FY2016 ONC budget, you can see what was budgeted for each of its four programs. While the budget total and the corresponding actual are identical, $60,367,000, the story at the division level is quite different.

Table II

ONC FY2015 Budget v Actual ($000s)

Division                                          Budget    Actuals    Diff
Policy Development and Coordination               12,474     13,112     638
Standards, Interoperability, and Certification    15,230     15,425     195
Adoption and Meaningful Use                       11,139     10,524    (615)
Agency-wide Support                               21,524     21,306    (218)
Total                                             60,367     60,367       –

Table II shows this by comparing the FY2015 Enacted Budget with ONC's FY2015 Actuals for its four major activities. While the total remained the same, there was a major shift of $638,000 into Policy, mostly from Meaningful Use, and a lesser shift of $195,000 into Standards, mostly from Agency-wide Support. These shifts could have been actual transfers or they could have come from under- and over-spending by the divisions.

Interestingly, Table III, for staffing, shows a different pattern. During FY2015, ONC dropped 25 FTEs: a dozen from Policy Development and the rest from Standards and Meaningful Use. That means, for example, that Policy Development had fewer people and more money during FY2015.

Table III

ONC FY2015 Budget v Actual Staffing (FTEs)

Division                                          Budget    Actuals    Diff
Policy Development and Coordination                   49         37    (12)
Standards, Interoperability, and Certification        32         26     (6)
Adoption and Meaningful Use                           49         42     (7)
Agency-wide Support                                   55         55      –
Total                                                185        160    (25)

To try to make sense of this, I looked at the current and past year's budgets, but to no avail. As best I can tell, ONC made great use of contracts and other non personnel services. For example, ONC spent $30 million on purchases/contracts, which is $8 million more than it spent on its payroll.

ONC's budget, understandably, concentrates on its programs and plans. It puts little emphasis on measuring its hiring and spending performance. It's not alone; budgets, government and otherwise, are forecast and request documents. However, if we could know how plans went, without having to dig in last year's weeds, it would let us know how well a program executes its plans as well as makes them. That would be something worth knowing.

Sickle cell anemia (SCA) is a genetic red blood cell condition that damages cell walls, impeding the cells' passage through capillaries. Episodic, it is often extremely painful. It can damage organs and cause infections, strokes or joint problems. These episodes, or SCA crises, can be prompted by any number of environmental or personal factors.

In the US, African Americans are most commonly susceptible to SCA, but other groups can have it as well. SCA presents a variety of management problems in the best of circumstances. As is often the case, management is made even more difficult when the patient is a child. That's what Children's Health of Dallas, Texas, one of the nation's oldest and largest pediatric treatment facilities, faced two years ago. Children's Health, sixty five percent of whose patients are on Medicaid, operates a large, intensive SCA management program as the anchor institution of the NIH funded Southwestern Comprehensive Sickle Cell Center.

Children's Health's problem wasn't with its inpatient care or with its outpatient clinics. Rather, it was keeping a child's parents and doctors up to date on developments. Along with the SCA clinical staff, Children's Chief Information Officer, Pamela Arora, and Information Management and Exchange Director, Katherine Lusk, tackled the problem. They came up with a solution using all off the shelf technology.

Their solution? Provide each child’s caregiver with a free Verizon smartphone. Each night, they extracted the child’s information from EPIC and sent it to Microsoft’s free, vendor neutral HealthVault PHR. This gave the child’s doctor and parents an easy ability to stay current with the child’s treatment. Notably, Children’s was able to put the solution together quickly with minimal staff and without extensive development.
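Children's hasn't published its code, but the nightly extract-and-push pattern it describes is easy to sketch. The two placeholder functions below stand in for the real EPIC export and HealthVault upload, which aren't public; everything here is illustrative.

```python
import json
import datetime

def fetch_updates(patient_ids, since):
    """Placeholder for the EHR-side extract (e.g., the previous day's
    notes, labs and meds for each enrolled child)."""
    return [{"patient": pid, "since": since.isoformat(), "items": []}
            for pid in patient_ids]

def push_to_phr(record):
    """Placeholder for the PHR upload; here it just serializes the
    payload the way a real API client would."""
    return json.dumps(record)

def nightly_job(patient_ids):
    # Run once a night: pull yesterday's changes, push each to the PHR
    since = datetime.date.today() - datetime.timedelta(days=1)
    return [push_to_phr(rec) for rec in fetch_updates(patient_ids, since)]

payloads = nightly_job(["A001", "A002"])
print(len(payloads))
```

The point of the sketch is how little machinery the pattern needs: a scheduled extract, a neutral format and an upload, which is why a small team could build it quickly.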

That was two years ago. Since then, EPIC's Lucy PHR has supplanted the project. However, Katherine Lusk, who described the project to me, is still proud of what they did. Even though the project has been replaced, it's worth noting as an important example. It shows that not all HIE projects must be costly, time consuming or resource intensive to be successful.

Children’s SCA project points out the value of these system development factors:

Clear, understood goal

Precise understanding of users and their needs

Small focused team

Searching for off the shelf solutions

Staying focused and preventing scope creep

Each of these proved critical to Children’s success. Not every project lends itself to this approach, but Children’s experience is worth keeping in mind as a useful and repeatable model of meeting an immediate need with a simple, direct approach.

[Note: I first heard of Children’s project at John’s Atlanta conference. ONC’s Peter Ashkenaz mentioned it as a notable project that had not gained media attention. I owe him a thanks for pointing me to Katherine Lusk.]

This was going to be a five year relook at Practice Fusion. Back then, I'd written a critical review saying I wouldn't be a PF consultant. Going over PF now, I found it greatly changed. For example, I criticized its not having a shared task list. Now, it does. Starting to trace other functions, a question suddenly hit me. Why did I think an EHR should have a shared task list, or any other workflow function for that matter?

It’s a given that an EHR is supposed to record and retrieve a patient’s medical data. Indeed, if you search for the definition of an EHR, you’ll find just that. For example, Wikipedia defines it this way:

An electronic health record (EHR), or electronic medical record (EMR), refers to the systematized collection of patient and population electronically-stored health information in a digital format.[1] These records can be shared across different health care settings. Records are shared through network-connected, enterprise-wide information systems or other information networks and exchanges. EHRs may include a range of data, including demographics, medical history, medication and allergies, immunization status, laboratory test results, radiology images, vital signs, personal statistics like age and weight, and billing information.[2]

Other definitions, such as HIMSS', are similar, but add another critical element, workflow:

The EHR automates and streamlines the clinician’s workflow.

Is this a good or even desirable thing? Now, before Chuck Webster shoots out my porch lights, that doesn't mean I'm anti workflow. However, I do ask: what are workflow features doing in an EHR?

In EHRs' early days, vendors realized they couldn't drop one into a practice like a fax machine. EHRs were disruptive, and not always in a good way. They often didn't play well with practice management systems or the hodgepodge of forms, charts and lists they were replacing.

As a result, vendors started doing workflow archeology and devising new workflows as part of their installs. Over time, EHR vendors started touting how they could reform, not just replace, an old system.

Hospitals were a little different. Most had IT staff that could shoehorn a new system into their environment. However, as troubled hospital EHR rollouts attest, they rarely anticipated the changes that EHRs would bring about.

Adding workflow functions to an EHR may have caused what my late brother called a “far away” result. That is, the farther away you were from something, the better it looked. With EHR workflow tools, the closer you get to their use, the more problems you may find.

EHRs are designed for end users. Adding workflow tools to them assumes that the users understand workflow dynamics and can use them accordingly. Sometimes this works well, but just as often the functions may not be as versatile as the situation warrants. Just ask the resident who can't find the option they really need.

I think the answer to EHR workflow functions is this. They can be nice to have, like a car’s backup camera. However, having one doesn’t make you a good driver. Having workflow functions shouldn’t fool you into thinking that’s all workflow requires.

The only way to determine what's needed is by doing a thorough requirements analysis, working closely with users and developing the necessary workflow systems.

A better approach would be a workflow system that embeds its features in an EHR. That way, the EHR could fit more seamlessly into its environment, rather than the other way around.

So, you want to dump your EHR and find another, or are you about to join the fray? Once you've got a handle on your requirements, this review lists some online tools that might help. Ideally, they'll point to the one that's best for you. Even if they can't do that, they should help identify what you don't want. Along the way, they may also raise some new issues, or give you some new insights.

The web has a surfeit of EHR evaluation tools. I’ve only reviewed those that are vendor independent and employ some filtering or ranking. That excludes spreadsheets and PDFs that just list features. I also skipped any that charge. I found the nine shown in Table I and reviewed below. Table II explains my definitions.

EHR Tools Reviewed

1. American EHR. American's tool gives you several ways to look at an EHR. Its side by side list compares 80 features. It asks users to rank a dozen features on a 1 to 5 scale. To use the tool, you pick a practice size and specialty. You can also see how users rated a product in detail, which shows how it stacks up against all the others. Unfortunately, its interface is a hit or miss affair. When you change a product choice, sometimes it works and sometimes it just sits there.

2. Capterra. Capterra ranks the top 20 most popular EHRs, or at least the most well known. To do this, it adds up the number of customers, users and social media scores, that is, how often they're mentioned on Twitter, Facebook, etc. Users rank products on a 1 to 5 scale and can add comments. It has a basic product filtering system.

3. Consumer Affairs. It examines ten major vendors using a short breakdown of features and user reviews. Users rate products on a 1 to 5 scale and can add comments.

4. EHR Compare. This tool solely relies on user ratings. Users score 20 EHR features on a 1 to 5 scale. It may add additional features depending on specialty. It only has a handful of reviews, which is a drawback.

5. EHR in Practice. EHR in Practice provides a short list of features and thumbnail EHR descriptions.

6. EHR Softwareinsider. This site uses ONC attestations to rank vendors. Its analysis shows those rankings along with Black Book ratings. Users rank products on a 1 to 10 scale. Interestingly, users can earn a $10 Amazon gift card for their reviews. For a fee, a vendor can move their product to the top of a list, though ES says that does not influence other factors.

7. Select Hub. There is one big if to using this site: whether you can get in. As with some sites, SH requires that you register to get to its rankings. The problem is that once you do, you may wait a day or more for a confirming email link. Even then, it didn't see my confirmation, so I had to repeat the process. If you get in, you'll find some interesting features. One is that its staff briefly analyzes a product's performance for each function. The other is that you can set up a project for yourself and others to query vendors.

8. Software Advice. Software Advice is a user rating site based on a 1 to 5 scale. It offers filters by rating rank, specialty and practice size as well as a short product summary.

9. Top Ten Reviews. As the name implies, Top Ten shows just that. There are two problems with its rankings. It doesn’t explain how it chose them or how they are ranked. It provides a thumbnail for each product.

What to Use. Several of the EHR comparison tools are just popularity contests. They have limited filters and depend on user reviews from whoever walks in the door. Two, however, go beyond that and are worth exploring: American EHR and Select Hub. Both have interface problems, but with persistence, you can find out more about a product than by using the others.

With that said, you may also find it useful to go through the user ranked tools. They may help you cull out particular products or interest you in one you've overlooked. Finally, if I've left something out, please let me know. I'll add it in a revised post.

I ran across a new, five pound home robot called Jibo in an IEEE publication my wife gets. Jibo, whose first planned product run has sold out at $750 each, promises to ship this spring. It bills itself as the first social robot.

Started as an Indiegogo project that banked close to $4 million, Jibo recently added $36 million from investors. Its technology smarts come from its founder and chief scientist, MIT Associate Professor Dr. Cynthia Breazeal.

Jibo's driven by an ambition to bring artificial intelligence capabilities to the home market. Though it's not mobile, it's touch sensitive, gesture sensing and can dip and swivel 360 degrees to capture events. Jibo does natural language processing and uses two high res cameras to recognize faces, take your selfies and run video calls.

With these capabilities, Jibo is far smarter than smart thermostats, vacuum cleaners or security systems. It'll use them to learn your phrases and gestures, so it can act as your calendar, inbox, media organizer and general personal assistant. Importantly, Jibo has a significant developer program. That's what gave me the idea for a virtual scribe.

An EHR Virtual Scribe?

High end EHRs have been using natural language processing for years. You dictate, and the system figures out what the text is and where to put it. These pricey add-ons aren't widely used.

Less versatile, but far more used, is Dragon Voice. Other smart assists are various macro systems and front ends. These improve an often frustrating, mind numbing EHR interface, but are only a partial solution. Their major disadvantage is that the user is tethered to a machine. Ideally, a doctor should be able to talk with their patient and seamlessly use the record as needed.

If new, smart devices such as Jibo really can aid around the house, it should be possible, in another generation or so, to free practitioners from their tablets or keyboards. An EHR virtual scribe with cameras and a projector could do these tasks:

Workflow. Set up workflow based on patient history and appointment type.

Updates. As the user dictates, the scribe could show the entries to both doctor and patient.

Assessment. As the user builds the note, the scribe could show how it compares to similar cases or when asked do searches.

Plan. The scribe could produce potential plans and let the user modify them.

Orders. Based on the plan, past orders, etc., it could propose new orders.

Education. Provide tailored materials, references, etc.

Appointment. Set up appropriate follow on appointments.

Claims. Interface with claiming and reporting systems as needed.

Products such as Jibo hold the potential for a technical fix for EHRs' seemingly intractable usability problems, and do it at reasonable cost. Their combination of adaptable hardware, AI abilities and unobtrusive size may just be the ticket.

Ancestry.com, the genealogy behemoth, has entered the health field — sort of. AncestryHealth (AH), a beta foray, helps you document your family’s medical conditions. To start, you build a family tree of your blood relations. Unlike a typical family tree, it only lists those who’re your biological relations. So, your spouse is out, but your kids are in. However, your grandchildren, for some reason, aren’t tracked.

AncestryHealth Home Page

To show how AH works, I built a tree for King Agamemnon and his family. At the top of Agamemnon's chart are his four grandparents, Pelops, etc. Below them are his parents. On Agamemnon's level is his brother Menelaus, whose wife caused some marital stress.

Agamemnon's Family Tree

Chart Building. AH's heart is its family member entry screens. First, you add the member, for example a daughter, then her conditions. You could also build the chart first and then enter conditions.

Adding a person's health conditions is a simple, top down process. When you select someone, AH brings up its basic conditions menu. It has five general categories: Heart, Cancer, Lung, Brain or Metabolism. If those don't fit, a click brings up 13 more: Muscle, Autoimmune, etc., or you can add your own. Unlike EHRs, AH is strictly for recording health conditions, not their treatments. You can note, for example, your mother's osteoarthritis, but not what she takes for it.

Adding Family Conditions

When you’ve picked general categories, you can leave it at that. For example, if you knew Aunt Agatha had allergies, but not much more about her, you’d be done. You can add as many general categories as you like to any one person.

To add more detail, you select the person and then you can add both medical and lifestyle details. Again, you can use AH’s choices or add your own.

Detailed Conditions

When you’re done, you use the family tree to see everyone or just those with a specific condition. For example, you’d see the relationship, if any, of everyone who has or had heart disease. Finally, you can download a summary of all your family’s conditions.

Daughter Entry Screen

As with any beta program, some of AH’s parts are less finely developed than others. Many of the problems are with the member entry screen. For example, if a person has two first names, such as Mary Beth, she’ll show up as Mary.

Daughter Detail Screen

Once the member’s on the chart, you have to edit their entry to add a last name and living status. I don’t know why this isn’t done with a single entry screen instead of two.

If you do add a last name, it doesn’t show on the tree. That means if you have two Great Uncle Davids, you’ll have to open the record to make sure you have the right one. It would also be helpful if the member screen had an Unknown Name box.

Similar to Facebook, you can’t use titles such as Dr., Ms., etc. However, if you leave out the period, they’re accepted. Nor can you add MD, PhD, etc., unless you omit the comma after the last name.

Lost Child. AH gives you both an on screen graphic and a printed health summary. The graphic lets you click on a person’s icon – though their names don’t show – and see the detail.

In one case, Electra’s icon disappeared. Given the family’s way of settling their issues, I wasn’t too surprised.

AH’s Big Sibling Connection

If you have an Ancestry account, you can use it to log into AH, or you can create one for AH. When you create a new AH account, one is also opened for you on Ancestry.com, whether you want it or not. For example, I created the Agamemnon family tree using a Gmail address; that login is now part of Ancestry.com. I can’t think of another service that opens an account for you in a separate system.

If you do use your Ancestry.com account for AH, any change you make in your AH family tree changes your Ancestry.com tree as well. You can avoid this if you make a private copy of your Ancestry.com tree. AH should offer to do this without your going to Ancestry.com, etc.

That’s not the end of it. When I needed to change the spelling for one of the persons on my AH chart, I found all entries were locked. I could only change someone’s conditions or add a new person. However, I could not edit anyone’s name, etc., nor could I remove someone. To edit, I had to go to Ancestry.com’s Agamemnon family tree, which has a far different interface. Apparently, this occurs if you’re logged into the same Ancestry.com site. Wading through all of this was like trying to figure out Abbott and Costello’s Who’s on First, but not as much fun.

Sorta Informed Consent. AH shares your family conditions, less any personally identifiable information, with health researchers, etc. I don’t object to their doing that and it has significant potential. AH posts a long Informed Consent note about their sharing family information. However, AH puts this where it’s unlikely to be read. It’s a link at the bottom of each page along with Terms and Conditions, etc. Given its importance, it deserves higher billing.

It is a Beta. AH is a work in progress from the major family genealogy site. During its beta, AH is free, and it’s interested in its reception. It wants comments on adding functions, such as AncestryDNA data or risk analysis tools. Its family condition documentation may prove quite valuable to you and possibly to medical researchers.

AH’s Family Condition

AH needs to fix several design flaws and eliminate some obvious bugs. It needs to do a far better job of letting you know what it does with your data. Most importantly, it needs to sort out and clear up its various, functional relationships with Ancestry.com. You might call it an Ancestry family condition.


My friend Joe is a retired astrophysicist turned web geek. Joe’s had his health problems, but he’s still an avid bicycle rider as well as one of the most well read persons I’ve ever known. He doesn’t miss much, and always has his own – often original – take on politics, the economy, the net and a lot else.

There is one topic where his take echoes many others. He never fails to send me posts about how EHRs interfere with the doctor-patient relationship. He just loathes it when his doc spends time keying away rather than making eye contact.

Joe knows I get pretty wound up on EHR usability and interop problems, but that just makes me an even better target for his disgruntlement. He, of course, has a point as do so many others who’ve lamented that what was once a two way conversation has become a three way with the patient the loser.

The point, I think, only goes so far. What’s going on in these encounters is more than the introduction of an attention sucking PC. It’s simply wrong to assume that medical encounters were all done the same warm and fuzzy way until EHRs came along. To understand EHRs’ effect from a patient’s perspective, I think we need to ask ourselves several questions about our EHR involved medical appointments.

Many of the changes have been disruptive in the old sense of that word. Asking yourself these questions may help assuage some of the feelings of loss and put the EHRs’ use in better perspective:

Whose Appointment Is It? Are you there alone, with an elderly parent, your spouse or your child, etc.?

Appointment’s Purpose. Why are you there? Is it for a physical, a bad cold, a routine follow up or a perplexing question? Is it with a specialist, pre or post op?

Your Relationship. How long and how well have you known this doctor? How many doctors have you had in the past few years?

How Long Has It Been? When was the last time you saw your doctor, days, weeks, years? How much catching up is there to do?

Doc’s Actions. What’s your doc doing on the EHR, looking for labs, going over your meds, writing notes, writing prescriptions, ordering tests, checking drug interactions? How many of these would have been impossible or difficult on paper?

Money. How much time does your doc take trying to save you money finding generics, looking at what your insurance covers, etc.?

Many EHRs have usability problems and many have been implemented poorly. As with all technological innovations in professional settings, though, they often create longing for the good old days, which may never have existed. We need to remember that medical records could not continue to exist as paper records, written more as reminders than as searchable, definitive records.

EHRs have changed providers’ roles. They have to create records not just for themselves, their partners, etc., but also for other providers and analysts they may never meet. As patients, we also need to understand that EHRs, like word processing, cell phones and the internet itself, are far from perfect. Banishing them may allow more personal time for you, but what will it mean for your care, your doctor or the next patient?


After 15 years running the EHRSelector, a free, interactive app for finding an EHR, my partner, Cali Samuels, and I want to sell or give it to someone who can help it meet its potential.

What’s the Selector?

The Selector’s idea is simple: Give those looking for an EHR an objective way to find a new EHR that centers on what it can do.

Here’s how it works. You go down an extensive EHR feature list. As you click what you want, it instantly shows which products match. You can then compare them like this: EHR Selector Side by Side.

Here’re its major categories. Each has a clickable feature list. For example, you can choose among 50 medical specialties. We also show which features are HIPAA or MU required.
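The matching idea behind the Selector can be sketched in a few lines. Everything below is illustrative: the product names, features and the set-subset test are my assumptions, not the Selector’s actual code.

```python
# Hypothetical sketch of feature matching: each product is a set of
# feature names; as the user checks features, only products containing
# every checked feature remain in the match list.

products = {
    "AlphaEHR": {"e-prescribing", "patient portal", "cardiology"},
    "BetaChart": {"e-prescribing", "cardiology"},
    "GammaMed": {"patient portal", "dermatology"},
}

def matching_products(selected, catalog):
    """Return the products whose feature sets include every selected feature."""
    return sorted(name for name, feats in catalog.items()
                  if selected <= feats)  # subset test

print(matching_products({"e-prescribing", "cardiology"}, products))
# -> ['AlphaEHR', 'BetaChart']
```

Each added checkbox only shrinks the candidate list, which is why the side by side comparison can update instantly.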

What’s the Problem?

So, why do we want to sell it or give it away? Simple: we can’t crack two problems. Vendors don’t update their profiles and, consequently, there’s low user interest.

The selector depends on vendors subscribing to it and keeping their product lists up to date. Even though it’s free, we can’t convince vendors to update their information.

We’ve written and called, but we run into several problems. Often it’s impossible to get through to a person. When we do, we get bounced among sales, marketing and technical types. Contacts we’ve dealt with are often gone, and no one knows who can speak to all product features.

Then there’s the connected question of driving users to the site. If our vendor list isn’t up to date, we can’t expect a high user volume. As it is, we get some users who understand that while not everything is current, few EHRs take out features. Occasionally, whole college classes sign up. We’re pleased to serve them, but that rarely interests vendors.

How Does It Make Money?

It doesn’t. For several years, we ran the selector on a subscription basis for both users and vendors. This paid for hosting and maintenance, but the marketing firm that ran it lost interest and it began to lose subscribers.

Two years ago, my partner and I took it back, put on a new home page, added a blog and made it free for everyone. We hoped to get enough traffic to sell ads. That hasn’t worked out. We get a few hundred hits a week, but we don’t have the user or vendor interest to justify ads.

What Are We Looking For?

We would like to turn the site over to someone who shares our interest in an objective, interactive EHR selection tool. Obviously, we want someone who has the marketing clout to get vendors interested.

The Selector’s written in classic ASP. Its functions work, but it could stand a good rewrite to make it more intuitive and less 2001. Finally, we’d love to add a mobile app, user ratings and graphics comparing products.

Sell or Buy?

Cali and I have enough cash and sweat equity in the selector to build a Tesla. It’s never paid for itself, but we’ve been content to run it at break even or even at a loss, because we believe it’s an important service.

We’d love someone to dump gobs of dough on us for it, modernize it, etc. – are you listening, Google? However, we’re realistic enough to look for someone who shares our interest in giving users a useful EHR finding tool and who has the wherewithal to carry it on. You can reach me at: carl@ehrselector.com


One year ago, I looked at the demand for EHR related certifications. I found, as the old line goes, that many are called but few are chosen. Of 30 or so certificate programs, only about a quarter had substantial demand. In fact, some had no demand at all.

Study Update

Finding Certification Programs. To bring the study up to date, I looked for new certificates or ones I’d overlooked. I found one, CEHRS, Certified Electronic Health Record Specialist Certification from the National Healthcareer Association.

Searching for Jobs. As with last year’s study, I then used John’s HealthcareITCentral to search for jobs posted in the last 30 days that require an EHR or HIT certification.

Certifications Reviewed

Table I lists the 12 certifications that had at least one job opening. Last year, 16 certifications had at least one opening. That is, as Table II shows, this year I found no mentions of 15 certificates.

What Counts. Each certification listed in a job counts as one opening. For example, if a job listed CompTIA, CPHIMS and CPEHR, I counted it as three openings, one for each certification.
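The counting rule is easy to sketch. The job data below are made up; only the rule itself, one opening per certification listed, comes from the methodology above.

```python
# Illustrative tally: a job listing three certifications counts once
# for each of the three.
from collections import Counter

jobs = [
    {"id": 1, "certs": ["CompTIA", "CPHIMS", "CPEHR"]},
    {"id": 2, "certs": ["RHIA"]},
    {"id": 3, "certs": ["RHIA", "RHIT"]},
]

# Flatten every certification mention across all jobs and count them.
demand = Counter(cert for job in jobs for cert in job["certs"])

print(demand["RHIA"])          # -> 2
print(sum(demand.values()))    # -> 6 openings from 3 job postings
```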

General certifications only. For practical reasons, this review only covers general certifications that have a one word abbreviation. Where the abbreviation isn’t unique, I’ve filtered out non certificate uses.

No EDUs. I excluded certificates from colleges, universities, etc., whether traditional or on line. There are scads of these, but I’m not aware of any that are in general demand. That’s not a judgment on their value, just their demand.

Dynamics. The openings for these certifications are a snapshot. The job market and the openings that HealthcareITCentral lists constantly change. What is true now, could change in a moment. However, I believe it gives you a good idea of relative demand.

Certification Demand

In the past 30 days, I found 322 openings that listed a certification. See Chart I. As with last year, AHIMA’s were most in demand. Two of its certificate programs, RHIA and RHIT, account for 60 percent of certificate demand.

RHIA’s designed to show a range of managerial skills, rather than in depth technical ability. If you consider certifications proof of technical acumen, then the strong RHIA demand is a bit counter intuitive. Where the RHIA has a broad scope, the close second, RHIT, is more narrowly focused on EHRs.

In third place, but still with substantial demand, is CCS, which focuses on a specific ability. Compared to last year, CCS has fewer openings. This is due to a change in my methodology, not demand. Last year, I counted any CCS opening. This year, I only count those with a clear HIT relationship.

Certification Location Demand

After looking at certification demand, I looked at it by state. To do this, I merged the different certification job openings into a single list. That is, I added those for RHIA, RHIT, etc., and then eliminated duplicates.

After creating a consolidated list, I sorted and subtotaled by state. I then sorted the state totals. This gave me the data for Chart II. It shows the top ten states for openings, including two ties.
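The consolidation steps can be sketched like this. The job IDs and states are invented, and the real work was done on search results rather than Python lists; this just shows the merge, dedupe and subtotal sequence.

```python
# Hypothetical openings from two per-certification searches, as
# (job_id, state) pairs. J2 appears in both searches and must count once.
from collections import Counter

rhia_openings = [("J1", "CA"), ("J2", "WA")]
rhit_openings = [("J2", "WA"), ("J3", "CA"), ("J4", "CO")]

# Merge the lists and eliminate duplicate openings.
merged = set(rhia_openings) | set(rhit_openings)

# Subtotal by state, then sort the state totals, largest first.
by_state = Counter(state for _, state in merged)
print(sorted(by_state.items(), key=lambda kv: -kv[1]))
```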

State Rankings. As you might expect, states with the largest populations have the most jobs; California leads.

To account for population, I subtract job rank from population rank. For example, Washington State is 13th in population but eighth in job openings. Subtracting job rank eight from population rank 13 gives five. That is, Washington State’s job share is five ranks above its population ranking. Chart III shows where states stand when you account for population.
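The adjustment is a one line calculation. The function name is mine; the ranks are the ones cited in the text.

```python
def rank_delta(population_rank, job_rank):
    """Population rank minus job-opening rank. Positive values mean
    more certification openings than population alone would predict."""
    return population_rank - job_rank

print(rank_delta(13, 8))   # Washington State -> 5
print(rank_delta(22, 5))   # Colorado -> 17
```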

Most notable is Colorado. Colorado is 22nd in population, but fifth in certification demand. That is, its job openings rank 17 places higher than its population would account for.

Others ranking higher than their population are Missouri, Arizona, Tennessee, Wisconsin, etc. Conversely, states with openings below their population rank include New York, Pennsylvania and Florida.

Missouri’s case is interesting. Almost all its openings are from one company: Altegra. Its openings are almost all for one position type: medical record field reviewer. At first, I thought this was a case of over posting, but it doesn’t appear to be. They’re recruiting for several different locations.

Certification Demand Trends

When I started this update, I expected there would be more jobs due to economic growth, but that hasn’t happened. There’ve been shifts among states, but overall the demand is pretty much the same. RHIA and RHIT demand last year and this year are practically identical, while demand for others has dropped. I don’t have any numbers for overall openings then and now, but I suspect that they’ve grown while certification demand has either gone down or been flat. However, as I’ve said, that’s just a guess.

Certifications are a response to the demand for persons with demonstrated skills. The question is whether one will reward your time, cost and effort with something marketable. Demand alone can’t make that choice for you. For example, working on a certificate that has little or no demand might seem pointless. However, its requirements may be a good way for you to acquire and demonstrate your skills, especially if your experience is iffy.

Personal satisfaction also can’t be discounted as a factor. You might be interested in an area with low demand, but that certification, coupled with your other skills, might make you marketable in an area you desire.

If you do decide to pursue one of these certificates, I think these numbers can help you know where to look and what to look for.


The Selector’s Blog

Choosing an EHR/EMR is a hard task. For many years, we hosted the EHRSelector, which we designed to help you pick an EHR by features. It had the most granular feature list on the web. When we were not able to entice enough vendor participation, we closed the system. However, we believe our feature list is unique and useful, so you can download it here: EHR Selector Feature List. We also will continue to write about EHR related issues.