The whole idea of making this data-dependent is silly and unnecessary. If it were to make sense, then you would be better off dealing with the data at a higher level. For example, if you want to store ASCII data efficiently, then compress it first, saving far more than 22%.
There is an extremely simple way to get a similar power saving (over 23%), albeit at the cost of extra bits. For each 8-bit byte, you store an additional "invert" bit. On each read, you use this bit to decide whether the data bits should be inverted back. When writing, you consider both the non-inverted and inverted versions of the data, and choose whichever uses the least power. Simple, fast, low-power, and independent of the data.
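For the curious, here is a minimal sketch of that invert-bit trick in C. The energy constants, the 0-means-RESET mapping, and the function names are all my own assumptions for illustration; a real controller would also have to account for the cost of writing the flag bit itself and for any read-before-write optimization.

```c
#include <stdint.h>

/* Hypothetical per-bit write energies, in arbitrary units, following the
 * thread's assumption that a RESET costs roughly twice a SET. Mapping a
 * stored 0 to RESET and a 1 to SET is also an assumption of this sketch. */
#define E_SET   1   /* energy to write a 1 (SET)   */
#define E_RESET 2   /* energy to write a 0 (RESET) */

/* Energy to write one byte under this simple model: every bit is
 * rewritten, so the cost is just a weighted popcount. */
static int write_energy(uint8_t b)
{
    int ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (b >> i) & 1;
    return ones * E_SET + (8 - ones) * E_RESET;
}

/* Writing: store whichever of b and ~b is cheaper, plus an invert flag. */
uint8_t encode_byte(uint8_t b, int *invert)
{
    uint8_t flipped = (uint8_t)~b;
    *invert = write_energy(flipped) < write_energy(b);
    return *invert ? flipped : b;
}

/* Reading: undo the inversion using the stored flag. */
uint8_t decode_byte(uint8_t stored, int invert)
{
    return invert ? (uint8_t)~stored : stored;
}
```

With E_RESET above E_SET, this always picks whichever version has more 1 bits, so the saving comes entirely from turning expensive RESETs into cheap SETs.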

Thanks for your comments. That's interesting. Based on the paper, I'd understood that having Er/Es greater than 1 was what delivered the energy savings. I'll have to reach out to the authors and see what they say.

Actually, no: the method works in no small part because the energy required to reset a bit (Er) is roughly twice the energy required to set a bit (Es), while the read energy is negligible. It's a nifty bit of work.
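To put a rough number on that: under a naive model where every bit of an n-bit word is rewritten, a word containing z RESET bits costs z·Er + (n − z)·Es, which with Er ≈ 2·Es works out to (n + z)·Es. So each RESET bit an encoding manages to avoid saves about one Es, and that is the budget both RICE and the invert-bit trick above spend their overhead bits to reclaim. (The paper's exact accounting may differ; this is just my back-of-envelope version.)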

Kristin - The RICE authors also raise the subject of the multi-level cell (MLC-PCM) and cite the IBM MLC work. It would certainly be interesting to hear their view on how their memory overhead would stack on top of the memory overhead of IBM's drift-compensating codeword methodology. In that respect, they might also need reminding that the write process in that and some earlier MLC work involves starting with a cell in the reset state and reaching the required data-state resistance value by iterative steps. How many more memory codeword solutions to PCM memory problems can be added before it becomes a write-never memory (WNM) because there is no room left for the data?
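To make that iterative-write point concrete, here is a hypothetical program-and-verify loop of the kind MLC PCM papers describe. Every name, threshold, and the step limit below is invented for illustration; real controllers differ in the details.

```c
#include <stdbool.h>

#define MAX_STEPS 16   /* illustrative retry budget */

/* Platform-specific primitives, assumed to exist elsewhere. */
extern void   cell_reset(int cell);            /* amorphize: full RESET pulse   */
extern void   cell_partial_set(int cell);      /* one small crystallizing pulse */
extern double cell_read_resistance(int cell);

/* Program `cell` until its resistance lands in [lo, hi], the band assigned
 * to the desired multi-bit data state. Starts from RESET, as described
 * above, and nudges the resistance down one partial-SET pulse at a time. */
bool mlc_program(int cell, double lo, double hi)
{
    cell_reset(cell);
    for (int step = 0; step < MAX_STEPS; step++) {
        double r = cell_read_resistance(cell);   /* verify step */
        if (r >= lo && r <= hi)
            return true;                         /* in band: done */
        if (r < lo) {
            cell_reset(cell);                    /* overshot: start over */
            continue;
        }
        cell_partial_set(cell);                  /* still too resistive */
    }
    return false;                                /* failed to converge */
}
```

Every verify is a read and every retry is another pulse, which is why each extra layer of codeword machinery stacked on top of this write path is not free.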