A recent measure of 'integrated information', Phi_DM, quantifies the extent to which a system generates more information than the sum of its parts as it transitions between states, possibly reflecting levels of consciousness generated by neural systems. However, Phi_DM is defined only for discrete Markov systems, which are unusual in biology; as a result, Phi_DM can rarely be measured in practice. Here, we describe two new measures, Phi_E and Phi_AR, that overcome these limitations and are easy to apply to time-series data. We use simulations to demonstrate the in-practice applicability of our measures, and to explore their properties. Our results provide new opportunities for examining information integration in real and model systems and carry implications for relations between integrated information, consciousness, and other neurocognitive processes. At the same time, our findings pose challenges for theories that ascribe physical meaning to the measured quantities.