We investigate the use of logistic regression (LR) to initialise Reinforcement Learning (RL)-based dialogue systems with models of human dialogue strategies. LR produces accurate predictions and performs feature selection. We illustrate this technique by exploring human multimodal clarification strategies observed in a Wizard-of-Oz experiment, and use it to initialise an RL-based system with the features that significantly influence human behaviour. We show that the strategy applied by the human wizards is sensitive to different dialogue contexts. Furthermore, we show that for predicting clarification behaviour the logistic models improve over the baseline, on average, twice as much as the supervised learning techniques used in previous work.
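The abstract notes that LR both predicts accurately and performs feature selection. As a minimal sketch of how these two roles can be combined (not the paper's own pipeline: the data, feature names, and L1 penalty here are illustrative assumptions), an L1-regularised logistic model drives the coefficients of irrelevant context features to exactly zero, so the surviving features form the selected set:

```python
# Illustrative sketch only: L1-regularised logistic regression on
# synthetic data, showing prediction plus feature selection in one model.
# The "dialogue-context" features are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Five hypothetical context features; only the first two actually
# influence the binary decision (e.g. "ask a clarification" or not).
X = rng.normal(size=(n, 5))
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# The L1 penalty shrinks coefficients of uninformative features to zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0] != 0.0)
print("selected features:", selected)
print("training accuracy:", model.score(X, y))
```

The nonzero coefficients identify which features carry predictive weight, while the fitted model itself supplies the probability estimates that could seed an initial policy.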