In our study, we investigated the use of multivariate pattern analysis to predict a subject's viewing task from their eye movements. Previous research indicates that prediction is possible if the visual stimuli differ across tasks (Henderson, Shinkareva, Wang, Luke, & Olejarczyk, 2013). However, one study suggests that if the same stimuli are used in all tasks, prediction is no better than random guessing (Greene, Liu, & Wolfe, 2012). To investigate this question further, we recorded eye movements from 72 subjects performing 3 tasks on the same set of real-world visual scenes: 1) a visual search task, 2) a scene memorization task, and 3) an aesthetic evaluation task. A set of classifiers (linear and nonlinear, univariate and multivariate) was used to predict the task for a particular trial from 7 features of the eye movements recorded during that trial (number of fixations; mean, standard deviation, and skewness of fixation durations and of saccade amplitudes). All classifiers successfully predicted the task when trained on the other trials recorded from the same subject. Linear classifiers, particularly Fisher's linear discriminant (FLD), also successfully predicted the task for each subject when the training data came from the other subjects (mean FLD prediction accuracy of 56%, with random guessing corresponding to 33%). These results suggest that the homoscedastic multivariate Gaussian model effectively captures predictive eye-movement information that is specific to a task and generalizes across subjects. For each pair of tasks, we computed the loading of each Z-scored feature to determine its importance for predicting the task. The two most relevant features, on average, were the number of fixations and the mean saccade amplitude. The skewness of saccade amplitudes was the least important feature, but it still contributed to prediction.
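The 7 trial-level features described above (number of fixations; mean, standard deviation, and skewness of fixation durations and of saccade amplitudes) could be computed as in the following sketch. The function and variable names are our own illustration, not code from the study:

```python
import numpy as np
from scipy.stats import skew

def trial_features(fix_durations, sacc_amplitudes):
    """Return the 7 eye-movement features for one trial:
    number of fixations, plus mean, standard deviation, and
    skewness of fixation durations and of saccade amplitudes.
    (Hypothetical helper; names and conventions are assumptions.)"""
    f = np.asarray(fix_durations, dtype=float)
    s = np.asarray(sacc_amplitudes, dtype=float)
    return np.array([
        len(f),                      # number of fixations
        f.mean(), f.std(), skew(f),  # fixation-duration statistics
        s.mean(), s.std(), skew(s),  # saccade-amplitude statistics
    ])
```

In a design like this, one such 7-dimensional vector per trial would form a row of the classifier's input matrix, with the trial's task as its label.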
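The across-subject analysis (train on all other subjects, test on the held-out subject) can be sketched with scikit-learn's LinearDiscriminantAnalysis, which fits exactly the homoscedastic multivariate Gaussian model mentioned above, combined with leave-one-subject-out cross-validation. This is a minimal illustration on synthetic stand-in data; the subject counts, feature distributions, and preprocessing here are our assumptions, not the study's:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: one row per trial, 7 columns for the
# eye-movement features, `groups` holding each trial's subject ID.
rng = np.random.default_rng(0)
n_subjects, n_tasks, trials_per_task = 6, 3, 10
X_rows, y, groups = [], [], []
for subj in range(n_subjects):
    for task in range(n_tasks):
        # Shift the feature means by task so the classes are separable.
        X_rows.append(rng.normal(loc=task, size=(trials_per_task, 7)))
        y += [task] * trials_per_task
        groups += [subj] * trials_per_task
X = np.vstack(X_rows)
y, groups = np.array(y), np.array(groups)

# Z-score the features, then fit Fisher's linear discriminant.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

# Leave-one-subject-out: each fold trains on the other subjects' trials.
accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))
print(f"mean leave-one-subject-out accuracy: {np.mean(accs):.2f}")
```

Chance level in this 3-task setup is 1/3, matching the 33% baseline quoted in the abstract; per-feature importance could then be read off the fitted discriminant's coefficients on the Z-scored features.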