This study offers a systematic comparison of automated content analysis tools by assessing their ability to correctly identify affective tone (e.g., positive vs. negative) across different data contexts and social media environments. Our comparisons assess the reliability and validity of publicly available, off-the-shelf classifiers. We use datasets from a range of online sources that vary in the diversity and formality of the language used, and we apply different classifiers to extract the affective tone of each dataset. We first measure agreement among the classifiers (reliability test) and then compare their classifications against a human-coded benchmark (validity test). Our analyses show that validity and reliability vary with the formality and diversity of the text; we also show that ready-to-use methods leave much room for improvement on domain-specific content and that a machine-learning approach yields more accurate predictions.
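The reliability/validity distinction above can be sketched in a few lines of code. This is a minimal illustration, not the study's actual procedure: the labels and function names are invented for the example, and percent agreement stands in for whatever agreement statistic the analysis uses.

```python
# Hypothetical sentiment labels for the same six posts: two off-the-shelf
# classifiers and a human-coded benchmark (all data here is illustrative).
classifier_a = ["pos", "neg", "neg", "pos", "neg", "pos"]
classifier_b = ["pos", "neg", "pos", "pos", "neg", "neg"]
human_coding = ["pos", "neg", "neg", "pos", "pos", "pos"]

def agreement(labels_1, labels_2):
    """Share of items on which two coders assign the same label."""
    matches = sum(a == b for a, b in zip(labels_1, labels_2))
    return matches / len(labels_1)

# Reliability test: do the two tools agree with each other?
reliability = agreement(classifier_a, classifier_b)

# Validity test: does each tool agree with the human benchmark?
validity_a = agreement(classifier_a, human_coding)
validity_b = agreement(classifier_b, human_coding)
```

In practice a chance-corrected coefficient (e.g., Cohen's kappa or Krippendorff's alpha) would replace raw percent agreement, but the logic is the same: inter-classifier agreement measures reliability, and agreement with human coding measures validity.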