Natural-language classifiers increasingly help enterprises cope with the deluge of textual data coming their way. Once trained, however, these classifiers lose their usefulness over time: they remain static while the underlying domain of the textual data evolves, and their accuracy decreases in a phenomenon known as concept drift. Can this phenomenon be reliably detected in the classifier’s output? Once detected, can it be corrected through re-training, and if so, how? A proof-of-concept implementation of a system is presented in which the classifier’s confidence metrics are used to detect concept drift. The classifier is then re-trained iteratively: test-set samples with low confidence values are selected, corrected, and added to the next iteration’s training set. The classifier’s performance is measured over time, and the behavior of the system is observed. Finally, recommendations based on this implementation are made, which may prove useful to practitioners building such systems.
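The detect-and-retrain loop described above can be sketched in a few lines. This is a minimal illustration, not the paper’s implementation: the function names, the drift `tolerance`, and the relabeling `threshold` are all assumptions chosen for the example, and the confidence values are toy data standing in for a real classifier’s output.

```python
import statistics

def detect_drift(confidences, baseline_mean, tolerance=0.10):
    """Flag drift when a batch's mean confidence falls more than
    `tolerance` below the baseline established at training time.
    (Threshold values here are illustrative assumptions.)"""
    return statistics.mean(confidences) < baseline_mean - tolerance

def select_for_relabeling(samples, confidences, threshold=0.6):
    """Pick the low-confidence samples; in the described system these
    would be corrected and fed into the next training set."""
    return [s for s, c in zip(samples, confidences) if c < threshold]

# Toy data: confidences at training time vs. after the domain evolved.
baseline = [0.90, 0.85, 0.92, 0.88]
drifted = [0.55, 0.62, 0.70, 0.50]
samples = ["doc1", "doc2", "doc3", "doc4"]

baseline_mean = statistics.mean(baseline)
if detect_drift(drifted, baseline_mean):
    to_fix = select_for_relabeling(samples, drifted)
    # `to_fix` would be hand-corrected and merged into the
    # next iteration's training set before re-training.
```

In a real system the baseline mean would be computed on a held-out set at training time, and the loop would repeat: retrain, re-measure confidence, and trigger the next correction round when drift is detected again.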