Considering it's probably tricky to programmatically determine what a nasty comment is, I'm assuming you'll figure out whether a comment is good/bad based on the ratio of upvotes to downvotes, and penalize those who voted against the grain.

I get this, but wouldn't this lead to making HN more conformist than it sometimes already is? "Either you agree with the majority of us about X, or..."

It's not that hard. For those who work with classifiers, this kind of thing is pretty easy. Identifying sarcasm and irony is hard, but 'nasty comments' can be identified pretty reliably using well-known text-classification algorithms: find training data and use it to train something like an SVM.
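A minimal sketch of what that looks like, assuming scikit-learn and a tiny hand-labeled toy dataset (a real system would need thousands of labeled comments; the example comments and the "nasty"/"civil" labels here are made up for illustration):

```python
# Toy "nasty comment" classifier: TF-IDF bag-of-words features
# feeding a linear SVM -- the standard text-classification baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = [
    "you are an idiot and your post is garbage",
    "shut up, nobody wants your stupid opinion",
    "go away, you worthless troll",
    "thanks, this was a really helpful explanation",
    "interesting point, I had not considered that",
    "great writeup, I learned a lot from this",
]
labels = ["nasty", "nasty", "nasty", "civil", "civil", "civil"]

# Unigram + bigram TF-IDF vectors, linear SVM on top.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(comments, labels)

print(clf.predict(["what a stupid idiot"]))
print(clf.predict(["really helpful, thanks"]))
```

The point isn't that this toy model is any good; it's that the whole pipeline is a few lines once you have labeled data, which is where all the real work is.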

As you might expect, a subreddit about a politician with (in)famously devoted followers attracts its share of strife. It can be difficult to distinguish legitimate arguments from flamebait, and there's no shortage of people eager to take any bait offered. I should note that I'm not actively running the moderation bot at the moment.

A well-executed troll is, by definition, difficult for humans to detect. I don't think there's much chance of reliably doing it with software. Fortunately, most political squabbling on reddit consists simply of people expressing scorn or outrage that someone would post something on the internet that disagrees with their deeply held beliefs. That's a bit easier to detect.

I plan to. I've been doing a lot of work with text classification over the past couple of years and would like to base a startup on it. I just need to come up with a product that's commercially viable and non-evil.