
Custom Spam & Moderation Filter for Telligent Community

Creating a spam rule

A client recently came to us with a requirement to moderate all content of a certain type while leaving the existing moderation of other content in place. Luckily, Telligent Community makes this really simple: the IAbuseDetector plugin type lets us create a custom spam filter in which we can apply our own logic and flag content for moderation.

The IAbuseDetector plugin

The IAbuseDetector plugin requires us to implement two methods in addition to those required by IPlugin. The Register() method is passed the abuse controller, which we assign to a field for later use in our moderation logic.

The other is the GetAbuseExplanation() method, which returns the string shown in the moderation UI. Ideally this should be translatable, but for simplicity we are just returning a plain string here. You can read more about translatable plugins here.
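Putting those two methods together, a minimal skeleton might look like the following. This is a sketch only: the class name and explanation string are ours, and the exact namespaces and the GetAbuseExplanation() signature should be verified against your version of the Telligent SDK.

```csharp
using System;
using Telligent.Evolution.Extensibility.Api.Version1;
using Telligent.Evolution.Extensibility.Version1;

public class CommentModerationRule : IAbuseDetector
{
    // Assigned in Register() and used later by our event handlers.
    private IAbuseController _abuseController;

    // IPlugin members.
    public string Name { get { return "Comment Moderation Rule"; } }
    public string Description { get { return "Moderates all newly created comments."; } }
    public void Initialize() { /* event handler registration goes here */ }

    // IAbuseDetector members.
    public void Register(IAbuseController controller)
    {
        _abuseController = controller;
    }

    public string GetAbuseExplanation(Guid abuseId)
    {
        // Ideally a translatable resource; a plain string for simplicity.
        return "Comments are moderated before being published.";
    }
}
```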

The rest of the plugin is implemented by creating event handlers for the type of content you want to moderate. This can either be a content-specific event (e.g. ForumThread events) or a more generic IContents event handler that covers all content.
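Wiring a handler up typically happens in the plugin's Initialize() method. As a sketch, assuming the in-process Apis.Get&lt;…&gt; accessor from the Telligent SDK (verify the API class and event names against your version):

```csharp
public void Initialize()
{
    // Generic: fires after any content is created, regardless of type.
    Apis.Get<IContents>().Events.AfterCreate += ContentAfterCreate;

    // Alternatively, a content-specific event, e.g. forum threads only:
    // Apis.Get<IForumThreads>().Events.AfterCreate += ThreadAfterCreate;
}
```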

Example

This example builds on what we learnt in the recent blog post on handling events. We are going to create an event handler for the Content AfterCreate event, which fires when any content is created. Here we mark all comments for moderation and ignore every other content type, so we need to check the ContentTypeId against the TypeId for comments before flagging the content. In this example it's appropriate to mark the content as Moderated using the following call.

_abuseController.Moderate(e.ContentId, e.ContentTypeId);

This effectively removes the content from the site and requires a moderator to approve it before it's published. However, if we were building a spam rule, we could instead flag it as abusive like this.

_abuseController.IdentifyAsAbusive(e.ContentId, e.ContentTypeId);

This triggers the abuse process, which notifies the user that their content was marked as abusive. To keep this example simple, we run the rule only when content is created, but in a production scenario you would likely also want to handle updated content. You can do this in much the same way with an AfterUpdate event handler.
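Putting the pieces together, the handlers might look like this. The event argument type names and the Apis.Get&lt;IComments&gt;().ContentTypeId lookup are assumptions based on the Telligent in-process API; check them against your SDK version.

```csharp
private void ContentAfterCreate(ContentAfterCreateEventArgs e)
{
    // Only moderate comments; leave every other content type alone.
    if (e.ContentTypeId == Apis.Get<IComments>().ContentTypeId)
    {
        _abuseController.Moderate(e.ContentId, e.ContentTypeId);
    }
}

// For production use, handle edits too so an already-approved comment
// can't be replaced with spam after the fact.
private void ContentAfterUpdate(ContentAfterUpdateEventArgs e)
{
    if (e.ContentTypeId == Apis.Get<IComments>().ContentTypeId)
    {
        _abuseController.Moderate(e.ContentId, e.ContentTypeId);
    }
}
```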