Given a typical selection of your incoming mail classified as spam or ham (non-spam), this tool will feed each mail to SpamAssassin, allowing it to 'learn' what signs are likely to mean spam, and which are likely to mean ham.

Simply run this command once for each of your mail folders, and it will 'learn' from the mail therein.

Note that csh-style globbing in the mail folder names is supported; in other words, listing a folder name as * will scan every folder that matches. See Mail::SpamAssassin::ArchiveIterator for more details.
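For example (folder paths here are purely illustrative), you might run:

```
sa-learn --spam ~/Mail/spam-archive
sa-learn --ham 'Mail/ham-*'
```

The quotes around the glob pass it through to sa-learn for ArchiveIterator to expand; letting the shell expand it instead also works, since both produce the same folder list.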

SpamAssassin remembers which mail messages it has learnt already, and will not re-learn those messages again, unless you use the --forget option. Messages learnt as spam will have SpamAssassin markup removed, on the fly.

If you make a mistake and scan a mail as ham when it is spam, or vice versa, simply rerun this command with the correct classification, and the mistake will be corrected. SpamAssassin will automatically 'forget' the previous indications.

Users of spamd who wish to perform training remotely, over a network, should investigate the spamc -L switch.
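For example (the message file name is invented), a single message can be trained remotely by piping it to spamc, where the argument to -L is the learn type: spam, ham, or forget:

```
spamc -L spam < message.eml
```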

Learn the input message(s) as ham. If you have previously learnt any of the messages as spam, SpamAssassin will forget them first, then re-learn them as ham. Alternatively, if you have previously learnt them as ham, it'll skip them this time around. If the messages have already been filtered through SpamAssassin, the learner will ignore any modifications SpamAssassin may have made.

Learn the input message(s) as spam. If you have previously learnt any of the messages as ham, SpamAssassin will forget them first, then re-learn them as spam. Alternatively, if you have previously learnt them as spam, it'll skip them this time around. If the messages have already been filtered through SpamAssassin, the learner will ignore any modifications SpamAssassin may have made.

sa-learn will read in the list of folders from the specified file, one folder per line in the file. If the folder is prefixed with ham:type: or spam:type:, sa-learn will learn that folder appropriately, otherwise the folders will be assumed to be of the type specified by --ham or --spam.

type above is optional, but is the same as the standard for ArchiveIterator: mbox, mbx, dir, file, or detect (the default if not specified).
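A folder list file for this option might look like the following (paths invented for illustration); the last line has no prefix, so it is learned as whatever --ham or --spam specifies, with the format auto-detected:

```
spam:mbox:/home/user/mail/spam-archive
ham:dir:/home/user/Maildir/
spam:/home/user/mail/more-spam
/home/user/mail/misc
```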

Don't learn the message if a from address matches configuration file item bayes_ignore_from or a to address matches bayes_ignore_to. The option might be used when learning from a large file of messages from which the hammy spam messages or spammy ham messages have not been removed.

Display the contents of the Bayes database. Without an option or with the all option, all magic tokens and data tokens will be displayed. magic will only display magic tokens, and data will only display the data tokens.

The --regexp=RE option can also be used to specify which tokens to display, based on a regular expression.

If specified this username will override the username taken from the runtime environment. You can use this option to specify users in a virtual user configuration when using SQL as the Bayes backend.

NOTE: This option does not switch to the given username; it only attempts to act on behalf of that user. Because of this, you will need the proper permissions to change files owned by that username. In the case of SQL this is generally not a problem.

Add additional lines of configuration directly from the command-line, parsed after the configuration files are read. Multiple --cf arguments can be used, and each will be considered a separate line of configuration.

Produce debugging output. If no areas are listed, all debugging information is printed. Diagnostic output can also be enabled for each area individually; area is the area of the code to instrument. For example, to produce diagnostic output on bayes, learn, and dns, use:

spamassassin -D bayes,learn,dns

For more information about which areas (also known as channels) are available, please see the documentation at:

Skip the slow synchronization step which normally takes place after changing database entries. If you plan to learn from many folders in a batch, or to learn many individual messages one-by-one, it is faster to use this switch and run sa-learn --sync once all the folders have been scanned.
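For example, a batch run over several folders (folder names illustrative) might defer synchronization until the end:

```
sa-learn --no-sync --spam 'Mail/spam/*'
sa-learn --no-sync --ham 'Mail/ham/*'
sa-learn --sync
```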

Clarification: the state of --no-sync overrides the bayes_learn_to_journal configuration option. If --no-sync is not specified, sa-learn will learn to the database directly; if it is specified, sa-learn will learn to the journal file.

Note: --sync and --no-sync can be specified on the same commandline, which is slightly confusing. In this case, the --no-sync option is ignored since there is no learn operation.

If you previously used SpamAssassin's Bayesian learner without the DB_File module installed, it will have created files in other formats, such as GDBM_File, NDBM_File, or SDBM_File. This switch allows you to migrate that old data into the DB_File format. It will overwrite any data currently in the DB_File.

This can also be used with the --dbpath=path option to specify the location of the Bayes files to use.

There are now multiple backend storage modules available for storing users' Bayesian data, so you may wish to migrate from one backend to another. Here is a simple procedure for doing so.

Note that if you have individual user databases you will have to perform a similar procedure for each one of them.

Once you have backed up all databases you can update your configuration for the new database backend. This will involve at least the bayes_store_module config option and may involve some additional config options depending on what is required by the module. (For example, you may need to configure an SQL database.)
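Concretely, a migration might look like this sketch (the backup file name is your choice):

```
sa-learn --backup > bayes-backup.txt
# edit your configuration: set bayes_store_module (and, for SQL,
# any additional options the new backend requires)
sa-learn --restore bayes-backup.txt
```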

For a more lengthy description of how this works, go to http://www.paulgraham.com/ and see "A Plan for Spam". It's reasonably readable, even if statistics make me break out in hives.

The short semi-inaccurate version: Given training, a spam heuristics engine can take the most "spammy" and "hammy" words and apply probabilistic analysis. Furthermore, once given a basis for the analysis, the engine can continue to learn iteratively by applying both the non-Bayesian and Bayesian rulesets together to create evolving "intelligence".
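As a rough illustration of that probabilistic analysis, here is a minimal Graham-style sketch. This is not SpamAssassin's actual implementation; the token counts, corpus sizes, and clamping values below are invented for the example.

```python
# Minimal Graham-style Bayesian scoring sketch (illustrative only).

def token_spam_prob(spam_count, ham_count, n_spam, n_ham):
    """Estimate P(spam | token) from per-corpus occurrence counts."""
    if spam_count + ham_count == 0:
        return 0.4                 # unseen tokens lean slightly hammy
    good = 2.0 * ham_count         # Graham doubles ham counts to bias
    bad = float(spam_count)        # against false positives
    p = (bad / n_spam) / (bad / n_spam + good / n_ham)
    return min(0.99, max(0.01, p))

def combined_prob(probs):
    """Combine per-token probabilities into one message score."""
    prod = 1.0
    inv = 1.0
    for p in probs:
        prod *= p
        inv *= 1.0 - p
    return prod / (prod + inv)

# Example: three tokens, corpora of 1000 spam and 1000 ham messages
probs = [token_spam_prob(s, h, 1000, 1000)
         for s, h in [(700, 10), (5, 400), (300, 30)]]
score = combined_prob(probs)
print(round(score, 3))  # prints: 0.639
```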

SpamAssassin 2.50 and later supports Bayesian spam analysis, in the form of the BAYES rules. This is a new feature, quite powerful, and is disabled until enough messages have been learnt.

i.e.: a straightforward rule that matches, say, "VIAGRA" is easy to understand. If it generates a false positive or false negative, it is fairly easy to understand why.

With Bayesian analysis, it's all probabilities - "because the past says it is likely as this falls into a probabilistic distribution common to past spam in your systems". Tell that to your users! Tell that to the client when he asks "what can I do to change this". (By the way, the answer in this case is "use whitelisting".)

I suggest several thousand of each, placed in SPAM and HAM directories or mailboxes. Yes, you MUST hand-sort this - otherwise the results won't be much better than SpamAssassin on its own. Verify the spamminess/haminess of EVERY message. You're urged to avoid using a publicly available corpus (sample) - this must be taken from YOUR mail server, if it is to be statistically useful. Otherwise, the results may be pretty skewed.

This can be applied to either ham or spam that has run through the sa-learn processes. It's a bit of a hammer, really, lowering the weighting of the specific tokens in that message (only if that message has been processed before).

If you don't have a corpus of mail saved to learn, you can let SpamAssassin automatically learn the mail that you receive. If you are autolearning from scratch, the amount of mail you receive will determine how long until the BAYES_* rules are activated.

Learning filters require training to be effective. If you don't train them, they won't work. In addition, you need to train them with new messages regularly to keep them up-to-date, or their data will become stale and impact accuracy.

You need to train with both spam and ham mails. One type of mail alone will not have any effect.

Note that if your mail folders contain things like forwarded spam, discussions of spam-catching rules, etc., this will cause trouble. You should avoid scanning those messages if possible. (An easy way to do this is to move them aside, into a folder which is not scanned.)

If the messages you are learning from have already been filtered through SpamAssassin, the learner will compensate for this. In effect, it learns what each message would look like if you had run spamassassin -d over it in advance.

Another thing to be aware of is that you should typically aim to train with at least 1000 spam messages and 1000 ham messages, if possible. More is better, but anything over about 5000 messages does not improve accuracy significantly in our tests.

Be careful that you train from the same source -- for example, if you train on old spam, but new ham mail, then the classifier will think that a mail with an old date stamp is likely to be spam.

It's also worth noting that training with a very small quantity of ham will produce atrocious results. You should aim to train with at least as much ham data as spam, or more if possible.

On an on-going basis, it is best to keep training the filter to make sure it has fresh data to work from. There are various ways to do this:

This means keeping a copy of all or most of your mail, separated into spam and ham piles, and periodically re-training using those. It produces the best results, but requires more work from you, the user.

(An easy way to do this, by the way, is to create a new folder for 'deleted' messages, and instead of deleting them from other folders, simply move them in there instead. Then keep all spam in a separate folder and never delete it. As long as you remember to move misclassified mails into the correct folder set, it is easy enough to keep up to date.)

Another way to train is to chain the results of the Bayesian classifier back into the training, so it reinforces its own decisions. This is only safe if you then retrain it based on any errors you discover.

SpamAssassin does not support this method, due to experimental results which strongly indicate that it does not work well, and since Bayes is only one part of the resulting score presented to the user (while Bayes may have made the wrong decision about a mail, it may have been overridden by another system).

Also called 'auto-learning' in SpamAssassin. Based on statistical analysis of the SpamAssassin success rates, we can automatically train the Bayesian database with a certain degree of confidence that our training data is accurate.

It should be supplemented with some supervised training in addition, if possible.

This is the default, but can be turned off by setting the SpamAssassin configuration parameter bayes_auto_learn to 0.
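For example, in your configuration file (e.g. local.cf):

```
bayes_auto_learn 0
```

To keep auto-learning enabled but tune when it fires, the bayes_auto_learn_threshold_nonspam and bayes_auto_learn_threshold_spam score thresholds can be set instead.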

This means training on a small number of mails, then only training on messages that SpamAssassin classifies incorrectly. This works, but it takes longer to get it right than a full training session would.

The database of tokens, containing the tokens learnt, their count of occurrences in ham and spam, and the timestamp when the token was last seen in a message.

This database also contains some 'magic' tokens, as follows: the version number of the database, the number of ham and spam messages learnt, the number of tokens in the database, and timestamps of: the last journal sync, the last expiry run, the last expiry token reduction count, the last expiry timestamp delta, the oldest token timestamp in the database, and the newest token timestamp in the database.

This is a database file, using DB_File. The database 'version number' is 0 for databases from 2.5x, 1 for databases from certain 2.6x development releases, 2 for 2.6x, and 3 for 3.0 and later releases.

A map of Message-Id and some data from headers and body to what that message was learnt as. This is used so that SpamAssassin can avoid re-learning a message it has already seen, and so it can reverse the training if you later decide that message was learnt incorrectly.

While SpamAssassin is scanning mails, it needs to track which tokens it uses in its calculations. To avoid the contention of having each SpamAssassin process attempting to gain write access to the Bayes DB, the token timestamps are written to a 'journal' file which will later (either automatically or via sa-learn --sync) be used to synchronize the Bayes DB.

Also, through the use of bayes_learn_to_journal, or when using the --no-sync option with sa-learn, the actual learning data will be placed in the journal for later synchronization. This is typically useful for high-traffic sites, to avoid the same contention as stated above.

Since SpamAssassin can auto-learn messages, the Bayes database files could increase perpetually until they fill your disk. To control this, SpamAssassin performs journal synchronization and bayes expiration periodically when certain criteria (listed below) are met.

SpamAssassin can sync the journal and expire the DB tokens either manually or opportunistically. A journal sync is due if --sync is passed to sa-learn (manual), or if the following is true (opportunistic):

Go through each of the DB's tokens. Starting at 12hrs, calculate whether or not the token would be expired (based on the difference between the token's atime and the DB's newest token atime) and keep a count. Work outwards from 12hrs exponentially by powers of 2, i.e. 12hrs * 1, 12hrs * 2, 12hrs * 4, 12hrs * 8, and so on, up to 12hrs * 512 (6144hrs, or 256 days).

The larger the atime delta, the fewer tokens will be expired; the count rises as the delta shrinks. So, starting at the largest atime delta, figure out which delta will expire the most tokens without going above the goal expiration count. Use this to choose the atime delta, unless one of the following occurs:

If the expire run gets past this point, it will continue to the end. A new DB is created, since the majority of DB libraries don't shrink the DB file when tokens are removed; so we do the "create new, migrate old to new, remove old, rename new" shuffle.
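The delta search described above can be sketched as follows. This is a simplified illustration, not SpamAssassin's actual code; the token ages and goal count are invented example data.

```python
# Illustrative sketch of the expiry atime-delta search.

HOUR = 3600

def choose_expiry_delta(token_atimes, newest_atime, goal):
    """Pick the atime delta that expires the most tokens while
    staying at or below the goal expiration count."""
    # Candidate deltas: 12hrs * 2**k for k = 0..9 (12hrs .. 256 days)
    candidates = [12 * HOUR * (2 ** k) for k in range(10)]
    best = None
    # Walk from the largest delta (fewest expired) downwards; keep the
    # delta that expires the most tokens without exceeding the goal.
    for delta in sorted(candidates, reverse=True):
        expired = sum(1 for t in token_atimes if newest_atime - t > delta)
        if expired <= goal:
            best = (delta, expired)
        else:
            break
    return best

# Example: 8 tokens with ages of 1hr to 400 days, goal of 5 expirations
ages_hours = [1, 20, 30, 100, 400, 2000, 5000, 9600]
newest = 100_000_000
atimes = [newest - h * HOUR for h in ages_hours]
delta, n = choose_expiry_delta(atimes, newest, goal=5)
print(delta // HOUR, n)  # prints: 48 5
```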