VW begins to find the pattern: a +1 label if the capital letters outnumber the
lowercase, and -1 otherwise.

How well trained is our model? Let’s run 100 tests on new random examples:

for i in range(num_tests):
    label, features = get_example()
    # Give the features to the model, withholding the label
    response = vw.get_prediction(features)
    prediction = response.prediction
    # Test whether the floating-point prediction is in the right direction
    if cmp(prediction, 0) == label:
        num_good_tests += 1

(For logistic regression, a prediction value greater than zero represents
a label of +1; that is why cmp(prediction, 0) is used.)
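Note that the built-in cmp() was removed in Python 3. If you are adapting this loop, the same sign test can be written explicitly; this small helper is a stand-in for cmp(x, 0), not part of either library:

```python
def sign(x):
    """Return 1 for positive x, -1 for negative, 0 for zero --
    equivalent to Python 2's cmp(x, 0)."""
    return (x > 0) - (x < 0)

# The prediction counts as correct when its sign matches the +1/-1 label.
print(sign(0.73))   # a positive prediction maps to label +1
print(sign(-2.4))   # a negative prediction maps to label -1
```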

The most important Vowpal Wabbit feature not discussed above is namespaces. VW
uses namespaces to divide features into groups, which some of its advanced
features depend on. Without going into detail about why you would use them,
here’s how to use namespaces in Wabbit Wappa.
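Under the hood, namespaces appear in Vowpal Wabbit's raw input format as pipe-delimited sections, each beginning with the namespace name. Wabbit Wappa builds these lines for you; as an illustrative sketch of what gets sent to VW (the helper function below is not part of either library):

```python
def vw_line(label, namespaces):
    """Build a raw VW input line from a mapping of
    namespace name -> [(feature, value), ...] pairs."""
    sections = []
    for name, features in namespaces.items():
        feats = " ".join("%s:%g" % (feat, value) for feat, value in features)
        sections.append("|%s %s" % (name, feats))
    return "%s %s" % (label, " ".join(sections))

line = vw_line(1, {"MetricFeatures": [("height", 1.5), ("length", 2.0)]})
print(line)
# 1 |MetricFeatures height:1.5 length:2
```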

In the VW() constructor, each named argument corresponds
to a Vowpal Wabbit option. Single-character keys are mapped to single-dash options;
e.g. b=20 yields -b 20. Multi-character keys map to double-dash options:
quiet=True yields --quiet.

Boolean values are interpreted as flags: the option is included if True and omitted
if False (or not given). All non-boolean values are treated as option arguments,
as in the -b example above.
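The mapping can be pictured with a toy helper (illustrative only, not Wabbit Wappa's actual implementation):

```python
def kwargs_to_options(**kwargs):
    """Translate keyword arguments into a VW command-line fragment,
    following the rules described above."""
    parts = []
    for key, value in sorted(kwargs.items()):
        dashes = "-" if len(key) == 1 else "--"
        if isinstance(value, bool):
            if value:  # True -> bare flag; False -> omitted entirely
                parts.append(dashes + key)
        else:       # non-boolean values become option arguments
            parts.append("%s%s %s" % (dashes, key, value))
    return " ".join(parts)

print(kwargs_to_options(b=20, quiet=True, loss_function="logistic"))
# -b 20 --loss_function logistic --quiet
```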

Note that Wabbit Wappa makes no attempt to validate the inputs or
ensure they are compatible with its functionality. For instance, changing the
default predictions='/dev/stdout' will probably make that VW() instance
non-functional.

Active Learning is a training approach that falls somewhere between supervised and
unsupervised learning. When labeled data is very expensive to obtain (such as when
users must be solicited for their preferences), an Active Learning approach assigns
an “importance” value to each unlabeled example, so that only the most critical
labels need be acquired.
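The core idea can be sketched as a toy selection step (hypothetical data and helper, not Wabbit Wappa's API): given importance scores for a pool of unlabeled examples, only the highest-scoring ones are sent out for labeling.

```python
def select_for_labeling(scored_examples, budget):
    """Pick the `budget` examples with the highest importance scores.

    scored_examples: list of (importance, example) pairs.
    Returns the examples whose labels are worth acquiring.
    """
    ranked = sorted(scored_examples, key=lambda pair: pair[0], reverse=True)
    return [example for _, example in ranked[:budget]]

pool = [(0.9, "ex1"), (0.1, "ex2"), (0.7, "ex3"), (0.2, "ex4")]
print(select_for_labeling(pool, budget=2))
# ['ex1', 'ex3']
```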

Vowpal Wabbit’s Active Learning
interface requires you to start a VW instance in server mode and communicate with it
via a socket. Wabbit Wappa abstracts all of that away, providing the same interface for
both regular and active learning: