In machine learning, feature hashing, also known as the hashing trick[1] (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly, rather than looking the indices up in an associative array.

Motivating example

In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag-of-words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature
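The mechanism described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: it assumes an MD5-based hash for reproducibility (real implementations such as scikit-learn's FeatureHasher typically use a faster non-cryptographic hash like MurmurHash3), and the function name and dimension are chosen here for the example.

```python
import hashlib

def hash_features(tokens, dim=16):
    """Map arbitrary string tokens to a fixed-length count vector
    by hashing each token directly to an index (the hashing trick)."""
    vec = [0] * dim
    for tok in tokens:
        # Hash the token to a large integer, then reduce it modulo the
        # vector size to get an index; no token-to-index dictionary needed.
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        vec[h % dim] += 1
    return vec

# Example: a tiny "document" vectorized without any vocabulary lookup.
doc = "the quick brown fox jumps over the lazy dog the"
v = hash_features(doc.split())
```

Note that distinct tokens may collide in the same bucket; in practice the dimension is chosen large enough (e.g. 2^18 or more) that collisions are rare.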