You want to eliminate duplicate values from a list, such as when you build the list from a file or from the output of another command. This recipe is equally applicable to removing duplicates as they occur in input and to removing duplicates from an array you've already populated.

The question at the heart of the matter is "Have I seen this element before?" Hashes are ideally suited to such lookups. The first technique ("Straightforward") builds up the array of unique values as we go along, using a hash to record whether something is already in the array.
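A minimal sketch of this straightforward approach, assuming the input is in `@list` (a name chosen for illustration):

```perl
my @list = qw(a b a c b a d);

my %seen = ();   # records which values are already in @uniq
my @uniq = ();
foreach my $item (@list) {
    unless ($seen{$item}) {
        # if we get here, we have not seen this value before
        $seen{$item} = 1;
        push @uniq, $item;
    }
}
print "@uniq\n";   # a b c d
```

The hash value is only ever set to 1; it is a pure membership flag, and the order of `@uniq` matches the order of first appearance in the input.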

The second technique ("Faster") is the most natural way to write this sort of thing in Perl. It creates a new entry in the hash every time it sees an element that hasn't been seen before, using the `++` operator. This has the side effect of making the hash record the number of times the element was seen, although here we use the hash only for its property of working like a set.
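A sketch of this version, again assuming the input is in `@list`:

```perl
my @list = qw(a b a c b a d);

my %seen = ();
my @uniq = ();
foreach my $item (@list) {
    # $seen{$item}++ evaluates to 0 (false) the first time a value
    # appears, so each value is pushed exactly once; as a side effect
    # the hash ends up counting how often each value occurred
    push @uniq, $item unless $seen{$item}++;
}
print "@uniq\n";       # a b c d
print "$seen{a}\n";    # 3
```

The post-increment is the key: the old value is tested before the count goes up, so only the first sighting of each element is false.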

The third example ("Similar but with user function") is similar to the second, but rather than storing the item away, we call some user-defined function with that item as its argument. If that's all we're doing, keeping a spare array of the unique values is unnecessary.
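This might be sketched as follows; `some_func` is a placeholder name for whatever per-item work you need, and `@list` again stands in for the input:

```perl
my @list = qw(a b a c b a d);

my %seen = ();
foreach my $item (@list) {
    # call the function only the first time each value is met;
    # no array of unique values is kept
    some_func($item) unless $seen{$item}++;
}

# hypothetical worker function, for demonstration only
sub some_func {
    my ($item) = @_;
    print "processing $item\n";
}
```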

The next mechanism ("Faster but different") waits until it's done processing the list to extract the unique keys from the `%seen` hash. This may be convenient, but the original order has been lost.
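A sketch of this two-pass version, with `@list` again standing in for the input:

```perl
my @list = qw(a b a c b a d);

# first pass: just count occurrences
my %seen = ();
foreach my $item (@list) {
    $seen{$item}++;
}

# the unique values are the hash keys, returned in Perl's internal
# hash order, not the order they appeared in the input
my @uniq = keys %seen;
print scalar(@uniq), "\n";   # 4
```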

The final approach ("Faster and even more different") merges the construction of the `%seen` hash with the extraction of the unique elements. This preserves the original order of the elements.
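A one-line sketch of this merged version using `grep`, with `@list` standing in for the input:

```perl
my @list = qw(a b a c b a d);

my %seen = ();
# grep passes through only the elements for which the block is true;
# $seen{$_}++ is false exactly once per distinct value, so each
# element survives only on its first appearance
my @uniq = grep { !$seen{$_}++ } @list;
print "@uniq\n";   # a b c d
```

This is the same post-increment trick as before, folded into a single `grep` so the filtering and the bookkeeping happen in one pass.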

Using a hash to record the values has two side effects: processing long lists can take a lot of memory, and the list returned by `keys` is in neither alphabetical, numeric, nor insertion order.

Here's an example of processing input as it is read. We use `who` to gather information on the current user list, and then extract the username from each line before updating the hash:
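A sketch of that idea (the output naturally depends on who is logged in when you run it):

```perl
# gather the current sessions with who(1); each line starts with a
# username, which may appear several times if a user has multiple
# logins, so a hash counts each username only once
my %ucnt = ();
for (`who`) {
    s/\s.*\n//;     # strip everything after the username
    $ucnt{$_}++;    # record another sighting of this user
}

# extract the unique usernames; sort them, since hash order is arbitrary
my @users = sort keys %ucnt;
print "users logged in: @users\n";
```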