We could also use first from the List::Util module (which comes
standard with Perl), which returns the first element in the list
for which the block evaluates to true.

use List::Util qw(first);

my $found = first { $_ eq $check_name } @usernames;

Each of these is a fine solution, although the solution using
foreach provides the best performance in most circumstances.
It avoids searching the entire list (which grep does, even in
scalar context) and the overhead of a subroutine call for each
comparison (which is how first applies its block).
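
For comparison, a minimal sketch of that foreach solution, using the
same @usernames and $check_name as above, can stop as soon as it
finds a match:

my $found = 0;
foreach my $username (@usernames) {
    if ($username eq $check_name) {
        $found = 1;
        last;    # stop searching as soon as we have a match
    }
}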

Searching for multiple items

Let's say we have a list of items and we want to test each one for
existence in a much larger list. A simple solution is to repeat one
of our single-item searches, once for each item we want to check.
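As a minimal sketch, assuming our book orders are in @wanted_books
and the library catalogue in @library_books (names chosen here for
illustration), that might look like:

my @found_books;
foreach my $book (@wanted_books) {
    # A full linear search of the catalogue for every wanted book.
    if ( grep { $_ eq $book } @library_books ) {
        push @found_books, $book;
    }
}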

If we have many library books (approximately 10,000) and a moderate
demand for book orders (approximately 150), one third of which do not
exist in the library, then we find that foreach is moderately faster
(26%) than grep, which in turn is moderately faster (26%) than first.
Each of these takes between a third and a half of a second to run on
modern hardware.

Using a hash

If we're doing lots of searches, it becomes much faster to
store our books in a hash and use a hash lookup:

my $found = exists $library_index{$book_name};

How much faster? About 100,000 times faster on average when compared to a
linear search with foreach on a list of 10,000 items. Of course, there is
the additional overhead of building the hash in the first place. For
two-character keys, building a hash takes twice as long as building an
array, and longer still for longer keys. It's just not worth the extra
effort to build a hash if we're only going to perform a single search.

However, in our library example we're walking over our arrays multiple
times, making this a prime candidate for using a hash to speed things up.
We can use a hash slice to add our keys to the hash in a single step.
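
A minimal sketch of that, again assuming the catalogue lives in
@library_books:

my %library_index;

# Hash slice: every book title becomes a key in a single assignment.
# (We only care about the keys, so the values are left undefined.)
@library_index{@library_books} = ();

After this, each lookup is a single exists test, as shown above.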

Building a hash of all the library books (our first example) and then
searching it is much faster (1928%) than our simple linear searches with
foreach, while building a hash of our wanted books (our second example)
is faster again (160%) than that.

Both of our examples require that Perl walk through each of our arrays at
least once, either to build the hash (using our hash slice) or to search
the hash (using our foreach loop). Our second example is the faster of the
two, since we're building a smaller hash, which takes less time. We also
have the potential to exit our main loop early if we find all the books
we're looking for, as the sketch below shows.
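
A sketch of that second approach, with the same assumed names as before:

# Index the (smaller) wanted list rather than the whole catalogue.
my %wanted;
@wanted{@wanted_books} = ();

my @found_books;
foreach my $book (@library_books) {
    if ( exists $wanted{$book} ) {
        push @found_books, $book;
        delete $wanted{$book};   # one fewer book to look for
        last unless %wanted;     # stop once every wanted book is found
    }
}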

However, our second solution is only superior if we throw away our hashes at
the end, and rebuild them whenever we need to order new books. If we're
able to retain our hashes in a persistent process, then it's much faster
to index all the library books once, and then loop over the smaller list
each time:
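A minimal sketch, assuming a long-lived process where %library_index
can be built once and reused:

# Build the index once, when the process starts...
my %library_index;
@library_index{@library_books} = ();

# ...then each new order only walks the short wanted list.
sub find_books {
    my @wanted = @_;
    return grep { exists $library_index{$_} } @wanted;
}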

In fact, it's blindingly fast, running at more than 78 times the speed of
the fastest code that builds the hash each time. When you build a hash for
the purposes of searching, it's often worthwhile to hang onto it for as
long as possible.

(Here lb-hash refers to building the hash from the library books, and
w-hash to building the hash from the wanted books.) Note that these
results will vary slightly from machine to machine and from run to run.

The rates tell us that we can run the solution using first 2.03
times per second, and the solution using foreach 3.21 times per
second. The results with fewer library books (about 5,000) are
much the same.

Benchmark is a standard Perl module, and its documentation can be found
on CPAN ( http://search.cpan.org/perldoc ) or can be obtained
with perldoc Benchmark.
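
As a minimal sketch of how such comparisons can be run, using
Benchmark's cmpthese function with our assumed variables from above:

use Benchmark qw(cmpthese);

# A negative count asks Benchmark to run each snippet for at least
# that many CPU seconds, then report the rates side by side.
cmpthese( -5, {
    'foreach' => sub {
        my $found = 0;
        foreach my $book (@library_books) {
            if ($book eq $book_name) { $found = 1; last; }
        }
    },
    'hash' => sub {
        my $found = exists $library_index{$book_name};
    },
});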

Conclusion

There are reasons why you may have large lists instead of hashes. For
example, you may need the elements to be ordered, or an array may make more
sense at other points of your program. However, if you are likely to be
doing lots of random accesses into a list, including looking for items
which do not exist, then it may be more efficient to create a hash. In our
situation, even with the cost of creating a hash of all of the library books,
we discovered a massive performance increase of over 1900% compared to using
a simple foreach loop. Using a better approach and creating a hash of our
wanted books was an improvement of more than 5100% over the simple foreach
loop. If we can use a hash instead of a list in the first place, our
lookups are faster still.