Following are user-supplied samples of WWW::Mechanize in action.
If you have samples you'd like to contribute,
please send 'em to <andy@petdance.com>.

You can also look at the t/*.t files in the distribution.

Please note that these examples are not intended to do any specific task.
For all I know,
they're no longer functional because the sites they hit have changed.
They're here to give examples of how people have used WWW::Mechanize.

Note that the examples are in reverse order of my having received them,
so the freshest examples are always at the top.

Here's a pair of programs from Nat Torkington,
editor for O'Reilly Media and co-author of the Perl Cookbook.

Rael [Dornfest] discovered that you can easily find out how many Starbucks there are in an area by searching for "Starbucks".
So I wrote a silly scraper for some old census data and came up with some Starbucks density figures.
There's no meaning to these numbers thanks to errors from using old census data coupled with false positives in Yahoo search (e.g.,
"Dodie Starbuck-Your Style Desgn" in Portland OR).
But it was fun to waste a night on.

Here are the top twenty cities in descending order of population,
with the amount of territory each Starbucks has.
E.g., a New York, NY Starbucks covers 1.7 square miles of ground.
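The density figure is just land area divided by hit count. Here's a toy sketch of that arithmetic; New York's roughly 302.6 square miles of land area is real, but the 178-hit count is invented purely to illustrate how a figure like 1.7 falls out:

```perl
#!/usr/bin/perl
# Sketch of the density arithmetic only.  The hit count below is
# invented for illustration; only the division is the point.
use strict;
use warnings;

# Square miles of territory per store: land area over store count.
sub sq_miles_per_store {
    my ($land_area, $store_count) = @_;
    return $land_area / $store_count;
}

# New York, NY: ~302.6 sq mi of land; pretend the search returned 178 hits.
printf "%.1f square miles per Starbucks\n", sq_miles_per_store( 302.6, 178 );
```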

This program takes filenames of images from the command line and uploads them to a www.photobucket.com folder. John Beppu, the author, says:

I had 92 pictures I wanted to upload, and doing it through a browser would've been torture. But thanks to mech, all I had to do was `./pb.upload *.jpg` and watch it do its thing. It felt good. If I had more time, I'd implement WWW::Photobucket on top of WWW::Mechanize.
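The core trick in a bulk uploader like that is that WWW::Mechanize fills a file input like any other form field and submits it as a multipart/form-data upload. Here's a minimal sketch of the shape of such a script; the form name and field name are hypothetical, not Photobucket's real ones:

```perl
#!/usr/bin/perl
# Sketch of a bulk uploader in the spirit of pb.upload.  The form
# name 'upload' and field name 'the_file' are hypothetical -- check
# the real page's HTML for the actual names.
use strict;
use warnings;

sub upload_images {
    my ($album_url, @files) = @_;
    require WWW::Mechanize;
    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get( $album_url );
    for my $file (@files) {
        # A file input is filled like any other field; WWW::Mechanize
        # sends its contents as a multipart/form-data upload on submit.
        $mech->form_name( 'upload' );          # hypothetical form name
        $mech->field( 'the_file', $file );     # hypothetical field name
        $mech->submit();
        print "Uploaded $file\n";
        $mech->back();   # return to the upload form for the next file
    }
}

upload_images( shift(@ARGV), @ARGV ) if @ARGV;
```

Run as `./pb.upload http://example.com/album *.jpg` and it walks the list one file at a time.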

Steve McConnell, author of the landmark Code Complete, has put up the chapters for the 2nd edition in PDF format on his website. I needed to download them to take to Kinko's to have printed. This little program did it for me.
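A program like that boils down to finding every link ending in ".pdf" and saving each one to disk. Here's a minimal sketch, assuming the chapters are ordinary links on a single page; the URL you pass in is up to you, and the `:content_file` option streams each response straight to a local file:

```perl
#!/usr/bin/perl
# Sketch of a "grab all the PDFs on a page" downloader, assuming the
# chapters are plain <a href="...pdf"> links on one page.
use strict;
use warnings;

# basename_of: pull the filename out of a URL path.
sub basename_of {
    my ($url) = @_;
    my ($name) = $url =~ m{([^/]+)$};
    return $name;
}

sub fetch_pdfs {
    my ($page_url) = @_;
    require WWW::Mechanize;
    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get( $page_url );
    for my $link ( $mech->find_all_links( url_regex => qr/\.pdf$/i ) ) {
        my $file = basename_of( $link->url );
        print "Saving $file\n";
        # ':content_file' writes the response body straight to disk.
        $mech->get( $link->url, ':content_file' => $file );
    }
}

fetch_pdfs( $ARGV[0] ) if @ARGV;
```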

This program was going to be a hack in Spidering Hacks, but got cut at the last minute, probably because it's against IMDB's TOS to scrape from it. I present it here as an example, not as a suggestion that you break their TOS.

Last I checked, it didn't work because IMDB's HTML had changed and no longer matched what the program expects, but it's still good as sample code.

A quick little utility to search the CPAN and fire up a browser with a results page.

#!/usr/bin/perl
# turn on perl's safety features
use strict;
use warnings;
use WWW::Mechanize;

# work out the name of the module we're looking for
my $module_name = $ARGV[0]
    or die "Must specify module name on command line\n";

# create a new browser
my $browser = WWW::Mechanize->new();

# tell it to get the main page
$browser->get("http://search.cpan.org/");

# okay, fill in the box with the name of the
# module we want to look up, and submit the form
$browser->form_number(1);
$browser->field("query", $module_name);
$browser->click();

# follow the link that matches the module name
# (the *_regex matchers require a compiled regex, not a plain string)
$browser->follow_link( text_regex => qr/\Q$module_name\E/ );
my $url = $browser->uri;

# launch a browser...
system('galeon', $url);

exit(0);