I've found documentation across CPAN to be generally clear and complete. Web::Scraper's documentation doesn't go into much detail, but there is enough to experiment with, and it notes that "There are many examples in the eg/ dir packaged in this distribution. It is recommended to look through these".

Documentation should give readers enough to meet their needs without forcing them to study the source. That said, if you are comfortable reading the source, you will inevitably develop a deeper understanding of the module and its limitations. It's also a good idea to look at a module's dependencies: in the case of Web::Scraper, HTML::TreeBuilder::XPath and HTML::Selector::XPath appear to handle the XPath expressions, so they may provide additional syntax, documentation, and examples.

If this is the first time you've approached web scraping in Perl, bear in mind that although Web::Scraper has been designed to simplify the process, it is worth researching "rawer" techniques such as HTML::Element, which give you more control at every stage of the scraping process.
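To make the above concrete, here is a minimal sketch of the Web::Scraper style being discussed, built from its documented `scraper`/`process`/`scrape` interface. The URL and the selectors are hypothetical placeholders, not from any real page; note how a selector beginning with `/` is treated as XPath (handled by the dependencies mentioned above), while a plain selector is treated as CSS.

```perl
use strict;
use warnings;
use Web::Scraper;
use URI;

# Declare what to extract: a CSS selector for the page title,
# and an XPath expression for link hrefs (hypothetical structure).
my $scraper = scraper {
    process 'title',            'title'   => 'TEXT';
    process '//div[@id="content"]//a', 'links[]' => '@href';
};

# Scrape a (placeholder) URL and inspect the results.
my $res = $scraper->scrape( URI->new('http://example.com/') );
print "Title: $res->{title}\n";
print "Link: $_\n" for @{ $res->{links} || [] };
```

The declarative `process` calls are where Web::Scraper saves you effort; with the "rawer" HTML::Element approach you would instead parse the document into a tree and walk or query it yourself, which is more code but gives you control at every step.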