-- | This module provides all the settable options in shpider.
module Network.Shpider.Options where

import Data.Maybe

import Network.Shpider.Curl.Opts
import Network.Shpider.Curl.Types

import Network.Shpider.State
import Network.Shpider.URL
import Network.Shpider.TextUtils

-- | Setting this to `True` will forbid `download` and `sendForm` from
-- visiting any site which isn't on the domain shared by the URL given in
-- `setStartPage`.
stayOnDomain :: Bool -> Shpider ()
stayOnDomain b = do
   shpider <- get
   put $ shpider { dontLeaveDomain = b }

-- | Set the CurlTimeout option.  Requests will time out after this number
-- of seconds.
setTimeOut :: Long -> Shpider ()
setTimeOut s = do
   shpider <- get
   let isTimeout c =
          case c of
             ( CurlTimeout _ ) ->
                True
             _ ->
                False
       timeoutPresent =
          not $ null $ filter isTimeout $ curlOpts shpider
   put $ shpider
      { curlOpts =
           if not timeoutPresent
              then CurlTimeout s : curlOpts shpider
              else map ( \ c -> if isTimeout c then CurlTimeout s else c )
                       ( curlOpts shpider )
      }

-- | Set the start page of your shpidering antics.
-- The start page must be an absolute URL; if it isn't, this will raise an
-- error.
setStartPage :: String -> Shpider ()
setStartPage uncleanUrl = do
   shpider <- get
   if isAbsoluteUrl url
      then put $ shpider { startPage = url }
      else error "The start page must be an absolute URL"
   where
   url =
      escapeSpaces uncleanUrl

-- | Return the starting URL, as set by `setStartPage`.
getStartPage :: Shpider String
getStartPage = do
   shpider <- get
   return $ startPage shpider

-- | If onlyDownloadHtml is True, then during `download`, shpider will make
-- a HEAD request to see if the content type is text\/html or
-- application\/xhtml+xml, and only if it is will it then make a GET
-- request.
onlyDownloadHtml :: Bool -> Shpider ()
onlyDownloadHtml b = do
   st <- get
   put $ st { htmlOnlyDownloads = b }

-- | Set the given page as the `currentPage`.
setCurrentPage :: Page -> Shpider ()
setCurrentPage p = do
   shpider <- get
   put $ shpider { currentPage = p }

-- | Return the current page.
getCurrentPage :: Shpider Page
getCurrentPage = do
   sh <- get
   return $ currentPage sh

-- | When keepTrack is set, shpider will remember the pages which have been
-- `visited`.
keepTrack :: Shpider ()
keepTrack = do
   shpider <- get
   put $ shpider { visited = Just [ ] }
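
-- A minimal usage sketch of the options above, kept as a comment so the
-- module still compiles.  It assumes a runner such as `runShpider` is
-- exported from Network.Shpider (as in released versions of the package);
-- the URL is purely illustrative:
--
-- > runShpider $ do
-- >    setStartPage "http://example.com"
-- >    stayOnDomain True
-- >    setTimeOut 30
-- >    onlyDownloadHtml True
-- >    keepTrack
--
-- Each option is just a state update in the `Shpider` monad, so they can
-- be set in any order before the first `download`.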