Set Max simultaneous connections (data transfer) to 2.
We do this to minimize the load on the server that keeps a copy of your website in its archive.
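
If you ever script a similar archive crawl outside the tool, the same courtesy limit can be expressed as a worker pool capped at two connections. The following is a minimal Python sketch only, not the tool's implementation; the archive URLs and fetch helper are hypothetical placeholders.

# Minimal sketch: cap concurrency at 2, mirroring the
# "Max simultaneous connections (data transfer) = 2" setting.
# archive_urls and fetch() are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

archive_urls = [
    "https://archive.example.org/webarchives/20190101000000/http://example.com/",
    "https://archive.example.org/webarchives/20190101000000/http://example.com/about",
]

def fetch(url):
    # At most two of these run at the same time, keeping the load
    # on the archive server low.
    with urlopen(url) as response:
        return response.read()

with ThreadPoolExecutor(max_workers=2) as pool:
    for url, body in zip(archive_urls, pool.map(fetch, archive_urls)):
        print(url, len(body), "bytes")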

Scan website > Analysis filters

In Limit analysis of internal URLs to which match as "relative path" OR "text" OR "regex" in list,
add a limit-to filter that restricts which page URLs get downloaded and analyzed. An example could be:
::webarchives/20(0|1)[-0-9A-Za-z_]+/https?://(www\.)?example\.com/.

Note: By adding such filters, you can limit the crawl and analysis to the exact parts you need. However, since some archive services redirect pages to other dates and URL versions (e.g. with and without the www. prefix), your filters should not be too specific.
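
To check that a filter is loose enough before running a full scan, you can test the regex part against sample snapshot URLs. This Python sketch assumes the leading "::" merely marks the list entry as a regex and is not part of the pattern itself; the candidate URLs are hypothetical.

import re

pattern = re.compile(r"webarchives/20(0|1)[-0-9A-Za-z_]+/https?://(www\.)?example\.com/.")

# Variants an archive service might redirect between (hypothetical URLs):
candidates = [
    "webarchives/20190101000000/http://example.com/page",
    "webarchives/20190101000000/https://www.example.com/page",
    "webarchives/19981231000000/http://example.com/page",   # outside the 20xx range
]

for url in candidates:
    print("match" if pattern.search(url) else "no match", "->", url)
# Expected: both 20xx snapshots match (www and non-www); the 1998 one does not.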

Scan website > Output filters

In Limit output of internal URLs to which match as "relative path" OR "text" OR "regex" in list,
add a limit-to filter that restricts which page URLs get downloaded and included in the output. An example could be:
::webarchives/20(0|1)[-0-9A-Za-z_]+/http://example\.com/.

Note: Using this requires extra care and is only relevant if you need to limit the download very precisely to the exact parts you need.
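
As a sketch of why extra care is needed: the output filter above is stricter than the analysis filter, so snapshot variants the crawler follows can still be dropped from the output. Assuming again that "::" only marks the entry as a regex, and using a hypothetical redirected URL:

import re

analysis = re.compile(r"webarchives/20(0|1)[-0-9A-Za-z_]+/https?://(www\.)?example\.com/.")
output = re.compile(r"webarchives/20(0|1)[-0-9A-Za-z_]+/http://example\.com/.")

# A redirected snapshot variant (https + www), hypothetical URL:
url = "webarchives/20190101000000/https://www.example.com/page"

print("analysis filter:", bool(analysis.search(url)))  # True  - crawled and analyzed
print("output filter:  ", bool(output.search(url)))    # False - excluded from output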

While still testing the configuration, you may want to uncheck the option: Scan website > Crawler options > Apply "webmaster" and "output" filters after website scan stops