Since 1996, the Wayback Machine has been archiving cached pages of websites onto its large cluster of Linux nodes.[citation needed] It revisits sites on occasion (see technical details below) and archives a new version.[6] Sites can also be captured on the fly by visitors who enter the site's URL into a search box.[citation needed] The intent is to capture and archive content that otherwise would be lost whenever a site is changed or closed down.[citation needed] The overall vision of the machine's creators is to archive the entire Internet.[citation needed]

Information had been kept on digital tape for five years, with Kahle occasionally allowing researchers and scientists to tap into the clunky database.[7] When the archive reached its fifth anniversary, in 2001, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley.[8]

Software has been developed to "crawl" the web and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews (Usenet) bulletin board system, and downloadable software.[11] The information collected by these "crawlers" does not include all the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. To overcome inconsistencies in partially cached websites, Archive-It.org was developed in 2005 by the Internet Archive as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content and to create digital archives.[12]

Crawls are contributed from various sources, some imported from third parties and others generated internally by the Archive.[6] For example, crawls have been contributed by the Sloan Foundation and Alexa, crawls have been run by IA on behalf of NARA and the Internet Memory Foundation, and mirrors of Common Crawl have been imported.[6] The "Worldwide Web Crawls" have been running since 2010 and capture the global Web.[13][6]

The frequency of snapshot captures varies per website.[6] Websites in the "Worldwide Web Crawls" are included in a "crawl list", with the site archived once per crawl.[6] A crawl can take months or even years to complete, depending on its size.[6] For example, "Wide Crawl Number 13" started on January 9, 2015, and was completed on July 11, 2016.[14] However, there may be multiple crawls ongoing at any one time, and a site might be included in more than one crawl list, so how often a site is crawled varies widely.[6]
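The capture history that results from these overlapping crawls can be inspected through the Wayback Machine's public CDX API. The following is a minimal sketch, not the Archive's own tooling, assuming the web.archive.org/cdx/search/cdx endpoint; the example domain and the parameter choices are illustrative assumptions.

import json
import urllib.request

# Ask the CDX API for the first ten capture timestamps of an illustrative URL.
query = ("https://web.archive.org/cdx/search/cdx"
         "?url=example.com&output=json&fl=timestamp&limit=10")
with urllib.request.urlopen(query) as resp:
    rows = json.load(resp)

# The first row is a header (["timestamp"]); the remaining rows are capture
# timestamps in YYYYMMDDhhmmss form, one per snapshot, which makes the uneven
# capture frequency of a given page directly visible.
for (timestamp,) in rows[1:]:
    print(timestamp)

Listing the raw timestamps in this way is usually enough to see that captures cluster around the periods when a page was included in an active crawl.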

In 2011 a new, improved version of the Wayback Machine, with an updated interface and fresher index of archived content, was made available for public testing.[18]

In March 2011, it was said on the Wayback Machine forum that, "the Beta of the new Wayback Machine has a more complete and up-to-date index of all crawled materials into 2010, and will continue to be updated regularly. The index driving the classic Wayback Machine only has a little bit of material past 2008, and no further index updates are planned, as it will be phased out this year".[19]

In January 2013, the Internet Archive announced that the Wayback Machine had reached a milestone of 240 billion archived URLs.[20]

In October 2013, the Internet Archive announced the "Save a Page" feature,[21] which allows any Internet user to archive the contents of a URL. The feature has since been abused as a means of hosting malicious binaries.[22][23]
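As a rough illustration of how "Save a Page" can be driven programmatically, the sketch below assumes the public web.archive.org/save/<URL> endpoint; the target URL and request header are illustrative, and the newer Save Page Now API adds authentication, options, and rate limits not shown here.

import urllib.request

# Ask the Wayback Machine to capture an illustrative page right now.
target = "https://example.com/"  # hypothetical page to archive
req = urllib.request.Request(
    "https://web.archive.org/save/" + target,
    headers={"User-Agent": "save-page-sketch/0.1"},
)
with urllib.request.urlopen(req) as resp:
    # If the capture succeeds, the Content-Location header (when present)
    # points at the newly created snapshot.
    print(resp.status, resp.headers.get("Content-Location"))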

As of December 2014, the Wayback Machine contained almost nine petabytes of data and was growing at a rate of about 20 terabytes each week.[24]

Historically, the Wayback Machine respected the robots exclusion standard (robots.txt) in determining whether a website would be crawled and, if it had already been crawled, whether its archives would be publicly viewable. Website owners could opt out of the Wayback Machine through the use of robots.txt. The rules were applied retroactively: if a site blocked the Internet Archive, any previously archived pages from the domain were immediately rendered unavailable as well. In addition, the Internet Archive stated, "Sometimes a website owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests."[39] The website also says: "The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection."[40]
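From a site owner's side, the historical opt-out amounted to a robots.txt rule addressed to the Archive's crawler, historically identified by the "ia_archiver" user agent token. The sketch below, a minimal example using Python's standard robotparser module and an illustrative site URL, simply checks whether such a rule is in place.

from urllib import robotparser

# Read the site's robots.txt and ask whether the Archive's historical crawler
# token is allowed to fetch the home page; a "User-agent: ia_archiver" /
# "Disallow: /" block would make this False. The domain is illustrative.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
print(rp.can_fetch("ia_archiver", "https://example.com/"))

Under the policy described above, such a rule not only stopped future crawling but also hid any snapshots that had already been archived.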

Wayback's retroactive exclusion policy is based in part upon Recommendations for Managing Removal Requests and Preserving Archival Integrity, published by the School of Information Management and Systems at the University of California, Berkeley, in 2002, which gives a website owner the right to block access to the site's archives.[41] Wayback has complied with this policy to help avoid expensive litigation.[42]

The retroactive exclusion policy began to relax in 2017, when the Internet Archive stopped honoring robots.txt on U.S. government and military web sites for both crawling and displaying web pages. As of April 2017, the Archive was exploring whether to ignore robots.txt more broadly, not just for U.S. government websites.[43][44][45][46]

The site is frequently used by journalists and citizens to review dead websites, dated news reports or changes to website contents. Its content has been used to hold politicians accountable and expose battlefield lies.[47]

The 2017 March for Science originated from a discussion on Reddit in which a user reported having visited Archive.org and discovered that all references to climate change had been deleted from the White House website. In response, another user commented, "There needs to be a Scientists' March on Washington".[49][50][51]

In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website that was causing the Wayback Machine to retroactively remove access to previous versions of pages it had archived from Netbula's site, pages that Chordiant believed would support its case.[52]

Netbula objected to the motion on the ground that defendants were asking to alter Netbula's website and that they should have subpoenaed Internet Archive for the pages directly.[53] An employee of Internet Archive filed a sworn statement supporting Chordiant's motion, however, stating that it could not produce the web pages by any other means "without considerable burden, expense and disruption to its operations."[52]

Magistrate Judge Howard Lloyd in the Northern District of California, San Jose Division, rejected Netbula's arguments and ordered them to disable the robots.txt blockage temporarily in order to allow Chordiant to retrieve the archived pages that they sought.[52]

In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. R. Evid. Serv. 673 (N.D. Ill. Oct. 15, 2004), a litigant attempted to use the Wayback Machine archives as a source of admissible evidence, perhaps for the first time. Telewizja Polska is the provider of TVP Polonia, and EchoStar operates the Dish Network. Prior to the trial proceedings, EchoStar indicated that it intended to offer Wayback Machine snapshots as proof of the past content of Telewizja Polska's website. Telewizja Polska brought a motion in limine to suppress the snapshots on the grounds of hearsay and unauthenticated source, but Magistrate Judge Arlander Keys rejected Telewizja Polska's assertion of hearsay and denied TVP's motion in limine to exclude the evidence at trial.[54][55] At the trial, however, District Court Judge Ronald Guzman overruled Magistrate Keys' findings,[citation needed] and held that neither the affidavit of the Internet Archive employee nor the underlying pages (i.e., the Telewizja Polska website) were admissible as evidence. Judge Guzman reasoned that the employee's affidavit contained both hearsay and inconclusive supporting statements, and the purported web page printouts were not self-authenticating.[citation needed]

Provided some additional requirements are met (e.g., providing an authoritative statement of the archivist), the United States Patent and Trademark Office and the European Patent Office will accept date stamps from the Internet Archive as evidence of when a given Web page was accessible to the public. These dates are used to determine whether a Web page is available as prior art, for instance, in examining a patent application.[56]

There are technical limitations to archiving a website, and as a consequence, it is possible for opposing parties in litigation to misuse the results provided by website archives. This problem can be exacerbated by the practice of submitting screenshots of web pages in complaints, answers, or expert witness reports when the underlying links are not exposed and can therefore contain errors. For example, archives such as the Wayback Machine do not fill out forms and therefore do not include the contents of non-RESTful e-commerce databases in their archives.[57]

In Europe, the Wayback Machine could be interpreted as violating copyright laws. Only the content creator can decide where their content is published or duplicated, so the Archive would have to delete pages from its system upon request of the creator.[58] The exclusion policies for the Wayback Machine may be found in the FAQ section of the site.[59]

In late 2002, the Internet Archive removed various sites that were critical of Scientology from the Wayback Machine.[60] An error message stated that this was in response to a "request by the site owner".[61] Later, it was clarified that lawyers from the Church of Scientology had demanded the removal and that the site owners did not want their material removed.[62]

In 2003, Harding Earley Follmer & Frailey defended a client from a trademark dispute using the Archive's Wayback Machine. The attorneys were able to demonstrate that the claims made by the plaintiff were invalid, based on the content of their website from several years prior. The plaintiff, Healthcare Advocates, then amended their complaint to include the Internet Archive, accusing the organization of copyright infringement as well as violations of the DMCA and the Computer Fraud and Abuse Act. Healthcare Advocates claimed that, since they had installed a robots.txt file on their website, even if only after the initial lawsuit was filed, the Archive should have removed all previous copies of the plaintiff's website from the Wayback Machine; however, some material remained publicly visible on Wayback.[63] The lawsuit was settled out of court after Wayback fixed the problem.[64]

On April 25, 2007, Internet Archive and Suzanne Shell jointly announced the settlement of their lawsuit.[65] The Internet Archive said it "...has no interest in including materials in the Wayback Machine of persons who do not wish to have their Web content archived. We recognize that Ms. Shell has a valid and enforceable copyright in her Web site and we regret that the inclusion of her Web site in the Wayback Machine resulted in this litigation." Shell said, "I respect the historical value of Internet Archive's goal. I never intended to interfere with that goal nor cause it any harm."[69]

Kevin Vaughan suspects that, over the long term of multiple generations, "next to nothing" will survive in a useful way, except "if we have continuity in our technological civilization", in which case "a lot of the bare data will remain findable and searchable".[80]

Some observers note that the Internet Archive, which describes itself as built for the long term,[81] is working furiously to capture data before it disappears, yet without any long-term infrastructure to speak of.[82]

Rossi, Alexis (2013-10-25). "Fixing Broken Links on the Internet". archive.org. San Francisco, CA, US: Collections Team, the Internet Archive. Archived from the original on 2014-11-07. Retrieved 2015-03-25. We have added the ability to archive a page instantly and get back a permanent URL for that page in the Wayback Machine. This service allows anyone – wikipedia editors, scholars, legal professionals, students, or home cooks like me – to create a stable URL to cite, share or bookmark any information they want to still have access to in the future.

Advisory provided by Google (2015-03-25). "Safe Browsing Diagnostic page for archive.org". google.com/safebrowsing. Mountain View, CA, US: Google. Archived from the original on 2015-04-06. Retrieved 2015-03-25. 2015-03-25: Part of this site was listed for suspicious activity 138 time(s) over the past 90 days. ... What happened when Google visited this site? ... Of the 42410 pages we tested on the site over the past 90 days, 450 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2015-03-25, and the last time suspicious content was found on this site was on 2015-03-25. ... Malicious software includes 169 trojan(s), 126 virus, 43 backdoor(s).

Claburn, Thomas (2007-03-16). "Colorado Woman Sues To Hold Web Crawlers To Contracts". New York, NY, US: InformationWeek, UBM Tech, UBM LLC. Archived from the original on 2014-09-04. Retrieved 2015-03-25. Computers can enter into contracts on behalf of people. The Uniform Electronic Transactions Act (UETA) says that a 'contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms and agreements.'

Samson, Martin H., Phillips Nizer LLP (2007). "Internet Archive v. Suzanne Shell". internetlibrary.com. Internet Library of Law and Court Decisions. Archived from the original on 2014-08-03. Retrieved 2015-03-25. More importantly, held the court, Internet Archive's mere copying of Shell's site, and display thereof in its database, did not constitute the requisite exercise of dominion and control over defendant's property. Importantly, noted the court, the defendant at all times owned and operated her own site. Said the Court: 'Shell has failed to allege facts showing that Internet Archive exercised dominion or control over her website, since Shell's complaint states explicitly that she continued to own and operate the website while it was archived on the Wayback machine. Shell identifies no authority supporting the notion that copying documents is by itself enough of a deprivation of use to support conversion. Conversely, numerous circuits have determined that it is not.'

brewster (2007-04-25). "Internet Archive and Suzanne Shell Settle Lawsuit". archive.org. Denver, CO, USA: Internet Archive. Archived from the original on 2010-12-05. Retrieved 2015-03-25. Both parties sincerely regret any turmoil that the lawsuit may have caused for the other. Neither Internet Archive nor Ms. Shell condones any conduct which may have caused harm to either party arising out of the public attention to this lawsuit. The parties have not engaged in such conduct and request that the public response to the amicable resolution of this litigation be consistent with their wishes that no further harm or turmoil be caused to either party.