=Disk Images=

;The Real Data Corpus
: Between 1998 and 2006, [[Simson Garfinkel|Garfinkel]] acquired more than 1,250 hard drives on the secondary market. These hard drive images have proven invaluable in a range of studies, such as the development of new forensic techniques and the documentation of the sanitization practices of computer users.

;The Honeynet Project Forensic Challenge
: In 2001 the Honeynet Project distributed a set of disk images and asked participants to conduct a forensic analysis of a compromised computer. Entries were judged and posted for all to see. The drive images and write-ups are still available online:
: http://www.honeynet.org/challenge/index.html
: Other challenges were released in 2010 and 2011, and two contained partial disk images.

;Honeynet Project Scans of the Month
: The Honeynet Project provided network scans in the majority of its Scan of the Month challenges. Some of the challenges provided disk images instead. The Sleuth Kit's wiki lists Brian Carrier's responses to those challenges.

;Computer Forensic Reference Data Sets
: The [http://www.cfreds.nist.gov/ Computer Forensic Reference Data Sets] project from [[National Institute of Standards and Technology|NIST]] hosts a few sample cases that may be useful for examiners to practice with.
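
Whichever image set you start from, it is prudent to verify each download against the hash published with the corpus before analysis. Below is a minimal Python sketch; the image file name is hypothetical, so substitute your own download and compare the digest to the published value.

<pre>
"""Verify a raw disk image against a published digest before analysis.

Minimal sketch; the image file name below is hypothetical.
"""
import hashlib


def hash_image(path: str, algorithm: str = "md5", chunk_size: int = 1 << 20) -> str:
    """Hash a potentially very large image in fixed-size chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    # Hypothetical file name -- use the image you actually downloaded.
    print(hash_image("sample-drive.raw"))
</pre>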

=Network Packets and Traces=

''The DARPA Intrusion Detection Evaluation.'' In 1998, 1999 and 2000 the Information Systems Technology Group at MIT Lincoln Laboratory created a test network complete with simulated servers, clients, clerical workers, programmers, and system managers. Baseline traffic was collected. The systems on the network were then “attacked” by simulated hackers. Some of the attacks were well known at the time, while others were developed for the purpose of the evaluation.

''The [http://www.wide.ad.jp/project/wg/mawi.html MAWI Working Group] of the [http://www.wide.ad.jp/ WIDE Project]'' maintains a [http://tracer.csl.sony.co.jp/mawi/ Traffic Archive]. In it you will find:

* a daily trace of a trans-Pacific T1 line;
* a daily trace of an IPv6 line connected to the 6Bone;
* a daily trace of another trans-Pacific line (a 100 Mbps link) in operation since 2006/07/01.

Traffic traces are captured with tcpdump, and IP addresses in the traces are then scrambled with a modified version of [[tcpdpriv]].
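
A stream-oriented pcap reader is enough to explore these traces. The following Python sketch assumes the third-party scapy package and a hypothetical trace file name; it tallies the ten most common source addresses. Keep in mind that the addresses it prints are the tcpdpriv-scrambled ones, not real hosts.

<pre>
"""Tally the most common (scrambled) source addresses in a MAWI trace.

Sketch only: assumes scapy is installed (pip install scapy) and that a
daily trace has been downloaded and decompressed to the hypothetical
file named below.
"""
from collections import Counter

from scapy.all import IP, PcapReader

sources = Counter()
with PcapReader("mawi-daily-trace.pcap") as reader:  # hypothetical file name
    for pkt in reader:
        if IP in pkt:
            sources[pkt[IP].src] += 1

for addr, count in sources.most_common(10):
    print(f"{addr:15s} {count}")
</pre>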

==Wireshark==

The open source Wireshark project (formerly known as Ethereal) has a website with many network packet captures:

* http://wiki.wireshark.org/SampleCaptures

* http://tesla.hpl.hp.com/public_software/

==Other==

GitHub user "markofu" has aggregated several other network captures into a Git repository:

* https://github.com/markofu/pcaps

=Email messages=

''The Enron Corpus'' consists of email messages that were seized by the Federal Energy Regulatory Commission during its investigation of Enron.

* http://www.cs.cmu.edu/~enron
* http://www.enronemail.com/
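
The CMU distribution unpacks into a maildir/ tree with one directory of plain-text RFC 822 messages per custodian, so the Python standard library is enough to walk it. A rough sketch (the maildir path is the tarball's default; adjust to your unpack location):

<pre>
"""Walk the CMU Enron maildir tree and parse messages with the stdlib.

Sketch assuming the CMU tarball layout: maildir/<custodian>/... with
each message stored as a plain-text RFC 822 file.
"""
import email
import email.policy
from itertools import islice
from pathlib import Path

paths = (p for p in Path("maildir").rglob("*") if p.is_file())
for msg_path in islice(paths, 20):  # first 20 messages is plenty for a demo
    msg = email.message_from_bytes(msg_path.read_bytes(),
                                   policy=email.policy.default)
    print(msg["From"], "->", msg["To"], "|", msg["Subject"])
</pre>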

The NIST '''Text REtrieval Conference 2007''' has released a public spam corpus:

* http://plg.uwaterloo.ca/~gvcormac/spam/
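
The corpus ships with a label file mapping each message to spam or ham. Assuming the trec07p layout, in which full/index holds lines such as "spam ../data/inmail.1" with paths relative to the full/ directory, a short Python sketch can split the corpus by label:

<pre>
"""Split the TREC 2007 spam corpus into spam/ham lists via its label file.

Sketch only: assumes the trec07p directory layout described above and a
hypothetical unpack location.
"""
from pathlib import Path

corpus = Path("trec07p")  # hypothetical unpack location
spam, ham = [], []
for line in (corpus / "full" / "index").read_text().splitlines():
    label, relpath = line.split()
    path = (corpus / "full" / relpath).resolve()
    (spam if label == "spam" else ham).append(path)

print(f"{len(spam)} spam, {len(ham)} ham")
</pre>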

''Email Messages Corpus Parsed from W3C Lists'' (for TRECENT 2005):

* http://tides.umiacs.umd.edu/webtrec/trecent/parsed_w3c_corpus.html


=Text Files=

==Log files==

The [http://trec.nist.gov Text REtrieval Conference (TREC)] has made available a series of [http://trec.nist.gov/data.html text collections].

==American National Corpus==

The [http://www.americannationalcorpus.org/ American National Corpus (ANC) project] is creating a massive collection of American English from 1990 onward. The goal is a corpus of at least 100 million words, comparable to the British National Corpus.

==British National Corpus==

The [http://www.natcorp.ox.ac.uk/ British National Corpus (BNC)] is a 100 million word collection of written and spoken English from a variety of sources.

=Images=

A set of freely redistributable images from all over the world, used for content-based image retrieval.

=Voice=

==CALLFRIEND==

CALLFRIEND is a database of recorded English conversations. A total of 60 recorded conversations are available from the University of Pennsylvania at a cost of $600.

==TalkBank==

TalkBank is an online database of spoken language. The project was originally funded between 1999 and 2004 by two National Science Foundation grants; ongoing support is provided by two NSF grants and one NIH grant.

=Other Corpora=

* Under an NSF grant, Kam Woods and [[Simson Garfinkel]] created a website for digital corpora [http://digitalcorpora.org]. The site includes a complete training scenario, including disk images, packet captures, and exercises.

* The [http://corpus.canterbury.ac.nz/ Canterbury Corpus] is a set of files used for testing lossless compression algorithms. The corpus consists of 11 natural files, 4 artificial files, 3 large files, and a file with the first million digits of pi. You can also find a copy of the Calgary Corpus at the website, which was the de facto standard for testing lossless compression algorithms in the 1990s (see the sketch after this list).

* The [http://traces.cs.umass.edu/index.php/Main/HomePage UMass Trace Repository] provides network, storage, and other traces to the research community for analysis. The repository is supported by grant #CNS-323597 from the National Science Foundation.

* [http://arstechnica.com/science/news/2009/02/aaas-60tb-of-behavioral-data-the-everquest-2-server-logs.ars Sony has made 60 TB of EverQuest 2 server logs available to researchers.] What's there? "Everything."

* UCI's [http://networkdata.ics.uci.edu/resources.php Network Data Repository] provides data sets for a diverse collection of networks. Some of the networks are computer networks; others are not.
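
To illustrate how the Canterbury Corpus is typically used, the following Python sketch reports zlib compression ratios for each file in the corpus. The cantrbry directory name is an assumption; point it at wherever you unpacked the tarball.

<pre>
"""Report zlib compression ratios for files in the Canterbury Corpus.

Sketch only: assumes the corpus has been unpacked into ./cantrbry
(directory name is an assumption; adjust to your extraction path).
"""
import zlib
from pathlib import Path

for path in sorted(Path("cantrbry").iterdir()):
    data = path.read_bytes()
    packed = zlib.compress(data, 9)  # maximum compression level
    print(f"{path.name:15s} {len(data):>9d} -> {len(packed):>9d} "
          f"({len(packed) / len(data):.2%})")
</pre>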
