The email dataset was later purchased by Leslie
Kaelbling at MIT, and turned out to have a number of integrity
problems. Several folks at SRI, notably Melinda Gervasio,
worked hard to correct these problems, and it is thanks to them
(not me) that the dataset is available. The dataset here does not
include attachments, and some messages have been deleted "as part
of a redaction effort due to requests from affected
employees". Invalid email addresses were converted to something of the form user@enron.com whenever possible (i.e., when the recipient was specified in a parseable format such as "Doe, John" or "Mary K. Smith") and to no_address@enron.com when no recipient was specified.
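The address cleanup described above can be sketched in a few lines of Python. The actual rules the SRI team applied were not published, so the name-parsing heuristics and the function name here are illustrative assumptions, not the real procedure:

```python
import re

def normalize_recipient(raw):
    """Map a raw recipient string to an @enron.com address, loosely
    mimicking the cleanup described above. The real heuristics used on
    the corpus are unpublished; this is an illustrative guess."""
    raw = raw.strip()
    # Looks like a valid address already: keep it as-is.
    if re.fullmatch(r"[\w.+-]+@[\w.-]+\.\w+", raw):
        return raw
    # "Doe, John" -> john.doe@enron.com
    m = re.fullmatch(r"(\w+),\s*(\w+)(?:\s+\w\.?)?", raw)
    if m:
        return f"{m.group(2).lower()}.{m.group(1).lower()}@enron.com"
    # "Mary K. Smith" -> mary.smith@enron.com (middle initial dropped)
    m = re.fullmatch(r"(\w+)(?:\s+\w\.?)?\s+(\w+)", raw)
    if m:
        return f"{m.group(1).lower()}.{m.group(2).lower()}@enron.com"
    # No parseable recipient at all.
    return "no_address@enron.com"
```

For example, `normalize_recipient("Doe, John")` yields `john.doe@enron.com`, while an empty or unparseable recipient falls through to `no_address@enron.com`.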

I get a number of questions about this corpus each week, which I am unable to answer, mostly because they deal with preparation issues and other details that I just don't know about. If you ask me a question and I don't answer, please don't feel slighted.

I am distributing this dataset as a resource for researchers
who are interested in improving current email tools, or
understanding how email is currently used. This data is valuable;
to my knowledge it is the only substantial collection of "real"
email that is public; other datasets are not public because of privacy concerns. In using this dataset, please be sensitive to the privacy of the people involved (and remember that many of these people were certainly not involved in any of the actions that precipitated the investigation).

The March 2, 2004 version of the dataset and the August 21, 2009 version are no longer being distributed. If you are using this dataset for your work, you are requested to replace it with the newer version of the dataset below, or to make the appropriate changes to your local copy. A total of four messages have been removed since the original version of the dataset.

Amanda Hutton and colleagues published a paper in 2012 that uses the Enron dataset as part of the test corpus for their work on crowdsourcing evaluations of human- versus computer-generated classification explanations: see Hutton, Amanda, Alexander Liu, and Cheryl Martin. "Crowdsourcing evaluations of classifier interpretability." In Proceedings of the 2012 AAAI Spring Symposium on Wisdom of the Crowd.