The Senseval Committee invites proposals for tasks to be run as part of SemEval-1/Senseval-4. As the nature of the tasks in Senseval has evolved to include semantic analysis tasks beyond word sense disambiguation, the Senseval Committee is pleased to announce the debut of "SemEval: Evaluating Semantics". In 2007, the SemEval workshop will retain the Senseval name to help ease the transition to the new name.

We welcome proposals for any task that can test an automatic system for semantic analysis of text, whether application dependent or independent. We especially encourage tasks for different languages, cross-lingual tasks, and tasks relevant to particular NLP applications such as machine translation, information retrieval, and information extraction.

It will help us if tasks are designed in such a way that they can be run and scored automatically from a central website (as in Senseval-3). We will be happy to help with task design, data formatting, and so on.

The time period for SemEval-1/Senseval-4 has not yet been finalised, but it will likely be held over a 45-day period sometime during the first quarter of 2007.

SUBMISSION DETAILS

Proposals for tasks will ideally contain:

- A description of the task (max 1 page)
- How the training/testing data will be built and/or procured
- The evaluation methodology to be used, including clear evaluation criteria
- The anticipated availability of necessary resources to the participants (copyright, etc.)
- The resources required to prepare the task (computation and annotation time, etc.)

If you are not yet in a position to provide outlines of all of these, that is acceptable, but please give some thought to each and present a sketch of your initial ideas. We will gladly give feedback.