This thesis presents a reference model and computational methods for the automatic detection of the affective states of people interacting with artificial systems. The model can be adopted to analyze and compare many Affective Computing studies, evaluating similarities and differences among the proposed approaches. When we first approached Affective Computing and started reviewing the literature, we noted that the same problem was being tackled from different points of view. While the main question - how to automatically recognize emotions - was shared among the various studies, a wide range of dissimilar experiments was conducted. These heterogeneous approaches, however, shared some key aspects of the Emotion Detection problem in Affective Computing. Nevertheless, without a well-defined model, it was difficult to understand deeply which aspects (variables) were the most relevant and how they were related to each other. This lack of a common model motivated us to formalize the problem. Sharing a general model helps to better approach and analyze the problem and to systematically verify hypotheses. This led to an improved formalization of the problem toward a valid and effective formulation. We introduce a machine-centered model that characterizes both the interaction between a subject and a machine and the affective state of the subject. The model is general enough to represent many different experimental protocols, as well as more practical scenarios, proposed by both the Psychophysiology and the Affective Computing communities. To complete the model, we discuss some methodological issues related to Emotion Detection. An agreed methodology should provide the guidelines to follow in the formal use and evaluation of the model. 
In fact, we propose a methodology aimed at guiding the use of the model in designing experiments, data acquisition, data preprocessing (e.g., artifact removal, data normalization, and feature extraction), data analysis, and validation (e.g., how to obtain a correct estimation). Guidelines are provided for the selection of stimuli and questionnaires, and for controlling the possible sources of noise and their influence on the measurements. After the formal definition of the model and the methodological discussion, we present our case study, whose purpose is to advance the knowledge of Affect Detection in video games. In particular, we are interested in investigating whether physiological measurements can discriminate the player's preference between different video game experiences. A number of critical issues needed to be addressed during the design of the experiment. We studied whether the physiological response could provide more robust and interesting insight, since classical metrics, such as in-game performance, are not necessarily a good estimate of the preference of a generic player. The answer to this question is an important aspect for the development of an adaptive video game able to offer different game experiences according to the preferences inferred from the player's physiological state. In principle, different players have different preferences, given their experience, their mood, the emotions they feel, and many other factors. If we could identify the player's preference on-line, we could adapt the game to match it. Different analyses have been performed, from a preference learning approach to the canonical classification approach using k-NN with three classes of enjoyment. A comparison of performances between physiological features and in-game features showed that the latter better predict the user-reported preference. However, a deeper analysis showed that in-game features were more correlated with the task than with the preference itself. 
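The canonical classification setup mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual pipeline: the feature matrix, labels, and neighbor count below are synthetic stand-ins, shown only to make the k-NN, three-class formulation concrete.

```python
# Minimal sketch of 3-class enjoyment classification with k-NN.
# All data here is synthetic; the thesis uses physiological and
# in-game features recorded from real play sessions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = game sessions, columns = hypothetical features
# (e.g., mean heart rate, skin conductance level, ...).
X = rng.normal(size=(90, 4))
y = rng.integers(0, 3, size=90)  # 0 = low, 1 = medium, 2 = high enjoyment

# k-NN is distance-based, so features are standardized first;
# k = 5 is an arbitrary illustrative choice.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

Standardizing before k-NN matters because physiological channels live on very different scales, and unscaled distances would be dominated by the largest-valued feature.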
This result was obtained thanks to a novel approach, derived from our model, that exploits the correlation between stimuli, emotion, and ground truth. When the classes of preferences are unbalanced, the proposed method helps to find the features most correlated with the reported preference.
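One way to make this idea concrete is to compare, for each feature, its correlation with the reported preference against its correlation with the task (stimulus) label, and retain features that track the preference more than the task. The sketch below is an illustrative simplification of that principle, not the exact method of the thesis; all names and data are synthetic.

```python
# Illustrative sketch: separate features correlated with the reported
# preference from features that merely track the task/stimulus.
# Synthetic data only; not the thesis's actual algorithm or dataset.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 120
task = rng.integers(0, 3, size=n)                      # which game variant was played
preference = rng.choice([0, 1], size=n, p=[0.8, 0.2])  # unbalanced preference labels

# Three hypothetical features: one driven by preference,
# one driven by the task, one pure noise.
X = np.column_stack([
    preference + rng.normal(scale=0.5, size=n),
    task + rng.normal(scale=0.5, size=n),
    rng.normal(size=n),
])

selected = []
for j in range(X.shape[1]):
    r_pref, _ = spearmanr(X[:, j], preference)
    r_task, _ = spearmanr(X[:, j], task)
    # Keep the feature only if it is more correlated with the
    # preference than with the task.
    if abs(r_pref) > abs(r_task):
        selected.append(j)
print(selected)
```

In this toy setting the preference-driven feature survives the filter while the task-driven one is discarded, mirroring the finding that in-game features predicted the task rather than the preference itself.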