For some researchers - particularly those working in AI - the term
`agent' has a stronger and more specific meaning than that sketched
out above. These researchers generally mean an agent to be a computer
system that, in addition to having the properties identified above, is
either conceptualised or implemented using concepts that are more
usually applied to humans. For example, it is quite common in AI to
characterise an agent using mentalistic notions, such as
knowledge, belief, intention, and obligation [Shoham, 1993]. Some
AI researchers have gone further, and considered emotional
agents [Bates, 1994; Bates et al., 1992a]. (Lest the reader suppose that this
is just pointless anthropomorphism, it should be noted that there are
good arguments in favour of designing and building agents in terms of
human-like mental states - see section 2.) Another
way of giving agents human-like attributes is to represent them
visually, perhaps by using a cartoon-like graphical icon or an
animated face [Maes, 1994a] - for obvious reasons, such
agents are of particular importance to those interested in
human-computer interfaces.
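
To make the mentalistic characterisation more concrete, the sketch below shows one minimal way such an agent might be modelled in code, with its state described in terms of beliefs, desires, and intentions in the style of BDI architectures. All names and the (precondition, goal) encoding of desires are illustrative assumptions for this sketch only, not constructs from the literature cited above:

```python
from dataclasses import dataclass, field

@dataclass
class MentalisticAgent:
    """A toy agent characterised by mentalistic notions (a sketch,
    not a faithful implementation of any published architecture)."""
    beliefs: set = field(default_factory=set)       # what the agent takes to be true
    desires: set = field(default_factory=set)       # states it would like to bring about
    intentions: list = field(default_factory=list)  # desires it has committed to

    def perceive(self, fact):
        """Update beliefs with a new observation."""
        self.beliefs.add(fact)

    def deliberate(self):
        """Commit to any desire whose precondition is currently believed.
        Here a desire is a (precondition, goal) pair -- an assumption
        made purely to keep this sketch self-contained."""
        for precondition, goal in self.desires:
            if precondition in self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)

agent = MentalisticAgent()
agent.desires.add(("door_open", "enter_room"))
agent.perceive("door_open")
agent.deliberate()
# the agent now holds the intention "enter_room"
```

The point of such a model is not anthropomorphism for its own sake: describing the agent's state in these terms gives the designer a compact, human-readable vocabulary for specifying and predicting its behaviour, which is the argument developed in section 2.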