Cognitive Sciences Stack Exchange is a question and answer site for practitioners, researchers, and students in cognitive science, psychology, neuroscience, and psychiatry.

I have a design where I present participants with a series of decisions, and then ask them about some of these decisions later. Similar work has found that even when a particular decision was made in error, and people 'should' realise this, participants carry on and attempt to justify the decision anyway. I presented similar decisions, but with a computer program acting as 'the experimenter' and providing text boxes for input. Participants in my study mentioned their errors much more often - they didn't carry on justifying an error.

This was an unexpected result. I suspect some of this may be due to participants not wanting to admit to making an error when they feel more identifiable, but I don't know where to start in terms of literature on the topic!

So essentially - what are the differences in people's willingness to admit and correct their own, or even others', errors between social and computer interactions? And further, what causes these differences?

2 Answers

If I understand your question correctly, you have found that participants are more willing to admit to an error if they interact with a computer rather than a human being.

I would suggest reading up on related findings in the field of social desirability. Social desirability is the tendency of humans to think of themselves (self-deceptive enhancement) and to present themselves to others (impression management) in a favorable way. Studies suggest that the effects of social desirability--or more precisely impression management--on the reporting of sensitive or stigmatizing behavior are attenuated by interaction with computers rather than people (Turner et al., 1998; Joinson, 1999; King & Miles, 1995; Davis, 1999; Reips, 2011).

I suspect this effect is mostly a result of self-administration rather than being specific to computers, but it may be a plausible explanation for your finding: admitting mistakes is much more threatening to a person's self-worth than justifying a decision--especially when an experimenter is present.

Crash made a great point that discusses why people won't admit error to others.

But it isn't just about social desirability. To explain why people WILL admit error to a computer, we also need to consider personal affirmation and self-direction.

Cognitive dissonance - that stubborn attitude an individual adopts after making a choice - can be blamed (in part) for this inability to admit mistakes. It is the reason that people become brand loyal to overpriced products: unwilling to admit they've overspent, they justify the decision to themselves by changing their mind about it. In their revised view, they didn't overspend, because the item (and the purchase decision) carries some intangible benefit (in this case, social desirability). Given the opportunity to re-adjust their decision, with no obligation to remain "true" to their choice - and no need to change their self-image - individuals will choose the rational course and correct an error. Cognitive dissonance only arises when your self-image is out of step with your actions, and it is easy to correct an action with a computer.

In addition, the ability to self-direct, self-select, and self-motivate plays a large role in admitting errors. A person-computer interaction is essentially an individual activity, in which admitting error is not threatening. A social interaction begins with certain assumptions (consistency) that an individual activity does not (consistent to whom?). As people are motivated to learn and grow (according to some), a threat-free, obligation-free, expectation-free activity provides that opportunity.