The transitivity of trust

Suppose you tell a close friend a secret. You consider them trustworthy, and don’t fear for its release. Now suppose they ask to tell the secret to a friend of theirs whom you don’t know, claiming that this person is also highly trustworthy. I think most people would feel significantly less secure agreeing to that.

In general, people trust their friends. Their friends trust their own friends, and so on. But I think people trust friends of friends, or friends of friends of friends, less than proportionally: if you act as if there’s a one percent chance of your friend failing you, you don’t act as if there’s only a 1-(.99*.99) ≈ 2% chance of your friend’s friend failing you.
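As a minimal sketch of the arithmetic here (assuming, as the naive transitive model does, that each person in the chain independently fails you with probability p):

```python
# Naive "transitive trust" model: each of the n people in a chain of
# friends independently fails you with probability p. If trust simply
# compounded multiplicatively, the chance of at least one failure
# somewhere along the chain would be:
def chain_failure_probability(p: float, n: int) -> float:
    """P(at least one of n independent links in the chain fails)."""
    return 1 - (1 - p) ** n

print(chain_failure_probability(0.01, 1))  # ~0.01: your friend alone
print(chain_failure_probability(0.01, 2))  # ~0.0199 = 1 - (.99 * .99)
```

The post’s observation is that people discount friends of friends more steeply than this formula alone would predict.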

One possible explanation is that we generally expect the people we trust to have much worse judgement about who to trust than about the average thing. But why would this be so? Perhaps everyone does just have worse judgement about who to trust than they do about other things. But to account for what we observe, people would on average have to think themselves better than others in this regard. Which might not be surprising, except that their supposed advantage over others would have to be larger in this domain than in other domains – otherwise they would just trust others less in general. Why would this be?

Another possibility I have heard suggested is that we trust our friends more than is warranted by their true probability of defecting, for non-epistemic purposes. In which case, which purposes?

Trusting a person involves choosing to make your own payoffs depend on their actions in a circumstance where it would not be worth doing so if you thought they would defect with high probability. If you think they are likely to defect, you only rely on them when there are particularly large gains from them cooperating combined with small losses from them defecting. As they become more likely to cooperate, trusting them in more cases becomes worthwhile. So trusting for non-epistemic purposes involves relying on a person in a case where their probability of defecting should make it not worthwhile, for some other gain.

What other gains might you get? Such trust might signal something, but consistently relying too much on people doesn’t seem to make one look good in any way obvious to me. It might signal to that person that you trust them, but that just brings us back to the question of how trusting people excessively might benefit you.

Maybe merely relying on a person in such a case could increase their probability of taking the cooperative action? This wouldn’t explain the intransitivity on its own, since we would need a model where trusting a friend’s friend doesn’t cause the friend’s friend to become more trustworthy.

Another possibility is that merely trusting a person does not get such a gain, but a pair trusting one another does. This might explain why you can trust your friends above their reliability, but not their friends. By what mechanism could this happen?

An obvious answer is that a pair who keep interacting might cooperate a lot more than they naturally would to elicit future cooperation from the other. So you trust your friends the correct amount, but they are unusually trustworthy toward you. My guess is that this is what happens.

So here the theory is that you trust friends substantially more than friends of friends because friends have the right incentives to cooperate, whereas friends of friends don’t. But if your friends are really cooperative, why would they give you unreliable advice – to trust their own friends?

One answer is that your friends believe trustworthiness is a property of individuals, not relationships. Since their friends are trustworthy for them, they recommend them to you. But this leaves you with the question of why your friends are wrong about this, yet you know it. Particularly since generalizing this model, everyone’s friends are wrong, and everyone knows it.

One possibility is that everyone learns these things from experience, and they categorize the events in obvious ways that are different for different people. Your friend Eric sees a series of instances of his friend James being reliable and so he feels confident that James will be reliable. You see a series of instances of different friends of friends not being especially reliable and see James most easily as one of that set. It is not that your friends are more wrong than you, but that everyone is more wrong when recommending their friends to others than when deciding whether to trust such recommendations, as a result of sample bias. Eric’s sample of James mostly contains instances of James interacting with Eric, so he does overstate James’ trustworthiness. Your sample is closer to the true distribution of James’ behavior. However you don’t have an explicit model of why your estimate differs from Eric’s, which would allow you to believe in general that friends overestimate the trustworthiness of their friends to others, and thus correct your own such biases.


ryancarey

Really a very clever explanation and it seems correct too.

An additional point: my friend betraying my trust seems awful, intuitively. My friend passing on this information to a friend, and his friend betraying my trust, seems bad, but not as bad. It seems to reflect less badly on my friend, and also, in total, I feel that I have been betrayed less. It would be much more sensible to think that in this case my friend has still betrayed me by being imprudent. But it feels that he’s made an error of judgement, which feels less bad than an error of trust. That’s why my friend seems to get off the hook. And with regard to the total betrayal, it feels like the one who has made an error of trust is my friend’s friend. But his duty of trust seems less because we have less of a social contract. Somehow the social contract seems less binding when it is separated by a second degree. So this contractualist system builds a system of ethics up out of our biases and rationalises them all in this horribly twisted fashion.

dmytryl

Or, if the secret is to be retold by the friend to friend of a friend, that’s potentially a first step of a big cascade of such re-tellings.

It can also help to go a bit specific. It probably depends on the kind of secret it is. If your friend told you they are guilty of a crime, you are unlikely to rat that friend out, because your friend would go to jail, and he’s in a substantially closer circle to you than society at large. Or you may have enough knowledge of what redeems him. But when it comes to a friend of a friend, you can act in favour of the greater social good and tell on that person. You may even see it as positive for your friend if he loses a friend who’s a criminal.

On the other hand if it is corporate sensitive (but ethically neutral) information that is under NDA, such as software source code, I think people pretty much do assume transitivity.

http://www.facebook.com/profile.php?id=723726480 Christopher Chang

Yes, as soon as you allow your friend to tell your secret to their friend, you generally lose control of the exponent on the .99. Since you’ve permitted one instance of trusted communication, it may not seem so unreasonable to the friend or friend-of-friend to engage in a few more without bothering you with additional permission requests.

This does not invalidate Katja’s other observations.

dmytryl

I think it is simpler, though. Friendship is definitely not transitive, as per the crime-involvement secret example.

Plus, as pointed out elsewhere, confidence in a friend’s integrity and confidence in a friend’s evaluation of people are two different things.

I think it’s good to get specific here, as trust seems closer to transitive for sensitive information like source code, while for e.g. being involved in a crime, the most trusted friend of your most trusted friend can very well tell on you even if everyone is acting in their friend’s best interest (but the friend is a little bit bad at foresight).

ByTimeAsunder

Trust is near?

I think you’re mainly correct (at least it sounds plausible to me), but there is perhaps another mechanism. People identify more with their friends than with their friends’ friends. Perhaps many choose whom to befriend and/or trust based at least in part on how accurately they believe they’ve modeled their friends’ thought processes. Hence someone they know is easier to model than someone they don’t. Relying on a friend’s model isn’t the same as relying on one’s own, because one doesn’t have direct control over that model. So perhaps it’s not that they only trust their friend’s friend less, but that they trust their friend less than the person whose modelling they can control, i.e. themselves. Humans often trust that which they can control more than uncontrolled variables, even if those variables are pretty well understood.

And now, the obligatory quote:

“Three can keep a secret, if two of them are dead.” ~ Benjamin Franklin

http://juridicalcoherence.blogspot.com/ srdiamond

Trust is near?

Is it? I was surprised Katja didn’t explore the near-far implications.

I think trust is far; isn’t it? 1) We trust people for the long term. 2) Trust is a moralistic concept. 3) Pro in general is far, whereas con is near. ( http://tinyurl.com/7yqe7zp )

On the other hand, we trust those near us and distrust those who are distant.

How to resolve this conflict of “intuition”? I would argue that who we trust, far or near, is beside the point. The criterion is the context in which we typically apply the concepts of trust and distrust. In trust we look to the future; distrust ends the transaction.

In all, I think the established findings for pro and con generally should carry the greatest weight.

Anyway, this is relevant for Katja’s explanation. If, as I think, trust is far and distrust is near, then our bias should be to trust a friend’s friend excessively (in comparison to the trust accorded to a friend). This would support my position that friends don’t over-endorse their friends; the opposite: they would underestimate the trustworthiness of their own friends with another friend’s secret. (This may seem unintuitive, but remember that this involves abstracting from the incentive-based effects, which predominate.)

ByTimeAsunder

Trust is a moralistic concept.

Is it? Notionally, yes. Practically, I’m less sure. Perhaps a distinction is needed between trusting someone to act in our own best interests, trusting someone’s judgement, and trusting someone to behave in an expected manner. For example, one may trust a bank in a way they would be disinclined to trust a close friend, simply because the bank is bound by laws and the need to stay in business. But one may trust a close friend with a secret because they share the reciprocally reinforced mutuality Katja mentioned. Even though the ability to model expectations of the trusted party plays a major role in both, they are very different.

I’m no longer convinced that trust can be accurately reduced to a single concept. How we trust someone seems as important as if we trust them. Near trust is perhaps dominated by emotion and immediate experience whereas far trust is perhaps dominated by mechanics and social expectations, two different instantiations of the social contract.

As a caveat, I’m no economist and realize I may be using the terms near and far (the technical meanings of which I was unaware of until I began reading this blog) incorrectly. Nevertheless, hopefully I got my meaning across ungarbled.

roystgnr

If I have 99% trust in my friend’s integrity, that doesn’t mean I have 99% trust in their beliefs, including their belief in their friends’ integrity. You’re multiplying the wrong numbers.

gwern0

Yeah, no kidding. If I tell my friend, they might tell an indefinite number of their friends, each of whom might blab with 1%…? Or if I tell more than one friend, too!

Let’s remember the sum rule: the probability of A or B equals p(A) + p(B) – p(A & B); 1% + 1% – (1%*1%) = ~2%. So I could trust each friend to fail at only 1%, and still think there is a much higher risk of the information going public (e.g. 10 friends would be ~10%).
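A minimal sketch of this arithmetic, assuming each friend leaks independently with the same 1% probability:

```python
def leak_probability(p: float, n: int) -> float:
    """P(at least one of n friends leaks), each independently with prob p."""
    return 1 - (1 - p) ** n

p = 0.01
# Sum rule for two friends: p(A) + p(B) - p(A & B)
assert abs(leak_probability(p, 2) - (p + p - p * p)) < 1e-12
print(round(leak_probability(p, 10), 4))  # ~0.0956, close to the ~10% quoted
```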

http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

Gwern: red herring; the OP limits her claim to telling one friend.

Roystgnr: It doesn’t mean we have 99% confidence in our own beliefs either.

Katja: Nice analysis, although I think you’re wrong. The most plausible explanation for why we trust friends more than friends of friends is that the friend has two motives for keeping the confidence: he doesn’t want to violate a trust and he doesn’t want to harm me. The confidence will be a lot less secure even with a friend when the real consequences of disclosure are clearly trivial. The friend of the friend has the trust motive but lacks the welfare motive.

The main direct criticism of substance I’d make is to your last sentence:

However you don’t have an explicit model of why your estimate differs from Eric’s, which would allow you to believe in general that friends overestimate the trustworthiness of their friends to others, and thus correct your own such biases.

I think you need an explanation of why we lack such a model, as it doesn’t seem like a hard insight to obtain. I don’t think others lack this insight – I don’t! Do you?

Katja Grace

If you don’t trust your friend’s suggestion that you should trust his friend, then it seems you think your friend lacks this insight.

bluto

I don’t know if there’s anything to it beyond a catechism, but I recall that in Clancy novels the rule of thumb was that the probability of a secret being exposed is proportional to the square of the number of people who know it.

http://www.facebook.com/profile.php?id=723726480 Christopher Chang

Doesn’t seem horrible for a rule of thumb.

Two explanations for quadratic scaling in professional contexts come to mind:
1. The probability that the nth most trustworthy person in the group spills the secret scales linearly with n.
2. The perceived cost/benefit of keeping the secret changes with n in such a way that, to first order, *everyone* in the group is n times as likely to spill the beans.

Note that if both are true simultaneously, you end up with cubic scaling, so if quadratic scaling is observed in practice, at least one of these factors, I’d guess #1, does not really come into play.
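A sketch of how the two mechanisms compose (the hazard constant c and the linear hazard form are illustrative assumptions, not from the comment):

```python
def expected_leaks(n: int, c: float = 0.001, ranked: bool = True,
                   shared: bool = False) -> float:
    """Expected number of leaks among n secret-keepers.

    ranked: person k's base leak hazard scales linearly with rank k
            (mechanism 1).
    shared: everyone's hazard is additionally multiplied by n
            (mechanism 2).
    """
    total = 0.0
    for k in range(1, n + 1):
        hazard = c * (k if ranked else 1)
        if shared:
            hazard *= n
        total += hazard
    return total

# Mechanism 1 alone: sum of c*k for k=1..n ~ c*n^2/2 -> quadratic in n.
# Both together: ~ c*n^3/2 -> cubic, hence the comment's caveat.
print(round(expected_leaks(10), 6))               # 0.055 (= 0.001 * 55)
print(round(expected_leaks(10, shared=True), 6))  # 0.55
```

Doubling n roughly quadruples the mechanism-1 total, which is the quadratic signature.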

Robin Hanson

Such trust might signal something, but consistently relying too much on people doesn’t seem to make one look good in any way obvious to me. It might signal to that person that you trust them, but that just brings us back to the question of how trusting people excessively might benefit you.

I’d think this would work as a costly signaling equilibrium. You pay a real cost to signal that you expect to gain from your continued relation with them, a cost that those who expect to gain less are not willing to pay. This convinces them that you intend to stay with them, allowing them to rely more on you.

jimrandomh

If I tell one friend, then I have an enforcement mechanism: if the secret gets out I’ll know they did it. If I tell two friends, I might not be sure who did it, but they can’t be sure that I won’t know (because they might not know that the other knows, and I might have told them different subsets of the details). But if I tell a friend, and he tells someone else, then I’ll have a very hard time using social enforcement against the friend-of-a-friend if the secret gets out, both because I can’t be sure who leaked it and because I don’t have a relationship through which to punish them.

http://juridicalcoherence.blogspot.com/ srdiamond

I can’t be sure who leaked it and because I don’t have a relationship through which to punish them.

1. Whether you can be sure who leaked it is essentially beside the point. You would have even greater lack of certainty about the source of leakage when you tell two friends; many secrets are told to more than a single friend, yet permission to tell the friend’s friend is denied.

2. The lack-of-relationship-for-punishment argument misses the point, because the friend will presumably punish trust transgressions by his friend. It becomes a serious factor only because it’s my welfare that’s at stake: thus, I have more reason to punish.

So here the theory is that you trust friends substantially more than friends of friends because friends have the right incentives to cooperate, whereas friends of friends don’t. But if your friends are really cooperative, why would they give you unreliable advice – to trust their own friends?

“The right incentives to cooperate” encompasses my explanation by concern with your welfare (which in turn encompasses dmytryl’s example concerning criminal conduct). So, on my analysis, your explanation should stop before this point. You got it right when you said motive to cooperate was the distinguishing ingredient.

So, you’re right that if your friends advised you to trust their friends even when there’s a lot at stake for your welfare, you would need an additional explanation. Let’s take dmytryl’s example of criminal conduct: I would not expect a good friend to advise me to allow him to confide the information to his friend! So, I don’t think there’s much beyond this point that requires further explanation.

richatd silliker

Sounds more like a post on manipulation than one on the transitivity of trust.
Have I missed something?

Tom

In many situations, an important part of trusting your friend’s friend is the question of whom your friend is trying to help: you, or their other friend?

For example, your friend says that his friend, the contractor, can do a job for you and that he is very good at what he does. Is your friend doing this for your benefit or the contractor friend who needs a job?

Trusting your friend’s friend can ruin even your trust in your friend. If the contractor does a bad job, you lose on that and in addition you will lose at least some of your trust in your friend.

blink

This is an interesting question, but I worry that the model conflates two ideas. If I retell a secret, that is a bad signal about the importance of the secret itself, irrespective of whether trust is transitive. For example, if I tell two friends independently, I may evaluate the possibility of failure as 1-(.99*.99). But if I also *tell them* that I have told my secret to two people, this percentage goes way down.

To test transitivity, then, we need to eliminate the retelling aspect. How about this way: your friend recommends someone you don’t know and says you can trust them (with a secret, to watch your house, to hold your money, whatever…). How much trust do you place in this third person? Personally, I say quite a lot.

Scott Messick

It seems to me that you already answered the last question yourself–friends have the right incentives to cooperate (with you) while friends of friends don’t. But friends will want to be more trustworthy than they would be to all their friends, not just you. So they ask for permission because it benefits them, and self-interestedly advise that it’s a good idea.

OwenCB

“One possible explanation is that we generally expect the people we trust to have much worse judgement about who to trust than about the average thing.”
I think this should be modified to: we trust friends in actions more than in judgements, which doesn’t sound so implausible. Being a friend is mostly about trusting actions (you believe that they will do what they say – in this case, not tell people the secret). It’s not that judgements are less trustworthy, though: there are just two dimensions. There might be other people who you don’t know personally but whose judgements you trust, at least in some domains (a favourite restaurant critic?).

Brian Webb117

Strictly off the top of my head, is it possible there are two types of trust here? Trusting a friend not to repeat a secret goes to their trustworthiness with respect to respecting your wishes. Trusting them to select a third party who respects your wishes or confidentiality goes to trusting their judgment, not their willingness to respect your wishes. Put a slightly different way, one’s trust in a friend is validated by, at base, one’s judgment. The friend’s assessment of his or her friends is a step removed from one’s judgment, and so less valued.

Additionally, your selection of a trustworthy friend and their selection of a third party willing to keep your secrets are very different things. In the first instance, there is a direct relationship. In the second, the relationship, and therefore the responsibility, is derived and diluted. I think most people, when they consider how they would act in a similar circumstance, realize they are more likely to keep a secret of a friend than of a friend’s friend.

Finally, consider the cost-benefit analysis here. When you tell a friend something, the risk of their disclosing the confidence is mitigated by the satisfaction we receive in sharing the secret. There is no related benefit to us when our friend passes the story along.

Drewfus

Define ‘friend’ as someone we voluntarily and in general offer more information to than is strictly necessary to establish a positive reputation. A social reputation is a probability generalization of good behavior in the future, and thus for novel scenarios (with respect to the friends’ shared history).

Define ‘others’ as those who we only and in general transmit or expose enough information to establish a positive reputation for the purpose of exchanging tradable entities.

So a friend represents an over-supply of information, and therefore an under-pricing of information. The under-pricing implies a consumer surplus, and a producer deficit. A friendship exists when a two-way over-supply of information coincides with a two-way net benefit from this over-supply. How this could be is interesting, but simply because this relationship does not exist between the first person and the friend of a friend, the trust cannot be transitive.

Another explanation comes from looking at the situation in terms of information. What does the friend of a friend receive in pure information terms? A copy of a copy. That is the first person’s point of view. From the friend’s PoV it is perceived as only a single copy (that is, they have the original). Therefore the problem is information fidelity, not trust. You can’t assume that the content of the secret is identical in the mind of the friend of a friend to that in the mind of the first person.

The friend experiences the revealing of the secret not as an info transfer, but as reading from a shared memory space – as though it were public content. For the first person, it is perceived as a transfer from private memory space to another private memory space. As friends, this perception exists both ways. This two-way asymmetric perception communication model is the basis of trust, and hence of friendship.

Also, what establishes trust in the first place? If trust is learnt behavior, rather than rational calculation, then transitive trust is impossible. Transitive trust would be getting too close to the definition of reputation I suggested above – a generalized probability of pro-social behavior. Friends of friends cannot be considered quasi-friends, only information consumers (otherwise the outlook is utopian). Learnt behavior implies an energy minimization state. Why, or how, can energy be saved by trusting? In other words, what is the point of having friends?

http://www.facebook.com/daniel.carrier.54 Daniel Carrier

I feel like if I tell a friend a secret, and let him tell a friend so long as his friend promises to keep it secret, it’s highly likely that his friend will see nothing wrong with telling yet another friend so long as he promises to keep it a secret etc. By telling my friend that he can tell others so long as they promise to keep it secret, I’m implying that it’s still secret if you do that, so his friends can do it too.

Also, if I tell n friends, then n people know. If I allow each of my friends to tell n friends, n^2 people know. This is substantially more people, and it’s proportionally less likely to stay a secret.

Arch1

Summing up, there seems to be a plethora of potential causes for subtransitivity of trust:

0) gossip value for originator
1) reduced friendship-signaling benefit for originator
2) Katja’s sampling bias effect
3) exponential explosion of potential betrayers
4) reduced chance of attribution/punishment
5) reduced incentive (and increased cost) of permission-seeking
6) reduced concern for originator’s welfare
7) trusting actions vs trusting judgement

Why doesn’t vouching by a trusted friend eliminate most of these? First, there is point 7. But note that even if we rashly assume that the originator trusts the friend’s *judgement* as much as her own, 0 & 1 still give the originator more incentive to share directly than indirectly. And a bit of reflection suggests that 2–6 all describe factors which friends (however *trustworthy*) will tend to *appreciate* less than originators, simply by virtue of their role.

TruePath

Seems to me this conversation suffers from a lack of a definition of trust. A few things one could mean by trust include:

Seems to me that transitivity decreases as we go down the scale. Trust is fairly transitive for 1 and very non-transitive for 3. I hypothesize this has to do with the difficulty in both checking for compliance (hence incentives for being trustworthy) and the difficulty of coordinating goals over long social networks.

I mean, in 1 your friend can easily arbitrate any defection and punish it accordingly, so the incentives to trust are high and no conflicting trust issues are likely to come up. On the other hand, in 2 and 3 it is much harder to incentivize trust, and the interests of a friend (rather than a friend’s friend) might turn out to conflict with yours.

Realistically, the way we handle conflict of interest in cases 2 and 3 between various people (two people who want the same girl, a sudden need to lend someone else money you have agreed to put in on a condo) is by coordinating with the friends concerned and agreeing on a common solution. If we extended our trust in cases 2 and 3 indefinitely the cost of checking for conflict would be overwhelming.

It seems to me we tell secrets to our friends because the immediate benefit (a feeling of relief at having shared a feeling out loud) feels greater than the potential risk (that the friend will tell others). But there is no benefit to the original secret-teller if their friend shares the secret with another person. So why would I want my friend to tell my secret? It offers me no additional value, and only risk of harm.

http://juridicalcoherence.blogspot.com/ srdiamond

Katja,

If you don’t trust your friend’s suggestion that you should trust his friend, then it seems you think your friend lacks this insight.

Yes, you’re right. (I think I slighted your stipulation that your friend recommends the further disclosure because in my experience that doesn’t much happen. But I trust your account of social practices more than mine, as you seem a more social being.)

I think the solution goes back to construal-level theory (or do I just bring everything back to that?). A modal mismatch ( http://tinyurl.com/6pt9eq5 ) occurs when your friend assesses her friend’s concern for your welfare. Since it’s her (presumably close) friend, she makes a near-mode assessment. But since your friend’s friend isn’t your friend, you make a far-mode assessment of the second-order friend.

The far-mode assessment yields the “insight,” which involves general considerations about loyalties. Your friend is mired in near mode – it’s her close friend – and so fails to discern any specific facts that would suggest the second-order friend would betray.