Encoding reputation

Sam Lawrence on ways of encoding reputation, and why that might be a good thing.

I’ve speculated about this before — in fact, I think it would be a killer attribute for SOA, and is therefore much more broadly interesting than Sam suggests. To wit:

One of the under-represented needs of a service-oriented environment is credibility. In the ideal SOA world, your component goes out, “into the wild”, searching for a service implementation that matches a specific interface and provides certain information. What does your component do if it finds multiple implementations, each of which meets all of your other selection criteria (performance, cost, completeness, whatever)? Under such circumstances, you’d need to make a judgement based on something quite similar (if not identical) to credibility in human relationships: what is the reputation of service X compared to service Y? Who do you believe?

So let’s play the scenario out: how would our theoretical agent/component, in some futuristic SOA environment, deal with such fuzzy choices? I think one valid solution would be to provide a mechanism to “change our minds”. By that, I mean the agent would need to be able to do something along the following lines:

1. Evaluate the offerings from the available services.

2. After filtering on the “objective” criteria (method signature, QoS promises, and so on), if multiple choices remain, apply “subjective” criteria, such as reputation and degree of satisfaction with past performance.

3. If there is still no distinct choice at this point, decide at random, AND (and here is the critical bit) “remember” the alternatives in some persistent way.

4. If, at some later point in time, we become dissatisfied with the answers we receive from the service we selected, we invoke a kind of exception-handling/rollback mechanism and “change our mind”: we switch to the alternative service.
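The four steps above could be sketched roughly as follows. This is a hypothetical illustration in Python, not any real SOA stack: the candidate services, the `objective_ok` predicate, and the `reputation` scoring function are all assumed stand-ins for whatever discovery and metadata machinery the environment actually provides.

```python
import random


class ServiceAgent:
    """Hypothetical agent that selects among interchangeable service
    implementations and can later 'change its mind'."""

    def __init__(self):
        self.selected = None
        self.alternatives = []  # runners-up, remembered persistently

    def choose(self, candidates, objective_ok, reputation):
        # Steps 1-2: filter on objective criteria, then rank the
        # survivors on a subjective score (here, a reputation number).
        viable = [c for c in candidates if objective_ok(c)]
        if not viable:
            raise LookupError("no service meets the objective criteria")
        best = max(reputation(c) for c in viable)
        top = [c for c in viable if reputation(c) == best]
        # Step 3: no distinct winner, so decide at random, but
        # remember the alternatives so we can switch later.
        self.selected = random.choice(top)
        self.alternatives = [c for c in top if c is not self.selected]
        return self.selected

    def change_mind(self):
        # Step 4: dissatisfaction triggers a rollback to one of the
        # remembered alternatives.
        if not self.alternatives:
            raise LookupError("no remembered alternative to switch to")
        self.selected = self.alternatives.pop(0)
        return self.selected
```

In use, the agent would call `choose` at binding time and `change_mind` from whatever exception-handling path detects dissatisfaction with the selected service.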

Note that, to really model halfway-human behaviour here, we’d need some sort of polling mechanism as well, in that last step: a way to “keep an eye on” the alternatives, as one possible motivation for “changing our mind” might be as simple as one of the alternatives suddenly offering a superior solution.
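One polling pass might look like this sketch. The scoring function and the hysteresis margin are assumptions: the margin is there only to suggest that a real agent would want some damping so it doesn’t flap between services on tiny score differences.

```python
def keep_an_eye_on(current, alternatives, score, margin=0.1):
    """One polling pass over remembered alternatives: stick with
    'current' unless some alternative now scores clearly better.
    'score' and 'margin' are assumed, illustrative knobs."""
    best = max(alternatives, key=score, default=None)
    if best is not None and score(best) > score(current) + margin:
        return best  # an alternative now offers a superior solution
    return current   # no compelling reason to change our mind
```

An agent would run a pass like this on some schedule, treating a changed return value as another trigger for the “change our mind” path above.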