Topic: A proposal for a reputational system (Read 1197 times)

full member
Activity: 168
Merit: 100
June 13, 2012, 08:03:47 AM
#13
I see your point much better now; you have given more thought to parts I had underthought. Thank you.

The idea "every one is a judge, some are more reputable than me, some less" is proved to workable by middle ages traders and is knows as Lex mercatoria, recently mentioned by Forbes magazine contributor Jon Matonis. It only just does not go into details. You may want to get in touch with him.
newbie
Activity: 21
Merit: 0
June 13, 2012, 07:49:52 AM
#12
The general challenges in designing WoT systems, as I understand them, are:
1) To make it hard to create clones
2) To make it hard to hurt a reputation without reason
3) To rely as little as possible on anyone but yourself

cyberagorist shows that general-purpose WoTs are doomed and proposes a set (network?) of single-purpose WoTs.
Almost. I think the single question "does he obey his contracts" is the central question of a reputational system. It is itself as general-purpose as the concept of a contract.

Quote
He also proposes a "with arbiter"-style network, which IMO we already have today in real life: the threat from government, judges as arbiters, and blacklisting in the form of a prison record.

But this proposal misses the point IMHO, as a WoT is exactly that - a web. Decentralization and disintermediation from judges, who are basically no longer your peers, so you get a class society with all the consequences.

Not at all. Everybody is free to work as an arbiter.  The only "qualification" of the arbiter is the agreement of the participants to accept this particular arbiter for their particular contract.

But, in fact, there is no way around arbitration. If you want to obtain nontrivial information such as "X has violated a contract", you cannot wait for confirmation from X himself. Nobody will blacklist himself.
So you have to rely on information from others. But then, how do you make it hard to hurt a reputation without reason? Either you have a separate class of people allowed to blacklist somebody, independent of X himself - which means some class structure or centralization - or you make the blacklisting depend on decisions X has accepted himself.

The second option is what my proposal uses. There are no centrally approved trusted arbiters; the arbiter is simply a person trusted by X himself to be fair.

Quote
The proposal to have a way to trust the arbiter is something like overlaying another WoT on top of it.
Not another one, but the same one.

Each arbiter accepts, volitionally, other arbiters to judge his contracts and terms of service. These are not some second class of higher arbiters; again, any person trusted by the arbiter himself can be used.

So there is no hierarchy of arbiters, but a network: a general directed graph, where every node has at least one outgoing edge to an arbiter he trusts. In particular, it will contain loops.

Don't be afraid of loops - the terms of service may (and will) specify that, after some point, the next arbiter can no longer change the original decision of the first arbiter, but can only penalize or destroy the first arbiter's reputation.
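
To make the structure concrete, here is a minimal Python sketch of such an arbiter graph and a depth-limited appeal chain. The names, the graph itself, and the depth limit are illustrative assumptions, not part of the proposal.

Code:
# Hypothetical sketch: each participant lists the arbiters he personally trusts,
# giving a directed graph with at least one outgoing edge per node; loops are fine.
from typing import Dict, List

trusted_arbiters: Dict[str, List[str]] = {
    "alice": ["carol"],           # Alice accepts Carol as her arbiter
    "bob":   ["carol", "dave"],
    "carol": ["dave"],            # arbiters accept arbiters in turn
    "dave":  ["carol"],           # a loop: Carol and Dave judge each other
}

def appeal_chain(party: str, max_hops: int = 3) -> List[str]:
    """Follow first-choice arbiters from a party, stopping at the depth limit.

    Beyond max_hops the original decision can no longer be overturned; a later
    arbiter can only penalize the reputation of an earlier one, as described above.
    """
    chain, current = [], party
    for _ in range(max_hops):
        next_arbiters = trusted_arbiters.get(current)
        if not next_arbiters:
            break                 # a node without an outgoing edge (should not happen)
        current = next_arbiters[0]
        chain.append(current)
    return chain

print(appeal_chain("alice"))      # ['carol', 'dave', 'carol'] -- the loop is harmless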

Quote
I believe WoT must be way simpler and exclusively peer-to-peer.
It is. The arbiter is a peer of the guy who accepts him.

BTW, if you trust the partner of your contract, you can accept him as the arbiter for the contract too, and he can accept you as his arbiter. So a simple peer-to-peer reputational system is possible as a special case.

Quote
To make it hard to hurt a reputation without reason, there clearly must be some cost to doing so. In the case of the bitcoin economy - see the first word ;) - you should pay to change someone's reputation.
So either poor victims are left helpless, or rich people can destroy your reputation on the cheap.
full member
Activity: 168
Merit: 100
June 13, 2012, 05:40:21 AM
#11
The general challenges in designing WoT systems, as I understand them, are:
1) To make it hard to create clones
2) To make it hard to hurt a reputation without reason
3) To rely as little as possible on anyone but yourself

cyberagorist shows that general-purpose WoTs are doomed and proposes a set (network?) of single-purpose WoTs. He also proposes a "with arbiter"-style network, which IMO we already have today in real life: the threat from government, judges as arbiters, and blacklisting in the form of a prison record.

But this proposal misses the point IMHO, as a WoT is exactly that - a web. Decentralization and disintermediation from judges, who are basically no longer your peers, so you get a class society with all the consequences. The proposal to have a way to trust the arbiter is something like overlaying another WoT on top of it.

I believe WoT must be way simpler and exclusively peer-to-peer.

The subset of WoTs related to the economy might look like this:

The first problem, creating clones, could be mitigated by "weighing" each identity by the amount of provable assets (bitcoins, GLBSE stocks, etc.). After some thinking I concluded that, strangely, you would not want to count an identity's debts. If you did, the important thing would be to make sure it is impossible to transfer debt from identity to identity. But that is impossible, because in the desired environment it should be possible to transfer assets from identity to identity, leaving any debts behind.

Another way to weigh an identity is by its age and its age-assets history. New and suddenly rich identities signal a clone, as do old identities that suddenly become rich. In fact, a trustworthy identity's age-assets history can be expected to follow some sort of curve, and the identity can be weighed by its closeness to that curve.

This approach makes "growing" clones by imitating successful financial activity between them slow and hopefully pointless.
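
As a rough illustration, here is a toy Python sketch of such weighting. The expected growth curve, the closeness measure, and all numbers are invented for the example; a real system would have to calibrate them.

Code:
# Illustrative sketch: weigh an identity by provable assets, age, and how closely
# its age-assets history follows an assumed growth curve (all numbers invented).
from typing import List, Tuple

def expected_assets(age_days: int) -> float:
    # Assumed toy model: slow, roughly linear accumulation over time.
    return 0.5 * age_days

def identity_weight(history: List[Tuple[int, float]]) -> float:
    """history: (age_in_days, provable_assets_in_BTC) samples, oldest first."""
    if not history:
        return 0.0
    age, assets = history[-1]
    # Penalize deviation from the expected curve: new-but-rich or
    # suddenly-rich identities score poorly.
    deviation = sum(abs(a - expected_assets(d)) for d, a in history) / len(history)
    return (assets * age) / (1.0 + deviation)

steady = [(30, 10.0), (180, 95.0), (365, 180.0)]   # grows roughly along the curve
clone  = [(5, 0.0), (10, 0.0), (15, 500.0)]        # young and suddenly rich
print(identity_weight(steady) > identity_weight(clone))   # True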

To make it hard to hurt a reputation without reason, there clearly must be some cost to doing so. In the case of the bitcoin economy - see the first word ;) - you should pay to change someone's reputation.

As for self-reliance, I believe this is already solved by existing technologies like Free Software, Open Hardware, and DHT communication and storage networks.
newbie
Activity: 21
Merit: 0
June 13, 2012, 02:38:54 AM
#10
Still, it's hard to turn something as simple to state as "honors their contracts" into something like a number. If I see a 6/7, what am I to think? Do I really think that an 11/11 is the same as, or even likely to be similar to, another 11/11 I did business with?

It is a simple yes-no question, and I do not see much justification for intermediate numbers.

So my proposal here is a global black list of persons who have cheated. A record on this list is valid only if it contains:

1.) the signature of the blacklisted person accepting a given arbiter,
2.) the signature of the arbiter stating that the person has not fulfilled a contract and has not accepted the arbiter's decision about it,
3.) additional information about the person as well as the contract.

So, to appear on the black list you have to be a total scumbag.
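
A minimal sketch of such a record as a data structure; the SHA-256 "signatures" are only stand-ins for a real signature scheme, and all keys and contract details are invented.

Code:
# Sketch of a black-list record with the three parts listed above.  The toy_sign
# function is a stand-in for a real digital signature scheme.
import hashlib
from dataclasses import dataclass

def toy_sign(secret: bytes, message: str) -> bytes:
    return hashlib.sha256(secret + message.encode()).digest()

@dataclass
class BlacklistEntry:
    acceptance_sig: bytes   # 1) blacklisted person's signature accepting the arbiter
    arbiter_sig: bytes      # 2) arbiter: contract violated, arbiter's decision refused
    details: str            # 3) information about the person and the contract

def entry_is_valid(e: BlacklistEntry, person_secret: bytes, arbiter_secret: bytes) -> bool:
    # Both signatures are required; a record missing either one carries no weight.
    return (e.acceptance_sig == toy_sign(person_secret, e.details)
            and e.arbiter_sig == toy_sign(arbiter_secret, e.details))

details = "contract #42: X accepted arbiter Y, violated the contract, refused Y's decision"
entry = BlacklistEntry(toy_sign(b"X-key", details), toy_sign(b"Y-key", details), details)
print(entry_is_valid(entry, b"X-key", b"Y-key"))   # True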

Quote
It makes me think of the doctor who has a great record because he never takes risky cases.
The point is that you can take risky cases, and, if you fail, the arbiter makes a decision about compensation. Then all you have to do is accept it and do what the arbiter has said. The penalties themselves are established in the contract.

The idea behind this is to exclude only those who refuse any cooperation.  Everything else is (and can be) left to freedom of contract and the volitional choice of arbiters.

This black list is negative reputation. There is, clearly, also a need for something different: positive reputation. For this purpose, I would propose a system of guarantees built on top of the black list:

So, you offer compensation to the victim of A if A appears on the black list for violating a contract with that victim.

This information suggests a simple metric: you can easily add up the compensation you would obtain if you were cheated.
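
A toy illustration of that metric, with invented guarantors and amounts: simply sum the guarantees given on behalf of your counterparty.

Code:
# Toy metric: the compensation available if counterparty A cheats is the sum of
# the guarantees given on his behalf (guarantor names and amounts are invented).
guarantees_for_A = {
    "guarantor1": 2.0,   # BTC promised to A's victim if A lands on the black list
    "guarantor2": 5.0,
    "guarantor3": 1.5,
}

max_compensation = sum(guarantees_for_A.values())
print(max_compensation)   # 8.5 -- a rough upper limit on how much to risk with A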

Not that easy, to be honest, because one has to take into account the possibility of clones of A giving guarantees. One way to solve this problem would be the possibility of creating identities connected with real-life data, like passport data, biometric data and so on. These need not be public; it is sufficient to have them available to the arbiter, who makes them public in the case of a black-list entry. Before that, he only confirms that he holds a given set of personal data and that there is not yet a record with that person's personal data on the black list.

Then the simple case is that you have an expectation of the worth of an open personal account (even a scumbag loses a lot if he appears on the black list with his personal data), so you can compute: either I receive that much money as compensation, or an even greater value of reputation will be destroyed. The more complicated case is when the guarantee is given by a pseudonym - but one that is itself supported by other personal accounts.

Another problem is how to evaluate hidden personal accounts, where the personal information is known only to the arbiter. Here, there may be many pseudonyms of the same person giving guarantees. This problem can be solved pseudonymously by a trusted notary who assigns ordering information to pseudonyms: an open personal account can send this notary a signed, timestamped confirmation that he owns a pseudonym. The notary then returns a confirmation that this is the n-th pseudonym of a real person. So, if pseudonyms supported by such numbers give guarantees, one can obtain a lower bound on the number of real persons involved.
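
A small sketch of that bound: since one real person can hold at most one pseudonym per notary-assigned number n, the most frequent number among the guaranteeing pseudonyms already gives a lower bound on the number of distinct persons. The data here is invented.

Code:
# Lower bound on the number of real persons behind a set of numbered pseudonyms.
# Each person can own at most one "n-th pseudonym" for each n, so the most
# frequent number gives the minimal count of distinct persons involved.
from collections import Counter
from typing import List

def min_real_persons(pseudonym_numbers: List[int]) -> int:
    counts = Counter(pseudonym_numbers)
    return max(counts.values()) if counts else 0

# Guarantees came from pseudonyms the notary numbered 1, 1, 2, 3 and 1:
print(min_real_persons([1, 1, 2, 3, 1]))   # 3 -- at least three real persons involved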

The remaining problem is trust in arbiters. It has to be solved by iteration (arbiters sign open rules of behaviour and accept second-order arbiters) and by personal trust in at least one arbiter as a starting point.

With this system, you can prove something: whatever happens, either you receive compensation, or at least one real person ends up on the black list with his personal data, or a person on your personal list of first-order trusted persons did not deserve that trust. A scumbag on the black list improves the reliability of the network as a whole. So either everything is fine, or the reliability of the trust information of the whole network increases, or at least your personal trust choices have been improved. That means you and the whole network learn from bad experiences.

Note that failure to pay electronically is an objective question - the accused can prove that he has paid - so justice is easy for compensation payments, and the failure of arbiters to punish non-payment with black-list entries is easily detectable too. The notary who counts pseudonyms is also simple to implement as an automatic service, and cheating can easily be established after the fact. So there is no problem of complex evaluations.
You have given a guarantee to A, and A is on the black list: pay, or else you, or your arbiter, or his arbiter, and so on, appears on the black list.
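
A toy sketch of that cascade, with invented names: if the guarantor does not pay, the case moves up the chain of arbiters, and whoever refuses to act is the one who ends up on the black list.

Code:
# Toy enforcement cascade: if a guarantor of a blacklisted party refuses to pay,
# his arbiter should blacklist him; an arbiter who refuses to act is in turn
# blacklisted by his own arbiter, and so on (all names and data are invented).
def enforce_guarantee(guarantor, guarantor_paid, arbiter_of, arbiter_acts, blacklist):
    if guarantor_paid:
        return                                  # compensation paid; nothing to do
    accused, judge = guarantor, arbiter_of.get(guarantor)
    while judge is not None:
        if arbiter_acts.get(judge, False):
            blacklist.add(accused)              # this arbiter does his job; case closed
            return
        accused, judge = judge, arbiter_of.get(judge)   # refusal moves the case up

blacklist = set()
enforce_guarantee("guarantor", False,
                  arbiter_of={"guarantor": "arb1", "arb1": "arb2"},
                  arbiter_acts={"arb1": False, "arb2": True},
                  blacklist=blacklist)
print(blacklist)   # {'arb1'} -- the arbiter who refused to act gets listed himself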

How do guarantees come about? If you have had a successful cooperation in which you made some profit, it is reasonable to exchange guarantees whose value is some part of that profit. So even if you later have to pay on the guarantee, you have not made a loss - only a smaller profit.

Quote
I don't mean to imply that work should not be done on this, just that a complete solution (even for something as simple as "does this person honor contracts") is probably going to evade us forever, and tools for managing the information we can get about potential trading partners are a good place to start.
I'm not that pessimistic. Of course, there remain weak points, but the whole idea of basing trust on past behaviour has similar weak points.  

What is important for a reputational system is that there are no scam patterns that can be easily repeated, so that professional cheating becomes impossible.


sr. member
Activity: 312
Merit: 265
June 13, 2012, 01:51:25 AM
#9
Two buddies and I are working on a system like the one you suggested. We're just starting, and hopefully we will have something to show soon.
legendary
Activity: 1246
Merit: 1016
Strength in numbers
June 12, 2012, 06:52:06 PM
#8
Some other really important factors that get lost in a scalar are "how long ago did this person prove trustworthy" and "how likely are they to have let their reputation credentials get hijacked".
legendary
Activity: 1246
Merit: 1016
Strength in numbers
June 12, 2012, 06:50:30 PM
#7
That makes sense.

Still, it's hard to turn something as simple to state as "honors their contracts" into something like a number. If I see a 6/7, what am I to think? Do I really think that an 11/11 is the same as, or even likely to be similar to, another 11/11 I did business with?

Some contracts require nothing more than not being a total scumbag, for example, "I have $100 on Dwolla and will send it after you send me 19 bitcoins". Unless I'm a total liar I'm going to uphold it, and a perfect rating for that means a lot less than something like "I can deliver XYZ software with A, B, C, D, and E features by time T". That requires properly estimating the task, one's skill, and avoiding bad luck like illness, etc. If it is completed it ought to be a large boon to reputation, and if it only partially fails it doesn't mean the person is untrustworthy with smaller or even similar jobs.

It makes me think of the doctor who has a great record because he never takes risky cases.

I don't mean to imply that work should not be done on this, just that a complete solution (even for something as simple as "does this person honor contracts") is probably going to evade us forever, and tools for managing the information we can get about potential trading partners are a good place to start.
newbie
Activity: 21
Merit: 0
June 12, 2012, 05:45:26 PM
#6
What you describe (religious people giving bad ratings) is a fundamental problem with one-dimensional systems. They just won't work as general-purpose reputations, which are certainly multi-dimensional.

An eBay seller rating can be one number because it pretty much just means "does this person send what they posted".

But a general reputation covers so many things. The solution is probably a bunch of different ratings, maybe connected or maybe not.

The ultimate answer probably looks a lot like no answer at all: people just making connections and vouching for each other to make new connections. Tools for finding paths of trust of arbitrary types would be nice. I have a great reputation without being involved in any official reputation systems at all.

I'm thinking about a one-dimensional system based on "does this person obey his contracts". I think this question is central to reputation, and everything else can be built on it using freedom of contract.

I acknowledge that there are a lot of different things reputational systems can be used to evaluate. But they are not worth much without the central one. You may be highly competent in every respect - if you are a cheater and a liar, it is dangerous to cooperate with you. The stupid but honest guy is then preferable.

OK, there may be different aspects of honesty - imagine some Muslims who consider everybody except Muslims fair game for cheating. Other Muslims may then be interested in another type of reputation, "Muslim honesty": "does this person obey his contracts with Muslims". But this "Muslim honesty" can be organized within a system that focuses on a single general honesty - the Muslim creates two identities, an honest one that makes contracts only with Muslims, and another one for contracts with other people.

So I would say reputation for obeying contracts is special to human cooperation and deserves special treatment. A reputational system should therefore be centered around this special question. If there is a reputational system that is optimal for this question, it is preferable.

Then, of course, one can think about extending this system of reputation to other questions. If this makes sense, fine; if not, one has to design other reputational systems for those other questions.

In other words, the reputational system has to be as good as possible for this single question, everything else being secondary.

Why is this question so special? The point is that we can consider society as a large network of contracts. Such a network of contracts can be modelled on the net using electronic (Ricardian) contracts. Given these contracts, programs may evaluate them and extract a lot of useful information - but all this information is useful only if we can somehow rely on the reputation of the participants. Contracts and promises made by cheaters are worthless, and anything based on them is worthless too. So whatever we want to do online with contracts depends on this particular type of reputation.

legendary
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
June 12, 2012, 03:48:14 PM
#5
I see the OP's point, though, even if it took an extreme analogy to point the way.

For instance, what if a lot of people just wanted to attack some merchant for whatever reason (it could be a combination of people with different beliefs who all share the goal of hurting the reputation)? Or what if, for instance, an evil entity with lots of money paid off a lot of people to destroy the open reputation system?

Well, here's my solution...

What if the open reputation system rated entities based on the voters themselves?

For example:
a person with 5 votes (all 5 out of 5 stars) from unrated voters would have a rating of, say, 100 reputation points,

and let's say we go back in time and that same person gets 5 votes (all 5 out of 5 stars), but this time all the voters who voted for this person have great star ratings themselves, so now the system gives this person 1000 reputation points because the votes come from reputable voters.
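
A toy version of this scoring in Python; the weights are made up and chosen only so that the two cases above come out to 100 and 1000 points.

Code:
# Toy version of the idea above: each vote is weighted by the voter's own
# reputation, so votes from well-rated voters count for much more.
def reputation_points(votes, base_weight=4, rep_weight=1):
    # votes: list of (stars, voter_reputation); unrated voters count with base_weight only.
    return sum(stars * (base_weight + rep_weight * voter_rep) for stars, voter_rep in votes)

unrated   = [(5, 0)] * 5    # five 5-star votes from unrated voters
reputable = [(5, 36)] * 5   # the same five votes, but from well-rated voters

print(reputation_points(unrated))    # 100, as in the first example above
print(reputation_points(reputable))  # 1000 -- reputable voters carry ten times the weight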


The problem I see with my solution is that anyone could cross-sell to themselves with their own accounts.

So maybe some type of identity is required for this reputation system to work; I'm just not sure how you get someone's identity without turning everyone off by making them give up their personal information.
legendary
Activity: 1246
Merit: 1016
Strength in numbers
June 12, 2012, 03:35:56 PM
#4
What you describe (religious people giving bad ratings) is a fundamental problem with one-dimensional systems. They just won't work as general-purpose reputations, which are certainly multi-dimensional.

An eBay seller rating can be one number because it pretty much just means "does this person send what they posted".

But a general reputation covers so many things. The solution is probably a bunch of different ratings, maybe connected or maybe not.

The ultimate answer probably looks a lot like no answer at all: people just making connections and vouching for each other to make new connections. Tools for finding paths of trust of arbitrary types would be nice. I have a great reputation without being involved in any official reputation systems at all.
newbie
Activity: 21
Merit: 0
June 12, 2012, 09:41:19 AM
#3
You think it works? Which system do you have in mind? Yesterday I was pointed to
http://privwiki.dreamhosters.com/wiki/Distributed_Web_of_Trust_Proposal_2#Trust_Metric - the one from the topic of #bitcoin-wot. I doubt it works, but it seems I do not yet understand some key issues of this proposal.

In particular, I don't understand how one reaches the conclusion "The design is resistant to a number of possible attacks on the trust network, such as whitewashing and slander via bogus identities."

For example, a simple design for self-promotion: I use 20 identities, 19 of them completely honest and communicating with others. They all rate number 20 very highly in all regards. Number 20 seldom does anything, but if there is a large contract, it cheats.

For every act of cheating, the cheater receives a negative rating from the victim. But the other 19 nodes can give similar negative ratings to the victim. So it is the victim who looks like a cheater (by the majority of ratings), and his negative rating of the real cheater looks like empty slander by a cheater.

But, anyway, the successful cheater identity may simply be killed off and a new one created.

One might think that, as a consequence of this, the propagation rating of the 19 guards will decrease. No problem: one creates second-order guards. These recommend only the first-order guards, who never cheat themselves, and so the second-order guards' external propagation rating remains high. They give high propagation ratings to the first-order guards. But nobody else has any actual connections with them, so there will be no other ratings of them. And only they highly rate the real cheater.
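
A toy tally of the first attack (all identities and ratings invented), showing why, by raw majority of ratings, the victim ends up looking worse than the actual cheater:

Code:
# Toy tally of the self-promotion attack described above (all data invented).
from collections import defaultdict

ratings = defaultdict(list)                 # target -> list of (rater, value)

# The 19 honest-looking guard identities all rate identity #20 highly:
for i in range(1, 20):
    ratings["id20"].append((f"id{i}", +1))

# id20 cheats the victim; the victim rates id20 negatively...
ratings["id20"].append(("victim", -1))
# ...and the 19 guards retaliate with negative ratings for the victim:
for i in range(1, 20):
    ratings["victim"].append((f"id{i}", -1))

for target, target_ratings in ratings.items():
    print(target, sum(value for _, value in target_ratings))   # id20: 18, victim: -19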

So I think the claim is not really justified; at the very least, I would like to see some more justification.

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
June 11, 2012, 09:42:11 AM
#2
It's nice to see newbie members post their ideas :D
But I think you're being overly critical. The bottom line is that the existing reputation system works, and what you're proposing, however interesting, is not likely to be adopted very easily (it sounds like a lot of work to fix something that isn't broken).
But idk, maybe you're on to something here...


newbie
Activity: 21
Merit: 0
June 11, 2012, 09:06:10 AM
#1
I think the central missing part of a modern agora (a market not controlled by government) is a reliable system of reputation.

There are proposals for reputation systems based on ratings, often from arbitrary persons. But I'm afraid such proposals are open to abuse by ideological enemies. Imagine, for example, that some religion does not like something (be it marijuana, tobacco, porn, gays, or other religions). The fanatics may then give extremely negative ratings to everybody who does not conform to their religious values. This is not information that is of interest to those of different religions.

I see no way out of it, so I think such reputational systems are doomed.

What I would like to propose is a system that allows a reasonable notion of reputation to be established for minorities as well. Then a bad reputation cannot be based on the fact that other people (like religious fanatics) do not like you; it has to be based on something different.

The key to a reliable reputational system is information about whether you obey your contracts.

And the main point is that other parties (like ideological enemies) cannot destroy your reputation without reason. The solution is that only an arbiter you have accepted yourself can make the decision to destroy your reputation, and only if you not only violate a contract but also refuse to accept the arbiter's decision.


For more about this reputational system see http://ilja-schmelzer.de/network.