Topic: Here is the solution for TRUST or WOT (Read 2143 times)

newbie
Activity: 23
Merit: 0
July 16, 2011, 04:18:07 AM
#33
Xephan:
To imply that I want to be the "controller/arbiter of trust" because I want complete control over my system is totally false.
For one, how could I be the controller/arbiter of trust if I'm not the one who does all the ratings?

At this point I've gotten all the input I need.
Thanks to everyone who responded to this thread.
newbie
Activity: 42
Merit: 0
July 16, 2011, 03:46:46 AM
#32
The way I see it, the fact that you cannot reveal your algorithm because you want to patent it first implies that you want full control over the system. Combined with the centrally controlled design you described, it implies you are asking us to help you create a trust system in which you are the ultimate arbiter/controller of "trust".

If I wanted that kind of "trust" system, subject to the whims of a patent-holding entity, I wouldn't be interested in Bitcoin.

newbie
Activity: 23
Merit: 0
July 16, 2011, 03:30:59 AM
#31
Sgt:
I see what you are getting at, and you have a valid concern, but you must also think about the difficulty involved in engineering such an attack and everything that it entails.
(They can't easily identify people to socially engineer or hack in order to find out whether they have a high rating or not.)
I would say, SURE, your attack idea could be done IF thousands of IDs were specially engineered, BUT getting even 1 ID specially engineered would be difficult. Add to that, they would need not only to maintain the engineered ID and bring it to maturity, but to do the same for every other new ID. The only plausible way I can see is if their whole engineered group of IDs was connected through 1 or 2 roots. The problem there (for them) is that their whole group would take wild swings (downward) whenever the root(s) got adjusted by their non-engineered parents.
If you were sitting here at my computer I could show you examples and you would fully understand at that point. I have 60,000 data records in the original project, which is 20 years old, and 29 million records in the latest one, about 1TB of data.
You bring up good points, and that is what I'm looking for; I'll be testing them if and when I work through the brick wall of formula problems I have right now.
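
Since I can't sit everyone down at my computer, here's a toy picture of the kind of downward swing I mean. This is NOT my algo; it's just a made-up damping model where a rating can never lift you above the standing of whoever gave it:
Code:
# Toy model only (not the actual algo): each user's score is the average of
# (rater's score x rating value). A sybil ring whose only link to the real
# web is one "root" ID inherits whatever happens to that root.

def compute_scores(edges, fixed, iterations=50):
    # edges: target -> list of (rater, value) pairs, value in [0, 1]
    # fixed: users whose scores are taken as given (the wider, honest web)
    scores = {user: 0.5 for user in edges}
    scores.update(fixed)
    for _ in range(iterations):
        for target, incoming in edges.items():
            if target in fixed:
                continue
            scores[target] = sum(scores.get(r, 0.5) * v for r, v in incoming) / len(incoming)
    return scores

edges = {
    "root":   [("parent_a", 0.9), ("parent_b", 0.9)],  # root's only link to the real web
    "sybil1": [("root", 1.0), ("sybil2", 1.0)],        # engineered IDs rating each other
    "sybil2": [("root", 1.0), ("sybil1", 1.0)],
}
honest_web = {"parent_a": 0.9, "parent_b": 0.9}

before = compute_scores(edges, honest_web)
edges["root"] = [("parent_a", 0.2), ("parent_b", 0.2)]  # parents adjust the root down
after = compute_scores(edges, honest_web)

print(round(before["sybil1"], 2), round(after["sybil1"], 2))  # roughly 0.81 -> 0.18
The whole engineered group rides on the root, so when the root gets re-rated by its real parents, the group goes down with it.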


thechev:
Yes, it sounds like that. I'm assuming you are talking about the example I gave; the algo doesn't work like that.
I failed to mention that in that example I was describing a "like" tendency that people would attribute to a user, instead of "trust", which is what they would be supposed to rate.
I guess it was a bad example, but you can still see that adding ratings up is not a good idea unless one wants to see a sort of "cumulative" trust over time.
What I am proposing is more of a rating that indicates the most to least trustworthy across a total population.
The question then remains: would you rather see cumulative trust over time, or how someone fits into the total population on the attribute of trust?
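
To put some toy numbers on that difference (this is only an illustration, not my actual math):
Code:
# Illustration only: a cumulative (summed) trust score versus a
# population-relative (percentile) score. All numbers are made up.

def cumulative_score(ratings):
    # Just add up every rating received: more raters means a bigger number.
    return sum(ratings)

def percentile_score(ratings, population_averages):
    # Rank the user's average rating against everyone else's averages, so the
    # result reads "more trustworthy than X% of the population".
    avg = sum(ratings) / len(ratings)
    below = sum(1 for other in population_averages if other < avg)
    return 100.0 * below / len(population_averages)

celebrity    = [5] * 1_000_000                    # a million shallow "5" ratings
quiet_trader = [5, 5, 4, 5, 5, 5, 4, 5, 5, 5]     # only 10 ratings, all earned

population_averages = [3.1, 2.5, 4.0, 3.8, 2.9, 4.6, 3.3]  # everyone else's averages

print(cumulative_score(celebrity))                          # 5000000 -- dwarfs everyone
print(cumulative_score(quiet_trader))                       # 48 -- invisible by comparison
print(percentile_score(quiet_trader, population_averages))  # 100.0 -- top of this tiny pool
The cumulative number rewards sheer volume of raters; the population-relative number tells you where someone sits against everyone else.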
newbie
Activity: 40
Merit: 0
July 12, 2011, 05:16:15 PM
#30
Quote
My whole post was about a trust system in general and not related to Bitcoin.
As for celebrities, think about it. There would be numerous celebs being rated highly by millions, and their ratings would be added and added and added. That's not good. So a music star gets a rating of 5 million, and that puts them in the top 0.1% of people in the world for trustworthiness.
A cumulative rating system would be fundamentally flawed because of what I mentioned above, plus other highly trustworthy people who have only 10 ratings would never even show up as being trustworthy.

What you're describing sounds like a popularity contest or Facebook's 'like' button or something.
legendary
Activity: 1400
Merit: 1005
July 12, 2011, 10:11:46 AM
#29
Why wouldn't Anon be able to get a high rating?  No one knows who is involved in Anon (kind of the point), so there's no reason that Anonymous members couldn't get just as high a rating as you or me.  And then, when they go on the attack, they just gather everyone together who has a high rating and feedback-bomb the target.

When I say a bunch, I'm thinking hundreds or thousands.  That should be plenty to completely ruin someone's reputation, right?  I mean, if I have a thousand negative ratings from highly rated members, no one is going to trade with me.  And it'll drive my ranking based on whatever scoring method you use into the dirt as well.
newbie
Activity: 23
Merit: 0
July 12, 2011, 10:03:50 AM
#28
Sgt:
I was speaking in terms of Anon attempting to get a high rating, on the assumption that their point would be to maliciously rate others higher or lower.
When you say "a bunch of members ganging up on...", are you thinking a bunch as in millions?
That is what it would take, and those millions would need sufficient ratings to cause any good or bad effect.
I understand you think it could be abused somehow, but so far no abuse scenario you've mentioned would work.
I personally know it wouldn't, because in the other 2 applications of the algo (totally different applications, admittedly) I've seen what could roughly be translated into abuse scenarios, and they were handled without issue.
If I took the OTC WoT database and ran it against the algo I could illustrate this, and we would see good ole Mr. Stamp get stamped.
I had a look at the list yesterday and it looks like there are fewer than 1,000 users.
That's pretty small and not quite enough for this to be fully functional, but the algo would at the very least have put F00dSt4mp much lower than he was.
As I said earlier, a large "web" wouldn't have any such problems.

I've delved more into the formulas to port over to the trust application and I'm hitting a brick wall on 3 of the most important parts, so the whole thing might be a non-starter unless I'm able to get my head out of my ass on those. I had the same issue on the 2nd port of it, and it took me 3 months to figure it out.
This one, however, adds a couple of funny twists that really aren't too funny: the basis function for one of them, and I won't get into the other.


thechev:

My whole post was about a trust system in general and not related to Bitcoin.
As for celebrities, think about it. There would be numerous celebs being rated highly by millions, and their ratings would be added and added and added. That's not good. So a music star gets a rating of 5 million, and that puts them in the top 0.1% of people in the world for trustworthiness.
A cumulative rating system would be fundamentally flawed because of what I mentioned above, plus other highly trustworthy people who have only 10 ratings would never even show up as being trustworthy.

legendary
Activity: 1400
Merit: 1005
July 12, 2011, 02:11:29 AM
#27
Quote
Sgt:
I'm typing as fast as I can and I get new ones.
Yes, Anon could do that, but you must remember that if Joe Schmo is not really a bad person then he will still be rated highly by others. Also, Anon would probably not have a high rating; how could they? So their attacks would be coming from low to mid rated users as a whole. Anon could not be expected to maintain any sort of high ratings; who would rate them highly? They could rate themselves, but that doesn't mean anything, because they are only a small, small part of the web and the web has all the mathematical power.
Joe couldn't get a -1000 rating unless the only people who ever rated him were Anon, and even then Joe would still be competing with the rest of the bad guys for the worst rating, so chances are Joe ain't gonna be the worst guy in the world even though Anon rated him down. Anon can't shift the math in the web unless they get control over large parts of the web. Good luck with that. Smiley
Why would Anon not have a high rating?  After all, they are just "normal" members of society when they aren't browsing 4chan.  They are smart.  They're not going to do anything to hurt their rating - or at least, they won't do anything that would hurt their rating while also being anonymous.  I don't think it's fair to assume that Anonymous members wouldn't have a high rating.  And when a bunch of highly rated (or at least above-average) members gang up on Joe Shmoe to rate him poorly, there's nothing that could be done about it.  If you call shenanigans on it, Anon would simply be smarter about how they abuse the system in the future, making it more and more undetectable.

I understand your whole scheme of a weighted feedback system now, but it can be (and would be) abused, just like any other feedback system out there.
newbie
Activity: 40
Merit: 0
July 11, 2011, 06:42:39 PM
#26
Quote
You are also blurring the concept or paradigm of WoT with the reference to PGP keys as a Certificate Auth vs a Trust Authority.

I guess I was confused by your use of "trust" and "trust authority". When you mention WoT I immediately think of PKI, PGP and cryptography, because that's where the term originates.

Quote
A Cert Auth simply verifies the auth of the ID and has nothing to do with any history of ratings associated with that ID.

Right. Which is exactly what a web of trust was traditionally used for as well. A WoT was originally a decentralized alternative to having a centralized certificate authority. Now, making assertions about the trustworthiness of an entity can be thought of as a sort of extension of that idea, and you could use either type of PKI system for that. Really, the term "web of trust" is somewhat nebulous because people use it loosely to describe several different things.

Quote
Also the Cert Auth is "centralized" but my DB should not be?

I'm not making any judgment one way or the other. I was just wondering why you thought your system was better than existing PKI systems for establishing authenticity (or trustworthiness).

Quote
The existing WoT system(s) [I know of only 1] have proven to be at least somewhat effective, but it is and has been open to hacking, as illustrated in the link in my first post.
The Bitcoin OTC WoT is really cool, but if you examine it you will see that it is hackable and that ratings can accumulate, which basically throws off the usability of the rating.

That whole thread (the one you linked to) seems cryptic. I guess someone with the handle FooDSt4mP stole some BTC from someone? And you're saying someone (FooDSt4mP?) actually hacked into the gribble bot database and changed his own rating?

Ratings accumulating sounds like a good thing. For what it's worth, it looks like FooDSt4mP has a pretty terrible rating on #bitcoin-otc.

Quote
If, for example, celebrities were in a WoT with the current rating system (not a Bitcoin-related WoT; I'm talking about a sort of universal WoT) then you would see ratings of a million.
Obviously, someone with a rating of a million would be expected to be ridiculously trustworthy, which clearly wouldn't be accurate.
This wouldn't happen in my system.

Okay, wait, you lost me here -- why would a celebrity get a rating of a million?

Quote
Mad-sciency?
Not sure if that was a compliment or not.

No offense intended.  It's just that you're claiming to have a solution for trust issues involving Bitcoin, but you're not really sharing it so others can vet it.
newbie
Activity: 23
Merit: 0
July 10, 2011, 05:02:38 AM
#25
It seems this whole "centralized" vs "decentralized" thing is being misunderstood and somehow tied to the p2p nature of Bitcoin.
My system needs a potentially massive database against which a large number of queries must be run, so that calculations can take place in as short a time as possible.
"Decentralizing" the database, as in any sort of p2p, would make that utterly and totally impossible.
You are also blurring the concept or paradigm of WoT with the reference to PGP keys as a Certificate Auth vs a Trust Authority.
A Cert Auth simply verifies the auth of the ID and has nothing to do with any history of ratings associated with that ID.
Also the Cert Auth is "centralized" but my DB should not be?
A PKI Cert Auth for an ID is a totally good idea, and it's at the top of my list of ideas for how to authenticate IDs and prevent ID hijacking.
There are more details that will need to be worked out, with the help of a Steering Committee which has yet to be assembled.

The existing WoT system(s) [I know of only 1] have proven to be at least somewhat effective, but it is and has been open to hacking, as illustrated in the link in my first post.
The Bitcoin OTC WoT is really cool, but if you examine it you will see that it is hackable and that ratings can accumulate, which basically throws off the usability of the rating.
If, for example, celebrities were in a WoT with the current rating system (not a Bitcoin-related WoT; I'm talking about a sort of universal WoT) then you would see ratings of a million.
Obviously, someone with a rating of a million would be expected to be ridiculously trustworthy, which clearly wouldn't be accurate.
This wouldn't happen in my system.

Mad-sciency?
Not sure if that was a compliment or not.
newbie
Activity: 40
Merit: 0
July 08, 2011, 07:40:13 PM
#24
Can you explain what you think is wrong with decentralized WoT systems like PGP's? I think existing WoT systems have proven extremely effective over time. Can you explain why your theoretical centralized system would be any better than existing centralized PKI-based certificate authority schemes?

What you have described so far sounds pretty mad-sciency.

newbie
Activity: 23
Merit: 0
July 08, 2011, 07:35:47 PM
#23
trent:
No, you are totally correct.
The physical system would have to be very robust and would need to be engineered from the beginning to work as you described, in addition to being scalable.
Clusters come to mind, as do RAID 60 and a multi-homed backbone.
As for business/entity, true as well.
sr. member
Activity: 406
Merit: 251
July 08, 2011, 07:20:49 PM
#22
Quote
There are thousands of examples of central DB systems running critical services, and they have architectures for security, redundancy, backup, etc.
Your assertion shoots all of them down, including Google, Amazon (not their cloud BS), and the IRS.

I would be highly surprised if Google/Amazon and the IRS have centralized DB systems. Possibly we have different ideas of what a centralized DB is. My idea of a central DB is one which is the root store for the infrastructure: if that root store goes away, the infrastructure suffers, whether partially or fatally. I can't see Google operating such a system.

If you are referring to a centralized DB from a political/social/business aspect, then yes, of course Google/Amazon and the IRS all have centralized DB systems. If Google goes away, so does their DB.

Whatever and however your system intends to operate, I would encourage you to think about ways to eliminate single points of failure, both at a technical level and at a business/entity level.

Additionally, if you intend to combine the two (technical and entity centralization), then the trust system is already compromised, since trust in the entity would be a minimum requirement for using the system.

I could definitely get behind a distributed (technical/entity) trust based system, but not one where I have to first place trust in someone else.

Am I missing something?
newbie
Activity: 23
Merit: 0
July 08, 2011, 07:16:59 PM
#21
trent:
There would be a standard rating scale/system that people would need to familiarize themselves with, and they would need to understand that the system runs on THAT scale and not on their own personal rating system. They would make their own judgments about how someone should be rated, but within the guidelines of the rating system. They would be encouraged to rate accurately and be given info about how consistently inaccurate ratings could affect them.
The assumption is that "generally" people would be "generally" accurate, and that is all that is needed.
If "generally" people were NOT "generally" accurate, then the system simply would not work as intended, but it would still push things in the right direction.

Thanks for the whitelist! Smiley
newbie
Activity: 23
Merit: 0
July 08, 2011, 07:03:26 PM
#20
Sgt:
I'm typing as fast as I can and I get new ones.
Yes, Anon could do that, but you must remember that if Joe Schmo is not really a bad person then he will still be rated highly by others. Also, Anon would probably not have a high rating; how could they? So their attacks would be coming from low to mid rated users as a whole. Anon could not be expected to maintain any sort of high ratings; who would rate them highly? They could rate themselves, but that doesn't mean anything, because they are only a small, small part of the web.
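
To picture why a pile of low-rated accounts has so little pull, here's a crude sketch of rater-weighted feedback versus a plain average. Made-up numbers, NOT my actual formula; and if the attackers themselves held high ratings, even the weighted number would move:
Code:
# Crude illustration (not the actual algo): each rating is weighted by the
# rater's own standing, so a pile of low-rated accounts has little pull.

def unweighted_score(ratings):
    # ratings: list of (rater_trust, value) pairs, value in -1..+1
    return sum(value for _, value in ratings) / len(ratings)

def weighted_score(ratings):
    # Same ratings, but each one counts in proportion to the rater's standing.
    total_weight = sum(trust for trust, _ in ratings)
    return sum(trust * value for trust, value in ratings) / total_weight

# Joe gets a few positives from established users...
established = [(0.90, +1), (0.80, +1), (0.85, +1)]
# ...and a pile of negatives from low-rated throwaway accounts.
attack = [(0.05, -1)] * 20

print(round(unweighted_score(established + attack), 2))  # about -0.74: looks ruined
print(round(weighted_score(established + attack), 2))    # about 0.44: still positive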
newbie
Activity: 23
Merit: 0
July 08, 2011, 06:52:50 PM
#19
Sgt:
You could make as many IDs as you want, but none of them could rate anything, because none of them has been rated by a web-connected user.
Even if I dropped that requirement and they crafted a few super-high-rated IDs, those would immediately get adjusted as soon as they got connected to the web; a handful of ratings is probably all that would be needed.
After that their whole effort would be toast, and any user could see they were "newbies" to begin with.
There is NO manipulation possible once they have robust connections to the web. It's all a done deal then.
I'm sure lulz or anon will try but I'll be giggling.
Important to note here: user security is important! If lulz or anon were to hijack hundreds of high-rated IDs due to lax security on the user side, then it could cause a problem.
It would be much simpler for them to steal bitcoin wallets or hack emails than it would be to locate hundreds of high-rated IDs, so consider that, since IDs have no info attached to them.

Your subweb ideas: this is why I would require a web connection before outgoing ratings could be made. Subwebs are bad. I could, however, allow subwebs, and users would just need to look at the "web connections:" stat to see whether they are out in la la land or not. La La Land ratings do not compare to Web Ratings.
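
For illustration, here's one naive way a "web connected before you can rate" check could work; the seed identities, the edges, and the whole approach are just my toy example, not the actual mechanism:
Code:
# Naive sketch of a "web connected before rating" rule (illustrative only):
# a user may submit ratings only if they can be traced back, along incoming
# ratings, to some trusted seed identity in the main web.

from collections import deque

def web_connected(user, incoming, seeds):
    # incoming: dict mapping a user to the list of users who have rated them.
    seen = {user}
    queue = deque([user])
    while queue:
        current = queue.popleft()
        if current in seeds:
            return True
        for rater in incoming.get(current, []):
            if rater not in seen:
                seen.add(rater)
                queue.append(rater)
    return False

incoming = {
    "newcomer": ["alice"],      # alice has rated the newcomer
    "alice":    ["seed_1"],     # alice was rated by a seed identity
    "sock_1":   ["sock_2"],     # sockpuppets who only rate each other
    "sock_2":   ["sock_1"],
}
seeds = {"seed_1"}

print(web_connected("newcomer", incoming, seeds))  # True  -> ratings accepted
print(web_connected("sock_1", incoming, seeds))    # False -> ratings rejected
A whole ring of freshly made IDs stays disconnected until a real, web-connected user rates one of them, so their ratings never count.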
legendary
Activity: 1400
Merit: 1005
July 08, 2011, 06:47:43 PM
#18
You still didn't answer my question about what would stop me from registering 100 new names...

Another potentially vicious downside:  What happens if a group of people decide to attack a particular person they don't like?  Not because of that person actually doing something wrong, but because the group doesn't agree with that person?

Take Anonymous for example.  They might decide to go after Joe Shmoe, for whatever reason.  Maybe just to troll.  Maybe because Joe Shmoe made a post on 4chan that no one liked, and then all the anons went and figured out who he was.

Suddenly, Joe Shmoe ends up with a -1000 rating, and since the majority of votes agree that Joe Shmoe is a scumbag, then he must be a scumbag, right?  Since so many anons voted negatively against Joe Shmoe, none of their personal ratings are affected, since they voted "with the flow".  Anonymous has accomplished their mission, which was to ruin Joe Shmoe's rating, with no personal side effects.
newbie
Activity: 23
Merit: 0
July 08, 2011, 06:37:48 PM
#17
trentzb:
There is no possible way to do hundreds of SQL queries quickly unless the DB is on 1 fast machine or fast system.
To calculate the thing with data scattered all over the place would be impossible.
There are thousands of examples of central DB systems running critical services, and they have architectures for security, redundancy, backup, etc.
Your assertion shoots all of them down, including Google, Amazon (not their cloud BS), and the IRS.
Human nature: yes, absolutely, you are correct.
The bottom line with that is that not everyone is bipolar or similar all the time; in fact, on average the population is generally stable and generally makes the right observations.
Because abnormal or anomalous behavior is not the norm, this system is able to put everything where "general" is and handle those abnormalities.
I know the system is foolproof as I have described because of its current use in systems that are HIGHLY anomalous but have a general normality to them.
It handles them very well.


sr. member
Activity: 406
Merit: 251
July 08, 2011, 06:32:44 PM
#16
Ok, so the human incentive would be to preserve my rating (my interests) by accurately rating someone else, and the consequence of inaccurate rating would be to tarnish my own rating. That makes all the sense in the world. But what/who defines accurate and inaccurate ratings?

Edit: I added you to the pending whitelist so as soon as an admin has a chance you should be out of newbie jail. Smiley
legendary
Activity: 1400
Merit: 1005
July 08, 2011, 06:22:00 PM
#15
Ok, that makes a whole lot more sense now.

What would stop me from making 100 new profiles, then having them give each other positive ratings?  I think you mentioned something about subwebs being prevented, but I am interested to hear more about that aspect...

How would you differentiate a malicious subweb from an accidental, but legitimate subweb?  In other words, maybe a bunch of Amish people (I know they wouldn't, but hear me out) use the system, but only rate each other, because they have no real contact with other people?
newbie
Activity: 23
Merit: 0
July 08, 2011, 06:14:38 PM
#14
Think of it this way: Let's consider this example...
All people in the world are in the web, like 6 billion people.
There is a mathematical link between Hyunmo Chen in Korea and Billybob Smith in Kentucky and their ratings can be 100% directly compared.
Good people through more and more ratings show themselves as being good and they end up in the upper part of the ratings range.
The same for bad people going to the bottom.
What I was trying to explain in the last post was that people need to rate people as they SHOULD be rated, or they will face a "pull" on their own rating.
In other words, knowingly (and consistently) rating a bad person as good will have some sort of negative effect on your rating for some period of time.
Doing it once would be negligible.
There is no discouraging of any sort of feedback as long as it is accurate.
If any one rating was accurate for its action but was not indicative of a person generally, that's OK too. The system takes it all into account.
Once one understands the dynamics, it basically forces people not only to act correctly but also to rate people correctly, whatever "correct" rating is warranted.
A great example of where people don't rate correctly AT ALL is where men consistently rate ugly women with large breasts a 10. We know there are VERY few real 10's in the world, yet guys would make you think there are 100 million. Hope that is not offensive; I didn't mean it to be. It is a real example.
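
If it helps, here's a crude picture of the "pull" idea. This is an illustration only, NOT the real formula; the 0-10 scale and the strength number are made up:
Code:
# Illustration only: a rater who consistently disagrees with the consensus on
# the people they rate gets their own standing nudged downward; a one-off
# disagreement barely moves it.

def apply_pull(rater_trust, given, consensus, strength=0.05):
    # given / consensus: parallel lists of the ratings this user handed out and
    # the consensus rating of each target, both on a 0..10 scale.
    total_error = sum(abs(g - c) / 10.0 for g, c in zip(given, consensus))
    return max(0.0, rater_trust - strength * total_error)

honest = apply_pull(0.8, given=[7, 8, 2],    consensus=[7, 7, 3])
shill  = apply_pull(0.8, given=[10, 10, 10], consensus=[2, 3, 1])

print(round(honest, 2))  # about 0.79: barely moves
print(round(shill, 2))   # about 0.68: noticeably pulled down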