
Topic: Here is the solution for TRUST or WOT - page 2. (Read 2135 times)

sr. member
Activity: 406
Merit: 251
July 08, 2011, 05:54:52 PM
#13
Quote:
I can't run a distributed database because of the nature of the queries that need to be run. It will need to be on a huge computer/data center.

If member ratings are stored in a central database, this idea will fail. There can be NO single point of failure in such an idea, and a central database, no matter how secure, isolated, obscure, encrypted, or inaccessible, is a single point of failure.

Further, it sounds to me that this idea depends heavily on human nature/emotion/whatever you want to name it, and on whether a human will do the right or wrong thing. Any time human decision-making comes into play, your best chance of predicting the outcome is to look at and analyze the incentives behind the decision, and even that is not foolproof.

If you truly believe this idea would "revolutionize mankind" then I am interested to learn what your incentives are to keep it from mankind.
legendary
Activity: 1400
Merit: 1005
July 08, 2011, 05:48:28 PM
#12
Quote:
Sgt:
Consider that the system is structured in a way that prevents scam/hack attempts on ratings, and it's geared towards the world in general and anything related to trust, not just the bitcoin WoT.
Given that trust is important in so many aspects of everyone's life, this is a revolutionary system.
The "web" connections I am referring to are connections in the "web of trust": any person (entity/object) must have a direct connection to the complete web by being rated by someone who is already in the web.
This is necessary to prevent "sub webs" from being created, whether by hackers or by valid groups who simply aren't connected.
"Sub webs" would produce ratings that are not comparable to those in other "sub webs".
The downside is that this demands a "seed web" be started among original trusted (or not trusted, of course) people, and those people would then be able to invite people into the web by rating them.
People know lots of people, so this certainly is not a problem.
Anyone, good or bad, can join the web. They just need to be rated.
Sure, their initiating rater might give them a good rating just to get them in, and that's OK. They will show 1 rating and 1 web connection, which should be seen as not yet mature.
Of course, if their initiating rater is foolish enough to rate a bad person good, then their own rating will suffer as soon as the bad person acts like himself (bad) and gets bad ratings, so the whole thing takes care of itself in short order.
It was cool to see the bitcoin WoT. I said, "wow, this really needs my algo".
The bitcoin WoT web could, or would, simply be part of the big universal WoT my algo would run on.
I wouldn't call it WoT though.

So what you're saying is, if I rate my buddy with a good rating so that he can come use the system, and then he abuses the system, I get a negative rating on my scorecard because of actions he took?  Or say I purchased a bitbill from someone who is relatively new, so I give him a good rating because I had a good transaction; if he then screws somebody over down the line, I would get screwed over too.

It sounds like this whole system actually discourages people from leaving positive feedback.  Why would I want to leave someone else positive feedback when it could potentially bring my own feedback score down in the future?
newbie
Activity: 23
Merit: 0
July 08, 2011, 05:38:04 PM
#11
Sgt:
Consider that the system is structured in a way that prevents scam/hack attempts on ratings, and it's geared towards the world in general and anything related to trust, not just the bitcoin WoT.
The "web" connections I am referring to are connections in the "web of trust": any person (entity/object) must have a direct connection to the complete web by being rated by someone who is already in the web.
This is necessary to prevent "sub webs" from being created, whether by hackers or by valid groups who simply aren't connected.
"Sub webs" would produce ratings that are not comparable to those in other "sub webs".
The downside is that this demands a "seed web" be started among original trusted (or not trusted, of course) people, and those people would then be able to invite people into the web by rating them.
Anyone, good or bad, can join the web. They just need to be rated.
Sure, their initiating rater might give them a good rating just to get them in, and that's OK. They will show 1 rating and 1 web connection, which should be seen as not yet mature.
Of course, if their initiating rater is foolish enough to rate a bad person good, then their own rating will suffer as soon as the bad person acts like himself (bad) and gets bad ratings, so the whole thing takes care of itself in short order.
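A rough Python sketch of the joining rule described above; the actual algo is never revealed, so the class name, data layout, and scoring here are illustrative assumptions only, not the real system:

Code:
# Illustrative sketch only: names and structure are assumptions.
class WebOfTrust:
    def __init__(self, seed_ids):
        # The "seed web": the original trusted (or not trusted) people.
        self.members = set(seed_ids)
        self.ratings = []                     # (rater_id, target_id, score)

    def add_rating(self, rater_id, target_id, score):
        # Only someone already connected to the web may rate, so every new
        # ID gets pulled into the one complete web and no isolated
        # "sub web" can form.
        if rater_id not in self.members:
            raise ValueError("rater is not connected to the web")
        self.ratings.append((rater_id, target_id, score))
        self.members.add(target_id)           # the target is now connected

    def connection_count(self, target_id):
        # Distinct members who have rated this ID; a count of 1 means the
        # ID should be seen as not yet mature.
        return len({r for r, t, _ in self.ratings if t == target_id})

web = WebOfTrust(seed_ids={"seed_1", "seed_2"})
web.add_rating("seed_1", "newcomer", +1)      # newcomer now shows 1 web connection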
legendary
Activity: 1400
Merit: 1005
July 08, 2011, 05:22:53 PM
#10
So if no one can leave a rating without having ratings themselves, how does a person ever get rated?

What would stop me from performing five $10 sales in order to get good feedback, then screwing someone over for $100?

What do you mean by "web" connections?
newbie
Activity: 23
Merit: 0
July 08, 2011, 05:05:37 PM
#9
Good points, let me try to explain:
Yes, of course, if LOTS of people rate someone down (with my algo) then his rating will go down, BUT it will only stay down if he is in fact bad and continually gets rated down normally.
In other words, it would take lots of highly rated people rating him down consistently (not just the same people over and over, because it won't accumulate that way) to do any damage.
Like I said before, cream rises to the top, etc.
In a small "universe", yes, manipulation is much easier, but even then, assuming the person being rated down is in fact a good person, what you would see would be large movements in his rating AND you would see the same in the bad people who were attacking him.
It's difficult to explain any better than that.
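As an illustrative guess at what "it won't accumulate that way" could look like in practice (only each rater's latest rating counts, weighted by that rater's own standing; the function name and weighting are assumptions, and the real, undisclosed algo almost certainly differs):

Code:
# Hypothetical aggregation, not the actual algorithm.
def aggregate(ratings, rater_scores):
    """ratings: list of (rater_id, score) with score in [-1, 1];
    rater_scores: dict mapping rater_id to that rater's current standing."""
    latest = {}
    for rater_id, score in ratings:     # later entries overwrite earlier ones,
        latest[rater_id] = score        # so repeat ratings do not accumulate
    total_weight = sum(rater_scores.get(r, 0.0) for r in latest)
    if total_weight <= 0:
        return 0.0
    return sum(rater_scores.get(r, 0.0) * s for r, s in latest.items()) / total_weight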
As for multiple accounts, sure, someone absolutely could create them. The problem with doing that is they would always be starting over: their "web" connections would show few or none (which marks an immature ID and a possible hacker), and they would face the very difficult task of re-engineering a high rating again, which would be nearly impossible from any one starting point.
I carefully engineered the algo with hacking in mind and I fully expect hackers to try but it's already taken care of.
This is ALL assuming that I'm dealing with a large user base.
If it's attacked in the bootstrap phase then yeah it's a problem.
I'm not sure of the timeframe for when this will actually be "opened for business", as I'm still pushing through the analysis BS with the other application of this.
I took a 3 month break from that one because it's a TON of work.
It was a mindbender to port it over to that, but I'll skip the details.
Trust is another completely different mindbender, but I managed to figure it out except for a few other things.
newbie
Activity: 25
Merit: 0
July 06, 2011, 11:33:41 PM
#8
I like to break things.. so here goes:

- I don't think you could ever stop one person from having multiple accounts.

- I still see groups of people elevating and destroying one person's trust rating without any appeal.
i.e. A bunch of secret racists who all have good ratings because they network, etc. can destroy a minority person's trust rating as soon as they move into the "wrong" neighbourhood.
i.e. A member of a church comes out as gay, and now everyone there demotes his trust rating.

Sure, he can demote every individual that "has it out for him", but that's probably not as influential as 100 people demoting him. Correct?

I like the idea, I just think you'll never avoid manipulation... and if it becomes the de facto standard, people's lives could be ruined, i.e. job interviews, bank loans, potential relationships.
newbie
Activity: 23
Merit: 0
July 06, 2011, 01:09:26 AM
#7
Stephen:
Yes, it is possible for someone to rate someone else they don't know, but they would need that person's ID code.
I envision ID codes to be freely available but the exact mechanism I haven't yet figured out.

Multiple identities are spotted simply by observing that an ID has few (or no) web connections (this count would be shown) and that the ID is generally not mature.
I have figured out that IDs cannot rate others until they have a web-connection count (from others rating them) of at least 2, maybe more.
This would prevent a hacker group from creating their own web and jacking up ratings on some.
They would all show ZERO web connections and a legit user would see that.
I prevent this by requiring a certain web-connection count before an outgoing rating can occur.
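A minimal sketch of that gating rule; the threshold of 2 comes from the post above, while the function name and data layout are assumptions for illustration only:

Code:
MIN_WEB_CONNECTIONS = 2   # "at least 2, maybe more"

def may_rate(ratings, rater_id):
    """ratings: list of (rater_id, target_id, score) already in the web."""
    incoming = {r for r, t, _ in ratings if t == rater_id}
    # Outgoing ratings are only allowed once enough distinct members have
    # rated this ID, so a freshly created hacker web (zero connections to
    # the real web) cannot push ratings onto anyone.
    return len(incoming) >= MIN_WEB_CONNECTIONS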
newbie
Activity: 23
Merit: 0
July 06, 2011, 12:56:06 AM
#6
vertygo:
This doesn't apply solely to bitcoin, but in the case of OTC it would.
It is, however, at the heart of all the things people trust, and it gives people a way to rate and verify such trust.

I can't run a distributed database because of the nature of the queries that need to be run. It will need to be on a huge computer/data center. I'm not sure why you want it to be distributed, as that has nothing to do with anything. DDoS? Sure, any website can get DDoSed. Does this stop Google or Amazon, or me?

To your question:
you want to know if a specially crafted attack could be manufactured.
The attack you described would fail in a short time because in my algo the cream rises to the top and the crap sinks to the bottom.
Any one highly rated person (or group of them) could of course be paid to rate someone highly.
The problem with this is that, in the end, the attacker (being a false actor) would get bad ratings, and those who rated him highly would be negatively impacted.
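One plausible reading of "those who rated him highly would be negatively impacted", sketched in Python for illustration; the penalty size and mechanism are assumptions, not the undisclosed algo:

Code:
# Hypothetical back-propagation of blame to earlier positive raters.
def apply_rating(scores, history, rater_id, target_id, score, penalty=0.1):
    """scores: dict of current scores; history: list of (rater, target, score)."""
    history.append((rater_id, target_id, score))
    scores[target_id] = scores.get(target_id, 0.0) + score
    if score < 0:
        # Everyone who previously vouched for the target with a positive
        # rating takes a small hit, so paid-for endorsements carry a cost.
        for prev_rater, prev_target, prev_score in history:
            if prev_target == target_id and prev_score > 0:
                scores[prev_rater] = scores.get(prev_rater, 0.0) - penalty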
legendary
Activity: 2506
Merit: 1010
July 05, 2011, 04:46:34 AM
#5
A couple of questions.

In the example that Foodstamp gave, he claims he was given a negative rating by someone he never did business with.  Does your solution provide a defense against false negatives?

An additional example from the Foodstamp scam -- he operated under multiple identities, one of which had a good rating even though his own was bad.  How does your solution address that?

newbie
Activity: 25
Merit: 0
July 05, 2011, 04:44:52 AM
#4
I appreciate your response, and I do respect your enthusiasm. However, you say:
"it would be plainly obvious to the users once they used and understood it's use that it was a very sound system."

If it's that obvious, why can't you actually explain it? Why do I have to engage and immerse myself in your system before knowing its benefits? You have to understand that fundamentally you want everyone to be able to trust everyone else, but YOU are not willing to trust ANYONE with a plain explanation. You have already stated it's hack/tamper proof. Fantastic, then there's no reason it can't be open source!

As for databases, that's a whole other topic. What you've hinted at is that you will host a central trust database. Well, bitcoin is doing very well without such a thing. Distributed systems have some interesting possibilities. I'm not saying you are wrong, or even that it's possible to do it some other way; I'm just saying that the control of this data (even if anonymized) is a huge factor. And of course, basic security against having the DB DoSed constantly is another issue.

I just don't get why you can't say, here... this is it: a couple of graphs, some flowcharts, and the way you've solved the other 2 problems with the same algo. Is it like Amazon, only people review each other? Is it like eBay, where we get gold stickers for doing the same thing over and over again? Is it like this forum, where you have to earn a certain level by mucking about in the newbie area?

Assigning concrete IDs to real people might not be too popular on a forum based on the ideology of a completely anonymous electronic currency. I don't have an alternative to IDs; I've never thought about a trust system.

If you want feedback on the system, how it might be integrated and adopted, obstacles that need to be overcome, etc., then you need to give specific information.

This is trust you're talking about.. probably the most difficult and powerful feeling a human being can have.

I'll end with a question because I would like to continue this conversation..

What is to prevent me from paying $1000 to 1000 people to ensure my trust score is top of the line... JUST so I can scam another 1000 people (seconds after confirming my 100% trust rating) for $5000 each?
newbie
Activity: 23
Merit: 0
July 05, 2011, 01:04:40 AM
#3
I can understand the negatives: I've now been up for 2 days, so I'm pretty unable to think straight, and on top of that I've been thinking for a long while about how to present any of this without explaining everything, including the algo itself, which I absolutely refuse to do.
This would be the 3rd port of the algo, as I am already using it successfully on 2 other very difficult problems.

I was hoping to get some feedback as far as what people thought of the concept more than anything.

As for transparency, it must be understood that the algo will never be revealed, and it would be plainly obvious to the users, once they used and understood its use, that it is a very sound system.
People or entities would simply be assigned an ID consisting of a string of chars, probably 128 bits. No info about "them" would be stored, or even able to be entered, and it wouldn't be needed anyway. The ID just refers to an object with values attached to it that can be modified by others through ratings and looked up.
Can't store it in a DB? How can you reference any object that is not stored somewhere?
Now, if entities were not assigned an ID (which could also be referenced by a QR code), then how could anything be calculated or looked up?
Not cool? Alternatives?
You can't rate or assign any values to a non-object.
There is the foundation.
Did you have an idea of how to calculate variables that were never defined?
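A small Python sketch of the "string of chars, probably 128 bits" ID described above; the record layout is an assumption, and the hex string could just as well be rendered as a QR code for easy reference:

Code:
import secrets

# Hypothetical entity record; no personal info is stored or needed.
def new_entity():
    entity_id = secrets.token_hex(16)      # 16 random bytes = 128 bits
    # The record is just an object with values that others can modify
    # through ratings and look up.
    return {"id": entity_id, "ratings": [], "web_connections": 0}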

The "dream" has already been kept alive by using this algo on 2 other hard problems before this.
I just had this idea a few months ago after constantly being barraged by the question of why people have such a weak trust model and readily trust information from totally proven untrustworthy sources, in addition to other people who are also proven untrustworthy.
In addition, there is always a "trust chain" that could go on through very many branches, which would basically be impossible for anyone to verify with a high degree of certainty.
My algo takes care of all that and does it extremely well.
newbie
Activity: 25
Merit: 0
July 05, 2011, 12:20:49 AM
#2
Wouldn't a new trust system require complete transparency?

Your trust rating (or "Quatloo Bank") has been lowered!

Being vague about your system
 -2 Quatloos.

Everyone and every corporation will be stored in some sort of SQL DB?
 -1 Quatloo.

Assigning everyone a QR barcode just to force them into your trust system
 - Not cool, man.

Sitting on this since 1989?
 +1 Quatloo for keeping the dream alive!

Perhaps not constructive, but the foundation didn't have the right permits anyway.

:P

newbie
Activity: 23
Merit: 0
July 04, 2011, 09:37:25 PM
#1
This is my first post. Been reading here since June 9th and mining since then.
I am a Senior Systems Engineer, endpoint security expert and Forex Trader.
I am posting this message because I have a solution to the constant problem of TRUST, and I was finally driven to post because of this crap:
https://forum.bitcoin.org/index.php?topic=25962.0 (since I can't post there I had to post in the newbie section)

I'm talking about trust not just in BTC but anywhere in life.
I created an algorithm in 1989 that handles this problem perfectly.
I saw OTC's web of trust and it was shocking to me.
They got the basic idea right, but with no algo except a rudimentary point system, and it is vulnerable to trust hacking.
My algo can't be hacked or manipulated by anyone, even me.
The best example I've seen was the OTC WoT, and it's not even close.