(I asked Poker Player to unlock this thread so that I could post in it. I drafted a reply that grew much larger than I meant for it to, and basically didn't want it to go to waste. Even though it's written in a way that suggests that I might be expecting a response to it, I'm not; I'm just sharing some long-winded thoughts, is all, in the hope that some of them might be found to be interesting.)

Quote: I am not surprised. Of course tagging someone when they have already scammed and disappeared from the forum is not going to save you from anything.
Hmm... Probably I expressed myself poorly which led to you misunderstanding me. The problem is, my thoughts on the trust system are scattered across more than a few posts (and probably even more PMs), and it's not really practical for me to try to share my complete perspective on it each time I write something concerning it.
A lot of my perspective on the trust system can be understood as an argument formed in terms of expected value. Judging by your handle, I'm guessing that you already understand that principle well, but, as a refresher for the people that don't understand it, let's ask ourselves whether or not it makes mathematical sense to accept a gambling offer like the following:
"Flip a fair coin 5 times. If all 5 flips come up as heads, then you win $100. If at least one of your flips comes up as tails, then you lose $5."
One way to analyze an offer like that is to multiply the probability of a successful outcome (which is: 0.5**5, or 3.125%) by the amount you will gain in that case ($100), and then subtract from that the probability of an unsuccessful outcome (which is: 1 - 0.5**5, or 96.875%) multiplied by the amount you will lose in that case ($5).
So, 0.03125 * 100 - 0.96875 * 5 = −1.71875. In other words, you'll lose ~$1.72 (that's a tilde, not a minus sign, BTW) each time you try that game. That's difficult for most people to see, because in concrete terms each attempt will either lose you $5, or gain you $100. But, in some abstract mathematical sense you're actually losing ~$1.72 every time you play the game, and the more you play the game, the more you should expect your concrete balance to mimic your abstract one (that is, if you play the game 100 times, then you should expect to lose ~$172, and if you play it 1000 times, then you should expect to lose ~$1720).
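(Here's that same back-of-the-envelope arithmetic as a few lines of Python, in case seeing it run is easier to follow; the variable names are just mine:)
p_win = 0.5 ** 5             # probability that all 5 flips come up heads (3.125%)
p_lose = 1 - p_win           # probability of at least one tails (96.875%)
ev_per_game = p_win * 100 - p_lose * 5
print(ev_per_game)           # -1.71875, i.e. you "lose" ~$1.72 per game on average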
I've run into my fair share of people who aren't convinced by the concept of expected value, and think that it's some kind of neat idea that doesn't correlate with reality, so to drive the point home, here's a Python script that actually simulates what would happen to a starting balance of $0 if you played the above game 1 million times (you should expect to lose something in the neighborhood of 1.7 million dollars, so let's see if that actually bears out):
#!/usr/bin/env python3

import random

balance: int = 0


def playGame() -> None:
    global balance
    # Flip a fair coin 5 times.
    flip1: str = random.choice(('heads', 'tails'))
    flip2: str = random.choice(('heads', 'tails'))
    flip3: str = random.choice(('heads', 'tails'))
    flip4: str = random.choice(('heads', 'tails'))
    flip5: str = random.choice(('heads', 'tails'))
    if (flip1, flip2, flip3, flip4, flip5) == ('heads', 'heads', 'heads', 'heads', 'heads'):
        # All 5 flips came up heads: win $100.
        balance = balance + 100
    else:
        # At least one flip came up tails: lose $5.
        balance = balance - 5


def main() -> None:
    print(f'Starting balance: ${balance}')
    for trial in range(1000 * 1000):  # play the game 1 million times
        playGame()
    print(f'Ending balance: ${balance:,}')


if __name__ == '__main__':
    main()
Running the above script gives me:
Starting balance: $0
Ending balance: $-1,713,185
(If my [hide] tag were available, I'd have used it on the above aside.)
So, let's take "expected value" as a sensible idea, and, for ease of discussion, let's give the specific two-outcome formula from above some mnemonic terms, like this: S*u - F*d (where S is the probability of success, u is the upside, F is the probability of failure, and d is the downside).
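(If it's easier to read that as code, here's a tiny Python helper; the name expected_assist is obviously just something I made up for this post:)
def expected_assist(S: float, u: float, F: float, d: float) -> float:
    # Probability of being right times the upside, minus the
    # probability of being wrong times the downside.
    return S * u - F * d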
Let's also invent a new unit called assist and imagine that that made-up unit encapsulates the idea of Bitcointalk-related value (as in, when you do something that helps the forum, your action can be thought of as one that produces positive assist, and when you do something that harms the forum, your action can be thought of as one that produces negative assist).
Different trust-actions admit slightly different treatments, so let's (for the rest of this post) just focus on the following case: Leaving someone negative (or neutral-negative) feedback when you estimate that something warning-worthy is happening, has happened, or will happen. In this case, S is the chance that you're not mistaken in your judgment, and F is the chance that you are mistaken in your judgment. That takes care of S and F, but what about u and d? Let's assume that the upside (u) is 1 unit of "assist" (our made-up unit to encapsulate forum-related value), and that the downside (d) is also 1 unit of assist (that is, let's assume we contribute as much value to Bitcointalk by being correct in our judgment as we detract from it by being incorrect; I don't think that that's a fair assumption, especially for the kind of trust-actions we're talking about, but let's do it that way for now). Finally, let's say someone has an accuracy of 95% (as in, 95% of the time their judgment turns out to have been spot-on, and 5% of the time it turns out that they were mistaken).
So, 0.95 * 1 - 0.05 * 1 = 0.9. In other words, every time that our 95%-accurate speculative-feedback-leaver does their thing, they produce 0.9 units of assist. Nice! Let them do their thing 100 times and you can expect them to have produced 90 units of assist. They should be on DT, yeah?
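(Or, using the little helper from above, rounding away the floating-point noise:)
per_action = expected_assist(S=0.95, u=1, F=0.05, d=1)
print(round(per_action, 4))          # 0.9 units of assist per action
print(round(100 * per_action, 4))    # 90.0 units of assist after 100 actions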
The problem I have with the above calculation is to do with the ratio between u and d. I don't share the following view, but I would guess that most people feel that u should be bigger than d (or at the very least that they should be set equal to each other, as above), which is to say, I think most people feel that when their judgment-calls are correct then they've done something very good for Bitcointalk (like actually helped some other user to avoid being scammed), and that when they get things wrong and make mistakes then that's not really such a big deal (especially compared to all the good that they believe they're doing when they're not mistaken). Let's call this perspective (that u should be greater than or equal to d, and that getting it "right" is worth many instances of getting it "wrong") the "u>=d" perspective.
I have the opposite (and then some) view (let's call this one the "u<<d" perspective). My perspective is that even when you do get things right, you're producing a much smaller effect on other people's decision-making than you think you are (that is, you're not actually sparing other users pain and suffering on any non-negligible scale, you just like to believe that you are, or, even worse, you already suspect that you're not really helping anything or anyone, but your sense-of-justice compels you to do something, even if it makes little sense to do so). In my view, the upside when you're right is much, much smaller than the downside when you're wrong. When you're right, you think that your feedback will correctly help some other user to make a better/safer decision than they would have been able to make without you taking the action that you took (and in my view, that's mostly just wishful thinking that everyone wants to believe is true [1]). But when you're wrong, you're almost certainly causing definite harm (as in, tagging some innocent user and contributing to them losing their enthusiasm for the forum, for example).
So, I'd scale the value for u way, way down. Something like 0.01 makes more sense to me than 1 does. I mean, who can really say what the value should be set to, especially when we're dealing with a made-up unit, but, remember, all that that value relates to is how genuinely helpful to other people your accurate speculative feedback actually is, and what I'm saying is that (I think) people have been vastly overestimating that value (u), and that scaling it down (relative to d) by two factors of 10 would be my guesstimate to get it within realistic proximity of its "true" value. With that adjustment, the previous hypothetical user with an accuracy of 95% is actually producing -0.0405 units of assist per trust-action (of the kind we've limited our thinking to), and should therefore stop doing that. Even a user with 99% accuracy would produce small amounts of negative assist with each action that they take. If you look at things through that lens (like I do), then your view will be that this kind of feedback is just slowly making Bitcointalk worse and worse (in the same way that my example-game unavoidably loses you ~$1.72 each time you play it).
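(Plugging my guesstimate into the same helper, so that nobody has to take my arithmetic on faith; again, the 0.01 is just my guess at u's "true" value:)
print(round(expected_assist(S=0.95, u=0.01, F=0.05, d=1), 4))    # -0.0405 assist per action
print(round(expected_assist(S=0.99, u=0.01, F=0.01, d=1), 4))    # -0.0001 assist per action

# With u = 0.01 and d = 1, the break-even accuracy is 1 / 1.01, i.e. you'd
# need to be right more than ~99.01% of the time just to stop doing harm.
print(round(1 / 1.01, 4))    # 0.9901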
Look, I don't expect many will support my view (because of how counterintuitive and difficult-to-accept the conclusion is), but I hope that most people can at least appreciate the shape of my argument, and realize how easy it would be for someone to set out to do one thing (for example, to try to make the forum a nicer/safer place by taking it upon themselves to actively "police" it) and instead end up mostly accomplishing something else (like maybe succeeding in making things negligibly safer, but only at the greater expense of very non-negligibly contributing to the negative forum dynamics that make the whole environment less productive and much less hospitable than it could be for new users and new businesses).
[1] Since I think it's typically DT members that feel the need to leave the kind of feedback we've been talking about, here's an interesting thought experiment: Imagine Bitcointalk implemented a policy of dissolving DT for the first three months of every year. What do you predict the consequences of that would be? Personally, I find the "doomsday" prediction that during the first quarter of each year Bitcointalk would temporarily devolve into some kind of shitshow with people getting scammed left and right to be an extremely silly one.