3) Nobody should be able to "rate" another user without having done business with them directly, and it would not be too difficult to create a system where two or more users agree to a transaction and are then allowed to rate each other. This record could then be publicly displayed.
So we wait for an obvious scammer to scam someone before the person who gets scammed can leave trust, which will then be pointless because the scammer will have already ditched the account and moved on? This method would only really benefit scammers, who would be quick to take advantage of it.
An obvious scammer won't be scamming anyone; it's the cunning scammers who are successful. Assuming guilt (often on a whim) simply doesn't work, regardless of whether the intent is good or not. It just creates another vector for unscrupulous individuals to harass other members.
An example of what would work better:
- User wants to sell or provide a service. They create a thread and select a checkbox to mark it as a transactional thread. Optionally, limit such threads by category and/or require that a user establish a degree of positive feedback before being allowed to create threads in more 'sensitive' categories.
- Other users who are interested can then contact the seller privately, but cannot post in the thread unless they've actually completed a transaction. The thread exists to act as the ad, not as a discussion.
- All such threads will be publicly visible from the user's profile, listed per thread along with a summary of +/- ratings. A completed transaction should default to a + rating for both buyer and seller.
- The public rating should be based on a rolling 3-month average to keep values fresh and to make account selling unattractive (i.e. a pile of feedback older than 3 months won't count for as much as a currently active account); see the sketch after this list.
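As a rough sketch of how that rolling 3-month window might be computed (the `Feedback` record and `public_rating` function are hypothetical names for illustration, not part of any existing forum software):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical feedback record: +1 for a positive rating, -1 for a negative one.
@dataclass
class Feedback:
    rating: int          # +1 or -1
    left_at: datetime    # when the feedback was left (UTC)

def public_rating(feedback: list[Feedback],
                  window_days: int = 90,
                  now: datetime | None = None) -> tuple[int, int, float]:
    """Summarise only the feedback left within the rolling window.

    Returns (positives, negatives, score), where score is the share of
    positive ratings in the window, or 0.0 if there is no recent feedback.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    recent = [f for f in feedback if f.left_at >= cutoff]

    pos = sum(1 for f in recent if f.rating > 0)
    neg = sum(1 for f in recent if f.rating < 0)
    total = pos + neg
    score = pos / total if total else 0.0
    return pos, neg, score
```

An account that was busy six months ago but has done nothing since would show zero recent feedback here, which is exactly what makes buying an old account pointless.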
With this system, new accounts with no history will not be trusted with anything substantial as a matter of common sense, but members will have the opportunity to establish trust without trolls leaving fake/baseless negative comments.
Once a user has established trust, anyone can see it for themselves and know that the feedback was genuine. Of course, the problem of users running multiple accounts still exists. One way around that is to offer users the option to have their identity verified by a 3rd party; once that 3rd party service confirms the verification, they get a notation near their name showing that their identity is verified.
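Continuing the same hypothetical sketch (reusing `Feedback` and `public_rating` from above), the verified notation could simply be a flag on the profile that is only set after the 3rd-party service reports success:

```python
from dataclasses import dataclass, field

# Builds on the Feedback / public_rating sketch above; names are illustrative only.
@dataclass
class Profile:
    username: str
    identity_verified: bool = False          # flipped only after the 3rd-party check succeeds
    feedback: list = field(default_factory=list)

def profile_badge(profile: Profile) -> str:
    """One-line summary shown next to the username on transactional threads."""
    pos, neg, score = public_rating(profile.feedback)
    verified = " [verified]" if profile.identity_verified else ""
    return f"{profile.username}{verified}  +{pos}/-{neg} ({score:.0%} positive over the last 90 days)"
```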