Hi. Thank you for your question.
Mode of operation
STAGE 1
- The Requestor creates a Request by specifying a search query, which may include a logo, a brand, signature style elements, etc. The Doer’s task is to find this information.
- The Requestor adds as much information as possible (what kinds of infringements are of interest to him, countries, cities, etc.).
- The submitted Request goes to Reviewing (if necessary, the moderator asks for clarifications and/or recommends filling out other fields). The moderator’s work is done in the background and remains invisible to other users.
- When ready, the Request goes immediately to the Requests catalogue and becomes viewable to Doers.
- The Requestor can indicate the desired number of Alerts by transferring the appropriate amount to his account (a rough sketch of this flow is given below).
... you can find more information in WP (pages 19-25) here: https://stopthefakes.io/docs/WP_En.pdf
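To make the Stage 1 flow easier to follow, here is a minimal sketch in TypeScript. All type, field and function names (InfringementRequest, desiredAlerts, review, etc.) are illustrative assumptions, not the actual StopTheFakes API; only the steps themselves come from the description above.

```typescript
// A minimal sketch of the Stage 1 lifecycle described above.
// All names and fields are illustrative assumptions, not the actual
// StopTheFakes API.

type RequestStatus = "reviewing" | "published";

interface InfringementRequest {
  id: string;
  query: string;                // logo, brand, signature style elements, etc.
  infringementTypes: string[];  // what kinds of infringements interest the Requestor
  countries: string[];
  cities: string[];
  desiredAlerts: number;        // paid for by transferring the corresponding amount
  status: RequestStatus;
}

// The moderator's review runs in the background and is invisible to other users.
function review(request: InfringementRequest, needsClarification: boolean): InfringementRequest {
  if (needsClarification) {
    // The moderator asks the Requestor for clarifications or other fields;
    // the Request stays in Reviewing.
    return { ...request, status: "reviewing" };
  }
  // When ready, the Request goes straight to the Requests catalogue for Doers.
  return { ...request, status: "published" };
}
```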
STAGE 3
- The received Alert is sent for preliminary reviewing to check whether it contains any prohibited material, such as pornography or graphic violence.
- When the reviewing stage is over, the Doer’s Alert goes to ‘Request with alerts’. Alerts containing all the details are added to the Request on this page.
- The Requestor receives an email inviting him or her to consider the new Alert.
- The Requestor decides on the Alert within a specific time limit.
- Based on the Requestor’s decision, the Alert is then assigned one of three statuses (a rough sketch of this step follows the list):
1. The Alert is accepted. In this case, the Doer gets remuneration.
2. The Alert is being checked.
3. The Alert has been canceled. In this case, the Doer is not remunerated, and the Requestor must state the reason for declining the Alert by selecting one of the options from the menu and explaining in a text field why the Alert has not been accepted.
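The three outcomes above can be summarised in a small TypeScript sketch. The names (AlertStatus, decide, declineReason) are assumptions made for illustration; only the three statuses and their consequences come from the list above.

```typescript
// A minimal sketch of the Stage 3 decision step; names are assumptions,
// only the three outcomes come from the description above.

type AlertStatus = "accepted" | "being_checked" | "canceled";

interface AlertDecision {
  status: AlertStatus;
  payDoer: boolean;          // the Doer is remunerated only for accepted Alerts
  declineReason?: string;    // menu option chosen by the Requestor
  declineComment?: string;   // text explaining why the Alert was not accepted
}

function decide(status: AlertStatus, declineReason?: string, declineComment?: string): AlertDecision {
  switch (status) {
    case "accepted":
      // 1. Accepted: the Doer gets remuneration.
      return { status, payDoer: true };
    case "being_checked":
      // 2. Being checked: no payout yet.
      return { status, payDoer: false };
    case "canceled":
      // 3. Canceled: no remuneration; a reason and an explanation are required.
      if (!declineReason || !declineComment) {
        throw new Error("A canceled Alert needs a reason and an explanation");
      }
      return { status, payDoer: false, declineReason, declineComment };
  }
}
```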
Rating system
In our opinion, our main challenge is dealing with low-quality or irrelevant data provided by Doers. Therefore, we are introducing a rating system to minimize Reviewing expenses and to improve the quality of Alerts.
Data processing is a costly and arduous process, so we intend to maximize Doers' involvement. Ratings will not only influence the level of earnings but will also gamify the whole process. Introducing a game element sparks a competitive spirit and motivates Doers to work more productively.
General principles
- The Service keeps track of each Doer’s rating and updates it online in real time
- The rating system applies to Doers only
- Doers receive or lose points for specific actions
- The more points a user has, the higher his position and earnings
- Both Doers and Requestors can view the rating of any Doer, the number of points he has earned and the latest modifications (a rough sketch of these principles is given below).
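As a rough illustration of these principles, a point-tracking sketch in TypeScript might look like the following. The action names and point values are purely hypothetical; the actual actions and values would be defined by the Service.

```typescript
// A rough illustration of the rating principles above. The action names
// and point values are hypothetical; the Service would define the real ones.

type DoerAction = "alert_accepted" | "alert_canceled" | "alert_spam";

const POINTS: Record<DoerAction, number> = {
  alert_accepted: 10,   // assumed value
  alert_canceled: -5,   // assumed value
  alert_spam: -20,      // assumed value
};

interface RatingEntry {
  action: DoerAction;
  points: number;
  timestamp: Date;
}

class DoerRating {
  private history: RatingEntry[] = [];

  // The Service updates the rating online, right after each action.
  record(action: DoerAction): void {
    this.history.push({ action, points: POINTS[action], timestamp: new Date() });
  }

  // Total points determine the Doer's position and level of earnings.
  get total(): number {
    return this.history.reduce((sum, entry) => sum + entry.points, 0);
  }

  // Both Doers and Requestors can view the latest modifications.
  latest(count: number): RatingEntry[] {
    return this.history.slice(-count);
  }
}
```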
...
Does the moderator play any role in evaluating whether an alert or request might be justified or not? Or does the moderator only act according to the rules? And what are those rules? Are they specified by the decentralized community over time or by you as a service provider?
Moderators only check the submitted Alerts to make sure they do not contain spam or illegal content. The decision itself is made only by the Requestors.
Ok, but even then I can see a lot of edge cases where it will be difficult for the moderator to make the right decision. But I assume your system will evolve over time so that you can decide more quickly and accurately whether something is spam or not.
You are right. We are planning to automate some of these processes, and we already know of some solutions that will help.