As I understand it, the quality of Experts' evaluations and reviews in manual Widgets is assessed both by Subscribers and automatically. Will these two types of assessment (manual by Subscribers and automated) be rated separately, each with its own score, or will there be a single aggregate estimate (and if so, what weight is assigned to each type of evaluation)? And what will the algorithm for the automated evaluation of Experts' evaluations be based on: formula 3 from Appendix 1, or some dedicated machine algorithm?
Thanks in advance.
Subscribers rate Experts' evaluations on a 1-to-5 scale. The Experts' distribution bases are then calculated from their average ratings (you can find the formulae in Appendix 1 of the WP).
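For illustration, here is a minimal Python sketch of that step. The names and the final formula are assumptions: it uses a plain arithmetic mean and a simple proportional share, whereas the actual distribution-base formulae are those given in Appendix 1 of the WP.

```python
# Sketch only: plain mean + proportional share, NOT the WP's Appendix 1 formulae.

def average_rating(ratings: list[int]) -> float:
    """Average of Subscribers' 1-to-5 ratings for one Expert's evaluation."""
    if not ratings or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be a non-empty list of values from 1 to 5")
    return sum(ratings) / len(ratings)

def distribution_bases(avg_ratings: dict[str, float]) -> dict[str, float]:
    """Hypothetical base: each Expert's share of the summed average ratings."""
    total = sum(avg_ratings.values())
    return {expert: avg / total for expert, avg in avg_ratings.items()}

bases = distribution_bases({"expert_a": average_rating([5, 4, 5]),
                            "expert_b": average_rating([3, 4, 3])})
```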
As for aggregation: the Experts rate a project's individual aspects on a 1-to-5 scale. These scores can be aggregated, and the resulting distribution can be shown in a separate widget.
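A sketch of what that aggregation could look like, again with illustrative names only (the WP does not prescribe this exact shape):

```python
from collections import Counter

def score_distribution(scores: list[int]) -> dict[int, float]:
    """Fraction of Experts who gave each score from 1 to 5 for one aspect."""
    counts = Counter(scores)
    total = len(scores)
    return {score: counts.get(score, 0) / total for score in range(1, 6)}

# e.g. all Experts' scores for one aspect of one project
print(score_distribution([5, 4, 4, 3, 5, 2]))
# {1: 0.0, 2: 0.1666..., 3: 0.1666..., 4: 0.3333..., 5: 0.3333...}
```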
By the way, how are you going to eradicate plagiarism in Widgets and Experts' evaluations? It seems there are going to be lots of widgets and evaluations ranked by rating, so what will stop people from copying (even partly) the ideas of newcomers who haven't been rated highly yet?
There will be automatic plagiarism detection as well as moderators - it would be pretty hard to bypass all of those measures :)
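The answer doesn't say which detection technique will be used, so the following is only a common baseline for illustration, not the project's actual method: word-shingle Jaccard similarity with a made-up threshold.

```python
# Illustrative baseline only; the real detector and threshold are not specified.

def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Set of overlapping k-word shingles (word n-grams) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of two texts' shingle sets (0 = disjoint, 1 = identical)."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

SIMILARITY_THRESHOLD = 0.6  # made-up cutoff, not from the WP

def looks_plagiarised(new_text: str, existing: list[str]) -> bool:
    """Flag a new evaluation that is too similar to any existing one."""
    return any(jaccard(new_text, old) >= SIMILARITY_THRESHOLD for old in existing)
```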