Hello, everyone!
I really like the developers' idea and the way they are turning it into reality, but I'd like you to clarify a few points. You stated that there will be a consensus forecast; in what form will it be presented? As I understand it, each expert will form their own opinion with their own arguments, so how does that become a consensus? Is it simply a statistical summary of your platform's experts' recommendations on whether or not to invest in a particular ICO?
By the way, I am very interested in your evaluation using machine learning. Getting information about the market's mood in the form your widget provides is useful, but will it be effective? It is not hard to find comments that are not really related to the ICO's topic, or that do not fit the labels "positive" or "negative"; moreover, you only use comments from Bitcointalk. Please tell us more about how machine learning will be applied.
Besides, I am concerned about the experts' rating system. As the whitepaper says, experts are paid based on their results over a 3-month activity period. How do you see the process of subscribers rating experts, then? For example, an expert writes a review of some ICO, perhaps suggesting that its coin's price will grow over the next 4 months; a subscriber makes an investment decision, and after 4 months goes back to the review and rates the expert. In that particular case, how would the expert be rated and paid? The expert could have had different effectiveness 4 months ago than now, so is their rating (and payment) calculated from ratings given for their current activity, or from "marks" given in the present for past work? Also, would it be possible to see an expert's previous activity and ratings?
Additionally, as a potential investor I'd like to know more about the allocation of funds, especially the community development fund, which includes education programs and learning materials. You are going to spend $2,300,000 on it, so it would be great if you could be more specific about it.
And the last question: will Dolphin provide some criteria or benchmarks, roughly speaking, for how experts should work and how subscribers should rate them? Just to make sure that the paid platform will not turn into an ordinary forum with tons of disorganized and vaguely useful information.
Thank you very much in advance for paying attention to my concerns!
Wow, thanks for looking at the project so thoroughly!
The part about the consensus is really interesting. It is not a consensus on whether this or that particular ICO is good, but rather a consensus on how ICO analysis should be structured. We hope that as people analyze ICOs, they will collectively agree on a framework of ICO analysis that can answer questions like: "Which aspects matter most in deciding whether an ICO is trustworthy?" or "What does a 'perfect' ICO, in terms of trustworthiness and completeness of information, look like?" If such a framework comes into existence, it will also impose restrictions on the ICOs: they will have to adhere to the "code of conduct" described in the framework to be taken seriously.
This is a big dream, but we hope that our platform will, if not bring it about, at least bring it closer to reality. Such a framework would bring structure to the ICO market, which would benefit everyone.
Now, let's talk sentiment analysis. It is currently susceptible to the problems you describe, but I'm actively working on overcoming them. Right now I'm finalizing a new implementation of the model based on Recurrent Neural Networks, which should improve the overall quality of sentiment detection. I've included some tweaks that help the model recognize posts unrelated to the ICO itself as neutral, which reduces the first problem you described. There is also a problem with replies to posts: for example, if a poster says something negative about the project and a reply calls them "an uneducated dummy", the algorithm will label the reply as negative (because of the negative lexicon), even though it is clearly positive towards the project. I have an idea for overcoming this, but I'll need some time to implement it. As for social media besides BTT, don't worry, we are working on it!
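To give a rough idea of the approach, here is a minimal sketch of a three-class sentiment classifier with an explicit neutral class for off-topic posts. This is illustrative only, assuming PyTorch; the layer sizes and names are assumptions for the example, not our actual production model.

```python
# Minimal sketch of a three-class (negative / neutral / positive) sentiment
# classifier; illustrative only, not the production model.
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional LSTM reads the post in both directions
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        # Three outputs: negative, neutral (incl. off-topic posts), positive
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, seq, embed)
        _, (hidden, _) = self.rnn(embedded)        # hidden: (2, batch, hid)
        # Concatenate the final forward and backward hidden states
        features = torch.cat([hidden[0], hidden[1]], dim=1)
        return self.classifier(features)           # (batch, 3) logits

# Posts unrelated to the ICO are trained toward the "neutral" class, so
# off-topic chatter does not skew the positive/negative balance.
model = SentimentRNN(vocab_size=30000)
logits = model(torch.randint(1, 30000, (4, 50)))   # batch of 4 tokenized posts
sentiment = logits.argmax(dim=1)                   # 0=neg, 1=neutral, 2=pos
```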
It is indeed possible to see an expert's previous activity (though it depends on the widget design, we will include this in our basic Expert widget). The 3-month aggregation period was chosen with the following logic: the expert should be able to go on vacation without losing their reputation, but prolonged inactivity should be punished. As for rating an expert's previous posts, we decided that the expert should only be rated in the current month, because it simplifies the rating system and makes it more robust to attacks. However, the situation you describe can arise. One way for the Subscriber to handle it is to rate the expert in the current month for their previous activity. That is inelegant, though, and we will think of a better solution. Thank you for raising this point.
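For illustration, here is a tiny sketch of how a trailing rating window like ours could behave. The 90-day window and plain averaging are assumptions made for the example, not the exact formula from the whitepaper.

```python
# Sketch of a rolling rating window: recent ratings count, old ones expire.
from datetime import date, timedelta
from statistics import mean

def expert_score(ratings, today=None, window_days=90):
    """ratings: iterable of (date_given, stars) pairs from Subscribers.
    Only ratings inside the trailing window count, so a short vacation
    keeps the reputation alive, while prolonged inactivity leaves the
    window empty."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    recent = [stars for given, stars in ratings if given >= cutoff]
    return mean(recent) if recent else None  # None = no recent activity

# Example: the two fresh ratings count, the year-old one has expired.
history = [(date(2018, 1, 10), 5), (date(2018, 1, 3), 4), (date(2017, 2, 1), 1)]
print(expert_score(history, today=date(2018, 1, 15)))  # -> 4.5
```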
The main purpose of the community development fund is to attract early Experts and Authors and to fill the platform with content. The money will be used to subsidize new Experts and Authors, as well as to attract well-known and respected industry Experts to post on the platform. I think my peers will be able to tell you more about the precise sums, but this is the general idea.
As for review structure, it depends on the widget design. The way we see our basic Expert widget is this: the Expert scores the main aspects of the project (idea, team, etc.) from 1 to 5 and gives a brief summary for each criterion, explaining why they have given this or that score. But anybody may create another Expert widget with an entirely different review structure.
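Roughly, a review in the basic widget could be represented like this. The field names below are illustrative assumptions, not the final widget schema.

```python
# Rough sketch of a basic Expert review: per-aspect 1-5 scores with summaries.
from dataclasses import dataclass, field

@dataclass
class AspectScore:
    name: str      # e.g. "idea", "team"
    score: int     # 1 to 5
    summary: str   # brief justification for the score

    def __post_init__(self):
        if not 1 <= self.score <= 5:
            raise ValueError("score must be between 1 and 5")

@dataclass
class ExpertReview:
    ico_name: str
    expert_id: str
    aspects: list = field(default_factory=list)

    @property
    def overall(self):
        # Simple average across aspects; a custom widget could weight them.
        return sum(a.score for a in self.aspects) / len(self.aspects)

review = ExpertReview(
    ico_name="ExampleICO", expert_id="expert-42",
    aspects=[AspectScore("idea", 4, "Novel approach to market analysis."),
             AspectScore("team", 3, "Solid, but little blockchain experience.")])
print(review.overall)  # -> 3.5
```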