Author

Topic: [ANN][PRESALE]⭐ REPU ⭐ - Smart reputation management - Pre-ICO Soon🚀🚀 - page 111. (Read 19289 times)

full member
Activity: 518
Merit: 101
Interesting idea.  How does it compare to Evernym?
These are slightly different projects: Evernym puts the emphasis on a person's identity, while REPU focuses on their reputation. I agree the projects' concepts are similar, but comparing them directly is not entirely correct.
Perhaps there is a chance they will cooperate and it will turn into a good partnership? Such things have happened before when projects have similar concepts.
full member
Activity: 252
Merit: 100
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.


Yes, there is AI filtration and manual filtration.
AI comes first, followed by manual filtration.
The quality of this platform will depend on these things.

Quote
We have several stages of review verification:
- First, we use AI machine moderation that analyses more than 50 parameters, such as tone analysis, IP, geolocation, review frequency, etc.
- Some reviews are automatically declined by the system; the rest go through manual filtration.
- The second important part is the user's rating in the system, which consists of the overall score, the age of the account, and the number of ratings given to others. If a user has a low rating or is a newbie in the system, more filters are applied to his/her ratings.
- The third part is that we limit the number of ratings one account can post per day/week/month. It depends on the user's rating.

During pre-alpha testing, all of these factors showed 95% fraud detection. But this is only pre-alpha and we are still working on the algorithms Smiley

Yes, that sounds about right. They seem to have covered all the bases. I'm glad they take this so seriously, because a reputation system can make or break a person.
I like how they plan to handle the problem of fake ratings; 95% is a good detection rate for alpha testing.
Oh, so there's a 95% chance that they will detect and fix a fake rating? If so, and if they keep improving that result, they would be able to almost entirely eliminate faking.

95% is amazingly high if it is already true. It's impossible to reach 100%, but it will still be interesting to see whether the 95% holds up in real-world situations.
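
To make the staging the team describes above a bit more concrete, here is a rough Python sketch of how an "AI score first, then auto-decline / manual queue / approve" flow could be wired together, with stricter thresholds for new or low-rated accounts. To be clear, the weights, thresholds, and field names are my own assumptions for illustration, not the team's actual code or parameters.

Code:
from dataclasses import dataclass

@dataclass
class Review:
    text_tone: float       # 0.0 (neutral) .. 1.0 (hostile), from a tone analyzer
    ip_risk: float         # 0.0 .. 1.0, e.g. known proxy/VPN ranges score higher
    daily_count: int       # how many reviews this account has posted today
    author_rating: float   # overall rating of the posting account, 0.0 .. 5.0
    author_age_days: int   # age of the posting account in days

def fraud_score(r: Review) -> float:
    # Toy stand-in for the AI stage: fold a few of the "50+ parameters"
    # into one 0..1 fraud estimate. The weights here are made up.
    score = 0.4 * r.text_tone + 0.4 * r.ip_risk
    score += 0.2 * min(r.daily_count / 10, 1.0)
    return min(score, 1.0)

def moderate(r: Review) -> str:
    # New or low-rated accounts get stricter (lower) thresholds, mirroring
    # the "more filters for newbies" idea from the quote.
    strict = r.author_rating < 2.0 or r.author_age_days < 30
    decline_at = 0.6 if strict else 0.8
    manual_at = 0.3 if strict else 0.5

    s = fraud_score(r)
    if s >= decline_at:
        return "auto-declined"
    if s >= manual_at:
        return "manual review"
    return "approved"

# A three-day-old account posting its 7th review today from a risky IP:
print(moderate(Review(text_tone=0.2, ip_risk=0.7, daily_count=7,
                      author_rating=1.5, author_age_days=3)))  # -> manual review

In a setup like this, the manual queue only ever sees the middle band of scores, which would keep the human workload limited to the ambiguous cases.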
full member
Activity: 420
Merit: 136
Interesting idea.  How does it compare to Evernym?
These are slightly different projects: Evernym puts the emphasis on a person's identity, while REPU focuses on their reputation. I agree the projects' concepts are similar, but comparing them directly is not entirely correct.
full member
Activity: 938
Merit: 159
The one-pager of REPU ICO: https://repu.io/onepager_eng.pdf

I think from now on every ICO should do this.

But please update it with info on the token price and token sale dates.
A one-pager is very helpful for understanding what an ICO is about. Many projects have this.


That's so cool. Yeah, every project should do that. It makes for a perfect first impression.
You are right: a page with all the essential information about the project, such as the tokens that will be generated, the team, etc., a page that is easy to read and understand. Every ICO should make a similar one.
newbie
Activity: 98
Merit: 0
The one-pager of REPU ICO: https://repu.io/onepager_eng.pdf

I think from now on every ICO should do this.

But please update it with info on the token price and token sale dates.
A one-pager is very helpful for understanding what an ICO is about. Many projects have this.


That's so cool. Yeah, every project should do that. It makes for a perfect first impression.

What a good idea! Everything is completely clear.
jr. member
Activity: 42
Merit: 2
The one-pager of REPU ICO: https://repu.io/onepager_eng.pdf

I think from now on every ICO should do this.

But please update it with info on the token price and token sale dates.
A one-pager is very helpful for understanding what an ICO is about. Many projects have this.


That's so cool. Yeah, every project should do that. It makes for a perfect first impression.
full member
Activity: 392
Merit: 100
The one-pager of REPU ICO: https://repu.io/onepager_eng.pdf

I think from now on every ICO should do this.

But please update it with info on the token price and token sale dates.
A one-pager is very helpful for understanding what an ICO is about. Many projects have this.
full member
Activity: 602
Merit: 110
Interesting. This project has already scored decently, and there are prospects for growth. Very similar to Venezuela's El Petro project. I am thinking of investing, for the benefit of the project and myself.
full member
Activity: 224
Merit: 100
The one-pager of REPU ICO: https://repu.io/onepager_eng.pdf

I think from now on every ICO should do this.

But please update it with info on the token price and token sale dates.
member
Activity: 434
Merit: 10
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.


Yes, there is AI filtration and manual filtration.
AI comes first, followed by manual filtration.
The quality of this platform will depend on these things.

Quote
We have several stages of review verification:
- First, we use AI machine moderation that analyses more than 50 parameters, such as tone analysis, IP, geolocation, review frequency, etc.
- Some reviews are automatically declined by the system; the rest go through manual filtration.
- The second important part is the user's rating in the system, which consists of the overall score, the age of the account, and the number of ratings given to others. If a user has a low rating or is a newbie in the system, more filters are applied to his/her ratings.
- The third part is that we limit the number of ratings one account can post per day/week/month. It depends on the user's rating.

During pre-alpha testing, all of these factors showed 95% fraud detection. But this is only pre-alpha and we are still working on the algorithms Smiley

Yes, that sounds about right. They seem to have covered all the bases. I'm glad they take this so seriously, because a reputation system can make or break a person.
I like how they plan to handle the problem of fake ratings; 95% is a good detection rate for alpha testing.
Oh, so there's a 95% chance that they will detect and fix a fake rating? If so, and if they keep improving that result, they would be able to almost entirely eliminate faking.
member
Activity: 154
Merit: 10
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.


Yes, there is AI filtration and manual filtration.
AI comes first, followed by manual filtration.
The quality of this platform will depend on these things.

Quote
We have several stages of review verification:
- First, we use AI machine moderation that analyses more than 50 parameters, such as tone analysis, IP, geolocation, review frequency, etc.
- Some reviews are automatically declined by the system; the rest go through manual filtration.
- The second important part is the user's rating in the system, which consists of the overall score, the age of the account, and the number of ratings given to others. If a user has a low rating or is a newbie in the system, more filters are applied to his/her ratings.
- The third part is that we limit the number of ratings one account can post per day/week/month. It depends on the user's rating.

During pre-alpha testing, all of these factors showed 95% fraud detection. But this is only pre-alpha and we are still working on the algorithms Smiley

Yes, that sounds about right. They seem to have covered all the bases. I'm glad they take this so seriously, because a reputation system can make or break a person.
I like how they plan to handle the problem of fake ratings; 95% is a good detection rate for alpha testing.
full member
Activity: 588
Merit: 100
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.


Yes, there is AI filtration and manual filtration.
AI comes first, followed by manual filtration.
The quality of this platform will depend on these things.

Quote
We have several stages of review verification:
- First, we use AI machine moderation that analyses more than 50 parameters, such as tone analysis, IP, geolocation, review frequency, etc.
- Some reviews are automatically declined by the system; the rest go through manual filtration.
- The second important part is the user's rating in the system, which consists of the overall score, the age of the account, and the number of ratings given to others. If a user has a low rating or is a newbie in the system, more filters are applied to his/her ratings.
- The third part is that we limit the number of ratings one account can post per day/week/month. It depends on the user's rating.

During pre-alpha testing, all of these factors showed 95% fraud detection. But this is only pre-alpha and we are still working on the algorithms Smiley

Yes, that sounds about right. They seem to have covered all the bases. I'm glad they take this so seriously, because a reputation system can make or break a person.
hero member
Activity: 1456
Merit: 567
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.


Yes, there is AI filtration and manual filtration.
AI comes first, followed by manual filtration.
The quality of this platform will depend on these things.

Quote
We have several stages of review verification:
- First, we use AI machine moderation that analyses more than 50 parameters, such as tone analysis, IP, geolocation, review frequency, etc.
- Some reviews are automatically declined by the system; the rest go through manual filtration.
- The second important part is the user's rating in the system, which consists of the overall score, the age of the account, and the number of ratings given to others. If a user has a low rating or is a newbie in the system, more filters are applied to his/her ratings.
- The third part is that we limit the number of ratings one account can post per day/week/month. It depends on the user's rating.

During pre-alpha testing, all of these factors showed 95% fraud detection. But this is only pre-alpha and we are still working on the algorithms Smiley
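
On the third point above (limiting how many ratings one account can post per day/week/month depending on its rating), here is a minimal sketch of what such a tiered limit could look like. The tier boundaries and daily quotas below are invented for illustration; the team has not published the actual numbers.

Code:
import time
from collections import defaultdict, deque

# Hypothetical per-day quotas by rating tier; the real limits are not published.
DAILY_QUOTA = {"newbie": 2, "regular": 10, "trusted": 30}

def tier(user_rating: float, account_age_days: int) -> str:
    if account_age_days < 30 or user_rating < 2.0:
        return "newbie"
    return "trusted" if user_rating >= 4.0 else "regular"

class RatingLimiter:
    """Sliding 24-hour window of rating timestamps per account."""
    def __init__(self) -> None:
        self.history = defaultdict(deque)  # account_id -> timestamps of recent ratings

    def allow(self, account_id: str, user_rating: float, account_age_days: int) -> bool:
        now = time.time()
        window = self.history[account_id]
        # Drop timestamps older than 24 hours.
        while window and now - window[0] > 86_400:
            window.popleft()
        if len(window) >= DAILY_QUOTA[tier(user_rating, account_age_days)]:
            return False  # over quota: reject or defer this rating
        window.append(now)
        return True

# A week-old account with a low rating only gets the "newbie" quota of 2 per day:
limiter = RatingLimiter()
print([limiter.allow("acct-1", user_rating=1.0, account_age_days=7) for _ in range(3)])
# -> [True, True, False]

A sliding window like this is just one possible implementation; the quote only says the limit depends on the user's rating, not how it is enforced.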
full member
Activity: 616
Merit: 145
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.

Manual checks will be a very tedious thing to do. As the number of users increases, I can't imagine how they will handle such manual checks.

In the beginning I believe they will have to do manual checks for some cases, but apart from AI, ML will also be used, so with time and a growing number of users it will all become automated.

They will do manual checks only if the AI check fails. In fact, I think the AI they put in place will be one of the most important parts of this project.
sr. member
Activity: 798
Merit: 262
I guess the devs should be more active here before it becomes a cold ANN thread Smiley

There are a couple of questions only the devs can answer ><"

The devs could hire a community manager to handle the community; it would be better to keep communicating with us here.

It has already become a cold topic. A community manager would have been nice.
sr. member
Activity: 1792
Merit: 293
I guess the devs should be more active here before it becomes a cold ANN thread Smiley

There are a couple of questions only the devs can answer ><"

The devs could hire a community manager to handle the community; it would be better to keep communicating with us here.
full member
Activity: 798
Merit: 115
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.

Manual checks will be a very tedious thing to do. As the number of users increases, I can't imagine how they will handle such manual checks.

In the beginning I believe they will have to do manual checks for some cases, but apart from AI, ML will also be used, so with time and a growing number of users it will all become automated.
sr. member
Activity: 1022
Merit: 252
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.

Manual checks will be a very tedious thing to do. As the number of users increases, I can't imagine how they will handle such manual checks.
full member
Activity: 462
Merit: 100
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.

I assume this is not just a task for before going live but a process that is needed the whole time. They will have to work on better and better methods over time to keep the quality. People are always finding ways to avoid controls.
There will always be problems with users who cheat, but the important thing is to have a system that works well. Over time, this type of user will be detected and dealt with.
full member
Activity: 336
Merit: 112
The worst part about these popular topics here is finding the answer to the question you asked a day or two ago. Any word from the team about possible smear campaigns, alternate accounts boosting reputation, and the all-around abuse of the system?


I would appreciate a detailed and structured summarizing statement from the devs; I agree with your opinion.

The devs have stated that they will use AI and manual checks of users and ratings in order to prevent this kind of abuse. However, your concerns and questions are valid. This will probably be their biggest task to solve before going live.

I assume this is not just a task for before going live but a process that is needed the whole time. They will have to work on better and better methods over time to keep the quality. People are always finding ways to avoid controls.