We selected the NEO platform for several reasons.
First, the barrier to entry: the fee of 500 GAS (490). This reduces the number of associated scam coins.
Regulation is approached in a practical way.
The community is amazing; there is a lot of support from NEO community members.
The current technology suits our needs as a utility token, and we feel we won't need to port away from NEO in the future.
Currently, sending tokens on the network is free for users.
The app is free to use at this point and serves as a proof of concept: incentivising users to contribute better data so our algorithm can improve machine translation.
The web app is currently live for testing:
www.telegram.translateme.chat

The point about 6,500 languages is that at the moment translation providers support only a very small number of languages, less than 1% of them. We are saying that any culture can contribute the data needed to start building machine translation that could in future serve niche markets, such as small language groups not supported by other providers.
Great to hear that! Introducing a new app should be free in its initial stage.
Once people are satisfied with your services, then you can think of your next step to further your development.
But how can you assure the reliability of those 6,500 languages? Is there a group that will validate the accuracy of the translations?
Building a language platform that serves many languages will take time. Our advantage is that we will use the incentive model to build the data required to rapidly expand our language offering. Our platform will allow contributors to initiate communities around language groups, which becomes the starting point for a supported language. Once a language reaches a quality score acceptable for live translation, we will activate it, and from there it will continue to improve as more users correct the machine translation. Growth will follow demand: we will focus on niche markets Google doesn't support very well, with unique translation solutions for specialised language groups and domains.
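Purely as an illustration, here is a minimal Python sketch of that activation gate. The threshold value, the field names, and the moving-average aggregation are all assumptions for the sake of the example, not our published implementation:

```python
ACTIVATION_THRESHOLD = 0.8  # hypothetical minimum quality score

class LanguageCommunity:
    """A contributor-initiated language group working toward activation."""
    def __init__(self, code):
        self.code = code           # e.g. "sw" for Swahili (illustrative)
        self.quality_score = 0.0   # running quality of validated data
        self.active = False        # live translation enabled?

    def record_validated_submission(self, score):
        """Fold one validated contribution (scored 0..1) into the running score."""
        # Exponential moving average; the real aggregation is unspecified.
        self.quality_score = 0.9 * self.quality_score + 0.1 * score
        if not self.active and self.quality_score >= ACTIVATION_THRESHOLD:
            self.active = True     # language goes live once quality is acceptable
```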
Validation works very well with machine learning; we will use a user rating as a variable (see the sketch after this list).
Example variables:
Age of the user's account (time since activation)
Participation score
Score of the user's translations (on our translation market platform: https://www.youtube.com/watch?v=kZU2ZmYgnDk)
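As a rough illustration of how those variables could combine into a single contributor weight (the weights, and the assumption that the two scores are pre-normalised to 0..1, are ours for the example; the production formula is not published):

```python
from datetime import datetime, timezone

def user_trust(activated_on, participation_score, translation_score):
    """Combine the rating variables into a 0..1 trust weight.

    activated_on is a timezone-aware datetime; participation_score and
    translation_score are assumed to be pre-normalised to 0..1.
    """
    age_days = (datetime.now(timezone.utc) - activated_on).days
    age_factor = min(age_days / 365.0, 1.0)  # saturates after one year
    # Illustrative weights only: older, more active, better-rated users count more.
    return 0.2 * age_factor + 0.3 * participation_score + 0.5 * translation_score
```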
Then we will use machine learning to validate submissions:
Our algorithm looks for duplicated data submitted by different users. It works by comparing submitted sentences that share similar associated text. By taking two or more users' submissions and comparing them to find a common consensus on a particular suggestion, we validate the associated words, and the score of that submission is increased so it can be used in future translation. If the new output is not corrected further, that validates the data further. The algorithm takes the originally submitted English and uses it as a marker; identifying similar words in similar sentences allows us to validate a translation in parts, based on the original sentence.
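A minimal sketch of that duplicate-consensus check, assuming corrections are already grouped under their original English marker and that agreement from two distinct users is enough. The real matching would be fuzzier than exact string comparison:

```python
def consensus(corrections, min_agreement=2):
    """corrections: list of (user_id, corrected_text) for one source sentence.

    Returns a correction that min_agreement distinct users agree on, else None.
    """
    votes = {}  # normalised text -> set of distinct users who submitted it
    for user_id, text in corrections:
        votes.setdefault(text.strip(), set()).add(user_id)
    for text, users in votes.items():
        if len(users) >= min_agreement:  # duplicated by separate users
            return text                  # promote as the validated version
    return None
```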
This means that if a user submits complete rubbish for a sentence that has been translated, no other user will duplicate it, so it is never validated. Here is an example from our Telegram app:
User 1. Score: ****
🇷🇺 "However we do bring value to translators by empowering them to be paid directly."
Sentence 1: Ах, да.
Sentence 2: Я и не подумал об этом.
Sentence 3: Действительно, посредники между переводчиками и заказчиками сшибают большой куш обычно.
Sentence 4: Спасибо за ответ, это хорошо мотивирует, чтобы вносить вклад сюда )
[ TranslateMe Network ]
🇬🇧 "However, we have to pay directly to them."
Sentence 1: Oh yes.
Sentence 2: I did not think about it.
Sentence 3: Indeed, intermediaries between translators and customers often hit the big score.
Sentence 4: Thanks for the answer, it is well motivated to contribute here)
The sentences are separated, then marked as translation version 1, where they wait for corrections. When a correction is submitted and its combination of words contains some of the currently marked sentence, the algorithm groups the submission under that marked original. If enough duplicate data arrives from separate users, the original translation is replaced by an updated version based on the suggestions; this becomes version 2 of that string of words. The new version then goes back into translation, and if further adjustments are submitted, this time against the updated machine output, those suggestions become the next building blocks for improving the string. Once enough duplicate data is received, it is validated again, and the process repeats until the string receives no further changes.
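Putting that version-progression loop into a short sketch (the promotion rule and version numbering are illustrative assumptions, not the published algorithm; the consensus helper repeats the earlier sketch so this block stands alone):

```python
def consensus(corrections, min_agreement=2):
    """Return a correction submitted by >= min_agreement distinct users, else None."""
    votes = {}
    for user_id, text in corrections:
        votes.setdefault(text.strip(), set()).add(user_id)
    for text, users in votes.items():
        if len(users) >= min_agreement:
            return text
    return None

class TranslationString:
    """One source sentence and the evolving versions of its translation."""
    def __init__(self, source, machine_output):
        self.source = source               # the original English marker
        self.versions = [machine_output]   # versions[0] is machine version 1
        self.pending = []                  # corrections since the last promotion

    @property
    def current(self):
        return self.versions[-1]           # translation currently served

    def submit_correction(self, user_id, text):
        """Record a user correction; promote a new version on consensus."""
        self.pending.append((user_id, text))
        agreed = consensus(self.pending)
        if agreed is not None:
            self.versions.append(agreed)   # becomes version 2, 3, ...
            self.pending = []              # later corrections target the new version
```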
You should join our Telegram for more info; we can explain the process further there.