You are contradicting yourself here. Someone with the skill set to create a model that can write semi-coherent posts, and actually post with a bitcointalk account, is not going to waste their time farming accounts. Their time will simply be too valuable to earn coin this way.
Firstly, what I'm describing isn't as involved as what you might have in your head: you're just leveraging an already-existing GAN and consuming its output. The implementation of the tool would amount to calling the API and taking the text. Secondly, I'm not sure exactly how much you can gain from account farming, but presumably it's enough for some people to do it manually.
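To make the point concrete, the "tool" would be little more than a thin wrapper around a hosted text-generation API. This is a minimal sketch, assuming a hypothetical endpoint URL, payload schema, and response shape (none of these correspond to a real service):

```python
import json
import urllib.request

# Hypothetical endpoint of a hosted generator model (assumption for illustration).
API_URL = "https://example.com/v1/generate"

def build_payload(prompt, max_tokens=200):
    """Assemble the JSON body the hosted model is assumed to expect."""
    return {"prompt": prompt, "max_tokens": max_tokens}

def extract_text(response_body):
    """Pull the generated post text out of the (assumed) response JSON."""
    return json.loads(response_body)["text"]

def generate_post(prompt):
    """Send the prompt to the API and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_text(resp.read())
```

The entire "skill" involved is assembling a request and reading a field out of the response, which is the scale of effort I mean.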
I am not aware of any publicly available GANs (specifically generator models) that are pretrained and ready to generate text content (there are a few websites that will use a pretrained generator model to generate an image).
Most of the GAN architectures I am aware of involve images, and a few involve generating audio. I have built generative models involving text before, but they were not GANs, and they could not come close to producing the two posts we are discussing (they would look at an image and generate a caption for it).
Even if one were to implement an existing GAN architecture on a text dataset to create their own model for generating posts, they would need skills whose value far outweighs the possible income from farming accounts. You can estimate how much you can earn from farming accounts by reviewing the pay rates of signature campaigns and estimating how many accounts could be farmed at once. I would question whether someone could earn anything at all from farming accounts using AI, because I am not sure they would earn enough merit to rank up to even Junior Member.
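The estimate above is simple arithmetic: weekly campaign pay per qualifying account times the number of accounts one person could plausibly run. The figures below are made-up assumptions purely for illustration, not actual campaign rates:

```python
def weekly_farming_income_sats(pay_per_account_sats, num_accounts):
    """Upper-bound weekly income: per-account campaign pay times account count.

    Ignores the merit requirement entirely, which is the real bottleneck --
    an account that never ranks up earns nothing from most campaigns.
    """
    return pay_per_account_sats * num_accounts

# Illustrative assumption: 50,000 sats/week per account across 20 accounts.
print(weekly_farming_income_sats(50_000, 20))  # 1000000 sats/week, at best
```

Even under generous assumptions the ceiling is modest, and the merit gate means the realistic figure may well be zero.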
I don't think this is a case of the user not wanting to do much work; I think this is a case of the user wanting their posts to be 100% automated, with no human intervention. -snip- Hence an attacker would want to try their working model somewhere with robust measures to detect plagiarism.
Sure, this is a thought, but I would never expect Bitcointalk to be the place to do so. Have you seen our moderation? I mean... really.
I feel like someone using BCT to limit-test their model would have a much higher success rate here than elsewhere - although the rules and all are pretty anti-spam, it doesn't look much like it when you actually browse the place.
The type of content we allow here is very broad, we do not allow plagiarism, and we are under constant spam attacks because we allow our content to be indexed by search engines (which is not the case for many social media platforms). We also require posts to be coherent and replies to be related to the OP of a thread, neither of which applies to most other social media platforms. Someone posting incoherent nonsense here would probably get banned, while someone tweeting the same thing would simply be ignored by anyone who reads their tweets on Twitter.