I think mods are in quite a difficult situation when they evaluate AI reports. First things first, there's no rule that prohibits the use of AI, so reporting a post and citing "AI" as the reason doesn't satisfy the conditions for deletion. A post would only be removed if it's completely meaningless, but that's because being "completely meaningless" is against the rules. So if someone posts accurate AI responses, I'm not entirely sure what a mod should do.
Pretty sure I'm up to over 100 "good" reports where I put "AI spam" in the report description along with a link to the post in this thread that details the detector results. But yes, simply being AI-generated isn't enough to get a post deleted. I find that threads that open with an AI post aren't necessarily deleted, especially if it's an advertisement for a new coin, token, or service. A post can contain "accurate responses" and still be spam at the same time.
Should there be more mobilization of users in that regard? Perhaps start giving neg trust to those abusing AI to make posts... Or do we just not care about it anymore?
If it's totally apparent, then yes. Any account that is essentially run by a chatbot while being presented as a human cannot be trusted.
I've been neutral-tagging users for it -- I don't think a red tag is appropriate, as it doesn't necessarily have anything to do with trade trustworthiness. If others want to red-tag offending accounts for it, I'm not gonna try to stop them.