
Topic: This forum will need explicit rules on the use of AI. - page 2. (Read 764 times)

staff
Activity: 1316
Merit: 1610
The Naija & BSFL Sherrif 📛
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.

Alright. I'll see if I can contract Aaron Schwarzenegger to build me one last Terminator for this purpose.

On a more serious note, campaign managers should exclude AI-generated posts from the payroll as soon as possible; otherwise this feature will be abused, as ChatGPT sometimes gives incorrect answers.

Campaign managers: Use this tool to detect AI-generated text: https://huggingface.co/openai-detector. I recommend considering anything with a score of 90%+ as AI-generated.

The GPT-2 detector you provided only works on ChatGPT-generated posts, and the site's predictions are inaccurate: I tried it twice with original local Naija posts and received two different predictions. ChatGPT is not the only AI platform people use to generate forum posts, so https://huggingface.co/openai-detector is unreliable to rely on. We would only be driving traffic to the site to make them more money, nothing more.
hero member
Activity: 1680
Merit: 845
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.
I agree. Whether we like it or not, AI content is not necessarily plagiarized. As long as we don't have duplicate content, which might occur if a large number of users put the same question, such as "Should I invest in Bitcoin?", to the same platform, it doesn't do much harm to the forum if used in moderation. Imagine 90% of users being practically bots; that would be disastrous.

Campaign managers should prohibit the usage of AI, which would actually discourage users from using AI in their posts.
legendary
Activity: 1526
Merit: 1359
Question is, where does the AI get its ideas/content from?
If we see someone praising bitcoin and LN, only to later find out it was AI, whose ideas were posted?

A forum's community needs genuine discussions. What if I used the AI but didn't really agree with the results, yet posted them anyway to either farm merits or earn easy sig money? The spirit of humanity will be lost if that happens.

If people are just pretending to agree or disagree with something they read by copying the replies of an AI, they are basically just faking it until they make it. And that is not exactly the best way to build a genuine community, is it? It is like trying to win a game of poker with a deck of all jokers. Sure, you might get lucky and fool someone for a while, but eventually, your bluff will be called. So let us all agree to just be real with each other and let us celebrate and embrace our individual differences, and create a space where we can all feel valued and special, like a royal flush!
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
After some edits to punctuation, I got a result of 99.58% real.

Substantially edited AI-generated text can no longer be classified as such; it reads as written by a human. Thus, managers should choose a confidence threshold that withstands "lazy adjustments" to punctuation and typos made to try to drop the score.

Of course, if you see a post full of typos, then it's not AI-generated, and in most campaigns the person writing such posts would be considered incompetent and thrown out.
hero member
Activity: 952
Merit: 662
Campaign managers: Use this tool to detect AI-generated text: https://huggingface.co/openai-detector. I recommend considering anything with a score of 90%+ as AI-generated.
It's useful, but it's not really a perfect tool for detecting AI-generated text; see below:

First I got a result of 0.80% real.
I am sure good campaign managers who care about quality will, but I am not so sure about campaigns like 1xBit and maybe some others.
Why do we need to care about the 1xBit scam manager and other managers who get paid in worthless tokens? They're rubbish on this forum, and there's no way to control them except to create self-moderated threads to delete their low-quality posts, or just report them to moderators.
legendary
Activity: 1358
Merit: 1565
The first decentralized crypto betting platform
On a more serious note, campaign managers should exclude AI-generated posts from the payroll as soon as possible; otherwise this feature will be abused, as ChatGPT sometimes gives incorrect answers.

Campaign managers: Use this tool to detect AI-generated text: https://huggingface.co/openai-detector

I am sure good campaign managers who care about quality will, but I am not so sure about campaigns like 1xBit and maybe some others.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.

Alright. I'll see if I can contract Aaron Schwarzenegger to build me one last Terminator for this purpose.

On a more serious note, campaign managers should exclude AI-generated posts from the payroll as soon as possible; otherwise this feature will be abused, as ChatGPT sometimes gives incorrect answers.

Campaign managers: Use this tool to detect AI-generated text: https://huggingface.co/openai-detector. I recommend considering anything with a score of 90%+ as AI-generated.
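
For campaign managers who would rather screen many posts at once than paste them into the web demo one by one, the check can be scripted. The sketch below is only a rough illustration: it assumes the RoBERTa-based detector behind https://huggingface.co/openai-detector is published on the Hugging Face Hub as roberta-base-openai-detector, that the transformers library is installed, and that the model reports "Real"/"Fake" labels as described on its model card (all of which should be verified). The 90% cut-off mirrors the recommendation above, and the variable names are hypothetical.

Code:
from transformers import pipeline

# Load the detector; the model name and label names are assumptions to check
# against the current Hugging Face model card before relying on them.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

def looks_ai_generated(post_text, threshold=0.90):
    # The pipeline returns the top label with its probability,
    # e.g. {"label": "Fake", "score": 0.97}; treat "Fake" as "AI-generated".
    result = detector(post_text, truncation=True)[0]
    fake_score = result["score"] if result["label"] == "Fake" else 1.0 - result["score"]
    return fake_score >= threshold

# Hypothetical usage: flag posts at or above the 90% cut-off for manual review.
posts = ["Example campaign post text goes here."]
flagged = [p for p in posts if looks_ai_generated(p)]
print(f"{len(flagged)} of {len(posts)} posts scored as likely AI-generated")

As noted elsewhere in the thread, scores move around a lot with small edits, so any such script is only a first-pass filter, not proof.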
copper member
Activity: 1330
Merit: 899
🖤😏
Question is, where does the AI get its ideas/content from?
If we see someone praising bitcoin and LN, only to later find out it was AI, whose ideas were posted?

A forum's community needs genuine discussions. What if I used the AI but didn't really agree with the results, yet posted them anyway to either farm merits or earn easy sig money? The spirit of humanity will be lost if that happens.
legendary
Activity: 1162
Merit: 2025
Leading Crypto Sports Betting & Casino Platform
-snip-
...If someone is active here for economic benefit only, and that's from 1 account only, both he and the owner of 5 accounts are in the same category, of course.

They are in the same category, but in my personal opinion it makes things worse in the case of account farming and account abuse.
If bots/artificial intelligence participate around here undetected, it could degrade the forum and campaign participation, and become a tool for shady people to grow accounts automatically and put them up for sale, because in the end merit senders might not see much of a difference between an actual human being and the AI.



In that case, we need admins to explicitly state in the rules that the use of artificial intelligence for the creation of content here is considered plagiarism. Otherwise, someone could argue that there was no connection between the AI and the old rule itself.
Pretending the AI's text was created by you is the plagiarism. It doesn't matter where the AI got it.

I agree, for the reasons previously presented.
We do not need an artificial intelligence pretending to be a human around here, imho.
legendary
Activity: 4256
Merit: 8551
'The right to privacy matters'
I agree with @philipma1957, all we need is just an explicit disclaimer. If usage of AI can improve post or discussion quality on this forum, I'd welcome such AI.

Also, it is straightforward to detect GPT-3 content; there are AI detection sites that do just that.

Do we know how reliable such websites are? Do they work against specific user input (such as a particular writing style), and do they have a low false-positive rate?

Well, look at my difficulty thread: if it says any of those posts are AI, it is not accurate, as I do not use AI and I think no one else does on that thread.

I think the problem is that if you ran those posts through it, at least 10% would test as AI.

In order not to test as AI, you need a lot of original thought, versus combining 3 posts and voicing an opinion on them.

If you were to go to my difficulty thread

https://bitcointalksearch.org/topic/2022-diff-thread-5378628

There is a ton of variation on a common theme (the difficulty).

For instance:

Quote

https://www.bitrawr.com/difficulty-estimator

Latest Block: ? ? ?

blah blah blah
blah blah blah

..... quote


happens over and over and over in that thread.

It is not AI or Google, but it is a lead source of DIFF stats

and will be repeated thousands of times.

I would love to see how that thread does on AI testing.

We have done them for years:
2022
2021
2020, and many more
legendary
Activity: 1526
Merit: 1359
It all depends on how and where the AI-generated content is being used. If you are posting AI-generated content and pretending it is your own original work, then you are committing plagiarism. Plagiarism is the act of presenting someone else's work as your own without proper attribution. But if you are using AI as a tool to help you develop and improve your own original text, then I do not think it is considered plagiarism. For instance, Google Translate produces AI-generated content, and grammar checkers like Grammarly rely on AI to generate their suggestions. It is hard to draw a clear line between these modern tools and define, in a clear-cut manner, what is allowed on the forum and what is not.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.

Alright. I'll see if I can contract Aaron Schwarzenegger to build me one last Terminator for this purpose.
legendary
Activity: 1792
Merit: 1296
Crypto Casino and Sportsbook
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.
Do I understand correctly that this can be regarded as approval of the use of AI on bitcointalk? If AI is not forbidden, then using it doesn't violate forum rules, right?

Will the administration make changes to the forum rules regarding the use of AI or will it leave everything as it is? It seems to me that it would be good to clarify this either in the form of a rule change / edit or at least an informational post for forum users.
legendary
Activity: 2702
Merit: 4002
Is there any bot that works realistically enough to share replies that are difficult for the average user, mod, or campaign manager to distinguish from posts made by humans?
In the sense of being high quality, well written, and adding good information, rather than words copied and pasted with some general sentences added?


Generally, it is easy to ban these bot accounts under the spam or zero/low-value rule.
legendary
Activity: 2338
Merit: 10802
There are lies, damned lies and statistics. MTwain
OpenAI is seemingly working on a solution to digitally watermark their outputs, through a kind of "unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT" (see this blog):
<…> Conceptually, any post using GPT to create content on Bitcointalk is plagiarizing, as the poster is not creating the content himself and is in fact trying to pass off someone else's content as his own (albeit that someone else is an AI). According to OpenAI's Sharing and Publication Policy, using the API, and I have to assume that extends to the results of their chatbot, requires one to explicitly indicate that the content was AI-generated. Though this latter point is not technically our concern, it seems like a reasonable request to make, in a similar fashion to requiring links on posts that are largely/verbosely based on other sources.

Detection of GPT usage is not going to be easy for most, though over time people will likely pick up on patterns such as near-perfect English, consistency in its usage throughout the posting history and/or alternating changes in style (human/AI), a lack of real interaction from a less-than-academic point of view, certain types of formal constructions, and so forth. This is obviously not exclusive to GPT, nor sufficient to deem someone a GPT-plagiarizer with certainty, and perhaps likeness is the closest one can get short of a confession.

Now all this, if it becomes an extended practice, is going to be a drag, whereby people will be able to create bagloads of posts with zero effort and thought, and although it may well match, quality-wise, a large share of the posts we already encounter, it may easily become a new spam-fest source of neutral content.

I've read, though I couldn't find the original source (i.e. team declarations), that ChatGPT can't plagiarize per se (the language model generates text by assigning a probability to the next best word to use, based on the prior words it has already used), although there could be a fortuitous chance of it happening. Many of the posts we read seem to me, from a reader's point of view, a compendium of text-spinning ideas. Though that is not what it is really doing, the probability of the next word to use is derived from the model created through the training data, and that is inevitably an underlying reference to all the text it provides.

On the other hand, even if the text is comprehensive and aligned somewhat with what the average Joe may be able to come up with, the poster has conceptually (in my book) plagiarized the output from ChatGPT, without giving credit to the source of the text's generation and trying to pass the text off as his own.
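
To make the watermarking idea above concrete: the gist is that a keyed pseudorandom function of the preceding words nudges which candidate word gets picked, so whoever holds the key can later test whether a text's word choices are skewed in the way the key predicts. The sketch below is purely conceptual, not OpenAI's actual scheme; the secret key, the three-word context window, and the scoring helper are all invented for illustration.

Code:
import hashlib
import random

SECRET_KEY = b"hypothetical-shared-secret"  # known only to the text generator

def keyed_bit(prev_words, candidate):
    # Pseudorandom 0/1 derived from the key, the last few words, and a candidate word.
    data = SECRET_KEY + " ".join(prev_words[-3:]).encode() + b"|" + candidate.encode()
    return hashlib.sha256(data).digest()[0] & 1

def pick_word(prev_words, candidates):
    # Prefer candidates whose keyed bit is 1; fall back to any candidate if none qualify.
    preferred = [w for w in candidates if keyed_bit(prev_words, w) == 1]
    return random.choice(preferred or candidates)

def watermark_score(words):
    # Fraction of words whose keyed bit is 1; text generated without the key
    # (or heavily rewritten by a human) should hover around 0.5.
    hits = sum(keyed_bit(words[:i], w) for i, w in enumerate(words))
    return hits / max(len(words), 1)

A high watermark_score over a long enough text would point to the keyed generator, which also explains why substantial human editing, as discussed earlier in the thread, weakens this kind of signal.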
hero member
Activity: 644
Merit: 661
- Jay -
I totally agree with allowing technology to thrive as long as it is not contravening any of the forum rules; for what it is worth, it could actually benefit the forum and improve the quality of discussions.

Have the admins considered the issue of people running the replies of certain high-quality members here on the forum through an AI program so that it pushes out posts similar to that person's style?
Here is a thread on Reddit[1] which discusses how this affects artists, and I do not think it is limited to art alone.

If someone created an AI which posts like theymos, for example, or satoshi, based on their post history, would that raise eyebrows?

[1] https://www.reddit.com/r/technology/comments/y6eiyq/artists_say_ai_image_generators_are_copying_their/

- Jay -
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
99% of AI users on Bitcointalk will just use it to attempt plagiarism and spam, which is already prohibited in forum rules.
It's going to be difficult to prove the plagiarism, and spam can reach very large volumes. I don't want to have to doubt whether each post is genuine. Imagine what happens when someone has a bot that earns him signature payments by asking questions on the tech boards.

In that case, we need admins to explicitly state in the rules that the use of artificial intelligence for the creation of content here is considered plagiarism. Otherwise, someone could argue that there was no connection between the AI and the old rule itself.
Pretending the AI's text was created by you is the plagiarism. It doesn't matter where the AI got it.

If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc)
I could agree with this, if the post makes it clear it's created by an AI. Without that, if I post something created by an AI, my cat, or a sweatshop filled with elves, it's not my own creation. That makes it plagiarism.
legendary
Activity: 2156
Merit: 2100
Marketing Campaign Manager |Telegram ID- @LT_Mouse
So, in theory, if someone somehow managed to farm, let's say, 5 accounts up to Full member and Senior member, and somehow implemented an AI for those accounts to interact around here while joined to a signature campaign or a bounty, so that the single person behind those accounts gets all the economic benefits, that would be acceptable as long as the quality of those posts is good enough and on-topic? Right?
That's irrelevant when the forum itself allows you to use more than 1 account as long as you comply with forum rules. Why are you bringing economic benefits into this? If someone is active here for economic benefit only, and that's from 1 account only, both he and the owner of 5 accounts are in the same category, of course.

Just posting something on-topic, semi-correct, and coherent doesn't necessarily mean the post isn't low value.
But it will be almost impossible to detect if the texts get a little human touch, lol. I'm observing one user; I'm pretty sure the texts are AI-generated, but they are not such low value either.
global moderator
Activity: 3794
Merit: 2612
In a world of peaches, don't ask for apple sauce
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.

So, in theory, if someone somehow managed to farm, let's say, 5 accounts up to Full member and Senior member, and somehow implemented an AI for those accounts to interact around here while joined to a signature campaign or a bounty, so that the single person behind those accounts gets all the economic benefits, that would be acceptable as long as the quality of those posts is good enough and on-topic? Right?
All that matters is that those users don't break the rules. If they don't, then yes, there's nothing against them operating several AI-powered accounts, though at that point I'd be more worried about that AI doing things outside Bitcointalk as opposed to farming signature campaigns.

Just posting something on-topic, semi-correct, and coherent doesn't necessarily mean the post isn't low value. That quality is what I feel a lot of AI-generated content is gonna lack (and yes, I'm aware of the recent developments in that area).
legendary
Activity: 1162
Merit: 2025
Leading Crypto Sports Betting & Casino Platform
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc, etc), I, for one, welcome our new machine overlords. Otherwise, content that violates the rules, AI or human produced, can already be dealt with with our current rules and policies in mind.

So, in theory, if someone somehow managed to farm, let's say, 5 accounts up to Full member and Senior member, and somehow implemented an AI for those accounts to interact around here while joined to a signature campaign or a bounty, so that the single person behind those accounts gets all the economic benefits, that would be acceptable as long as the quality of those posts is good enough and on-topic? Right?