
Topic: Artificial Intelligence on the Forum - page 2. (Read 1024 times)

Vod
legendary
Activity: 3668
Merit: 3010
Licking my boob since 1970
June 29, 2024, 07:40:40 PM
#72
Exactly my point!! I mean, how did this even turn into a debate about whether or not users who post with aids should be banned?

Because you didn't define what you consider an aid.  Do you type on a manual typewriter (looking in a dictionary), then run uphill 2 miles to the nearest building with electricity to upload your words to the internet?

Aids could include an electronic keyboard, internet at your house, or a car to drive you where you need to go.
In the 2010s, an aid could be Grammarly, or even Google to help you with research.

The debate is about which aids will be accepted.  There is little difference between garbage AI posts and garbage posts written just to make your post quota.  Smiley
hero member
Activity: 798
Merit: 1045
Goodnight, ohh Leo!!! 🦅
June 29, 2024, 04:32:01 PM
#71
Sorry to reply to this so late, but I missed the updates on this thread.  And honestly, whether AI-generated posts constitute plagiarism or not, they ought to be banned, full stop.
Exactly my point!! I mean, how did this even turn into a debate about whether or not users who post with aids should be banned?

What are these petty sentiments about?... Y'all need to stop whatever you're doing and call a spade a spade. No offense to anyone who feels this is about them, but .... c'monnn!  Are you practically telling people to follow suit?

The younger generations are looking up to you. What's this going to sound like? Take a look at Altcointalk, for instance; are y'all as comfortable there as on Bitcointalk? No shade! I will never join unnecessary arguments about what is or isn't considered bad, but if I'm executed because I was caught with arms (in a country where they aren't legal) while attempting to steal, does that mean pickpockets shouldn't face the law? Good lord!
legendary
Activity: 3500
Merit: 6981
Top Crypto Casino
June 29, 2024, 03:37:27 PM
#70
I'd argue chatbot verbal diarrhea is plagiarism by definition. I've already seen claims against those companies for using data as input without permission (which is a copyright issue). They reproduce bits and pieces of other data without sharing the original source (which is a plagiarism issue).
It is even worse than the usual human plagiarism, since it effectively copy-pastes from multiple websites and then combines everything into one single post without providing any source links.
I wouldn't be surprised if developers made the AI replace some words to make the text look different; the same tactic has been used by human plagiarists for years.

I'm not making any kind of argument as to how bad AI-generated crap is vs. plagiarism; I'm simply stating my view that the former doesn't meet the definition of the latter, unless the argument is that the AI program is culling what it writes from stuff that's already been written, and I'm not sure I buy that.  If that were true, then every site on the web that uses AI to write something, every student who uses AI for a class, and essentially all use of AI to generate text would produce plagiarized results, and anyone who uses it would be a plagiarist.  That's extreme, and regardless of how extreme it is, I just don't think it's true.

Sorry to reply to this so late, but I missed the updates on this thread.  And honestly, whether AI-generated posts constitute plagiarism or not, they ought to be banned, full stop.  I might have missed any decrees from Theymos as well, so where does the forum stand with respect to idiots using AI?  And yes, I will go back and read the rest of this thread after posting this.
legendary
Activity: 2730
Merit: 7065
June 03, 2024, 11:54:28 AM
#69

Anything an LLM says is not plagiarism.
Let me correct that for you. Everything an LLM or text generator writes can be plagiarism, depending on how you use it.


If there is a topic that I find interesting regarding something I don't know a lot about, I might read articles from what I believe to be reputable sources, and I might paraphrase information that I find across several sources in my post. That is essentially what an LLM does.
Acquiring knowledge by reading what others have said on a particular topic and then explaining or writing about it the way you remember it is not plagiarism. If it were, most of the stuff we say about Bitcoin could be classified as plagiarism. A simple question, like explaining what Bitcoin is, can't be answered without saying something that other people have (most probably) already said in the past.

Plagiarism is often about intention. If you find research on hardware wallets and make it sound like you did the research yourself, it's plagiarism. You can't write about it on the forum claiming that you conducted the research, talked with experts, and spent hours configuring and working with the devices if it was someone else who did all that.

I was also using Grammarly as a supportive measure, to at least straighten out my grammar and make it a little more readable, but the whole chaos around AI-generated or AI-corrected posts made me let it go, because it got to a point where I actually felt that even the use of Grammarly could be classified as AI.
Grammarly doesn't generate text on its own. It corrects the grammar of existing text you feed into it. You should be fine using Grammarly. No one should get into trouble for doing extra work to make their posts better, especially if English isn't their first language. It would be unfair if the admins punished users for using Grammarly.
full member
Activity: 952
Merit: 232
June 03, 2024, 07:41:22 AM
#68
Ordinarily, using AI to generate a topic and then claiming ownership of it is cheating the rest of the community. It can make the members who constantly put personal effort into writing quality posts feel as though their effort isn't paying off whenever they compare an AI user's posts to their own posts, written organically through their own work. That's just one of many reasons I don't support the use of AI for discussions on the forum.

But if the administrators want to accommodate the use of AI in any way, then I'd suggest creating a dedicated board for all kinds of AI-generated topics: no matter the category or section a topic belongs to, it would be posted in that board as long as it's AI-generated. The poster wouldn't even need to indicate whether it's AI-generated or not; just by it being there, common sense would tell you it is.


I buy the idea of having a separate board for AI-generated essays, even though it would create a divide, because any topic already written by a human has a different ring to it than one created using AI.
I read here that someone suggested users who make comments or create posts using AI should indicate it at the end, or leave something like a footnote stating it was created by AI.

All in all, the essence of having a community like this, where we as humans from different regions of the world can share real experiences and ideas that aren't AI-generated, is simply beautiful. I have always loved to partake in the reasoning of others as much as I share mine, mostly people I have never seen or met, or who aren't even in my age bracket.
hero member
Activity: 644
Merit: 520
Leading Crypto Sports Betting & Casino Platform
June 03, 2024, 07:14:03 AM
#67
Are Grammarly and other tools that improve post quality part of these AI tools? I sometimes use these tools when I'm on a device with the extension installed, to correct my grammar and improve my posts. Sometimes it changes my sentence construction to a more decent format.

I'm confused about whether this tool is considered AI or not, but I'm sure it's a different approach compared to ChatGPT, because my own thoughts are the basis of the improved post.
I don't know exactly how to answer this question, but I would say that after you have used Grammarly to help with your sentence construction, run your text through one of the AI checkers available on the internet. If it signals that the text is AI-written, I would advise you to simply indicate at the end of the text that Grammarly was used to help with your sentence construction. Because if another user runs your text through an AI checker and it comes back flagged as 100% AI, how do you explain to them that it was Grammarly you used for sentence construction and not ChatGPT?
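For illustration only, here's a minimal sketch of that "check your corrected text before posting" idea in Python. The detector URL, request format, and response field below are entirely hypothetical (real AI checkers each have their own, often web-only, interfaces), so treat it as a sketch of the workflow, not a working integration:

Code:
import requests  # pip install requests

DETECTOR_URL = "https://example.com/api/detect"  # hypothetical endpoint

def ai_score(text: str) -> float:
    """Send text to the (hypothetical) checker and return an 'AI likelihood' between 0 and 1."""
    resp = requests.post(DETECTOR_URL, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return float(resp.json().get("ai_probability", 0.0))  # hypothetical field name

if __name__ == "__main__":
    draft = "My own draft, written by me and then cleaned up with Grammarly."
    score = ai_score(draft)
    if score > 0.8:
        print(f"Flagged as likely AI ({score:.0%}); consider noting that only a grammar tool was used.")
    else:
        print(f"Looks fine ({score:.0%} AI likelihood).")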

Anyone who often uses several tools to check AI texts can already visually predict whether a post was created using ChatGPT or not. I'm not talking about guaranteed detection, but such posts most often catch the eye and beg for verification. If you're talking about grammar and spell-checking tools, then your text is unlikely to be detected as AI-written. I see a lot of examples, and posts that people write on their own will never (or only in rare cases) be identified as AI. Therefore, there is no need for clarification.

There is the topic, and it contains quite a lot of examples of people catching spammers who use ChatGPT. Moderators also use their own verification methods; otherwise, how could we explain that some reports are marked good while others remain unprocessed?

But those cases where we mark spammers as “accounts using AI” have virtually no effect on the spammers. Having received the tag, they continue to post AI texts.

It really is like tilting at windmills.

Thank you for this clarification. I usually have doubts when using Grammarly, since it now offers text improvement that rephrases a complete statement in the format you desire by changing the tone and expression of your original statement. This confuses me, because that might already be a borderline AI-like tool that I'm unknowingly using.

But one thing is for sure: I construct my post first and just improve it using the tool, making it more appealing to read by correcting the grammar. This comment gives me confidence. Thanks again.
I was also using Grammarly as a supportive measure, to at least straighten out my grammar and make it a little more readable, but the whole chaos around AI-generated or AI-corrected posts made me let it go, because it got to a point where I actually felt that even the use of Grammarly could be classified as an AI-aided post. I know all the write-ups were made originally by me; it's just that Grammarly helps me rearrange them and put the right punctuation and grammar in the right order.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
June 03, 2024, 05:39:05 AM
#66
I might read articles from what I believe to be reputable sources, and I might paraphrase information that I find across several sources in my post.
Any university would require you to credit the sources, and if not, it's plagiarism.
copper member
Activity: 2996
Merit: 2374
June 03, 2024, 05:28:26 AM
#65
no person in their right mind would just go and ask chatgpt questions and copy and paste the answers into forum threads just for free.
It's the modern version of shitposting account farmers. Some even manage to earn Merit with it.
There is a lot of money to be made via LLMs (which is really what the OP is referring to), and that is not from sig deals. The creators of LLMs want their models to be relevant and accurate, so I somewhat understand why some might use the forum to post LLM-generated content, although there are probably some ethical issues in doing so.

Anything an LLM says is not plagiarism. If there is a topic that I find interesting regarding something I don't know a lot about, I might read articles from what I believe to be reputable sources, and I might paraphrase information that I find across several sources in my post. That is essentially what an LLM does.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
June 03, 2024, 04:40:23 AM
#64
Spammers and scammers are going to be top AI users, if they are not already.
I'd expect regulators to act on this, but it's going to take many years before they do something. And given the great "success" of having cookie warnings everywhere, I don't have high hopes. Making the AI manufacturers liable for the problems caused by their software would probably end this, but that's "killing innovation". Add a strong lobby, and I don't expect improvements.
Vod
legendary
Activity: 3668
Merit: 3010
Licking my boob since 1970
May 30, 2024, 10:52:50 PM
#63
AI is supposed to just be a tool to augment something, not entirely replace it.

I disagree - a lot of AI development aims to replace the human, such as in warehouse work or other repetitive/dangerous/expensive jobs.

As a young mental health professional, you would view AI as a tool to help you, because your job will be one of the last to be replaced.  Smiley 
copper member
Activity: 42
Merit: 31
May 30, 2024, 07:26:00 PM
#62
Spammers and scammers are going to be top AI users, if they are not already.

It's because spammers and scammers don't realize that AI is supposed to just be a tool to augment something, not entirely replace it.

Like, ChatGPT is really good at telling you how to do something simply, or making a guide on how to start a fire. It's not going to, nor is it supposed to, start the fire for you IRL.

Idk if that's a shitty example, but yeah.
copper member
Activity: 2170
Merit: 1822
Top Crypto Casino
May 30, 2024, 06:59:23 PM
#61
I have to say this one is true. Detecting AI-generated posts is easier said than done; when you're actually in the process, it's harder than it looks, especially if you simply check someone's current posts without going deep into their post history. That's probably why these AI users are even motivated to continue what they're doing: it's harder for them to get caught. And if they do get caught, that just means extra work for the forum admins. All I can really say is that detecting AI posts is really hard and somewhat tricky.
But there have been examples where one could clearly see intentional use of AI to generate text. Should we just keep ignoring such cases? How different is that from plagiarism, which has been demonized on the forum for almost as long as it has existed?

I think the forum administration needs to take a clear position on this; otherwise, if we condone it, then plagiarism should also be condoned, since it's also "hard to detect" at times.
hero member
Activity: 2716
Merit: 904
May 30, 2024, 06:48:14 PM
#60
There is the topic, and it contains quite a lot of examples of people catching spammers who use ChatGPT. Moderators also use their own verification methods; otherwise, how could we explain that some reports are marked good while others remain unprocessed?
Detecting AI is tricky unless there is obvious copy-and-paste. Write an article using AI, change words where necessary, add a few lines here or remove a few lines there, and it becomes nearly impossible to detect that the source was AI, but it still is AI. The only tool we have for detecting AI is to trust our gut feeling.

When I read posts, if I suspect a user, I start reading their post history. It takes time, but after reading a few posts you somehow develop a feeling that lets you make a decision. The problem with this method is that I can easily be wrong, and a member could become a victim of my wrong conclusion.
I have to say this one is true. Detecting AI-generated posts is easier said than done; when you're actually in the process, it's harder than it looks, especially if you simply check someone's current posts without going deep into their post history. That's probably why these AI users are even motivated to continue what they're doing: it's harder for them to get caught. And if they do get caught, that just means extra work for the forum admins. All I can really say is that detecting AI posts is really hard and somewhat tricky.
hero member
Activity: 2954
Merit: 672
Message @Hhampuz if you are looking for a CM!
May 30, 2024, 04:53:58 PM
#59
what exactly is going on here on this forum? are people getting paid bitcoins for making posts? if so, one of the rules these campaign managers should have is "no ai", and if people get caught using it then the campaign manager should be punished by having their campaign deleted. you've got to cut the problem off at the source.  Shocked

no person in their right mind would just go and ask chatgpt questions and copy and paste the answers into forum threads just for free. unless they had a serious mental problem.

That's a way to try to solve the problem, but it's not only about the signature campaigns. Someone could use AI to grow accounts and then sell them, or use them in an organic way once they hit legendary status. That's why I think it's a forum problem and not only a signature problem.
I guess that won't create a problem as long as there aren't stupid buyers willing to pay for that. I have no issue with buying accounts, but the buyer should be responsible enough to check and verify the account before buying, making sure there are no suspicious or AI-generated posts that would put the new owner at risk of losing all the opportunities he might take advantage of on the forum. Otherwise, it will be solely his problem once the forum takes a hard line on combating AI posts.
legendary
Activity: 2212
Merit: 7064
May 30, 2024, 10:11:31 AM
#58
Apparently Google's new AI search summary feature is unable to tell that The Onion is satire and believes everyone on Reddit is always telling the truth.
I even saw g00gle AI recommending that someone jump off the Golden Gate Bridge as a cure for depression, because one Reddit user suggested it  Roll Eyes
This looks like a perfect new army of government ''workers'': they never complain, they are never hungry, and they don't ask to be paid anything.
Spammers and scammers are going to be top AI users, if they are not already.
sr. member
Activity: 1554
Merit: 334
May 29, 2024, 09:28:33 PM
#57
My thoughts on this are simple. If a user is assisted by artificial intelligence in making a post, then instead of passing it off as knowledge coming from themselves, they should do the right thing and simply note in a footnote or in the references that the source is artificial intelligence. Doing this without adding a source is tantamount to plagiarism. However, if a user overdoes it, their attention needs to be called to it and they should be cautioned. We want to see original thoughts, ideas, and write-ups, not AI-generated ones.
The problem with this is that some people won't really like it even if you're honest about your involvement with AI, because they don't appreciate that your post isn't organic and that you present it as an informed opinion when it's just generated nonsense from an AI, made to look like it came from your head rather than from a computer. Although they might forgive you a little for being honest that you've used AI, you won't get anything out of it, and at the same time your posts from that point on, including your past posts, will come under scrutiny, because people want to see whether this is really your first offense and whether it's the only time you've used AI. Basically, what such a confession leads to is landing on a lot of people's ignore lists, and possibly a lot of reports, because of how you've conducted yourself.

In my opinion, the forum's stance on AI posts is like a publisher's stance on someone using AI to write a novel: you're never going to want it as a book, because you know it serves nothing but the writer's quota. No emotion or hard work was poured into it beyond the creativity of the prompts. It's not human enough to make AI posting a thing on this forum, and if we tolerate AI as a way to fulfill post counts, we will eventually reach a point where nobody cares anymore.
sr. member
Activity: 1190
Merit: 469
May 29, 2024, 07:46:43 PM
#56
I agree. If this is the only possible way to discourage using AI, then so be it. Anyone who gets caught should be punished and removed from the campaign; otherwise, the campaign manager might be put at risk. Although this would be additional work for campaign managers, I think it's part of their job to protect the campaign from copyright-infringing content intentionally posted by members of the campaign.


obviously at some point, there will be some AI tool that you let loose on the forum and it's just autonomous. it will read different threads, make replies in some of them, and keep doing that all day long  Shocked

24/7/365. but that would be another way to detect them. so they would have to break it into work shifts where it went on posting sprees of 5 or 6 hours and then went dark for 18 or so hours. because most people sleep.

That's a way to try to solve the problem, but it's not only about the signature campaigns. Someone could use AI to grow accounts and then sell them, or use them in an organic way once they hit legendary status. That's why I think it's a forum problem and not only a signature problem.
show me one account like that. i bet it's pretty rare. so i don't think that's the main problem. plus who is buying established accounts? no one has ever offered me anything to buy my account from ME.  Shocked I'm not even sure why they would do that. unless they are looking to just spam the forum. which won't last too long.
copper member
Activity: 42
Merit: 31
May 29, 2024, 07:25:40 PM
#55
Good meme 😂. I like it, thanks for sharing. Totally stealing it so I can send it to my friends :3.
legendary
Activity: 3346
Merit: 3125
May 29, 2024, 04:50:59 PM
#54
a forum problem and not only a signature problem.

It's all of those, and it's a problem across every social network that exists right now. Would you agree?

I mean come on, there's fake shit everywhere! XD

It's like how they teach us about "Dark matter". Apparently 99% of the universe is shit we can't see called "Dark matter".

Yeah?

Well 99% of the internet feels fake right now. Fake accounts, fake likes, fake views...

How is the data anyone is selling to each other even reliable, LOL? Nothing's real.

And it has been like that since the start; let's remember that old meme (even older than the word "meme" itself).


And AI just makes it worse, lol.
hero member
Activity: 2926
Merit: 657
No dream is too big and no dreamer is too small
May 29, 2024, 04:50:20 PM
#53
what exactly is going on here on this forum? are people getting paid bitcoins for making posts? if so, one of the rules these campaign managers should have is "no ai", and if people get caught using it then the campaign manager should be punished by having their campaign deleted. you've got to cut the problem off at the source.  Shocked

no person in their right mind would just go and ask chatgpt questions and copy and paste the answers into forum threads just for free. unless they had a serious mental problem.


I agree. If this is the only possible way to discourage using AI, then so be it. Anyone who gets caught should be punished and removed from the campaign; otherwise, the campaign manager might be put at risk. Although this would be additional work for campaign managers, I think it's part of their job to protect the campaign from copyright-infringing content intentionally posted by members of the campaign.