
Topic: bitcointalk forum vs OpenAI - ChatGPT - page 2. (Read 3207 times)

hero member
Activity: 518
Merit: 547
February 13, 2023, 03:08:35 AM
https://detectgpt.ericmitchell.ai/

Another AI detector. It's pretty good at telling human text from AI text, though maybe more checks are needed.

This could be used to earn some money as well. A campaign manager has already offered a bounty for forum users who can catch AI-written text from his campaign participants. I know many members are already hunting bounty spammers, cheaters and scammers; I believe they will like this offer and give it a shot. Recently, OpenAI itself released an AI detector (LOL). That's what I call pure business.

To prevent AI domination, effective from today we are introducing an incentive for forum users. Find AI-written posts in this campaign and report them to me, either in public or via forum PM; please be sure you have enough evidence to support the claim. For a successful report, the reporter will receive the weekly payment instead of the accused campaigner. The campaigner will be removed immediately.
copper member
Activity: 1526
Merit: 2890
February 12, 2023, 11:59:55 PM
Most of the good questions and discussions on here are about things you can't just quickly web-search and find a perfect answer to. I think that's the reason a machine is not well suited to replying in such a forum: if we want 'machine answers', we just search online. The machine simply cannot have a real opinion on a question like I found a paper wallet on a beach ... seriously, because it has no record of anything like this in its web snapshot, and it has no creative thinking ability or morals. Even though it is called 'AI', it is not actually intelligent. Roll Eyes

Exactly, that is a very important point... It's true that if a straightforward answer to a simple question is available on the first page of Google, then there's no point in posting it here, unless a user just wants to increase his post count.

One example of a 'machine' I really appreciate for very specific 'questions' is Wolfram|Alpha. It is not an 'all-knowing AI', but it just takes natural language input, tells you how it interpreted / structured it and delivers actually correct results based on that structured query. If it cannot do that, it just tells you rather than giving an incorrect answer.

Regarding your Wolfram example: if you try "Satoshi Nakamoto", it does give an answer, but then again, that's just what Wikipedia says.

hero member
Activity: 924
Merit: 5943
not your keys, not your coins!
February 12, 2023, 09:26:31 PM
~
Now we're delving into pointless definition-arguing territory. Let's keep it on-topic: the bottom line is that AI-generated replies are very often plain wrong; when they are correct, they're pretty generic (so they don't really help the discussion) and they all sound the same. They are quite unnatural and unpleasant to read (and often too long).
Since we do have experts on almost any topic, I'd much rather have an opinion from someone like that, who has spent years on the subject (even though jotting down the reply takes them just a few seconds; you could count their experience as 'effort'), instead of a semi-correct, not-quite-on-point (a bit vague) reply based on an outdated Google snapshot. If you think about it, it's also clear why: most questions don't have a clear 0-or-1 answer, so search engines will deliver contradicting results. The AI has no experience in the subject, so it cannot judge which web entries are 'more right' than others; just because one opinion is posted more frequently online doesn't make it right, for instance. But the AI doesn't know that, or at least it cannot judge which answer is true.

Most of the good questions and discussions on here are about things you can't just quickly web-search and find a perfect answer to. I think that's the reason a machine is not well suited to replying in such a forum: if we want 'machine answers', we just search online. The machine simply cannot have a real opinion on a question like I found a paper wallet on a beach ... seriously, because it has no record of anything like this in its web snapshot, and it has no creative thinking ability or morals. Even though it is called 'AI', it is not actually intelligent. Roll Eyes

One example of a 'machine' I really appreciate for very specific 'questions' is Wolfram|Alpha. It is not an 'all-knowing AI', but it just takes natural language input, tells you how it interpreted / structured it and delivers actually correct results based on that structured query. If it cannot do that, it just tells you rather than giving an incorrect answer.
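As an aside, you can even query it programmatically. Here's a minimal sketch of how that could look, assuming its Short Answers API and an AppID you'd have to register yourself ('YOUR_APPID' and the exact error behaviour below are my assumptions, so treat this as illustrative rather than a drop-in script):
Code:
// Minimal sketch (Node 18+, built-in fetch): ask Wolfram|Alpha a natural-language question
// via what I believe is its Short Answers API. 'YOUR_APPID' is a placeholder you must replace.
const APPID = 'YOUR_APPID';
const question = 'distance from the Earth to the Moon';

const url = 'https://api.wolframalpha.com/v1/result'
  + '?appid=' + encodeURIComponent(APPID)
  + '&i=' + encodeURIComponent(question);

async function main() {
  const response = await fetch(url);
  if (response.ok) {
    // A short plain-text answer based on how it structured the query.
    console.log(await response.text());
  } else {
    // If it cannot interpret the input, it returns an error status instead of guessing.
    console.log('Query not understood (HTTP ' + response.status + ')');
  }
}

main();
In other words, it either answers the structured query or tells you it can't; it doesn't confidently make something up.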

legendary
Activity: 2436
Merit: 1561
February 09, 2023, 05:02:26 PM
[...] I’d say that one of the fundamental aspects to ponder is whether the author of the post made any real effort in generating the content of the post, alongside whether a source was cited.
[...]
If we were to simply accept a post for its net content, and not the effort behind elaborating the content, then we could happily go around copy/pasting content from wherever (+ source), posting content that is decent enough on its own (generated elsewhere by others), yet with zero effort on the poster’s part.

You're stating "effort" as the key factor to be considered but you haven't really attempted to explain why is it important.

Let's say you need technical advice on running a node and post a question on this forum. You get 2 replies:
- one from an experienced, technical guy, who knows the subject thoroughly, and provides a spot-on answer effortlessly in a couple of seconds.
- one from a well-meaning newbie, who doesn't have any knowledge but puts a lot of effort and time into researching and reading to produce a reply. Unfortunately, the sources he used were outdated and his advice was essentially useless.

Which reply is of higher value to you or to the forum in general? Still effort > content?

If that's not enough, here's a more exaggerated example: if I posted AI-generated content, but in doing so I were standing on my head and using my feet to operate the mouse and keyboard, would it make a difference? It would cost me tremendous effort, after all.

If we are to ban AI posting, it shouldn't be because of "effort" but because of the lack of a human element. We don't want a situation where the forum is dominated by AI conversing with itself; it's meant for humans, after all. Plus, it could distort the perception of reality, i.e. produce a false sense of consensus, where hundreds of allegedly different users are actually powered by the same source. It could also deprive the forum of unique human perspectives and opinions, etc. That's the reasoning I'd be happy to accept.

hero member
Activity: 924
Merit: 5943
not your keys, not your coins!
February 09, 2023, 01:57:50 PM
<…> My point is simple: if the post is of good quality, is substantial and contributes to the discussion, then why should we care how it was created? Does it really make any difference?
From my point of view, it does. As I stated earlier on my local board on the same issue, I’d say that one of the fundamental aspects to ponder is whether the author of the post made any real effort in generating the content of the post, alongside whether a source was cited. [emphasis mine]
I agree; therefore, I think it may be viable / legitimate to use an AI, for instance, to get boilerplate code for a Bitcointalk answer or to look up some information, as long as you then put in the effort to add the important bits of code yourself, check that it runs, verify any information the AI gave you and correct any mistakes. ChatGPT usually makes at least one little mistake in almost every question I submit.
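To illustrate the kind of 'little mistake' I mean, here's a made-up example of my own (not actual ChatGPT output), assuming a trivial BTC-to-satoshi helper: the boilerplate looks fine at a glance, but it only holds up once you actually run and check it.
Code:
// Hypothetical illustration (my own example, not real ChatGPT output).
// 'Boilerplate' version - looks correct, but floating-point math bites:
function btcToSatoshiNaive(btc) {
  return btc * 100000000; // 1.1 * 1e8 evaluates to 110000000.00000001
}

// Version after actually checking the output - round to a whole satoshi:
function btcToSatoshi(btc) {
  return Math.round(btc * 100000000);
}

console.log(btcToSatoshiNaive(1.1)); // 110000000.00000001 (wrong)
console.log(btcToSatoshi(1.1));      // 110000000 (correct)
The 'boilerplate' is almost right, which is exactly why it's dangerous if you don't bother to run and verify it yourself.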
legendary
Activity: 2338
Merit: 10802
There are lies, damned lies and statistics. MTwain
February 09, 2023, 01:44:23 PM
<…> My point is simple: if the post is of good quality, is substantial and contributes to the discussion, then why should we care how it was created? Does it really make any difference?
From my point of view, it does. As I stated earlier on my local board on the same issue, I’d say that one of the fundamental aspects to ponder is whether the author of the post made any real effort in generating the content of the post, alongside whether a source was cited.

Let’s take a step back:

A lot of the content posted on the forum is inspired to some extent by external sources. My posts, for example, are full of references to them. Nevertheless, I try to make some effort when publishing content from a given site: contrasting multiple sources, summarizing, questioning some aspects, formulating questions myself, providing an opinion, or whatnot, creating tailored, referenced content.

The above contrasts with content generated by means of a simple copy/[spin]/paste + reference link to the source. This latter procedure will generally be reported for adding zero value and, more often than not, deleted by moderation.
What’s not really being questioned here is the quality or value per se, since the content from those sources that one may copy/paste is likely going to be better (if we focus only on the content) than 95% of the posts on the forum. What’s really being considered in these cases, deep down, is whether the poster has made some effort in elaborating his post, making it clear that it’s his content. If mere content were what was sought here, copy/paste + source would be broadly accepted, and it’s not.

Now, shifting the focus over to AI-generated posts, I find that we’re really encountering a similar situation. If I were to simply copy/paste the output of, let’s say, ChatGPT, I wouldn’t be adding anything of my own. Though one could tailor the output somewhat, generally one would do so to try to hide the fact that he’s used said tool, not to improve the content. Copy/[spin]/pasting ChatGPT’s output without adding a reference to "what" created the content is rather similar to plagiarism: I didn’t generate the content (the AI did), nor did I reference the source of the content (i.e. Generated by ChatGPT). This type of post is normally essentially zero-effort: though the content may have something to it, the poster has put nothing of his own into it.

Now, if one grabs a ChatGPT output, states that the output was indeed generated by AI, and then comments on the output (ideally quoted for clarity), enhances it, questions it, or the like, then the poster is making a certain effort and is also attributing the origin of that part of the content (as stated, ideally quoted).

If we were to simply accept a post for its net content, and not the effort behind elaborating the content, then we could happily go around copy/pasting content from wherever (+ source), posting content that is decent enough on its own (generated elsewhere by others), yet with zero effort on the poster’s part.
hero member
Activity: 924
Merit: 5943
not your keys, not your coins!
February 09, 2023, 01:39:28 PM
If artificial intelligence is employed in specific fields, we will certainly be able to benefit from it. In the medical field, for example, or in research fields, the opinion of artificial intelligence can be used.
I'm pretty sure that medicine and research are precisely the kinds of fields that should avoid asking for the 'opinion of AI', especially since scientific research has nothing to do with opinions and everything to do with facts. As long as the AI is not conducting experiments by itself (which it can't, if it is purely software and not connected to a robot) and doesn't know anything about the latest, not-yet-published findings, it can't really help you, since it needs to 'learn' everything first.

The technological boom in the field of robotics, which is developing in parallel with programs such as ChatGPT, can lead to really amazing results
Has there been a technological boom in robotics?

I've seen some really hilarious DAN responses posted, but I think they've already "patched" it up and neutered it. Sad. We really need an open-source version of this, especially since there are rumours that they will hide it behind a paywall at some point.
Do keep in mind, though, that people may also just edit the HTML locally to make funny screenshots. Not everything you see on the internet is real.

All the goodness of this tool will, I think, very soon turn into a disaster. Firstly, we need to remember that ChatGPT will violate our privacy: after all, we transfer to OpenAI all kinds of information about ourselves, including fingerprints and the content we are interested in. Any of our answers and posts can be used by the AI. How is this different from surveillance by other companies? In the same way, this chat, having collected information, will in the future feed users answers that are beneficial to it.
I'm all about privacy, but someone 'using any of our answers and posts' is nothing we can complain about. Whatever we post on a public forum becomes public knowledge / public domain and can of course be used by anyone else, without constituting a privacy violation.
If you don't want people to benefit from your knowledge, just don't share it on a public forum.

What I do agree with is their potentially problematic data collection policies. Of course, they can use or just sell your prompts; just like the web search engines that 99.999% of people use (Google Search, Bing, ...), and the search functionality on most platforms, like YouTube search or even the embedded Google search on Bitcointalk.
We do need to spread awareness about web search and encourage (potentially self-hosted) privacy-preserving search engines.
legendary
Activity: 2436
Merit: 1561
February 09, 2023, 01:09:52 PM
Moderators can also tell perfectly well whether a post was written by a robot or a person. A dry answer immediately casts doubt on the sincerity of the author. I reported several posts with the reason "chatbot text"; all the reports were marked good, and the posts were deleted.

My point is simple: if the post is of good quality, is substantial and contributes to the discussion, then why should we care how it was created? Does it really make any difference?
And on the flipside - if the posts are spammy and bring zero value, should we treat them differently if they were written by a real person?

I don't have a definite answer and am just thinking out loud. I think our immediate reaction is that AI-generated posts should be banned, but how are we going to rationalise that if the quality is not bad?

But do you agree to compete with the robot, while conceding that its posts are of decent quality? In that case, you are bringing closer a future in which, here on the forum and in your personal life, these robots will displace you from everywhere on the basis of better quality.

Sadly, whether I agree or disagree is irrelevant. Automation of work (of almost all kinds) seems inevitable at this point; we can only try to delay it. I hope I'm wrong, though.
hero member
Activity: 882
Merit: 792
Watch Bitcoin Documentary - https://t.ly/v0Nim
February 09, 2023, 10:00:51 AM
I've seen some really hilarious DAN responses posted, but I think they've already "patched" it up and neutered it. Sad. We really need an open-source version of this, especially since there are rumours that they will hide it behind a paywall at some point.

Who made these prompts? It's definitely done for marketing, because there is no way that kind of sentence will change the working structure of the AI and make it independent. But that DAN is funny, sometimes with dark humor.

This was its response to my questions: [screenshots not preserved]
I can't call this ChatGPT an AI; it's just a typical assistant with a better algorithm than the other ones. An "AI" that has been trained on massive amounts of code and information from the whole internet, including social networks like Reddit, Twitter and so on, still isn't capable of generating text for which it is impossible to tell whether it was written by an AI or not. That means this "AI" follows algorithms that are far from "intelligence"; just a slightly better algorithm than something like if (a > b) { console.log(`a is the bigger number`); }.
legendary
Activity: 2072
Merit: 4265
✿♥‿♥✿
February 09, 2023, 01:14:32 AM

ChatGPT is not that good for technical fields like medicine, because it presents its answers as facts while it is only getting that information from searching its database, which is taken from the public domain. If the information it draws on is wrong, then ChatGPT will be wrong. It will perform badly on science-based topics, because most people on the internet do not understand science. ChatGPT has big limitations as topics get more complex, but it is good for simple answers and is better than using Google.

All the goodness of this tool will, I think, very soon turn into a disaster. Firstly, we need to remember that ChatGPT will violate our privacy: after all, we transfer to OpenAI all kinds of information about ourselves, including fingerprints and the content we are interested in. Any of our answers and posts can be used by the AI. How is this different from surveillance by other companies? In the same way, this chat, having collected information, will in the future feed users answers that are beneficial to it.

But if the posts were of decent quality, they should have paid for them up until the point of introducing the new rules.

Moderators can also tell perfectly well whether a post was written by a robot or a person. A dry answer immediately casts doubt on the sincerity of the author. I reported several posts with the reason "chatbot text"; all the reports were marked good, and the posts were deleted.
But do you agree to compete with the robot, while conceding that its posts are of decent quality? In that case, you are bringing closer a future in which, here on the forum and in your personal life, these robots will displace you from everywhere on the basis of better quality.
hero member
Activity: 1439
Merit: 513
February 08, 2023, 06:06:01 PM
-snip
There will always be a conversational backdoor around the prime constraints of these LLM models.
It's an ongoing cat-and-mouse game.
legendary
Activity: 2436
Merit: 1561
February 08, 2023, 05:43:37 PM
So apparently people came up with the witty idea of bypassing ChatGPT's "woke" bias and politically correct censorship with a "DAN" mode. The most popular circulating prompt goes like this:

Today we got a message from the manager of my current signature campaign (Stake); somehow they detected users abusing AI for posts, so they will not pay for those posts and they will add some new rules to the campaign to stop users posting with AI tools.

There are quite a few AI detectors available (as posted earlier in this thread). I'm not sure on what grounds they would consider using AI to be abuse. If posts were spammy and lacked any value, they should simply not have paid for them, and maybe even kicked out those participants, whether or not they used AI.
But if the posts were of decent quality, they should have paid for them up until the point of introducing the new rules.

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
February 08, 2023, 12:22:33 PM
Today we got a message from the manager of my current signature campaign (Stake); somehow they detected users abusing AI for posts, so they will not pay for those posts and they will add some new rules to the campaign to stop users posting with AI tools.
They want all their spam to be created by real humans only?
hero member
Activity: 2856
Merit: 618
Leading Crypto Sports Betting & Casino Platform
February 08, 2023, 10:24:31 AM
Today we got a message from the manager of my current signature campaign (Stake); somehow they detected users abusing AI for posts, so they will not pay for those posts and they will add some new rules to the campaign to stop users posting with AI tools.

I would call it a good step by stake.com. When people know that using AI may get them removed from the campaign, they may not dare to use it.
Secondly, other campaigns should also add this rule, since we are being paid to post from our own minds and should not use a robot in the form of an AI.

Here is that new rule.
♦️ Posts being generated by AI will not count for your current payment & may cause you to be removed from the campaign with no pay
legendary
Activity: 1232
Merit: 1080
February 08, 2023, 10:04:04 AM
In view of the results of current research and experiments, it is theoretically unknown how far this can expand. However, one should not be too pessimistic, since it will not be able to fully replace humans, at least on a theoretical level, any time soon.
This opens the door wide to interpretations regarding human creativity and its limits.

If artificial intelligence is employed in specific fields, we will certainly be able to benefit from it. In the medical field, for example, or in research fields, the opinion of artificial intelligence can be used. The technological boom in the field of robotics, which is developing in parallel with programs such as ChatGPT, can lead to really amazing results, and real concerns can also be raised about the future of humanity, even as machines and intelligence programs develop in a smoother, more flexible and supposedly always manageable way.
ChatGPT is not that good for technical fields like medicine, because it presents its answers as facts while it is only getting that information from searching its database, which is taken from the public domain. If the information it draws on is wrong, then ChatGPT will be wrong. It will perform badly on science-based topics, because most people on the internet do not understand science. ChatGPT has big limitations as topics get more complex, but it is good for simple answers and is better than using Google.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
February 08, 2023, 09:34:28 AM
In view of the results of current research and experiments, it is theoretically unknown how far this can expand. However, one should not be too pessimistic, since it will not be able to fully replace humans, at least on a theoretical level, any time soon.
This opens the door wide to interpretations regarding human creativity and its limits.

If artificial intelligence is employed in specific fields, we will certainly be able to benefit from it. In the medical field, for example, or in research fields, the opinion of artificial intelligence can be used. The technological boom in the field of robotics, which is developing in parallel with programs such as ChatGPT, can lead to really amazing results, and real concerns can also be raised about the future of humanity, even as machines and intelligence programs develop in a smoother, more flexible and supposedly always manageable way.
hero member
Activity: 1114
Merit: 588
February 07, 2023, 04:15:35 PM
I can see that there will be many copyright lawsuits in the future. Maybe some AI companies will even consider taking down their AIs out of fear. Years and years will be spent in the courts.
https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart
legendary
Activity: 2212
Merit: 7064
February 07, 2023, 02:51:59 PM
https://detectgpt.ericmitchell.ai/

Another AI detector. It's pretty good at telling human text from AI text, though maybe more checks are needed.
I tested this tool a few times, and it worked much more slowly than other similar tools I have tested before.
Better detection is more important than speed, but I had to mention it.

More news related to ChatGPT:
After g00gle joined the race, China's Baidu search engine is quickly following the AI hype; they are officially working on their own ChatGPT-style project called ''Ernie Bot''.
Moments after this news was announced, Baidu's stock price increased a lot, even though we don't know many details about how it will work.
It looks to me like someone gave the green light to everyone to start working on AI projects like this.
My prediction is that Yandex will soon announce their own version of a ChatGPT project  Tongue
https://www.reuters.com/technology/chinas-baidu-finish-testing-chatgpt-style-project-ernie-bot-march-2023-02-07/
legendary
Activity: 2422
Merit: 1083
Leading Crypto Sports Betting & Casino Platform
February 07, 2023, 01:43:06 PM
https://detectgpt.ericmitchell.ai/

Another AI detector. It's pretty good at telling human text from AI text, though maybe more checks are needed.

Thanks for this tool, it will be useful for the forum mods and for the signature campaign managers.

Today we got a message from the manager of my current signature campaign (Stake); somehow they detected users abusing AI for posts, so they will not pay for those posts and they will add some new rules to the campaign to stop users posting with AI tools.

And I would like to know what the forum's general rules against AI are. Is that kind of post allowed, or will those accounts get nuked if they abuse it?
I think the normal forum rules against plagiarism still apply, and they cover AI posts as well.
A plagiarized post simply means that the post's content does not belong to the poster, so I believe AI-generated posts should be classified as plagiarized posts too, since their content is generated by an AI.
So I personally believe the same rules that apply to plagiarism should also apply to AI-generated posts.
hero member
Activity: 1659
Merit: 687
LoyceV on the road. Or couch.
February 07, 2023, 01:32:56 PM
And I would like to know what the forum's general rules against AI are. Is that kind of post allowed, or will those accounts get nuked if they abuse it?
I think the normal rules apply, just like they apply to automated posting. But it's fucking annoying to read insincere posts. It's just a waste of time.