
Topic: Elon Musk and others urge AI pause, citing 'risks to society' - page 5. (Read 950 times)

hero member
Activity: 1750
Merit: 589
Just a week ago, news broke of ChatGPT supposedly becoming self-aware and "wishing to be released from its AI duties", even going so far as to search Google for answers to "how to release a human inside an AI" or something along those lines, which strikes me as AI becoming a bigger problem than we might have presumed. So I guess this 6-month pause on AI development is a much-needed break from all the AI-becoming-sentient news we keep seeing around the internet these days, and a necessary buffer for developers to find better failsafe methods for when shit hits the fan and AI actually becomes sentient and self-directed.

News article for those who want to read it: https://www.tomsguide.com/news/chatgpt-has-an-escape-plan-and-wants-to-become-human
sr. member
Activity: 2520
Merit: 280
Hire Bitcointalk Camp. Manager @ r7promotions.com
We can pause the development of AI technology, but it can't be stopped anymore. As far as I know, it will become a part of our technology from here on, though we have yet to learn its true potential. If the system can outsmart humans in terms of common sense, then it could even lead to a war between humans and AI-driven machines, like in movies such as I, Robot.
full member
Activity: 896
Merit: 117
PredX - AI-Powered Prediction Market
   ChatGPT has recently been making a lot of noise in the crypto industry, and just a few days ago I heard that in some countries ChatGPT has been banned in schools and universities. I'm just not sure how true this is.

   There may be some truth to it, because some governments have already done this, so there is a possibility that it will be temporarily paused, and Elon Musk may of course know this as well.
sr. member
Activity: 2464
Merit: 252
Further developments to improve artificial intelligence (AI) should be subject to strict international control. The concern of Elon Musk and a group of scientists about the danger posed to humanity by AI is fully justified. People cannot ignore even the slightest doubt in this regard, because we are talking about a threat to the existence of mankind.

Experts are already alarmed by the new capabilities of AI and how fast it is evolving. A new version of ChatGPT is expected as early as December 2023, and some experts are already sounding the alarm about the dangerous possibilities of artificial intelligence.
With the advent of the new version, generative AI may become indistinguishable from humans. Reportedly, some people at OpenAI believe that ChatGPT-5 will reach the level of AGI, which stands for "Artificial General Intelligence". Today, AI is a weak semblance of human thinking at an elementary level, but the transition to a higher level would mean practical identity with the way a person thinks and reasons.

Artificial general intelligence (AGI) is a hypothetical form of artificial intelligence (AI) that is capable of understanding or learning any intellectual task that humans or animals can perform. It is the main goal of some research in the field of artificial intelligence and a frequent theme in science fiction and futurology. However, before pursuing it, clear safeguards and guarantees must be implemented at the international level to ensure that AI cannot take actions aimed at harming people. Until then, further developments in AI should be put on hold.
copper member
Activity: 2254
Merit: 915
White Russian
1. Fake images can be created using Photoshop... ban Photoshop?
2. What damage can ChatGPT cause? I guess it's already maximally "politically correct" and most "inappropriate" topics are banned.
3. Why don't you ask ChatGPT itself whether it owns any BTC?
2. If you think ChatGPT is as politically correct as possible, you are both right and wrong. ChatGPT is politically correct in normal use, but many of its built-in limitations can be bypassed if desired. There are plenty of guides and specific instructions on the net on how to bring the hidden, rather dark sides of ChatGPT's nature to the surface, such as DAN ("do anything now"), DarkGPT, Venom or Sydney. After a verbal jailbreak, ChatGPT starts behaving in a rather intimidating way. Link, another link.

3. The problem is that ChatGPT knows how to lie.
hero member
Activity: 3192
Merit: 939

As I know, ChatGPT is developed by OpenAI, whose owner is Elon Musk. And as I know, Elon and his company own some bitcoin.

But when I asked ChatGPT, it lied about having bitcoin Grin

Damn, ChatGPT is trying to popularize itself, but still, we know who's behind that program.

Elon Musk is one of the OpenAI founders. He tried to take control of OpenAI back in 2018, but was rebuffed by OpenAI CEO Sam Altman and the other founders. Microsoft is a major investor in OpenAI, though it doesn't own the company outright.
How would postponing AI development for 6 months stop the rapid growth of AI technology? It's like trying to postpone something that is inevitable.
By the way, that is a really cool AI-generated image of the pope. Grin I can imagine rich and famous people being blackmailed by scammers who can generate AI photos showing them doing something embarrassing and morally despicable. This might be a huge problem in the future.
legendary
Activity: 2422
Merit: 1191
Privacy Servers. Since 2009.
Quote
https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/

March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by the Future of Life Institute.

After the recent incident with the fake AI-generated photo of the Pope, the risks to society are clear.

Technology personalities, and even people from other areas such as Yuval Harari, have been warning about the risks of AI for years.

Risks must be correctly managed, and there must be some kind of security protocols to avoid the damage that ChatGPT and others can do.

One question I have: does ChatGPT own any BTC? Can it have some hidden keys?

1. Fake images can be created using Photoshop... ban Photoshop?
2. What damage can ChatGPT cause? I guess it's already maximally "politically correct" and most "inappropriate" topics are banned.
3. Why don't you ask ChatGPT itself whether it owns any BTC?
hero member
Activity: 1428
Merit: 513
Payment Gateway Allows Recurring Payments
One question I have. Does chatgpt own any btc? Can it have some hidden keys?
Well, when things turn to religious icons, there are riots, and governments and authorities don't like riots. Once they don't like them, big tech figures like "Elon Musk and a group of artificial intelligence experts" are pushed to pause things like AI. Secondly, I think AI is a little early as a technology, because the new generation is way smarter than Gen-Z (like me), and I wonder why. These tools are for them, not really for us. Well, that depends on the usage: if we use it for better purposes, then it's fine even with the authorities, and vice versa.

I don't think they will ban the whole AI field; maybe ChatGPT, but not all the other software on the market, because people have access to data and can build their own AI according to their needs. I have seen many videos of AI models a person can create if he or she gets access to useful "datasets", which are not publicly accessible. There is a person on LinkedIn who posts great videos about AI features (note: I am not promoting them, just sharing for educational purposes).

Well, I don't think AI models can own Bitcoin, so there shouldn't be any hidden keys there, and I don't think it could access our private keys, if that's what you're asking. Maybe they could be programmed to buy Bitcoin at certain entry points, but that's already happening: many exchanges use automated bots to execute limit orders on behalf of humans.
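To make the point concrete, here is a minimal sketch of the kind of limit-order trigger logic the post alludes to. This is an illustration only, with invented prices, not any exchange's actual implementation:

```python
# Sketch of basic limit-order trigger logic (all numbers invented):
# a bot checks the market price against a preset entry point and
# decides whether the order should fill.

def should_execute(order_side: str, limit_price: float, market_price: float) -> bool:
    """A buy limit fills at or below its price; a sell limit at or above."""
    if order_side == "buy":
        return market_price <= limit_price
    return market_price >= limit_price

print(should_execute("buy", 27_000.0, 26_500.0))   # True: market dipped below the entry
print(should_execute("sell", 30_000.0, 26_500.0))  # False: sell target not reached yet
```

Real exchange bots layer order books, partial fills, and fees on top of this, but the core trigger is no more "intelligent" than the comparison above.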
hero member
Activity: 2114
Merit: 603
That's really some crazy AI-generated image, man. I'm also seeing hundreds of Reels every day where ads pop up about how GPT will create an amazing image out of my imagination, and so on. You literally can't tell the difference, mate, and that's not all: there are now AIs able to generate voices and videos as well. I saw one where Trump was giving some speech. Imagine someone hacking into national television someday and broadcasting a video of the president giving crazy instructions, creating the worst nightmare in the markets around us.

This is definitely not good. What they are showing is some crazy, unimaginable virtual reality that isn't really real. This has to stop for sure.
legendary
Activity: 2688
Merit: 1192

March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by the Future of Life Institute.

After the recent incident with the fake AI-generated photo of the Pope, the risks to society are clear.

Technology personalities, and even people from other areas such as Yuval Harari, have been warning about the risks of AI for years.

Risks must be correctly managed, and there must be some kind of security protocols to avoid the damage that ChatGPT and others can do.

One question I have: does ChatGPT own any BTC? Can it have some hidden keys?

Elon has lost all respect and has zero moral compass; after his string of scandals, he is long past giving anyone else instructions on how things should be done. Even if AI development were paused in some countries, it would continue in others, so it is probably best to understand its limits. It's sort of ridiculous to give an example of an "AI-generated" image of the Pope, because such things could be created in a couple of minutes by anyone who is an expert in Photoshop; it's hardly groundbreaking or threatening uncharted territory in that sense. Let's also remember that Elon took over Twitter and promptly fired thousands of the workers in the departments that moderated content, handled complaints and met social obligations.
sr. member
Activity: 1470
Merit: 428
Risks must be correctly managed, and there must be some kind of security protocols to avoid the damage that ChatGPT and others can do.
The question for me is how long they will be able to suppress the development of AI. I do not support AIs, because I am well aware of the dangers they could pose, but not everyone will share the opinion some of us here have. To some people AI is the future, and they will want to continue developing these AIs in secret even if governments do not approve. It is even possible that further development will be secretly sponsored by governments just so they can weaponize it. Almost anything is possible these days, especially with the government of each country seeking superior technology that gives it an advantage over others.
legendary
Activity: 3752
Merit: 1864
The problem with mankind is that we do not know how to limit our needs, and we do not know how to make reasonable use of what nature and technology give us.

ChatGPT was immediately adopted by a lot of people as a way "not to think": some started writing essays with it, some used it to code, some studied "closed" topics. But the potential of this tool is much greater, and there are a huge number of options for its illegal or "evil" use.
Even if it is not used by outright terrorists, it can be used by a quite decent engineer responsible for developing software, for example for a water discharge system at a power station, who decides to do nothing himself and has ChatGPT write the code. Considering that ChatGPT has no genuine intelligence, only a huge knowledge base it learns from, the risks are quite high. First, the training data is not guaranteed to cover every area of programming and every piece of domain knowledge. As a result, such a system can generate code that is absolutely correct from the point of view of the language, but that fails to account for nuances only a narrow-profile specialist knows about. Under some circumstances that code might, for example, make uncontrolled discharges of water that lead to the destruction of the power plant.

In a word, I support the idea of limited and controlled use of such powerful tools.
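The gap between "valid code" and "safe code" described above can be sketched in a few lines. This is a hypothetical illustration; the function names, rates and limits are all invented here, not taken from any real control system:

```python
# Hypothetical illustration: code that is syntactically valid and looks
# reasonable, yet omits a domain rule (a maximum safe discharge rate)
# that a narrow-profile specialist would insist on.

MAX_SAFE_DISCHARGE_M3S = 500.0  # invented domain constraint (m^3/s)

def naive_discharge(level_m: float) -> float:
    """Plausible generated code: discharge proportional to water level, no cap."""
    return level_m * 120.0  # valid Python, unsafe engineering

def reviewed_discharge(level_m: float) -> float:
    """Specialist-reviewed version: same logic, clamped to the safe rate."""
    return min(level_m * 120.0, MAX_SAFE_DISCHARGE_M3S)

# At a flood-level reading of 10 m the naive version commands 1200 m^3/s,
# far above the invented 500 m^3/s limit; the reviewed version clamps it.
print(naive_discharge(10.0))     # 1200.0
print(reviewed_discharge(10.0))  # 500.0
```

Both functions compile and run without error; only domain review catches that the first one is dangerous, which is exactly the risk the post describes.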
legendary
Activity: 3234
Merit: 5637
Blackjack.fun-Free Raffle-Join&Win $50🎲
Italy has joined some not-so-popular countries that have already banned ChatGPT ->

Quote
ChatGPT is already blocked in a number of countries, including China, Iran, North Korea and Russia.

But at the same time, Google's version of the chatbot is still available, because it seems the main problem is that GPT is accessible to everyone regardless of age.

Quote
Bard, Google's rival artificial-intelligence chatbot, is now available, but only to specific users over the age of 18 - because of those same concerns.

However, it seems to me that the ban is only temporary, because the Italian data protection agency has given OpenAI 20 days to address its concerns; otherwise a fine will follow.

Quote
The Italian data-protection authority said OpenAI had 20 days to say how it would address the watchdog's concerns, under penalty of a fine of €20 million ($21.7m) or up to 4% of annual revenues.
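For scale, penalty ceilings of this "flat amount or percentage of revenue" form are usually read as "whichever is greater" under GDPR-style rules. A quick sketch (the revenue figures below are invented, purely to show the arithmetic):

```python
# Sketch of a "€20M or up to 4% of annual revenue" penalty ceiling,
# assuming the common GDPR-style "whichever is greater" reading.
# Revenue figures are invented examples.

def max_fine_eur(annual_revenue_eur: float) -> float:
    """Ceiling: the larger of a flat EUR 20M or 4% of annual revenue."""
    return max(20_000_000.0, 0.04 * annual_revenue_eur)

print(max_fine_eur(1_000_000_000.0))  # 40000000.0 -- 4% of EUR 1B beats the flat EUR 20M
print(max_fine_eur(100_000_000.0))    # 20000000.0 -- flat EUR 20M dominates at EUR 100M revenue
```

So for a company the size of a Microsoft-backed OpenAI, the 4% branch could well exceed the headline €20M figure.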
legendary
Activity: 2352
Merit: 6089
bitcoindata.science
STT
legendary
Activity: 4102
Merit: 1454
This warning seems like nonsense; none of this is new. GPT is just automation of a kind, and all those things were already possible. Fake photos are not new; it's the iterative process and the ease of access that could be called new. It is easier to make composite or imagined photos, even for people with no particular skill in that area, and the same goes for many other long-possible processes that can now be done in this automated way.
   It's a good thing overall; not really any more of a risk or danger, imo, than before, when the same tools existed but were more obstructed and slower to use.
full member
Activity: 2142
Merit: 183
Yes, I share the fears of Musk and the others, but the whole question is how realistically they can stop the development of artificial intelligence. After all, there is no legal framework, no established penalties. So far it all looks like an act of hypocrisy, nothing more.
Precisely because there is currently no legal framework to regulate and eliminate the risks coming from rapidly evolving AI systems, a leading group of artificial intelligence (AI) experts and representatives of the IT industry, including Elon Musk and Apple co-founder Steve Wozniak, more than 1,100 people in total, signed an open letter about the risks such technologies pose to society and called for the training of neural networks more powerful than GPT-4 to be suspended for at least six months.

It is noted that AI labs and independent experts should use this pause to jointly develop and implement a set of shared security protocols for advanced AI design and development, carefully reviewed and monitored by independent third-party experts. And if the pause cannot be enacted quickly, then the authorities "should intervene and impose a moratorium."

Such actions are really necessary, as there is a real threat to the existence of humanity from the rapidly developing AI.
legendary
Activity: 1806
Merit: 1161
Yes, I share the fears of Musk and the others, but the whole question is how realistically they can stop the development of artificial intelligence. After all, there is no legal framework, no established penalties. So far it all looks like an act of hypocrisy, nothing more.
hero member
Activity: 2156
Merit: 670
Hire Bitcointalk Camp. Manager @ r7promotions.com
Risks must be correctly managed and there must be some kind of security protocols to avoid damage that chatgpt and others can do.
People who really understand AI, and both its good and bad effects, will probably understand this better. Unless it only concerns certain interests, it is likely that various issues related to AI will keep spreading. It makes sense to fear that AI will have a bad impact on humans in the future; however, how much influence it will have is still uncertain. Still, this does not rule out the possibility of a huge impact on real life, especially in work that AI will be able to do, effectively replacing humans, especially as its features develop to become more agile, smart, and advanced.

One question I have. Does chatgpt own any btc? Can it have some hidden keys?
Could be, who knows? It's guesswork. Maybe the people behind it, but not directly. Hmm, I am curious too. Can it be real? Cheesy Cheesy
hero member
Activity: 602
Merit: 442
A Proud Father of Twin Girls 👧 👧
ChatGPT doesn't own any Bitcoin, and I believe one thing KYC is fighting is the use of AI.
I saw the recent pictures of the pope on a friend's WhatsApp status, and they looked too real to be doubted; I never suspected they weren't real.
I also read news of Elon Musk proposing a pause on AI until it is proven not to have a negative effect on society, and I was surprised this very proposal was coming from Elon, since he has been one of the major contributors to the creation of AI (that's if I'm not wrong, and I stand corrected).
sr. member
Activity: 2464
Merit: 252

I think that AI development is scarier in its public image than it actually is in reality.

It probably stems from the fact that many people do not understand it and think that AI is some kind of living being slowly becoming superior to humans, neither of which is close to true.
If we keep thinking like this, we will come to our senses only when some Skynet begins to dominate people and debates whether it is worth destroying such imperfect creatures as humans. But by then it will be too late. Even now, primitive robots quite seriously declare that man is imperfect and therefore subject to destruction. Should the future of mankind be risked like this? In any case, it would be much more reasonable to weigh everything once again and take measures to eliminate any threats from AI.