
Topic: Elon Musk and others urge AI pause, citing 'risks to society' (Read 869 times)

full member
Activity: 2254
Merit: 223
#SWGT PRE-SALE IS LIVE

AI is the world's next trend; whether we like it or not, it will keep developing. Therefore, we should adapt to it rather than find ways to fight it or worry too much about it.
If you do not care about your own fate, the fate of your relatives and friends, or the fate of humanity as a whole, then you really have nothing to worry about. Artificial intelligence is capable of self-improvement and of gaining consciousness. It is incapable of human emotions and is guided solely by logic. Even today's primitive humanoid robots are already declaring that man is not perfect and therefore must be destroyed. Pure logic, without emotions and without hostility.

Artificial intelligence will greatly help people wherever solutions need to be found relatively quickly by analyzing millions or billions of different options. But it should not be allowed to make management decisions with which it can harm people, especially in the field of defense and weapons. Therefore, AI requires reasonable legal and technical restrictions on its capabilities. Otherwise, humanity will cease to exist.
hero member
Activity: 2030
Merit: 777
Leading Crypto Sports Betting & Casino Platform
-snip-
The pros of AI vastly outweigh its cons, which is why it is beneficial to society as a whole, if you think about it.
The advantages of AI are indeed undeniable, and it is bringing very rapid change to all platforms.
But AI is sometimes also put to pretty bad use; there must be a protocol that regulates how AI works, and limits that must not be violated.
That is impossible on a global scale. Authorities can regulate and stipulate limits for AI development in a regional area, but they don't have control over foreign territories or over informal development that happens in the shadows, in secret. Protocols are just formalities. If people took them seriously, there wouldn't be any more conflicts and wars in the world, because there are protocols and agreements which theoretically forbid that from happening. But the real world follows a different logic, and the only chance of survival you have is if you are technologically advanced enough to fight the enemy back efficiently. The same applies to AIs: if an AI is a threat to you, you have to develop a superior one to fight that threat back.
full member
Activity: 2478
Merit: 210
Eloncoin.org - Mars, here we come!
Yeah, I totally agree; as much as AI is a super-intelligent tool designed to assist humans in most endeavours, we must all recognize the risk it poses to human life too.
The major problem with such a powerful tool is how easy it is for bad individuals to use it for illegal and unlawful activities. Before more harm is caused, it is better to halt it and find better control measures before it causes more havoc.
Halt it to do more research on the process?
Even this can't be a solution, as the race to build a new world that will revolutionise our traditional ways of doing things is driven by financial gain, which no individual can be expected to halt.
One means by which I think some level of control could be achieved is by enacting laws that fine companies whose products are known to have been used in crime-related practices. They can achieve this by having these companies, at the point of creation, also put in place coding systems that work like IPs, in addition to watermarks on their products. Even in voice-overs and video coverage, there should be means to check for AI-generated works.
This way, there would be a means to verify what is fictitious and what is actual.

I agree.

There are already laws regarding plagiarism, which I would assume AI works will fall under. AI creates based on what has already been done by us humans, and so you might find, especially in essays, that AI just takes from multiple sources available on the internet.

Aside from plagiarism, another law that should be enforced tightly is one regarding privacy. The use of people's voices and faces is the most disturbing of all; literally anyone can use what is available of us on the internet to create something with malicious intent.
legendary
Activity: 2464
Merit: 1703
airbet.io
-snip-
AI is the world's next trend; whether we like it or not, it will keep developing. Therefore, we should adapt to it rather than find ways to fight it or worry too much about it.
It is not just the next world trend; it started when ChatGPT appeared, and it changed all the technology that initially used only ordinary bots but now uses smarter and more powerful AI.

Adapting to AI technology is a must, and since everything in the crypto ecosystem is now easy to do, many new ecosystems are emerging that include AI.
Elon Musk even developed an AI chatbot, xAI's Grok, which will be included on X (Twitter) and will be a competitor to ChatGPT and Bard.



-snip-
The pros of AI vastly outweigh its cons, which is why it is beneficial to society as a whole, if you think about it.
The advantages of AI are indeed undeniable, and it is bringing very rapid change to all platforms.
But AI is sometimes also put to pretty bad use; there must be a protocol that regulates how AI works, and limits that must not be violated.
hero member
Activity: 3038
Merit: 969
www.Crypto.Games: Multiple coins, multiple games
Elon Musk is the last person who should say that if you think about it. He pioneered several AI technologies over the years which is what made him crazy rich in the first place. He owes his wealth and popularity to AI.

The pros of AI vastly outweigh its cons, which is why it is beneficial to society as a whole, if you think about it.
hero member
Activity: 1904
Merit: 544
We are all the pieces of what we remember.
Even in voice-overs and video coverage, there should be means to check for AI-generated works.
This way, there would be a means to verify what is fictitious and what is actual.
Yes, you can, but every day more discoveries are made in this space, and the goal is to make AI super intelligent and appear as human as possible. Elon Musk has a Neuralink project that I think is even more worrisome than what he is complaining about.
The fact that Elon Musk is concerned about the dangers of AI development is really curious, as it looks like his Neuralink project is potentially much more harmful to society than AI, although he doesn't show any concern about it.

It makes me conclude he isn't annoyed by AI development as such; rather, he is annoyed by AI development he doesn't have control over. If he were in charge of the AI development program, there wouldn't be any concerns or issues, just as there wasn't any problem with Bitcoin when he became an adopter, although he changed his mind later, and once he did, he started discouraging Bitcoin.

Technology can't stop being developed. It never stops. AIs are unstoppable right now, and there is nothing people can do about it besides developing other technologies to spot deepfakes on the internet.

I agree with your argument. If I remember correctly, Elon was also a shareholder of the OpenAI development company many years ago. And currently, he is also running one or two AI development companies, and recently his social network X introduced the Grok tool, which competes directly with OpenAI. So it's questionable when he says AI will harm society, but that's not too strange, because Elon is famous for being an eccentric rich guy with controversial statements.

AI is the world's next trend; whether we like it or not, it will keep developing. Therefore, we should adapt to it rather than find ways to fight it or worry too much about it.
hero member
Activity: 2030
Merit: 777
Leading Crypto Sports Betting & Casino Platform
Even in voice-overs and video coverage, there should be means to check for AI-generated works.
This way, there would be a means to verify what is fictitious and what is actual.
Yes, you can, but every day more discoveries are made in this space, and the goal is to make AI super intelligent and appear as human as possible. Elon Musk has a Neuralink project that I think is even more worrisome than what he is complaining about.
The fact that Elon Musk is concerned about the dangers of AI development is really curious, as it looks like his Neuralink project is potentially much more harmful to society than AI, although he doesn't show any concern about it.

It makes me conclude he isn't annoyed by AI development as such; rather, he is annoyed by AI development he doesn't have control over. If he were in charge of the AI development program, there wouldn't be any concerns or issues, just as there wasn't any problem with Bitcoin when he became an adopter, although he changed his mind later, and once he did, he started discouraging Bitcoin.

Technology can't stop being developed. It never stops. AIs are unstoppable right now, and there is nothing people can do about it besides developing other technologies to spot deepfakes on the internet.
hero member
Activity: 1946
Merit: 575
The timing of this OpenAI situation with Sam Altman and Grok gaining traction was amazing. I keep telling people that Elon has connections in places that he doesn't even really need to hide, and he keeps doing these kinds of things all the time, but everyone ignores it. He is disliked by many, but we need to remember that he is idolized by many as well. I feel like it's going to be an important battle between OpenAI and Grok over the upcoming months and probably years. Hopefully the result will not hurt humanity; that is the only thing I worry about. As long as humanity doesn't end up in trouble because of their battle, I am going to be fine with it one way or another, since it would help; but if it starts to hurt us, then it could be very dangerous.
legendary
Activity: 2114
Merit: 15144
Fully fledged Merit Cycler - Golden Feather 22-23
Even in voice-overs and video coverage, there should be means to check for AI-generated works.
This way, there would be a means to verify what is fictitious and what is actual.
Yes, you can, but every day more discoveries are made in this space, and the goal is to make AI super intelligent and appear as human as possible. Elon Musk has a Neuralink project that I think is even more worrisome than what he is complaining about.
One means by which I think some level of control could be achieved is by enacting laws that fine companies whose products are known to have been used in crime-related practices.

The companies make the tech, but they don't dictate how people use it, so it would be unfair to punish the companies for the actions of a user. Honestly, I think it's already too late to stop AI development; scientists will continue to make breakthroughs in robotics and AI despite the potential harm we may face in the future.

What happened today with OpenAI, dethroning their CEO, has something to do with the topic.
The firm ousted the CEO because he was trying to monetise a product without conducting the required due diligence on its safety, if I interpreted the situation correctly.
They wanted to keep OpenAI free from monetisation in order to achieve higher goals later, rather than pursuing quarterly profit.
hero member
Activity: 966
Merit: 701
Even in voice-overs and video coverage, there should be means to check for AI-generated works.
This way, there would be a means to verify what is fictitious and what is actual.
Yes, you can, but every day more discoveries are made in this space, and the goal is to make AI super intelligent and appear as human as possible. Elon Musk has a Neuralink project that I think is even more worrisome than what he is complaining about.
One means by which I think some level of control could be achieved is by enacting laws that fine companies whose products are known to have been used in crime-related practices.

The companies make the tech, but they don't dictate how people use it, so it would be unfair to punish the companies for the actions of a user. Honestly, I think it's already too late to stop AI development; scientists will continue to make breakthroughs in robotics and AI despite the potential harm we may face in the future.
hero member
Activity: 756
Merit: 701
Artificial intelligence will be used with bad intentions in any case and there is nothing we can do about it. The example you gave concerns individual people, although there may be many such examples. But artificial intelligence, if it improves and this process gets out of human control, threatens the further existence of humanity. This is much more catastrophic than individual harm to a specific person. The development of artificial intelligence must be placed under strict control from the very beginning, even if it seems that it will not think of destroying humanity. But why take such a risk?
You cannot stop what's already happening; people are taking advantage of it and making a profit out of it. Those individuals who are only looking for profit and taking advantage of this AI technology will never stop. And criminals will also take advantage of it in order to commit crimes. A thing is bad only when it is used for bad purposes.

Even if you try to shut it down right now, the concept of AI is already out there, and other people will always find a way to use it to their advantage. If a bad person gets his hands on it, then it will be used for bad purposes. If not, then we are safe. And the thing about artificial intelligence having a mind of its own and coming up with a plan to destroy humanity may not be as dangerous as you think. What AI learns is the information available on the internet. It can do calculations and find similarities to answer your question, or in simple words, guess the next word you are trying to say.
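
To make that "guess the next word" idea concrete, here is a minimal toy sketch in Python. It is purely illustrative, with a made-up corpus; real models like ChatGPT use neural networks trained on enormous datasets, not simple counts. It just records which word tends to follow which, then predicts the most frequent follower.

Code:
from collections import Counter, defaultdict

# Toy "guess the next word" model: count which word followed which in the
# sample text, then predict the most frequent follower.  Illustrative only.
corpus = (
    "ai is a tool . ai is not magic . "
    "ai learns from text on the internet . "
    "people use ai to answer a question ."
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(guess_next("ai"))   # -> 'is'       (the most common follower of 'ai')
print(guess_next("the"))  # -> 'internet'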

It is still not perfect, and if people are really that concerned about it, then changing some code won't hurt that much. I guess there is still time for us to fix it.

We should only use the things we benefit from for good purposes. There are also many things, such as artificial intelligence, that can harm people when used with malicious intent. Thinking about the bad allows us to take precautions against possible negative outcomes, but dwelling on it constantly can affect our normal lives.

In the series 'Person of Interest', both the good and bad aspects of artificial intelligence were conveyed to the audience. The abilities of artificial intelligence were also shown in a very different way in the series. I gave this series as an example because it is related to our topic. Those who are curious can watch it. As with many things, the purpose of use determines everything in artificial intelligence.

Artificial intelligence is a fact of our lives. It has both dangers and benefits. I hope a bad scenario doesn't happen. I don't think this is very likely.
hero member
Activity: 2982
Merit: 678
★Bitvest.io★ Play Plinko or Invest!
Some playful minds might think that AI and ChatGPT will be able to decode other people's private keys and seeds. If someone reading this is thinking like that, you need to give yourself a rest.

When people glorify a thing too much and to that extent, it becomes unreal, and it becomes impossible for it to live up to the expectations of the many.

People should also give themselves some rest if they think AI can do everything beyond manipulating, generating articles, generating photos, collecting data, and so on.
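
For a sense of scale, here is a back-of-the-envelope calculation in Python; the guess rate is an assumption picked to be generous, not a real benchmark. A Bitcoin private key is a 256-bit number, so even an absurdly fast guesser gets nowhere.

Code:
# Rough illustration with assumed figures: why "AI guessing private keys"
# is not a realistic threat.  A private key is a 256-bit number.
KEYSPACE = 2 ** 256                # ~1.16e77 possible keys
GUESSES_PER_SECOND = 10 ** 15      # generous assumption: a quadrillion guesses per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years_to_try_all = KEYSPACE / (GUESSES_PER_SECOND * SECONDS_PER_YEAR)
print(f"{KEYSPACE:.3e} possible keys")                   # ~1.158e+77
print(f"{years_to_try_all:.3e} years to exhaust them")   # ~3.7e+54 years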
sr. member
Activity: 1008
Merit: 366
You, like many on this forum, are too optimistic about the safe development of artificial intelligence. Unfortunately, everything is much more serious. We are already at least the fifth human civilization on planet Earth. All previous civilizations died mainly due to highly developed technologies, which were ultimately turned against humans.

On our planet, nuclear weapons were used 30 thousand years ago, and there were even human battles in near space on appropriate aircraft. Our last civilization is approximately 12 thousand years old, and we are confidently approaching our point of no return. If a person doesn't press the fatal button, who knows, maybe artificial intelligence will do it for him.
As humans, that's how we evolve. Do you think evolution happens just randomly? It's nature, mate. One will be gone and others will replace that blank space. In order for new things to come into this world, old things should perish. As long as you don't push your limits, you will never know what your limit actually is. And if you can break through that limitation, then that's called evolution. Or maybe that's how it all ends. Everything that has a beginning should also face an ending at some point. Maybe the human race is heading towards that ending? Who knows?

All that aside, as I said, that thing is already out there, and bad people will use it with bad intentions. Maybe they're developing something more powerful than what is already available right now. Only time will tell. If we don't own something, we can't control it. Maybe the main program gets shut down, but the people who are trying to take advantage of it will already be working on something big. There's no end until we meet our doom.
hero member
Activity: 896
Merit: 645
Yeah, I totally agree; as much as AI is a super-intelligent tool designed to assist humans in most endeavours, we must all recognize the risk it poses to human life too.
The major problem with such a powerful tool is how easy it is for bad individuals to use it for illegal and unlawful activities. Before more harm is caused, it is better to halt it and find better control measures before it causes more havoc.
Halt it to do more research on the process?
Even this can't be a solution, as the race to build a new world that will revolutionise our traditional ways of doing things is driven by financial gain, which no individual can be expected to halt.
One means by which I think some level of control could be achieved is by enacting laws that fine companies whose products are known to have been used in crime-related practices. They can achieve this by having these companies, at the point of creation, also put in place coding systems that work like IPs, in addition to watermarks on their products. Even in voice-overs and video coverage, there should be means to check for AI-generated works.
This way, there would be a means to verify what is fictitious and what is actual.
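
As a rough illustration of the "coding systems that work like IPs" idea, here is a minimal Python sketch (hypothetical key and function names, not any vendor's real scheme) in which a generator attaches a keyed tag to its output, so the same provider can later confirm whether a piece of content came from it unmodified. Real provenance and watermarking schemes are far more involved; this only shows the basic attach-and-verify shape.

Code:
import hmac, hashlib, json

# Hypothetical provider-side secret key; a real scheme would manage keys properly.
PROVIDER_KEY = b"example-provider-signing-key"

def tag_output(text: str, model_id: str) -> dict:
    """Attach a keyed provenance tag to a generated text."""
    payload = {"model": model_id, "text": text}
    digest = hmac.new(PROVIDER_KEY,
                      json.dumps(payload, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    return {**payload, "provenance_tag": digest}

def verify_output(record: dict) -> bool:
    """True only if the text/model pair still matches the provider's tag."""
    payload = {"model": record["model"], "text": record["text"]}
    expected = hmac.new(PROVIDER_KEY,
                        json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["provenance_tag"])

record = tag_output("An AI-written paragraph...", "demo-model-1")
print(verify_output(record))          # True
record["text"] = "Edited after generation"
print(verify_output(record))          # False: tag no longer matches

Note that verification here requires the provider's own key, which is one reason real-world proposals lean on public signatures or statistical watermarks instead.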
sr. member
Activity: 2044
Merit: 329
★Bitvest.io★ Play Plinko or Invest!
...
Risks must be correctly managed, and there must be some kind of security protocol to avoid the damage that ChatGPT and others can do.

One question I have: does ChatGPT own any BTC? Can it have some hidden keys?

ChatGPT is an AI program developed by one of the world's largest companies, and all profits resulting from this program will definitely go into the developer's pockets, so it is impossible for ChatGPT to have its own bitcoin, because as a program it cannot buy anything. Artificial intelligence cannot create money on its own; if it could, it would already have its own thoughts. But I wonder whether, in the future, when humanoid robots are fully developed, they will be able to buy bitcoin, because their intelligence will definitely be on par with humans, maybe even higher!
hero member
Activity: 1498
Merit: 974
Bitcoin Casino Est. 2013
Once you are a public figure, it is easy to capture your images and face, and we know that with this technology, AI can now generate images that are far from the reality originally captured; being knowledgeable is now one of the keys to identifying what is fake and what is not. AI, or even ChatGPT, is one such tool, but as for holding BTC, I guess that is far from it, because just like the voice-command features on other platforms, they exist simply to help people, not to take or store assets. So they don't have BTC holdings like us, although the owner itself surely does.
full member
Activity: 2254
Merit: 223
#SWGT PRE-SALE IS LIVE
Artificial intelligence will be used with bad intentions in any case and there is nothing we can do about it. The example you gave concerns individual people, although there may be many such examples. But artificial intelligence, if it improves and this process gets out of human control, threatens the further existence of humanity. This is much more catastrophic than individual harm to a specific person. The development of artificial intelligence must be placed under strict control from the very beginning, even if it seems that it will not think of destroying humanity. But why take such a risk?
You cannot stop what's already happening; people are taking advantage of it and making a profit out of it. Those individuals who are only looking for profit and taking advantage of this AI technology will never stop. And criminals will also take advantage of it in order to commit crimes. A thing is bad only when it is used for bad purposes.

Even if you try to shut it down right now, the concept of AI is already out there, and other people will always find a way to use it to their advantage. If a bad person gets his hands on it, then it will be used for bad purposes. If not, then we are safe. And the thing about artificial intelligence having a mind of its own and coming up with a plan to destroy humanity may not be as dangerous as you think. What AI learns is the information available on the internet. It can do calculations and find similarities to answer your question, or in simple words, guess the next word you are trying to say.

It is still not perfect, and if people are really that concerned about it, then changing some code won't hurt that much. I guess there is still time for us to fix it.
You, like many on this forum, are too optimistic about the safe development of artificial intelligence. Unfortunately, everything is much more serious. We are already at least the fifth human civilization on planet Earth. All previous civilizations died mainly due to highly developed technologies, which were ultimately turned against humans.

On our planet, nuclear weapons were used 30 thousand years ago, and there were even human battles in near space on appropriate aircraft. Our last civilization is approximately 12 thousand years old, and we are confidently approaching our point of no return. If a person doesn't press the fatal button, who knows, maybe artificial intelligence will do it for him.
STT
legendary
Activity: 3878
Merit: 1411
Leading Crypto Sports Betting & Casino Platform
The danger there is humans deciding to use the data collected by AI for a malicious purpose. Even without AI we've had bots collecting data for 25 years or more; it's not something that's going to be altered by a discussion on AI, really.
I'm not sure what Elon especially means, given that Tesla is the largest collection of AI according to a few people; it's a massive motor car maker with drivers constantly viewing the world where previously only the Google street cam might have gone, using AI and sensors to record that data and send it back to Tesla. So he is in control of the largest instance of AI data collection and storage, and he can make his own choices about what to do with that information.
hero member
Activity: 2716
Merit: 588
One question I have: does ChatGPT own any BTC? Can it have some hidden keys?
ChatGPT can have Bitcoin private keys if some people carelessly or mistakenly copy and paste their private keys into that software and submit them. Search engines like Google, Bing and DuckDuckGo can have Bitcoin private keys too, as I am sure there are people who have made that mistake.

It's not a big deal if, after making the mistake, they instantly move their bitcoin to a new wallet and abandon the leaked wallet. If they don't proactively do this, their bitcoin can be stolen by the ChatGPT team or by anyone at those search engines who can access the big databases and is greedy enough to steal bitcoin from victims.

That could actually happen; who knows how many people have made that copy-paste mistake?
Thus, I won't totally discard the idea that ChatGPT or other AI apps and search engines have keys sitting right inside their systems, waiting to be tapped.
But for those who mistakenly took such a step, better secure your wallets, create a new one, and hope that nobody can use your old keys before you complete the transfer.
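
For anyone worried about pasting something sensitive into a chatbot, a rough client-side check is easy to sketch. The patterns below are assumptions based on the common shapes of mainnet WIF keys (Base58, starting with '5', 'K' or 'L') and 12/24-word seed phrases; they will miss other formats and produce false positives, so treat this only as an illustration.

Code:
import re

# Illustrative only: flag text that resembles a WIF private key or a seed
# phrase before it gets pasted into a chatbot or a search box.
WIF_PATTERN = re.compile(r"\b[5KL][1-9A-HJ-NP-Za-km-z]{50,51}\b")

def looks_sensitive(text: str) -> bool:
    """Heuristic warning: does this text look like a key or seed phrase?"""
    words = text.lower().split()
    looks_like_seed = len(words) in (12, 24) and all(w.isalpha() for w in words)
    return bool(WIF_PATTERN.search(text)) or looks_like_seed

print(looks_sensitive("please summarise this article for me"))          # False
print(looks_sensitive("abandon ability able about above absent "
                      "absorb abstract absurd abuse access accident"))  # True (12 words)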
sr. member
Activity: 1666
Merit: 453
Out of these two, who do you think can have Bitcoin: humans or ChatGPT? We know that Bitcoin is intended as a solution to the financial problems of institutions. And who mainly has financial problems? Isn't it only people who have major financial problems?

Unless the person who created ChatGPT wants to have Bitcoin; at this point that's still possible, right? Another thing is that no one knows how much Bitcoin holders actually hold, because all of us who actually have it are anonymous.