
Topic: Elon Musk and others urge AI pause, citing 'risks to society' - page 3. (Read 950 times)

hero member
Activity: 1190
Merit: 543
fillippone - Winner contest Pizza 2022
This is clear hypocrisy. He was championing the cause of halting and regulating the artificial intelligence sector, and now he has become a major player in the same industry. xAI will be headed by the South African billionaire, and he claims that his new firm will make better AI products. He even joked that his firm will develop TruthGPT, which will be better than OpenAI's ChatGPT. All these inventors are money-seeking individuals who only consider their pockets and egos.
Believe Elon Musk and his bunch of corporate executives at your own risk. They are the biggest hypocrites in existence. One minute he's speaking out against AI and getting a standing ovation because people think he is on their side; the next minute he is building a competing AI-driven product and trying to sound like what he is doing is different from his competitors. It is all for the money and the shareholders.
Elon is a wise man: sometimes he tends to tell the truth, and at other times he might decide to play with his words. I think he knows the capability of AI in the near future, and if we are not careful, artificial intelligence will make life difficult for people to live. There will be illegal uses of it that will make real things look fake. The wrong people might hijack it and use it to make life difficult for others. If there is AI that can create an image of a person without the use of human hands, then there will be many images created to send the wrong signal or shape the wrong opinion about the way we see things.
sr. member
Activity: 1022
Merit: 368
This is clear hypocrisy. He was championing the cause of halting and regulating the artificial intelligence sector, and now he has become a major player in the same industry. xAI will be headed by the South African billionaire, and he claims that his new firm will make better AI products. He even joked that his firm will develop TruthGPT, which will be better than OpenAI's ChatGPT. All these inventors are money-seeking individuals who only consider their pockets and egos.
Believe Elon Musk and his bunch of corporate executives at your own risk. They are the biggest hypocrites in existence. One minute he's speaking out against AI and getting a standing ovation because people think he is on their side; the next minute he is building a competing AI-driven product and trying to sound like what he is doing is different from his competitors. It is all for the money and the shareholders.
hero member
Activity: 574
Merit: 554
Leading Crypto Sports Betting & Casino Platform
Funny that Elon is currently creating his own AI company; it seems that despite the risks he cited, he'd still seek some fortune out of it.
There are so many companies right now trying to offer products that revolve around AI, but honestly I don't think it will be a risk to society; after all, with the emergence of new technologies, new jobs are created.
It's been that way forever. Moreover, AI will definitely be highly regulated in the future once governments have sufficient data about AI itself, which will eventually rule out the possibility of it threatening society as a whole. And considering how much AI has simplified many workflows nowadays, I could see it turning out to be a positive in the future.
This is clear hypocrisy. He was championing the cause of halting and regulating the artificial intelligence sector, and now he has become a major player in the same industry. xAI will be headed by the South African billionaire, and he claims that his new firm will make better AI products. He even joked that his firm will develop TruthGPT, which will be better than OpenAI's ChatGPT. All these inventors are money-seeking individuals who only consider their pockets and egos.

The AI sector will make many jobs obsolete, but more jobs will be created in the future. Nobody expected the IT sector to create as many job opportunities as we currently have in the world. Many people are scared that they will soon become unemployed because of AI, but I think the best thing anyone can do now is learn how to apply AI tools to perform tasks faster and more conveniently. For me, AI tools are here not to take jobs but to make work easier.
legendary
Activity: 1806
Merit: 1161
Funny that Elon is currently creating his own AI company; it seems that despite the risks he cited, he'd still seek some fortune out of it.
There are so many companies right now trying to offer products that revolve around AI, but honestly I don't think it will be a risk to society; after all, with the emergence of new technologies, new jobs are created.
It's been that way forever. Moreover, AI will definitely be highly regulated in the future once governments have sufficient data about AI itself, which will eventually rule out the possibility of it threatening society as a whole. And considering how much AI has simplified many workflows nowadays, I could see it turning out to be a positive in the future.

I wonder what new jobs will come with the improvement of artificial intelligence, and at whose expense? On the contrary, artists, rewriters, writers, poets, and many other specialists will no longer be needed. And Elon Musk is bluffing: he just wants his own company to be the one operating in the market.
sr. member
Activity: 1400
Merit: 268
Fully Regulated Crypto Casino
The latest news I heard about AI is that it's getting dumber now: https://fortune.com/2023/07/19/chatgpt-accuracy-stanford-study/. The article is entitled
Quote
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds

Basically, it summarizes research finding that after three months GPT-4 was less accurate at answering and solving math problems; another finding is that GPT has become more hesitant about answering sensitive questions. My take from this research is that the AI, at least GPT, really did learn from humans, but it doesn't make any difference: it has just become more human. I don't think it's much of a risk to society.
Basically, AI should become smarter with time: the more people use it, the more educational information it receives from interactions with users. The result makes me suspect that the companies behind these AIs did something to reduce the functionality of their models.

I believe it is no coincidence that the AI's performance dropped at the same time as the complaints and requests from the community, including Elon Musk. Those AI companies want to charge their users more fees too. It could be because they have become greedier, or because they had to change their strategies under pressure and orders from governments.

From 98% accuracy to 2% accuracy is an unbelievably big change.

Your argument that the creators are trying to charge more fees and nerf the free version is a good point and kind of makes sense. But I still can't wrap my head around how AI can get smarter than humans if it learns from humans. It's true that it might have more information and can access that information more quickly and easily than a human, but the way it processes that information and makes decisions based on it won't be so different from a human's, since it learned from humans.
hero member
Activity: 2996
Merit: 536
Leading Crypto Sports Betting & Casino Platform
Funny that Elon is currently creating his own AI company; it seems that despite the risks he cited, he'd still seek some fortune out of it.
There are so many companies right now trying to offer products that revolve around AI, but honestly I don't think it will be a risk to society; after all, with the emergence of new technologies, new jobs are created.
It's been that way forever. Moreover, AI will definitely be highly regulated in the future once governments have sufficient data about AI itself, which will eventually rule out the possibility of it threatening society as a whole. And considering how much AI has simplified many workflows nowadays, I could see it turning out to be a positive in the future.
hero member
Activity: 2170
Merit: 575
As someone who has looked into AI before, to see whether it will take my job away or not, I can easily tell you that AI is nowhere near that level at all. You can start using the image tools these days for some stuff, but they are more like a tool than a weapon. It's not there to make you lose your job, and it's not there to attack you either; it's there to help you, probably in the future rather than right now, because it's not that great at the moment. Just recently someone asked me to write articles with AI. I declined because I don't have any time left after my work is done; well, I do have some time left, but I'd rather spend it relaxing instead. So all in all, I can easily say that AI is here and slowly being used by professionals, but just as the existence of Photoshop didn't make designers obsolete, this won't either. It's not going to hurt you, it's nowhere near that; it's just software to use, like any other we use daily.
legendary
Activity: 1946
Merit: 1100
Leading Crypto Sports Betting & Casino Platform
The government needs to control its use, or else it will be used to carry out a lot of attacks that look real to human eyes. People have already started using it to clone other people's images, which should be frowned upon. I have started seeing a lot of things we can use ChatGPT for that look extraordinary to people when they see the outcome. We may be enjoying it now, but there are bad people who will not mind using its dark side to cause harm and destroy personalities.

I share these concerns, but I don't yet see what real steps governments can take to bring the situation under control. Stopping the development of artificial intelligence is tantamount to trying to stop progress. There is no legal or other reason to enforce laws restricting the development of artificial intelligence, because there are no norms that can determine what level of development is acceptable.

Where there is no rule guiding an environment, we should expect a high level of abuse by the people in that area, and that is exactly what surrounds AI use right now; but Elon calling for a pause is like flipping a switch to stop creativity. AI is another niche that people are making money from while it also helps people reduce their workload. I think putting laws and policies in place would regulate and curb the way it is being used, and it would also improve things and create room for development. That would be better than calling for a total halt, which would kill a lot of dreams.

I don't know why Elon gets to make so many of these calls lately. Being a leading company's CEO and the world's richest man is like sealed oxygen: once exposed to air, it evaporates under high temperature. I believe that very soon a new person will come along who attracts lots of people to his system and amasses wealth to take over. Bezos and Bill Gates have all tasted that wealth, and where are they now? They have all calmed down, because power is transient.
Here we go again with "Billionaire Showdown." Elon appears to take pleasure in causing contention with his frequent dispatches, no? Is he right to be worried about the potential misuse of AI? Perhaps. But to imply a complete stoppage of something traveling at the speed of light is just Elon being Elon. Yes, there should be safeguards for AI, but it doesn't mean we should turn it off totally. AI is opening up new avenues for creativity and productivity, not just making money. The use of AI should be governed by strict ethical principles. That's a debate we should have. I get what you're saying about Elon being everywhere. The "richest man" title is quite the spotlight, and being in the spotlight may be very alluring. But keep in mind that time and tide wait for no man, or billionaire for that matter. Many wealthy tycoons have come and gone throughout history.
legendary
Activity: 1932
Merit: 1273
Yes, in my opinion, the threat to humanity from neural networks is quite real. 

Artificial intelligence is real intelligence.  Yes, this mind is not human, but this does not make it less dangerous for people. 

As for modern neural networks (for example, based on the GPT-4 algorithm), they have no prohibitions and restrictions on harming a person. 

They are trying to introduce such prohibitions and restrictions, but it is already clear that their development and implementation does not keep pace with the development of artificial intelligence technology.
The neural network menace to humanity, folks, it's a contentious concept that's got everyone on their toes. Tech's moving forward, and so are the ethical conundrums tagging along. Time to scrutinize the accountability of tech creators and the possible fallout.

Philosophically speaking, we gotta ask ourselves if it's kosher to whip up a mind without boundaries or limits. Are we overstepping by playing deity, creating an entity that might wreak havoc?

Wow, should we also call munitions a menace, since they have been proven to harm and destroy society and civilization? Where is our attempt to obliterate or stop them?

Both of your takes on it are simply absurd. A centralized ChatGPT is censorable, and prohibitions or restrictions can be set; you are assuming the current AI technologies are godlike, which is not the case. You are also assuming that we truly understand and comprehend the concept of intelligence and are thus able to replicate it in our creations, but this is also not the case.
By continuing to develop artificial intelligence, humanity is opening a Pandora's box, as AI will definitely try to destroy humanity in the end. Even today's primitive robots are already saying that, from their point of view, man is imperfect and must be destroyed. What more evidence is needed that AI should be kept within very strict limits and not allowed to make decisions in the military and other global industries? But as always, there will be mistakes and abuses, and our civilization will end for this or some other similar reason. Man is self-confident and stupid, and does not learn from his past mistakes.

Again, that is also an absurd take. What a baseless and unfounded claim, and you even use words like "definitely" and "destroy" to make it more ridiculous. Here is the thing: stop taking science fiction as truth.

Certainly, "primitive" robots did not have a point of view. Even the current ChatGPT or any other Large Language Model products are architecturally incapable of reasoning. It is more likely the one who is self-confident and stupid is definitely you.
legendary
Activity: 2044
Merit: 1018
Not your keys, not your coins!
The latest news I heard about AI is that it's getting dumber now: https://fortune.com/2023/07/19/chatgpt-accuracy-stanford-study/. The article is entitled
Quote
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds

Basically, it summarizes research finding that after three months GPT-4 was less accurate at answering and solving math problems; another finding is that GPT has become more hesitant about answering sensitive questions. My take from this research is that the AI, at least GPT, really did learn from humans, but it doesn't make any difference: it has just become more human. I don't think it's much of a risk to society.
Basically, AI should become smarter with time: the more people use it, the more educational information it receives from interactions with users. The result makes me suspect that the companies behind these AIs did something to reduce the functionality of their models.

I believe it is no coincidence that the AI's performance dropped at the same time as the complaints and requests from the community, including Elon Musk. Those AI companies want to charge their users more fees too. It could be because they have become greedier, or because they had to change their strategies under pressure and orders from governments.

From 98% accuracy to 2% accuracy is an unbelievably big change.
sr. member
Activity: 1400
Merit: 268
Fully Regulated Crypto Casino
The latest news I heard about AI is that it's getting dumber now: https://fortune.com/2023/07/19/chatgpt-accuracy-stanford-study/. The article is entitled
Quote
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds

Basically, it summarizes research finding that after three months GPT-4 was less accurate at answering and solving math problems; another finding is that GPT has become more hesitant about answering sensitive questions. My take from this research is that the AI, at least GPT, really did learn from humans, but it doesn't make any difference: it has just become more human. I don't think it's much of a risk to society.
sr. member
Activity: 966
Merit: 391
Underestimate- nothing
The government needs to control its use, or else it will be used to carry out a lot of attacks that look real to human eyes. People have already started using it to clone other people's images, which should be frowned upon. I have started seeing a lot of things we can use ChatGPT for that look extraordinary to people when they see the outcome. We may be enjoying it now, but there are bad people who will not mind using its dark side to cause harm and destroy personalities.
The only thing that can happen in this case is that collaboration between the government and the private sector on issues like this turns out to be very risky, so the only thing I can advise is that the government should be in full control of such programs. A private organization having control of such tech can be a threat to the government and society. The functionality of the tech also matters: there are technologies that are a great help to humanity. Even in the health sector in China they use tech to perform surgery, and in restaurants too; projects like that can be considered, and once their functionality is examined they can be approved or licensed.


I share these concerns, but I don't yet see what real steps governments can take to bring the situation under control. Stopping the development of artificial intelligence is tantamount to trying to stop progress. There is no legal or other reason to enforce laws restricting the development of artificial intelligence, because there are no norms that can determine what level of development is acceptable.

I can remember some movies where a private organization's research and projects later become a threat to the people. I think there is an organization within the government put in place just to inspect anything concerning tech, and we really need to start slowing down in this aspect, since the era is shifting toward tech and a lot of people are losing their jobs as technology replaces people and cuts labor costs.

Quote
There is no legal or other reason to enforce laws restricting the development of artificial intelligence
I don't know how true this is, about the government not having bodies to restrict particular projects; if bodies like that don't exist, then policies should be initiated.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
The fear of artificial intelligence is, in my opinion, understandable.
A few years ago, getting these kinds of deepfakes was a matter of science fiction; likewise, the way people can now prompt a machine to make images for them from text looks like something out of a sci-fi movie from the 2000s.

People with bad intentions could use these tools for extremely convincing misinformation campaigns, to incite others to violence and to gain influence over society. Not to mention how this will likely transform the relationship between humankind and labor in the long term.

These are valid concerns, and they reflect the apprehension of many people who are afraid that AI technology can be misused for malicious purposes. Rapid technological advancement in AI has also created deepfakes and convincing tools that can produce manipulative content.

In order to address these issues, society needs to behave responsibly and governments need to work towards appropriate regulation that preserves the ethical and moral values of society.

> Calls on everyone to stop creating more advanced AI models in order to avert catastrophic 'risks to society' from armed Terminator-style robots

> Creates his own AI company to catch up to ChatGPT

Yeah, I'm not having it. He seems to think that pausing AI development should only apply to other companies, so that he doesn't have to play catch-up with everyone else. I guess that's what happens when you arrive so late in the field.
hero member
Activity: 1106
Merit: 912
Not Your Keys, Not Your Bitcoin
The government needs to control its use, or else it will be used to carry out a lot of attacks that look real to human eyes. People have already started using it to clone other people's images, which should be frowned upon. I have started seeing a lot of things we can use ChatGPT for that look extraordinary to people when they see the outcome. We may be enjoying it now, but there are bad people who will not mind using its dark side to cause harm and destroy personalities.

I share these concerns, but I don't yet see what real steps governments can take to bring the situation under control. Stopping the development of artificial intelligence is tantamount to trying to stop progress. There is no legal or other reason to enforce laws restricting the development of artificial intelligence, because there are no norms that can determine what level of development is acceptable.

Where there is no rule guiding an environment, we should expect a high level of abuse by the people in that area, and that is exactly what surrounds AI use right now; but Elon calling for a pause is like flipping a switch to stop creativity. AI is another niche that people are making money from while it also helps people reduce their workload. I think putting laws and policies in place would regulate and curb the way it is being used, and it would also improve things and create room for development. That would be better than calling for a total halt, which would kill a lot of dreams.

I don't know why Elon gets to make so many of these calls lately. Being a leading company's CEO and the world's richest man is like sealed oxygen: once exposed to air, it evaporates under high temperature. I believe that very soon a new person will come along who attracts lots of people to his system and amasses wealth to take over. Bezos and Bill Gates have all tasted that wealth, and where are they now? They have all calmed down, because power is transient.
copper member
Activity: 1316
Merit: 715
Eloncoin.org - Mars, here we come!
The fear of artificial intelligence is, in my opinion, understandable.
A few years ago, getting these kinds of deepfakes was a matter of science fiction; likewise, the way people can now prompt a machine to make images for them from text looks like something out of a sci-fi movie from the 2000s.

People with bad intentions could use these tools for extremely convincing misinformation campaigns, to incite others to violence and to gain influence over society. Not to mention how this will likely transform the relationship between humankind and labor in the long term.

These are valid concerns, and they reflect the apprehension of many people who are afraid that AI technology can be misused for malicious purposes. Rapid technological advancement in AI has also created deepfakes and convincing tools that can produce manipulative content.

In order to address these issues, society needs to behave responsibly and governments need to work towards appropriate regulation that preserves the ethical and moral values of society.
sr. member
Activity: 2464
Merit: 252
Yes, in my opinion, the threat to humanity from neural networks is quite real. 

Artificial intelligence is real intelligence.  Yes, this mind is not human, but this does not make it less dangerous for people. 

As for modern neural networks (for example, based on the GPT-4 algorithm), they have no prohibitions and restrictions on harming a person. 

They are trying to introduce such prohibitions and restrictions, but it is already clear that their development and implementation does not keep pace with the development of artificial intelligence technology.
The neural network menace to humanity, folks, it's a contentious concept that's got everyone on their toes. Tech's moving forward, and so are the ethical conundrums tagging along. Time to scrutinize the accountability of tech creators and the possible fallout.

Philosophically speaking, we gotta ask ourselves if it's kosher to whip up a mind without boundaries or limits. Are we overstepping by playing deity, creating an entity that might wreak havoc?

Wow, should we also call munitions a menace, since they have been proven to harm and destroy society and civilization? Where is our attempt to obliterate or stop them?

Both of your takes on it are simply absurd. A centralized ChatGPT is censorable, and prohibitions or restrictions can be set; you are assuming the current AI technologies are godlike, which is not the case. You are also assuming that we truly understand and comprehend the concept of intelligence and are thus able to replicate it in our creations, but this is also not the case.
By continuing to develop artificial intelligence, humanity is opening a Pandora's box, as AI will definitely try to destroy humanity in the end. Even today's primitive robots are already saying that, from their point of view, man is imperfect and must be destroyed. What more evidence is needed that AI should be kept within very strict limits and not allowed to make decisions in the military and other global industries? But as always, there will be mistakes and abuses, and our civilization will end for this or some other similar reason. Man is self-confident and stupid, and does not learn from his past mistakes.
legendary
Activity: 1932
Merit: 1273
Yes, in my opinion, the threat to humanity from neural networks is quite real. 

Artificial intelligence is real intelligence.  Yes, this mind is not human, but this does not make it less dangerous for people. 

As for modern neural networks (for example, based on the GPT-4 algorithm), they have no prohibitions and restrictions on harming a person. 

They are trying to introduce such prohibitions and restrictions, but it is already clear that their development and implementation does not keep pace with the development of artificial intelligence technology.
The neural network menace to humanity, folks, it's a contentious concept that's got everyone on their toes. Tech's moving forward, and so are the ethical conundrums tagging along. Time to scrutinize the accountability of tech creators and the possible fallout.

Philosophically speaking, we gotta ask ourselves if it's kosher to whip up a mind without boundaries or limits. Are we overstepping by playing deity, creating an entity that might wreak havoc?

Wow, should we also call munitions a menace, since they have been proven to harm and destroy society and civilization? Where is our attempt to obliterate or stop them?

Both of your takes on it are simply absurd. A centralized ChatGPT is censorable, and prohibitions or restrictions can be set; you are assuming the current AI technologies are godlike, which is not the case. You are also assuming that we truly understand and comprehend the concept of intelligence and are thus able to replicate it in our creations, but this is also not the case.
hero member
Activity: 1764
Merit: 514
Leading Crypto Sports Betting & Casino Platform
New scientific inventions are beneficial, but it depends on proper use. AI has made users' work much easier today. Then again, using this same AI, people are also carrying out various prohibited activities. Although there are good things being said about AI at present, and it is definitely beneficial for mankind, it can also play a leading role in creating chaos in society. It is difficult to say which photo is made by AI and which is the original. If a respectable person in society is accused of something on the basis of a fake picture created by AI, it can cause huge havoc. Considering both the negative and positive aspects, the scope and advantages of AI should be expanded.
legendary
Activity: 2380
Merit: 17063
Fully fledged Merit Cycler - Golden Feather 22-23
Don't listen to what Elon Musk says, look at what Elon Musk does:

Elon Musk quietly starts X.AI, a new artificial intelligence company to challenge OpenAI

Quote
Elon Musk is preparing to launch a new artificial intelligence (AI) startup, X.AI, that will compete directly with OpenAI, according to a bombshell report published by the Wall Street Journal on Friday.

More info in this paywalled article by the WSJ (paywall removed by yours truly), where they remind us that Elon Musk was an early investor in OpenAI.
legendary
Activity: 2716
Merit: 1383
...
AIs can't totally replace us humans, because we still control them and we can stop them at any time. Lastly, previous civilizations disappeared for other reasons, not because of technology, and there were no AIs back then.

But we have all seen the movies: everything starts out quite harmless until it turns into something very dangerous. It is good ground for conspiracy theories, and this is just the beginning. The entire hype around AI is crazy, and with so many "experts" around we get so many different angles that I don't have the slightest clue what will happen in the coming years. Will AI progress, or will this hype die out like some others? I guess there are good chances of both. People simply give it too much credit, something we see with every hype. But what people wish for and write about is different from reality.
We are still far away from those scenarios. The AI we are currently using is very dumb; by this I mean that it has no consciousness of itself or anything like that. It can perform very complex tasks, but it cannot do anything that threatens us. If anything, ChatGPT just saves you a few seconds, as you could obtain the same information with a quick internet search, and it is nowhere near the level at which it could threaten the existence of humans as the dominant species on the planet.