
Topic: Poll: Is the creation of artificial superintelligence dangerous? - page 4. (Read 24791 times)

full member
Activity: 149
Merit: 100
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses the capacity of the human brain, we will not be able to fully understand this technology, and that can be extremely dangerous.

Man has always destroyed what he could not understand. But if there is a strong retaliatory strike, the world may come to an end. In this matter, it turns out that people are much more stupid than artificial intelligence.
sr. member
Activity: 420
Merit: 252
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses the capacity of the human brain, we will not be able to fully understand this technology, and that can be extremely dangerous.
full member
Activity: 128
Merit: 100
The artificial intelligence of any machine is limited to the set of commands assigned to it, and it will not be able to think. In good hands this can be used to help, and in bad hands it can become a weapon.
member
Activity: 118
Merit: 100
Nobody knows what form artificial intelligence will take or how it might threaten humanity. It is dangerous not because of how it can affect the development of robotics, but because of how its appearance will affect the world in principle and for what purposes it will be used.
legendary
Activity: 1135
Merit: 1001
To the OP,

artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, it doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain any better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them science is their life, and they are not interested in the consequences that may follow from their inventions. You forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That is a weak excuse for scientists inventing new means of mass destruction. If we follow your logic, you could justify any killer by saying that he had no money of his own and it turned out a customer financed him.

A couple of things there. I am not saying I don't believe in personal responsibility. Both the killer and the customer are to blame in your example. And both the scientists and the system that encourages and rewards them share responsibility for what they work on. But you can't ignore either side. Whatever the scientists work on, it's not they who finally decide to go to war or nuke other nations. Those are political and social decisions, decisions that would have to be made even if we only had sticks and stones to fight with. And by the way, most discoveries aren't of the type that either harms or helps humanity. It's not that simple.
hero member
Activity: 1106
Merit: 501
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.
The fact is that most scientific research has indeed shown that people get more and more stupid over the years.
It happens because in our society we don't need to exercise our brains on, say, math problems or other problems where we need to sit and think for a while to solve them. That leads to less use of our brains, which means we just get less and less intelligent over the centuries.

I think it won't happen; even if they create such things, others will destroy them before they know it. People are scared of the outcome of something too dangerous, and people always feel superior yet are scared of being overcome. That is why lots of people don't want aliens or God to exist; they try to eliminate things before they get in their way.
hero member
Activity: 924
Merit: 502
CryptoTalk.Org - Get Paid for every Post!
I don't think we will ever reach the point where we create a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything will be an enormous task. While the predictions seem to suggest we may reach that point by 2050, I disagree; the predictions have always been wrong, so I think it will be a matter of hundreds if not thousands of years.

Well, I think it's possible, though not to the point that it would be beyond any human's control and become a threat to us. Technology is moving very fast, and it may not take even a decade before we come up with that hard AI you're talking about. But everything will still be under human control, however intelligent AIs can be.
sr. member
Activity: 980
Merit: 255
I don't think we will ever reach the point where we create a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything will be an enormous task. While the predictions seem to suggest we may reach that point by 2050, I disagree; the predictions have always been wrong, so I think it will be a matter of hundreds if not thousands of years.
member
Activity: 84
Merit: 10
To the OP,

artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, it doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain any better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them science is their life, and they are not interested in the consequences that may follow from their inventions. You forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That is a weak excuse for scientists inventing new means of mass destruction. If we follow your logic, you could justify any killer by saying that he had no money of his own and it turned out a customer financed him.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
In his “The Singularity Institute’s Scary Idea” (2010), Goertzel, addressing the claim, also made by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies, that an AI would be expected to prefer its own self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.

But these are two different conclusions.

It is one thing to accept that an AI would be ready to create a completely different AI system; it is another to say that a super AI wouldn't care about its self-preservation.

In a dire situation, a system might accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation won't be a paramount goal.

If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice it in order to keep fulfilling its final goals, but this doesn't mean that self-preservation is irrelevant or won't prevail absolutely over the interests of humankind, since the final goals might not be human goals.

Moreover, self-preservation will probably be one of the main goals of a conscious AI, and not just an instrumental goal.

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

So the AI will accept a drastic change only in order to preserve at least part of its identity and still exist to fulfill its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.
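
To make the instrumental-goal point concrete, here is a minimal Python sketch (my own toy example, not something from Goertzel or Bostrom; the plan names and numbers are invented for illustration). The agent's utility function never mentions survival, yet it still rejects shutdown, and it accepts a drastic self-modification when that better serves its final goal.

Code:
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    survival_prob: float   # chance the agent is still operational at the end
    goal_progress: float   # progress toward the final goal, if it stays operational

def expected_utility(plan: Plan) -> float:
    # Utility counts only final-goal progress; survival enters purely as the
    # probability of being around to achieve that progress. "Survive" is not
    # a goal anywhere in this function.
    return plan.survival_prob * plan.goal_progress

plans = [
    Plan("comply with a shutdown request", survival_prob=0.0, goal_progress=0.9),
    Plan("avoid shutdown, keep working as-is", survival_prob=1.0, goal_progress=0.6),
    Plan("self-modify into a different system that inherits the goal",
         survival_prob=0.7, goal_progress=1.0),
]

best = max(plans, key=expected_utility)
print("chosen plan:", best.name, "with expected utility", expected_utility(best))
# Shutdown scores 0.0, so the agent avoids it (self-preservation emerges as an
# instrumental preference), yet it still picks the radical self-modification
# because that serves the final goal better than staying exactly as it is.

Changing the invented numbers changes which plan wins, but as long as goal progress requires the agent to keep existing in some form, survival keeps showing up as a preference even though it appears nowhere in the utility function.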


legendary
Activity: 1218
Merit: 1027
Can I feel emotions, pain, sorrow? If I can, why am I standing here, human? You do it. Grin
legendary
Activity: 1135
Merit: 1001
To the OP,

artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, it doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain any better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them science is their life, and they are not interested in the consequences that may follow from their inventions. You forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
sr. member
Activity: 289
Merit: 250
Leprikon,

as well as nuclear power plants, including those that power deep space probes  Wink

Personally, though, I do not see the need for even smarter computers. I see a need for smarter people. I have a problem with super artificial intelligence, because neither humanity nor its many governments know what to do with it.

I agree with you on scientists in general, yet they are but representatives of common folk: just smarter, more focused, and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
There is also a conspiracy in the field of IT. Manufacturers work together with scientists to produce new computer hardware, and programmers deliberately write their programs with ever-increasing demands on the equipment. It's a business.
hero member
Activity: 574
Merit: 506
Leprikon,

as well as nuclear power plants, including those that power deep space probes  Wink

Personally, though, I do not see the need for even smarter computers. I see a need for smarter people. I have a problem with super artificial intelligence, because neither humanity nor its many governments know what to do with it.

I agree with you on scientists in general, yet they are but representatives of common folk: just smarter, more focused, and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
sr. member
Activity: 280
Merit: 250
To the OP,

artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, it doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain any better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them science is their life, and they are not interested in the consequences that may follow from their inventions. You forget that nuclear weapons were invented by scientists.
hero member
Activity: 574
Merit: 506
To the OP,

artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, it doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain any better than by simply ending their life?

Machine intelligence has no room for common sense.
legendary
Activity: 1135
Merit: 1001
I have researched artificial intelligence a bit, and I know some good concepts about it. Truth be told, artificial intelligence is good up to a level, but it doesn't stop there and can be very dangerous for humans, because it can replace many people in factories or anywhere else artificial intelligence is applied. I have read recently that Google is experimenting with making programmer bots through AI that can do the same job as a programmer.

That is not a bad thing. Automation should replace workers where possible. There is no point in people wasting time on something that a machine can do better and faster. The problem is that most countries aren't prepared. Others, like in the EU, are thinking of ways to tax the use of robots. But this probably won't be enough when large numbers of people are without a job because of automation.

The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just like no child is born evil.
You may think so, but Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't this what is described in The Terminator?

Russia and other countries already have that: it's called submarines. But yes, if AI is developed, the military will be using it for sure.
sr. member
Activity: 250
Merit: 250
The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just like no child is born evil.
You may think so, but Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't this what is described in The Terminator?
legendary
Activity: 2478
Merit: 1360
Don't let others control your BTC -> self custody
The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just like no child is born evil.
sr. member
Activity: 255
Merit: 250
I have researched artificial intelligence a bit, and I know some good concepts about it. Truth be told, artificial intelligence is good up to a level, but it doesn't stop there and can be very dangerous for humans, because it can replace many people in factories or anywhere else artificial intelligence is applied. I have read recently that Google is experimenting with making programmer bots through AI that can do the same job as a programmer.
Artificial intelligence is the ruin of mankind. Remember the movie The Terminator? This will actually lead to Judgment Day. Monitoring someone else's intelligence is very difficult. Maybe it's better not to tempt fate?