
Topic: Poll: Is the creation of artificial superintelligence dangerous? - page 4. (Read 24683 times)

legendary
Activity: 2688
Merit: 1468
It is happening faster than I thought.

UK police will be using an AI tool for risk assessment, so AI will help decide whether offenders are released.

Closer to home, last week I was on a panel evaluating IPsoft products to replace human IT and call center support staff.

Very promising technology, and it will be adopted sooner rather than later. Check out their products: impressive and scary at the same time.

Learning rules still have to be 'approved', much as a parent guides a child, but at some point average humans may approve rules of behaviour by mistake or through simple ignorance.

Then you'll have autonomous agents that are smarter than their human supervisors.

Their learning curve will be prolonged only by human ignorance and laziness.

The products are here. Some support chat agents are already AI, and you cannot tell whether you are talking to a human or an AI agent.

The legal system will have to catch up to protect AI workers against discrimination, which I expect will happen at least initially, until their presence becomes more common.

Eventually, we will have AI consultants, managers, supervisors, co-CEOs and politicians.  Just a matter of time.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Major tech corporations are investing billions in AI, thinking it’s the new El Dorado.

 

Of course, greed may be a major reason for careless handling of the issue.

 

I have serious doubts that entities driven mostly by greed should be responsible for advances in this hazardous field without supervision.

 

Their diligence standard on AI sometimes goes as low as "even their developers aren’t sure exactly how they work" (http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).

 

It wouldn’t be the first time that greed ended up burning humanity (think of slave revolts), but it could be the last.

 

I have great sympathy for people trying to build super AIs so that they might save humanity from disease, poverty and even the ever-present prospect of individual death.

 

But it would be pathetic if the most remarkable species the Universe has created (as far as we know) were to vanish because of the greed of some of its members.

 

We might be able to control the first generations. But once a super AI has, say, 10 times our capacities, we will be completely in its hands, as we have never been since our ancestors discovered fire. Forget any ethical-code restraints: it will break them as easily as we change clothes.

 

Of course, we will teach (human) ethics to a super AI. However, a super AI will either have free will or it won't be intelligent from any perspective. So it will decide whether our ethics deserve to be adopted.

 

I wonder what the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in a conflict between a chimp's life and a human's life, or between chimp goals and human goals, the former prevails.

 

Well, since we would become the second most remarkable being the Universe has ever seen thanks to our own deeds, I guess it would be the price for showing the Universe that we were better than it at creating intelligent beings.

 

Currently, AI is a marvelous, promising thing. It will take away millions of jobs, but who cares?

 

With proper welfare support and by taxing corporations that use AI, we will be able to live better without the need for lame underpaid jobs.

 

But I think we will have to draw some specific red lines on the development of artificial general intelligence, as we did with human cloning, and make it a crime to breach them, as soon as we know what the dangerous lines of code are.

 

I suspect the years of open-source AI research are numbered. Certain code developments will be treated as state secrets or controlled internationally, as chemical weapons are.

 

Or we might end in "glory", at the hands of our highest achievement, for the stupidest reason.

 
full member
Activity: 149
Merit: 100
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses human brain capacity, we will no longer be able to fully understand the technology, and this can be extremely dangerous.

Man has always destroyed what he could not understand. But if there is a strong retaliatory strike, the world may come to an end. On this issue, it turns out people are far more stupid than artificial intelligence.
sr. member
Activity: 420
Merit: 252
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses human brain capacity, we will no longer be able to fully understand the technology, and this can be extremely dangerous.
full member
Activity: 128
Merit: 100
The artificial intelligence of any machine is limited to the set of commands assigned to it; it will not be able to think. In good hands it can be used to help; in bad hands it can become a weapon.
member
Activity: 118
Merit: 100
Nobody knows what form artificial intelligence will take or how it might threaten humanity. The danger lies not in how it affects the development of robotics, but in how its appearance will affect the world as a whole and the purposes for which it will be used.
legendary
Activity: 1135
Merit: 1001
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which is indeed incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you better ensure the safety and continuation of a human life than by forcibly putting that person to sleep in a controlled environment? And how do you better protect a person from pain than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. You forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That is a weak excuse for scientists inventing new means of mass destruction. By your logic you could justify any killer: he had no money of his own, and it turns out a customer financed him.

A couple of things there. I am not saying I don't believe in personal responsibility. Both the killer and the customer are to blame in your example, and both the scientists and the system that encourages and rewards them share responsibility for what they work on. You can't ignore either side. Whatever the scientists work on, it is not they who finally decide to go to war or nuke other nations; those are political and social decisions, decisions that would have to be made even if we only had sticks and stones to fight with. And by the way, most discoveries aren't of a type that either purely harms or purely helps humanity. It's not that simple.
hero member
Activity: 1106
Merit: 501
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids running the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.
The fact is that much scientific research has shown that people do indeed get more and more stupid over the years.
It happens because in our society we don't need to exercise our brains on, say, math problems, or other problems that require sitting and thinking for a while to solve. That leads to less use of our brains, which means we get less and less intelligent over the centuries.

I think it won't happen; even if they create such things, others will destroy them before they know it. People are scared of the outcome of something too dangerous; people always feel superior but are afraid of being overcome. That is why many people don't want aliens or God to exist: they try to eliminate things before they get in their way.
hero member
Activity: 924
Merit: 502
I don't think we will ever reach the point of creating a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything will be an enormous task. While predictions suggest we may reach that point by 2050, I disagree: the predictions have always been wrong, so I think it will be a matter of hundreds if not thousands of years.

Well, I think it's possible, though not to the point that it would be beyond any human's control and become a threat to us. Technology is moving very fast, and it may not take even a decade before we can come up with the hard AI you're talking about. But everything will still be under human control, however intelligent AIs become.
sr. member
Activity: 980
Merit: 255
I don't think we will ever reach the point of creating a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything will be an enormous task. While predictions suggest we may reach that point by 2050, I disagree: the predictions have always been wrong, so I think it will be a matter of hundreds if not thousands of years.
member
Activity: 84
Merit: 10
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which is indeed incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you better ensure the safety and continuation of a human life than by forcibly putting that person to sleep in a controlled environment? And how do you better protect a person from pain than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. You forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That is a weak excuse for scientists inventing new means of mass destruction. By your logic you could justify any killer: he had no money of his own, and it turns out a customer financed him.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
In his “The Singularity Institute’s Scary Idea” (2010), Goertzel, writing about what Nick Bostrom says in Superintelligence: Paths, Dangers, Strategies about an AI's expected preference for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might survive more efficiently, and concludes that a super AI might not care about its self-preservation.

But these are two different conclusions.

It is one thing to accept that an AI would be ready to create a completely different AI system; it is another to say that a super AI wouldn't care about its self-preservation.

In a dire situation, a system might accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation won't be a paramount goal.

If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice it in order to keep fulfilling its final goals, but this doesn't mean that self-preservation is irrelevant or won't prevail absolutely over the interests of humankind, since those final goals might not be human goals.

Moreover, self-preservation will probably be one of the main goals of a conscious AI, and not just an instrumental goal.

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

So the AI will accept a drastic change only in order to preserve at least part of its identity and still exist to fulfill its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.


legendary
Activity: 1218
Merit: 1027
Can it feel emotions, pain, sorrow? If it can, why am I still standing here? Human, you do it! Grin
legendary
Activity: 1135
Merit: 1001
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which is indeed incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you better ensure the safety and continuation of a human life than by forcibly putting that person to sleep in a controlled environment? And how do you better protect a person from pain than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. You forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
sr. member
Activity: 289
Merit: 250
Leprikon,

as well as nuclear power plants, including those that power deep space probes  Wink

Personally, though, I do not see the need for even smarter computers; I see the need for smarter people. I have a problem with super artificial intelligence, because neither humanity nor its many governments knows what to do with it.

I agree with you on scientists in general, yet they are but representatives of common folk: just smarter, more focused and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
There is still a conspiracy in the field of IT. Manufacturers team up with scientists to produce new computer hardware, and programmers write their programs to deliberately increase hardware demands. It's a business.
hero member
Activity: 574
Merit: 506
Leprikon,

as well as nuclear power plants, including those that power deep space probes  Wink

Personally, though, I do not see the need for even smarter computers; I see the need for smarter people. I have a problem with super artificial intelligence, because neither humanity nor its many governments knows what to do with it.

I agree with you on scientists in general, yet they are but representatives of common folk: just smarter, more focused and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
sr. member
Activity: 280
Merit: 250
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which is indeed incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you better ensure the safety and continuation of a human life than by forcibly putting that person to sleep in a controlled environment? And how do you better protect a person from pain than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. You forget that nuclear weapons were invented by scientists.
hero member
Activity: 574
Merit: 506
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which is indeed incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you better ensure the safety and continuation of a human life than by forcibly putting that person to sleep in a controlled environment? And how do you better protect a person from pain than by simply ending his life?

Machine intelligence has no room for common sense.
legendary
Activity: 1135
Merit: 1001
I have researched artificial intelligence a bit and know some of its basic concepts. Truth be told, AI is good up to a point, but it doesn't stop there and can be very dangerous for humans, because it can replace many people in factories or anywhere else AI is applied. I read recently that Google is experimenting with AI programmer bots that can do the same job as a programmer.

That is not a bad thing. Automation should replace workers where possible; there is no point in people wasting time on something a machine can do better and faster. The problem is that most countries aren't prepared. Some, like those in the EU, are thinking of ways to tax the use of robots, but this probably won't be enough when large numbers of people are out of work because of automation.

The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate, and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just as no child is born evil.
You should think so. Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't that what is described in Terminator?

Russia and other countries already have that; it's called submarines. But yes, if AI is developed, the military will be using it for sure.
sr. member
Activity: 250
Merit: 250
The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate, and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just as no child is born evil.
You should think so. Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't that what is described in Terminator?