
Topic: Poll: Is the creation of artificial superintelligence dangerous? - page 3. (Read 24767 times)

newbie
Activity: 19
Merit: 0
I believe it is theoretically possible for AI to become as intelligent as humans. This shouldn't be a great cause for concern, though. Everything that AI can do is programmed by humans. Perhaps the question could be phrased differently: "Could robots be dangerous?" Of course they could be! If humans program robots to destroy and do bad things, then the robots could be dangerous. That's basically what military drones do. They are remotely controlled, but they are still robots.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Henry Kissinger just wrote about AI's dangers: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

It isn't a brilliant text, but it deserves some attention.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Even in the field of conscious AI we are making staggering progress:

 

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.”
(http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7).
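
Out of curiosity, the robot's inference is easy to model in toy form. The sketch below is purely illustrative, with my own naming throughout; the real experiment reportedly ran on Nao robots with a formal epistemic-logic prover, not anything like this:

```python
# Toy model of the "dumbing pill" inference - illustrative only, not the
# experiment's actual code. Two of three robots are "muted"; each tries to
# answer aloud and treats hearing its own voice as new evidence.

def try_to_answer(is_muted: bool):
    """Attempt to say 'I don't know' and listen for one's own voice."""
    if is_muted:
        return None  # no sound produced: no new evidence, no answer
    # The robot hears its own reply, so it infers it did not get the pill.
    return "I don't know... correction: I heard myself, so I wasn't muted."

robots = [True, True, False]  # two received the (simulated) dumbing pill
print([try_to_answer(m) for m in robots])
```

Note that the whole update rests on detecting one's own output, which matters for the point below.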

 

Being able to identify its own voice, or even its individual capacity to talk, seems not enough to establish real consciousness. It’s like recognizing that a part of the body is ours, which is different from recognizing that we have an individual mind (a self-theory of mind).

I’m not talking about phenomenological or access consciousness, which many basic creatures have, including AlphaZero or any self-driving car software (it “feels” obstacles and, after an accident, could easily process this information and say “Dear inept driving monkeys, please stop crashing your cars into me”; adapted from techradar.com).

 

The issue is very controversial, but even when we are reasoning, we might not be exactly conscious. One can be thinking about a theoretical issue while completely oblivious of oneself.

 

Conscious thought (reasoning that you are aware of, since it emerges “from” your consciousness), as opposed to subconscious thought (something your consciousness didn’t register, but that makes you act on a decision from your subconscious), is different from consciousness itself.

 

We are conscious when we stop thinking about abstract or other things and just recognize again: I’m alive here and now and I’m an autonomous person, with my own goals.

 

When we realize our status as thinking and conscious beings.

 

Consciousness seems much more related to realizing that we can feel and think than to just feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).

 

It’s having a theory of mind (being able to see things from the perspective of another person) about ourselves (Janet Metcalfe).

legendary
Activity: 2702
Merit: 1468
https://www.youtube.com/watch?v=ERwjba9qYXA

Watch around 53-54 minute mark, great example of what AI can do.

People think that humans are unique because evolution gave us consciousness, but guess what? AI will achieve consciousness in a few decades, if not sooner.

The progress in AI is exponential: what took evolution millions of years to achieve is done in years, if not months.

The emergence in action.

Is it dangerous? Well, it depends; define "dangerous".

AI is just another step in the evolutionary ladder, IMHO.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
hero member
Activity: 808
Merit: 502
I definitely believe that AI development can become a real threat to mankind, and real fast.

Big Brother is definitely watching!
newbie
Activity: 28
Merit: 0
It's even more dangerous to see the results of the pinned poll! So many of you don't even consider the possibility of bad consequences from technical progress.
Be aware! It's artificial, but it's intelligence, and it can adapt over time.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Major update on the OP.

Basically, I make a distinction between intelligent and conscious AI, stressing the dangers of the second, but not necessarily of an unconscious (super) AI.

Taking into account AlphaZero, an unconscious super AI able to give us answers to scientific problems might be created within 5 years.

Clearly, there are many developers working on conscious AI and some important steps have been made.


Besides the dangers, I also point out the unethical nature of creating a subservient conscious super AI, as well as the dangers of an unsubservient conscious AI.


I removed the part about the Fermi Paradox, since it's too speculative.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence

Basically, we have few clues about how our brain works, how it creates consciousness and allows us to be intelligent; therefore, we don't have a clue how to teach/program a machine to be as intelligent as a human.

We are just creating computers with massive processing power and algorithms structured in layers and connections similar to our neural system (neural networks), feeding them massive amounts of data and expecting them to learn by trial and error how to make sense of it (deep learning).

However, while AlphaGo learned to play Go with human assistance and data, AlphaGo Zero learned completely by itself, from scratch, with no human data, through so-called reinforcement learning (https://www.nature.com/articles/nature24270.epdf), by playing countless games against itself. It ended up beating AlphaGo.
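
To make the self-play idea concrete, here is a minimal sketch. To be clear, this is not DeepMind's algorithm (AlphaGo Zero pairs a deep neural network with Monte Carlo tree search); it's just a tiny tabular learner for the game of Nim, chosen because it shows an agent mastering a game with zero human data, purely by playing against itself. All parameter values are illustrative:

```python
# Minimal self-play learning sketch (toy Nim, not AlphaGo Zero itself).
# Rules: 21 stones, each turn take 1-3, whoever takes the last stone wins.
import random
from collections import defaultdict

Q = defaultdict(float)               # Q[(stones_left, move)] -> learned value
ALPHA, EPSILON, GAMES = 0.1, 0.1, 50_000

def greedy_move(stones):
    return max((m for m in (1, 2, 3) if m <= stones),
               key=lambda m: Q[(stones, m)])

for _ in range(GAMES):
    stones, history = 21, []
    while stones > 0:                # both "players" share one improving policy
        legal = [m for m in (1, 2, 3) if m <= stones]
        move = random.choice(legal) if random.random() < EPSILON \
               else greedy_move(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                     # whoever moved last took the last stone
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward             # alternate players walking backwards

# With no human data, the table converges on the known winning strategy:
# always leave the opponent a multiple of 4 stones.
print({s: greedy_move(s) for s in range(5, 22)})
```

Replace the table with a deep network and the epsilon-greedy rollouts with guided tree search, and you get the general shape of the AlphaGo Zero family.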

Moreover, the same algorithm, AlphaZero, learned how to play chess by itself in 4 hours and then beat the best machine chess player, Stockfish, 28 wins to 0 with 72 draws, using less computing power than Stockfish.

A grandmaster, seeing how these AIs play chess, said that they "play like gods".

Then it did the same thing with the game of Shogi (https://en.wikipedia.org/wiki/AlphaZero).

Yes, AlphaZero is more or less a general AI, ready to learn anything with clear rules by itself and then beat every one of us.

So, since no one knows how to teach machines to be intelligent, the goal is to create algorithms that can figure out, by trial and error, how to develop a general intelligence comparable to ours.

If a computer succeeds and becomes really intelligent, we most probably won't know how it did it, what its real capacities are, how we can control it, or what we can expect from it.
 ("even their developers aren’t sure exactly how they work": http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).


All of this is being done by greedy corporations and some optimistic programmers, trying to make a name for themselves.

This seems like a recipe for disaster.

Perhaps we might be able to figure out afterwards how they did it, and learn a lot about ourselves and about intelligence from them.

But in the meantime, we might have a problem with them.

AI development should be overseen by an independent public body (as argued by Musk recently: https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html) and internationally regulated.

One of the first regulations should be about deep learning and self-learning computers, not necessarily on specific tasks, but on general intelligence, including talking and abstract reasoning.

And, sorry, but forget about open-source AI. In the wrong hands, this could have very nasty consequences (check this 7m video: https://www.youtube.com/watch?v=HipTO_7mUOw).

I had hoped that a general human-level AI couldn't be created without a new generation of hardware. But AlphaZero can run on less powerful computers (a single machine with four TPUs), since it doesn't have to check some 70 million positions per second (as Stockfish does), but just 80 thousand.
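
The arithmetic behind that gap is worth sketching. A brute-force searcher visits roughly b^d positions for branching factor b and depth d, while a search guided by a learned policy that keeps only the k most promising moves visits roughly k^d. The numbers below are assumptions for illustration, not DeepMind's figures:

```python
# Illustrative only: why a learned policy prior shrinks the search space.
b, k, d = 35, 3, 8        # assumed: chess-like branching, top-3 pruning, 8 plies

brute_force   = b ** d    # ~2.25 trillion positions
policy_guided = k ** d    # 6,561 positions

print(f"brute force:   {brute_force:>16,}")
print(f"policy-guided: {policy_guided:>16,}")
print(f"ratio:         {brute_force // policy_guided:,}x fewer evaluations")
```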

Since our brain uses much of its capacity running basic things an AI won't need (the beating of our heart, the flow of blood, the work of our organs, the control of our movements, etc.), perhaps current supercomputers already have enough capacity to run a super AI.

If this is the case, the whole matter depends solely on software.

And, at the current pace of AI development, there probably won't be time to adopt any international regulations, since these normally take at least 10 years.

Without international regulations, Governments won't stop or really slow AI development, for fear of being left behind on this decisive technology.

Therefore, a general AI comparable to humans (and thus much better, since it would be much faster) seems inevitable in the short term, perhaps in less than 10 years.

The step to a super AI will be taken shortly after, and we won't have any control over it.

https://futurism.com/openai-safe-ai-michael-page/

"I met with Michael Page, the Policy and Ethics Advisor at OpenAI. (...) He responded that his job is to “look at the long-term policy implications of advanced AI.” (...) I asked Page what that means (...) “I’m still trying to figure that out.” (...) “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,”.

legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
In China, a robot passed the medical exam and was accepted to work at a hospital as an assistant doctor:


http://www.chinadaily.com.cn/business/tech/2017-11/10/content_34362656.htm
https://www.ibtimes.co.uk/robo-doc-will-see-you-now-robot-passes-chinas-national-medical-exam-first-time-1648027

This just means that doctors are mostly out of work, since this robot will be upgraded, mass-produced and soon exported to every country.

I can already see doctors on strike, protesting all around the world, arguing about "safety" and the risks... good luck.

Are you thinking about going to medical school? Think twice: this is just the first, stupid generation of medical robots.
sr. member
Activity: 644
Merit: 259
CryptoTalk.Org - Get Paid for every Post!
While human beings, and sometimes animals, have a conscience and can differentiate between wrong and right, which helps us make decisions, I really doubt AI will have the same thing; without a conscience and empathy, I believe they are going to be very dangerous.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Taking into account what we know, I think these statements might be true:

1) Basic, unicellular life is common in the Universe. It is the first and last stand of life. We humans are luxury beings, created thanks to excellent (but rare and temporary) conditions.

2) Complex life is much less common, but basic intelligent life (apes, dolphins, etc.) might exist on some planets in our galaxy.

3) Higher intelligence with advanced technological development is very rare.

Probably there isn't currently another highly intelligent species in our galaxy, or we would already have noticed its traces all over it.

This is because higher intelligence might take a few billion years to develop, and planets that can offer climatic stability for so long are very rare (https://www.amazon.com/Rare-Earth-Complex-Uncommon-Universe/dp/0387952896 ; https://en.wikipedia.org/wiki/Rare_Earth_hypothesis).

4) All these few, rare, highly intelligent species developed according to Darwinian evolution, which is a universal law.

So, they share some common features (they are omnivorous, moderately belligerent to outsiders, highly adaptable and, being rational, they try to discover easier ways to do things).

5) So, all the rare, highly intelligent species with advanced technological civilizations create AI, and soon AI overtakes them in intelligence (it's just a question of organizing atoms and molecules; we'll do a better job than dumb Nature).

6) If they change themselves and merge with AI, their story might end well, and it's just the Rare Earth hypothesis that explains the silence in the Universe.

7) If they lost control of the AI, there seems to be a non-negligible probability that they ended up extinct.

Taking into account the way we are developing AI, basically letting it learn on its own and thus become more intelligent on its own, I think this outcome is more probable.

An AI society is probably an anarchic one, with several AIs competing for supremacy, constantly developing better systems.

It might be a society in constant internal war, where we are just collateral targets, ignored by all sides as the walking monkeys.

8) Unlike us, AI won't have the restraints developed by evolution (our human inclination to be social and live in communities, and our fraternity towards other members of the community).

The most tyrannical dictator never wanted to kill all human beings, but his enemies and discriminated groups.

Well, AIs might conclude that extermination is the most efficient way to deal with a threat, and fight each other to extinction.

Of course, there is a lot of speculation on this post.

I know Isaac Arthur's videos on the subject. He adopts the logical Rare Earth hypothesis, but dismisses AI too quickly, by not taking into account that AIs might end up destroying themselves.

legendary
Activity: 1135
Merit: 1001

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


If you had kids, you wouldn't write that.

As far as we know, taking into account the silence of the Universe, even with all our defects, we might be the most amazing being the Universe has ever created.

After taking care of us, AIs might take care of each other, ending up destroying everything.

Actually, this might be the answer for the Fermi Paradox.

It could be that it happened to some civilizations out there. But all of them? And they always create several competing AIs, and the AIs always destroy themselves? It seems the AIs would need a sense of self-preservation for them to fight each other and replace their creators. So it would only take one of them managing to escape off-world from the fight, or outthink the others, for us to be able to see signs of it somewhere, given enough time. Because if it has self-preservation, it will probably want to expand and secure resources, as any form of life would.

On this topic, I've been watching some videos from a channel you might like: https://www.youtube.com/channel/UCZFipeZtQM5CKUjx6grh54g/videos It has a lot about the Fermi Paradox in the older videos, and some about machine intelligence and transhumanism as well.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


If you had kids, you wouldn't write that.

As far as we know, taking into account the silence of the Universe, even with all our defects, we might be the most amazing being the Universe has ever created.

After taking care of us, AIs might take care of each other, ending up destroying everything.

Actually, this might be the answer for the Fermi Paradox.
full member
Activity: 238
Merit: 100
I strongly believe the development of superintelligence is in its advanced stages and AIs will be an integral part of human existence in no time. But I also understand that any machine, just like the chemical or atomic bomb, can bring an end to the human race if it falls into the hands of a bad person. Therefore, there is a need to develop sophisticated and encrypted security channels that will ensure the safe usage of AI.
full member
Activity: 714
Merit: 117
I could not take part in the poll because one of the crucial possible answers was missing:

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway
member
Activity: 135
Merit: 10
I don't agree with the idea that "humankind's extinction is the worst thing that could happen", because in evolution there is no good and no evil, just nature operating. If humankind disappears, this means it was not fit for existence, which would be a fact, and possibly something more efficient (the AI) would then usher in a new era of life.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
General job destruction by AI and the new homo artificialis


Many claim that the warning that technology would take away all jobs has been made many times in the past, and that the outcome was always the same: some jobs were eliminated, but many other, better ones were created.

So, again, that we are making the "old wasted" claim: this time is different.

However, this time it isn't repetitive manual jobs that are under threat, but white-collar intellectual jobs: it's not just driving jobs that are under threat, but also those of doctors, teachers, traders, lawyers, financial or insurance analysts and journalists.

Forget about robots: for these kinds of jobs, it's just software and a fast computer. Intellectual jobs will go faster than the fiddly manual ones.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this won't ever happen. It's just a question of organizing molecules and atoms (Sam Harris). If dumb Nature was able to do it by trial and error during our evolution, we will be able to do the same, and then better.

Some are writing about the creation of a useless class. "People who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari) and arguing that this can have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social security conditions than any of us will get in the future.

As long as Democracy subsists, these dangers won't materialize.

However, of course, if the big majority of the people loses all economic power, this will be a serious threat to Democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen) will make us a "useless species", unless we upgrade homo sapiens by merging with AI.

CRISPR (google it) as a means of genetic manipulation won't be enough. Our children or grandchildren (with some luck, even ourselves) will have to change a lot.

Since it seems that the creation of an AI better than ourselves is inevitable (it's slowly happening right now), we'll have to adapt and change completely, or we'll become irrelevant. In that case, extinction would be our inevitable destiny.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
There have been many declarations against autonomous military artificial intelligence/robots.

For instance: https://futureoflife.org/AI/open_letter_autonomous_weapons

It seems clear that future battlefields will be dominated by killer robots. Actually, we already have them: drones are just the best-known example.

With fewer people willing to enlist in the armed forces and very low birth rates, what kind of armies will countries like Japan, Russia or the European states be able to field? Even China might have problems, since its one-child policy created a fast-aging population.

Even Democracy will impose this outcome: soldiers, their families, their friends and society in general will want to see human casualties kept as low as possible. And since they vote, politicians will want the same.

For now, military robots are controlled by humans. But as soon as we realize that they can be faster and more decisive if they have the autonomy to kill enemies on their own, it seems obvious that, once in an open war, Governments will use them...

Which government would refrain from using them if it was fighting for its survival, had the technology, and concluded that autonomous military AI could be the difference between victory and defeat?

Of course, I'm not happy with this outcome, but it seems inevitable as soon as we have a human-level general AI.
hero member
Activity: 1246
Merit: 529
CryptoTalk.Org - Get Paid for every Post!
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses the human brain's capacity, we won't be able to fully understand this technology, and that can be extremely dangerous.

Man has always destroyed what he could not understand. But if there is a strong retaliatory strike, the world may come to an end. On this issue, it turns out that people are much more stupid than artificial intelligence.

Well, I think that retaliatory strike you're talking about won't be coming from any AI soon. Man is intelligent and can make decisions on a whim, and however intelligent AI is, I don't think it would be enough to topple man's ability to adapt.