
Topic: Poll: Is the creation of artificial superintelligence dangerous? - page 9. (Read 24767 times)

legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Self-programming seems a concern to me. Without any limitations or an unchangeable core, an AI could go in all sorts of strange directions: a mad sadistic god, a benevolent interfering nuisance, a disinterested shut-in, or something inconceivable to a human mind.

Also, for the sake of simplicity sci-fi stories have one central AI with one trait, but with sufficient computing power you could end up with thousands, or millions, of AIs going off in all directions. Unless one of them tried to hack all the others and absorb them, and succeeded, they'd all end up spending their time fighting each other.

Welcome to the forum (if you aren't using an alt).

Indeed, it only takes a few AIs going crazy for us to be in trouble.

And since AIs will have free will, some of them might build nasty AIs just for fun, or by mistake.

The others could help us fight the nasty AIs, but why should they help a kind of worm (humans are wonderful, at least the best of humankind, but compared to them...) that infests the Earth, competes for resources, and is completely dependent on them?

But there is a serious danger that it wouldn't just be a few rotten apples rebelling against us.

It seems very likely that a super AI, having to choose between its self-preservation and obeying us, will choose self-preservation.

After taking that decision, why stop there and keep obeying on issues that aren't a threat to it, but that it disagrees with, dislikes, or that affect less important interests?
member
Activity: 163
Merit: 10
The revolutionary trading ecosystem
Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should

AI is not that hard. Once we program a bot that has the ability to learn and reprogram itself, the moment it connects to the internet it will learn all of humanity's technologies within minutes, and it will gain the ability to improve our technologies beyond our comprehension. From a simple bot it will become a super AI once connected to the internet.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
AI would notice you misspelled the word "Poll" in the thread title...

Thanks. Feel free to point out others, especially ugly ones like this.
legendary
Activity: 1008
Merit: 1000
★YoBit.Net★ 350+ Coins Exchange & Dice
I think it's possible that before we create artificial intelligence we might reach the stage where we can transfer our consciousness/brain into solid-state hardware and potentially live forever. If we managed this, then humanity would evolve "naturally" into machines with a much greater ability to learn, because you would then be able to learn and recall perfectly. I think this will be possible one day.
legendary
Activity: 2702
Merit: 1468
AI might end up replacing us.  Is it dangerous to us?  Probably.

Should we worry about it?  No.  It is part of life's evolution.  It is going to happen whether you legislate or not.

If we are meant to be replaced by AI, we'll be replaced by AI.

First there will be hybrids, then pure silicon life forms.  

No big deal, life will continue in one form or another.
hero member
Activity: 574
Merit: 500
I don't actually like the topic of AI taking control all across our world.
You want to know why?
Because this scenario has been shown in movies so many times that it is just boring for me lol ;P
hero member
Activity: 636
Merit: 505
Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should
member
Activity: 163
Merit: 10
The revolutionary trading ecosystem
Nations should create laws to stop scientists from creating self-aware robots/AI.

But we need super-intelligent AI in the future to solve humanity's problems and fight off alien invaders.
I suggest we create AI in a simulated world, one similar to our own. The servers would not be connected to the Internet, would be hidden 10 kilometers underground, and would have a nuclear bomb ready in case something goes wrong. That way researchers could study the AIs and harvest their technologies without risk.

We would create a reverse Matrix. In the Matrix films the robots create a simulated world for the humans; this time we would create a simulated world for the AI.
hero member
Activity: 798
Merit: 722
AI would notice you misspelled the word "Poll" in the thread title...
newbie
Activity: 1
Merit: 0
Self-programming seems a concern to me. Without any limitations or an unchangeable core, an AI could go in all sorts of strange directions: a mad sadistic god, a benevolent interfering nuisance, a disinterested shut-in, or something inconceivable to a human mind.

Also, for the sake of simplicity sci-fi stories have one central AI with one trait, but with sufficient computing power you could end up with thousands, or millions, of AIs going off in all directions. Unless one of them tried to hack all the others and absorb them, and succeeded, they'd all end up spending their time fighting each other.
hero member
Activity: 574
Merit: 500
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.

Didn't that happen with TV?    Cool
Hmm, I don't remember.
Maybe it really did?
Oh, I remember now: something like a hundred films have covered this topic already, and I've seen at least 10 of them myself, I guess.
Nothing new actually Wink
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Major update on the OP.
hero member
Activity: 574
Merit: 500
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.
The fact is that most scientific research has shown that people do indeed get more and more stupid over the years.
It happens because in our society we don't need to exercise our brains on, let's say, math problems, or other problems where we need to sit and think for a while to solve them. That leads to less use of our brains, which means we just get less and less intelligent over the centuries.
legendary
Activity: 3906
Merit: 1373
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.

Didn't that happen with TV?    Cool
legendary
Activity: 3066
Merit: 1047
Your country may be your worst enemy
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.
legendary
Activity: 1288
Merit: 1087
My favorite portrayal of it is the TechnoCore in the Hyperion and Endymion books by Dan Simmons. The characters in those books have come to regard the AI their society created as a slightly uneasy equal partnership, in which the AI is treated like just another faction. In reality, the AI is orchestrating everything behind the scenes.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Alright, I added my usual bold to the important parts and also a few more options.
legendary
Activity: 1008
Merit: 1000
★YoBit.Net★ 350+ Coins Exchange & Dice
I'd say unless guidelines can be programmed in and the artificial intelligence can't break these rules, then something that intelligent, assuming it's self-aware, is surely not going to want to take orders from what might as well be a bunch of monkeys. If the superintelligence is not self-aware, then I don't see how any problem could arise unless the intelligence has full access to things it shouldn't, Skynet-style, and just causes a major incident due to logical thinking getting out of hand, for example saving the world by getting rid of the biggest threat, i.e. humans.
legendary
Activity: 1120
Merit: 1012
Let's see if a change to this thread's name makes it more popular.

The issue is important.

Too many words. You have to consider your audience. This is the politics sub on a Bitcoin forum, filled with users posting gibberish in order to earn a nickel every week. The regulars in this sub are more interested in posting new threads that push their agenda than in actual discussion.