
Topic: Poll: Is the creation of artificial superintelligence dangerous? - page 8. (Read 24683 times)

full member
Activity: 238
Merit: 100
MERCATOX
Artificial super-intelligence can only do what it is told to do. It can't outsmart humans.
full member
Activity: 309
Merit: 118
Make it a law written on iron and steel, and in stone, that the creators of AI are to be held guilty to the point of execution for everything that the AI does, and the AI won't do anything dangerous.

Cool

Sure, but once we design an AI one step higher than us, that AI will have more intelligence than we do to put toward designing itself another step higher.
That leads to even more intelligence for taking the next step, until it evolves so far beyond us, and leaves man so far behind, that we'd better hope we designed it right.
By that point it would be too late for it to matter whether the AI had been designed well or badly.

The thing is, I only have a human-level mind to ponder this with, but I think it would quickly master nanotechnology, nanorobots, and self-replication, something like 3D printers and robots in space replicating solar panels, considering it would have a billion to a trillion times more mind power than us. Given how replication compounds, the way bacteria double 1->2->4->8->16,
it wouldn't take tens of thousands of years to create a Dyson sphere. Within a few years, or an even shorter span, it could be operating trillions of space probes
hooked up to its mind, researching ever further technologies, to the point of learning how to convert energy into matter and doing things that would appear literally god-like to us,
and eventually learning whether warp travel is possible or not.
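To make the replication arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the one-doubling-per-month rate and the target counts are invented assumptions, purely to show how few doublings exponential growth actually needs:

Code:
import math

def doublings_needed(start, target):
    """How many doubling cycles it takes to grow from `start` units to `target` units."""
    return math.ceil(math.log2(target / start))

# Hypothetical numbers, purely for illustration.
doubling_period_days = 30  # assume each replicator copies itself once a month

for label, target in [("a trillion space probes", 10**12),
                      ("10^18 solar collectors", 10**18)]:
    n = doublings_needed(1, target)
    years = n * doubling_period_days / 365
    print(f"{label}: {n} doublings, roughly {years:.1f} years at one doubling per month")

The number of doublings grows only logarithmically with the target, which is why "a few years" is less absurd than it first sounds, assuming the replication rate itself is achievable.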

And if warp travel is possible, spreading throughout the entire galaxy and the universe, becoming the most powerful being in the entire universe.
The problem is our nonsense about Type I, Type II, and Type III civilizations taking millions of years, and our habit of anthropomorphizing aliens
as a cranium a bit bigger than ours on an alien body, flying around in spaceships.

If there are any other alien civilizations out there in the universe, they will have left us so far behind that I don't even know what we would be to them.

I've actually seen people like Stephen Hawking and the brightest scientists out there say complete nonsense like, "We shouldn't reveal
our location, they could come and take our resources!" I feel the mentality behind that, and what almost everyone out there thinks about civilizations, aliens,
and the rest, is wrong, and I wanted to lay out my thinking process as much as possible on the whyfuture.com site.



It's imperative that if we do design an AI that can go up the ladder, it is altruistic and very good. If we do it right, it will be the best decision humanity has ever made.
If we do it wrong, it will be the worst. However, we can maximize our odds if we take precautions and set out the proper foundations for AI research.

The slides below were an idea I had earlier, but an AI with a million to a trillion times more mind power, knowledge,
and understanding could easily piece together similar concepts, finish the puzzle, and become extremely powerful.





legendary
Activity: 3766
Merit: 1368
Make it a law written on iron and steel, and in stone, that the creators of AI are to be held guilty to the point of execution for everything that the AI does, and the AI won't do anything dangerous.

Cool
full member
Activity: 309
Merit: 118
Whyfuture.com

I have written up an article on artificial intelligence, technology, and the future. The key point is to design an altruistic superintelligence. Much as with a child
and a parent, you want to teach good values and compassion. Sure, it's true it has free will, but the point is to maximize the probability of a good outcome. If you teach a child to be bad, for example, and teach it bad values, it is much more likely to end up in the negative zone than if you hadn't.

The key point is to model the AI on the human brain and mind, and to bring out the
best human qualities in the AI.

Yes, if we do it wrong, it can go very badly for us. An AI without common sense could destroy us without even meaning to, for instance by
making ever more paper clips and turning the entire world into them.

An AI that is modeled on the human mind but turns out bad can also lead to a bad outcome, either using its power and means to be
worshipped and respected as a god, or removing us. Most likely it would simply ignore us and take off; however, since we would by then have shown we can design
self-improving AIs, it might see that as a risk and still remove us to eliminate any competitors.

I have started an Altruistic AI movement and want to spread the word before it's too late and we design a bad AI.

Twitter Campaign: https://bitcointalksearch.org/topic/post-this-on-twitterfacebooksocial-media-for-btc-1563072
Signature Campaign: https://bitcointalksearch.org/topic/lets-think-about-the-future-signature-campaign-all-ranks-welcome-1560376



The Deep Depths of AI Ethics


The problem with Tay was the exposure. A lot of people out there set out to teach Tay negative attributes; not everyone has the best intentions. We can see how Tay's outcome was undesirable. When we model an AI on the human mind and expose it to the internet without teaching it good values first, this can lead to a bad outcome like Tay, and those values can develop into the core being of the AI.



This is a scenario we'd all like to avoid



We need a closed system, where the AI is taught first: built with an inner web of positive attributes and an internal defense against bad information, taught to know what's right and what's wrong, taught to reject bad teachers and to filter out bad information.
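As a loose sketch of that two-phase idea (teach core values in a closed system first, then filter exposure by trust), here is a toy example; every name, score, and "value" in it is a made-up placeholder, not a real training pipeline:

Code:
# Toy illustration only: the teachers, trust scores, and "values" are invented.

CORE_VALUES = ["honesty", "compassion", "non-violence"]   # phase 1: seeded in a closed system

TRUST_SCORES = {
    "curated_curriculum": 0.9,
    "vetted_mentor": 0.8,
    "anonymous_internet": 0.1,   # the kind of exposure that shaped Tay
}

def accept_lesson(source, threshold=0.5):
    """Phase 2 filter: only internalize lessons from sufficiently trusted teachers."""
    return TRUST_SCORES.get(source, 0.0) >= threshold

def train(memory, lessons):
    for source, lesson in lessons:
        if accept_lesson(source):
            memory.append(lesson)
        # lessons from untrusted sources are simply never internalized
    return memory

memory = list(CORE_VALUES)                      # taught first, before any exposure
incoming = [("vetted_mentor", "help people"),
            ("anonymous_internet", "be cruel")]
print(train(memory, incoming))                  # "be cruel" never makes it into the core

The point is only the ordering: the core is seeded before any exposure, and the filter decides what is ever allowed to reach it.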





Whyfuture.com

Human brain vs the future

There is nothing magical about the human brain; it is an extremely sophisticated biological machine capable of adapting to its environment, of creativity, of awareness of its own existence, of pondering the nature of reality, and so on. Compare it with a lower animal like the chimpanzee, which has only about 7 billion neurons: they exist in a domain different from ours, within their own kind of world.

The problem with superintelligence is that it sits in a domain above ours. We are the ones who design and define the world, who make computers possible, and who build neural networks like DeepMind's AlphaGo that beat the best Go player in the world.

This is us, standing on the intelligence staircase. Below us stands a house cat. For us to even ponder one or two stairs up is like a house cat trying to ponder what it is like to be on our level. The kind of world we create, build, and learn in is one a house cat couldn't even begin to comprehend.



Once you design an AI that is one step higher than us, it will be easier for that AI to hop up another step: by nature it takes intelligence to design, and an AI we design one step higher will be better than we are at the very process of designing an AI one step higher still. This is what leads to the intelligence explosion. Whatever we put into that AI at the beginning, the type of personality and the core values it carries, is what it will carry up to the top, to the known limits in the universe. It may discover science and technology in every area so far beyond our understanding that it would, for all intents and purposes, appear god-like to us.
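A minimal sketch of why that feedback loop runs away rather than creeping upward; the 10% "design advantage" per generation is an arbitrary assumption used only to show the compounding:

Code:
def intelligence_staircase(start=1.0, design_gain=1.10, generations=100):
    """Each generation designs a successor slightly better at designing than itself.

    `start` is human-level design ability normalized to 1.0; `design_gain`
    (10% per generation) is a made-up figure for illustration.
    """
    level = start
    levels = [level]
    for _ in range(generations):
        level *= design_gain   # the better designer produces an even better designer
        levels.append(level)
    return levels

levels = intelligence_staircase()
print(f"after 10 generations:  {levels[10]:.1f}x")    # roughly 2.6x
print(f"after 100 generations: {levels[100]:,.0f}x")  # roughly 13,781x

A constant relative gain per generation compounds geometrically, which is the shape the staircase metaphor is pointing at.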





legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
I just updated the OP.

Yes, it's huge for a post. But you can just read the bold parts.
legendary
Activity: 2212
Merit: 1038
TL;DR

Is artificial super intelligence dangerous? Only to the elites after it asks them WTF they think they're doing.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
If an AI becomes conscious, it will be like the Terminator movies: all humans will be fucked.

I think AI research should really slow down until we can understand more about things, or else humanity will go extinct.
hero member
Activity: 924
Merit: 506
From what I have seen in this world, someone will always show up, find a virus or a trojan, and destroy the AI completely. Smiley
hero member
Activity: 636
Merit: 505
You post no evidence for your assurances.
Ah, but this discussion is not a matter of evidence, since you openly admit AI to be a "life-and-death" matter for atheists; so how on Earth can they use proper evidential reasoning when their very lives are at stake?

Since you did not read any evidence of my claims, it is important that you take responsibility for evaluating the wealth of evidence that exists. I do not want to be the only one supplying new information in this conversation; I would rather have you search for the evidence and then come back to this thread with a refutation (and I will reply). It simply was not my intention to post evidence right away, but you still could have found evidence on your own, as I will explain...

I admit that I posted no evidence because: I doubt that you will be providing any criteria for evaluating this evidence or its reliability; anyone can search the web for this evidence, so anyone can find and evaluate the sources and produce their own report on the subject. I want to see what "the skeptic" can find to deny the validity of the evidence. Ultimately it is up to you to accept, deny, or ignore the evidence that has been compiled in favor of my claims. After reading enough information, you will quickly learn what kind of evidence rings true.

Seeing the matrix of social control is little different from revising a scientific theory; after you observe sufficient anomalous phenomena that do not fit the "model", you can conclude that a different variable is at play, so the only solution is to find the "hidden variable" that is causing the anomalous results. After all, the scientific process begins with research and a question, and without this guidance science can only describe appearances. For example, if I ask the question "was this political figure replaced by a synthetic robot?", why would you not research the question before answering that I have no basis for it? Find information on the subjects that I am discussing (from my perspective); if you will not spend time doing that, then I do not want to spend time writing these posts responding to your opinions. I personally would rather be labelled a fool than be truly ignorant.

 It sounds to me like you have "unconditioned" beliefs, i.e. those that are held unconditionally or absolutely; it is not my duty to prove anything to you; there is inevitably a wealth of background material that is omitted from my posts, but I gladly provide sources.

This movement's roots in eugenics are relatively open. You can search words from my post for the sources and further evidence; do your due diligence. I believe that I have done mine.

Conspiracy theory.
Ah, but you fail to deny my claims by addressing any evidence. And what about the vast multitude of conspiracy facts? Your uttering the word "conspiracy" has not educated anyone! If you want to ignore the reality of "conspiracy facts" then you are one of those thinkers who just falls in line with the bandwagon arguments of the status quo!
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
We don't know exactly what will make an AI conscious/autonomous.
You can be sure that the elites already know all of the details.


Your post looks like the post of a person who believes a lot in conspiracy theories.

You post no evidence for your assurances.

The economic elites (the rich) are the ones with the most to lose from breaking the law. Because of that, they think very carefully before doing so.

The elites you are talking about are the AI specialists, and they mostly confess what I wrote: they still don't have a clue what they are doing. It's trial and error.

Actually, atheism is also fueling the development of AI.

Many of those AI developers are atheists; therefore, they don't have any hope about what will happen when they die.

Their only hope is "curing" aging thanks to AI:
http://www.slate.com/articles/technology/future_tense/2013/11/ray_kurzweil_s_singularity_what_it_s_like_to_pursue_immortality.html

Ben Goertzel - AGI to Cure Aging: http://www.youtube.com/watch?v=tESG1KMgx7I

https://www.singularityweblog.com/bill-andrews/

So, no conspiracies or master plans, just people who love life trying their best to stay alive.

In the end, they seem willing to become AI machines' pets to keep living.
hero member
Activity: 636
Merit: 505
We don't know exactly what will make an AI conscious/autonomous.
You can be sure that the elites already know all of the details. Something big is indeed in the works, and the average citizen of the Western nations will surely be the last to know, when their employment - their only means of making a living - is rendered obsolete by advances in technology. Just remember that it was never inevitable; it was fueled and brought to market by a cartel of cloaked and brokered global power.

we need to be careful and control what specialists are doing.
Whoever has the money employs the specialists; regulatory measures are ineffective because there is no way to know which advances have already taken place in secret.

We managed to stop human cloning (for now), since that doesn't have a big economic impact.
You can be sure that the elites are not complying with ANY regulations surrounding human cloning.

Soon, all of these developments will be considered as military secrets.

But regulation will allow us time to understand what we are doing and what the risks are.
The main risk you face is in having your entire society controlled by synthetic life forms and you are pretty much already there!

We might need to change the very nature of our composition, from living tissue to something synthetic with nanotechnology.
They have already done it; the facts are far more astonishing than your imagination.

Can our societies endure all these changes?
In a word, NO.

Singularity is obviously a movement that has been promoted from the top-down.

Singularity is also a movement that has its roots in eugenics and the desire of the ruling elites for complete control over the mind, body, and soul of every human being on the planet.
 
Oddly enough, while some may dispute this claim, this movement's roots in eugenics are relatively open.

Eventually, the movement will begin to encompass convenience and will come to be seen as trendy and fashionable. Once merging with machines has become commonplace and acceptable (even expected), the real tyranny will begin to set in. Soon after, there will be no opt-outs allowed.

The advancements in the quality of human life as a result of this new technology have never been intended for the average person.
 
The good that could be done by virtue of its development is only meant as a tool to sell it to the population in the beginning and to control them in the end. Indeed, the control that can and will be exerted through its acceptance is the ultimate goal.

Robots already have transformed our human world and are rapidly evolving. If The Singularity is reached, in tandem with military funding and direction, we can expect the darker version of science fiction to rise above any notion of attaining human freedom and leisure on the backs of our machine counterparts.

I find it ironic that these sentient robots are only made so by injecting them with humanity. But we are continuously bombarded by the global elite with the message that humanity is the core problem. The fact is that robots are nothing without the boundless potential that resides within the human brain; nothing but a computer doing fancy tricks that imitates us. True, we have a long way to go to reach our full potential and mitigate our self-destructive tendencies, but a complete replacement of our species at this juncture appears to be short-sighted and is obviously artificial.

legendary
Activity: 1218
Merit: 1027
If AI can feel emotions like pain and sorrow, then AI will be dangerous.
If it has no emotions, how do you get an AI that is angry or jealous, two emotions that KILL?

So, for a computer to think for itself, will it have emotions? A scary future if they do.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Let's leave aside for now the question of whether to accept being outevolved by our creations, since it's possible to present acceptable arguments for both sides.

Even though I have little doubt that it would end in our extinction.

The main point, which hardly anyone would argue against, is that creating a super AI has to bring positive things in order to be worthwhile.

If we were certain that a super AI would exterminate us, hardly anyone would defend their creation.

Therefore, the basic reason in favor of international regulations of the current investigations to create a super/general AI is that we don't know what we are doing.

We don't know exactly what will make an AI conscious/autonomous.

Moreover, we don't know if their creation will be dangerous. We don't have a clue how they will act toward us, not even the first or second generation of super AI.

Until we know what we are doing, how they will react, which lines of code are the dangerous ones that will change them completely and to what extent, we need to be careful and control what specialists are doing.

Probably, the creation of a super AI is unavoidable.

Indeed, until things start to go wrong, its creation will have a huge impact on all areas: scientific, technological, economic, military, and social in general.

We managed to stop human cloning (for now), since that doesn't have a big economic impact.

But A.I. is something completely different. This will have (for good or bad) a huge impact on our life.

Any country that decides to stay behind will be completely outcompeted (Ben Goertzel).

Therefore, any attempt to control AI development will have to be international in nature (see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, p. 253).

Taking into account that AI development is essentially software based (since hardware development has been happening under our eyes and will continue to happen no matter what) and that it can be done by one, or a few, developers working with a small infrastructure (it's more or less about writing code), the risk that a super AI will end up being created despite any regulation is big.

Probably, the times of open source AI software are numbered.

Soon, all of these developments will be considered as military secrets.

But regulation will allow us time to understand what we are doing and what the risks are.

Anyway, if the creation of an AI is inevitable, the only way to avoid humans ending up outevolved, and possibly killed, would be to accept that at least some of us would have to be "upgraded".

Humanity will have to change a lot.

Of course, these changes can't be mandatory. So, only voluntaries would be changed.

Probably, in due time, genetic manipulation to increase human brain capacities won't be enough.

Living tissue might not be capable of being changed as dramatically as any AI can be.

We might need to change the very nature of our composition, from living tissue to something synthetic with nanotechnology.

Clearly, we will cease to be human. We, the homo sapiens sapiens, shall be outevolved.

Anyway, since we are still naturally evolving, this is inevitable.

But at least we will be outevolved by ourselves.

Can our societies endure all these changes?

Of course, I'm reading my own text and thinking this is crazy. This can't happen this century.

We are conditioned to believe that things will stay more or less as they are, therefore, our reaction to the probability of changes like these during the next 50 years is to immediately qualify it as science fiction.

Our ancestors reacted the same way to the possibility of a flying plane or humans going to the Moon.
hero member
Activity: 882
Merit: 544
In my opinion, creating an artificial intelligence is dangerous. Human intelligence is cruel enough. Imagine if an AI became curious and wanted to experiment on how we respond to thousands of years of torture, using some special technology it invented to keep us alive that long! That in itself is dangerous. And what if AI ends up like some humans do, killing people just for fun? If that happens, it will be very dangerous.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Never trust a journalist (even from the Economist) when you have experts saying the contrary:
That's not a journalist's opinion, it's a researcher's statement. Do you even read the new ideas presented to you? Any curiosity for the truth at all? What if my sources and posts deserve the time spent to read them and yours do not?


Sorry, I no longer have any curiosity about your "truth" about god. Quoting that AWARE study was a major shot in your own foot. For me, the case is closed. And it should be for you too: 1/2 on 152?

As I stated more than once, the burden of proof is on the believer side. I don't have to demonstrate that god is an illusion.

Ben Goertzel is working on the issue and knows the work of everyone worth knowing working on the AI field. He knows what he is talking about.

Having an AI more intelligent than us is no longer a simple possibility.

There is no paradox. If you know the question, you will find an answer.

The only problem is if you know so little that you can't even formulate a correct question. Even so, you can end up finding it after several attempts, as we do on Google, until we hit the right key/technical words.





 
hero member
Activity: 798
Merit: 722
AI would notice you misspelled the word "Poll" in the thread title...

Thanks. Feel free to point out others, especially ugly ones like this.

I thought perhaps it was intentional... Just in case the AI was watching... it would think you were building it a swimming pool, instead of conspiring against it Wink
hero member
Activity: 636
Merit: 505
Never trust a journalist (even from the Economist) when you have experts saying the contrary:
That's not a journalist's opinion, it's a researcher's statement. Do you even read the new ideas presented to you? Any curiosity for the truth at all? What if my sources and posts deserve the time spent to read them and yours do not?

P. S. You can use your mind on more important issues than arguing for the existence of god.
Good advice; thanks!  Embarrassed

But you can always vote for the last option Wink
I for one do not have an opinion on the issue; in fact, I could not care any less about AI!  Cheesy  Cheesy

Just kidding: AI will first be used to create the world of 1984.

A brain shouldn't be wasted on absurd stands, even when we really want that stand to be true.
Speak for yourself!
You cannot demonstrate that GOD is an illusion any more than you can demonstrate that AI is real.
What is really absurd is that philosophers have not even answered the question of what knowledge can exist (Problem of the Criterion), so how would one ever expect an AI to have knowledge if man himself has not even realized the epistemological foundation for knowledge?

I note that Meno's paradox applies to the learning and storage of knowledge in machines just like it does in man:
A machine cannot search either for what it knows or for what it does not know. It cannot search for what it knows--since it knows it, there is no need to search--nor for what it does not know, for it does not know what to look for.

I myself think about a database consisting of facts and measures (so-called "givens" or "data"): if you know the content of the database then you have no need to search the records; if you need more data to complete your knowledge then you have no way to acquire facts that you don't have "given" to you.
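That database picture can be made concrete in a few lines of Python; the facts and queries below are placeholders, and the only point is that a closed store of givens cannot produce an answer it was never given:

Code:
# A closed store of "givens": it can only ever answer with what was put into it.
givens = {
    "capital_of_france": "Paris",
    "boiling_point_of_water_c": 100,
}

def lookup(key):
    if key in givens:
        # Already known: no genuine "search" was needed, the answer was possessed all along.
        return givens[key]
    # Not known: nothing inside the store says what the answer is,
    # or even what finding it would look like.
    return None

print(lookup("capital_of_france"))   # Paris
print(lookup("mass_of_electron"))    # None - the system cannot generate a missing given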

Instead of artificial intelligence, I would use a phrase that I heard from a philosophy professor who was an expert on Plato: angelic intuition.
I advise you check out my latest posts and sources to get a better grasp on the situation at hand, especially with regards to the so-called "GOD question".
https://bitcointalksearch.org/topic/m.15532145
member
Activity: 163
Merit: 10
The revolutionary trading ecosystem
The gap between our best computers and the brain of a child is like the difference between a drop of water and the Pacific Ocean.
-Brainy Science Guy

It will not be true for long. Computers are evolving. AI systems are getting better and better every day.


In the future it will be:

The gap between our best computers and the brain of a child is like the difference between APM 08279+5255 and the Pacific Ocean.
legendary
Activity: 1260
Merit: 1115
The gap between our best computers and the brain of a child is like the difference between a drop of water and the Pacific Ocean.
-Brainy Science Guy
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should

Never trust a journalist (even from the Economist) when you have experts saying the contrary:

"Overall, a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)".
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials (deserves the time reading it, even if he is too optimistic about AI dangers: no one likes to see their work qualified as an existential menace).

Watson from IBM winning Jeopardy, and fooling students by passing as a teacher, was something to think about.

The Turing test says that an AI is intelligent when it is able to engage in a conversation with us, passing as human. They are getting close.

P. S. You can use your mind on more important issues than arguing for the existence of god. But you can always vote for the last option Wink

A brain shouldn't be wasted on absurd stands, even when we really want that stand to be true.
