
Topic: The technological singularity - page 3. (Read 766 times)

copper member
Activity: 224
Merit: 14
October 02, 2018, 04:09:09 AM
#10
I read a few years ago that researchers put 300,000 rat neurons into a robot and, seemingly out of pure boredom, the robot started doing things of its own accord.. consciousness in some form was born.. Can't we set aside the moral dilemma about what is defined as 'life' and start really exploring what happens if we put 30,000,000 human neurons into a robot?

HYBROT


Won't organic neurons, bionic CPUs and programmable DNA be the best way to spawn AI and its artificial general intelligence (AGI)/consciousness.. and, in turn, the answer to the question/concerns about how AI will react to us.. will be that we are nothing more than the antiquated ancestors of the next organic/semiconductor evolutionary step.

Surely that puts to rest the fears of AGI being our demise.
legendary
Activity: 2926
Merit: 1386
October 01, 2018, 04:36:36 PM
#9
....everything we can perceive is made of the same consciousness and, because of that, EVERYTHING has consciousness. (If you want to hear him talk about this specific topic, watch this video at minute 9:12: https://www.youtube.com/watch?v=owppju3jwPE&t=515s)

It may sound crazy from a materialistic/traditional-science point of view, but even the most advanced scientific understanding that we have nowadays (which is quantum mechanics) may turn out to provide evidence that it is entirely possible.

My consciousness says your consciousness is wrong, and what my consciousness knows about QM says there's zero relationship between QM principles or theory and consciousness.

But it's worth noting that if AI develops consciousness here within 50 years or less, then it has already done so in numerous other places in the universe, a long time ago.
jr. member
Activity: 56
Merit: 9
October 01, 2018, 03:35:33 PM
#8
I would guess closer to 50 years from now myself, maybe 2070. Mostly because science has no idea where consciousness comes from. A programmer can create a neural net that acts like a brain, but how do you give it consciousness and free will?

Your consciousness is not the brain itself, but rather "the watcher" or "the decider", aka "the man behind the curtain"... how do you give such an aspect to a computer?

Some people think that once a neural network becomes large enough it could become self-aware spontaneously.  If this happened it could be a lot sooner than 50 years.  Though, I don't see how you could even tell it was self-aware unless it had the ability to reprogram itself.

A self-aware AI with the ability to reprogram itself would evolve faster than people could keep up. Soon we would not understand its code, even if it allowed us to view it. This sounds like the "singularity" you refer to. You can't put that genie back in the bottle once it is out...

You are touching directly on the important point here: the theme of consciousness, and how far we are from creating not artificial intelligence (because that has already been created) but artificial CONSCIOUSNESS, which basically means a mechanism that is able to reflect on its own condition and is capable of learning and re-organizing itself to overcome its obstacles.

That's the real philosophical deal in this field, which was explored by Philip K. Dick in his novel "Do Androids Dream of Electric Sheep?" and in Ridley Scott's movie "Blade Runner", which is an adaptation of that novel.

I like the position that Ben Goertzel (probably the greatest AI developer in history) takes: he sees consciousness not as a consequence of a complex interconnected web of neurons but as the most fundamental 'substance' of which all of reality is made.
For him, the question of whether a robot has consciousness or not doesn't really arise, because he believes that everything we see, touch and experience is just an expression of (the fundamental) consciousness, just in different degrees and shapes. So everything we can perceive is made of the same consciousness and, because of that, EVERYTHING has consciousness. (If you want to hear him talk about this specific topic, watch this video at minute 9:12: https://www.youtube.com/watch?v=owppju3jwPE&t=515s)

It may sound crazy from a materialistic/traditional-science point of view, but even the most advanced scientific understanding that we have nowadays (which is quantum mechanics) may turn out to provide evidence that it is entirely possible.
jr. member
Activity: 114
Merit: 2
October 01, 2018, 03:15:42 PM
#7
We're going to join AI's capabilities with things like Musk's Neuralink implanted in our brains within the next ten years. This could potentially be used as mind-uploading tech (a brain scan), allowing us to join the cloud and remain immortal in virtual space.

We're already getting acquainted with AI in our daily lives thanks to things like virtual agents..
full member
Activity: 574
Merit: 152
October 01, 2018, 09:21:17 AM
#6
As soon as artificial general intelligence (AGI) is created, the technological singularity is only weeks away.

The difficulty lies in creating general artificial intelligence. Application-specific intelligence is a lot easier than general intelligence.
legendary
Activity: 2926
Merit: 1386
October 01, 2018, 09:17:09 AM
#5
The technological singularity: how far are we off from the 2045 date when this will all come together....

I don't know.

I will ask Google.
full member
Activity: 574
Merit: 108
October 01, 2018, 09:16:30 AM
#4
Computers will not become as smart as humans, because they already are. Computers are far more capable of doing things conveniently and easily than humans are. They are faster and more accurate at calculations and the like, and they can work on things in a versatile and extraordinary manner. However, despite computers' many advantages over humans, we cannot deny the fact that computers were created by humans, and there can be no computers without the existence of human minds. From this we can infer that human minds develop naturally, without being programmed by others, while a computer's capabilities cannot grow unless it is programmed by humans.
hero member
Activity: 798
Merit: 722
October 01, 2018, 08:49:53 AM
#3
...computers will be as smart as humans by 2029

I hate to break it to you, but computers are already smarter than humans.  Deep Blue beat the world chess champion in 1997.  More recently, deep learning AI has been able to beat humans at Go, a more complex game than chess.

https://en.wikipedia.org/wiki/Computer_Go#2015_onwards:_The_deep_learning_era
Quote
In October 2015, Google DeepMind program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions.
In March 2016, AlphaGo beat Lee Sedol in the first three of five matches.
In May 2017, AlphaGo beat Ke Jie, who at the time was ranked top in the world, in a three-game match during the Future of Go Summit.
In October 2017, DeepMind revealed a new version of AlphaGo, trained only through self play, that had surpassed all previous versions, beating the Ke Jie version in 89 out of 100 games.

A well-designed AI is better than a human at nearly anything these days, but each one is very task-specific. AlphaGo is designed to play a game called Go; it would be horrible at doing anything else.
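
To make that narrowness concrete, here is a toy sketch (my own illustration, not anything from the thread or from DeepMind): a perfect tic-tac-toe player built with plain minimax search. It is unbeatable at its one game only because the board, the legal moves and the win conditions are hard-coded into it; point it at any other task and it is useless, which is the same task-specificity described above for AlphaGo.

Code:
# Toy illustration (hypothetical, not from the thread): a perfect
# tic-tac-toe player via exhaustive minimax search. The game's rules
# are baked into the code, so it cannot do anything else.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # Return 'X' or 'O' if a line is completed, else None.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (best score, best move) for `player`; 'X' maximises."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

board = list('XX OO    ')       # X to move, with an immediate win at cell 2
print(minimax(board, 'X'))      # prints (1, 2): guaranteed win by playing cell 2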


I assume the dates of 2029 and 2045 are guesses as to when AI will achieve consciousness.

I would guess closer to 50 years from now myself, maybe 2070. Mostly because science has no idea where consciousness comes from. A programmer can create a neural net that acts like a brain, but how do you give it consciousness and free will?

Your consciousness is not the brain itself, but rather "the watcher" or "the decider", aka "the man behind the curtain"... how do you give such an aspect to a computer?

Some people think that once a neural network becomes large enough it could become self-aware spontaneously.  If this happened it could be a lot sooner than 50 years.  Though, I don't see how you could even tell it was self-aware unless it had the ability to reprogram itself.

A self-aware AI with the ability to reprogram itself would evolve faster than people could keep up. Soon we would not understand its code, even if it allowed us to view it. This sounds like the "singularity" you refer to. You can't put that genie back in the bottle once it is out...
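
On the neural-net point above, here is a minimal sketch (my own toy example with made-up sizes and learning rate, not anything from the thread) of the sort of thing a programmer can write in an afternoon: a tiny two-layer network trained by backpropagation to compute XOR, using plain numpy. It "acts like a brain" only in the loosest sense; it is repeated matrix arithmetic, and nothing in it even hints at consciousness or free will.

Code:
# Minimal sketch (hypothetical example): a 2 -> 8 -> 1 sigmoid network
# learning XOR by backpropagation with plain numpy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: two layers of weighted sums squashed by sigmoids.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the prediction error back through both layers.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

# After training, the outputs should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))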
legendary
Activity: 2702
Merit: 1468
October 01, 2018, 07:20:27 AM
#2
The technological singularity: how far are we off from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029 - according to Ray Kurzweil?

What are your thoughts, guys?

I think progress in AI will not be linear, but 2029 might be too optimistic.

Quantum computing will play a role in AI achieving supremacy over humans.  

Humans are bad at processing large sets of data.

People who are skeptical about AI do not understand AI.

copper member
Activity: 224
Merit: 14
October 01, 2018, 04:19:20 AM
#1
The technological singularity: how far are we off from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029 - according to Ray Kurzweil?

What are your thoughts, guys?