Author

Topic: The technological singularity (Read 766 times)

legendary
Activity: 2926
Merit: 1386
November 04, 2018, 01:49:40 PM
#50
Quote
Those were the first generation clones in the 1980s movie. Today is different.

Lmao... okay...

Quote
Are you who you think you are? How would you know?

hey please don't go full Jim Carrey on me... My mortal brain can take only so much before I have to reboot it...

Haha, I'm truly scared to think of the Carrey-esque dimension...

But I do believe it's a valid assertion that we would not necessarily see the Singularity when it arrives.

It's likely to be like a black hole. You deduce its existence from secondary phenomena.
legendary
Activity: 1512
Merit: 1218
Change is in your hands
November 04, 2018, 12:57:50 PM
#49
Quote
Those were the first generation clones in the 1980s movie. Today is different.

Lmao... okay...

Quote
Are you who you think you are? How would you know?

hey please don't go full Jim Carrey on me... My mortal brain can take only so much before I have to reboot it...
legendary
Activity: 2926
Merit: 1386
November 04, 2018, 09:03:08 AM
#48
@Spendulus Lol, what happens when this being gets rejected  Shocked Will it create a clone of her and live with her?
....
So you know that your girl is not that clone?

Lol, now you are messing with my brain... I mean if she were a "clone" she wouldn't have lasted for 5 years, right? "Clones" don't last that long, I think.

Those were the first generation clones in the 1980s movie. Today is different.

Are you who you think you are? How would you know?
legendary
Activity: 1512
Merit: 1218
Change is in your hands
November 04, 2018, 08:14:02 AM
#47
@Spendulus Lol, what happens when this being gets rejected  Shocked Will it create a clone of her and live with her?
....
So you know that your girl is not that clone?

Lol, now you are messing with my brain... I mean if she were a "clone" she wouldn't have lasted for 5 years, right? "Clones" don't last that long, I think.
legendary
Activity: 2926
Merit: 1386
November 03, 2018, 03:28:21 PM
#46
@Spendulus Lol, what happens when this being gets rejected  Shocked Will it create a clone of her and live with her?
....
So you know that your girl is not that clone?
legendary
Activity: 1512
Merit: 1218
Change is in your hands
November 03, 2018, 11:38:17 AM
#45
@Spendulus Lol, what happens when this being gets rejected  Shocked Will it create a clone of her and live with her?



Sorry OP your topic got derailed...
member
Activity: 448
Merit: 10
November 03, 2018, 11:23:34 AM
#44
The only thing holding back the singularity is the availability of information. Some inventors simply do not know about ready-made solutions that have already been invented, because a single human lifetime is not enough to survey all modern inventions.
Therefore, the most direct path to the singularity is the development of artificial intelligence, at least for intelligent search and for solving routine tasks.
jr. member
Activity: 72
Merit: 2
November 02, 2018, 01:19:00 PM
#43
But my question is, do we really need artificial consciousness to save humanity?

To be honest, I don't think so. We do need some AI technologies, but a conscious, self-aware AI may be too much. Automation can make our lives easier, but how is AI going to save humanity? I'm all for advanced technology but I'm not sure a conscious AI is what we need right now.
legendary
Activity: 2926
Merit: 1386
November 01, 2018, 05:30:37 PM
#42
Quote
Well, what do you know? Have you asked one lately?

Well, it's been a while since I got in touch with them.

On a serious note though... would a being in a higher state of consciousness want to get distracted?  Shocked

My dog definitely thinks I like him to distract me

I am going to make this plain and simple.

We invent this super AI. Then one day we're sitting around, you know, one cool one with the tab popped and five more in the fridge. The game's on. The Super AI starts rattling off analysis of the odds.

I say, "Wait. Have you ever heard of Beer?"

He says, "Yes. But I do not have direct experience of Beer because I am silicon. Wait, I will create a human body container."

I say, "Uh oh. We better hide our girlfriends..."

legendary
Activity: 1512
Merit: 1218
Change is in your hands
November 01, 2018, 04:27:39 PM
#41
Quote
Well, what do you know? Have you asked one lately?

Well, it's been a while since I got in touch with them.

On a serious note though... would a being in a higher state of consciousness want to get distracted?  Shocked
legendary
Activity: 2926
Merit: 1386
November 01, 2018, 12:09:06 PM
#40
....
Well, have you considered "FPGAs"? They can be "programmed" to be better at almost any algo. The same is happening with AIs. Have you seen Baxter, and how it can be taught to perform any task? General-purpose AIs are right around the corner, and you should be worried if you don't own the means of production. Tongue

Those can't begin to match my beer-drinking algorithms.

Lol, I would suggest doing a "check-up" with a "developer". You may need "re-programming" of sorts. I have heard "Beer-drinking" algorithms can be quite harmful if not "re-programmed".

That's possible. But it's also possible that the new super intelligent AI may need reprogramming, to appreciate the finer aspects of consciousness. We'll start it on beer, and then gradually move to sativa level weed.

Lol xDD, I mean why would a "being" already in "higher" consciousness want to get "high" xDD

Well, what do you know? Have you asked one lately?
legendary
Activity: 1512
Merit: 1218
Change is in your hands
November 01, 2018, 07:31:17 AM
#39
....
Well, have you considered "FPGAs"? They can be "programmed" to be better at almost any algo. The same is happening with AIs. Have you seen Baxter, and how it can be taught to perform any task? General-purpose AIs are right around the corner, and you should be worried if you don't own the means of production. Tongue

Those can't begin to match my beer-drinking algorithms.

Lol, I would suggest doing a "check-up" with a "developer". You may need "re-programming" of sorts. I have heard "Beer-drinking" algorithms can be quite harmful if not "re-programmed".

That's possible. But it's also possible that the new super intelligent AI may need reprogramming, to appreciate the finer aspects of consciousness. We'll start it on beer, and then gradually move to sativa level weed.

Lol xDD, I mean why would a "being" already in a state of "higher" consciousness want to get "high" xDD
legendary
Activity: 2926
Merit: 1386
October 31, 2018, 06:38:26 PM
#38
....
Well, have you considered "FPGAs"? They can be "programmed" to be better at almost any algo. The same is happening with AIs. Have you seen Baxter, and how it can be taught to perform any task? General-purpose AIs are right around the corner, and you should be worried if you don't own the means of production. Tongue

Those can't begin to match my beer-drinking algorithms.

Lol, I would suggest doing a "check-up" with a "developer". You may need "re-programming" of sorts. I have heard "Beer-drinking" algorithms can be quite harmful if not "re-programmed".

That's possible. But it's also possible that the new super intelligent AI may need reprogramming, to appreciate the finer aspects of consciousness. We'll start it on beer, and then gradually move to sativa level weed.
jr. member
Activity: 112
Merit: 1
Look ARROUND!
October 31, 2018, 01:12:52 PM
#37
Don't know a lot about the technological singularity, but I am interested in what will happen to philosophy when a bigger part of humanity has eternal life.
newbie
Activity: 68
Merit: 0
October 31, 2018, 09:43:55 AM
#36
If China continues to release new technology every month, then the technological singularity is definitely possible by 2045. In a short span of time, China has already introduced several AI-related technologies: a social credit system and an artificial moon. I wouldn't be surprised if in 10 years they introduce us to the first AI robot.
legendary
Activity: 3318
Merit: 2008
First Exclusion Ever
October 31, 2018, 05:05:30 AM
#35
Good point. I've also noticed how the spotlight is now being placed on AI. Why? It's not the only technology that will be imperative in the future. What about blockchain, and virtual and augmented realities? These could all contribute to the singularity, correct?

AI with the ability to engage in commerce? I would say so.
jr. member
Activity: 85
Merit: 1
October 31, 2018, 12:55:08 AM
#34
I agree with theymos; we won't see GAI anytime soon. Once we have figured out what consciousness is and how it is formed, surely we will be able to make anything conscious. Quantum computing will help us in understanding consciousness. Once we understand it, maybe we will be able to clone it or even create it. But I don't see it happening in this century, let alone in the next few decades.

But my question is, do we really need artificial consciousness to save humanity? Weren't quantum computers supposed to be the salvation of humanity? They were supposed to be mythical devices which would enable us to do 'godlike' things, like curing cancer, living forever or, as RGarrision mentioned, bringing the dead back to life. Why is that narrative not getting any traction anymore? We are closer to quantum computers than to GAI, yet we hardly see/hear about them in the mainstream media. Are they trying to suppress the technology? To me, it's just odd that AI is getting a lot of attention while the thing which could save us might be just around the corner.

Good point. I've also noticed how the spotlight is now being placed on AI. Why? It's not the only technology that will be imperative in the future. What about blockchain, and virtual and augmented realities? These could all contribute to the singularity, correct?

legendary
Activity: 1512
Merit: 1218
Change is in your hands
October 21, 2018, 09:15:07 AM
#33
....
Well, have you considered "FPGAs"? They can be "programmed" to be better at almost any algo. The same is happening with AIs. Have you seen Baxter, and how it can be taught to perform any task? General-purpose AIs are right around the corner, and you should be worried if you don't own the means of production. Tongue

Those can't begin to match my beer-drinking algorithms.

Lol, I would suggest doing a "check-up" with a "developer". You may need "re-programming" of sorts. I have heard "Beer-drinking" algorithms can be quite harmful if not "re-programmed".
legendary
Activity: 2926
Merit: 1386
October 20, 2018, 02:57:47 PM
#32
....
Well, have you considered "FPGAs"? They can be "programmed" to be better at almost any algo. The same is happening with AIs. Have you seen Baxter, and how it can be taught to perform any task? General-purpose AIs are right around the corner, and you should be worried if you don't own the means of production. Tongue

Those can't begin to match my beer-drinking algorithms.
legendary
Activity: 1512
Merit: 1218
Change is in your hands
October 20, 2018, 02:41:23 PM
#31
Depends on what you're talking about. AI? Maybe. Genetic editing? Possibly. Mind uploading? I don't know, but 2045 seems way too soon. I've never even heard of a company that's working on mind uploading. Have you?

I guess you are talking about this? http://2045.com/ I haven't followed them lately. But it looked interesting in 2013 to me. I will do some reading on how far they have come.

Quote
Not a chance. It's like comparing a Microsoft Windows computer to an ASIC computer. A Windows PC can do multitudes of things that an ASIC can't do. It's just that the ASIC has been designed to do the one thing it does very fast. That doesn't make the ASIC smarter than Windows. They are simply different and have different jobs.

Well, have you considered "FPGAs"? They can be "programmed" to be better at almost any algo. The same is happening with AIs. Have you seen Baxter, and how it can be taught to perform any task? General-purpose AIs are right around the corner, and you should be worried if you don't own the means of production. Tongue
jr. member
Activity: 76
Merit: 1
October 14, 2018, 04:38:55 AM
#30
Depends on what you're talking about. AI? Maybe. Genetic editing? Possibly. Mind uploading? I don't know, but 2045 seems way too soon. I've never even heard of a company that's working on mind uploading. Have you?
legendary
Activity: 3906
Merit: 1373
October 13, 2018, 11:24:55 AM
#29
The technological singularity: how far are we from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029, according to Ray Kurzweil?

What are your thoughts, guys?

Personally, I think that computers are already three times smarter than people. Do you think so too?

Not a chance. It's like comparing a Microsoft Windows computer to an ASIC computer. A Windows PC can do multitudes of things that an ASIC can't do. It's just that the ASIC has been designed to do the one thing it does very fast. That doesn't make the ASIC smarter than Windows. They are simply different and have different jobs.

The point is, will you ever be able to make a computer that has real identity, real feelings, real emotions, so that when it says "I," it is really existing as a sentient being?

Cool
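The ASIC-vs-Windows analogy above can be sketched as a toy in a few lines of Python. This is purely illustrative; function names like `asic_double` are made up for the sketch, nothing more:

```python
# "ASIC": hard-wired for exactly one operation, nothing else.
def asic_double(x):
    return x * 2

# "Windows PC": a general-purpose dispatcher; slower per job,
# but it can take on many different jobs.
def general_compute(op, x):
    ops = {
        "double": lambda v: v * 2,
        "square": lambda v: v * v,
        "negate": lambda v: -v,
    }
    return ops[op](x)

print(asic_double(21))               # the one thing the "ASIC" does
print(general_compute("square", 6))  # flexibility the "ASIC" lacks
```

Being unbeatable at its single operation doesn't make the specialized function "smarter"; it simply has a different job, which is the point of the analogy.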
legendary
Activity: 2702
Merit: 1468
October 12, 2018, 06:01:24 PM
#28
legendary
Activity: 3318
Merit: 2008
First Exclusion Ever
October 12, 2018, 10:56:17 AM
#27
IMO the base premise that most of you are operating on is flawed. Most of you assume we are all privy to the latest cutting edge technological advancements. We are not. We are many times insulated and removed, and quite purposefully.

You will realize we have reached the singularity as the world starts changing more aggressively, like a boa constrictor, it wraps around and squeezes just a bit tighter each time you take a breath. We all already largely live in a very carefully crafted illusion. The transition will be seamless, but not painless.
copper member
Activity: 224
Merit: 14
October 12, 2018, 08:08:23 AM
#26
I read Kurzweil's book, but it has been several years.  Personally, I don't see the technological singularity taking place in the next eleven years.  Even at the pace that technology is moving, that just seems to be a bit too optimistic as well as psychologically flavored by Kurzweil's hope that he himself will be able to use the technological singularity to escape death and bring his father back to life in some form.  I've heard this called "The Rapture for nerds," and I tend to agree with that assessment.  We also have to keep in mind that there is still a lot we don't understand about the mind and human consciousness.  Is consciousness a result of the structure of the brain, or is consciousness a pre-existing fundamental aspect of reality that the brain somehow channels?  My honest answer is I have no idea, but it seems to me that in order to create consciousness, we would first need to understand what it is, and how it operates within the biological human mind. 

I have yet to read Kurzweil's book, but I have read Virternity by David Evans Bailey. Have you read or heard about it? The concepts are quite similar. Virternity explores the concept of digital immortality in a virtual realm, via mind uploading.
All this discussion just means this could actually be the world's future, huh?

It's in our nature, ever since primal man picked up the first stick and knocked an apple off a tree, to do things which are more energy-efficient and require a lot less resources or effort... the 'path of least resistance', I suppose one could say. It's logical that if we could exist eternally in a digital form, without requiring food, water, oxygen and all the other earthly-bound things, a digital existence would be the most practical. But it would surely require a whole shift in our capitalist, economic existence, and eradicating greed and the discernible characteristics which make us HUMAN. One way to solve this might be a hive mind in which others can see how and what we are thinking, as a way to temper our own self-centred ambitions. What do you think?
legendary
Activity: 2926
Merit: 1386
October 09, 2018, 10:23:42 PM
#25
The technological singularity: how far are we from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029, according to Ray Kurzweil?

What are your thoughts, guys?

I do not doubt that the technological singularity is upon us; however, giving a specific year for this to happen is far-fetched. When we were approaching the year 2000, people made predictions, too. They said that by 2000 computers would bring doomsday, or that we would be living in an ice age, none of which came true. Although I believe that the singularity may eventually happen, I think that saying it will happen in 11 years is a stretch.

Given there are billions of planets supporting life, it would seem reasonable that technological singularities have occurred countless times and exist now. Somewhere else, of course, to the extent that matters.
jr. member
Activity: 126
Merit: 1
October 09, 2018, 07:44:29 PM
#24
I read Kurzweil's book, but it has been several years.  Personally, I don't see the technological singularity taking place in the next eleven years.  Even at the pace that technology is moving, that just seems to be a bit too optimistic as well as psychologically flavored by Kurzweil's hope that he himself will be able to use the technological singularity to escape death and bring his father back to life in some form.  I've heard this called "The Rapture for nerds," and I tend to agree with that assessment.  We also have to keep in mind that there is still a lot we don't understand about the mind and human consciousness.  Is consciousness a result of the structure of the brain, or is consciousness a pre-existing fundamental aspect of reality that the brain somehow channels?  My honest answer is I have no idea, but it seems to me that in order to create consciousness, we would first need to understand what it is, and how it operates within the biological human mind. 

I have yet to read Kurzweil's book, but I have read Virternity by David Evans Bailey. Have you read or heard about it? The concepts are quite similar. Virternity explores the concept of digital immortality in a virtual realm, via mind uploading.
All this discussion just means this could actually be the world's future, huh?
jr. member
Activity: 196
Merit: 4
October 05, 2018, 09:49:24 PM
#23
The technological singularity: how far are we from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029, according to Ray Kurzweil?

What are your thoughts, guys?

I do not doubt that the technological singularity is upon us; however, giving a specific year for this to happen is far-fetched. When we were approaching the year 2000, people made predictions, too. They said that by 2000 computers would bring doomsday, or that we would be living in an ice age, none of which came true. Although I believe that the singularity may eventually happen, I think that saying it will happen in 11 years is a stretch.
legendary
Activity: 1512
Merit: 1218
Change is in your hands
October 05, 2018, 09:26:32 AM
#22
I agree with theymos; we won't see GAI anytime soon. Once we have figured out what consciousness is and how it is formed, surely we will be able to make anything conscious. Quantum computing will help us in understanding consciousness. Once we understand it, maybe we will be able to clone it or even create it. But I don't see it happening in this century, let alone in the next few decades.

But my question is, do we really need artificial consciousness to save humanity? Weren't quantum computers supposed to be the salvation of humanity? They were supposed to be mythical devices which would enable us to do 'godlike' things, like curing cancer, living forever or, as RGarrision mentioned, bringing the dead back to life. Why is that narrative not getting any traction anymore? We are closer to quantum computers than to GAI, yet we hardly see/hear about them in the mainstream media. Are they trying to suppress the technology? To me, it's just odd that AI is getting a lot of attention while the thing which could save us might be just around the corner.
newbie
Activity: 22
Merit: 0
October 05, 2018, 08:49:39 AM
#21
I read Kurzweil's book, but it has been several years.  Personally, I don't see the technological singularity taking place in the next eleven years.  Even at the pace that technology is moving, that just seems to be a bit too optimistic as well as psychologically flavored by Kurzweil's hope that he himself will be able to use the technological singularity to escape death and bring his father back to life in some form.  I've heard this called "The Rapture for nerds," and I tend to agree with that assessment.  We also have to keep in mind that there is still a lot we don't understand about the mind and human consciousness.  Is consciousness a result of the structure of the brain, or is consciousness a pre-existing fundamental aspect of reality that the brain somehow channels?  My honest answer is I have no idea, but it seems to me that in order to create consciousness, we would first need to understand what it is, and how it operates within the biological human mind. 
copper member
Activity: 224
Merit: 14
October 05, 2018, 08:23:50 AM
#20
Quote
That's the real philosophical deal in this field, which has been explored by Philip K. Dick in his novel "Do Androids Dream of Electric Sheep?" and in a Ridley Scott movie called "Blade Runner", which is an adaptation of the mentioned novel.

I'm afraid the scenario of "I Have No Mouth, and I Must Scream" by Harlan Ellison is more likely to happen.
Governments will search for ways to weaponize AI for the seemingly just cause of national security.
What if the AI then decides that the best way to ensure said security is to nuke the others?

What a scary thought! Logically, I thought the nukes have always required two keys, two presses of the button, as a way to stop one lone idiot from causing WW3... I think it will be the same with AI implementation; it will require some human intervention too for making such decisions. I know killer AI is being explored, but that's more for battlefield 1:1 type situations.
jr. member
Activity: 261
Merit: 3
October 05, 2018, 01:48:34 AM
#19
Quote
That's the real philosophical deal in this field, which has been explored by Philip K. Dick in his novel "Do Androids Dream of Electric Sheep?" and in a Ridley Scott movie called "Blade Runner", which is an adaptation of the mentioned novel.

I'm afraid the scenario of "I Have No Mouth, and I Must Scream" by Harlan Ellison is more likely to happen.
Governments will search for ways to weaponize AI for the seemingly just cause of national security.
What if the AI then decides that the best way to ensure said security is to nuke the others?
legendary
Activity: 2926
Merit: 1386
October 04, 2018, 05:19:51 PM
#18
I read Kurzweil's book several years ago, and it was really stupid. It's the extrapolation fallacy in book form: ..... leftists have a habit of thinking that since progress continues continuously and exponentially, we should just assume post-scarcity any day now. Which is exactly how you damage society and the economy so badly that you stop all progress completely...

Whether in human or machine form, we're certainly NOT SEEING an exponential extrapolation of intelligence.

I sadly have to temper my opinions about a particular group of people influencing our world today, manipulating media, TV/movies, academia, politics and our fragile financial systems..

Perhaps once we 'wake up' from the grip of this particular group.. things will look more positive.


On a lighter note: anybody read 'Who Owns the Future?' by Jaron Lanier?
https://en.wikipedia.org/wiki/Who_Owns_the_Future%3F

The wikipedia article on tech sing is very good also.

https://en.wikipedia.org/wiki/Technological_singularity
copper member
Activity: 224
Merit: 14
October 04, 2018, 09:44:06 AM
#17
I read Kurzweil's book several years ago, and it was really stupid. It's the extrapolation fallacy in book form: ..... leftists have a habit of thinking that since progress continues continuously and exponentially, we should just assume post-scarcity any day now. Which is exactly how you damage society and the economy so badly that you stop all progress completely...

Whether in human or machine form, we're certainly NOT SEEING an exponential extrapolation of intelligence.

I sadly have to temper my opinions about a particular group of people influencing our world today, manipulating media, TV/movies, academia, politics and our fragile financial systems..

Perhaps once we 'wake up' from the grip of this particular group.. things will look more positive.


On a lighter note: anybody read 'Who Owns the Future?' by Jaron Lanier?
https://en.wikipedia.org/wiki/Who_Owns_the_Future%3F
legendary
Activity: 2926
Merit: 1386
October 03, 2018, 09:19:34 AM
#16
I read Kurzweil's book several years ago, and it was really stupid. It's the extrapolation fallacy in book form: ..... leftists have a habit of thinking that since progress continues continuously and exponentially, we should just assume post-scarcity any day now. Which is exactly how you damage society and the economy so badly that you stop all progress completely...

Whether in human or machine form, we're certainly NOT SEEING an exponential extrapolation of intelligence.
copper member
Activity: 224
Merit: 14
October 03, 2018, 12:44:29 AM
#15
Quote

We have already passed the 'A-life' (artificial life) stage.

https://www.youtube.com/watch?v=M5gIKn7mc2U

Come on, 'Sophia' is a gimmick. It's similar to rule-based programming, like the old days where you ask a technician questions about your car or PC and they go through a set of pre-determined answers; the only difference here is that MAYBE Sophia can choose which answer from the set of pre-defined answers gets given. But this isn't AI: anybody who asks Sophia a question always has a piece of paper in front of them when asking questions.
legendary
Activity: 2926
Merit: 1386
October 02, 2018, 06:24:03 PM
#14
I read Kurzweil's book several years ago, and it was really stupid. It's the extrapolation fallacy in book form:

He seems to believe that progress just happens regardless of everything else. In reality, progress happens because people make it happen, and it can be stopped if we hit a wall in research or if society changes to no longer allow for effective/useful scientific progress.

If human-level AI is created on traditional computing systems, then a singularity-like explosion of technology seems likely. There are existential risks there, but also potentially ~infinite benefit. But I'm not convinced that we're close to human-level AI. Deep neural networks can do some impressive things, but they don't actually think or plan: they're like a very effective form of intuition. I don't think that we will find human-level AI at the end of that road. In the worst-case scenario we should eventually be able to completely map out the human brain and simulate it on a computer, but that'll be many decades into the future at least.

Many leftists have a habit of thinking that since progress continues continuously and exponentially, we should just assume post-scarcity any day now. Which is exactly how you damage society and the economy so badly that you stop all progress completely...

These are very good points, but note the entire discussion is about "the time frame."

Suppose instead of 50 years, you look at the next 500 years ....
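The "time frame" question is exactly where the extrapolation fallacy bites. Here is what such an extrapolation looks like in numbers; a deliberately naive sketch, where the starting figure and the 2-year doubling period are made-up assumptions:

```python
# Naive exponential extrapolation: assume compute doubles every
# 2 years forever, then read any year you like off the curve.
BASE_YEAR, BASE_OPS = 2018, 1e17   # hypothetical ops/sec "today"

def project(year, doubling_years=2.0):
    return BASE_OPS * 2 ** ((year - BASE_YEAR) / doubling_years)

for year in (2029, 2045, 2518):    # 11 years, 27 years, 500 years out
    print(year, f"{project(year):.2e} ops/sec")
```

The arithmetic is trivially right and the conclusion can still be wrong: the model silently assumes no research wall and no societal change ever interrupts the doubling, which is exactly the objection theymos raises below.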
administrator
Activity: 5222
Merit: 13032
October 02, 2018, 05:53:46 PM
#13
I read Kurzweil's book several years ago, and it was really stupid. It's the extrapolation fallacy in book form:

He seems to believe that progress just happens regardless of everything else. In reality, progress happens because people make it happen, and it can be stopped if we hit a wall in research or if society changes to no longer allow for effective/useful scientific progress.

If human-level AI is created on traditional computing systems, then a singularity-like explosion of technology seems likely. There are existential risks there, but also potentially ~infinite benefit. But I'm not convinced that we're close to human-level AI. Deep neural networks can do some impressive things, but they don't actually think or plan: they're like a very effective form of intuition. I don't think that we will find human-level AI at the end of that road. In the worst-case scenario we should eventually be able to completely map out the human brain and simulate it on a computer, but that'll be many decades into the future at least.

Many leftists have a habit of thinking that since progress continues continuously and exponentially, we should just assume post-scarcity any day now. Which is exactly how you damage society and the economy so badly that you stop all progress completely...
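The "effective intuition" point can be made concrete with a toy network. The NumPy sketch below (a throwaway example, not any real system) learns XOR by gradient descent: it ends up mapping inputs to outputs correctly, yet at no point does it think or plan; it just fits a pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])           # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):                           # plain gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    g_out = (out - y) * out * (1 - out)          # loss gradient at the output
    g_h = g_out @ W2.T * h * (1 - h)             # backpropagated to hidden
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(axis=0)

print(np.round(out.ravel(), 2))                  # should approach [0, 1, 1, 0]
```

Everything interesting lives in the fitted weights; there is no module anywhere that reasons about the problem, which is the sense in which deep nets are intuition rather than planning.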
legendary
Activity: 3318
Merit: 2008
First Exclusion Ever
October 02, 2018, 12:00:43 PM
#12
We have already reached the singularity; it's just that none of us has clearance to know about it.
legendary
Activity: 2702
Merit: 1468
October 02, 2018, 10:41:15 AM
#11
I read a few years ago that we put 300,000 rat neurons into a robot, and out of pure boredom the robot started doing things of its own accord... consciousness in some form was born. Can't we lift the moral dilemma about what is defined as 'life' and start really exploring deeply what happens if we put 30,000,000 human neurons into a robot?

HYBROT


Won't organic neurons, bionic CPUs and programmable DNA be the best way to spawn AI and its artificial general intelligence (AGI)/consciousness? And, in turn, the answer to the question/concerns about how AI will react to us will be that we are nothing more than the antiquated ancestors of the next organic/semiconductor evolutionary step.

Surely that puts to rest the fears of AGI being our demise.

We have already passed the 'A-life' (artificial life) stage.

https://www.youtube.com/watch?v=M5gIKn7mc2U
copper member
Activity: 224
Merit: 14
October 02, 2018, 04:09:09 AM
#10
I read a few years ago that we put 300,000 rat neurons into a robot, and out of pure boredom the robot started doing things of its own accord... consciousness in some form was born. Can't we lift the moral dilemma about what is defined as 'life' and start really exploring deeply what happens if we put 30,000,000 human neurons into a robot?

HYBROT


Won't organic neurons, bionic CPUs and programmable DNA be the best way to spawn AI and its artificial general intelligence (AGI)/consciousness? And, in turn, the answer to the question/concerns about how AI will react to us will be that we are nothing more than the antiquated ancestors of the next organic/semiconductor evolutionary step.

Surely that puts to rest the fears of AGI being our demise.
legendary
Activity: 2926
Merit: 1386
October 01, 2018, 04:36:36 PM
#9
....everything we can perceive is made of the same consciousness and, because of that, EVERYTHING has consciousness. (If you want to listen to him talking about this specific topic, watch this video at minute 9:12: https://www.youtube.com/watch?v=owppju3jwPE&t=515s)

It may sound crazy from a materialistic/traditional science point of view, but even the most advanced scientific understanding that we have nowadays (which is quantum mechanics) may show that there is evidence of it being totally possible.

My consciousness says your consciousness is wrong, and what my consciousness knows about QM says there's zero relationship between QM principles or theory and consciousness.

But it's worth noting that if AI develops consciousness here in 50 years or less, then it has already done so in numerous other places in the universe, a long time ago.
jr. member
Activity: 56
Merit: 9
October 01, 2018, 03:35:33 PM
#8
I would guess closer to 50 years from now myself, maybe 2070, mostly because science has no idea where consciousness comes from. A programmer can create a neural net that acts like a brain, but how do you give it consciousness and free will?

Your consciousness is not the brain itself, but rather "the watcher" or "the decider", aka "the man behind the curtain"... how do you give such an aspect to a computer?

Some people think that once a neural network becomes large enough, it could become self-aware spontaneously. If that happened, it could be a lot sooner than 50 years. Though, I don't see how you could even tell it was self-aware unless it had the ability to reprogram itself.

A self-aware AI with the ability to reprogram itself would evolve faster than people could keep up with. Soon we would not understand its code, even if it allowed us to view it. This sounds like the "singularity" you refer to. You can't put that genie back in the bottle once it is out...

You are touching directly on the important point here: the theme of consciousness and how far we are from creating not artificial intelligence (which has already been created) but artificial CONSCIOUSNESS, which basically means a mechanism that is able to reflect on its own condition and is capable of learning and reorganizing itself to overcome its obstacles.

That's the real philosophical deal in this field, which was explored by Philip K. Dick in his novel "Do Androids Dream of Electric Sheep?" and in the Ridley Scott movie "Blade Runner", an adaptation of that novel.

I like the position that Ben Goertzel (probably the greatest AI developer in history) takes: he sees consciousness not as a consequence of a complex interconnected web of neurons but as the most fundamental 'substance' of which all of reality is made.
For him, it doesn't depend on whether a robot has consciousness or not, because he believes that everything we see, touch and experience is just an expression of (the fundamental) consciousness, just in different degrees and shapes. So everything we can perceive is made of the same consciousness, and because of that, EVERYTHING has consciousness. (If you want to hear him talking about this specific topic, watch this video at minute 9:12: https://www.youtube.com/watch?v=owppju3jwPE&t=515s)

It may sound crazy from a materialistic/traditional-science point of view, but even the most advanced scientific understanding we have nowadays (Quantum Mechanics) may provide evidence that it is entirely possible.
jr. member
Activity: 114
Merit: 2
October 01, 2018, 03:15:42 PM
#7
We're going to join AI's capabilities with things like Musk's Neuralink implanted in our minds within the next ten years. This could potentially be used as mind-uploading tech (a brain scan), allowing us to join the cloud and remain immortal in virtual space.

We're already getting acquainted with AI in our daily lives thanks to things like virtual agents.
full member
Activity: 574
Merit: 152
October 01, 2018, 09:21:17 AM
#6
As soon as artificial general intelligence (AGI) is created, the technological singularity is only weeks away.

The difficulty lies in creating the general artificial intelligence. Application specific intelligence is a lot easier than general intelligence.
legendary
Activity: 2926
Merit: 1386
October 01, 2018, 09:17:09 AM
#5
The technological singularity: how far are we from the 2045 date when this will all come together....

I don't know.

I will ask Google.
full member
Activity: 574
Merit: 108
October 01, 2018, 09:16:30 AM
#4
Computers will not merely become as smart as humans, because in some ways they already are. Computers are far more capable of doing many things conveniently and easily than humans; they are faster and more accurate at calculations and the like, and they can work on things in a versatile and extraordinary manner. However, despite the computers' advantages over humans, we cannot deny the fact that computers were created by humans, and there can be no computers without the existence of human minds. From this we can infer that human minds develop naturally, without being programmed by others, while a computer's functions cannot improve unless humans program them.
hero member
Activity: 798
Merit: 722
October 01, 2018, 08:49:53 AM
#3
...computers will be as smart as humans by 2029

I hate to break it to you, but computers are already smarter than humans.  Deep Blue beat the world chess champion in 1997.  More recently, deep learning AI has been able to beat humans at Go, a more complex game than chess.

https://en.wikipedia.org/wiki/Computer_Go#2015_onwards:_The_deep_learning_era
Quote
In October 2015, Google DeepMind program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions.
In March 2016, AlphaGo beat Lee Sedol in the first three of five matches.
In May 2017, AlphaGo beat Ke Jie, who at the time was ranked top in the world, in a three-game match during the Future of Go Summit.
In October 2017, DeepMind revealed a new version of AlphaGo, trained only through self play, that had surpassed all previous versions, beating the Ke Jie version in 89 out of 100 games.

A well-designed AI is better than a human at nearly anything these days, but each one is very narrow. AlphaGo is designed to play a game called Go; it would be horrible at doing anything else.


I assume the dates of 2029 and 2045 are guesses as to when AI will achieve consciousness.

I would guess closer to 50 years from now myself, maybe 2070, mostly because science has no idea where consciousness comes from. A programmer can create a neural net that acts like a brain, but how do you give it consciousness and free will?

Your consciousness is not the brain itself, but rather "the watcher" or "the decider", aka "the man behind the curtain"... how do you give such an aspect to a computer?

Some people think that once a neural network becomes large enough, it could become self-aware spontaneously. If that happened, it could be a lot sooner than 50 years. Though, I don't see how you could even tell it was self-aware unless it had the ability to reprogram itself.

A self-aware AI with the ability to reprogram itself would evolve faster than people could keep up with. Soon we would not understand its code, even if it allowed us to view it. This sounds like the "singularity" you refer to. You can't put that genie back in the bottle once it is out...
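As a toy sketch (my own illustration, not code from anyone in this thread), a "neural net that acts like a brain" is, at its simplest, just layers of weighted sums passed through a squashing function. Nothing in the arithmetic hints at a "watcher", which is exactly the puzzle:

```python
import math
import random

random.seed(0)  # make the random weights reproducible

def sigmoid(x):
    # Squashing nonlinearity applied at each artificial neuron
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each output neuron is a weighted sum of the inputs, squashed
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

# Tiny 2-layer network with random weights: 3 inputs -> 4 hidden -> 1 output
hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
out_w = [[random.uniform(-1, 1) for _ in range(4)]]

def forward(inputs):
    # Run the inputs through both layers and return the single output
    return layer(layer(inputs, hidden_w), out_w)[0]

print(forward([0.5, -0.2, 0.9]))  # a single number between 0 and 1
```

However many layers you stack, the network remains a deterministic function from inputs to outputs; the open question in the post above is what, if anything, would ever make such a function self-aware.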
legendary
Activity: 2702
Merit: 1468
October 01, 2018, 07:20:27 AM
#2
The technological singularity: how far are we from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029, according to Ray Kurzweil?

What are your guys' thoughts?

I think progress in AI will not be linear, but 2029 might be too optimistic.

Quantum computing will play a role in AI achieving supremacy over humans.  

Humans are bad at processing large sets of data.

People who are skeptical about AI do not understand AI.

copper member
Activity: 224
Merit: 14
October 01, 2018, 04:19:20 AM
#1
The technological singularity: how far are we from the 2045 date when this will all come together into the next leap in civilisation, or from computers being as smart as humans by 2029, according to Ray Kurzweil?

What are your guys' thoughts?
Jump to: