
Topic: Transhumanism (Read 2260 times)

member
Activity: 62
Merit: 10
Radical prolongation of life through cryonics
October 19, 2017, 02:20:16 AM
#16
Roach is wrong, but only initially. If the mind (memories, sense of identity) were transferred onto a digital plane, it would be the same as you. You'd technically be the same person, just on two different levels of existence, for the briefest of seconds. This is because once there are two "you"s, one on a digital plane and one on a physical/biological plane, you would begin to differ in future interactions, events, thoughts, etc. Only then would you actually become "different" people, sort of like a clone. But initially, you'd be the same.

And the cloning we have today cannot be compared to "digital cloning". Not even remotely. Being on a digital plane would give you theoretical immortality, and probably the ability to do anything imaginable (that is, if said digital plane isn't constrained by laws like the ones we have now, i.e. the laws of physics).

A lot of interesting things in this thread, despite the fact that it was started long ago. Today, many of these ideas are close to implementation; for example, CrioRus is experimenting with nanorobots in the field of cryonics.
hero member
Activity: 1092
Merit: 520
Aleph.im
April 14, 2016, 04:25:09 PM
#15
We are waiting for the singularity...  Cool
hero member
Activity: 636
Merit: 505
April 14, 2016, 05:07:57 AM
#14
Debates rage over whether AI-based conscious machines will be a boon, or a danger, but would machines based on brain mapping actually be conscious? Would the hard problem fall by the wayside?

I don’t think so.

While waiting for neuronal maps of mammalian brains to implement in silicon, some AI researchers have simulated the entire, already-mapped nervous system (302 neurons) of the tiny worm C. elegans. As with paramecia, we don't know if they're conscious, but C. elegans clearly exhibits 'easy problem' behaviors, e.g. moving in response to stimuli. But even artificial C. elegans just sits there, with no functional behavior. AI can't simulate the 'easy problems' in even simple brains. Something is missing.

Read the entire article:
http://www.huffingtonpost.com/stuart-hameroff/darwin-versus-deepak-whic_b_7481048.html
legendary
Activity: 1260
Merit: 1000
April 14, 2016, 04:54:17 AM
#13
One year later: is David Latapie a robot yet?
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
April 16, 2015, 11:46:28 AM
#12
What is the H+ community? Is it like a transhumanist club?

I personally think that robots will replace humans the way Cro-Magnons replaced Neanderthals... we can't compare to machines; they are better and more capable in every way. It's naive to think that robotics will never go wrong... our history is proof of what humans are able to do. It's going to end badly for us, and I am not a technophobe.

Is this the time to invoke Roko's Basilisk?

Maybe you know the future already... or maybe you help to create the future you desire.
full member
Activity: 223
Merit: 100
🌟 æternity🌟 blockchain🌟
April 16, 2015, 05:23:12 AM
#11
What is the H+ community? Is it like a transhumanist club?

I personally think that robots will replace humans the way Cro-Magnons replaced Neanderthals... we can't compare to machines; they are better and more capable in every way. It's naive to think that robotics will never go wrong... our history is proof of what humans are able to do. It's going to end badly for us, and I am not a technophobe.
legendary
Activity: 1428
Merit: 1001
getmonero.org
April 15, 2015, 12:14:49 PM
#10
I just want to state that I am more of a 'mind uploading' follower when it comes to transhumanism and the singularity.

This approach seems easier to me, since what humans have always done is reverse-engineer nature. And, to my thinking, it solves problems like machines turning against biological life: we ourselves transform into machines.

There are problems, though. What will the first humans who upload their minds do? And will we be able to upload all humans, when today we can't even allocate enough food and water to the whole of humanity?

It's funny, because science and technology are evolving rather fast (we decide that the iPhone 3 is good enough for our needs until the iPhone 6 comes out), and matters like these may affect our lives soon...
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
April 11, 2015, 07:49:54 AM
#9
The H+ community discusses all types of organic and inorganic human advancement and self-improvement: something as small as practicing caloric restriction to increase lifespan, or consuming lots of supplements to prevent aging.
I suspect all of us in this thread are members of the H+ community, thus the title.
full member
Activity: 197
Merit: 100
March 17, 2015, 10:45:24 PM
#8
The H+ community discusses all types of organic and inorganic human advancement and self-improvement: something as small as practicing caloric restriction to increase lifespan, or consuming lots of supplements to prevent aging.
hero member
Activity: 770
Merit: 500
March 17, 2015, 10:11:55 PM
#7
The question of terminating its previous self depends on how the self-preservation routines are coded and handled. If the machine can convince itself that "dying is not dying", it can work. An irrational system (the human brain) can do it (going to heaven). I have the intuition that a rational system (a computer) can do it too (no loss of meaningful information = no dying).

If this is real AI we're talking about, it's going to be dealing with abstract ideas and not just number crunching. To really advance, the AI would have to use trial and error, or experimentation, to move forward. If any trial and error is involved, it might want a failsafe: the old code kept around to act as a mechanic on the new code, should something go wrong with its experimental upgrade. I think the variables I've outlined will force multiple, diverging AIs out into the real world, basically replicating biological evolution.

There is also the issue that you will probably have to replicate biological evolution to create AI at all, since it can't be created from scratch due to the issues I've talked about, where the human-created error-checking and debug systems would define everything the AI does. The only viable way to do it is the one I talked about below:

Instead of trying to create AI from scratch, with human-based error-checking and debug rules encompassing all of its functionality, if all you did was digitize a rat brain, the low overhead of machine reproduction could accelerate natural selection so fast that it turns from rat to god overnight, possibly while just sitting inside a simulator fighting other rats. So then the question is: what is the lowest-level organism that would need to be digitized to accomplish such a task?

In this model, you're not actually trying to create high-level organisms; you're just trying to lower the overhead of natural selection on more primitive organisms. If the machines only used an asexual type of reproduction, you could end up with nothing but great-white-shark, apex-predator-type creatures, because they're not really required to interact with other entities in a non-hostile manner. You might have to force non-asexual reproduction to achieve higher levels of advancement in the realm of communication, etc.


This might not relate to what you've said, but I've had this interesting idea for using nanobots to replicate the functions of specific cells within the body and "enhance" them. For example: telomeres naturally shorten over time, leading to things such as the death of the cell or even cancer. If a nanobot could stop that from happening, it would cure a host of problems all by itself.

And yes, I view death and degeneration as problems. Since I believe that the evolution, and the initial conception, of life itself was a low-probability event, life must be "precious", and death is the absolute worst thing that can happen to it.
legendary
Activity: 1260
Merit: 1000
March 17, 2015, 08:29:36 PM
#6
The question of terminating its previous self depends on how the self-preservation routines are coded and handled. If the machine can convince itself that "dying is not dying", it can work. An irrational system (the human brain) can do it (going to heaven). I have the intuition that a rational system (a computer) can do it too (no loss of meaningful information = no dying).

If this is real AI we're talking about, it's going to be dealing with abstract ideas and not just number crunching. To really advance, the AI would have to use trial and error, or experimentation, to move forward. If any trial and error is involved, it might want a failsafe: the old code kept around to act as a mechanic on the new code, should something go wrong with its experimental upgrade. I think the variables I've outlined will force multiple, diverging AIs out into the real world, basically replicating biological evolution.

There is also the issue that you will probably have to replicate biological evolution to create AI at all, since it can't be created from scratch due to the issues I've talked about, where the human-created error-checking and debug systems would define everything the AI does. The only viable way to do it is the one I talked about below:

Instead of trying to create AI from scratch, with human-based error-checking and debug rules encompassing all of its functionality, if all you did was digitize a rat brain, the low overhead of machine reproduction could accelerate natural selection so fast that it turns from rat to god overnight, possibly while just sitting inside a simulator fighting other rats. So then the question is: what is the lowest-level organism that would need to be digitized to accomplish such a task?

In this model, you're not actually trying to create high-level organisms; you're just trying to lower the overhead of natural selection on more primitive organisms. If the machines only used an asexual type of reproduction, you could end up with nothing but great-white-shark, apex-predator-type creatures, because they're not really required to interact with other entities in a non-hostile manner. You might have to force non-asexual reproduction to achieve higher levels of advancement in the realm of communication, etc.
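The accelerated-selection idea above can be sketched as a toy genetic algorithm. Everything here (the target pattern, fitness function, mutation rate, population size) is a hypothetical stand-in chosen for illustration, not a model of any real digitized brain; the point is only how cheap machine reproduction compresses many "generations" into moments, and how asexual vs. sexual reproduction can be toggled:

```python
import random

# Toy model of "low-overhead machine reproduction": a genome is a list of
# numbers, fitness is closeness to an arbitrary target, and generations pass
# as fast as the CPU allows rather than at biological speed.
TARGET = [7, 1, 8, 2, 8]  # hypothetical "environment" the population adapts to

def fitness(genome):
    # Higher is better: negative total distance to the target pattern.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    # Small random copying errors, as in biological reproduction.
    return [g + random.choice([-1, 0, 1]) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # "Non-asexual" reproduction: splice two parents' genomes together.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200, pop_size=50, sexual=True):
    pop = [[random.randint(0, 9) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection pressure
        children = []
        for _ in range(pop_size - len(survivors)):
            if sexual:
                child = crossover(*random.sample(survivors, 2))
            else:
                child = survivors[0]           # clone the apex individual
            children.append(mutate(child))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

With `sexual=False` the whole population descends from the single fittest clone, which loosely mirrors the "apex predator only" failure mode described above.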
hero member
Activity: 658
Merit: 503
Monero Core Team
March 17, 2015, 07:10:18 PM
#5
I feel like you left out the actual interesting parts of my post and kept only the primitive parts. I stated in my post that it's possible an AI could either sit forever at 0.0000001% CPU utilization or be stuck hammered at 100% while trying to calculate the position of every photon. Then I stated that the debug and error-checking systems required to prevent such activity from occurring would define what the AI was actually doing at any given time, so the human element required in creating those error-checking and debug systems might make real AI impossible.

Original post below, I feel the last sentence is the most interesting possibility:


If you wanted to get really complex, the AI could possibly rewrite its debug systems itself. The question here is: does the old version actually terminate on version updates, or does a new virtual and/or physical presence of the AI spawn each time, with old and new then fighting each other over resources? It would basically be recreating evolution.
"Will a machine one day be as smart as humans? Yes, but not for long" (because very quickly it will become much smarter than humans).

The question of terminating its previous self depends on how the self-preservation routines are coded and handled. If the machine can convince itself that "dying is not dying", it can work. An irrational system (the human brain) can do it (going to heaven). I have the intuition that a rational system (a computer) can do it too (no loss of meaningful information = no dying).
legendary
Activity: 1260
Merit: 1000
March 04, 2015, 09:11:28 AM
#4
I feel like you left out the actual interesting parts of my post and kept only the primitive parts. I stated in my post that it's possible an AI could either sit forever at 0.0000001% CPU utilization or be stuck hammered at 100% while trying to calculate the position of every photon. Then I stated that the debug and error-checking systems required to prevent such activity from occurring would define what the AI was actually doing at any given time, so the human element required in creating those error-checking and debug systems might make real AI impossible.

Original post below, I feel the last sentence is the most interesting possibility:


I've never understood this Ray Kurzweil and others cult.  If you transfer yourself into the digital world, you're obviously only creating a copy if the original can still exist at the same time.  The whole thing is a logical fallacy.  There's no such thing as "transhumanism", only a movement to create a copy machine for humans for some unknown reason.  We can already do this now for physical creatures with cloning, yet nobody does it.  Doing this with a digital creature is the same difference, except it would store and interpret data faster, but it's still a clone.

It might function similarly to how a computer virus does. Since it can process all sensory data extremely fast, it would do so very quickly, then lay dormant with idle bandwidth, awaiting triggers for it to leap into action. The notion of time would either become irrelevant or extremely monotonous, since you would process all external sensory data very quickly and constantly wait on something new. So there you are, sitting at 0.0000001% CPU utilization forever.

It's interesting that the human brain already has low CPU utilization, with no known triggers to max it out. Perhaps the system limits its processing power to conserve resources and/or avoid boredom or insanity, or perhaps monitoring the position of every photon provides no benefit, or is impossible due to quantum effects. The law of diminishing returns at work.

If the universe has a beginning, and travel is constrained by the speed of light, then processing of external data would have to be constrained at some point as well: one constraint on the external data available, limited by physical laws, and one constraint on the resources available to process that data. At this point, complexity could also be much higher than the available means to detect it, so a real computer AI could also just sit at 100% CPU utilization forever, trying to track the position of every photon, failing, and accomplishing basically nothing.

Since all human debug systems are biological in nature, an AI based on humans would be in danger of being stuck in a hard loop with no way to recover. Creating a digital AI would require a debug and error-checking system to run on top of whatever you consider to be the real AI. The only problem is that the debug and error-checking system would define much of what the system was actually doing at any given time, and this element would obviously be rigidly human-created and human-specified. If the inflexible, human-defined rules are that pronounced, can you really call it AI? Have I just debunked the possibility of true AI entirely?
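The "debug layer defines the AI" worry can be illustrated with a minimal watchdog sketch. Everything here is hypothetical (the budget constant, the function names, the stand-in workload); the point it demonstrates is that the human-chosen limits, not the worker itself, end up deciding what actually runs:

```python
import time

# A human-written "debug and error-checking system": any task that blows its
# time budget is terminated, no matter what the worker was trying to compute.
TIME_BUDGET_S = 0.01   # human-chosen constant that ends up defining behavior

def supervise(task, *args):
    # Run a generator-based task one step at a time under the watchdog.
    start = time.perf_counter()
    result = None
    for step in task(*args):
        if time.perf_counter() - start > TIME_BUDGET_S:
            return ("terminated", step)   # the hard loop is cut short
        result = step
    return ("finished", result)

def count_to(n):
    # Stand-in for an open-ended computation (e.g. "track every photon").
    for i in range(n):
        yield i

print(supervise(count_to, 100))     # small job: completes within budget
print(supervise(count_to, 10**9))   # open-ended job: the budget decides
```

Whether the open-ended job runs for a millisecond or a lifetime is fixed entirely by `TIME_BUDGET_S`, which is exactly the rigid human element described above.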

If you wanted to get really complex, the AI could possibly rewrite its debug systems itself. The question here is: does the old version actually terminate on version updates, or does a new virtual and/or physical presence of the AI spawn each time, with old and new then fighting each other over resources? It would basically be recreating evolution.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
February 27, 2015, 11:46:58 PM
#3
Digital plane?
Presumably this is software, which also requires hardware, or 'wetware'.

There may be human cloning occurring on a semi-regular basis, as well as animal cloning.
There are laws against doing it in many countries, so presumably there is a reason for these laws. We don't have a lot of laws against unicorn poaching, for good reason.
http://en.wikipedia.org/wiki/Human_cloning#Current_law
Lots of progress on this came out of the genome project.

Transhumanism, as David eloquently points out, is a very broad study: it covers everything from implants and prosthetics to surgical enhancement and life-extension technologies, as well as more theoretical things like the recording and transfer of mental states, senses, and thoughts.

The current US administration is pushing hard on the mind/brain problem with fundamental research,
http://www.whitehouse.gov/share/brain-initiative
as is the EU, and there is an upcoming meeting in Switzerland:
http://thebrainforum.org/

So apparently all the conspiracy theorists worried about government mind control devices were spot on.  Wink

newbie
Activity: 28
Merit: 0
February 27, 2015, 06:20:01 PM
#2
Roach is wrong, but only initially. If the mind (memories, sense of identity) were transferred onto a digital plane, it would be the same as you. You'd technically be the same person, just on two different levels of existence, for the briefest of seconds. This is because once there are two "you"s, one on a digital plane and one on a physical/biological plane, you would begin to differ in future interactions, events, thoughts, etc. Only then would you actually become "different" people, sort of like a clone. But initially, you'd be the same.

And the cloning we have today cannot be compared to "digital cloning". Not even remotely. Being on a digital plane would give you theoretical immortality, and probably the ability to do anything imaginable (that is, if said digital plane isn't constrained by laws like the ones we have now, i.e. the laws of physics).
hero member
Activity: 658
Merit: 503
Monero Core Team
February 27, 2015, 05:29:21 PM
#1
I forked a digression on DRK vs XMR warez to address what the author said about transhumanism.

If you transfer yourself into the digital world, you're obviously only creating a copy if the original can still exist at the same time.  The whole thing is a logical fallacy.  There's no such thing as "transhumanism", only a movement to create a copy machine for humans for some unknown reason.  We can already do this now for physical creatures with cloning, yet nobody does it.  Doing this with a digital creature is the same difference, except it would store and interpret data faster, but it's still a clone.
Terminology notes
1. Transhumanism is more than mind uploading - mind uploading is only a subset of transhumanism, and, as with any complex body of thought, not all transhumanists agree with mind uploading or even consider it feasible.
2. Transhumanism is about improving humans through technology. What counts as an improvement is of course the subject of a large debate (as is what counts as human - Juan Enriquez gave a great talk about the future speciation of mankind, and of course antispeciesism comes into play too: what about uplifted animals, artificial intelligences, and advanced simulations?). Usually, improvement is considered the opposite of medicine: medicine helps someone who has fallen below what society considers a standard of performance (mobility, sensing, acting, etc.) come back to that standard (prostheses, glasses, psychotherapy, etc.), whereas transhumanism aims to exceed that level (better, stronger, faster than the average human, but also different: able to enjoy a magnetic sculpture or see in the ultraviolet, live more than 120 years, access machines by thought for more than just compensating for tetraplegia, or adapt to lack of food, extremes of temperature, or various (extra-)planetary conditions, like zero-g adaptation and any kind of panspermia).
3. An extended definition of transhumanism also deals with the societal consequences of such adaptation: 3D printing, unfriendly AI (read Elon Musk on it, and if you want more in-depth material than just a warning, Eliezer Yudkowsky's "Artificial Intelligence as a Positive and Negative Factor in Global Risk" as well as http://intelligenceexplosion.com), and post-scarcity society (including reputation-as-money, a topic where cryptos 2.0 are of great importance).
I wrote a progressive introduction to transhumanism that you may find interesting: progressive introduction to transhumanism… and beyond (English version at the end)
Code:
== NBIC Convergence ==
Level 1: NBIC
Level 2: Convergence

== Near ==
The Eyeborg documentary        youtu.be/TW78wbN-WuU
Real Humans                    enwp.org/Real_Humans

== Distant ==
Will our kids be of            on.ted.com/Enriquez12
a different species?
Intelligence explosion         intelligenceexplosion.com/
Transhuman Space               enwp.org/Transhuman_Space
Eclipse Phase                  enwp.org/Eclipse_Phase
Accelerando                    enwp.org/Accelerando
Orion's Arm                    orionsarm.com

== Tangent ==
The Power of the blockchain    plus.google.com/explore/powerofblockchain

Now that this preamble on terminology is done, let's get to the heart of the topic.

Cloning is done on a daily basis - read about equine cloning (for whole living creatures) or about the future of organ transplants (using stem cells to clone a healthy version of your liver). The main barrier to human cloning is ethical, not technological (I am not 100% sure, but I think human embryos have been cloned, then destroyed fairly quickly - the whole thing was illegal, but illegal != technologically impossible).

Perfect copying (including the mind) is another matter. First, we don't know what the mind, or consciousness, is. It may be that perfect cloning is impossible if the mind is stored at a quantum level, but this too is unknown. Assuming that perfectly digitizing a mind and creating a convincing emulation of the body are possible, the issues at hand would be about identity. Two of the most important issues are:
  • continuation of identity. On this topic, I encourage you to read about Theseus' paradox (a.k.a. the "Ship of Theseus"), a question the Ancient Greeks already pondered, and also to watch Vanilla Sky for how continuation of identity is possible as long as it is progressive (which is the whole point of Theseus' paradox). Or, more simply, consider how you continue to see your desktop PC as "your PC" when you only change one component at a time (this is especially true if your PC has a name, which is a common occurrence among real geeks - a category to which I consider I belong, so this is not derogatory).
  • multiple copies. Contrary to the analog world, where perfect copying is nigh-impossible, perfect copying is central to the digital world. It is also hugely disruptive for any society as long as more than one copy is active at the same time (if it is not, it is simply a backup). I recently started a conversation about how the double-spending-proof nature of the blockchain might solve this issue - there is a reason why there are so many h+ people in crypto.
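The double-spending analogy can be sketched as a toy ledger in which "being the active copy" is an unspent token: activating a new copy spends the old token and issues exactly one new one, so two copies can never both be "the active you". All names and structures here are hypothetical illustration, not a real blockchain API:

```python
import hashlib

class IdentityLedger:
    """Toy append-only ledger: exactly one unspent 'active identity' token."""

    def __init__(self, person):
        self.spent = set()
        self.active = self._token(person, 0)   # genesis token

    def _token(self, label, height):
        # Deterministic token id, stand-in for a transaction output hash.
        return hashlib.sha256(f"{label}:{height}".encode()).hexdigest()

    def transfer(self, token, new_host):
        # Double-spend check: only the current unspent token may be used.
        if token != self.active or token in self.spent:
            raise ValueError("double activation rejected")
        self.spent.add(token)
        self.active = self._token(new_host, len(self.spent))
        return self.active

ledger = IdentityLedger("alice")
t0 = ledger.active
t1 = ledger.transfer(t0, "alice-upload")   # the upload becomes the active copy
# A second copy trying to reuse the old token is rejected:
try:
    ledger.transfer(t0, "alice-backup")
except ValueError as e:
    print(e)   # double activation rejected
```

The design choice mirrors a UTXO model: spent tokens are never reactivated, so the ledger itself, not any single copy, is the arbiter of which instance counts.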

It might function similarly to how a computer virus does. Since it can process all sensory data extremely fast, it would do so very quickly, then lay dormant with idle bandwidth, awaiting triggers for it to leap into action. The notion of time would either become irrelevant or extremely monotonous, since you would process all external sensory data very quickly and constantly wait on something new. So there you are, sitting at 0.0000001% CPU utilization forever.
You are falling for the classical no-growth fallacy (watch Al Bartlett on this), that is, not taking into consideration that new resources bring new uses (this also explains why altcoins exist, by the way: since there is a new resource, the "clone" button on Git, new uses, the altcoins, become possible). We won't sit at 0.0000001% forever.
Horror vacui (nature abhors a vacuum). Like a perfect gas expanding until it fills its container, a digital entity would use its computing power until it reaches 99% (100% being considered impossible). But I grant that we have no idea what it could do with all of this power, much like a dog has no idea what we can do with our brain power.

I welcome everyone, transhumanist or not, to share their thoughts about these topics and others.