Hey, bud, yes! We need more clarification on
http://neuroethics.com/, as you suggested to us below. Also, I'm curious as to why the link no longer works, or was that a PM link via Rassah?
Does the link no longer work for you? It should be up at
www.neuroethics.com.
Below are the abstracts of two talks given by David Pearce. The ideas discussed do not require any religious or political affiliation (e.g. they are fully compatible with atheism). Some of the terms are technical -- "zombies" is used in the philosophical sense
(https://en.wikipedia.org/wiki/Philosophical_zombie) rather than the undead sense.
Are any other transhumanist charities accepting bitcoin? It seems to be a natural fit.
Will Humanity's Successors Also Be Our Descendants?
ABSTRACT
Accelerating technological progress leads some futurists to predict the imminent end of the human era and the dawn of posthuman superintelligence. But what is superintelligence? How does intelligence relate to sentience? What are the Explanatory Gap, Moravec's Paradox, and the Binding Problem? Will nonbiological machines ever be more than zombies?
This talk explores three different scenarios for the major evolutionary transition in prospect.
In the first scenario, biological humans will rewrite our genetic source code, recursively self-edit our own minds, and bootstrap our way to full-spectrum superintelligence. Mastery of our reward circuitry will deliver life based on information-sensitive gradients of bliss.
In the second, Kurzweilian scenario, cybernetic brain implants will enable humans to fuse our minds with artificial intelligence; and also allow humans to scan, digitise and "upload" ourselves into a less perishable substrate. In digital nirvana, the distinction between biological and nonbiological machines will effectively disappear.
In the third scenario, most closely associated with mathematician I.J. Good and The Singularity Institute, a combination of Moore's Law and the advent of recursively self-improving software-based minds will culminate in an ultra-rapid Intelligence Explosion and an era of nonbiological superintelligence. Posthuman superintelligence may or may not be human-friendly.
How strong is the supporting evidence for each of these prophecies?
Singularity Hypotheses:
The BioIntelligence Explosion
ABSTRACT
Genetic change in biological humans is slow. Progress in digital computing is fast. Software run on serial, programmable digital computers is executed exponentially faster (cf. Moore's Law); it's copyable without limit; it runs on multiple substrates; and it can be rapidly edited, tested and debugged. Singularitarians like Ray Kurzweil or SIAI's Eliezer Yudkowsky prophesy that human programmers will soon be redundant because AI run on digital computers will undergo accelerating cycles of self-improvement (cf. Kurzweil's Law of Accelerating Returns). Artificial, greater-than-human nonbiological intelligence will swiftly be succeeded by artificial posthuman superintelligence.
This talk examines, and then discounts, the prospect of a "robot rebellion". Biological humanity is on the brink of a Biointelligence Explosion. Humans are poised to exploit "narrow" or "weak" AI to enhance our own code in a positive feedback loop of mutual enhancement. Starting with individual genes, then clusters of genes, and eventually hundreds of genes and alternative splice variants, a host of recursively self-improving organic robots ("biohackers") will modify their own source code and modes of sentience: their senses, their moods, their motivation, their world-simulations, their cognitive apparatus and their default state of consciousness. As the era of open-source genetics unfolds, tomorrow's biohackers will use high-level gene editing tools, insertion vector applications, nonviral gene-editing kits, and user-friendly interfaces to add, delete, edit and customize their own legacy code in a positive feedback loop of cognitive and emotional enhancement. Recursively self-improving biological humans are going to bootstrap their way to full-spectrum superintelligence - and indescribable bliss.