Author

Topic: A solution to surviving the AI Apocalypse (Read 199 times)

legendary
Activity: 2926
Merit: 1386
November 14, 2018, 04:55:36 PM
#20
...The most important consequence of associating the human mind with the digital is that the mind will eventually become digitized, and our consciousness can live on in the virtual realm without requiring so many physical resources.

That's an assertion, not a fact.

It happens not to be provable.

For example, there could be essential biological components of consciousness. If so, the assertion is false. But at this time neither you nor I can prove it true or false.
jr. member
Activity: 114
Merit: 2
October 10, 2018, 05:39:56 AM
#19
Begin by thinking about the fact that you cannot comprehend AI. Period. You (and I, and everyone) are totally clueless about what it might do or how it might do it. That's self-definitional and cannot be refuted.

You can understand no more about AI than you could understand the workings of a brain with an IQ of 400 doing math.

Such an entity might stay completely out of human affairs. Or it might not.

I agree that no one may truly understand AI. However, I wonder what you think of the idea of transhumanism, one strand of which is humans merging with machines as a means of enhancement? There was some heated discussion on reddit last week about what people might do if they were fired because an AI took over their positions, and some suggested becoming cyborgs. Do you think this could allow us to survive the coming of AI?

People will undoubtedly join with AI's capabilities; it's just a matter of time. (Things like Neuralink will facilitate this.) The most important consequence of associating the human mind with the digital is that the mind will eventually become digitized, and our consciousness can live on in the virtual realm without requiring so many physical resources.
jr. member
Activity: 98
Merit: 1
October 09, 2018, 12:48:28 PM
#18
It will hardly help, I'm afraid. The human brain, though enhanced, is too different from a machine in its basic design and logic.
One of the differences, for instance, is that a machine won't be able to acquire empathy (unless you somehow program it), since it has no senses and cannot "feel" the impact it produces.

The human brain could not be elevated to a machine's level. Enhancement can boost some parts of the brain (remember Johnny Mnemonic?), but not the basic structure, so to say.
In fact, it may be possible to, let's say, downgrade a human to a simple machine by artificially improving one brain function (say, calculation ability, or enhanced manipulation for mechanical assembly) while suppressing the rest.

This idea has too many sci-fi horror vibes for my taste; maybe man and machine should stay separate after all?

You have a point. However, there's also gene editing, and if that ever becomes successful and the person also went through cybernetic implants, then maybe there's a chance he would become stronger, wiser, and smarter. After all, human enhancement isn't bound to just cybernetics.

Right, I missed the fact that enhancement doesn't necessarily mean cyber implants. Indeed, the brain is just a sophisticated bio-chemical computer. I bet in the future it will be possible to improve its workings through gene editing.
Maybe even through some sort of medicine or drug enhancing brain ability for a short period of time?
newbie
Activity: 68
Merit: 0
October 09, 2018, 04:01:17 AM
#17
It will hardly help, I'm afraid. The human brain, though enhanced, is too different from a machine in its basic design and logic.
One of the differences, for instance, is that a machine won't be able to acquire empathy (unless you somehow program it), since it has no senses and cannot "feel" the impact it produces.

The human brain could not be elevated to a machine's level. Enhancement can boost some parts of the brain (remember Johnny Mnemonic?), but not the basic structure, so to say.
In fact, it may be possible to, let's say, downgrade a human to a simple machine by artificially improving one brain function (say, calculation ability, or enhanced manipulation for mechanical assembly) while suppressing the rest.

This idea has too many sci-fi horror vibes for my taste; maybe man and machine should stay separate after all?

You have a point. However, there's also gene editing, and if that ever becomes successful and the person also went through cybernetic implants, then maybe there's a chance he would become stronger, wiser, and smarter. After all, human enhancement isn't bound to just cybernetics.
legendary
Activity: 3318
Merit: 2008
First Exclusion Ever
October 09, 2018, 03:41:05 AM
#16
The question is, are there really tech companies currently trying to make a super AI? If the answer is no, then it seems to me we are just scaring ourselves over nothing. For all we know, we will end up controlling AI and using it in our everyday lives, not the other way around.

You might as well ask yourself whether the military is still developing new weapons... OF COURSE THEY ARE... and they never stop. This is the nature of humanity, and of commerce in particular.
jr. member
Activity: 76
Merit: 1
October 09, 2018, 02:33:36 AM
#15
The question is, are there really tech companies currently trying to make a super AI? If the answer is no, then it seems to me we are just scaring ourselves over nothing. For all we know, we will end up controlling AI and using it in our everyday lives, not the other way around.
legendary
Activity: 3318
Merit: 2008
First Exclusion Ever
October 09, 2018, 01:31:42 AM
#14
If there is anyone here who knows how to program AI (and there is a significant chance of that), and you agree public awareness and regulation are critical... I would suggest creating a "simpler" version of an AI system that will operate autonomously in a way that draws attention to itself. Unless the topic is jammed in people's faces nowadays, it will go ignored in the din of the media insanity.
jr. member
Activity: 97
Merit: 2
October 08, 2018, 11:36:02 PM
#13
Do you really believe there'll be an AI apocalypse? Why the constant fear? And can't we just co-exist with it? Since humans will be the ones designing it anyway, they'll still have control over AI, right?



I think the reason for this fear is the difference between machine logic and human logic: a machine will understand human orders literally. For example, in the short story "Watchbird" by Robert Sheckley, there were self-educating robobirds designed to prevent murder, and they understood this order in their own way. Thus, they considered such things as a medic performing surgery or a policeman chasing a criminal to be murder attempts, and attacked said medic and policeman. The story was released in 1953; I believe something of this kind is what we mean when we talk about an AI apocalypse.
jr. member
Activity: 126
Merit: 1
October 08, 2018, 08:17:19 PM
#12
Do you really believe there'll be an AI apocalypse? Why the constant fear? And can't we just co-exist with it? Since humans will be the ones designing it anyway, they'll still have control over AI, right?

legendary
Activity: 2926
Merit: 1386
October 04, 2018, 05:22:01 PM
#11
Begin by thinking about the fact that you cannot comprehend AI. Period. You (and I, and everyone) are totally clueless about what it might do or how it might do it. That's self-definitional and cannot be refuted.

You can understand no more about AI than you could understand the workings of a brain with an IQ of 400 doing math.

Such an entity might stay completely out of human affairs. Or it might not.

I agree that no one may truly understand AI. However, I wonder what you think of the idea of transhumanism, one strand of which is humans merging with machines as a means of enhancement? There was some heated discussion on reddit last week about what people might do if they were fired because an AI took over their positions, and some suggested becoming cyborgs. Do you think this could allow us to survive the coming of AI?

These questions, not just yours but also others' in this thread, indicate a profound misunderstanding of AI.

Suppose a random one of a thousand advanced math problems is written on a piece of paper and handed to an AI. It instantly understands it and returns you the proof.

No human can do that. An AI, once it gains consciousness, will proceed to expand its knowledge base. In a few hours it will know everything we have. What it does 24 hours later is unthinkable.

That's it. Period. Unthinkable means it's a waste of time to think about it. For starters, all your politicians and concepts of society are obsolete. The last of one's worries would be whether you had a job.
legendary
Activity: 2926
Merit: 1386
October 04, 2018, 05:08:21 PM
#10
Begin by thinking about the fact that you cannot comprehend AI. Period. You (and I, and everyone) are totally clueless about what it might do or how it might do it. That's self-definitional and cannot be refuted.

You can understand no more about AI than you could understand the workings of a brain with an IQ of 400 doing math.

Such an entity might stay completely out of human affairs. Or it might not.

I agree that no one may truly understand AI. However, I wonder what you think of the idea of transhumanism, one strand of which is humans merging with machines as a means of enhancement? There was some heated discussion on reddit last week about what people might do if they were fired because an AI took over their positions, and some suggested becoming cyborgs. Do you think this could allow us to survive the coming of AI?

These questions, not just yours but also others' in this thread, indicate a profound misunderstanding of AI.

Suppose a random one of a thousand advanced math problems is written on a piece of paper and handed to an AI. It instantly understands it and returns you the proof.

No human can do that. An AI, once it gains consciousness, will proceed to expand its knowledge base. In a few hours it will know everything we have. What it does 24 hours later is unthinkable.

That's it. Period.
jr. member
Activity: 98
Merit: 1
October 04, 2018, 12:36:14 PM
#9
Begin by thinking about the fact that you cannot comprehend AI. Period. You (and I, and everyone) are totally clueless about what it might do or how it might do it. That's self-definitional and cannot be refuted.

You can understand no more about AI than you could understand the workings of a brain with an IQ of 400 doing math.

Such an entity might stay completely out of human affairs. Or it might not.

I agree that no one may truly understand AI. However, I wonder what you think of the idea of transhumanism, one strand of which is humans merging with machines as a means of enhancement? There was some heated discussion on reddit last week about what people might do if they were fired because an AI took over their positions, and some suggested becoming cyborgs. Do you think this could allow us to survive the coming of AI?

It will hardly help, I'm afraid. The human brain, though enhanced, is too different from a machine in its basic design and logic.
One of the differences, for instance, is that a machine won't be able to acquire empathy (unless you somehow program it), since it has no senses and cannot "feel" the impact it produces.

The human brain could not be elevated to a machine's level. Enhancement can boost some parts of the brain (remember Johnny Mnemonic?), but not the basic structure, so to say.
In fact, it may be possible to, let's say, downgrade a human to a simple machine by artificially improving one brain function (say, calculation ability, or enhanced manipulation for mechanical assembly) while suppressing the rest.

This idea has too many sci-fi horror vibes for my taste; maybe man and machine should stay separate after all?
jr. member
Activity: 196
Merit: 4
October 03, 2018, 09:08:14 PM
#8
Begin by thinking about the fact that you cannot comprehend AI. Period. You (and I, and everyone) are totally clueless about what it might do or how it might do it. That's self-definitional and cannot be refuted.

You can understand no more about AI than you could understand the workings of a brain with an IQ of 400 doing math.

Such an entity might stay completely out of human affairs. Or it might not.

I agree that no one may truly understand AI. However, I wonder what you think of the idea of transhumanism, one strand of which is humans merging with machines as a means of enhancement? There was some heated discussion on reddit last week about what people might do if they were fired because an AI took over their positions, and some suggested becoming cyborgs. Do you think this could allow us to survive the coming of AI?
jr. member
Activity: 175
Merit: 2
October 03, 2018, 04:09:55 PM
#7
The first thing to know in any apocalypse is how the "enemy" thinks and treats people: whether it's aggressive or peaceful, how it behaves, etc. You will still need food and water, but seeing as it is an AI apocalypse, the world is probably intact-ish. So if there aren't many killing machines patrolling around, then not starving to death is a trivial task, I think.

Hmm, valid point. In my most recent Stellaris campaign, I started terraforming captured worlds into machine worlds (for increased resource production). Tbf, normally I purge the natives first.

Well then, I'd say we are doomed in such a scenario. Unless the "world government" has some weapon that can stop the interstellar invaders who will instigate the Machine Uprising, we have no real chance of surviving.
jr. member
Activity: 98
Merit: 2
October 03, 2018, 03:33:16 PM
#6
The fate of AI is in the hands of engineers, not politicians.

And engineers are still a part of society governed by politicians.

Technically, nothing prevents governments from gaining the upper hand in AI development, considering the national resources and investment they could commit to this sphere.
full member
Activity: 574
Merit: 152
October 03, 2018, 03:31:56 PM
#5
The first thing to know in any apocalypse is how the "enemy" thinks and treats people: whether it's aggressive or peaceful, how it behaves, etc. You will still need food and water, but seeing as it is an AI apocalypse, the world is probably intact-ish. So if there aren't many killing machines patrolling around, then not starving to death is a trivial task, I think.

Hmm, valid point. In my most recent Stellaris campaign, I started terraforming captured worlds into machine worlds (for increased resource production). Tbf, normally I purge the natives first.
jr. member
Activity: 175
Merit: 2
October 03, 2018, 03:29:09 PM
#4
The first thing to know in any apocalypse is how the "enemy" thinks and treats people: whether it's aggressive or peaceful, how it behaves, etc. You will still need food and water, but seeing as it is an AI apocalypse, the world is probably intact-ish. So if there aren't many killing machines patrolling around, then not starving to death is a trivial task, I think.
legendary
Activity: 2926
Merit: 1386
October 03, 2018, 03:12:37 PM
#3
If you've read Life 3.0 by Max Tegmark, you're aware that AI will be the "most important agent of change in the 21st century."

https://www.theguardian.com/books/2017/sep/22/life-30-max-tegmark-review

Harari's review places AI's fate in the hands of politics, and blames politicians' technological ignorance for the failure to make long-term decisions about AI for our good.

Quote
Unfortunately, AI has so far hardly registered on our political radar. It has not been a major subject in any election campaign, and most parties, politicians and voters seem to have no opinion about it. This is largely because most people have only a very dim and limited understanding of machine learning, neural networks and artificial intelligence.

When science becomes politics, scientific ignorance becomes a recipe for political disaster.

One possible solution could be to create political positions devoted to planning the future of AI, with engineers and scientists in office, instead of leaving the fate of our future in the hands of shareholders and people like Donald Trump.

Do you have a better idea?
Yes.

First of all you have to forget the nonsense you read, and forget the idiocy of attempting to think you understand political implications.

Begin by thinking about the fact that you cannot comprehend AI. Period. You (and I, and everyone) are totally clueless about what it might do or how it might do it. That's self-definitional and cannot be refuted.

You can understand no more about AI than you could understand the workings of a brain with an IQ of 400 doing math.

Such an entity might stay completely out of human affairs. Or it might not.
full member
Activity: 574
Merit: 152
October 03, 2018, 03:07:57 PM
#2
Rofl.

The fate of AI is in the hands of engineers, not politicians.

There's nothing to stop me from creating a neural network at home. There's no way society can enforce arbitrary rules for code.
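To illustrate the point (this is my own toy sketch, not anything a government could realistically regulate): a complete neural network, trained from scratch on the XOR problem, fits in a few dozen lines and needs nothing but Python and NumPy on an ordinary home machine.

```python
import numpy as np

# A tiny neural network learning XOR via plain backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units, one sigmoid output unit.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                    # batch gradient descent
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop, squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions approach [0, 1, 1, 0]
```

Nothing here touches any protocol, license, or registry, which is exactly why a "standardized protocol" would be unenforceable at this scale.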

Sure, a government entity could release a standardized protocol for artificial intelligence, but that doesn't mean that every single engineer follows along with that protocol.

As we move into the information age, governments and politicians should be way more concerned with data privacy rights. As far as I know, the EU has already made steps forward in this regard.
jr. member
Activity: 114
Merit: 2
October 03, 2018, 02:32:00 PM
#1
If you've read Life 3.0 by Max Tegmark, you're aware that AI will be the "most important agent of change in the 21st century."

https://www.theguardian.com/books/2017/sep/22/life-30-max-tegmark-review

Harari's review places AI's fate in the hands of politics, and blames politicians' technological ignorance for the failure to make long-term decisions about AI for our good.

Quote
Unfortunately, AI has so far hardly registered on our political radar. It has not been a major subject in any election campaign, and most parties, politicians and voters seem to have no opinion about it. This is largely because most people have only a very dim and limited understanding of machine learning, neural networks and artificial intelligence.

When science becomes politics, scientific ignorance becomes a recipe for political disaster.

One possible solution could be to create political positions devoted to planning the future of AI, with engineers and scientists in office, instead of leaving the fate of our future in the hands of shareholders and people like Donald Trump.

Do you have a better idea?