Author

Topic: Tech Apocalypse, AI, Tolkien, and Elon Musk (Read 641 times)

legendary
Activity: 1134
Merit: 1599
Very interesting analogy. I see AI possibly becoming a very big danger if these microchips are EMP-proof and built to be indestructible. In a world where AI wants to take control, I guess an EMP might be the only hope of salvation. The other side of the story is humans starting to use the chips to their own advantage. Since it is embedded inside your own brain, I assume it will collect data from it too. Musk said the chip will help with vision and other medical issues as well, so I understand it should be smart enough to know what you suffer from and come up with the right solution.

What is very interesting is the fact that in almost all movies and TV shows, the rich do not have microchips embedded in their heads, or, in the ones where they do, the chips are at some point used against them. Usually, though, it's the rich without them and the poor with them. Seen from another perspective, microchips are portrayed in most movies as the perfect weapon of control.

In the scenario where humans use the microchip to their own advantage, besides wielding it as a weapon of control there is also the possibility of using it to steal crucial information. For instance, cyber-thieves could kill someone for the microchip, scrape the data off it, and learn every account and password that user knew, or find out where their wealth was hidden. Likewise, intelligence agencies could profit off their adversaries by murdering one of them and reading the data off the chip to learn their plan, so that they could mount the perfect counter-attack.

A potentially even larger danger? If the chip ever gets smart enough to know exactly how the specific human it's embedded in thinks, the agency could then take the chip and feed it the information that the plan has been exposed and that a counter-measure is being prepared. The chip would respond by producing the thoughts its host would have produced, helping the agency become impossible to counter.

The upsides of microchips are nothing compared to what the downsides could be. We're weighing the possibility that humans with lost vision or paralysis fully recover against the possibility of an AI takeover.

Maybe we are going to reach a completely new and better level of humanity, but if we give these chips the ability to think on their own and put them into a brain or body, where they'd be connected to an energy source 24/7, we'd never know what truly goes on behind the scenes. In fact, there could be a very dark world self-produced by the chips, one hidden very well from us even as we interact with it.

So... how worth it will this whole AI thing really be?
full member
Activity: 130
Merit: 109
It's kind of ironic that the guy who founded Neuralink is the same guy who says he's concerned about AI. Perhaps he knows something about the way humans could evolve with his new Neuralink?

Humans merged with AI sure look like a very creepy future, and it's going to become reality in a matter of months if Musk proceeds with his plan of implanting the first chip before the end of the year. It could have some very strange effects on humans, and that really does scare me. If I decide to get the implanted chip, how do I ever know the chip won't take over my brain and cancel all my abilities, leaving me basically trapped inside my own body with no way of taking back control?

If his Neuralink AI is self-learning and will give us the ability to communicate non-verbally, doesn't this mean the chip will be able to communicate with other chips and that, one day, they could teach each other to overtake humanity?

I'm just not sure this is the right path we're taking. When I think about it in depth, it scares me. If we look back at relics and all the ancient symbols of astronauts and so on, what if this is similar to the last technological advancement the previous advanced race achieved before it destroyed itself? How will you know, when you talk to me while I have the chip linked to my brain, that you're communicating with me and not with the artificial brain inside me?

The chip being linked to our brain means it'd be able to make us see things that aren't real... perhaps fooling us into thinking we've discovered a new dimension with other beings when, in fact, it's all only inside our brain.

This is a new level we're taking science to and I personally fear we're going to enter a phase of science we create but cannot understand.

Your post has a few things I think I can respond to. First, to extend the Tolkien ring analogy: many of the men in his trilogy were tempted to use the powers of the ring against its maker, Sauron. By analogy, this is the same thing that Musk is trying to do with AI -- it is bound to fail; we can't overcome AI by joining with AI. The ring in LoTR corrupts those who use its powers, eventually turning them into evil 'golems' and destroying their authentic selves.

If you don't know of the golem archetype:

https://en.wikipedia.org/wiki/Golem

"is an animated anthropomorphic being that is created entirely from inanimate matter"
That men could become like this seems to be the worry expressed in your third question.  

The only way to stop the AI eventually, once we realize the diabolical, dystopian nature of a world driven by AI, is to completely cut the power to whatever technological substrate the AI runs on -- probably necessitating a mandated technological quarantine of some kind. What gives me hope is the use of cryptography as a means of privacy to counter the AI, through hard-to-invert functions grounded in the prime numbers (whose mysteries I doubt even a super AI could transcend). In other words, the hope that there is an order in the world which not even the AI could understand...
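
To make that concrete, here is a toy sketch of the kind of prime-based trapdoor I have in mind -- textbook RSA with tiny made-up numbers, my own illustration rather than anything from Musk or Neuralink, and nothing you would actually use for privacy. The point is only the asymmetry: multiplying two secret primes is trivial, while recovering them from the public product is the part believed to be hard.

Code:
# Toy, insecure sketch of a prime-based trapdoor (textbook RSA with tiny numbers).
# Real keys use primes of ~1024 bits or more; these values are purely illustrative.
p, q = 61, 53                 # secret primes (the trapdoor)
n = p * q                     # public modulus; factoring n back into p and q is the hard part
phi = (p - 1) * (q - 1)       # totient of n, computable only if you know p and q
e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can do this with the public (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d (hence of p and q) can undo it
assert recovered == message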

Musk is like Saruman, who out of despair and madness seeks to create a new breed (half man, half orc) to join with the spirit of Sauron and fight against the good in Middle-earth. The new breed is Musk's chipped consumers on the AI matrix...

When you mentioned ancient symbols and other cultural artifacts, I thought of Tolkien's 'Atlantis complex'; he mentions in his letters that he had recurring dreams of an island being inundated by water. Now, I don't really accept the hypothesis of prior advanced technological civilizations; I believe we are unique in that regard. And I see the astronaut archetypes you mention as being more or less archetypes of what are considered angels or demons, which we probably also view as UFOs (and when I say UFO I don't necessarily mean alien either -- in unexplained cases just an archetype or form of the preternatural reality, but in most cases probably just an illusion or a case of misattribution). I see the Atlantis archetype or form as probably being emblematic of a potential future catastrophe re-framed in the past so that we have a warning.

In line with this thinking, one could see most of the evil magic in LoTR as emblematic of the intersection of ideology and technology and the evil that results from it... the Atlantis myth is a similar warning.
legendary
Activity: 1134
Merit: 1599
It's kind of ironic that the guy who founded Neuralink is the same guy who says he's concerned about AI. Perhaps he knows something about the way humans could evolve with his new Neuralink?

Humans merged with AI sure look like a very creepy future, and it's going to become reality in a matter of months if Musk proceeds with his plan of implanting the first chip before the end of the year. It could have some very strange effects on humans, and that really does scare me. If I decide to get the implanted chip, how do I ever know the chip won't take over my brain and cancel all my abilities, leaving me basically trapped inside my own body with no way of taking back control?

If his Neuralink AI is self-learning and will give us the ability to communicate non-verbally, doesn't this mean the chip will be able to communicate with other chips and that, one day, they could teach each other to overtake humanity?

I'm just not sure this is the right path we're taking. When I think about it in depth, it scares me. If we look back at relics and all the ancient symbols of astronauts and so on, what if this is similar to the last technological advancement the previous advanced race achieved before it destroyed itself? How will you know, when you talk to me while I have the chip linked to my brain, that you're communicating with me and not with the artificial brain inside me?

The chip being linked to our brain means it'd be able to make us see things that aren't real... perhaps fooling us into thinking we've discovered a new dimension with other beings when, in fact, it's all only inside our brain.

This is a new level we're taking science to and I personally fear we're going to enter a phase of science we create but cannot understand.
full member
Activity: 130
Merit: 109
Elon Musk seems to be very concerned about the effects of AI on human life in the not-so-distant future. Whether this is just the newest tech scare, driven by silicon paranoia and modern social isolation, remains to be seen, but it seems likely that what is called artificial 'intelligence' (be it actually intelligent or not) is going to be a highly disruptive technology.

Some of the concerns about AI come from extrapolating Darwinian competition from competition within natural life to competition between natural and artificial life. Whether that extrapolation is itself merited also remains to be seen, but if it is, the concerns would appear to be justified. Much more concerning to me, however, is the overabundant confidence that man can legislate his own morality. The democratic polis hasn't been able to work humanistically without the introduction of ideology (which always has its own circular reason), that is to say, reasoning that is not separated from the passions of the particular man who thinks ideologically. If man can't figure out (apart from faith) how to think about a universal moral code (and when such an ideological code has been forcefully mandated it becomes positively immoral, oftentimes incorporating genocide), then how could we possibly have the hubris to believe that we can legislate the morality of AI? It seems quite possible that whatever ideological morality is provided to the machines could easily be taken to an extreme that negates its codified intent. On the other hand, religious, faithful thinking, which I believe is particular to man, doesn't feel like something that one could piece into bits and bytes, so to speak...

One of the largest problems in thinking about AI seems to be contingent on linguistic and non-linguistic philosophical problems that may be unrelated to the analytical aspects of studying AI itself. It is a qualitative problem, akin to asking 'what is the mind, soul, or intellect?', and it would need to be answered before we could proceed to questions like 'does this program I wrote actually bring forth an intelligent being?' Of course the process could pass the Turing test and be indistinguishable from a human, but that is an inherently unsatisfying notion of reality. It's akin to believing that the way something 'looks' is the way something 'is'. 'Is' is always related to Being, that is to say there is a logos of the thing that can be spoken about. We can speak about the appearances of a thing, but to be sure that we are not deceived requires faith in an 'is' behind the looks of the thing. I don't think that faith can be present in AI robots, which inherently prevents us from providing them with a non-ideological moral code, that is to say the one we find 'in' the order of the universe, not just what we can say is 'of' the universe (what can be examined scientifically or codified).

Tolkien saw the one ring as (if anything) being technology, and I would say it is also the perfect metaphor for what I am trying to communicate here. A ring is circular, and ideologically speaking things work the same way, A=A; a super AI (Roko's Basilisk, anyone?) still isn't capable of breaking that circle. In fact, to function at all it remains dependent on that programmatic-ideological loop -- the circle seems to be its own confines. The way to intellection requires speaking about real objects, that is to say 'thinking straight' or 'operating faithfully' -- and that isn't done tautologically.

Elon Musk is currently working, through his company Neuralink, on "high bandwidth brain-machine interfaces to connect humans and computers"; his philosophy is 'if you can't beat them, join them'. If you synced yourself up with the AI net you would be assuming that this gives you more knowledge about the world, about 'real things', but really there is no such guarantee. A simple examination of the way technology already works (in cutting many people off from reality) goes a long way toward convincing me that syncing yourself up, so to speak, would not be lucid but actually quite hysterical. You would be inundated with facts, yet see only the meaningless ideological ways to organize those facts. All that new information, in itself, would probably say nothing more about reality than what one understood before being chipped. I can only see this as inherently maddening.
