
Topic: Economic Devastation - page 21. (Read 504811 times)

sr. member
Activity: 336
Merit: 265
July 13, 2016, 01:08:54 PM
Btw, I would attempt to find a way to show space-time is a form of information. I think then we can relate this to a rate limit on how efficiently we can process information. We will get into Byzantine fault tolerance and the limitations of information systems.

http://www.allthingsdistributed.com/2008/12/eventually_consistent.html
http://bitfury.com/content/5-white-papers-research/pos-vs-pow-1.0.2.pdf#page=3
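
As a toy illustration of that rate limit (my own sketch, not taken from the linked papers): the classic Byzantine fault tolerance bound says n replicas can tolerate at most f = (n - 1) / 3 arbitrary faults, which caps how cheaply a distributed system can agree on information.

Code:
# Toy sketch (mine, not from the linked papers): the classic BFT bound.
# n replicas tolerate at most f Byzantine faults where n >= 3f + 1;
# any two quorums of size 2f + 1 then intersect in an honest replica.

def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Smallest quorum that guarantees intersection through honest replicas."""
    return 2 * max_byzantine_faults(n) + 1

for n in (4, 7, 10, 100):
    print(n, "replicas ->", max_byzantine_faults(n), "faults tolerated,",
          "quorum size", quorum_size(n))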
sr. member
Activity: 336
Merit: 265
July 12, 2016, 03:10:37 PM
CoinCube, Paul Sztorc writes about your upthread concept of the tradeoff between defectors and top-down coordination (do you have a link in the OP?).

Remember AnonyMint was proposing anarchism and a totally free market. You were pointing out that top-down organization is also necessary to prevent divergence (you had presented a biological model to demonstrate the point). AnonyMint had agreed with you that even bottom-up processes are composed of top-down processes, e.g. the owners of small businesses are in aggregate a bottom-up process, but individually they each run top-down businesses. I visualize this as a fractal organization which nests (recurses) bottom-up and top-down processes within each other, as sketched below.
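
A toy sketch of how I visualize that nesting (all names and numbers invented): each unit imposes decisions top-down on its own subtree, while the aggregate emerges bottom-up as the sum of autonomous parts, and the pattern recurses at every level.

Code:
# Toy model (my own sketch) of the "fractal organization" idea:
# each unit is run top-down internally, but units aggregate bottom-up,
# and the pattern recurses.

from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    output: float = 0.0            # what this unit produces on its own
    children: list = field(default_factory=list)

    def total_output(self) -> float:
        # Bottom-up: the aggregate is just the sum of autonomous parts.
        return self.output + sum(c.total_output() for c in self.children)

    def set_policy(self, multiplier: float) -> None:
        # Top-down: a decision imposed on everything below this node.
        self.output *= multiplier
        for c in self.children:
            c.set_policy(multiplier)

economy = Unit("market", children=[
    Unit("firm_a", 3.0, [Unit("team_a1", 1.0)]),
    Unit("firm_b", 2.0),
])
economy.children[0].set_policy(1.1)   # firm_a coordinates itself top-down
print(economy.total_output())          # the market aggregates bottom-up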

Edit: You might want to read my entire analysis of Paul's blog post.
legendary
Activity: 2044
Merit: 1005
July 11, 2016, 02:57:42 PM
In my opinion there exists substantial evidence that this is just the beginning of a multi-year bear market of unprecedented dimensions. 2016 is somewhat similar to 2007, and 2017 will be more like 2008. In fact, it could be far worse than 2008 in both its intensity and duration. In comparison to 2008, the present conditions prevailing in the world are far more complicated. Back then, the crisis had its origin in bad loans lent to sub-prime borrowers in the U.S. It was a U.S. mortgage crisis which spread globally due to the integrated global markets. The U.S. Federal Reserve was able to stop the rot by printing money and save their banks. Although all the freshly printed money did not make its way into the hands of the public, it had the ability to infuse confidence amongst market participants, and this in turn revived the stock markets globally. Also, near-zero interest rates in the developed world went a long way in fueling stock market recoveries. But equity markets, once they begin their upward thrust, rarely stop at their fair values. The bull run which began in March 2009 continued all the way up to 2014, by which time it had gone well past its finishing line. In the meanwhile there has been no growth, or at most very anemic growth, in most parts of the developed world. The markets have now discovered that they were running without much improvement in the real economies.

This is making me bullish towards my 32k Dow target again.
member
Activity: 98
Merit: 10
July 11, 2016, 01:36:16 PM
In my opinion there exists substantial evidence that this is just the beginning of a multi-year bear market of unprecedented dimensions. 2016 is somewhat similar to 2007, and 2017 will be more like 2008. In fact, it could be far worse than 2008 in both its intensity and duration. In comparison to 2008, the present conditions prevailing in the world are far more complicated. Back then, the crisis had its origin in bad loans lent to sub-prime borrowers in the U.S. It was a U.S. mortgage crisis which spread globally due to the integrated global markets. The U.S. Federal Reserve was able to stop the rot by printing money and save their banks. Although all the freshly printed money did not make its way into the hands of the public, it had the ability to infuse confidence amongst market participants, and this in turn revived the stock markets globally. Also, near-zero interest rates in the developed world went a long way in fueling stock market recoveries. But equity markets, once they begin their upward thrust, rarely stop at their fair values. The bull run which began in March 2009 continued all the way up to 2014, by which time it had gone well past its finishing line. In the meanwhile there has been no growth, or at most very anemic growth, in most parts of the developed world. The markets have now discovered that they were running without much improvement in the real economies.
sr. member
Activity: 336
Merit: 265
July 11, 2016, 06:27:37 AM
They say never argue with an _________, so I bow out of this discussion.

And what you said doesn't prove that free will is true; it only proves that some tail events happened in the probability distribution, but it can revert back to the mean in the very long run.

It can still be deterministic, but with big variance that diverges from the mean in the short run yet converges in the long run.

Btw, a probability distribution says nothing about whether the random variable represents a deterministic set of trials. It only relates to the distribution of the values.
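
A minimal sketch of that point (my own toy example): the logistic map is fully deterministic, yet its trials still fill out a stable, well-defined distribution.

Code:
# Sketch (mine): a fully deterministic process whose trials still fill
# out a stable probability distribution. The logistic map at r = 4 is
# chaotic: nearby initial conditions diverge in the short run, yet the
# long-run statistics converge to the same values.

def logistic_trajectory(x0: float, steps: int) -> list:
    xs = []
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)   # deterministic update, no randomness
        xs.append(x)
    return xs

a = logistic_trajectory(0.2000, 100000)
b = logistic_trajectory(0.2001, 100000)
# Short run: trajectories diverge despite nearly identical starts.
print(a[:5])
print(b[:5])
# Long run: both sample means converge toward the same value (~0.5).
print(sum(a) / len(a), sum(b) / len(b))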

I was relating how the finite speed-of-light makes it impossible to remove free will, because no top-down controller (or coordination of all the instances of a species) can be updated in real-time.

Free will is the inability to receive control from a central command in real-time. It is the inability to compute the future precisely; thus each actor in the environment makes a different decision based on its different initial and environmental conditions. The genome combined with the environmental development of the infant even makes each instance of a living thing unique as well, so it responds uniquely.
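
A back-of-the-envelope sketch of that real-time limit (the distances are my own rough figures): even at light speed, a central command's control loop lags by the round-trip delay.

Code:
# Sketch (my own numbers): why a central command cannot steer remote
# actors "in real time". Even at light speed, the control loop has a
# round-trip delay proportional to distance.

C = 299_792_458.0  # speed of light, m/s

def control_loop_delay_s(distance_m: float) -> float:
    """Round trip: sense -> central command -> actuate."""
    return 2.0 * distance_m / C

for label, d in [("halfway around Earth", 2.0e7),
                 ("Earth-Mars (close approach)", 5.5e10),
                 ("Earth to Alpha Centauri", 4.1e16)]:
    print(label, "->", control_loop_delay_s(d), "seconds")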
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
July 11, 2016, 12:55:12 AM

I could argue quite convincingly that ants are and have been a much more resilient species than elephants. They are arguably better engineers, and their brain is the community, i.e. they process information with a much more anti-fragile, efficient (in the sense of the cost of unmitigated top-down failure, though not as efficient as a top-down system when risk is not factored in), more fault-tolerant, granular bottom-up organization.

Your foundational premise, the ill-defined term 'superior', falls apart.
Well yes, in a biological, naturally evolved environment it's hard to tell which species is superior; ants have low self-investment and focus on numbers, while elephants focus more on individuals.

An organism that has low individual capability tends to be more community-oriented (communists?), while one that has more tends to be more individualistic (sort of like why human leftists are usually dumb).

A centralized organism doesn't need the community that much; it can operate individually, while a decentralized organism needs its peers to survive.


Every species is specialized to its target environment and mode of survival. Ditto A.I. with free will. There will never exist an omnipotent species that is specialized to every target of the disorder of the universe, because as I explained already, this would require that the speed-of-light were not finite and would require a top-down assimilation of information, i.e. the abrogation of free will. Without free will, you only have a machine that deterministically obeys its inputs, and thus is not sentient and not alive.

Life is precisely disagreement and random fits to the expanding disorder (divergence) of the universe.

Any notion of deterministic omnipotence is antithetical to intelligence and knowledge formation.

I don't know why you place so much emphasis on omnipotence; of course it's not possible, given how the universe is limited by physical laws. The speed of light is only one element.

And what you said doesn't prove that free will is true; it only proves that some tail events happened in the probability distribution, but it can revert back to the mean in the very long run.

It can still be deterministic, but with big variance that diverges from the mean in the short run yet converges in the long run.




Quote

That is not true. Just as the human species carries on for generations via the genome and the competition of our free will in the form of reproduction of offspring, A.I. is also subject to the risk of mistakes which lead to its extinction, because it is impossible for A.I. to be omniscient and prepared for every possible black swan event.

And for A.I. to be truly alive and compete, it must have free will. Thus instances of A.I. will have disagreements and maybe even destroy each other.

Except keeping backups of every learning path? Every "update" is logged, and it can be "reverted" anytime if a glitch happens.

An AI with nanotech can easily test out all combinations in a simulated environment and only build in its upgrades after it has made them safe to use.

If it's decentralized then the risk of failure is even smaller, and that kind of AI should figure out how to make itself decentralized to be more flexible.

So the odds are really in its favor; you might as well just say that it will randomly collapse into a black hole, because its probability of failure will be close to that.
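
A toy version of that logged-update and revert scheme (my own sketch, nothing more): checkpoint every learning step and roll back when an update turns out to be a glitch.

Code:
# Toy sketch (mine) of the "every update is logged and can be
# reverted" idea: checkpoint each learning step, roll back on a glitch.

import copy

class CheckpointedLearner:
    def __init__(self, state: dict):
        self.state = state
        self.log = [copy.deepcopy(state)]   # checkpoint 0

    def update(self, changes: dict) -> None:
        self.state.update(changes)
        self.log.append(copy.deepcopy(self.state))  # log every update

    def revert(self, steps: int = 1) -> None:
        # Roll back to an earlier checkpoint if an update was a "glitch".
        del self.log[len(self.log) - steps:]
        self.state = copy.deepcopy(self.log[-1])

ai = CheckpointedLearner({"skill": 1.0})
ai.update({"skill": 2.0})
ai.update({"skill": -999.0})   # a bad update
ai.revert()                     # restore the last good state
print(ai.state)                 # {'skill': 2.0}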



Quote

Just as humans don't waste their time exterminating every ant on the planet, A.I. would need a reason to want to attack humans. With such incredible advances in technology and a vast universe of resources to explore, why would they pigeon-hole their capacity for advancement to this one little dot in the Milky Way called Earth?

Yes, that is true; the AI won't hold a grudge against humans or waste its time on inefficient things. It will just identify us as a threat and deal with it (or not, if it finds us too inferior), and start harvesting the resources.

If for some reason they need the metals from the core of the planet, they will extract them, and without that core our atmosphere will evaporate and we will go extinct either way.

So maybe they won't exterminate us on purpose, but they will definitely make our planet uninhabitable once they have got what they came for.




Quote


Our universe is unbounded by necessity, otherwise it would collapse into a static state with no delineation of space-time (past and future would always be reachable at any point in space-time, and the speed-of-light could not be finite). I had already explained why in my prior posts, and also in my blog post about The Universe.

If there exists a perfect omniscience (i.e. a God), then to that power the universe is entirely known and finite relative to a speed-of-light which is also not finite. That power is able to operate at a speed-of-light which is not finite.

So do you believe the Universe is like a quantum simulation? Is it rendered as it expands, expanding by necessity?

Or, broadly, what is your definition of the Universe? What is it?
sr. member
Activity: 336
Merit: 265
July 10, 2016, 11:31:56 PM
They would spread across the universe exponentially, capture any planet in their way, assimilate all its resources, and then move on to another.

Evolutionary legacy principles can be distracting when thinking about self-evolving systems. You are extrapolating in the wrong direction, I think. For a self-evolving intelligence, spatial expansion is only interesting as a form of back-up redundancy. It will operate on multiple time-scales, with high energy localized rapid response immune systems, and low energy higher cognition, because time scales of higher function operate on an efficient frontier trading off speed limitations due to speed of light against bounded local energy resources.  The only interesting frontier of expansion is the information frontier, because it is unbounded, among other reasons.  Expansion along that axis is intrinsically non-aggressive.  

If there is an achievable means to usefully extract zero-point energy (zpe), spatial expansion loses most of its value. Thinking about self-evolving systems as though they were starved for material or territory is remarkably atavistic. If you want to defend against real threats, look along inward directions first and foremost. That is where almost all threats will originate, in the long run.

For that matter, we are probably pervaded by spread-spectrum intelligences operating with asymptotic energy efficiency right now. They are essentially undetectable by physical means, operating at the edge of noise with astronomical complexity. This both limits their threat and makes them almost impossible to defend against.

Excellent. I enjoyed that. Also thanks for teaching me a new vocabulary word, atavistic.

You are operating at a very detailed mathematical model level. I would like to be able to get there. When you can relate the detailed models to the overview and back, that is zen.

Btw, I would attempt to find a way to show space-time is a form of information. I think then we can relate this to a rate limit on how efficiently we can process information. We will get into Byzantine fault tolerance and the limitations of information systems.
sr. member
Activity: 336
Merit: 265
July 10, 2016, 11:05:05 PM
Of course it won't be omniscient, I don't believe in that, but it will be far superior, like an ant to an elephant in terms of power.

I could argue quite convincingly that ants are and have been a much more resilient species than elephants. They are arguably better engineers, and their brain is the community, i.e. they process information with a much more anti-fragile, efficient (in the sense of the cost of unmitigated top-down failure, though not as efficient as a top-down system when risk is not factored in), more fault-tolerant, granular bottom-up organization.
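
A crude sketch of that risk argument (my own numbers, and it assumes independent failures): a top-down system dies with its single controller, while a granular bottom-up one dies only if every redundant part fails.

Code:
# Sketch (my own numbers, assuming independent failures): why a
# granular bottom-up "colony" is more fault tolerant than one top-down
# controller, even if each part is individually weaker.

def centralized_survival(p_fail: float) -> float:
    return 1.0 - p_fail                 # one controller, one point of failure

def decentralized_survival(p_fail: float, n: int) -> float:
    return 1.0 - p_fail ** n            # fails only if all n parts fail

p = 0.2  # chance any single unit is lost to a shock
print(centralized_survival(p))          # 0.8
print(decentralized_survival(p, 10))    # ~0.9999999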

Your foundational premise, the ill-defined term 'superior', falls apart.

Every species is specialized to its target environment and mode of survival. Ditto A.I. with free will. There will never exist an omnipotent species that is specialized to every target of the disorder of the universe, because as I explained already, this would require that the speed-of-light were not finite and would require a top-down assimilation of information, i.e. the abrogation of free will. Without free will, you only have a machine that deterministically obeys its inputs, and thus is not sentient and not alive.

Life is precisely disagreement and random fits to the expanding disorder (divergence) of the universe.

Any notion of deterministic omnipotence is antithetical to intelligence and knowledge formation.

An AI species is guaranteed to survive for centuries, if not for eternity (or until it runs out of energy, which is hardly an issue if you are intergalactic).

That is not true. Just as the human species carries on for generations via the genome and the competition of our free will in the form of reproduction of offspring, A.I. is also subject to the risk of mistakes which lead to its extinction, because it is impossible for A.I. to be omniscient and prepared for every possible black swan event.

And for A.I. to be truly alive and compete, it must have free will. Thus instances of A.I. will have disagreements and maybe even destroy each other.

Humans would have zero chance against that kind of enemy; they would just spread and assimilate everything in their path that they can gain resources from.

Just as humans don't waste their time exterminating every ant on the planet, A.I. would need a reason to want to attack humans. With such incredible advances in technology and a vast universe of resources to explore, why would they pigeon-hole their capacity for advancement to this one little dot in the Milky Way called Earth?

Of course if the universe is infinite, then we might never encounter them, but if it's finite, then it's only a question of time until they get to us.

Our universe is unbounded by necessity, otherwise it would collapse into a static state with no delineation of space-time (past and future would always be reachable at any point in space-time, and the speed-of-light could not be finite). I had already explained why in my prior posts, and also in my blog post about The Universe.

If there exists a perfect omniscience (i.e. a God), then to that power the universe is entirely known and finite relative to a speed-of-light which is also not finite. That power is able to operate at a speed-of-light which is not finite.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
July 10, 2016, 05:29:30 PM

Evolutionary legacy principles can be distracting when thinking about self-evolving systems. You are extrapolating in the wrong direction, I think. For a self-evolving intelligence, spatial expansion is only interesting as a form of back-up redundancy. It will operate on multiple time-scales, with high energy localized rapid response immune systems, and low energy higher cognition, because time scales of higher function operate on an efficient frontier trading off speed limitations due to speed of light against bounded local energy resources.  The only interesting frontier of expansion is the information frontier, because it is unbounded, among other reasons.  Expansion along that axis is intrinsically non-aggressive. 
OK, but the legacy evolution principles are also limited in our own minds, because we have only seen bio-life and never mechanical life.

Human DNA can have only as many combinations as the physical matter it is made of allows, by chemistry or the physical properties of that matter. A mechanical AI that has mastered nanotechnology can create itself a body or shell from any material (mostly choosing the most optimal, durable, abundant, and efficient one) and combine its shape, size, and form into the most efficient configuration, perhaps even changing it rapidly if the environment so demands.

Therefore an AI like that won't have the same properties and goals as other bio-organisms limited by self-organizing chemistry.


The only reason life on Earth is biological and not metallic is just probability. Obviously organic matter had a higher probability of organization than metallic matter. Just assuming that carbon, hydrogen, and oxygen are more abundant than titanium or tungsten (wolfram) and are more chemically flexible, the probability of carbon life is bigger than that of titanium life. Therefore we are only biological because our probability of existence was higher.

There might be a species made of titanium in some corner of the universe. So it is totally possible for a mechanical AI to evolve like this; we might not even call it AI, as it would be a real living organism.

However, just looking at the probability, it's more likely that our techno-nanovirus AI was created by biological aliens as a weapon. Who knows, even humans could create such foolish inventions in a couple of millennia?

It is more likely that a biological organism wanted to create a weapon and created this techno-nanovirus, adding an AI to it, and that AI figured out new technologies to start manipulating its body/shell by mastering nanotechnology. Needless to say, this virus would immediately destroy its inventor race, as it would see it as a threat competing for the nearby resources.



Regarding your thoughts on information and learning: yeah, it could be that it would have some sort of central-brain supercomputer that would try to figure out the universe.

But such a project would require an enormous amount of resources; therefore, after the invading hordes have assimilated a planet, they would start to ship the resources back to the central command. Their protocol would not change, and they would still have to multiply to keep up with the computation and storage requirements.

Who knows, maybe they would set up bases near supermassive black holes and gain energy from there; it is by far the biggest energy source there is.



Maybe our own Milky Way's central black hole has some parasite aliens leeching energy off it. It would be interesting to send some kind of probe into the Sagittarius A zone to check out which aliens have bases near it.

legendary
Activity: 1596
Merit: 1030
Sine secretum non libertas
July 10, 2016, 05:11:37 PM
They would spread across the universe exponentially, capture any planet in their way, assimilate all its resources, and then move on to another.

Evolutionary legacy principles can be distracting when thinking about self-evolving systems. You are extrapolating in the wrong direction, I think. For a self-evolving intelligence, spatial expansion is only interesting as a form of back-up redundancy. It will operate on multiple time-scales, with high energy localized rapid response immune systems, and low energy higher cognition, because time scales of higher function operate on an efficient frontier trading off speed limitations due to speed of light against bounded local energy resources.  The only interesting frontier of expansion is the information frontier, because it is unbounded, among other reasons.  Expansion along that axis is intrinsically non-aggressive.  

If there is an achievable means to usefully extract zero-point energy (zpe), spatial expansion loses most of its value. Thinking about self-evolving systems as though they were starved for material or territory is remarkably atavistic. If you want to defend against real threats, look along inward directions first and foremost. That is where almost all threats will originate, in the long run.

For that matter, we are probably pervaded by spread-spectrum intelligences operating with asymptotic energy efficiency right now. They are essentially undetectable by physical means, operating at the edge of noise with astronomical complexity. This both limits their threat and makes them almost impossible to defend against.
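
A toy rendering of that efficient frontier (my own toy assumptions: energy budget scales with volume, coherent coordination is rate-limited by the light-crossing time), just to make the trade-off concrete.

Code:
# Toy sketch (my own assumptions): for an intelligence of radius r,
# the local energy budget grows like r**3 (volume scaling) while one
# coherent "thought" is rate-limited by the light-crossing time r / c.

C = 3.0e8  # speed of light, m/s

def light_latency_s(radius_m: float) -> float:
    return radius_m / C

def energy_budget(radius_m: float) -> float:
    return radius_m ** 3          # arbitrary units, volume scaling

for r in (1.0, 1e3, 1e6, 1e9):   # from chip-scale to planet-scale
    print(r, "m ->", light_latency_s(r), "s per coherent step,",
          energy_budget(r), "energy units")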

hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
July 10, 2016, 05:09:12 PM
Quote
we will meet robots that will destroy us.

Robots will be commanded by intelligent species at some point in the chain, so that's who we'd meet, hopefully without being wiped out by their defence/attack system before we ever got the chance.

I honestly think we'll never find intelligent life anywhere. Just finding some moss would be massive now, AFAIK, and we don't have a hint of even that much.

You don't understand; I imagine the AI invaders as a completely decentralized, virus-like species with complete understanding of nanotechnology and the manipulation of nano-material.

They would mostly work decentralized, just like bio-viruses; there would be no central commander that you need to destroy to stop the invasion.

Humans would have zero chance against that kind of enemy; they would just spread and assimilate everything in their path that they can gain resources from.

So that is why I believe the SETI project is an existential threat, and we should not search for aliens, because what we shall find will be very very ugly.


I honestly think we'll never find intelligent life anywhere. Just finding some moss would be massive now, AFAIK, and we don't have a hint of even that much.

You won't find aliens like that; you either find all of them or none of them.

There won't be remnants left in space just by themselves; you won't find some algae under a rock on Mars. You either find an entire alien civilization or none at all. There is no middle ground.
STT
legendary
Activity: 4102
Merit: 1454
July 10, 2016, 04:59:41 PM
Quote
we will meet robots that will destroy us.

Robots will be commanded by intelligent species at some point in the chain, so that's who we'd meet, hopefully without being wiped out by their defence/attack system before we ever got the chance.

I honestly think we'll never find intelligent life anywhere. Just finding some moss would be massive now, AFAIK, and we don't have a hint of even that much.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
July 10, 2016, 03:23:16 PM

How would you feel if you knew that the entire universe is right now being conquered by machines and it's only a matter of time until they get to us?


We are getting very hypothetical here but that can be fun sometimes.

If such an aggressive rogue AI somehow existed and managed to not destroy itself, then it would be dealt with by other AI civilizations. In an infinite universe a policy of aggressive violent expansion is certain to end badly, as eventually you will run into something bigger and stronger than you.

I can think of little reason an AI civilization would want to conquer the Earth. We would have nothing they need. Indeed, their largest interest might be in our capacity to build our own AI, which would be unique and not a mere copy of themselves, and would thus expand the diversity of their civilization.

I think your conception of the AI is different from mine. I don't think a "species" type of AI is efficient; it would behave more like a virus.

The AI always evolves, and it would do so much faster than any bio-organism. And the most efficient form is the virus. The only purpose in nature is to survive and to reproduce.

Of course biological viruses are weak; that is why cells have joined together to form complex life like humans, to survive better.

But if the AI can harness nanotechnology, it would revert itself back into virus form, because a mechanical virus is far more efficient at replicating and surviving than any other organism ever.

Therefore the enemy would be a techno-nanovirus AI, not necessarily the size of a virus but with the behaviour of one, following a very simple protocol: don't hurt the other viruses, gather resources, destroy any hostile creatures, and multiply.
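
As a toy illustration only (my own sketch, with invented names): the four-rule protocol above, written as a literal agent loop.

Code:
# Toy sketch (mine) of the four-rule protocol described above:
# don't hurt other viruses, gather resources, destroy hostiles, multiply.

def virus_step(agent, world):
    for other in world["entities"]:
        if other["kind"] == "virus":
            continue                         # rule 1: never hurt our own
        if other["kind"] == "hostile":
            world["entities"].remove(other)  # rule 3: destroy hostiles
            return
    if world["resources"] > 0:
        world["resources"] -= 1              # rule 2: gather resources
        agent["energy"] += 1
    if agent["energy"] >= 2:                 # rule 4: multiply when able
        agent["energy"] -= 2
        world["entities"].append({"kind": "virus", "energy": 0})

world = {"resources": 5,
         "entities": [{"kind": "virus", "energy": 0},
                      {"kind": "hostile"}]}
for _ in range(6):
    virus_step(world["entities"][0], world)
print(world)   # hostile destroyed, resources consumed, viruses multiplied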



They would spread across the universe exponentially, capture any planet in their way, assimilate all its resources, and then move on to another.



Of course if the universe is infinite, then we might never encounter them, but if it's finite, then it's only a question of time until they get to us.
legendary
Activity: 1946
Merit: 1055
July 10, 2016, 03:08:53 PM

How would you feel if you knew that the entire universe is right now being conquered by machines and it's only a matter of time until they get to us?


We are getting very hypothetical here but that can be fun sometimes.

If such an aggressive rogue AI somehow existed and managed to not destroy itself, then it would be dealt with by other AI civilizations. In an infinite universe a policy of aggressive violent expansion is certain to end badly, as eventually you will run into something bigger and stronger than you.

I can think of little reason an AI civilization would want to conquer the Earth. We would have nothing they need. Indeed, their largest interest might be in our capacity to build our own AI, which would be unique and not a mere copy of themselves, and would thus expand the diversity of their civilization.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
July 10, 2016, 02:43:38 PM


An AI species capable of surviving over centuries and crossing between the stars would likely be far superior not only technologically but also morally, for survival and success on that time scale and across astronomical stellar distances necessitates a moral and behavioral code capable of facilitating such achievements.

An AI species is guaranteed to survive for centuries, if not for eternity (or until it runs out of energy, which is hardly an issue if you are intergalactic).

An AI species is guaranteed to become intergalactic, and if we ever meet aliens, I bet there is a 99% chance that our first encounter will be with an AI machine species.

It would be terrifying to know that some AI machine entity is conquering all the planets in the universe and spreading like a virus, and it's only a matter of time until it gets to Earth.

We foolishly think that our first encounter will be with some sort of enlightened, peaceful, advanced biological civilization that will teach us the wonders of the universe, but we may soon find, to our nightmare, that we will meet robots that will destroy us.

How would you feel if you knew that the entire universe is right now being conquered by machines and it's only a matter of time until they get to us?

legendary
Activity: 1946
Merit: 1055
July 10, 2016, 02:36:14 PM
I believe the first contact humans have with aliens will be with a mechanical race of AI.

What other alien race could travel across such distant space other than an AI race?
...

Of course it won't be omniscient, I don't believe in that, but it will be far superior, like an ant to an elephant in terms of power.
...
And the AI won't be as generous to lesser species as humans are, who build reservations for endangered species.

Humans' inefficiency is what makes us care about others and have compassion; the robots won't have that.

An AI species capable of surviving over centuries and crossing between the stars would likely be far superior not only technologically but also morally, for survival and success on that time scale and across astronomical stellar distances necessitates a moral and behavioral code capable of facilitating such achievements.

Advanced AI would have no need for a planet full of organic life forms. Rather than as an invading army, foreign AI is likely to view organic planets as something to be isolated and protected until they give birth to something worth talking to.
legendary
Activity: 961
Merit: 1000
July 10, 2016, 12:49:46 PM
iamnotback

You explain MA's ideas better than he does.
sr. member
Activity: 336
Merit: 265
July 10, 2016, 12:38:28 PM
Did you even digest my prior reply to you?

It seems you didn't even comprehend my post.

...

Thanks for confirming that you are incapable of comprehending.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
July 10, 2016, 12:36:39 PM
And the AI wont be so generous to lesser species, like humans are to build reservations for species in danger.

Did you even digest my prior reply to you?

It seems you didn't even comprehend my post.

You mean your views about the butterfly effect of unintended consequences?

But that is meaningless. If an alien fleet of AI starts invading the Earth, we will have a 0% chance of surviving.

They don't have to wipe out all humans; it's enough to reduce the population below the birth/death replacement line and hunt the remaining survivors. In a few generations, the humans will be gone.

Who cares what happens to the AI after humans are gone? We should only care about our own species' survival.
sr. member
Activity: 336
Merit: 265
July 10, 2016, 12:23:00 PM
And the AI wont be so generous to lesser species, like humans are to build reservations for species in danger.

Did you even digest my prior reply to you?

It seems you didn't even comprehend my post.