
Topic: Is a Madmax outcome coming before 2020? Thus do we need anonymity? - page 31. (Read 102812 times)

hero member
Activity: 518
Merit: 521
More anonymity is always good. Because NSA Because Privacy Because Rights Because we can.

One potential downside of increased anonymity is that the government and the socialists (meaning all those dumb blissful people who think government is necessary and good) will probably target it for destruction.

But the entire point is that if we can't build something the government can't destroy, then we are just masturbating with Bitcoin anyway (or merely helping to build acceptance of electronic currency which the government will then take control of, and we are right back in the same problems again).

So yes we must build it. And we must build it well enough to be able to give our silent "F U" to the government.
full member
Activity: 588
Merit: 100
More anonymity is always good. Because NSA Because Privacy Because Rights Because we can.
hero member
Activity: 518
Merit: 521
CoinCube, the advanced sentient AI you refer to would only employ humans for manual labor if that were less expensive than producing less sentient robots to do that work.

I commend the concise way you explained my "P.S." as virtual simulation versus complete integration in "real life".

Your point about different niches and the sum total being greater is really a less mathematical way of summarizing my point that the entropy and thus diversity must be maximized. Unless AI has all the entropy of human life, it can't replace it. It can add entropy but not replace it.

The basic fallacy spouted by Kurzweil is the assumption that if the computational power of computers exceeds that of a single human brain (or even if the computational power of all computers exceeds that of all human brains), then computers will replace and supersede all creativity of human brains. It is false because computational power is not (alone) responsible for the creativity of humans. Rather, humans fail massively, and creativity emerges from the chance of a human brain being in the right place at the right time, and from the fact that humans are immersed in this biological life and each brain is unique, carrying a historical (and ongoing dynamic) entropy from the beginning of time. The only way that robots can replace and supersede all the creativity that emerges from that entropy is to replace the biological life, i.e. to reinvent humans. Computational power has nothing whatsoever to do with the issue.

Computational power will be useful for emulating human abilities such as understanding speech, which is still far from accurate enough. To the extent that computational power becomes more useful than what humans can already do, humans will simply add these computers to their brains and wire them in as cyborgs.

We have much more to fear from humans than AI. The coming debt collapse will again reveal the nasty side of human nature and the power vacuum of democracy.

Obama has begun tilting the USA towards a totalitarian dictatorship where some large portion of the citizenry is in support of "the 99% must take from the 1%". Just look at the stages of Germany's collapse into Weimar and then Hitler for a general outline of how communism+socialism+fascism meld into a wonderful outcome. For example, the minimum wage (which Obama nearly doubled this week) is a form of communism and results in economic failure. Eventually the 1% is the (even lower) middle class and later it is everyone, as the cancer of communism+socialism+fascism eats itself when it runs out of other people's money to confiscate and spend.

The big difference in the USA is that some percent are awake and have guns. The elite who are controlling the USA know this. This is why, for example, Obama has been issuing gun control by executive orders, they are building concentration camps throughout the USA often near railroad tracks, and Homelove "hands down your pants" Security has deputized local police as federal agents and will obtain 2714 tank-like vehicles and 6 billion hollow point rounds over the next 8 years. Note the father of "we are nearing a new world order" President Bush, named Prescott Bush, was apparently via Union Bank one of the financiers of factories supplied with free labor from the concentration camps.

It is going to be "interesting" to see how this plays out.

Europe is in worse condition, believe it or not.
legendary
Activity: 1946
Merit: 1055
TiagoTiago, you are envisioning the robots we see in science fiction like Terminator and Star Trek.
Let me go over each of your points and explain why I think they are very improbable.

Quote
Humans wouldn't take kindly to robots stealing their ore, messing with their electric grid etc, nor to viruses clogging their cat-tubes. But by then, they would already have advanced so much that we would at most piss them off; and it doesn't sound like a good idea to attract the wrath of a vastly superior entity.
You have this backwards. Advanced AI of the type you are envisioning would never need to steal any of these things. They would be the most productive aspect of society and thus would control and make most of the wealth. They may even hire humans to do much of the mundane work of resource production because it is manual labor: boring and not really worth their time. More likely it will be humans doing the stealing.

Quote
The difference from neanderthals is a post-singularity AI would be self-improving, and would do so at timescales that are just about instant when compared to organic evolution or even human technological advancements.
Not in the sense you are thinking. Such short-timescale evolution would only be relevant in virtual simulation, with little relevance in the real world. Things that cause effects in the real world would still require empirical physical testing to confirm, and would thus involve significant cost.

Quote
If combination is better, we would be assimilated; resistance would be futile.
You misunderstand... my point was not about cyborgs. The question you need to ask is whether a combination of AI and human society (meaning the combined productive/creative/work output of these societies when summed up) would exceed a society of AI without any humans at all. As we would occupy very different ecological niches, the answer is yes.

Quote
The difference is a post-singularity AI would be able to increase its effective population much faster than humans, while at the same time improving the efficiency of its previously existing "individuals".
The "efficiency" you describe would be efficiency/adaption to survive in their enviornment mainly the net/web or whatever we have at that time. AI would not be immune to competitive pressures and would likely spend most of its time competing with other AI over whatever AI decides is important to it (data space/memory/who knows). They are not likely to be interested in being efficient humans.

Quote
The AI could for example decide it would be more efficient to convert all the biomass on the planet into fuel, or wipe all the forests to build robot factories, or cover the planet with solarpanels etc. Using-up-all-the-resources-of-the-planet seems like a very likely niche; humans themselves are already aiming to promote themselves up on the Kardashev scale in the long term...

FUD. AI superior to us would be intelligent enough to see the value in sustainability. Destroying a resource you can not replace is unwise. AI could also decide to call itself Tron and make all humans play motorcycle and Frisbee games in duels to the death. The key question is what is likely.

I highly recommend reading The Golden Age by John C. Wright. It is sci-fi which explores a future with vastly superior AI. It is a reasonable guess at what a future filled by a race of superior AI beings might look like, and it is a fun read.
hero member
Activity: 518
Merit: 521
The thing is, for a virtual or decentralized synthetic agent, the cost of failure can be much smaller than for organic species. And because they wouldn't be restricted by DNA, they would be able to evolve much faster.

See CoinCube, he is never going to understand.

How many times did I tell him that massive and continuous failure is integral to adaptation, optimal fitness, and resilience? And he still doesn't understand (or ignores) that massive failure (maximizing the number of minimum orthogonal probabilities) is necessary to maximize entropy and stay on the universal trend as stated in the Second Law of Thermodynamics.

He continues to think that faster computation is meaningful in the context of the fitness of the macro economy, even though I have explained over and over again that it is chance and maximizing diversity that are salient to optimizing fitness (i.e. maximizing the economy, thus by definition success).

(Note faster computation is meaningful for solving computable algorithms, but this is not the same as macro fitness. And don't forget Gödel's incompleteness theorem, which says that no consistent, computably enumerable set of axioms capable of expressing arithmetic can be complete.)

He doesn't comprehend the links I provided upthread on simulated annealing. If you cool ice too quickly it develops cracks, because the localized ordering of the molecules isn't optimal (the cooling was forced top-down too fast for the localized actors to do sufficient trial-and-error, i.e. massive failure is required on the way towards optimal fitness). He doesn't understand that this resistance/friction (mass) to top-down control is necessary otherwise entropy collapses to minimum (past and present become one) and failure of optimization results.
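For anyone who didn't follow those links, here is a minimal illustrative sketch of simulated annealing (my own toy Python, not taken from the linked material; the function names and the bumpy test function are just placeholders I chose). The point to notice is that the algorithm deliberately accepts worse moves with probability exp(-delta/T); only by cooling slowly, i.e. by tolerating lots of localized failures along the way, does it settle near an optimal configuration. Cool too fast and it typically freezes into a cracked, suboptimal state.

Code:
import math
import random

def anneal(energy, neighbor, x0, t_start=10.0, t_end=0.01, steps=20000):
    """Generic simulated annealing: worse ('failed') moves are accepted
    with probability exp(-delta/T), so failure is built into the search."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for i in range(steps):
        # geometric cooling schedule; too few steps = cooling too fast,
        # which traps the search in a poor local optimum (the cracked ice)
        t = t_start * (t_end / t_start) ** (i / steps)
        x_new = neighbor(x)
        e_new = energy(x_new)
        if e_new < e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# toy example: minimize a bumpy 1-D function full of local minima
f = lambda x: 0.1 * x * x + math.sin(5 * x)
step = lambda x: x + random.uniform(-1, 1)
print(anneal(f, step, x0=8.0))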

I mean I have tied it all together, but some people will be incapable of understanding, unless perhaps it is developed carefully into a book that is taught step-by-step. The math needs to be detailed, and perhaps some further derivations can be achieved to pull it all together holistically with existing science.

So that is why I bowed out before, and why I will now bow out again.

If he continues to write things that might sound plausible to many readers, yet which continue to miss the point, then I think I will just have to let readers think he is correct so they can join him in failing to understand.

I am sorry, but how can I respond in a way that is not condescending and does not end up repeating the same things over and over again for the next 100 posts?

I really don't like being disrespectful and he has been so cordial. I have noticed that other very smart people tend to try to embarrass people who can't get it and that is not my intention here. I guess it is just frustration. I am trying to get him to realize that he is not addressing my points at all, and keeps repeating concepts which ignore my points.

Also the assumption that eliminating DNA matters, i.e. that DNA alone is responsible for human evolution, is a non-sequitur or strawman. That is another complex discussion which I don't feel up to addressing here and now.

P.S. Notwithstanding that the above stands on its own without this following point, it is also quite myopic to assume the cost of failure can be less. If the robots are interacting with the biological life, which they must if they are going to threaten and dominate humans as feared, then those life forms are going to suffer from failures and take actions accordingly. One of the myriad of potential reactions could be to destroy the robots who are causing failure.
hero member
Activity: 616
Merit: 500
Firstbits.com/1fg4i :)
It seems we're at a deadlock. Perhaps some input from some neutral third-parties could help the discussion move on?

Perhaps I can help find consensus here. In my opinion, TiagoTiago, you and AnonyMint are talking past each other.

You are arguing that through innovation we will create a being superior to humans, and that this will lead to human extinction via a tech singularity where computers vastly outthink humans.
AnonyMint is arguing, as per his blog Information is Alive, that for computers to match or exceed humanity they would essentially need to be alive, i.e. like humans: reproducing and contributing to the environment.

Is it possible to create AI that is better than humans? By better I mean AI that exceeds the creativity/potential of all of humanity.
Sure it's possible, but as argued by AnonyMint such an AI would have to be dynamic, alive, and variable with a chance of failure, and would thus not be universally superior.
The thing is, for a virtual or decentralized synthetic agent, the cost of failure can be much smaller than for organic species. And because they wouldn't be restricted by DNA, they would be able to evolve much faster.

So what we are really talking about here is whether the creation of sentient AI will lead to a race of AI that results in the inevitable extinction of humans. The answer to this is a definite no.

The dynamics of inter-species competition depend on the degree to which each species is dependent on shared limited resources. A simple model of pure competition between two species is the Lotka-Volterra model of direct competition.
Even with pure competition, where species A is always and in every way bad for species B, the outcome is not necessarily extinction. It depends on the competition coefficient (which is essentially a measure of how much the two species occupy the same niche).
Sure it is possible that the AI would cooperate with or otherwise be beneficial to humans. But the only pressure towards that direction is what humans would do; and we would only have an extremely short window of time to influence it before it gets beyond our reach.

Should we invent AI, or even a society of AI, that is collectively vastly superior to human society, we would only be in danger of extinction if such robots were exactly like us (eating the same food, wanting the same shelter, etc., plus better endowed and lusting after human women if you want to add insult to injury Grin). Now obviously this would be very hard to do, because we have evolved over a long time and are very, very good at filling our personal ecological niche. We wiped out the last major contenders, the Neanderthals, despite the fact that they were bigger, stronger, and had bigger brains (very good chance they were individually smarter).
The difference from neanderthals is a post-singularity AI would be self-improving, and would do so at timescales that are just about instant when compared to organic evolution or even human technological advancements.

Much more likely is that any AI species would occupy a completely different niche than we do (consuming electricity, living online, non-organic chemistry, etc.). Such an AI society would be in little to no direct competition with humans and would likely be synergistic.
Humans wouldn't take kindly to robots stealing their ore, messing with their electric grid etc, nor to viruses clogging their cat-tubes. But by then, they would already have advanced so much that we would at most piss them off; and it doesn't sound like a good idea to attract the wrath of a vastly superior entity.


The question in that case is not whether the AI society is collectively superior but is instead whether the combination of human and AI together is superior to AI alone.
If combination is better, we would be assimilated; resistance would be futile.

Since the creativity of sentience is enhanced as the sentient population grows, the answer is apparent.
The difference is a post-singularity AI would be able to increase its effective population much faster than humans, while at the same time improving the efficiency of its previously existing "individuals".

Could humanity wipe itself out by creating some sort of super robot that is both more intelligent (on average) and occupies the exact same ecological niche we do? Sure, we could do it in theory, but it would be very, very hard (much harder than just creating sentient AI). There are far easier ways to wipe out humanity.
The AI could for example decide it would be more efficient to convert all the biomass on the planet into fuel, or wipe all the forests to build robot factories, or cover the planet with solarpanels etc. Using-up-all-the-resources-of-the-planet seems like a very likely niche; humans themselves are already aiming to promote themselves up on the Kardashev scale in the long term...
hero member
Activity: 518
Merit: 521
CoinCube that is excellent. Thanks.

Indeed I was also going to mention that we don't know whether AI would want to destroy or cooperate with humans. And I also raised the point that we can leverage the AI. And outcomes are much more likely to be a complex myriad of synergies and competitions than one single low-entropy result of annihilation or total enslavement.

The most fundamental takeaway is that entropy is always trending to maximum and it is very implausible to engineer a counter-trend very low entropy result such as extinction or universal superiority. Such extinction/slavery results happen by chance and are thus high entropy results.

What I am trying to get readers to realize is how easily humans fall into the emotional trap of adopting Malthusian fear, which is really the illogical fear that the trend of the universe towards maximum entropy will stop, or that the most highly unlikely and rare outcomes are real.

Rather than fearing things that never happen (e.g. walking into a room without oxygen, i.e. a very low entropy result), we should be preparing for the socialism, debt collapse, megadeath outcome that repeats throughout human history.
legendary
Activity: 1946
Merit: 1055
It seems we're at a deadlock. Perhaps some input from some neutral third-parties could help the discussion move on?

Perhaps I can help find consensus here. In my opinion, TiagoTiago, you and AnonyMint are talking past each other.

You are arguing that through innovation we will create a being superior to humans, and that this will lead to human extinction via a tech singularity where computers vastly outthink humans.
AnonyMint is arguing, as per his blog Information is Alive, that for computers to match or exceed humanity they would essentially need to be alive, i.e. like humans: reproducing and contributing to the environment.

Is it possible to create AI that is better than humans? By better I mean AI that exceeds the creativity/potential of all of humanity.
Sure it's possible, but as argued by AnonyMint such an AI would have to be dynamic, alive, and variable with a chance of failure, and would thus not be universally superior.
So what we are really talking about here is whether the creation of sentient AI will lead to a race of AI that results in the inevitable extinction of humans. The answer to this is a definite no.

The dynamics of inter-species competition depend on the degree to which each species is dependent on shared limited resources. A simple model of pure competition between two species is the Lotka-Volterra model of direct competition.
Even with pure competition, where species A is always and in every way bad for species B, the outcome is not necessarily extinction. It depends on the competition coefficient (which is essentially a measure of how much the two species occupy the same niche).
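To make that concrete, here is a small illustrative sketch (Python of my own choosing; the growth rates, carrying capacities, and competition coefficients are made-up example values, not fitted to anything) of the two-species Lotka-Volterra competition equations dN1/dt = r1*N1*(1 - (N1 + a12*N2)/K1) and dN2/dt = r2*N2*(1 - (N2 + a21*N1)/K2). With partial niche overlap (coefficients below K1/K2 and K2/K1) the two species settle into coexistence; only when the coefficients exceed those ratios does one competitively exclude the other.

Code:
def lotka_volterra(n1, n2, r1=0.5, r2=0.5, k1=100.0, k2=100.0,
                   a12=0.6, a21=0.6, dt=0.01, steps=100000):
    """Two-species Lotka-Volterra competition, integrated with Euler steps.
    a12 and a21 are the competition coefficients (degree of niche overlap)."""
    for _ in range(steps):
        dn1 = r1 * n1 * (1 - (n1 + a12 * n2) / k1)
        dn2 = r2 * n2 * (1 - (n2 + a21 * n1) / k2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

# partial niche overlap: both populations persist (stable coexistence)
print(lotka_volterra(10.0, 10.0, a12=0.6, a21=0.6))
# strong niche overlap (coefficients above 1): the initially smaller
# species is driven toward extinction (competitive exclusion)
print(lotka_volterra(10.0, 9.0, a12=1.5, a21=1.5))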

Should we invent AI, or even a society of AI, that is collectively vastly superior to human society, we would only be in danger of extinction if such robots were exactly like us (eating the same food, wanting the same shelter, etc., plus better endowed and lusting after human women if you want to add insult to injury Grin). Now obviously this would be very hard to do, because we have evolved over a long time and are very, very good at filling our personal ecological niche. We wiped out the last major contenders, the Neanderthals, despite the fact that they were bigger, stronger, and had bigger brains (very good chance they were individually smarter).

Much more likely is that any AI species would occupy a completely different niche than we do (consuming electricity, living online, non-organic chemistry, etc.). Such an AI society would be in little to no direct competition with humans and would likely be synergistic. The question in that case is not whether the AI society is collectively superior but is instead whether the combination of human and AI together is superior to AI alone.

Since the creativity of sentience is enhanced as the sentient population grows, the answer is apparent.

Could humanity wipe itself out by creating some sort of super robot that is both more intelligent (on average) and occupies the exact same ecological niche we do? Sure, we could do it in theory, but it would be very, very hard (much harder than just creating sentient AI). There are far easier ways to wipe out humanity.



hero member
Activity: 518
Merit: 521
Armstrong writes about misleading China trade data. He says importers are bringing in cash in a carry trade. I had also read that exporters offshore and hide profits in Singapore and Hong Kong, so this understates the trade imbalances and makes it look like Chinese factories are not profitable.

Essentially the elite of China are extracting some, much, or all (not sure which) of the wealth from the people. And they are exacerbating the debt bubble and the misallocation of capital to the wrong activities and oversupply. This is not a sustainable domestic boom, but rather a bubble.

This is why I have written that China must go through economic implosion chaos and an overthrow of the elite before it can really grow again.

What the non-western countries have in their favor is very low government share of GDP because the social welfare institution is not as perverse. But what they have in their disfavor is they have an elite top-down structure that is standing in the way of bottom-up economic growth. So the developing world must also go through the chaos/riots of the uptick in the war cycle that Armstrong's model predicts.

The developing world has less far to fall because they are already lower economically, with fewer social promises. Yet the fall will still be very painful since they have so many at the poverty line already.



Martin Armstrong's position has been there is no proof of a global conspiracy, and he doesn't speculate. That is an acceptable position, except that he continues to assert there is no global conspiracy, which is thus speculation, since he doesn't have any proof to support that assertion. So I urge him to stop being disingenuous and appearing to be a tool of the elite towards a one world currency which he has proposed as a solution to this crisis.

As for proof of a global conspiracy, we got a big chunk of proof from Aaron Russo as follows.

https://bitcointalksearch.org/topic/m.3497509

Quote from: AnonyMint
As a Treasury official said a decade or so ago, around the time he also said "we will burn the fingers of the goldbugs up to their armpits", it has always been the plan to go after the millionaires and steal back all their gains for the elite (skip to 36:35 min of the linked video) who run the fiat system. And Bitcoin is an amazingly great tracking tool to aid them in this coming global confiscation via taxation-of-the-rich process. Note the elite super rich are always excluded from such gestapos.

Former (Jewish?) IRS Commissioner and the man who wrote much of the tax code law, said to (Jew) Aaron Russo (producer of Bette Midler, The Rose, Trading Places, etc) in Ashkenazi Jewish Yiddish language, "nothing will help you". Skip to the 37 min point in the linked video.

The elite know exactly what they are doing by launching Bitcoin via the fictitious anonymous identity "Satoshi".

Nick Rockefeller told Aaron Russo what the goal is.

P.S. The Ashkenazi Jews have a much higher average IQ of 117, and many elite are Ashkenazi Jews. That says nothing against all Jews, however.

Also it is rather implausible to discount the fact that the entire transition to AML and KYC, which is enabling this hunt for capital which Martin admits and writes about, was engineered starting with 9/11. And it is pretty difficult to believe that 9/11 was done by 16 guys on camels rather than engineered by ... (much circumstantial evidence points to Dick Cheney as a key cog in the wheel). The evidence that the buildings were not downed by airplanes is overwhelming; even 1000s of architects and engineers have signed a petition saying the government's story is implausible. And this terrorism false flag farce is being used to keep the world locked into a non-default increasing debt trajectory with a hunt for all capital. Precisely what is necessary to drive the world into a severe economic contagion which can usher in a one-world currency type result after destroying the nation-state concept.

I am not sure there is a global conspiracy. And it doesn't really affect my actions or goals anyway. So I don't really care. But I am skeptical of a guy (Armstrong) who speaks against decentralized cryptocurrencies, speaks for a one-world currency solution (with national or regional currencies floating relative to it), and who speaks against the possibility of the global conspiracy without any proof.

Just because Armstrong is aware of manipulations at the lower-level echelon of the NY bankers club is not proof that the higher echelon doesn't exist. Logic 101 really.


Update: Armstrong writes today about the $2.3 trillion missing from Pentagon accounting, the paperwork for which was conveniently destroyed at the Pentagon on 9/11.
hero member
Activity: 518
Merit: 521
And so now it begins:

http://armstrongeconomics.com/2014/01/27/is-it-your-money-are-you-sure/

Quote
HSBC is denying people the right to withdraw their own money and are demand you prove why you even want it. The arrogance of these people knows no bounds. They are all getting full of themselves when it comes to this nonsense of Money Laundering demanding you have to PROVE it is your money, WHERE it came from, and did you PAY TAXES before you can even take it. The real conspiracy is simply government against the people as it has always been since the dawn of recorded history.

This is why people are buying real estate and equities. They are getting off the grid and away from the big banks that seem to be losing their mind. This is one time you may be better off with a real local bank and away from any of the big money center banks that trade with other people’s money. They will lose their shirt and your pants after 2015.75.
hero member
Activity: 518
Merit: 521
Email sent to Martin Armstrong.

Quote
Subject: Martin have you considered the entropic force

Martin writes more recently about there being no top-down global conspiracy, rather only manipulation of markets:

http://armstrongeconomics.com/2014/01/27/bank-manipulations-coming-to-an-end/


Martin may be missing the entropic force and explanation. I go in depth into that in the following links.

https://bitcointalksearch.org/topic/m.4790440
(see prior two pages of the above linked post and thread)

http://unheresy.com/The%20Universe.html#Entropic_derivation


The manipulations he speaks of are irrelevant to whether there is a global conspiracy. The entire system can crash and the global conspiracy towards a one world currency can still be intact.

Martin, the smartest globalists know very well that the crash is coming. In fact they want it and helped to engineer it, because they want the nation-state (governance and central banking) model to be discredited so they can usher in a Brussels-centered world government and currency model.

They specifically encouraged and enabled these manipulations that you speak of, because they knew it would destroy the sovereignty of nations.

Either you are a tool of them, disingenuous, or not as smart as I think you are. Which is it?

With all due respect, I hope this finally switches on the light bulb in your head.

The technologically unemployed, created by decades of top-down socialism misdirecting human capital, will demand a top-down solution since they can't adjust and compete in the coming computer-automation Knowledge Age. This is how the entropic force relates. The larger mass will double down on a more centralized solution and more economies-of-scale for debt, while the Knowledge Age people will break away into an anonymous cryptocurrency to break free from that dying mass.
hero member
Activity: 518
Merit: 521
I am cheerful because although we face some seriously bad events coming due to the crash of socialism, there are also enormous opportunities. For example, whoever invests in the right altcoin that fixes the problems with Bitcoin which I have enumerated in my archives is probably going to become incredibly wealthy.

Although the outcomes of the individual human (or other) actors in the system are chaotic, a (closed thermodynamic) system has patterns due to stochastic reproducibility, e.g. we know the atoms of a gas distribute throughout a volume because this maximizes entropy and do not stay clumped without some external force or barrier.

If we study human history, we see repeating economic and political patterns. I posit this might be due to the timing patterns in the lifespans of humans, e.g. the 26 year cycle seems to correspond with the approximate mean age of establishing one's own family.

Armstrong has identified a repeating 78 year (3 x 26 years) real estate crash cycle. This appears to coincide with a technological unemployment cycle. Following are links on that (all written by me except the one on Armstrong's website):

http://esr.ibiblio.org/?p=4824&cpage=1#comment-395999
(above link contains a textual table of this cycle since 1600s at the bottom of the linked post)

http://armstrongeconomics.com/models/
(above link is Armstrong's summary about his model, note he spent $10 million just to collect the ancient coins to construct the silver price chart on the Roman empire)

http://www.coolpage.com/commentary/economic/shelby/Housing%20Recovery%20Illusion.html
(above link contains many graphical charts)


http://esr.ibiblio.org/?p=4867&cpage=1#comment-397370
http://esr.ibiblio.org/?p=4867&cpage=1#comment-397444
http://esr.ibiblio.org/?p=4946&cpage=1#comment-403068


The four seasons cycle:

http://goldwetrust.up-with.com/t9p570-inflation-or-deflation#4735



Climate scientist’s defamation suit allowed to go forward
...

Now the new judge has denied the SLAPP attempt yet again...


http://arstechnica.com/science/2014/01/climate-scientists-defamation-suit-allowed-to-go-forward/
-----------------------------------------------------------------------------------------------------

I predict no scientist in the future will be called a fraud, because they will not exist. Ever. According to the law of "lawsuit pressure".

Right on time as expected by Armstrong's model. Another reason we need anonymity. The bankrupt socialism will refuse to just die and will instead eat itself like a cancer and destroy everything it can tax, confiscate, regulate, and legislate.
hero member
Activity: 518
Merit: 521
Anonymint's pet theory that a good cull is what's needed for humans to evolve.

Vindictive noise is the antithesis of objective, rational inquiry.

How we get from the logical upthread point about top-down (socialism) induced failure, via destroying chance and reducing entropy, to that blabbering slanderous accusation can only be explained by illogical emotions.

He is so clueless that he doesn't realize that the failure induced top-down (lower entropy) by socialism is what is destroying fitness and is the antithesis of the bottom-up (higher entropy), anarchistic free market I would prefer to obtain by eliminating the power vacuum of taxation.

The higher entropy of continuous localized failures (e.g. private bank runs in the 1800s) is much more fit, due to continuous adaptation, than the much reduced entropy of top-down delayed adaptation, e.g. with a central bank backstop that has enabled debt to grow to 200 year highs as admitted by the IMF this month.




http://armstrongeconomics.com/2014/01/23/ukraine-the-revolution-in-full-motion/

Quote
One primary difference between former Communist countries and the West is critical to understand. BECAUSE they have no social nets, the people do not trust nor rely upon government. In the West, people depend on government for everything. In Ukraine, they can grow their own food and family units still take care of all members. In the West, the downside of socialism is that the children have not planned to take care of parents for that was the State’s job. Socialism has endorsed the me-me-me culture and that is the weakness for there is no self-reliance and society has been totally dependent. The risks in the West are actually far greater.



Regarding my point about the worse, delayed failure of top-down (lower entropy) versus the continuously annealed micro-failure adaptation of bottom-up (higher entropy):

http://armstrongeconomics.com/2014/01/27/iceland-let-its-banks-fail-has-proved-to-be-the-best-decision-ever/

Quote
While in the USA and Europe, this idea that banks have to be saved at all expense is quite absurd. Iceland let its banks fail and the collapse ended the process and a rebirth began. The same is true during the Great Depression. The collapse in the US share market was just 34 months and it was over. The more government gets involved, the worst it all becomes. Just look at Japan and you will see the price of government intervention. Europe is facing the same mistake and the EU Parliament is destroying the economy of Europe and there is no hope of preventing a massive depression just as we say in Japan. In the end, nobody will trust government again and the damage is profound for it typically lasts for a generation. Others are starting to notice this trend as well, albeit perhaps not articulated so bluntly.

I was predicting that outcome many years ago when Iceland defaulted and refused to pay.

Btw, many people don't know there was a deep depression in the USA in 1920-21; it was over quickly because everything was allowed to fail and the government (and central bank) did not backstop and prevent the failures.
hero member
Activity: 616
Merit: 500
Firstbits.com/1fg4i :)
It seems we're at a deadlock. Perhaps some input from some neutral third-parties could help the discussion move on?
hero member
Activity: 518
Merit: 521
What makes you think evolving machines wouldn't have at least as much entropy and diversity as humans?

When you understand that from my writings, then you will have the "a ha" eureka epiphany. I think I already explained my hypothesis:

In order to increase entropy, there must be more chance, i.e. more cases of failure, not less. Remember the point I made about fitness and there being hypothetically infinite shapes, most of which fail to interlock. Computation is deterministic and not designed to have failure. Once you start building failure into robots' CPUs, then you would have to recreate biology. You simply won't be able to do a better job at maximizing entropy than nature already does, because the system of life will anneal to it. Adaption is all about diverse situations occurring simultaneously, but don't forget all the diverse failures occurring simultaneously also.

I don't see sufficient utility to rewrite what I already wrote in my blog and explained further in this discussion.

The next step would be to develop it formally with some math, formal logic, and write a more comprehensive paper or book. I don't have the time right now.

Thanks for the persistence and (your) cordial discussion. May I opt out now?

P.S. The big takeaway is how most people view life as deterministic and can't adjust their perspective to the reality that life is all about widespread failure as trial-and-error to attain chaotic fitness, thus resilience and adaptation.

This is why socialists think they can improve things as they fail to understand that inequality (i.e. diversity) and failure are integral with survival and fitness.
hero member
Activity: 616
Merit: 500
Firstbits.com/1fg4i :)
hero member
Activity: 518
Merit: 521
The entire discussion with you, you continue to ignore or miss the point that if it were possible to calculate the probabilities in real-time such that there would be some superior outcome locally or globally (meaning one outcome is objectively better than others along some metric, e.g. survival of robots and extinction or enslavement of humans), then that local or global result would collapse the present+future into the past and everything would be known within its context. Thus either it is the global context and nothing exists [no (friction of) movement of time, present+future == past, i.e. no mass in the universe], or that local result collapses its context such that present+future == past, and thus must be disconnected from (unaffected by) the unknown global probabilities. (P.S. Current science has no clue what mass really is, but I have explained what it must be.)

I pointed out to you already that speed of computation or transmission does not transfer the entropy (because if it did, then the entropy would be destroyed and the present+future would collapse into the past). Cripes, man, have you ever tried to manage something happening in another location over the phone or webcam? In fact, each local situation is dynamic and interacting with the diversity of the actors in that local situation. Even if you could virtually transmit yourself (or the master robot) to each location simultaneously, then if one entity is doing all the interaction you no longer have diverse entropy. If instead your robots are autonomous and decentralized, the speed of their computation will not offset the fact that their input entropy is more limited than the diversity of human entropy, because each human is unique along the entire timeline of DNA history through environmental development in the womb. You see it is actually the mass of the universe and the zillion incremental interactions of chance over the movement of time that creates that entropy. You can't speed it up without collapsing some of the probabilities into each other and reducing the entropy -- study the equation for entropy (Shannon entropy or any other form of the equation, such as the thermodynamic or biological form). In order to increase entropy, there must be more chance, i.e. more cases of failure, not less. Remember the point I made about fitness and there being hypothetically infinite shapes, most of which fail to interlock. Computation is deterministic and not designed to have failure. Once you start building failure into robots' CPUs, then you would have to recreate biology. You simply won't be able to do a better job at maximizing entropy than nature already does, because the system of life will anneal to it. Adaption is all about diverse situations occurring simultaneously, but don't forget all the diverse failures occurring simultaneously also.
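To make the arithmetic of that last point explicit, here is a tiny illustrative Python sketch (my own toy numbers, nothing more) of Shannon entropy H = -sum(p * log2 p): spreading probability uniformly over more independent outcomes raises H toward log2(n), while collapsing (merging) outcomes into one aggregated result can only lower it. The Gibbs (thermodynamic) form behaves the same way, since it shares the -sum(p log p) structure.

Code:
from math import log2

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) in bits; zero-probability outcomes contribute 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

# uniform spread over many diverse outcomes maximizes entropy: H = log2(n)
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits
print(shannon_entropy([1.0]))                      # 0.0 bits: one certain outcome

# collapsing two outcomes into one (summing their probabilities)
# can only reduce the entropy of the distribution, never increase it
p = [0.4, 0.3, 0.2, 0.1]
merged = [0.4, 0.3, 0.2 + 0.1]
print(shannon_entropy(p), ">", shannon_entropy(merged))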

I mean the entire concept has flown right over your head and you continue repeating the same nonsense babble.

You just can't wrap your mind around what entropy and diversity are, and why speed of computation and transmission of data has nothing to do with it.

Kurzweil is apparently similarly intellectually handicapped.

I would be very much interested if someone would put this post in Kurzweil's face and challenge him to debate me.
hero member
Activity: 616
Merit: 500
Firstbits.com/1fg4i :)
There is no "best" outcome. There are only outcomes. If an organism is decentralized, then it can't run a global simulation (the data can't be brought to a centralized computation, i.e. consciousness, in real-time without collapsing the present and past into a one), thus it is still only outcomes not a overall "best" outcome. I will not repeat this again (more than 5 times already I have written it), even though you continue ignoring it.
The time it takes to send information to the other side of the world is pretty much instant when compared to biological evolution timescales.

And you keep insisting on "best"; I said before, it doesn't need to be the best, just better than humans.


If a decentralized AI can aid adaption, then it can be incorporated into the human brain so we become Cyborgs, e.g. Google is my external memory and the integration will be improving soon as there is research on direct tapping into the brain. Yet that isn't even the salient point. The human brain is more unique because (collectively) it has more entropy. I explained why in my blog article Information is Alive!.
Sure, but assimilation isn't the only route that will be followed. Standalone AIs are very likely.


So we can't get more entropy into the AI than the human brain already has, because that entropy isn't derived from speed or power of computation, but rather from the zillions of tiny localized annealed decentralized steps of distributed life, including environmental development in the womb.
The human brain isn't the ultimate step; there is always room for improvement.

Kurzweil doesn't understand that computational power has nothing to do with the entropy of the system of life. To the extent that it becomes a competitive factor, then it is integrated into the decentralized, distributed system of life.
I'm not saying a post-singularity AI wouldn't be alive.


My point essentially is: huge computational power + self-improvement ability + natural selection = a life form beyond human control and understanding.
hero member
Activity: 518
Merit: 521
There is no "best" outcome. There are only outcomes. If an organism is decentralized, then it can't run a global simulation (the data can't be brought to a centralized computation, i.e. consciousness, in real-time without collapsing the present and past into a one), thus it is still only outcomes not a overall "best" outcome. I will not repeat this again (more than 5 times already I have written it), even though you continue ignoring it.

If a decentralized AI can aid adaption, then it can be incorporated into the human brain so we become Cyborgs, e.g. Google is my external memory and the integration will be improving soon as there is research on direct tapping into the brain. Yet that isn't even the salient point. The human brain is more unique because (collectively) it has more entropy. I explained why in my blog article Information is Alive!.

So we can't get more entropy into the AI than the human brain already has, because that entropy isn't derived from speed or power of computation, but rather from the zillions of tiny localized annealed decentralized steps of distributed life, including environmental development in the womb.

Kurzweil doesn't understand that computational power has nothing to do with the entropy of the system of life. To the extent that it becomes a competitive factor, then it is integrated into the decentralized, distributed system of life.
hero member
Activity: 616
Merit: 500
Firstbits.com/1fg4i :)


...

...Chance and diversity require each other. It is a very interesting conceptual discovery. I would expect some other scientists or philosophers had this discovery, but I am not aware of who they are.

That is borderline tautological (in the sense that it is self-evident). To have infinite possibilities but having the odds always lead to just one of them is pretty much the same as having only one possible outcome but the odds being the same for any possible outcome.

That doesn't make any sense. The orthogonal probabilities in a free market mean many different things are possible and occurring. If there were only one, the entropy would be minimized, not maximized. Even the Second Law of Thermodynamics says the entropy of the universe trends to maximum.
I was just pushing it to the opposite extreme, where it is easier to see the interdependence of diversity and chance.

But still, nothing in that prevents machines from having access to both enough diversity and enough random values.

Sigh. Please reread what I wrote in reply to you upthread. For example, to be diverse requires that they can't think as one, and that the information doesn't move in real-time to a centralized consciousness (decision making) for a groupwise superiority over humans.
Who said anything about a centralized consciousness? A decentralized superorganism of global scale would have much better chances of surviving, not to mention of harvesting the fruits of diversity and chance.


Sure, chaotic systems are hard to manipulate towards any specific goal in the long term; but machines can do it better than humans. Perfection is unattainable, i agree; but machines don't need to be perfect, they just need to be better than humans.

I already explained upthread that there is no measurement of "better"; there exist only different subjective choices that different actors make.
You only know for sure if an action is good or bad after it happens; but by running millions of simulations beforehand you can significantly increase the odds that the action you choose will be closer to the best one possible most of the time.

And I don't see how anything short of extinction of the human species could prevent humanity from eventually giving birth to a self-improving AI that improves itself better than humans could. Such a life-form would be more adaptable than anything Earth has ever seen; capable of trying more simultaneous evolutive routes than the whole organic biome combined.

Because you still don't admit that there is no such metric for "more adaptable" or "better".
Once the process becomes self-sustaining, "life will find a way"; I don't see humanity steering away enough to avoid giving the snowball its initial kick; the probabilistic cloud that is humanity is falling towards the singularity, and just like a star falling into a black hole, even though we can't predict the exact position of each subatomic particle that composes it, we can be sure that at least a few of those will fall all the way.

Logic provides the evolutive pressure towards self-perpetuating patterns; those that continue to exist are obviously better than those that ceased to, even if, before the results were in, it might not have been obvious which ones were better.


Everything is chance and local gradients towards local choices about optimums. There is no global, omniscient God, and if there were, then for that God the past and present are already 100% known, i.e. for God there is no more chance nor probabilities less than 1.
Like I said many times, perfection isn't necessary, as physical biological evolution has proved so many times.

Remember, entropy is maximized when the individual orthogonal probabilities are minimized, i.e. spread among the greatest possible number of diverse orthogonal outcomes, not when there is only one result.
Gray goo overtaking the globe and beyond sounds pretty entropic to me...