It seems we're at a deadlock. Perhaps input from some neutral third parties could help the discussion move on?
Perhaps I can help find consensus here. In my opinion, TiagoTiago, you and AnonyMint are talking past each other.
You are arguing that through innovation we will create a being superior to humans, and that this will lead to human extinction via a tech singularity where computers vastly outthink humans.
AnonyMint is arguing, as per his blog post Information is Alive, that for computers to match or exceed humanity they would essentially need to be alive, i.e. reproducing and contributing to the environment the way humans do.
Is it possible to create AI that is better than humans? By better, I mean AI that exceeds the creativity/potential of all of humanity.
Sure, it's possible, but as argued by AnonyMint, such an AI would have to be dynamic, alive, and variable, with a chance of failure, and would thus not be universally superior.
The thing is, for a virtual or decentralized synthetic agent, the cost of failure can be much smaller than for organic species. And since they wouldn't be restricted by DNA, they would be able to evolve much faster.
So what we are really talking about here is whether the creation of sentient AI will lead to a race of AI that results in the inevitable extinction of humans. The answer to this is a definite no.
The dynamics of inter-species competition depend on the degree to which each species depends on shared limited resources. A simple model of pure competition between two species is the Lotka-Volterra model of direct competition.
Even with pure competition, where species A is always and in every way bad for species B and vice versa, the outcome is not necessarily extinction. It depends on the competition coefficient (which is essentially a measure of how much the two species occupy the same niche).
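To make the competition coefficient concrete, here is a minimal Python sketch of the standard two-species Lotka-Volterra competition model. All parameter values are made up purely for illustration; nothing here is calibrated to humans or AI.

```python
import numpy as np
from scipy.integrate import odeint

def competition(n, t, r1, r2, K1, K2, a12, a21):
    # Classic two-species Lotka-Volterra competition:
    #   dN1/dt = r1*N1*(1 - (N1 + a12*N2)/K1)
    #   dN2/dt = r2*N2*(1 - (N2 + a21*N1)/K2)
    # a12 and a21 are the competition coefficients (niche overlap).
    n1, n2 = n
    dn1 = r1 * n1 * (1 - (n1 + a12 * n2) / K1)
    dn2 = r2 * n2 * (1 - (n2 + a21 * n1) / K2)
    return [dn1, dn2]

t = np.linspace(0, 200, 2000)

# Weak niche overlap (a12, a21 < 1): stable coexistence of both species.
weak = odeint(competition, [10, 10], t,
              args=(1.0, 1.0, 100.0, 100.0, 0.5, 0.5))

# Strong niche overlap (a12, a21 > 1): the initially larger species
# excludes the other (competitive exclusion, i.e. extinction).
strong = odeint(competition, [10, 12], t,
                args=(1.0, 1.0, 100.0, 100.0, 1.5, 1.5))

print("coexistence:", weak[-1])    # both populations settle near 66.7
print("exclusion:  ", strong[-1])  # species 1 driven toward zero
```

With weak niche overlap, both species settle into stable coexistence; with strong overlap, one excludes the other. In this model it is that threshold, not raw superiority, that decides extinction.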
Sure, it is possible that the AI would cooperate with or otherwise be beneficial to humans. But the only pressure in that direction is whatever humans do; and we would have only an extremely short window of time to influence it before it gets beyond our reach.
Should we invent AI, or even a society of AI, that is collectively vastly superior to human society, we would only be in danger of extinction if such robots were exactly like us (eating the same food, wanting the same shelter, etc.; add better endowed and lusting after human women if you want to add insult to injury). Now obviously this would be very hard to do, because we have evolved over a long time and are very, very good at filling our ecological niche. We wiped out the last major contenders, the Neanderthals, despite the fact that they were bigger, stronger, and had bigger brains (there is a very good chance they were individually smarter).
The difference from the Neanderthals is that a post-singularity AI would be self-improving, and would do so on timescales that are practically instant compared to organic evolution or even human technological advancement.
Much more likely is that any AI species would occupy a completely different niche than we do (consuming electricity, living online, non-organic chemistry, etc.). Such an AI society would be in little to no direct competition with humans and would likely be synergistic.
Humans wouldn't take kindly to robots stealing their ore, messing with their electric grid, etc., nor to viruses clogging their cat-tubes. But by then, they would already have advanced so much that we would at most piss them off; and it doesn't sound like a good idea to attract the wrath of a vastly superior entity.
The question in that case is not whether the AI society is collectively superior, but instead whether the combination of humans and AI together is superior to AI alone.
If the combination is better, we would be assimilated; resistance would be futile.
Since the creativity of sentience is enhanced as the sentient population grows, the answer is apparent.
The difference is that a post-singularity AI would be able to increase its effective population much faster than humans, while at the same time improving the efficiency of its previously existing "individuals".
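To put toy numbers on that difference (both doubling times below are pure assumptions for illustration, not estimates), compare compound growth over a decade:

```python
import math

# Toy doubling-time comparison; both figures are assumed for illustration.
human_doubling_years = 25.0   # roughly one human generation
ai_doubling_days = 1.0        # an agent copying itself onto spare hardware

years = 10
human_growth = 2 ** (years / human_doubling_years)               # ~1.3x
ai_growth_log10 = (years * 365 / ai_doubling_days) * math.log10(2)

print(f"human population after {years} years: x{human_growth:.2f}")
print(f"AI copy count after {years} years: ~10^{ai_growth_log10:.0f}")
```

Whatever the real doubling times turn out to be, any replicator that doubles in days rather than decades dominates on these timescales.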
Could humanity wipe itself out by creating some sort of super robot that is both more intelligent (on average) and occupies the exact same ecological niche we do? Sure, we could do it in theory, but it would be very, very hard (much harder than just creating sentient AI). There are far easier ways to wipe out humanity.
The AI could, for example, decide it would be more efficient to convert all the biomass on the planet into fuel, or raze all the forests to build robot factories, or cover the planet with solar panels, etc. Using up all the resources of the planet seems like a very likely niche; humans themselves are already aiming to move up the Kardashev scale in the long term...