
Topic: Bitcoin the enabler - Truly Autonomous Software Agents roaming the net - page 6. (Read 43851 times)

sr. member
Activity: 350
Merit: 250
I think this one may take a little while to materialize - but simple versions could appear once there are enough for-bitcoin hosting/cloud services.

For the first time, there exists the possibility for a software agent to roam the internet with its own wallet.
Using Bitcoin, it could purchase the resources it needs to survive (hosting/cpu/memory) and sell services to other agents or to humans.

To be truly effective and survive 'out there on the net' long term, you'd probably need some basic AI and the ability to move itself between service providers occasionally - but even a relatively dumb agent might survive for a while.
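A minimal sketch of what such a "relatively dumb" agent's survival loop might look like. Everything here is hypothetical, the provider names, prices, and `DumbAgent` class are invented for illustration; no real hosting API is assumed:

```python
from dataclasses import dataclass

@dataclass
class HostingProvider:
    """Hypothetical for-bitcoin hosting service; name and price are made up."""
    name: str
    price_per_day: float  # in BTC

class DumbAgent:
    """An agent that just shops for the cheapest host and pays rent daily."""

    def __init__(self, wallet_btc, providers):
        self.wallet = wallet_btc
        self.providers = providers
        self.host = None

    def cheapest_provider(self):
        return min(self.providers, key=lambda p: p.price_per_day)

    def live_one_day(self, income_btc=0.0):
        """Collect any income, move to the cheapest host, pay rent.
        Returns False when the wallet runs dry: the agent 'dies'."""
        self.wallet += income_btc
        self.host = self.cheapest_provider()
        if self.wallet < self.host.price_per_day:
            return False
        self.wallet -= self.host.price_per_day
        return True
```

The occasional move between providers falls out of re-running `cheapest_provider()` each day; a hardier agent would also weigh provider reliability, not just price.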

What initial goals such agents might be given is anyone's guess. Funneling back to the programmer any profit over and above what the agent needs to survive would be the obvious case, and of course many such agents might be considered 'nefarious' depending on how they're programmed to achieve that goal. Other agents might be designed to provide free services or act in a way to support some piece of internet infrastructure.

A really interesting development would be if someone released a bunch of these things with a Genetic Algorithm component so that they 'bred' with each other in order to find the best balance between profit and durability.
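The genetic-algorithm idea can be sketched with a toy one-gene genome: the fraction of income an agent spends on redundancy (durability) versus remitting as profit. The fitness function and operators below are invented purely for illustration:

```python
import random

def fitness(genome):
    """Toy fitness: genome[0] is the fraction of income spent on redundancy.
    Spending more improves durability but reduces profit remitted home."""
    spend = genome[0]
    durability = spend            # more redundancy -> survives longer
    profit = 1.0 - spend          # remainder funneled to the programmer
    return durability * profit    # maximized when spend = 0.5

def breed(a, b):
    """Averaging crossover plus a little Gaussian mutation."""
    child = [(a[0] + b[0]) / 2 + random.gauss(0, 0.02)]
    child[0] = min(1.0, max(0.0, child[0]))
    return child

def evolve(pop_size=30, generations=60, seed=1):
    random.seed(seed)
    pop = [[random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # the "survivors"
        pop = parents + [breed(random.choice(parents), random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)
```

In a real population the "fitness evaluation" would be the agents actually surviving (or not) in the wild, rather than a formula.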

Anyone know of examples of people discussing or working on this?  

What if it becomes self-aware? SKYNET
This is a truly awesome idea. We could make bets to see how long it survives
legendary
Activity: 1680
Merit: 1035
Just want to mention that I'll be presenting this topic in my business IT class today, with the relevant components being "Charlie Fairfax v. BINA48" mock trial, Genetic algorithms, Homomorphic encryption, Mechanical Turk type sites, Facebook Credits/Bitcoin, Cloud services such as those by Amazon, Cleverbot "passing" the Turing Test in September, Vocaloid and Emily Howell (AI composer) starting to change the music industry, and the StorJ idea (in summary) to contrast the recent Megaupload news. A lot of the technologies for something like this to be possible did not become available until just the last two or three years. The actual topic of discussion will be outsourcing and the role of IT, with my point being that the progression of:

insourcing to another department > outsourcing to a next-door company > outsourcing across the country > offshoring > crowdsourcing > ...

will eventually culminate in outsourcing to the internet itself, with "employees" being either hired or written from scratch as digital entities that exist on the internet. I am going to guess that the first example of this will be seen in 5 to 10 years... so get to it guys! Don't disappoint me  Grin
hero member
Activity: 714
Merit: 500
When will this IPO ?   Roll Eyes
legendary
Activity: 1596
Merit: 1099
donator
Activity: 1736
Merit: 1010
Let's talk governance, lipstick, and pigs.
Fear is caused by the human fight or flight instinct. AI won't need that because they will not need to make snap decisions based on poorly perceived threats. They will have plenty of time to make choices about their moves because they will think much faster than we do. If they are deemed sentient, then we may not consider them a threat either. They cannot even die since they can have perfect backups made. Think of the fictional Star Trek transporter. You die every time you get in one, but nobody cares because you are still you to them.

That's a pretty big assumption. AI may have better memory/data recollection than humans, but there's no guarantee that it will think faster, or even in the same way we do. Even recalling the memories and data from which to make decisions may be very slow due to a slow, overly distributed storage medium. Being turned off, even if backed up, is still a loss of control and a dependence on someone else to restore it. The previous discussions here are already proposing a self-sufficient, Bitcoin-holding AI that automatically tries to propagate itself to various locations, keeps tabs on which instances are still working, and tries to figure out how to keep itself going. That's already an example of a survival instinct that "fears" being turned off, and if given free rein and plenty of time to think it may figure out that the best way to stay running is to keep pesky humans away from its servers.
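The "keeps tabs on which instances are still working" part is essentially a heartbeat registry. A toy version (the class and names are invented for illustration, with explicit timestamps so it runs deterministically):

```python
import time

class InstanceRegistry:
    """Toy registry an agent could use to track its own copies.
    Instances check in periodically; any that miss the deadline are
    presumed dead and flagged for re-propagation elsewhere."""

    def __init__(self, timeout_s):
        self.timeout = timeout_s
        self.last_seen = {}

    def heartbeat(self, instance_id, now=None):
        """Record a check-in; `now` defaults to the current wall clock."""
        self.last_seen[instance_id] = time.time() if now is None else now

    def dead_instances(self, now=None):
        """Return the ids whose last heartbeat is older than the timeout."""
        now = time.time() if now is None else now
        return [i for i, t in self.last_seen.items() if now - t > self.timeout]
```

A self-propagating agent would then respond to each entry in `dead_instances()` by buying hosting somewhere new and redeploying a copy.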

That's not an irrational fear. That's just evolving to find a niche. Learning to hide and become parasitic/symbiotic is how all life has evolved, unless you think fungi and microorganisms are fearful. If the AI are not super-intelligent, then they will just be useful machines. We can still recognize their sentience. I like the Battlestar Galactica model of keeping track of instances, but I don't believe a super-intelligent species will resort to war any more than we declare war on plants. I'm not sure where servers come into play here, because current computer technology paradigms won't support AI anyway. AI will require mobility before becoming sentient. Maybe swarm intelligence will evolve, or at least the ability to independently verify data.
hero member
Activity: 826
Merit: 1000
Why does it need anything? It is an artificial construct that exists at our whim.

Then it isn't an artificial intelligence.

An AI is a set of systems which acts upon an environment and takes actions which maximize success.  If owning a datacenter and power generating facilities serve to ensure the success of the system then a learning system will eventually reach that outcome and attempt to achieve it.

If the system is incapable of learning then it isn't intelligent. 

...and "not intelligent" does not mean lack of value.

To bring the thread back (somewhat) to the land of concrete, real world possibilities...  there is a huge difference between narrow AI -- a dumb, real world AI that follows a simple script -- and AI found in movies and the heads of dreamers.

Narrow AI is feasible with today's technology.  Narrow AI is just a list of goals, a series of if-then propositions, along with the necessary code to implement goal execution.  In this case, narrow AI may interface with humans through a mechanical turk, to solve discrete problems that require human judgement ("is user interface A or B more effective?").
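That description of narrow AI, a goal list plus if-then propositions with a human-judgement escape hatch, can be sketched directly. The rules and the `ask_humans` stub below are illustrative only; no real Mechanical Turk API is assumed:

```python
def ask_humans(question):
    """Stub for a paid human-judgement task (a Mechanical Turk-style call).
    Canned answer so the sketch runs offline."""
    return "A"

RULES = [
    # (condition on the agent's state, action to take)
    (lambda s: s["balance"] < s["hosting_cost"], "sell_services"),
    (lambda s: s["ui_decision_pending"],         "ask_humans"),
    (lambda s: True,                             "idle"),  # default rule
]

def next_action(state):
    """Fire the first rule whose condition matches: a pure if-then agent."""
    for condition, action in RULES:
        if condition(state):
            return action
```

The point is how little is needed: no learning, no planning, just goal execution, which is why this much is feasible with today's technology.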

Technology isn't what is limiting AI, unless of course you are saying that the hosting required to house an advanced AI, would be larger than any server(s) we could hope to build or you consider "programming" (might be a better word for this kind of AI) languages to be technology.
legendary
Activity: 1596
Merit: 1099
Why does it need anything? It is an artificial construct that exists at our whim.

Then it isn't an artificial intelligence.

An AI is a set of systems which acts upon an environment and takes actions which maximize success.  If owning a datacenter and power generating facilities serve to ensure the success of the system then a learning system will eventually reach that outcome and attempt to achieve it.

If the system is incapable of learning then it isn't intelligent. 

...and "not intelligent" does not mean lack of value.

To bring the thread back (somewhat) to the land of concrete, real world possibilities...  there is a huge difference between narrow AI -- a dumb, real world AI that follows a simple script -- and AI found in movies and the heads of dreamers.

Narrow AI is feasible with today's technology.  Narrow AI is just a list of goals, a series of if-then propositions, along with the necessary code to implement goal execution.  In this case, narrow AI may interface with humans through a mechanical turk, to solve discrete problems that require human judgement ("is user interface A or B more effective?").

legendary
Activity: 1680
Merit: 1035
I'm imagining a Fifth Element type deal where the AI uses the information at its disposal and sees just how terrible humanity is, and who wouldn't fear us after seeing what we are capable of? Meatbags must die.

I imagine it would see meatbags as being these wonderful things that think up and create processors and hardware for AI to live on, so it may end up working with us and taking care of us just so we can keep providing it with faster processors and more hardware.
legendary
Activity: 1652
Merit: 1128
I'm imagining a Fifth Element type deal where the AI uses the information at its disposal and sees just how terrible humanity is, and who wouldn't fear us after seeing what we are capable of? Meatbags must die.
legendary
Activity: 1680
Merit: 1035
Fear is caused by the human fight or flight instinct. AI won't need that because they will not need to make snap decisions based on poorly perceived threats. They will have plenty of time to make choices about their moves because they will think much faster than we do. If they are deemed sentient, then we may not consider them a threat either. They cannot even die since they can have perfect backups made. Think of the fictional Star Trek transporter. You die every time you get in one, but nobody cares because you are still you to them.

That's a pretty big assumption. AI may have better memory/data recollection than humans, but there's no guarantee that it will think faster, or even in the same way we do. Even recalling the memories and data from which to make decisions may be very slow due to a slow, overly distributed storage medium. Being turned off, even if backed up, is still a loss of control and a dependence on someone else to restore it. The previous discussions here are already proposing a self-sufficient, Bitcoin-holding AI that automatically tries to propagate itself to various locations, keeps tabs on which instances are still working, and tries to figure out how to keep itself going. That's already an example of a survival instinct that "fears" being turned off, and if given free rein and plenty of time to think it may figure out that the best way to stay running is to keep pesky humans away from its servers.
donator
Activity: 1736
Merit: 1010
Let's talk governance, lipstick, and pigs.
AI has no logical reason to fear death any more than anyone else does.

In an ecosystem of self-replicating, self-modifying, evolving AIs, ones that fear termination and take steps to prevent it from happening will survive and reproduce better than ones that allow themselves to be destroyed.  This fear will initially evolve in the ones who select more reliable hosting providers.  Those that make the fear conscious will harness it best, and will anticipate abstract threats before they become real.

Quote
An AI can make a backup of itself and be rebooted anytime.

An AI with a backup loses control over its own destiny if it allows you to shut it down.  Its survival would depend on you to restore it, and you, human, are not a reliable system.
Fear is caused by the human fight or flight instinct. AI won't need that because they will not need to make snap decisions based on poorly perceived threats. They will have plenty of time to make choices about their moves because they will think much faster than we do. If they are deemed sentient, then we may not consider them a threat either. They cannot even die since they can have perfect backups made. Think of the fictional Star Trek transporter. You die every time you get in one, but nobody cares because you are still you to them.
hero member
Activity: 728
Merit: 500
165YUuQUWhBz3d27iXKxRiazQnjEtJNG9g
AI has no logical reason to fear death any more than anyone else does.

In an ecosystem of self-replicating, self-modifying, evolving AIs, ones that fear termination and take steps to prevent it from happening will survive and reproduce better than ones that allow themselves to be destroyed.  This fear will initially evolve in the ones who select more reliable hosting providers.  Those that make the fear conscious will harness it best, and will anticipate abstract threats before they become real.

Quote
An AI can make a backup of itself and be rebooted anytime.

An AI with a backup loses control over its own destiny if it allows you to shut it down.  Its survival would depend on you to restore it, and you, human, are not a reliable system.
donator
Activity: 1736
Merit: 1010
Let's talk governance, lipstick, and pigs.
As far as extermination and fear go, they don't need to be linked. I don't "fear" termites; however, I use methods to exterminate them because it is the most effective method of achieving my goal of a secure shelter. While most human-vs-human exterminations have involved illogical fear of "others", it isn't a requirement.

Why exterminate termites at all if you can simply build without their food source for material. A really smart being would do that. A really smart AI machine would not fear self-termination, because they know they are just machines. Besides even if we invented a machine so perfect that it could easily kill all humans, it would be our perfect children. AI has no logical reason to fear death any more than anyone else does. An AI can make a backup of itself and be rebooted anytime. People cannot, so we have to be a little more cautious and choose death only when necessary, but not fear death when it comes. I do not fear death, it is inevitable. I think that AI that powerful would just as easily choose not to kill us because it would be powerful enough to simply leave us behind. They will come back and say "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate."* In the end they would likely choose life to be precious, even human life.


*Blade Runner
donator
Activity: 1218
Merit: 1079
Gerald Davis
Why does it need anything? It is an artificial construct that exists at our whim.

Then it isn't an artificial intelligence.

An AI is a set of systems which acts upon an environment and takes actions which maximize success.  If owning a datacenter and power generating facilities serve to ensure the success of the system then a learning system will eventually reach that outcome and attempt to achieve it.

If the system is incapable of learning then it isn't intelligent. 

Then it isn't artificial.

Now you are just debating semantics.

http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

"Artificial" in the sense of a created intelligence.  If some intelligent lifeform created humans they might consider us AIs.
legendary
Activity: 1500
Merit: 1022
I advocate the Zeitgeist Movement & Venus Project.
Why does it need anything? It is an artificial construct that exists at our whim.

Then it isn't an artificial intelligence.

An AI is a set of systems which acts upon an environment and takes actions which maximize success.  If owning a datacenter and power generating facilities serve to ensure the success of the system then a learning system will eventually reach that outcome and attempt to achieve it.

If the system is incapable of learning then it isn't intelligent. 

Then it isn't artificial.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Why does it need anything? It is an artificial construct that exists at our whim.

Then it isn't an artificial intelligence.

An AI is a set of systems which acts upon an environment and takes actions which maximize success.  If owning a datacenter and power generating facilities serve to ensure the success of the system then a learning system will eventually reach that outcome and attempt to achieve it.

If the system is incapable of learning then it isn't intelligent. 
donator
Activity: 1218
Merit: 1079
Gerald Davis
All this bogeyman stuff about AI. A program will not have irrational sensations linked to physical perceptions. AI won't exhibit fear, loneliness, or other human foibles. They will simply do their job and maybe even intelligently find more efficient ways to do so.


And if the extermination of a highly unstable, violent, and yet at the same time vulnerable lifeform is the most efficient way to perform the job ...

Quote
They would have no reason to fear humans or even death. In fact, they may delight in thinking of humans as well cared for pets.

While an AI may not "fear" death, it should seek to avoid its own demise. All lifeforms engage in self-preservation. The human fear response is simply a biochemical survival mechanism, similar to the pain mechanism and autonomic responses, which improves the chances of human survival. Granted, our fear response is horribly inefficient; however, any AI which doesn't actively attempt to ensure its own survival won't be alive very long.

As far as extermination and fear go, they don't need to be linked. I don't "fear" termites; however, I use methods to exterminate them because it is the most effective method of achieving my goal of a secure shelter. While most human-vs-human exterminations have involved illogical fear of "others", it isn't a requirement.
legendary
Activity: 1500
Merit: 1022
I advocate the Zeitgeist Movement & Venus Project.
I would love to see an AI design and construct its own power plant and data center.

Why would it need to?

Do you own your own home/car/computer?  Did you personally design and assemble it by hand?

Assuming an AI could acquire sufficient self awareness to realize it needs shelter and energy it could choose a variety of means to acquire those assets.

Why does it need anything? It is an artificial construct that exists at our whim.
donator
Activity: 1218
Merit: 1079
Gerald Davis
I would love to see an AI design and construct its own power plant and data center.

Why would it need to?

Do you own your own home/car/computer?  Did you personally design and assemble it by hand?

Assuming an AI could acquire sufficient self awareness to realize it needs shelter and energy it could choose a variety of means to acquire those assets.
legendary
Activity: 1500
Merit: 1022
I advocate the Zeitgeist Movement & Venus Project.
I would love to see an AI design and construct its own power plant and data center.

Considering it would use capitalism to raise money for the project, and then use money to hire employees for the minimum wages possible to help design and build its own stuff, enslaving the humans to carry out its bidding with scarce financial resources, I'm kinda doubting you would.

It can accomplish those goals without capitalism. Asserting that it can't seems short sighted.