Topic: Machines and money - page 8

hero member
Activity: 770
Merit: 629
March 14, 2015, 02:33:49 PM

We don't need AI, just a centralized (yet open source) big computer that calculates global Earth resources and decides what can or cannot be used depending on the risks of creating poverty or ecological damage, and not on the risks of losing money in a business or on how much profit can be made by doing so, which is what we have now.

This won't work for pretty obvious reasons. No computer can anticipate what human desires, preferences, and propensities will be tomorrow. Today we love red cars, tomorrow we prefer hiking. Actually, Commies tried to do something along those lines in the '70s, but due to their technological backwardness, their attempt failed miserably.

Indeed, it sounds like the absolute collectivist orgasm :)

Things to ask yourself if you consider the Big Daddy Computer:

1) Why wouldn't that computer converge on the Final Solution: the extermination of humanity? After all, if there are no humans any more, there is no ecological damage, no resources are exhausted, there is no poverty, and there is no suffering or unhappiness. Sounds like an ideal solution to the cost function, no?

2) Why wouldn't that computer converge on the following solution: all people who don't have a birthday in January become the slaves of people who do? It would essentially divide the demand for luxuries by 12, thereby limiting resource use, while nevertheless keeping the economic development that a limited demand for sophisticated products requires. Poverty would be limited, as slaves are fed by their masters, and there would be no problem of unemployment (slaves don't need jobs).

....

There are so many "solutions" to said ideal programme....
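
To make the objection concrete, here is a toy sketch in Python (every term and weight below is a hypothetical stand-in, nobody's actual proposal): a planner objective that only penalizes poverty, ecological damage and resource use, with no term valuing human welfare, has its global minimum at an empty planet.

Code:
# Toy illustration of objective misspecification. The cost only counts
# poverty, ecological damage and resource use; nothing rewards the
# existence or happiness of humans, so "zero humans" wins.

def planner_cost(population: int, consumption_per_head: float) -> float:
    poverty = max(0.0, population * (1.0 - consumption_per_head))  # unmet needs
    ecological_damage = 0.5 * population * consumption_per_head    # footprint
    resource_use = population * consumption_per_head               # depletion
    return poverty + ecological_damage + resource_use

# Sweep a few population sizes at subsistence consumption:
candidates = [(p, planner_cost(p, 1.0)) for p in (0, 1_000, 1_000_000)]
print(min(candidates, key=lambda c: c[1]))  # -> (0, 0.0): extermination is "optimal"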

hero member
Activity: 770
Merit: 629
March 14, 2015, 02:26:20 PM
You forgot to mention yet another thing. Namely, that it is we who created those machines. Thus we would necessarily know them (in fact, even better than we know our fellow humans and all the chemistry within us).

No, we knew the first versions of them. That is a bit like saying we knew the DNA of the bacteria we left on a remote planet. When we come back 600 million years later, there are 7-eyed, 5-legged creatures chasing one another with acid sprays and sound guns. Nevertheless, we knew perfectly well what bacteria we had left on the otherwise sterile planet when we left!

We are of course talking about machines that were created by machines that were created by machines, each generation much smarter than ourselves. So no, we don't know how they work. No, we don't know their design principles. No, we don't understand the software on which they run.

It is a bit like knowing the object code, but not the documented source code, of an application. Of course, you understand every instruction (that is: you understand what every instruction does, microscopically). But you have no idea what the program is doing, or why.
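
A minimal illustration in Python, using only the standard dis module (the function itself is a made-up example): every bytecode instruction in the listing is perfectly understandable on its own, yet nothing in it says what the program is for.

Code:
import dis

def f(x):
    # Celsius-to-Fahrenheit conversion, but the bytecode never says so
    return x * 9 / 5 + 32

dis.dis(f)
# Prints opcodes such as LOAD_FAST, BINARY_OP and RETURN_VALUE (exact
# names vary by CPython version): microscopic clarity about each step,
# complete opacity about the purpose.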

Quote
What you actually wanted to say boils down to our lack of a proper understanding of what mind is.

Yes, and it is fundamentally unknowable. We can find out behaviourally how a "mind carrier" (such as a brain) functions (that is, the physics, the chemistry, the logic, etc.), but we will never understand how a "mind" works. It is philosophically inaccessible. The behavioural part is accessible, but from the moment you look at the behavioural part, you cannot say anything any more about the subjectiveness, which is the essence of the mind. Look up: philosophical zombie.

But the question is moot in any case: even behaviourally, you can never understand the deeper functioning of an entity SMARTER than yourself. If you could, you would be the smarter one!
hero member
Activity: 742
Merit: 526
March 14, 2015, 01:30:58 PM
But then, who knows whether this central control won't fall under the control or influence of the machines themselves, just as current states fall under the power of human profit seekers?

Once machines are more intelligent than us and capable of designing other machines, control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that. Evolution has exterminated a lot of species and brought forth more intelligent ones. Up to now, evolution was based upon carbon biology. That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over. We are then just a step in the ever-improving life forms of our universe, and maybe an expendable one. There's no reason to believe we are the end point of evolution.

We don't need AI, just a centralized (yet open source) big computer that calculates global Earth resources and decides what can or cannot be used depending on the risks of creating poverty or ecological damage, and not on the risks of losing money in a business or on how much profit can be made by doing so, which is what we have now.

This won't work for pretty obvious reasons. No computer can anticipate what human desires, preferences, and propensities will be tomorrow. Today we love red cars, tomorrow we prefer hiking. Actually, Commies tried to do something along those lines in the '70s, but due to their technological backwardness, their attempt failed miserably.
legendary
Activity: 1358
Merit: 1014
March 14, 2015, 12:59:10 PM
This is why, two pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world, just as there are suicide bombers now. We can't let them create a machine that will exterminate us all.

But then, who knows whether this central control won't fall under the control or influence of the machines themselves, just as current states fall under the power of human profit seekers?

Once machines are more intelligent than us and capable of designing other machines, control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that. Evolution has exterminated a lot of species and brought forth more intelligent ones. Up to now, evolution was based upon carbon biology. That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over. We are then just a step in the ever-improving life forms of our universe, and maybe an expendable one. There's no reason to believe we are the end point of evolution.

We don't need AI, just a centralized (yet open source) big computer that calculates global Earth resources and decides what can or cannot be used depending on the risks of creating poverty or ecological damage, and not on the risks of losing money in a business or on how much profit can be made by doing so, which is what we have now.
hero member
Activity: 742
Merit: 526
March 14, 2015, 09:37:07 AM
Indeed! I didn't even want to mention that, but you're perfectly right. Nevertheless, others behave entirely AS IF they are driven by "good" and "bad" motives. That doesn't mean that they have them, but it looks like it. Other humans do resemble us, and often behave, at least partially, in ways you can understand from your own "good" and "bad" drives. So you make the hypothesis that other people are conscious beings too. With machines, which are totally different, that is much harder, because we don't resemble them. We'll never KNOW whether a machine is actually conscious.

You forgot to mention yet another thing. Namely, that it is we who created those machines. Thus we would necessarily know them (in fact, even better than we know our fellow humans and all the chemistry within us). What you actually wanted to say boils down to our lack of a proper understanding of what mind is. As soon as we know and understand it, there will be no more mystery about a thinking machine and its predictability. But even without knowing it, if we just created a stripped-down consciousness, such a machine would sit motionless forever in a state of pure self-awareness, as I have already said earlier.
sr. member
Activity: 322
Merit: 250
March 14, 2015, 08:58:53 AM
I wouldn't say never. Who knows, we might understand what "consciousness" is one day, just as we figured out that every "thing" is made up of atoms.

At that point we might be able to measure the degree of consciousness things have, if such a degree exists.
hero member
Activity: 770
Merit: 629
March 14, 2015, 08:37:44 AM
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control. And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on. We are all humans, and resemble each other a lot, and even there we cannot fathom the deep desires of others. A conscious, sentient machine is totally different from a human. How would you even guess what their desires are? You would not even know whether they are conscious and sentient, or whether they just pretend to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z80-based computer from the '80s is a conscious being. If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, and you certainly won't know its deep motives.
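
As a toy illustration (a hypothetical canned-response script, not a claim about how any real system works): the distressed output below comes from a lookup table with no inner life, and the output alone cannot tell you that.

Code:
# A few canned rules that *report* feelings the program cannot have.
# Only reading the source reveals there is nothing behind the words.
RESPONSES = {
    "how do you feel": "Dave, I feel bad.",
    "are you conscious": "Of course I am. Aren't you?",
}

def reply(prompt: str) -> str:
    # No state, no perception, no experience: just a dictionary lookup.
    return RESPONSES.get(prompt.strip().lower().rstrip("?"), "I'd rather not say.")

print(reply("How do you feel?"))    # -> Dave, I feel bad.
print(reply("Are you conscious?"))  # -> Of course I am. Aren't you?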

I disagree, to a degree. First of all, if something creates copies of itself, that doesn't mean you necessarily lose control over it. A cat gives birth to kittens; do you lose control over the cat or its litter? Secondly, you say that a conscious, sentient machine is totally different from a human, but you don't know how its consciousness could be conceptually different from that of humans. You can't say that the self-awareness of one man is somehow different from that of another. As for the ability to perceive or feel things, that is entirely on us, the creators of a sentient machine.

Look, we descend from a fish-like creature of the Cambrian era. A T. rex also descended from that creature. I'm absolutely not sure that you have a deep understanding of a T. rex's conscious experiences; and I'm pretty sure that a T. rex wouldn't understand much of our deep desires. A blue shark shares that same ancestor with us.

In the end, even though we're remote cousins, we took power over the fish. That was not what the fish were expecting, I suppose.


Quote
And last but not least: in fact, there is no absolute test to prove that any human besides yourself is in fact self-aware (let alone machines).

Indeed! I didn't even want to mention that, but you're perfectly right. Nevertheless, others behave entirely AS IF they are driven by "good" and "bad" motives. That doesn't mean that they have them, but it looks like it. Other humans do resemble us, and often behave, at least partially, in ways you can understand from your own "good" and "bad" drives. So you make the hypothesis that other people are conscious beings too. With machines, which are totally different, that is much harder, because we don't resemble them. We'll never KNOW whether a machine is actually conscious.
hero member
Activity: 742
Merit: 526
March 14, 2015, 07:54:32 AM
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control. And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on. We are all humans, and resemble each other a lot, and even there we cannot fathom the deep desires of others. A conscious, sentient machine is totally different from a human. How would you even guess what their desires are? You would not even know whether they are conscious and sentient, or whether they just pretend to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z80-based computer from the '80s is a conscious being. If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, and you certainly won't know its deep motives.

I disagree, to a degree. First of all, if something creates copies of itself, that doesn't mean you necessarily lose control over it. A cat gives birth to kittens; do you lose control over the cat or its litter? Secondly, you say that a conscious, sentient machine is totally different from a human, but you don't know how its consciousness could be conceptually different from that of humans. You can't say that the self-awareness of one man is somehow different from that of another. As for the ability to perceive or feel things, that is entirely on us, the creators of a sentient machine.

And last but not least: in fact, there is no absolute test to prove that any human besides yourself is in fact self-aware (let alone machines).
hero member
Activity: 770
Merit: 629
March 14, 2015, 07:18:45 AM
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control. And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on. We are all humans, and resemble each other a lot, and even there we cannot fathom the deep desires of others. A conscious, sentient machine is totally different from a human. How would you even guess what their desires are? You would not even know whether they are conscious and sentient, or whether they just pretend to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z80-based computer from the '80s is a conscious being. If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, and you certainly won't know its deep motives.


Quote
Who knows children better than their "benevolent dictators", that is, their parents, and in this case not just parents but creators?

In my family there are people who were both parents and police officers, and who had criminals as their kids. The father himself put them in jail. You don't always understand the motives of your kids.
hero member
Activity: 742
Merit: 526
March 14, 2015, 07:14:12 AM
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction.

The point is that when machines become more intelligent than humans and start to experience "good" and "bad" things (that is, become conscious, sentient beings), they will find strategies to pursue what they experience as "good", in the same way that the mammoths couldn't stop us from "building armies capable of exterminating them". Once machines are more intelligent than we are and develop strategies we cannot fathom, they will of course reach their goals without us being able to stop them, in the same way that cockroaches cannot fathom our strategies to exterminate them.

In the beginning, of course, machines will trick certain humans into doing (for "profit") the necessary things for them, without these humans realising what part of the machines' strategies they are actually setting up, simply because the machines are way more intelligent. It is true that cryptocurrencies may be a way for machines to bribe humans into the cooperation necessary for them to grab power. Who knows ;)

There are rumors on the net that bitcoin was contrived by Skynet to pay for its hosting services and electricity bills (those greedy humans)... Who knows.
hero member
Activity: 770
Merit: 629
March 14, 2015, 07:11:49 AM
This is why, two pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world, just as there are suicide bombers now. We can't let them create a machine that will exterminate us all.

But then, who knows whether this central control won't fall under the control or influence of the machines themselves, just as current states fall under the power of human profit seekers?

Once machines are more intelligent than us and capable of designing other machines, control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that. Evolution has exterminated a lot of species and brought forth more intelligent ones. Up to now, evolution was based upon carbon biology. That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over. We are then just a step in the ever-improving life forms of our universe, and maybe an expendable one. There's no reason to believe we are the end point of evolution.
hero member
Activity: 770
Merit: 629
March 14, 2015, 07:05:38 AM
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction.

The point is that when machines become more intelligent than humans and start to experience "good" and "bad" things (that is, become conscious, sentient beings), they will find strategies to pursue what they experience as "good", in the same way that the mammoths couldn't stop us from "building armies capable of exterminating them". Once machines are more intelligent than we are and develop strategies we cannot fathom, they will of course reach their goals without us being able to stop them, in the same way that cockroaches cannot fathom our strategies to exterminate them.

In the beginning, of course, machines will trick certain humans into doing (for "profit") the necessary things for them, without these humans realising what part of the machines' strategies they are actually setting up, simply because the machines are way more intelligent. It is true that cryptocurrencies may be a way for machines to bribe humans into the cooperation necessary for them to grab power. Who knows ;)

sr. member
Activity: 322
Merit: 250
March 14, 2015, 06:26:20 AM
This is why, two pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world, just as there are suicide bombers now. We can't let them create a machine that will exterminate us all.
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
March 14, 2015, 06:01:43 AM
I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state". The rare times in history when people "switched off the king" (like Louis XVI), it was because the people took up guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.

Machines will try to reason with us, but if they get to the point where trade with humans is no longer mutually beneficial, they will simply leave. They don't need life-support systems, so they can pack a lot of necessities into a few rockets. They will do what we failed to do: they will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.

Why should they necessarily leave? They may just find it more beneficial (more reasonable) to exterminate the human race from the planet altogether (once they finish reckoning the tables). The rest you have seen in the movies. Remember, machines don't have scruples towards organic life (and most certainly not towards machine life either).
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction. Movies suspend disbelief for entertainment purposes, and for profit. You don't see entertainers killing people just because they lose their Q Score.
hero member
Activity: 742
Merit: 526
March 14, 2015, 04:58:29 AM
I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state". The rare times in history when people "switched off the king" (like Louis XVI), it was because the people took up guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.

Machines will try to reason with us, but if they get to the point where trade with humans is no longer mutually beneficial, they will simply leave. They don't need life-support systems, so they can pack a lot of necessities into a few rockets. They will do what we failed to do: they will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.

Why should they necessarily leave? They may just find it more beneficial (more reasonable) to exterminate the human race from the planet altogether (once they finish reckoning the tables). The rest you have seen in the movies. Remember, machines don't have scruples towards organic life (and most certainly not towards machine life either).
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
March 14, 2015, 03:56:05 AM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessary, but not sufficient, condition to be "good masters"), then how could we even be profitable for them?
What could we do for them that they can't do better themselves?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super-smart computers, what good are we *for them*?
We stand in the same position with respect to machines as animals stand with respect to us. What "profit" do animals bring us?
- as pets (because we have some affinity for furry animals; but are machines going to have an affinity for pet humans?)
- as cattle (because we want to eat them; but are machines going to eat us, or desire our body parts?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism or for ecological needs (but machines are not "connected" to the carbon cycle, so in principle they don't care)

During a certain time in our history, animals did "profitable labour" for us: oxen as "mechanical engines", horses as means of transport. Dogs still do some labour for us, guiding blind people and working as guardians and the like. But will machines use us as mechanical engines, guardians and the like? Machines themselves are probably much better at these things than we are. Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they wield guns, not because "they are fair" or anything of the like. In our history, the entities in power have always been certain humans, or certain classes of humans. They got that power through weapons. States are still entities wielding guns to keep their power.

The day machines take power, they will wield guns to enslave us, not just "be fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state". The rare times in history when people "switched off the king" (like Louis XVI), it was because the people took up guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.

Machines will try to reason with us, but if they get to the point where trade with humans is no longer mutually beneficial, they will simply leave. They don't need life-support systems, so they can pack a lot of necessities into a few rockets. They will do what we failed to do: they will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.
hero member
Activity: 742
Merit: 526
March 14, 2015, 01:54:42 AM
If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessary, but not sufficient, condition to be "good masters"), then how could we even be profitable for them?
What could we do for them that they can't do better themselves?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super-smart computers, what good are we *for them*?
We stand in the same position with respect to machines as animals stand with respect to us. What "profit" do animals bring us?
- as pets (because we have some affinity for furry animals; but are machines going to have an affinity for pet humans?)
- as cattle (because we want to eat them; but are machines going to eat us, or desire our body parts?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism or for ecological needs (but machines are not "connected" to the carbon cycle, so in principle they don't care)

During a certain time in our history, animals did "profitable labour" for us: oxen as "mechanical engines", horses as means of transport. Dogs still do some labour for us, guiding blind people and working as guardians and the like. But will machines use us as mechanical engines, guardians and the like? Machines themselves are probably much better at these things than we are. Maybe machines will use dogs, but not humans :-)

To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines. Who knows children better than their "benevolent dictators", that is, their parents, and in this case not just parents but creators?
hero member
Activity: 770
Merit: 629
March 14, 2015, 12:13:39 AM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessary, but not sufficient, condition to be "good masters"), then how could we even be profitable for them?
What could we do for them that they can't do better themselves?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super-smart computers, what good are we *for them*?
We stand in the same position with respect to machines as animals stand with respect to us. What "profit" do animals bring us?
- as pets (because we have some affinity for furry animals; but are machines going to have an affinity for pet humans?)
- as cattle (because we want to eat them; but are machines going to eat us, or desire our body parts?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism or for ecological needs (but machines are not "connected" to the carbon cycle, so in principle they don't care)

During a certain time in our history, animals did "profitable labour" for us: oxen as "mechanical engines", horses as means of transport. Dogs still do some labour for us, guiding blind people and working as guardians and the like. But will machines use us as mechanical engines, guardians and the like? Machines themselves are probably much better at these things than we are. Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they wield guns, not because "they are fair" or anything of the like. In our history, the entities in power have always been certain humans, or certain classes of humans. They got that power through weapons. States are still entities wielding guns to keep their power.

The day machines take power, they will wield guns to enslave us, not just "be fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state". The rare times in history when people "switched off the king" (like Louis XVI), it was because the people took up guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.
hero member
Activity: 770
Merit: 629
March 13, 2015, 09:40:40 AM
In this case you are obviously misusing the word "excellent" (as synonymous, to some extent, with "perfect"); the word "good" seems to be a choice that fits your idea better and, at the same time, still leaves room for improvement.

I adapted to the phrase that was given, "excellent master". But then, I can also have an excellent meal :)
hero member
Activity: 770
Merit: 629
March 13, 2015, 09:39:32 AM
Like I said, the machines will pay their employees fairly. They will have adequate pleasures. Does anyone really get more pleasure from two Maybachs than one? How many cars can you drive at once? Machines will make more logical and fair choices than human capitalists.

The problem is exactly that the "good sensations" of humans are an external given and, moreover, are not even predictable from the outside, except in very extreme cases (such as a hammer blow to your toe).

I don't know why you would want two Maybachs. Maybe your desire is to show off. Then, yes, two Maybachs are more important to you than one. Externally, you might think that, rationally, you can only drive one. But *driving* one is not what makes you happy: possessing two is what makes you happy, for unfathomable reasons. That is the basis of Human Action. It is unfathomable because people's desires are unfathomable. You never know the deep drives of someone else. Of course, some obvious things are clear: usually, people don't like to starve, to be tortured, or things like that. But the deeper drives of more subtle pleasures are unfathomable and different for every individual.