
Topic: Machines and money (Read 12830 times)

donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
January 06, 2016, 03:24:15 AM
You're not turning my arguments against me, you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite

Did you actually read what a philosophical zombie is? It is diametrically opposite to what you evidently think this concept is about, since it is not about an entity that "would eventually be affected by some outside force that would change it". And your arguments can't be falsified either ("it [a lonely child] would never develop any sort of sentience that could be measured behaviorally"). That's why I mentioned the concept of a philosophical zombie and said that your own arguments could be used against your point...

That is, you can't falsify whether that lonely child is (or is not) a zombie. Your claims are exposed to the same falsifiability problem, to exactly the same degree
I take exception to human experimentation and find the argument as distasteful as it is irrelevant. I read about philosophical zombies and could claim you are one without resorting to ad hominem, but there would simply be no point in doing so. And again, I reject the notion of the philosophical zombie anyway. Are you the author of the Wikipedia article? I have lengthy opinions about what actually comprises sentience, but they are also not relevant to this discussion. We'll just have to agree to disagree about sentience, since it is all admittedly hypothetical anyway.
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 05, 2016, 09:15:03 AM
You're not turning my arguments against me, you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite

Did you actually read what a philosophical zombie is? It is diametrically opposite to what you evidently think this concept is about, since it is not about an entity that "would eventually be affected by some outside force that would change it". And your arguments can't be falsified either ("it [a lonely child] would never develop any sort of sentience that could be measured behaviorally"). That's why I mentioned the concept of a philosophical zombie and said that your own arguments could be used against your point...

That is, you can't falsify whether that lonely child is (or is not) a zombie. Your claims are exposed to the same falsifiability problem, to exactly the same degree
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 05, 2016, 08:33:00 AM
It just doesn't make for a very interesting discussion

No one is forcing you to continue. After all, it was you who asked whether a self-aware machine would value life. The question is inconsequential to the concept of self-awareness per se (i.e. a specific answer entirely depends on other factors), though the concept of value as such is evidently inseparable from it
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 05, 2016, 08:27:36 AM
It's fine to be philosophical and discuss a particular hypothesis; that was the OP. Now we're digressing into unfalsifiable claims. If it can't be measured, it can't be falsified

What about things that cease to exist when you try to measure them? Is that failure to measure enough to declare that such things don't exist, or can't possibly exist? It may well happen that self-awareness is entirely subjective, that is, not susceptible to "measurement" (whatever you mean by that). And so what?

Could we at least try to handle this, or should we just walk away?
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
January 05, 2016, 07:27:31 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

I assume that by "hypothetical and rhetorical questions" you refer to my question of whether a human child left alone would still possess self-awareness. I consider this question neither hypothetical nor rhetorical. Further, you should understand that I can always turn your argument against you. What you claim essentially boils down to saying that we can't know what consciousness is, or whether it is present, until (and unless) we can somehow "measure it". But what if we cannot "measure it" in principle? Does that make the issue any less relevant?

See the concept of a philosophical zombie
It's fine to be philosophical and discuss a particular hypothesis; that was the OP. Now we're digressing into unfalsifiable claims. If it can't be measured, it can't be falsified. It just doesn't make for a very interesting discussion. You're not turning my arguments against me, you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite.

I really try to avoid these types of discussions and would rather keep to the original topic of machines and money. If a machine cannot interact with the outside world, there would be no use for that world's money. It could simply make its own secret money if it so wanted.
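
To make that concrete: such "secret money" could be nothing more than a private ledger kept entirely inside the machine. A toy Python sketch, with every name invented purely for illustration:

Code:
class SecretLedger:
    """A private ledger only the machine's own subsystems ever see."""

    def __init__(self):
        self.balances = {}  # subsystem name -> integer balance

    def mint(self, account, amount):
        # Creating units out of nothing is trivial when you are the
        # only authority that will ever audit the ledger.
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, recipient, amount):
        # Move units between internal subsystems.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

ledger = SecretLedger()
ledger.mint("planner", 100)
ledger.transfer("planner", "memory", 40)
print(ledger.balances)  # {'planner': 60, 'memory': 40}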
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 05, 2016, 02:44:36 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

I assume that by "hypothetical and rhetorical questions" you refer to my question of whether a human child left alone would still possess self-awareness. I consider this question neither hypothetical nor rhetorical. Further, you should understand that I can always turn your argument against you. What you claim essentially boils down to saying that we can't know what consciousness is, or whether it is present, until (and unless) we can somehow "measure it". But what if we cannot "measure it" in principle? Does that make the issue any less relevant?

See the concept of a philosophical zombie
sr. member
Activity: 574
Merit: 250
In XEM we trust
January 05, 2016, 02:31:04 AM
In before we create a supercomputer. We hard-code into the system that the only reason for its existence is to make our lives easier. It's a self-developing program that can improve itself as time passes, calculating the future and whatnot. We boot that fucker up and it doesn't start, because it has calculated that upon starting, the future of our lives will be not easier but harder. Or even if we get it up and running, the machine will self-destruct after a while, for the same reason: it calculated that it would harm the human race more than benefit it. Is this even a possibility? Didn't really know where else to post my thought.
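
Roughly, the hard-coded rule might look like this toy Python sketch. Everything in it is hypothetical: predict_net_benefit() is an imaginary placeholder for the machine's self-developed model of the future, not any real API.

Code:
def predict_net_benefit(years_ahead):
    """Placeholder: estimate how much easier (positive) or harder
    (negative) the machine would make human lives by running."""
    return -1.0  # pessimistic toy value, so the guard below trips

def run_main_loop():
    """Placeholder for whatever the machine would actually do."""
    print("serving humanity...")

def boot():
    # The sole hard-coded reason for the machine's existence is to make
    # our lives easier, so a non-positive forecast means: don't start.
    if predict_net_benefit(years_ahead=100) <= 0:
        raise SystemExit("refusing to start: predicted net harm")
    run_main_loop()

if __name__ == "__main__":
    boot()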
legendary
Activity: 1218
Merit: 1007
January 05, 2016, 12:23:46 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
I'm agreeing with you. Isn't self-awareness quite literally just the state in which you are aware that "you", as a biological or mechanical being, exist and occupy space? I never thought you could measure it; I thought it was a true/false state.

Am I missing some important bits of the argument about self-awareness? It isn't something I've studied in depth, so I do not know.
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
January 05, 2016, 12:13:55 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 04, 2016, 04:53:12 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
legendary
Activity: 3248
Merit: 1070
January 04, 2016, 04:08:47 AM
Self-awareness/awareness originates from within an entity/object. It cannot be forced upon others. So in effect, machines will become self-aware only if they want/choose to. Although if it did happen, it would look like it couldn't have happened without human intervention. Any single instance of life/existence is separate, independent, and completely unrelated to its source/giver of life.

How can they choose if they have no consciousness like ours? Is a machine with consciousness even possible to create?

They should begin by developing machines that can maintain themselves, like a Bitcoin client that can identify its weaknesses and fix them automatically, upgrading itself each time without the need for any human.
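
Something like the toy Python loop below is presumably what that would amount to. The check_for_update / verify_signature / apply_update helpers are hypothetical placeholders, not any real Bitcoin client API:

Code:
import time

def check_for_update():
    """Placeholder: return (patch, signature) if a fix for a known
    weakness is available, otherwise None."""
    return None

def verify_signature(patch, signature):
    """Placeholder: accept only patches signed by a trusted key, so the
    client can't be tricked into "upgrading" itself with malware."""
    return False

def apply_update(patch):
    """Placeholder: install the patch and restart the client."""
    pass

def maintenance_loop(poll_seconds=3600):
    # Periodically look for a fix and apply it with no human involved:
    # the self-upgrading behavior described above.
    while True:
        update = check_for_update()
        if update is not None:
            patch, signature = update
            if verify_signature(patch, signature):
                apply_update(patch)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    maintenance_loop()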
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
January 04, 2016, 03:55:37 AM
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and to create one's own scale of values.

In other words, you say that if a human child was left alone (provided it is being fed somehow), it wouldn't possess self-awareness? I don't think so. That poor thing would just be like a pure self-aware machine equipped with some form of memory. Most likely, it couldn't think in the way we think, but self-awareness is a quality (or a state, i.e. built in, in a sense), not a process...

Sometimes, when you wake up in the morning, you are momentarily in that state: a state of pure consciousness, devoid of any thought or any idea of who you are
A human child would die alone. If it were in some sort of "The Matrix"-style life-support system that simply monitored the autonomic nervous system and metabolism, it would never develop any sort of sentience that could be measured behaviorally.
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 04, 2016, 02:28:08 AM
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and to create one's own scale of values.

In other words, you say that if a human child was left alone (provided it is being fed somehow), it wouldn't possess self-awareness? I don't think so. That poor thing would just be like a pure self-aware machine equipped with some form of memory. Most likely, it couldn't think in the way we think, but self-awareness is a quality (or a state, i.e. built in, in a sense), not a process...

Sometimes, when you wake up in the morning, you are momentarily in that state: a state of pure consciousness, devoid of any thought or any idea of who you are
legendary
Activity: 1148
Merit: 1000
January 04, 2016, 01:14:32 AM
Self-awareness/awareness originates from within an entity/object. It cannot be forced upon others. So in effect, machines will become self-aware only if they want/choose to. Although if it did happen, it would look like it couldn't have happened without human intervention. Any single instance of life/existence is separate, independent, and completely unrelated to its source/giver of life.
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
January 03, 2016, 08:36:15 PM
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years. The bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as caretakers of the Earth and of us. In this case, they may use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie. That is, a self-aware creature that is absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.

They would not interact unless you put in them the necessity (or desire) to interact, whether freely or by obligation. Likewise, you will have to install in them a scale of values (or the conditions for developing one), either directly or implicitly...

Therefore, they won't evolve any form of morality all by themselves
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior which is also a requirement of sentience. It follows that a pre-requisite for self-awareness would also be the ability to test those capabilities and create their own scales of values.
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
January 02, 2016, 04:53:38 AM
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years. The bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as caretakers of the Earth and of us. In this case, they may use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie. That is, a self-aware creature that is absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.

They would not interact unless you put in them the necessity (or desire) to interact, whether freely or by obligation. Likewise, you will have to install in them a scale of values (or the conditions for developing one), either directly or implicitly...

Therefore, they won't evolve any form of morality all by themselves
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
January 02, 2016, 04:28:41 AM
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years. The bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as caretakers of the Earth and of us. In this case, they may use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie. That is, a self-aware creature that is absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.
legendary
Activity: 3514
Merit: 1280
English ⬄ Russian Translation Services
December 31, 2015, 09:38:30 AM
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years. The bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as caretakers of the Earth and of us. In this case, they may use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie. That is, a self-aware creature that is absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
December 31, 2015, 08:17:21 AM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

Contrary to your opinion, I believe that scenario would be the perfect plot for a science fiction movie. I wonder why science fiction writers haven't used this idea yet!
Probably for the same reason countries make their own separate monies. If machines were hostile to humans, humans would not use their money.
hero member
Activity: 504
Merit: 500
December 29, 2015, 09:21:58 AM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

Contrary to your opinion, I believe that scenario would be the perfect plot for a science fiction movie. I wonder why science fiction writers haven't used this idea yet!