
Topic: Machines and money

hero member
Activity: 742
Merit: 526
March 20, 2015, 04:13:34 PM
You are confusing (as is evidently standard in AI) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious; but we can clearly observe that an entity is intelligent.

You are wrong. Ever heard of a consciousness test? Here's a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.

I think by intelligence he means anything that doesn't correspond to a linear train of events. So any safety shutoff valve (which is designed to automatically shut off the flow of gas or liquid when the pressure rises above the shut-off limit) will be an intelligent device according to his logic.
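
To make the triviality concrete, here is a minimal sketch (Python assumed; the pressure limit and the sample readings are hypothetical values, not from the thread) of the entire "decision" such a valve makes:

Code:
# A minimal sketch of the shut-off valve logic described above.
# SHUTOFF_LIMIT and the sample readings are hypothetical values.
SHUTOFF_LIMIT = 8.0  # bar

def valve_open(pressure: float) -> bool:
    # The valve's entire "decision": stay open while pressure is at or
    # below the limit, shut off otherwise.
    return pressure <= SHUTOFF_LIMIT

for reading in (5.0, 7.9, 8.1):
    print(reading, "->", "open" if valve_open(reading) else "shut off")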
hero member
Activity: 770
Merit: 629
March 20, 2015, 04:10:58 PM
So you obviously consider an automatic mechanical switch (or an automatic control valve) to be intelligent? You may contrive as many definitions of intelligence (or whatever) as you see fit, but surely this is not what current mainstream thought suggests.

It is a very elementary form of intelligence.  It can solve a logical problem.  A calculator is somewhat smarter: it can do arithmetic operations.

What is the fundamental conceptual difference between:

P AND Q, where P, Q are elements of {true, false}

and

X / Y where X and Y are elements of a subset of the rational numbers (namely those that can be represented by a calculator)

If you think that being able to divide rational numbers has something intelligent to it, then why is doing logical multiplication (which is AND) over the set {true, false} not a form of intelligence?

Now, if you consider X/Y not intelligent, would you consider being able to do:

∫ x^2 dx = x^3/3 + C

a form of intelligence?

But that's still a similar form of relationship!
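
For what it's worth, here is a minimal sketch (Python assumed; sympy is my choice of library to stand in for "a calculator that can integrate", not something from the thread) putting the three relationships side by side:

Code:
# A minimal sketch of the three input->output rules discussed above:
# Boolean AND, rational division, and symbolic integration.
# sympy is an assumed dependency; any computer algebra system would do.
import sympy as sp

def logical_and(p: bool, q: bool) -> bool:
    # Logical multiplication over {true, false}: on {0, 1} it is p * q.
    return p and q

def divide(x: float, y: float) -> float:
    # What a calculator does over its machine-representable rationals.
    return x / y

x = sp.Symbol('x')
print(logical_and(True, False))  # False, i.e. 1 * 0 = 0
print(divide(1, 3))              # 0.3333333333333333 (finite precision)
print(sp.integrate(x**2, x))     # x**3/3 (the constant C is implicit)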
hero member
Activity: 770
Merit: 629
March 20, 2015, 04:06:44 PM
You are confusing (as is evidently standard in AI) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious; but we can clearly observe that an entity is intelligent.

You are wrong. Ever heard of a consciousness test? Here's a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.

No, this is what happens when one swaps "consciousness" for some behavioural pattern or other.  Neuroscience and AI are full of this, but they simply redefine the concept into something behavioural.  However, there is nothing behavioural about conscious experience, as the "philosophical zombie" thought experiment attests.

If you can sufficiently train a pattern-recognition algorithm (in the style of Google Translate) to do the above, is that algorithm then conscious?  These are really very naive attempts.

60 years ago, such a definition would probably have included "winning a game of chess against the world champion" or something similar.
40 years ago, we would have said the same thing about voice recognition and Google.
What you are describing is a problem of INTELLIGENCE, namely visual pattern recognition in accord with standard visual experience.

In principle it is not even extremely difficult to set up such a system.  In practice that's something else, but it works the same way as Google Translate: from tons and tons of text pairs, find bits of phrases that consistently match.  If the text to be translated consists of these bits, stitch the systematically translated bits together according to their statistical properties.  It works better than most grammar-based translation systems!
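
A toy sketch of that phrase-pair idea (Python assumed; the phrase table and its counts are made up, and this is of course not Google's actual system) might look like this:

Code:
# A toy sketch of phrase-pair translation as described above: memorize
# aligned phrase pairs with their counts, then stitch together the most
# frequent translation of each known bit. All counts are hypothetical.
from collections import Counter

phrase_table = {
    "good morning": Counter({"bonjour": 97, "bon matin": 3}),
    "thank you":    Counter({"merci": 100}),
}

def translate(text: str) -> str:
    out = []
    for phrase, candidates in phrase_table.items():
        if phrase in text:
            # Pick the statistically dominant translation of this bit.
            out.append(candidates.most_common(1)[0][0])
    return " ".join(out) if out else "<no known phrase pairs>"

print(translate("good morning and thank you"))  # -> bonjour merci
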
It is also what our visual system does: we have seen and recorded so many "usual" scenes that the unusual thing jumps out.  The elephant on top of the Eiffel Tower would be such a thing.
In fact, many people would FAIL such a test if put in front of scenes at a totally different scale, say the atomic scale, where physically very strange things happen that defy all the standard visual conceptions we're used to.

So we have substituted one or another intelligence test for a definition of "consciousness".
sr. member
Activity: 756
Merit: 250
Infleum
March 20, 2015, 01:12:40 PM
You are confusing (as is evidently standard in AI) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious; but we can clearly observe that an entity is intelligent.

You are wrong. Ever heard of a consciousness test? Here's a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.
hero member
Activity: 742
Merit: 526
March 20, 2015, 03:04:37 AM
Quote
You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are momentarily rendered null and void.

You are confusing (as is evidently standard in AI) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious; but we can clearly observe that an entity is intelligent.  Our computers clearly ARE intelligent.  We suppose they are not conscious, but there's no way to know.  An AND gate has a minimum of intelligence (it can solve a very elementary logic puzzle).  Whether it is conscious, we don't know (although I suppose we assume it isn't).  The only way to assume consciousness is by "similarity to ourselves", and it remains a guess.  We assume other people are conscious sentient beings.  We probably assume that most mammals are conscious sentient beings.  For fish, you can start debating.  For insects, what do you think?  I suppose most people assume that jellyfish aren't conscious sentient beings.  We base ourselves on the existence of a central nervous system of a "certain complexity" in their bodies.

So in a certain way, we are assuming that a certain level of intelligence is necessary for the possibility of subjective experiences to emerge, to even exist.  But that's sheer guesswork.

So you obviously consider an automatic mechanical switch (or an automatic control valve) to be intelligent? You may contrive as many definitions of intelligence (or whatever) as you see fit, but surely this is not what current mainstream thought suggests.
hero member
Activity: 770
Merit: 629
March 20, 2015, 12:15:45 AM
Yes, of course.  They ARE our thoughts.  The mystery resides in the fact that they are subjectively experienced.  That is unobservable in itself (except to the sentient "being" that emerges from them, which is behaviourally unobservable from the outside).

So, if we emulate them (or, even better, mirror them somehow in some carrier), we should necessarily obtain an intelligent entity, right? If you argue against this point, you should then also accept the view that these signals are not intelligent.

If you emulate them, you get of course exactly the same intelligence, if they run at the same speed.  If you go faster, as you claimed, you will get more intelligence, simply because you can put in more "thoughts" to resolve a problem.  In the same way, a recent i7 processor is more intelligent than a Pentium III, even though they share a similar instruction set.  The same problem can be tackled in a much more sophisticated way on an i7 than on a Pentium III, simply because the i7 can afford to spend many more instructions on the problem.

If, as a child, it takes you 10 minutes to do a 4-digit multiplication by hand, while as an adult you've learned to "see through" relatively complex algebraic expressions in a second, your mathematical intelligence is totally different, right?  What you may find exciting as a child is sheer boredom as an adult.  You may enjoy playing tic-tac-toe as a child, but as an adult it is boring, or its fun resides elsewhere (the social contact, for instance, not the game itself).  So at different levels of intelligence, your "good" and "bad" experiences are also totally different.  Too much intelligence kills the fun of "boring" things if you immediately see the outcome.

Imagine someone intelligent enough to "see through" 40 moves in a chess game (a casual player sees 1 or 2 moves ahead, and a master can see 5 or 6).  You'd already see the endgame when you start; no fun playing chess any more.  So the level of intelligence also changes the perception of "good" and "bad".
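
As a toy illustration of how raw look-ahead changes what counts as a "good" move (the game tree and its scores below are hypothetical stand-ins, not real chess), consider the same minimax rule searched to different depths:

Code:
# A toy illustration of look-ahead depth. Each position is a pair
# (static_eval, children); the tree and its scores are hypothetical.
Trap = (9, [(-50, []), (-40, [])])   # looks brilliant, loses by force
Safe = (5, [(5, []), (6, [])])       # modest but sound

def minimax(pos, depth, maximizing=True):
    static_eval, children = pos
    if depth == 0 or not children:
        return static_eval
    best = max if maximizing else min
    return best(minimax(c, depth - 1, not maximizing) for c in children)

root = (0, [Safe, Trap])
print(minimax(root, 1))  # depth 1 sees only static scores -> 9 (falls for the trap)
print(minimax(root, 2))  # depth 2 sees the refutation -> 5 (prefers Safe)

The deeper player literally lives in a different game: lines that look exciting at shallow depth are already decided for them.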

Quote
You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are momentarily rendered null and void.

You are confusing (as is evidently standard in AI) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious; but we can clearly observe that an entity is intelligent.  Our computers clearly ARE intelligent.  We suppose they are not conscious, but there's no way to know.  An AND gate has a minimum of intelligence (it can solve a very elementary logic puzzle).  Whether it is conscious, we don't know (although I suppose we assume it isn't).  The only way to assume consciousness is by "similarity to ourselves", and it remains a guess.  We assume other people are conscious sentient beings.  We probably assume that most mammals are conscious sentient beings.  For fish, you can start debating.  For insects, what do you think?  I suppose most people assume that jellyfish aren't conscious sentient beings.  We base ourselves on the existence of a central nervous system of a "certain complexity" in their bodies.

So in a certain way, we are assuming that a certain level of intelligence is necessary for the possibility of subjective experiences to emerge, to even exist.  But that's sheer guesswork.
hero member
Activity: 742
Merit: 526
March 19, 2015, 04:36:54 PM
As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less than that). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent?

Yes, of course.  They ARE our thoughts.  The mystery resides in the fact that they are subjectively experienced.  That is unobservable in itself (except to the sentient "being" that emerges from them, which is behaviourally unobservable from the outside).

So, if we emulate them (or, even better, mirror them somehow in some carrier), we should necessarily obtain an intelligent entity, right? If you argue against this point, you should then also accept the view that these signals are not intelligent. You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are momentarily rendered null and void.
hero member
Activity: 770
Merit: 629
March 19, 2015, 02:09:43 PM
You are saying that a fish that could think like a human would still be a fish?

We call something "a fish" or "not a fish" simply based on whether it is useful for our communication to do so. We name things based on utility. If the utility picture changes, as it does with a fish that can think like a human (and therefore might be able to kill you in your sleep by splashing water on your computer and starting an electrical fire), we would likely no longer feel that the word "fish" evokes the most useful imagery for equipping someone to deal with that creature when we communicate about it. We might feel compelled to qualify it as a "superintelligent fish" or even a "human-fish," whatever is most useful for getting the point across that you don't want to underestimate its intelligence.

Once you understand that we name things based on (methodologically individual) utility, many paradoxes are resolved. Here are two examples.

Paradox of the Heap: How many grains of sand does it take to make a heap?

Utility phrasing makes it easy. A "heap" simply means a point where you yourself find no utility in trying to keep track of individual grains in the set, either because you're unable to easily count them or because it doesn't matter to you. "Meh, it's just a heap." The answer will differ depending on the person and the context. It is no set number; it's simply when you look over and stop caring about the individuated quantity. That is why this has the appearance of a paradox and why Wikipedia doesn't even mention this obvious and fully satisfying resolution. The fundamental error in Wikipedia's presentation is to consider what a heap "really is," rather than what the term "heap" can usefully mean for each person and context, even though it is self-evident that this is how language works.

Ship of Theseus Paradox:

Quote from: Wikipedia
"The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."

—Plutarch, Theseus

Plutarch thus questions whether the ship would remain the same if it were entirely replaced, piece by piece. Centuries later, the philosopher Thomas Hobbes introduced a further puzzle, wondering what would happen if the original planks were gathered up after they were replaced, and used to build a second ship. Hobbes asked which ship, if either, would be considered the original Ship of Theseus.

This is also easily and satisfyingly, though again un-excitingly, resolved by utility phrasing. "Ship of Theseus" is just a name we assign for utility purposes, basically to make life easier in our communications with ourselves and others. The name evokes certain associations for certain people, and based on that we will - in our communication efforts - call something "the Ship of Theseus" or "the original Ship of Theseus" whenever we believe that set of words will call up the most useful associations in the listener, to have them best understand our intent.

There is no such thing as a fully objective definition of the term "Ship of Theseus"; it always depends on what you're attempting to communicate to whom, and what you/they actually care about in the present context.

For example, if it matters to you that the ship was touched by Athenian hands, it wouldn't be useful to you to refer to it as the "Ship of Theseus" if all the parts had been replaced by non-Athenians. But if you simply cared about the way the ship looked and what it could do, because it has a unique shape and navigability compared with other ships, it would be useful in your mind to refer to it as the "Ship of Theseus" even if its parts had all been replaced with functionally and visually identical ones.

Once again it comes down to each person's utility in calling it one thing or another in each context. We will call a second ship built in the image of the first a "replica" if we are speaking in a context of attributing credit for its design and original building, but simply call it "a Ship of Theseus" if we only care about its function and looks in this context. And we'll call it "the Ship of Theseus," even if it is not the original, if the original has been destroyed and all we care about is form and function, such as when answering a practical question like, "Can the Ship of Theseus sail to Minoa?"

To repeat the above point, the key error is in considering what the Ship of Theseus "really is," rather than what the term "Ship of Theseus" can usefully mean for each person and context. Even though it is self-evident that this is how language works in the first place, people are nevertheless highly prone to this kind of error (the reasons have to do with tribal instincts).

Brilliant!

But of course the question matters somewhat when the concept is "ourselves".  Whether you are "you" is not a matter of pure convenience.  That changes, I agree, if it is not just "you" but "us".

The question was the following: assuming that machines became intelligent, sentient, and a threat to "humanity", the suggestion was to modify humans so that they too would become much more intelligent and win the battle of intelligence against the machines.

My point was that these modified "humans" would not be "us" any more, no more than we are still fish.  We would simply have replaced ourselves with two entirely different intelligent species: the "improved humans" on the one hand, and the "machines" on the other.  But we as humans would be gone.
hero member
Activity: 770
Merit: 629
March 19, 2015, 01:54:04 PM
As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less than that). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent?

Yes, of course.  They ARE our thoughts.  The mystery resides in the fact that they are subjectively experienced.  That is unobservable in itself (except to the sentient "being" that emerges from them, which is behaviourally unobservable from the outside).

Maybe an AND gate is also a sentient being.  We'll never know, not being an AND gate ourselves.  The physical process of the logical AND function can of course be understood by any student of solid state electronics.  But whether an AND gate has subjective experiences or not is unobservable if you're not that AND gate.

Quote
If we replace them with more efficient and faster purely electrical signals (or even light signals), will they become more "intelligent"? Will our thoughts be essentially different in this case?

Of course they would.  In the same way as our thoughts are different from those of a fish.
legendary
Activity: 1036
Merit: 1000
March 18, 2015, 03:32:22 PM
You are saying that a fish that could think like a human would still be a fish?

We call something "a fish" or "not a fish" simply based on whether it is useful for our communication to do so. We name things based on utility. If the utility picture changes, as it does with a fish that can think like a human (and therefore might be able to kill you in your sleep by splashing water on your computer and starting an electrical fire), we would likely no longer feel that the word "fish" evokes the most useful imagery for equipping someone to deal with that creature when we communicate about it. We might feel compelled to qualify it as a "superintelligent fish" or even a "human-fish," whatever is most useful for getting the point across that you don't want to underestimate its intelligence.

Once you understand that we name things based on (methodologically individual) utility, many paradoxes are resolved. Here are two examples.

Paradox of the Heap: How many grains of sand does it take to make a heap?

Utility phrasing makes it easy. A "heap" simply means a point where you yourself find no utility in trying to keep track of individual grains in the set, either because you're unable to easily count them or because it doesn't matter to you. "Meh, it's just a heap." The answer will differ depending on the person and the context. It is no set number; it's simply when you look over and stop caring about the individuated quantity. That is why this has the appearance of a paradox and why Wikipedia doesn't even mention this obvious and fully satisfying resolution. The fundamental error in Wikipedia's presentation is to consider what a heap "really is," rather than what the term "heap" can usefully mean for each person and context, even though it is self-evident that this is how language works.

Ship of Theseus Paradox:

Quote from: Wikipedia
"The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."

—Plutarch, Theseus

Plutarch thus questions whether the ship would remain the same if it were entirely replaced, piece by piece. Centuries later, the philosopher Thomas Hobbes introduced a further puzzle, wondering what would happen if the original planks were gathered up after they were replaced, and used to build a second ship. Hobbes asked which ship, if either, would be considered the original Ship of Theseus.

This is also easily and satisfyingly, though again un-excitingly, resolved by utility phrasing. "Ship of Theseus" is just a name we assign for utility purposes, basically to make life easier in our communications with ourselves and others. The name evokes certain associations for certain people, and based on that we will - in our communication efforts - call something "the Ship of Theseus" or "the original Ship of Theseus" whenever we believe that set of words will call up the most useful associations in the listener, to have them best understand our intent.

There is no such thing as a fully objective definition of the term "Ship of Theseus"; it always depends on what you're attempting to communicate to whom, and what you/they actually care about in the present context.

For example, if it matters to you that the ship was touched by Athenian hands, it wouldn't be useful to you to refer to it as the "Ship of Theseus" if all the parts had been replaced by non-Athenians. But if you simply cared about the way the ship looked and what it could do, because it has a unique shape and navigability compared with other ships, it would be useful in your mind to refer to it as the "Ship of Theseus" even if its parts had all been replaced with functionally and visually identical ones.

Once again it comes down to each person's utility in calling it one thing or another in each context. We will call a second ship built in the image of the first a "replica" if we are speaking in a context of attributing credit for its design and original building, but simply call it "a Ship of Theseus" if we only care about its function and looks in this context. And we'll call it "the Ship of Theseus," even if it is not the original, if the original has been destroyed and all we care about is form and function, such as when answering a practical question like, "Can the Ship of Theseus sail to Minoa?"

To repeat the above point, the key error is in considering what the Ship of Theseus "really is," rather than what the term "Ship of Theseus" can usefully mean for each person and context. Even though it is self-evident that this is how language works in the first place, people are nevertheless highly prone to this kind of error (the reasons have to do with tribal instincts).
legendary
Activity: 1036
Merit: 1000
March 18, 2015, 02:50:51 PM
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

That's a position that is very real.  It is called strong solipsism.

In fact, my stance on solipsism is that it might very well be true, but that this actually doesn't matter.  After all, what matters (for you) are your personal subjective perceptions and sensations.  Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it is a very practical working hypothesis.  So taking the existence of the external world as a working hypothesis is a good move in itself, because it can help you understand the correlations between your sensations.  Whether that external world actually, ontologically exists or not doesn't, in fact, really matter!

Let me explain with an example.  If you have sensations that agree with "I take a hammer in my hand and I give a blow with it on my toes", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination, even if the latter were ontologically true.

So whether that hammer really exists or not does not, in fact, matter.  You understand your subjective sensations much better by taking as a working hypothesis that it does.  And that is sufficient reason to do so.

Right, ontological/epistemological phrasing is just a higher-level phrasing than utility phrasing. In other words, in everyday talk it is extremely cumbersome to phrase everything in terms of utility, so we speak about things being "real" or "imagined," but these are just shorthand for different sets of utility statements, as your example with the hammer illustrates.

As we start to analyze things with unusual care, we eventually come to a point where utility phrasing is the clearest. If we try to carry the terms of everyday talk ("reality," "other people," etc.) into such an analysis, we just run around in semantic circles and confuse ourselves.
hero member
Activity: 742
Merit: 526
March 18, 2015, 12:39:54 PM
Now, you are saying that in order to render us just as intelligent as our tools, we should use intelligent tools which are so intelligent that they take on a life of their own.  That begs the question, no?  The only way for US to be as intelligent as they are would be for us to be intrinsically that intelligent.  But that would mean that those "we" would be totally different from what we are now.

As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less than that). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent? If we replace them with more efficient and faster purely electrical signals (or even light signals), will they become more "intelligent"? Will our thoughts be essentially different in this case?

By the way, the answer to these questions is already known.
hero member
Activity: 742
Merit: 526
March 18, 2015, 12:39:03 PM
A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)?

Of course it didn't change my mind!  A calculator is an external tool.  Now, we started this discussion assuming that our "external tools" became so terribly intelligent (and maybe sentient) that they might start having goals of their own (being sentient beings, and hence having "good" and "bad" sensations, which are the basis of all desires, goals and so on).  Since they would be much more intelligent than ourselves, we would probably not even notice their strategies (in the beginning), and those strategies would in any case be totally opaque to us.

Now you are obviously trying to conflate concepts, namely the notion of intelligence with the notion of a tool. They are not synonymous.

Your memory (and mine too, for that matter) is also an "external" tool of our mind. Could we say that memory is intelligent or sentient? Indeed, no. I have come to think that our thought processes are also in a way external to our mind, that is, to self-awareness as such. I could even go so far as to say that the difference between a human being and the animals thought to have consciousness (dolphins, elephants, primates and other animals that recognize themselves in the mirror) is determined entirely by the level of development of these "external tools", not by the mind itself.
legendary
Activity: 1358
Merit: 1014
March 18, 2015, 11:04:36 AM
I think a Resource-Based Economy is on point and is our ultimate fate, but we are still far from it; as a species we are not ready and still need a form of money. Bitcoin is objectively the best we have today as money and as a store of value.
hero member
Activity: 770
Merit: 629
March 18, 2015, 09:16:12 AM
A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)?

Of course it didn't change my mind!  A calculator is an external tool.  Now, we started this discussion assuming that our "external tools" became so terribly intelligent (and maybe sentient) that they might start having goals of their own (being sentient beings, and hence having "good" and "bad" sensations, which are the basis of all desires, goals and so on).  Since they would be much more intelligent than ourselves, we would probably not even notice their strategies (in the beginning), and those strategies would in any case be totally opaque to us.

Now, you are saying that in order to render us just as intelligent as our tools, we should use intelligent tools which are so intelligent that they take on a life of their own.  That begs the question, no?  The only way for US to be as intelligent as they are would be for us to be intrinsically that intelligent.  But that would mean that those "we" would be totally different from what we are now.

Quote
The process of understanding something (our apple of discord) is indeed different from calculating, but not far from it. An ability to understand faster and more sharply won't change your mind by any means. The difference will be only quantitative.

Of course it would.  It is even the essence of our being.  You are saying that a fish that could think like a human would still be a fish?  A fish that could do philosophy would still be a fish like its fellow fish?

If you are vastly more intelligent, of course your sensations, your desires, your good and bad experiences will be totally different.  A fish that can do philosophy will probably be bored to death in an aquarium!  It would be an entirely different sentient being.

Quote
Have you seen Limitless?

No.
hero member
Activity: 742
Merit: 526
March 18, 2015, 06:06:16 AM
The argument was that the sentient machines would be, first, more intelligent than humans, and, second, unpredictable (as a consequence of the first).

Yes.

Quote
Regarding new beings, it doesn't matter that these creatures will be as different from humans as the latter are different from fish, since *they will still be us*. For example, if you calculate with a calculator, does this process change your inner self somehow, despite the fact that your calculating capacity grows tremendously and in this respect you stop being "human"?

Isn't the highlighted part self-contradictory?  If they are totally different, how are they "us"?

You chose the wrong sentence to highlight. Did you read the next sentence? A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)? The process of understanding something (our apple of discord) is indeed different from calculating, but not far from it. An ability to understand faster and more sharply won't change your mind by any means. The difference will be only quantitative.

Have you seen Limitless?
hero member
Activity: 770
Merit: 629
March 18, 2015, 05:49:17 AM
The argument was that the sentient machines would be, first, more intelligent than humans, and, second, unpredictable (as a consequence of the first).

Yes.

Quote
Regarding new beings, it doesn't matter that these creatures will be as different from humans as the latter are different from fish, since *they will still be us*. For example, if you calculate with a calculator, does this process change your inner self somehow, despite the fact that your calculating capacity grows tremendously and in this respect you stop being "human"?

Isn't the highlighted part self-contradictory?  If they are totally different, how are they "us"?
hero member
Activity: 742
Merit: 526
March 18, 2015, 02:49:03 AM

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, never having first put them on a modified Ford-T, it would be a Ferrari?

In both of these cases, the end result will be a Ferrari (that's what you got); in fact, it will be the same Ferrari. As I said, it is the process by which you got what you got, and what you took as its basis, that matters in distinguishing between designing something anew and redesigning something that already exists.

Strictly speaking, you neither designed a new Ferrari nor redesigned an old Ford-T, right?

The point was: if a human is "redesigned" so much, or a new biological creature is "designed" such that we get a totally different organic body, why would we still consider it to be a "human"?

The argument was that if we succeed, indirectly, in designing more intelligent machines (by having them invent more and more intelligent machines themselves), we could also (re?)design human beings to become more and more intelligent.  However, my point was that in the end, the resulting creature would be as different from a human as humans are different from fish.  So why would we still consider those new beings "humans", and not consider ourselves "fish"?

In what way will those biological creatures be more "human" than the machines, which were ALSO initially designed on purpose by us?

The argument was that the sentient machines would be, first, more intelligent than humans, and, second, unpredictable (as a consequence of the first). Regarding new beings, it doesn't matter that these creatures will be as different from humans as the latter are different from fish, since *they will still be us*. For example, if you calculate with a calculator, does this process change your inner self somehow, despite the fact that your calculating capacity grows tremendously and in this respect you stop being "human"?

I think you are trying to endow your machines with abilities that are not just beyond our comprehension but also beyond what this universe can support.
hero member
Activity: 770
Merit: 629
March 18, 2015, 02:36:40 AM
To follow your hypothesis and make it repeatable, I would also have to smack your toes with a hammer and see your foot swell. Your solipsism becomes my empiricism. Humans have mirror neurons to assist with this process. Machines would need to simulate pain and empathy to test these hypotheses. Would that make them solipsistic? Would robots dream of electric sheep?

No, because under the hypothesis of (mutual) solipsism, I exist only as a figment of your imagination, while you exist only as a figment of mine.  There's no point in YOU wanting to repeat "experiments" that exist only in your own imagination but which you imagine correspond to "my imagination", right?  (Hmm, that gets weird.)
hero member
Activity: 770
Merit: 629
March 18, 2015, 02:34:38 AM

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, never having first put them on a modified Ford-T, it would be a Ferrari?

In both of these cases, the end result will be a Ferrari (that's what you got); in fact, it will be the same Ferrari. As I said, it is the process by which you got what you got, and what you took as its basis, that matters in distinguishing between designing something anew and redesigning something that already exists.

Strictly speaking, you neither designed a new Ferrari nor redesigned an old Ford-T, right?

The point was: if a human is "redesigned" so much, or a new biological creature is "designed" such that we get a totally different organic body, why would we still consider it to be a "human"?

The argument was that if we succeed, indirectly, in designing more intelligent machines (by having them invent more and more intelligent machines themselves), we could also (re?)design human beings to become more and more intelligent.  However, my point was that in the end, the resulting creature would be as different from a human as humans are different from fish.  So why would we still consider those new beings "humans", and not consider ourselves "fish"?

In what way will those biological creatures be more "human" than the machines, which were ALSO initially designed on purpose by us?

It is the end result that counts, I would say, and not the way to get there.
