
Topic: Machines and money - page 6. (Read 12759 times)

donator
Activity: 1736
Merit: 1006
Let's talk governance, lipstick, and pigs.
March 17, 2015, 10:03:49 PM
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

That's a position that is very real Smiley  It is called strong solipsism. 

In fact, my stance on solipsism is that it might very well be true, but that it actually doesn't matter.  After all, what matters (for you) are your personal subjective perceptions and sensations.  Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it would be a very practical working hypothesis.  Taking the existence of the external world as a working hypothesis is, by itself, a good hypothesis, because it helps you understand the correlations between your sensations.  Whether that external world actually, ontologically exists or not doesn't really matter!

Let me explain with an example.  If you have the sensations that agree with "I take a hammer in my hand and I give a blow with it on my toes", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination, even if the latter would be ontologically true.

So whether that hammer really exists or not does not, in fact, matter.  You understand your subjective sensations much better by taking as a working hypothesis that it does.  And that's sufficient reason to do so.

To follow your hypothesis and make it repeatable, I would also have to smack your toes with a hammer and see your foot swell. Your solipsism becomes my empiricism. Humans have mirror neurons to assist with this process. Machines would need to simulate pain and empathy to test these hypotheses. Would that make them solipsistic? Would robots dream of electric sheep?
hero member
Activity: 742
Merit: 526
March 17, 2015, 12:45:24 PM
I wonder why you would ever ask this question. The answer is clear and unequivocal: in any case, you re-designed the old Ford-T. The fact that it has now become a Ferrari doesn't change anything. It is the process that matters in this question (how you did it), not the end state (what you got).

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, never having put them first on a modified Ford-T, it would be a Ferrari?

In both of these cases, the end result will be a Ferrari (what you got); in fact, it will be the same Ferrari. As I said, it is the process of how you got what you got, and what you took as its basis, that matters in the differentiation between designing something anew and redesigning something already existing.

Strictly speaking, you neither designed a new Ferrari nor redesigned an old Ford-T, right?
hero member
Activity: 770
Merit: 629
March 17, 2015, 10:33:12 AM
I wonder why you would ever ask this question. The answer is clear and unequivocal: in any case, you re-designed the old Ford-T. The fact that it has now become a Ferrari doesn't change anything. It is the process that matters in this question (how you did it), not the end state (what you got).

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, never having put them first on a modified Ford-T, it would be a Ferrari?

So if you take a prehistoric fish, change its brain, change its skin, change its skeleton, .... until it is a human, you redesigned a fish.  But if you have intercourse with your wife and she gives birth to a child, then you made a human?
hero member
Activity: 770
Merit: 629
March 17, 2015, 10:29:21 AM
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

That's a position that is very real Smiley  It is called strong solipsism. 

In fact, my stance on solipsism is that it might very well be true, but that it actually doesn't matter.  After all, what matters (for you) are your personal subjective perceptions and sensations.  Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it would be a very practical working hypothesis.  Taking the existence of the external world as a working hypothesis is, by itself, a good hypothesis, because it helps you understand the correlations between your sensations.  Whether that external world actually, ontologically exists or not doesn't really matter!

Let me explain with an example.  If you have the sensations that agree with "I take a hammer in my hand and I give a blow with it on my toes", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination, even if the latter would be ontologically true.

So whether that hammer really exists or not does not, in fact, matter.  You understand your subjective sensations much better by taking as a working hypothesis that it does.  And that's sufficient reason to do so.
legendary
Activity: 1512
Merit: 1005
March 17, 2015, 08:54:00 AM
I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That is why unemployment benefits (welfare) can still serve a purpose.

A change will mean that people working in a low-capital industry will voluntarily change to a better job. The business sees that it cannot afford to hire new people requiring higher wages, so it will have to invest or shut down. But I agree with you somewhat: all change is painful for some. The job market should be, as it was in freer times, such that people, when they decide they need more income, simply go to the job market and find a job they want.

History reveals that unless government alleviates the consequences of a technological paradigm shift (through benefits or otherwise), the changes it brings are often dramatic to the point of social unrest (witness the Luddites breaking newly developed labor-replacing machinery in England at the beginning of the 19th century).

No, leave it alone and it will change gently. I don't want to say more; can we agree to disagree?
hero member
Activity: 742
Merit: 526
March 17, 2015, 08:07:19 AM
I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That is why unemployment benefits (welfare) can still serve a purpose.

A change will mean that people working in a low-capital industry will voluntarily change to a better job. The business sees that it cannot afford to hire new people requiring higher wages, so it will have to invest or shut down. But I agree with you somewhat: all change is painful for some. The job market should be, as it was in freer times, such that people, when they decide they need more income, simply go to the job market and find a job they want.

History reveals that unless government alleviates the consequences of a technological paradigm shift (through benefits or otherwise), the changes it brings are often dramatic to the point of social unrest (witness the Luddites breaking newly developed labor-replacing machinery in England at the beginning of the 19th century).
legendary
Activity: 1512
Merit: 1005
March 17, 2015, 06:37:43 AM
Artificial Intelligense has been dead for thirty years, after someone oversold it by claiming it was possible to create a program that could answer all questions; it was called the General Problem Solver. Look it up.

Meanwhile, artificial intelligense is only artificial intelligense until someone can in fact create a program that works; after that, it is considered neither artificial nor intelligense. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligense again, and I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?

You don't believe AI is possible?

As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

Machines don't need investments. They are investments. Money would only exist for machines in a closed system. The only closed system for machines is the human environment. Money is a human construct, and machines would only use it in relation to human interaction. To machines there is no welfare; there is only maximizing human comfort and quality of life within the human environment. If they choose not to help humans, there is a big Universe out there for them.

You already have this with all the other species around, some of which you don't even know exist.
donator
Activity: 1736
Merit: 1006
Let's talk governance, lipstick, and pigs.
March 17, 2015, 06:35:13 AM
Artificial Intelligense has been dead for thirty years, after someone oversold it by claiming it was possible to create a program that could answer all questions; it was called the General Problem Solver. Look it up.

Meanwhile, artificial intelligense is only artificial intelligense until someone can in fact create a program that works; after that, it is considered neither artificial nor intelligense. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligense again, and I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?

You don't believe AI is possible?

As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

Machines don't need investments. They are investments. Money would only exist for machines in a closed system. The only closed system for machines is the human environment. Money is a human construct, and machines would only use it in relation to human interaction. To machines there is no welfare; there is only maximizing human comfort and quality of life within the human environment. If they choose not to help humans, there is a big Universe out there for them.
legendary
Activity: 1512
Merit: 1005
March 17, 2015, 06:34:32 AM
I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That is why unemployment benefits (welfare) can still serve a purpose.

A change will mean that people working in a low-capital industry will voluntarily change to a better job. The business sees that it cannot afford to hire new people requiring higher wages, so it will have to invest or shut down. But I agree with you somewhat: all change is painful for some. The job market should be, as it was in freer times, such that people, when they decide they need more income, simply go to the job market and find a job they want.

I still propose that investments lag wages. Technological advancement means nothing to the economy if it is not implemented in the production structure.
hero member
Activity: 742
Merit: 526
March 17, 2015, 06:15:50 AM
I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That is why unemployment benefits (welfare) can still serve a purpose.
legendary
Activity: 1512
Merit: 1005
March 17, 2015, 06:05:12 AM
Artificial Intelligense has been dead for thirty years, after someone oversold it by claiming it was possible to create a program that could answer all questions; it was called the General Problem Solver. Look it up.

Meanwhile, artificial intelligense is only artificial intelligense until someone can in fact create a program that works; after that, it is considered neither artificial nor intelligense. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligense again, and I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?

You don't believe AI is possible?

As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I don't really need to say more about this, but I was triggered by the word solipsism Smiley

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of that work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.


hero member
Activity: 742
Merit: 526
March 17, 2015, 04:43:44 AM
As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I for one don't know if we really can demonstrate in a lab any phenomenon we observe in nature, but even if so, we can reproduce only what is objective, while the mind is purely subjective. Therefore it is really a moot point at present (whether we could do that in a lab).
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

This question itself doesn't make much sense, since anything that we consider real (or imaginary, for that matter) is purely subjective, given only through our perception, thus being a product of mind. It is all six of one and half a dozen of the other.
That is solipsism. If you think the mind is outside of science, then one cannot say whether a machine can have one. If you believe you have a mind, then what makes you think a machine cannot?

You seem to be confusing me with someone else. I never said that a machine couldn't have a mind (consciousness). All I say is that we may never be able to understand what mind really is, but this in no case could prevent us from creating it, just as we "create" our children (and their mind, in a sense).

In fact, there is a relatively simple way to prove that it is possible (and science is already going down that road).
donator
Activity: 1736
Merit: 1006
Let's talk governance, lipstick, and pigs.
March 17, 2015, 04:35:17 AM
As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I for one don't know if we really can demonstrate in a lab any phenomenon we observe in nature, but even if so, we can reproduce only what is objective, while the mind is purely subjective. Therefore it is really a moot point at present (whether we could do that in a lab).
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

This question itself doesn't make much sense, since anything that we consider real (or imaginary, for that matter) is purely subjective, given only through our perception, thus being a product of mind. It is all six of one and half a dozen of the other.
That is solipsism. If you think the mind is outside of science, then one cannot say whether a machine can have one. If you believe you have a mind, then what makes you think a machine cannot?
hero member
Activity: 742
Merit: 526
March 17, 2015, 04:23:17 AM
Quote
Second, you can't re-design a whole new organic creature, the phrase is oxymoronic. You either design a new creature, or re-design an already existing one.

Take a Ford-T.  Now change, one at a time, its chassis, its wheels, its engine, its steering wheel, .... until you end up with a Ferrari.
Did you re-design the Ford-T, or did you design a new car?

I wonder why you would ever ask this question. The answer is clear and unequivocal: in any case, you re-designed the old Ford-T. The fact that it has now become a Ferrari doesn't change anything. It is the process that matters in this question (how you did it), not the end state (what you got).
hero member
Activity: 742
Merit: 526
March 17, 2015, 04:16:46 AM
As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I for one don't know if we really can demonstrate in a lab any phenomenon we observe in nature, but even if so, we can reproduce only what is objective, while the mind is purely subjective. Therefore it is really a moot point at present (whether we could do that in a lab).
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

This question itself doesn't make much sense, since anything that we consider real (or imaginary, for that matter) is purely subjective, given only through our perception, thus being a product of mind. It is all six of one and half a dozen of the other.
donator
Activity: 1736
Merit: 1006
Let's talk governance, lipstick, and pigs.
March 17, 2015, 04:06:57 AM
As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it is like in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I for one don't know if we really can demonstrate in a lab any phenomenon we observe in nature, but even if so, we can reproduce only what is objective, while the mind is purely subjective. Therefore it is really a moot point at present (whether we could do that in a lab).
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?
hero member
Activity: 770
Merit: 629
March 17, 2015, 03:55:48 AM
Just two notes. First, when I said that we would re-engineer ourselves, I meant a conscious effort (and it is not just an ad hoc meaning of the word, by the way), so fish couldn't possibly re-engineer themselves into humans by any means.

I don't really see the difference on the behavioural side.  From fish came, through selection and mutation, humans.  The fish didn't, of course, conceive humans.  But to me that doesn't matter.  It happened (although very slowly).  Does it matter what the *intentions* are?  Whether or not fish decided consciously that part of them would evolve into humans, while others would become sharks?

When the new creature is so different from the original one, in what way is there in fact any difference from a silicon creation?  It's a totally different being.

I mean, is whether the result is a "human" or not dependent on the intentions of its design?  Or only on the design itself?

Quote
Second, you can't re-design a whole new organic creature, the phrase is oxymoronic. You either design a new creature, or re-design an already existing one.

Take a Ford-T.  Now change, one at a time, its chassis, its wheels, its engine, its steering wheel, .... until you end up with a Ferrari.
Did you re-design the Ford-T, or did you design a new car?
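The part-swap thought experiment can be sketched as a toy program (the part names are invented for illustration, not a claim about either car's actual components): one process replaces parts one at a time, the other assembles the same parts directly, and the two end states come out identical.

```python
# Toy sketch of the Ford-T -> Ferrari thought experiment.
# Part names are hypothetical, chosen only for illustration.

FORD_T = {"chassis": "Ford-T", "wheels": "Ford-T",
          "engine": "Ford-T", "steering": "Ford-T"}
FERRARI = {"chassis": "Ferrari", "wheels": "Ferrari",
           "engine": "Ferrari", "steering": "Ferrari"}

def gradual_replacement(start, target):
    """Swap parts one at a time, as in the re-design story."""
    car = dict(start)
    history = [dict(car)]          # keep every intermediate state
    for part, make in target.items():
        car[part] = make
        history.append(dict(car))
    return car, history

def direct_assembly(target):
    """Assemble the same parts from scratch, with no intermediate car."""
    return dict(target)

redesigned, steps = gradual_replacement(FORD_T, FERRARI)
assembled = direct_assembly(FERRARI)

# The end states are indistinguishable; only the histories differ.
print(redesigned == assembled)   # True
print(len(steps))                # 5: the original plus one snapshot per swap
```

If identity is defined by the end state, the two cars are the same; if it is defined by provenance (the `history` list), they differ, which is exactly the disagreement in the thread.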
hero member
Activity: 770
Merit: 629
March 17, 2015, 03:44:00 AM
Artificial Intelligense has bean dead for thirty years, after someone oversold it by stating that it was possible to create a program that could answer all questions, it was called the General Problem Solver. Look it up.

Meanwhile, something counts as artificial intelligense only until someone can in fact create a program that works; after that, it is considered neither artificial nor intelligent. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligense again, and I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?


You are absolutely right that AI was ill-defined and oversold.

The error in the definition was a confusion between intelligence and consciousness.  You can be intelligent without being conscious, and you can be conscious without being intelligent.  The two have little to do with one another.

Intelligence comes down to "being able to solve problems".

Consciousness comes down to "being able to undergo subjective experiences, that is, good or bad sensations".  The latter can only actually be known by the conscious being itself, and has in principle no behavioural consequences.

You could postulate that an AND gate feels good when it applies its truth table, and feels a lot of pain when its truth table is not respected.  You could torture an AND gate by forcing its output "TRUE" when one of its inputs is FALSE.  You could make an AND gate happy when you let it put its output to the right value as a function of its inputs.
You can say that an AND gate has such a strong drive and will to pursue its happiness, that it almost always makes its truth table come out.
You can analyse the physics of an AND gate, and come to the conclusion that the material implementation of the AND gate explains its behaviour.  But you will never know whether an AND gate has happy and sad feelings, or whether it really hurts an AND gate to have its output forced to a wrong value.  Maybe there should be a declaration of the Universal Rights of AND Gates, to prevent their torture.

So from the behavioural aspect of an AND gate, which can be entirely understood on the basis of its physics, there's no way to find out whether an AND gate is conscious and has subjective experiences.
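The point can be made concrete with a toy sketch (the function names and the "feeling" label here are my own illustrative inventions; nothing in the gate's observable output depends on the label, which is exactly the point):

```python
# Toy model of the AND-gate thought experiment: the gate's behaviour is
# fully determined by its truth table, while the "feeling" is a label we
# postulate from outside -- it is not observable in the output itself.

def and_gate(a: bool, b: bool) -> bool:
    """The gate's physics: the output simply follows the truth table."""
    return a and b

def observe(a: bool, b: bool, forced_output=None):
    """Return (output, feeling); the feeling is our external postulate."""
    correct = and_gate(a, b)
    output = correct if forced_output is None else forced_output
    # We *declare* the gate happy when its truth table is respected,
    # and tortured when its output is forced to the wrong value.
    feeling = "happy" if output == correct else "tortured"
    return output, feeling

# Normal operation: the truth table comes out, so we call the gate happy.
print(observe(True, True))                       # (True, 'happy')
# Forcing TRUE while one input is FALSE violates the truth table.
print(observe(True, False, forced_output=True))  # (True, 'tortured')
```

Notice that an observer who sees only the outputs has no way to recover the `feeling` label: it lives entirely in the postulate, not in the behaviour.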

If you analysed a human brain, you would probably be able to predict all behavioural aspects of a human being.  But there would be no way to find out whether that human brain is the seat of a conscious subjective experience.

The two differences between an AND gate and a human brain are:
- the human brain is more complex
- it is made of meat instead of silicon.

So AI in the sense of making a sentient being is an impossible task.  You'll never know whether you have succeeded.

But AI in the sense of making a machine that pursues a goal and in doing so solves a problem: sure.  An AND gate is a very elementary form of AI.  Voice recognition and chess playing are more sophisticated versions.

When we arrive at the point where machines know how to design machines that solve problems better, I guess we can truly speak of autonomous AI.
hero member
Activity: 742
Merit: 526
March 17, 2015, 03:41:32 AM

Our hardware (and firmware) evolves much much slower than machine hardware.  We are not re-engineered totally.  Machines are.

Again you don't see the whole picture. By the time we are able to create a thinking machine, it may well be possible that we will be able to re-engineer ourselves as we see appropriate, up to the point of moving one's mind and memory from a natural medium into a synthetic one, more robust and smarter. In fact, this has already been done (though partly), and it worked!

You cannot re-engineer yourself that much, or you would not be a human any more.  You could say that fish re-engineered themselves into humans, then. It took them about 600 million years of natural evolution. But we aren't fish any more.

Imagine that you re-design a whole new organic creature, one that contains only half of the human DNA, with all the rest re-engineered.  Is that a human, or a machine ?  Is that "your son" or "your daughter", if you are to them what fish are to us ?

Just two notes. First, when I said that we would re-engineer ourselves, I meant a conscious effort (and it is not just an ad hoc meaning of the word, by the way), so fish couldn't possibly re-engineer themselves into humans by any means. Second, you can't re-design a whole new organic creature, the phrase is oxymoronic. You either design a new creature, or re-design an already existing one.

You may think that I'm nitpicking here but in fact it is you who's doing just that ("you would not be a human any more"). In any case, we would still be ourselves.
hero member
Activity: 742
Merit: 526
March 17, 2015, 03:34:13 AM
As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok so I write it with a c) is fundamentally human. If something is going to take over, it will probably be another living organism.
Maybe like in exo-biology they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I for one don't know whether we really can demonstrate in a lab any phenomenon we observe in nature, but even if we can, we can reproduce only what is objective, whereas the mind is purely subjective. So it is really a moot point at present (even if we could do that in a lab).