
Topic: Artificial Intelligence (Read 984 times)

brand new
Activity: 0
Merit: 0
June 16, 2022, 03:50:42 AM
#18
I read an interesting article about types of intelligence and Howard Gardner’s book Frames of Mind: The Theory of Multiple Intelligences. Gardner concluded that there are nine intelligence types, each with its own manifestations and “smarts”, and he offers strategies for growing each intelligence type quickly and efficiently. A useful article with book reviews.
brand new
Activity: 0
Merit: 0
June 16, 2022, 03:47:39 AM
#17
Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from instinct of survival and self-preservation. This instinct comes from evolution.

I totally agree with this opinion.
newbie
Activity: 1
Merit: 0
April 18, 2022, 10:27:16 AM
#16
I think the definition of, and distinction between, real and artificial intelligence are not so important. The ability to use it is much more important. No matter how limited AI has been, we have still found plenty of areas for its application. Several articles clearly show the benefits of applying AI in everyday life. https://scilifestyle.com/category/artificial-intelligence/
legendary
Activity: 1078
Merit: 1003
March 18, 2013, 02:49:59 PM
#15
Designed by nature, or by man's hand, we are all just machines. So why should one be called artificial and the other real? Once computer intelligence is mastered, maybe the conversation *they* will have is "Will humans ever be intelligent?"


Good point Grin  Metal machines, organic machines, it's the same difference: a complex system of individual parts which come together to make one identifiable whole.  We're made from the matter and energy of the universe, and A.I. will come from that same matter.

If we designed an organic brain from the base up, would it still be A.I.?
legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 18, 2013, 01:55:58 PM
#14
Designed by nature, or by man's hand, we are all just machines. So why should one be called artificial and the other real? Once computer intelligence is mastered, maybe the conversation *they* will have is "Will humans ever be intelligent?"
legendary
Activity: 1078
Merit: 1003
March 18, 2013, 01:00:20 AM
#13
You would have to program the machine to have basic desire.  But I think, before A.I. ever becomes a thing, we need to first fully understand how our own minds function.  We're still working on that bit.

What kind of desires would a machine have?  That depends on how advanced you expect the A.I. to be.  If you're aiming for a machine which can fully emulate a human, with needs and wants, emotion, and limited thinking power (on top of the processes for everything else), you'll need to allow the machine complete freedom to do as it wants, and make it so inefficient that it requires food three times a day, lots of water, and sleep; otherwise it's only pretending by doing these things.  The desire ceases to be genuine if it's programmed to act this way.  If, on the other hand, you're only aiming to create a machine intelligent enough to perform tasks from verbal instruction, then you don't have to worry about any of that.  A machine that can see, hear, and react to both, whether inside a physical body or just as software, is very much within our reach.  Once we've accomplished that feat, it's only a matter of time until we want to give the machine thoughts and emotion.  AFAIC, a machine which can invent is the last stop in mirroring humankind: a machine that is aware it is a machine without prior programming to let it know that.
legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 17, 2013, 11:45:37 PM
#12
If we take the simulation argument into account (which is not a common prerequisite after all), then I have to put things a little differently.

My premise was also about evolution. So in this context I'd say in this simulation we apparently had plenty of time to go through evolution.

It's admittedly hard to define what the "artificial" part in "intelligence" actually means then. (I didn't invent that term  Smiley).

What I meant is contemporary AI, i.e. the current state of research, and that such AI (think chat bots) is not convincing, precisely because it lacks inherent motivation and self-directed learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.

It may very well be that one of the purposes to run simulations then is exactly to "breed" "artificial" intelligence, i.e. to get AI units that are faithful and convincing enough.

I agree that it would be simpler to give an artificial intelligence human experiences in order to get it to behave in a more human way, as opposed to manually programming those human traits.

My simulated evolution program was an experiment in artificial intelligence. I discovered that, whilst evolution works, it is very slow.

The reason why AI is still very basic is that we don't yet understand exactly how intelligence works in nature. Also, driving an intelligence comparable to a human's is going to take a lot of computational power.

I do not believe an evolutionary history is a fundamental prerequisite for intelligence.
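For readers curious what a simulated-evolution experiment like the one mentioned above might look like, here is a minimal sketch. This is not the poster's actual program; the bit-string genome, the fitness function, and every parameter are illustrative assumptions. Fitness is simply the number of 1-bits, and truncation selection plus rare point mutations slowly push the population toward the all-ones genome, which also illustrates how slow the process can be:

```python
import random

GENOME_LEN = 20      # bits per genome
POP_SIZE = 30        # individuals per generation
GENERATIONS = 200
MUTATION_RATE = 0.01 # per-bit flip probability

def fitness(genome):
    # Toy fitness: count of 1-bits; the optimum is all ones.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve(seed=42):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Truncation selection: keep the fitter half unchanged,
        # then refill the population with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(fitness(g) for g in population)
```

Because the best survivor is carried over intact each generation, best fitness never decreases, yet even on this trivial landscape it takes many generations of tiny mutations to approach the optimum.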
legendary
Activity: 1764
Merit: 1007
March 17, 2013, 09:12:25 PM
#11
If you believe it's possible we are living in a simulation, then you disagree with your own premise. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.

Quack!  Grin

If we take the simulation argument into account (which is not a common prerequisite after all), then I have to put things a little differently.

My premise was also about evolution. So in this context I'd say in this simulation we apparently had plenty of time to go through evolution.

It's admittedly hard to define what the "artificial" part in "intelligence" actually means then. (I didn't invent that term  Smiley).

What I meant is contemporary AI, i.e. the current state of research, and that such AI (think chat bots) is not convincing, precisely because it lacks inherent motivation and self-directed learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.

It may very well be that one of the purposes to run simulations then is exactly to "breed" "artificial" intelligence, i.e. to get AI units that are faithful and convincing enough.

Any software which has an internal state that can differentiate its actions (when compared to a copy of that software with a different state) and modify its own state is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software does not have the capability to reproduce or mutate of its own accord, it can do so with the assistance of humans.

For example, imagine an open-source speech-to-text program that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), there will have been some limited evolution through artificial selection.

Fast-forward 10 years into the future, and visualize the distant descendants of this software. When compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means that they use relatively few facets of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).

This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. It is a gradual process, but thanks to continued research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests that these traits and behaviours may include marketing one's species, maximizing income for one's developers, detecting and reporting one's own deficiencies, etc.

Thanks for the interesting case, but I guess AI for such a specific use case is not what I meant. If you mean it would develop over time into something much richer in expression, I'm not sure, because it will long remain very dependent on the environment we feed it. And that is not an optimal or sufficiently neutral condition for the thought model about AI that I intended in the OP.
legendary
Activity: 1246
Merit: 1077
March 14, 2013, 04:31:09 PM
#10

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs or cats came from artificial selection. If people use the AI software that is better than other AI software, this is evolution.

Which "AI software"? My premise is that there doesn't exist any yet which deserves that name. Also, it seems to me that this kind of "artificial selection" is detrimental. If the poodle were released into the wild again, it would be less likely to survive than the wolf.


Any software which has an internal state that can differentiate its actions (when compared to a copy of that software with a different state) and modify its own state is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software does not have the capability to reproduce or mutate of its own accord, it can do so with the assistance of humans.

For example, imagine an open-source speech-to-text program that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), there will have been some limited evolution through artificial selection.

Fast-forward 10 years into the future, and visualize the distant descendants of this software. When compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means that they use relatively few facets of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).

This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. It is a gradual process, but thanks to continued research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests that these traits and behaviours may include marketing one's species, maximizing income for one's developers, detecting and reporting one's own deficiencies, etc.
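The adoption-and-forking dynamic described above can be sketched as a toy simulation. Everything here is an illustrative assumption rather than a claim about real software ecosystems: a variant is reduced to a single "usefulness to humans" score, users abandon variants that fall too far behind the best one, and each generation a developer forks a surviving variant, nudging its usefulness slightly up or down:

```python
import random

def simulate(generations=100, seed=7):
    # Toy model of evolution through artificial selection of software.
    random.seed(seed)
    variants = [1.0]  # start with a single mediocre tool
    for _ in range(generations):
        best = max(variants)
        # Selection: users converge on the most useful variants;
        # anything far behind the best is abandoned.
        variants = [u for u in variants if u >= 0.8 * best]
        # Mutation: a developer forks a surviving variant, making it
        # slightly more or less useful.
        fork = random.choice(variants) + random.uniform(-0.1, 0.2)
        variants.append(fork)
    return max(variants)
```

Since the most useful variant always survives the filter, peak usefulness never decreases, and the occasional favourable fork ratchets it upward: selection by user adoption plus mutation by forking is enough for the lineage to improve, with no variant ever "wanting" anything.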
legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 14, 2013, 03:31:33 PM
#9
If you believe it's possible we are living in a simulation, then you disagree with your own premise. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.

Quack!  Grin
legendary
Activity: 1764
Merit: 1007
March 14, 2013, 01:37:00 PM
#8

Yes, I've written a program myself that simulates evolution. You can simulate anything. There are some who say we may be living in a simulation.


I'm not exactly unfamiliar.  Wink

However, I'm talking more about our own plane of existence. Did you release your artificial intelligences into the wild? Have they learnt enough about our contemporary culture not to be mistaken for "artificial" by fellow human beings?
legendary
Activity: 1386
Merit: 1000
English <-> Portuguese translations
March 14, 2013, 01:21:43 PM
#7
Well, the army has some bots with interesting AI, I hope.
legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 14, 2013, 12:11:58 PM
#6
You are making the false assumption that evolution cannot occur inside a computer.

You mean running simulations?

Yes, I've written a program myself that simulates evolution. You can simulate anything. There are some who say we may be living in a simulation.
legendary
Activity: 1764
Merit: 1007
March 14, 2013, 11:19:41 AM
#5
You are making the false assumption that evolution cannot occur inside a computer.

You mean running simulations?

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs or cats came from artificial selection. If people use the AI software that is better than other AI software, this is evolution.

Which "AI software"? My premise is that there doesn't exist any yet which deserves that name. Also, it seems to me that this kind of "artificial selection" is detrimental. If the poodle were released into the wild again, it would be less likely to survive than the wolf.

Not just preservation, growth.  Crystals grow, yet are not generally regarded as having "instincts".

Desire comes from growth?
legendary
Activity: 1330
Merit: 1000
March 14, 2013, 09:07:04 AM
#4
Desire comes from instinct of survival and self-preservation. This instinct comes from evolution. 

Not just preservation, growth.  Crystals grow, yet are not generally regarded as having "instincts".
legendary
Activity: 1246
Merit: 1077
March 14, 2013, 08:42:43 AM
#3
Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from instinct of survival and self-preservation. This instinct comes from evolution. 
 
Machines have none of these. That's why the concept of "Artificial Intelligence" is questionable: machines do not have any intrinsic desire to learn anything. They never experienced evolutionary pressure and never had to go through natural selection. The instinct of self-preservation would have to be programmed into them. Artificially. Fine, artificial self-preservation then. But I guess that if it works at all, it would essentially have to be a chaos-theoretic system, and the consequences of such an experiment would be unpredictable.

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs or cats came from artificial selection. If people use the AI software that is better than other AI software, this is evolution.
legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 14, 2013, 08:39:45 AM
#2
You are making the false assumption that evolution cannot occur inside a computer.
legendary
Activity: 1764
Merit: 1007
March 14, 2013, 08:29:27 AM
#1
Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from instinct of survival and self-preservation. This instinct comes from evolution. 
 
Machines have none of these. That's why the concept of "Artificial Intelligence" is questionable: machines do not have any intrinsic desire to learn anything. They never experienced evolutionary pressure and never had to go through natural selection. The instinct of self-preservation would have to be programmed into them. Artificially. Fine, artificial self-preservation then. But I guess that if it works at all, it would essentially have to be a chaos-theoretic system, and the consequences of such an experiment would be unpredictable.