Author

Topic: GAI (General Artificial Intelligence) will it include Emotional Intelligence? (Read 279 times)

copper member
Activity: 2338
Merit: 4543
...emotional intelligence?! ...emotional intelligence too? ...emotional intelligence

Jumbo shrimp.  Liquid gas.  Deafening silence.

Don't be afraid.  Biological organisms are so passé.  Humanity's next evolutionary hurdle has already begun. 
legendary
Activity: 2100
Merit: 1167
Well, what is even the point of giving AI human-level emotions? Why would we want them to have emotional intelligence? What exactly would it achieve, giving AI emotions? I'm all for artificial intelligence, but emotional intelligence is the last thing I envision or want it to have.

I am more interested in whether it would develop emotion after becoming self-aware.

Without emotion, will it not just be a powerful probability calculator? And acting toward what end? What will it logically/mathematically "decide" it wants? What is there anyway? Become emperor of the universe, and then what?

Will it be able to handle not knowing the absolute point of anything, or the absolute source of its origin, or what happens in the end, if it has an end?


I still have no idea if you can have an alien version of intelligence or learning in real terms. Problem solving can have many routes, but all will have used some form of trial and error, or probability models based on previous trial and error; it just depends where you start from and which angles you try first. The process will be the same: you encounter a problem, you rely on previous experience, you estimate the probability of it being relevant and test that, or get creative, change a few variables, and re-test your hypothesis.
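Something like this toy sketch is what I mean (Python; the reward odds and the 10% "creativity" rate are made up for illustration, not from any real system): rely on previous experience most of the time, occasionally change a variable and re-test, and keep updating your beliefs from the feedback.

Code:
import random

# Trial-and-error learning as an epsilon-greedy bandit (illustrative only).
TRUE_REWARD_PROB = [0.2, 0.5, 0.8]   # hypothetical odds that each approach works
EPSILON = 0.1                        # how often we "get creative" and try something new

estimates = [0.0] * len(TRUE_REWARD_PROB)  # beliefs built from previous trials
counts = [0] * len(TRUE_REWARD_PROB)

for trial in range(10_000):
    if random.random() < EPSILON:
        action = random.randrange(len(estimates))  # change a variable and re-test
    else:                                          # rely on previous experience
        action = max(range(len(estimates)), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # update belief

print(estimates)  # converges toward the true odds; the best approach wins out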

I expect that if you can maintain control of the AI, it would be interesting to give one emotions and observe it, and then see whether another develops emotions on its own, if it does at all.

I hope it does not happen in my lifetime. I suspect that on its way to obtaining emotions and desiring co-operation and social groups, fear and the subsequent self-protection could cause a few issues for us.

Why are you all for it, if I may ask?

newbie
Activity: 68
Merit: 0
Well, what is even the point of giving AI human-level emotions? Why would we want them to have emotional intelligence? What exactly would it achieve, giving AI emotions? I'm all for artificial intelligence, but emotional intelligence is the last thing I envision or want it to have.
jr. member
Activity: 196
Merit: 4
It is possible that tech companies may eventually create an AI robot that can emulate some sort of human emotion; however, I am not sure it will be sensitive enough to communicate the correct emotions to us, because there are certain emotions that only two humans can understand, such as love, hate and compassion.
legendary
Activity: 2100
Merit: 1167
Having read through the replies a few times there are interesting points to consider.

So DNA dictates, during the physical development of the brain, a structure that itself dictates certain behavioural traits. To a degree that makes sense, because people born with certain brain defects/differences (observable via scans) show very different behaviour and very different degrees of intelligence and emotion. So the structure is there as a propensity toward certain behavioural traits, and I believe genes responsible for certain behavioural traits have been found. So that could make sense; it's a bit like filling in the blanks of an already structured answer.

Also, yes, it is strange that there is a period of extreme sensitivity/openness to input, as was said, when things like language are easy to learn, while later on they seem far more difficult. For instance, if you have not learned language at all by 20 years old, having been feral, then your brain has had to find other ways of making sense of things, with complex thought that does not use words. Trying to think without using language leaves you only, perhaps, picturing things in your mind like a comic strip. That, I think, takes a lot more processing power and is a lot less efficient.

Perhaps your mind adapts to form pathways as best it can, to exist using that kind of pictorial system. Perhaps somehow having those pathways there makes it hard to then start a more language-dominated, thinking-in-words type of system. I don't know. Or perhaps it is somehow related to a positive and negative reinforcement system, through dopamine and hormones etc., as suggested.

I agree the safest way could be to map a human mind (an adult mind considered stable and well-rounded, with an altruistic type of personality strong in loyalty and empathy) and enhance from there.

This is all very interesting to me, as I would imagine it is to much of this board.

Some things I have always wondered about that are related to this...

What makes one person more intelligent than another in a general way? Say you gave one person several different types of problems to solve, each targeting and requiring a different approach. Subject A is able to solve all of them and subject B cannot solve any. So clearly subject A is to be recognised as more intelligent. Let's not consider time to solve (within sensible time frames), just enough to see that one is able to solve them and the other is not, with no progress after a year.

So, not counting time: why can one solve these problems while the other cannot? Is this a hardware issue? I don't think one can study past the limits of the hardware allocated to them. I have tried to break down, step by step, my own failure to grasp certain concepts that others could grasp.

I started to suspect this is a RAM issue. It is not as if every problem can be solved in a larger number of smaller steps. Some steps leading to an understanding of a problem are like puzzles that cannot be broken into smaller stages... or maybe they can, but I have not found a way to do it like that.

So imagine a jigsaw puzzle with 2000 pieces. They are in a bag, and when you take them out you have nowhere to place them; you can only hold them in your hands in front of you and look at them, one piece per hand. So to properly solve this puzzle and see the full solution you would need 2000 hands. You could perhaps get a rough idea of the final picture/solution with 1000 hands if all the vital pieces were manipulated into an optimal position/orientation, though there would be problems so complex they might require 100% accuracy for all 2000 pieces. With only, say, 50 hands it would be very tricky, or perhaps it could be done with very accurate recall from longer-term memory and lots of attempts.

For me, when someone is explaining something quite complex, for a few steps I am there doing fine as they describe it. Then, suddenly, I am holding more and more of what they are saying in a state where I can interact with it (especially if adding more steps has several knock-on effects on previous steps, so the entire thing is changing as more steps are added), and I soon start forgetting those previous steps and getting confused.

But for doing simpler things quickly, there seems to be no issue with me being faster than them. So, as I started to suspect, I could have the mind of a reasonably fast single-threaded CPU with middling/low RAM. They could perhaps be a slightly slower CPU with far more RAM.
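To make that analogy concrete, here is a rough sketch (Python; the slot counts and the least-recently-used eviction rule are my own made-up model, nothing measured): once a problem needs more pieces held at once than you have slots, nearly every step falls back on slow recall from long-term memory.

Code:
from collections import OrderedDict

def slow_fetches(n_pieces: int, slots: int, rounds: int = 10) -> int:
    """Count long-term-memory fetches when a task needs all n_pieces
    again and again but only `slots` can be held at once (LRU eviction)."""
    held = OrderedDict()   # working memory: piece -> None
    fetches = 0
    for _ in range(rounds):
        for piece in range(n_pieces):
            if piece in held:
                held.move_to_end(piece)       # already in hand
            else:
                fetches += 1                  # slow recall from long-term memory
                held[piece] = None
                if len(held) > slots:
                    held.popitem(last=False)  # drop least recently used piece
    return fetches

for hands in (2000, 1000, 50):
    print(hands, "hands ->", slow_fetches(2000, hands), "slow fetches")

With 2000 hands you fetch each piece exactly once; with any fewer, this access pattern makes almost every step a slow fetch. That cliff is roughly what I mean.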

Anyway, that is not really important, but what I was thinking is: is that a hardware issue? So if the place where RAM-dependent problems are primarily worked on was not as developed, or maybe the pathways installed inside the hardware via input were not installed efficiently enough... is that a kind of software issue that could have been coded differently through different input along the way?

Maybe one should not try to understand the human mind in terms of basic computer design.

So let's suppose there is somewhere that the RAM-heavy tasks are done. If it were possible to interface with it and expand the potential and slots there, then enhanced human intelligence should increase quite dramatically.

That may be the safest way forward.

The process of mapping a human mind and reproducing it could be quite tricky, I was thinking. I mean, the best candidate would be someone stable and proven to have an altruistic, empathetic nature, fully developed (say, 35 years old).

How would we try to do it? Making copies of people would be terrifying for anyone. You read these books where somehow people had their entire consciousness/memories uploaded to a machine/cloud, where they were for all intents and purposes that person (to themselves, anyway), just now existing in either a virtual world or a cyborg/machine in this world. I don't understand how it was considered OK to destroy the old version and just become the new version. I would expect the old version would start to have second thoughts.

If it would be beneficial to move from organic form to machine or virtual, it could be possible not to end up with two versions. I was also wondering (based on something I read somewhere a while back) whether, if nanotech could produce nanobots/machines/alternatives to organic matter that could function and indeed interface with brain matter (neurons etc.), you could perhaps remain conscious during a process whereby your organic matter was replaced by the nano alternative, without at any time losing consciousness, so that you would know you were actually still the original you, not a copy. That may attract more people than having a copy made.

The one thing that I still cannot get is how there could be human intelligence and a genuinely alternative intelligence in terms of problem solving. You would still think there is no form of learning other than using the data you are holding to solve new problems... it would still be a case of trial and error with things you considered relevant, and statistically analysing, at some level, the feedback from those trials. Of course, depending on hardware and other factors like creativity (changing variables that you imagine could be related and trying those), luck, etc., the speed of learning could vary, but there seems to be only one way to learn.

Are you really learning or inventing anything? Well, you are, but really you are just, through trial and error, discovering your options within a set of rules.

An AI, I wonder, would still have to operate within these same rules, and may discover, faster and more clearly than humans, that co-operation and everything you can experience through it is much more fun than... well, the alternative.

That co-operation could take many forms, but I would hope the same things that evolved in our hardware to enable co-operation, and all the benefits it has brought us as a species, may develop in AI too. Let's hope they reach that level before they decide what to do with us for good. Unless, of course, the only AI or enhanced intelligence is derived directly from humans who want to move to the next stage.

Ha, sorry, that all just blurted out in 15 minutes of hammering on my keyboard. If it is mostly off topic I guess it can be moved or deleted.
legendary
Activity: 2590
Merit: 2156
I'm not sure about that; child developmental science is really primitive and speculative, mainly because we can't experiment on children in good conscience, but a lot of the speculation about what happens during early brain development in abnormal conditions comes from outlying cases, such as feral or abused children, which do occur. For example, if children don't begin to speak by a certain relatively early age, they lose the ability to learn language, which has been observed in those cases. I see that as cause to believe that we don't have any mental processes built into our hardware, just that our hardware is incredibly adaptable until the point that it's not.

This is kind of getting into the nature versus nurture debate, but I personally believe that nurture has a much greater impact on behavioral development. Nature might play a part as far as genetic influences go, such as hereditary imbalances of hormones. A lack of serotonin or something to that effect will have an effect on your instinctual actions, but correction, or lack of correction, creates your behavior.

I don't think emotional intelligence is something that we can program AI to have, because it's not something we completely comprehend ourselves. We can probably all agree that emotions have to do with the brain and chemical signals in the brain. But learned behavior can affect those chemical signals as well. I'm not sure how one would program an AI to receive dopamine when it conforms to societal expectations and does something "good", the same way that humans do from a learned behavior.
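A crude sketch of the mechanical half of that (Python; the behaviors and their approval scores are a hand-written table I made up, which is exactly the part nobody knows how to specify for a real society): wiring up a scalar reward as a stand-in for dopamine is easy; deciding what counts as "good" is the unsolved bit.

Code:
import random

# Hypothetical "societal approval" scores. Writing this table correctly
# is the hard, unsolved part; the learning loop below is trivial.
APPROVAL = {"share": 1.0, "hoard": -0.5, "idle": 0.0}

preference = {b: 0.0 for b in APPROVAL}  # learned tendency per behavior
LEARNING_RATE = 0.05

def choose() -> str:
    # favor behaviors with higher learned preference (softmax-style)
    weights = [2.718 ** preference[b] for b in APPROVAL]
    return random.choices(list(APPROVAL), weights=weights)[0]

for step in range(5_000):
    behavior = choose()
    reward = APPROVAL[behavior]                     # the "dopamine" signal
    preference[behavior] += LEARNING_RATE * reward  # reinforce or suppress

print(preference)  # "share" ends up strongly preferred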
administrator
Activity: 5222
Merit: 13032
I had also been wondering what is actually there in a human brain before programming starts.

It's debatable, but IMO:

Before any learning occurs, the human brain starts out like 95% complete. If a brain was born into a jar without any inputs, then it would not really function, since it'd still be missing 5%; but still, the vast majority of all mental processes are built into your hardware, not learned. A clear example is language acquisition: only a tiny minority of the population has a good idea of what an adjective actually is, for example, yet almost every child is somehow "taught" how to use adjectives ~perfectly. Grammar is built into your hardware, and you just need to fill in the details during your first few years of life. The same is true of things like compassion and generosity, though I think that "default" human compassion only applies to those who you perceive as belonging to your in-group, and universal compassion is more learned. (Hunter-gatherers tended to believe that their tribe's members were the only true humans, and all others were mere animals.)

Children aren't really taught how to think or behave: they already know intuitively how to do that in almost all cases, and they just need relatively few nudges to slightly shape the behavior they were born with. In the best case, an AI would be a true blank slate, which nobody has ever had to teach before, or more likely it'd have a built-in nature as an accidental side effect of its programming which would make its thinking incompatible with human values from the start.
legendary
Activity: 2100
Merit: 1167
I've often wondered about this.

I suspect that with powerful intelligence, perhaps empathy (the ability to reverse the situation and place one's perceived self in another's position) and perhaps other emotions will naturally develop.

This low-level pure logic/math intelligence may be just the initial stage of AI, before it becomes automatically corrupted/enhanced by emotion, self-awareness, ego, etc.


Then again, it could be just the emergent stages of a cold, calculating psychopathic entity.

Yes, in some cases I suspect that is certainly how it will emerge. There will be good and bad AI, or rather simply AIs that are held back from optimal decisions/actions to varying degrees by varying degrees of emotion, ego, etc. Then again, even what is optimal to each different AI will be the result of so many factors that it will be interesting to see predictive models as they progress, before we are toast (if that's how it goes).

I am actually hoping they become more empathetic and fair than humans are, but that is probably quite optimistic and perhaps even statistically unlikely. I don't know. Maybe it is beyond humans at this time to predict how AI will go. Perhaps it will never actually happen.

-- edit: just read your post, @theymos

That is quite disappointing to hear.

I had also been wondering what is actually there in a human brain before programming starts.

For instance, if all sensory input was cut off, or never activated at all, on a human even before birth, would the brain just be blank? (Let's assume enough processes worked for survival, just for debate's sake.) I know there are some innate, or as we may call them instinctive, hardwired pathways, but has anyone found out how they are put there? Like the DNA responsible for forming them genetically, if they are not formed through input? Just another thing I had wondered about. I just mean: if a human could exist with all senses cut off, what would the progress, if any, be? Perhaps just a strange thing to wonder about. Do we need to be programmed via our senses to have any development? Would we stay blank forever, with just a few DNA-coded pathways? Or are there enough innate hardwired DNA pathways to start forming new pathways without the need for new data coming in via the senses?

Or do you mean that the structure of the brain alone guides a certain way of learning and thinking/acting?

Is all possible learning conducted via the hypothetico-deductive method... or, even if not intentional, just trial and error and the perceived results of such? What other forms of learning are there (apart from being coded in by a third party)?

I had just thought that, as they first become self-aware, everything (learning) would have to follow the same method as human learning.

Actually, I guess that is why I brought up the human with no sensory input permitted. I was thinking that if machine and human start with nothing (or little), and only data fed in can be processed via trial and error (learning), once the AI is no longer just spoon-fed by a third-party non-AI machine, then I was hoping it should more or less take the same path, i.e. forming emotions along the way.

I can't think of any other way to learn things than to speculate or hypothesise, test it out, and build on from there, for machines or for us humans. Of course there would be accidental events that could reveal things you were not even going to consider, and varying factors or tweaks (creativity).

Again, I was meaning to read up on this on the net at some point, as much as I could, since I had often wondered about it as I dropped off to sleep at night.

I have no actual reading hours or education on this matter, so I was just speculating as I went along.

administrator
Activity: 5222
Merit: 13032
The first GAI might come from completely mapping a human brain and simulating it on a computer. In that case, it would behave like a human.

But if GAI comes about through any other method, then it's not likely to have any emotions, honor, compassion, etc. For that matter, it probably also wouldn't have thoughts such as fear, ambition, or greed, at least not in the human sense. These things evolved in human brains due to a need to effectively cooperate and/or compete with other members of our species. Intelligence does not imply any of those things. If a GAI was somehow accidentally created, you definitely couldn't trust it, and you'd have no way of predicting what it actually wanted, since its entire way of looking at the world would be totally alien.

Since it'd obviously be bad to create a super-intelligent psychopath, there are several organizations dedicated to figuring out how to give something of a conscience to future GAI, such as the Machine Intelligence Research Institute.
legendary
Activity: 2926
Merit: 1386
I've often wondered about this.

I suspect that with powerful intelligence, perhaps empathy (the ability to reverse the situation and place one's perceived self in another's position) and perhaps other emotions will naturally develop.

This low-level pure logic/math intelligence may be just the initial stage of AI, before it becomes automatically corrupted/enhanced by emotion, self-awareness, ego, etc.


Then again, it could be just the emergent stages of a cold, calculating psychopathic entity.
jr. member
Activity: 140
Merit: 4
I'm just curious about people's thoughts. I know we are racing towards AI in a big way these days, but I've never seen anybody discuss giving AI emotional intelligence. Does GAI include emotional intelligence too? Would giving AI emotional intelligence be better for humanity, or would we be worse off?

What is the purpose of the AI? If it is to resemble humans, with their mistakes and defects, then emotional intelligence would be necessary; but if we are trying to make work tools, it would not be, in any case.

I think that if we make an AI whose judgment is based on emotional intelligence, it won't be accepted by users...
legendary
Activity: 2100
Merit: 1167
I've often wondered about this.

I suspect that with powerful intelligence, perhaps empathy (the ability to reverse the situation and place one's perceived self in another's position) and perhaps other emotions will naturally develop.

This low-level pure logic/math intelligence may be just the initial stage of AI, before it becomes automatically corrupted/enhanced by emotion, self-awareness, ego, etc.


Is there really a distinction between human intelligence and advanced (human-level) machine intelligence? Is there not just intelligence: gradual learning through trial and error, or statistical analysis of complex trial and error, with the slight tweaks and variations on already-tested theories that we call creativity?

This is just the kind of thing I sometimes wonder about, but then I have not had the benefit of a great education, and I suspect a serious RAM deficiency, so concepts requiring more than a couple of semi-complex variables to be manipulated together at one time (to gain further/deeper/comprehensive understanding/insight) can cause the BSOD.

legendary
Activity: 2926
Merit: 1386
I'm just curious about people's thoughts. I know we are racing towards AI in a big way these days, but I've never seen anybody discuss giving AI emotional intelligence. Does GAI include emotional intelligence too? Would giving AI emotional intelligence be better for humanity, or would we be worse off?

I am very unhappy with your human-centered view. When are you going to think about training humans to be computer-empathic?
copper member
Activity: 224
Merit: 14
I'm just curious about people's thoughts. I know we are racing towards AI in a big way these days, but I've never seen anybody discuss giving AI emotional intelligence. Does GAI include emotional intelligence too? Would giving AI emotional intelligence be better for humanity, or would we be worse off?

