
Topic: Poll: Is the creation of artificial superintelligence dangerous? - page 2. (Read 24770 times)

legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Some developments in AI that allowed it to produce better results than humans can:

Deep Patient: https://www.nature.com/articles/srep26094

In 2015, researchers at Mount Sinai Hospital in New York created an AI system that can predict disease development, taking into account patients' early symptoms.

It could predict the development of diseases as different as liver cancer and schizophrenia better than specialists could.



Deepmind: https://www.nature.com/articles/s41586-019-1799-6 (2020)

DeepMind, the Google subsidiary working with Google's Health division, created an AI that gave better results than humans at analyzing breast scans and detecting breast cancer, reducing the number of human mistakes.



An algorithm was able to forecast which offenders were likely to commit new crimes better than humans could:

https://advances.sciencemag.org/content/6/7/eaaz0652.abstract (2020)
"The performance gap between humans and algorithms was particularly pronounced when (...) participants were not provided with immediate feedback on the accuracy of their responses. Algorithms also outperformed humans when the information provided for predictions included an enriched (versus restricted) set of risk factors. These results suggest that algorithms can outperform human predictions of recidivism in ecologically valid settings."

newbie
Activity: 37
Merit: 0
How about Jarvis and Friday, Tony Stark's AIs? Aren't they helpful mentors to Tony? I think AI and people can work together.
legendary
Activity: 2926
Merit: 1386
Bank of America Merrill Lynch just published another study predicting that AI could eliminate 800 million jobs in the next 15 years.

Let me know when Bank of America Merrill Lynch is replaced by AI, and when that AI does a study on the jobs to be lost to AI.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Bank of America Merrill Lynch just published another study predicting that AI could eliminate 800 million jobs in the next 15 years:

https://www.inverse.com/article/60919-automation-jobs-millions-lost

As AI develops toward human level, more and more jobs will be automated.

When AI reaches normal human level (not genius level, which might take more time), what kind of job will a normal human being be able to take, when AI is faster, cheaper and able to work 24 hours a day, 365 days a year?

There might still be opportunities in jobs that depend on empathy and the human touch, but what else?

Technology might not create new kinds of jobs in sufficient numbers. It would be as if the agricultural workers of the forties and fifties had had no alternative jobs to move into.

This time might be different.
legendary
Activity: 2926
Merit: 1386
....Technology is progressing because we ourselves are becoming more intelligent. ....

Over the last several thousand years, humans have not become more intelligent.
hero member
Activity: 924
Merit: 502
CryptoTalk.Org - Get Paid for every Post!
I attended a seminar about AI. I really enjoyed it; we discussed how AI could be the solution to growing pollution and to climate change. I think AI could be a threat and, at the same time, helpful to humanity. It's good for us to have AI on our side, since it can bring about a better world. Right now we are really hurting Mother Nature, and because of that our suffering has become worse. AI could be good for us, but it needs moderation, for AI could destroy us all with domination around the globe.

I agree. Weigh the positive and negative effects of a super AI, and I think it will lean more toward the former than the latter. Technology is progressing because we ourselves are becoming more intelligent. I don't think there will be a time when AI is able to overthrow human minds. We're adaptable. We react to terrible situations so we can survive. AI will always need humans to run the show.
legendary
Activity: 2926
Merit: 1386
I attended a seminar about AI. I really enjoyed it; we discussed how AI could be the solution to growing pollution and to climate change. I think AI could be a threat and, at the same time, helpful to humanity. It's good for us to have AI on our side, since it can bring about a better world. Right now we are really hurting Mother Nature, and because of that our suffering has become worse. AI could be good for us, but it needs moderation, for AI could destroy us all with domination around the globe.

AI doesn't care about your seminar, and it doesn't care what you think. It doesn't care about your ideas about climate change, or moderation, or pollution, or being "helpful to humanity," or "hurting Mother Nature."

We're not able to ask an AI what it does care about, or to predict it.

A great short story relevant to "AI paranoia" is Charles Stross's "Antibodies." IMHO Stross's work varies: some of it is rather creepy, some brilliant. This is the latter.

http://www.baen.com/Chapters/9781625791870/9781625791870___2.htm
sr. member
Activity: 840
Merit: 268
I attended a seminar about AI. I really enjoyed it; we discussed how AI could be the solution to growing pollution and to climate change. I think AI could be a threat and, at the same time, helpful to humanity. It's good for us to have AI on our side, since it can bring about a better world. Right now we are really hurting Mother Nature, and because of that our suffering has become worse. AI could be good for us, but it needs moderation, for AI could destroy us all with domination around the globe.
legendary
Activity: 2926
Merit: 1386
Really?

Speed of computation is related to level of intelligence?

No.


I wrote that once we have a human-level AI, it will be far ahead of us, because one of them will be able to do in a day what millions of humans can do in the same period.

And since millions of people are hard to coordinate in intellectual labor, the AI will be able to reach goals that millions of humans, even cooperating, won't.

Intelligence may be defined as the capacity to gather information, build models/knowledge of reality, and use them to change reality in order to reach complex goals.

A human-level AI will reach those goals faster than millions of us working together.

If by human-level AI we get AI with the intelligence of some of the best of humanity, it will be as if we had millions of Einsteins working together. Just think of the possibilities.

Intelligence isn't only a qualitative capacity. Memory, speed and the capacity to handle enormous amounts of data are also part of intelligence, a part that is decisive for reaching goals.

AlphaZero managed to discover new moves in Go and chess that all of humanity never found in more than a thousand years. So AlphaZero is intelligent, even though it completely lacks consciousness and is just a simple AI based on reinforcement learning:

https://en.wikipedia.org/wiki/Reinforcement_learning
https://medium.freecodecamp.org/an-introduction-to-reinforcement-learning-4339519de419

No. This is sloppy logic coupled with imprecise terms, together buttressing the initial premise.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Faced with predictions that AI will eliminate millions of jobs and create millions of unemployed people, most economists answer that this prediction was made several times in the past and failed miserably:

millions of people did indeed lose their jobs, but they found new ones in new sectors.

Just think about agriculture. In 1900, France had 33% of its workforce in agriculture; now it has 2.9%. England had 15% in 1900; now it has 1.2%.
https://ourworldindata.org/employment-in-agriculture

There can be no doubt that mechanization destroyed millions of jobs.

But that was compensated by the creation of new jobs.

The question is: will this time be different?

It will be if AI can replace humans faster than the economy can create new jobs.

AI (particularly intelligent robots) isn't cheap, so this isn't happening yet, but it could happen in the near future.

It will also be different if AI can assume all the functions of the average human (manual and intellectual), on terms that leave no alternative jobs for them to do.

Again, this is far from happening. But it could happen in the next 10 to 20 years.

Think about horses.

In 1900, there were 21.5 million horses in the USA, even though its human population was much smaller than today's. By 1960 there were only 3 million. Since then the number has fluctuated, but stayed below 5 million: http://www.humanesociety.org/sites/default/files/archive/assets/pdfs/hsp/soaiv_07_ch10.pdf

Will we be the new horses?

Some AI experts are building bunkers, fearing that society will collapse because of violent reactions to rising unemployment:
https://www.independent.co.uk/life-style/gadgets-and-tech/silicon-valley-billionaires-buy-underground-bunkers-apocalypse-california-a7545126.html
https://www.theguardian.com/news/2018/feb/15/why-silicon-valley-billionaires-are-prepping-for-the-apocalypse-in-new-zealand
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Really?

Speed of computation is related to level of intelligence?

No.


I wrote that once we have a human-level AI, it will be far ahead of us, because one of them will be able to do in a day what millions of humans can do in the same period.

And since millions of people are hard to coordinate in intellectual labor, the AI will be able to reach goals that millions of humans, even cooperating, won't.

Intelligence may be defined as the capacity to gather information, build models/knowledge of reality, and use them to change reality in order to reach complex goals.

A human-level AI will reach those goals faster than millions of us working together.

If by human-level AI we get AI with the intelligence of some of the best of humanity, it will be as if we had millions of Einsteins working together. Just think of the possibilities.

Intelligence isn't only a qualitative capacity. Memory, speed and the capacity to handle enormous amounts of data are also part of intelligence, a part that is decisive for reaching goals.

AlphaZero managed to discover new moves in Go and chess that all of humanity never found in more than a thousand years. So AlphaZero is intelligent, even though it completely lacks consciousness and is just a simple AI based on reinforcement learning:

https://en.wikipedia.org/wiki/Reinforcement_learning
https://medium.freecodecamp.org/an-introduction-to-reinforcement-learning-4339519de419
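For a rough, hands-on illustration of what "reinforcement learning" means, here is a toy Q-learning loop on an invented 5-cell corridor task. It is far simpler than AlphaZero's neural-network self-play, and the environment, rewards and hyperparameters are all made up for the example, but the core loop is the same: act, observe reward, update value estimates.

```python
import random

# Toy Q-learning: start in cell 0 of a 5-cell corridor, earn reward 1.0
# for reaching cell 4. Actions: 0 = step left, 1 = step right.
N_STATES, N_ACTIONS = 5, 2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-value table
alpha, gamma, epsilon = 0.5, 0.9, 0.1             # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:                 # explore occasionally
            action = random.randrange(N_ACTIONS)
        else:                                         # otherwise exploit the estimate
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# The greedy policy learned for each non-terminal cell:
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

After a couple of hundred episodes the learned policy should settle on "go right" in every cell, discovered purely from trial, error and reward; AlphaZero scales up this same basic principle.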
legendary
Activity: 2926
Merit: 1386
We won't ever have a merely human-level AI.

Once they have a general intelligence similar to ours, they will be far ahead of humans, because their signals travel at close to the speed of light, while signals in the human brain travel at less than 500 km per hour.

And they will be able to do calculations at a much higher rate than we can.

Really?

Speed of computation is related to level of intelligence?

No.

Intelligence is most easily thought of as the ability to understand things that people of lower intelligence cannot understand. Ever.

This is really pretty simple. There are many subjects in advanced math that many people will never understand. I am not going to say "you" or "you or I", because I don't have a clue what you might or might not understand.

One example of this that many people have heard of is "P versus NP."

In my opinion, another is general relativity.

I am not talking here about the popular "buzz" on the subject, but the exact subject.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
We won't ever have a merely human-level AI.

Once they have a general intelligence similar to ours, they will be far ahead of humans, because their signals travel at close to the speed of light, while signals in the human brain travel at less than 500 km per hour.

And they will be able to do calculations at a much higher rate than we can.
jr. member
Activity: 32
Merit: 10
Ray Kurzweil's predictions of a human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases them on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles every two years and, hence, so does its speed) is facing challenges.

Currently, the doubling period is closer to 2.5 or 3 years than to 2, and it's not clear whether even that is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons start becoming hard to control (https://en.wikipedia.org/wiki/Moore's_law).

I doubt they will use silicon for long. Also, Amdahl's law says that once we reach 200 or so cores with this architecture, anything more is wasted, even in parallel.
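Amdahl's law itself is easy to check with a back-of-the-envelope calculation. A minimal sketch, assuming a workload whose serial fraction is 5% (an invented figure for illustration, not a measurement of any real chip or program):

```python
# Amdahl's law: with serial fraction s, the speedup on n cores is 1 / (s + (1 - s) / n).
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s = 0.05  # assume 95% of the work parallelizes (illustrative only)
for n in (1, 8, 64, 200, 1600):
    print(f"{n:5d} cores -> speedup {amdahl_speedup(s, n):6.2f}x")
```

With a 5% serial fraction the ceiling is 1/0.05 = 20x: 200 cores already get about 18x, and going to 1600 cores barely helps, which is the kind of diminishing return the post is alluding to.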

My answer is here.

https://bitcointalksearch.org/topic/decentralized-law-and-political-systems-through-consensus-based-technology-5065031
newbie
Activity: 71
Merit: 0
I think it all depends on the intent behind creating it and the type of AI that is created. For example, China's social credit system is drawing a lot of criticism because of its effects on residents, and it isn't even AI yet; it's more like machine learning. If this continues, they may be the first country to ever produce an ASI. Whether or not it will be a threat to us in the future, we never know.
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Ray Kurzweil's predictions of a human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases them on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles every two years and, hence, so does its speed) is facing challenges.

Currently, the doubling period is closer to 2.5 or 3 years than to 2, and it's not clear whether even that is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons start becoming hard to control (https://en.wikipedia.org/wiki/Moore's_law).
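The difference between a 2-year and a 3-year doubling period compounds dramatically. A quick sketch (the 15-year horizon is chosen only for illustration):

```python
# Compound growth under Moore's Law: capacity multiplies by 2^(years / doubling_period).
def moore_growth(years: float, doubling_period: float) -> float:
    return 2.0 ** (years / doubling_period)

horizon = 15  # illustrative horizon in years
print(f"2-year doubling over {horizon} years: {moore_growth(horizon, 2.0):7.1f}x")
print(f"3-year doubling over {horizon} years: {moore_growth(horizon, 3.0):7.1f}x")
```

Over 15 years that is 2^7.5 (about 181x) versus 2^5 (32x): a modest slowdown in the doubling period cuts the projected capacity by a factor of more than five, which is why a prediction pegged to Moore's Law is so sensitive to it.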
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
Boston Dynamics' robots are something amazing.

For instance, watch its Atlas robot doing a backflip here: https://www.youtube.com/watch?v=WcbGRBPkrps
legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
AI 'poses less risk to jobs than feared' says OECD
https://www.bbc.co.uk/news/technology-43618620

The OECD is talking about 10-12% of jobs being cut in the USA and UK.

The famous 2013 study by Oxford University academics argued for a 47% cut.

It identified these as the least safe jobs:
Telemarketer: 99% chance of automation
Loan officer: 98%
Cashier: 97%
Paralegal and legal assistant: 94%
Taxi driver: 89%
Fast-food cook: 81%

Yes, today (not in 10 years) automation is "blind to the color of your collar":
https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health

The key factors are creativity and social-intelligence requirements, complex manual tasks (plumbers, electricians, etc.) and the unpredictability of your job.


Pessimistic studies keep popping up: By 2028, AI Could Take Away 28 Million Jobs in ASEAN Countries
https://www.entrepreneur.com/article/320121

Of course, the problem is figuring out what is actually going to happen in AI development.



Check the BBC's opinion about the future of your current or future job at:
https://www.bbc.co.uk/news/technology-34066941
legendary
Activity: 2926
Merit: 1386
newbie
Activity: 19
Merit: 0
People who say that AI isn't dangerous simply aren't in the know. Scientists even convened earlier this year to talk about toning down their artificial-intelligence research to protect humanity.

The short answer is: it can be. The long answer is: hopefully not.

Artificial intelligence is on the way and we will create it. We need to tread carefully with how we deal with it.
The right technique is to develop robots with singular purposes rather than fully autonomous robots that can do it all. Make one set of robots that chooses targets and another that does the shooting. Make one robot that chooses which person needs healing and another that travels to the person and heals them.

Separate the functionality of robots so we don't have T-1000s roaming the streets.

That is Plan B, in my opinion. The best option is human cybernetics: our scientists and engineers should focus on enhancing human capabilities rather than outsourcing decision-making to artificial intelligence.
I think giving robots different roles is a good idea. But if they truly had AI, it wouldn't be hard to imagine them learning to communicate with each other and plotting something new. I don't think enhancing human capability should necessarily take priority over robots; both should be developed. You could develop technology that makes it easier for a human to work on an assembly line. That's a somewhat useful tool, but it would be much better to just build a robot to replace the human. Humans shouldn't have to do mundane tasks if they can create robots to do them.