Author

Topic: ‘Black Box’ problem: people don’t trust AI because they don't know how it decides (Read 211 times)

jr. member
Activity: 72
Merit: 2
There's a post I read over at Reddit where someone who is studying AI in college said that it's impossible to have an AI that can think for itself. So what do we know? Maybe it's not for us to understand how the AI thinks. What would you even see if you cracked open an AI's black box anyway?
jr. member
Activity: 85
Merit: 1

People simply fear what they don’t understand. It’s a default reaction, I believe. But once people learn its potential benefits, they’ll eventually embrace it, right?
jr. member
Activity: 196
Merit: 1
I agree. We need to learn to trust AI. That’s the only way we can co-exist, right?
This article says it best,
"To trust an AI system, we must have confidence in its decisions. We need to know that a decision is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. Reliability, fairness, interpretability, robustness, and safety are the underpinnings of trusted AI."

Makes sense, right?

https://thenextweb.com/contributors/2018/10/06/we-need-to-build-ai-systems-we-can-trust/
jr. member
Activity: 126
Merit: 1
This is a case of “does the end justify the means?” Yeah, AI promises so many advantages, but how does it arrive at its decisions? I think that mystery worries society. AI has this stigma of being a replacement for humans, which isn’t the case. It’s only meant to upgrade our lives. People just need to be better informed about how AI operates, in my opinion.
jr. member
Activity: 196
Merit: 4
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically a guarantee this will happen?

I think this is a valid argument. However, the more I get to understand AI and cryptocurrency, the less scared I get about the coming of artificial intelligence. Of course, we are not sure if it will really have the ability to think for itself and decide humans are a waste of space, but perhaps we will not come to that point. What do you think?
legendary
Activity: 2926
Merit: 1386
...
“AI’s decision-making process is usually too difficult for most people to understand,” Polonski continues. “And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control.”

I think this is also happening to cryptocurrency. People do not understand how it works so they do not trust it. Since they do not trust it, they do not "buy" it, and just laugh at people who are into it.

Let me know when we understand the human brain.
full member
Activity: 574
Merit: 152

The AI is normally right.

Take a look at AlphaGo. That deep neural network is the best Go player in history. Even the scientists who built AlphaGo have no idea at this point how it comes up with its moves.

Theoretically, we could examine the neural network and watch the playback in real time. In practice, though, the amount of data and processing involved is mind-numbing in its complexity.

At this point, I trust Watson more than I do an average doctor.
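The "watch the playback" idea can be sketched in a few lines. Even a toy network (this is a hypothetical illustration, nothing like AlphaGo's real architecture) exposes every one of its activations, yet the raw numbers still don't read as reasons:

```python
import random

random.seed(0)

# Toy 2-layer network: 4 inputs -> 8 hidden units -> 2 candidate "moves".
# Weights are random here; a trained network would have learned values.
W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
W2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(8)]

def forward(x):
    # Hidden activations (ReLU) -- this IS the full "playback" of the
    # network's internal state, and it is fully visible to us.
    hidden = [max(0.0, sum(x[i] * W1[i][j] for i in range(4)))
              for j in range(8)]
    # Scores for each candidate move.
    logits = [sum(hidden[j] * W2[j][k] for j in range(8)) for k in range(2)]
    return hidden, logits

x = [random.gauss(0, 1) for _ in range(4)]
hidden, logits = forward(x)
print("hidden activations:", [round(h, 2) for h in hidden])
print("chosen move:", logits.index(max(logits)))
```

With 16 hidden numbers you can already see why the readout doesn't explain the decision; scale the same structure up to millions of parameters per move and inspecting the playback stops being a practical route to understanding.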

AlphaGo makes moves that are inconceivable to human Go players because it performs millions of computations in its "brain", and that's just for Go. If we applied it to various problems in science, don't you think it would make solving them a lot easier?

Nope. AlphaGo is application specific. Turning an application specific intelligence into a general intelligence isn't the right path.

It'd be akin to trying to mine Ethereum with a Bitcoin ASIC, it's just not going to go very well.
jr. member
Activity: 140
Merit: 2
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically a guarantee this will happen?

Because we as a species are very stupid.

Development of something like AI should be done very slowly if at all and globally coordinated to minimize the chance of very poor outcomes.

We lack the wisdom for such coordination, so we will rush full speed ahead, each human faction seeking to one-up the others with better AI.

I don't think it would be stupidity if AI overtook humans, though; I think it would stem from human greed or hubris. For now, AI will only do what it is expected to do. The problem starts once someone power-plays and codes it to do something more than that, in exchange for money or just to become famous.
Currently, AI is in your phone apps, medical equipment, your Fitbit. It's doing more good than harm, as far as I know.
It's more about the idea that AIs can perform calculations that we can't. Some problems require equations and solutions so complex that the human brain can't handle them, and that's where AI comes in, just like this post said.
The AI is normally right.

Take a look at AlphaGo. That deep neural network is the best Go player in history. Even the scientists who built AlphaGo have no idea at this point how it comes up with its moves.

Theoretically, we could examine the neural network and watch the playback in real time. In practice, though, the amount of data and processing involved is mind-numbing in its complexity.

At this point, I trust Watson more than I do an average doctor.

AlphaGo makes moves that are inconceivable to human Go players because it performs millions of computations in its "brain", and that's just for Go. If we applied it to various problems in science, don't you think it would make solving them a lot easier?
jr. member
Activity: 112
Merit: 2
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically a guarantee this will happen?

Because we as a species are very stupid.

Development of something like AI should be done very slowly if at all and globally coordinated to minimize the chance of very poor outcomes.

We lack the wisdom for such coordination, so we will rush full speed ahead, each human faction seeking to one-up the others with better AI.

I don't think it would be stupidity if AI overtook humans, though; I think it would stem from human greed or hubris. For now, AI will only do what it is expected to do. The problem starts once someone power-plays and codes it to do something more than that, in exchange for money or just to become famous.
Currently, AI is in your phone apps, medical equipment, your Fitbit. It's doing more good than harm, as far as I know.
legendary
Activity: 1946
Merit: 1055
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically a guarantee this will happen?

Because we as a species are very stupid.

Development of something like AI should be done very slowly if at all and globally coordinated to minimize the chance of very poor outcomes.

We lack the wisdom for such coordination, so we will rush full speed ahead, each human faction seeking to one-up the others with better AI.
full member
Activity: 574
Merit: 152
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically a guarantee this will happen?

Because AI could give us the singularity, cure the human condition, and become our caretaker for the rest of eternity.

Utopia is close to dystopia.
full member
Activity: 385
Merit: 101
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically a guarantee this will happen?
full member
Activity: 574
Merit: 152
The AI is normally right.

Take a look at AlphaGo. That deep neural network is the best Go player in history. Even the scientists who built AlphaGo have no idea at this point how it comes up with its moves.

Theoretically, we could examine the neural network and watch the playback in real time. In practice, though, the amount of data and processing involved is mind-numbing in its complexity.

At this point, I trust Watson more than I do an average doctor.
jr. member
Activity: 126
Merit: 3
“IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster,”
“The problem with Watson for Oncology was that doctors simply didn’t trust it.”
When Watson’s results agreed with physicians, it provided confirmation, but didn’t help reach a diagnosis. When Watson didn’t agree, then physicians simply thought it was wrong.
“AI’s decision-making process is usually too difficult for most people to understand,” Polonski continues. “And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control.”

I think this is also happening to cryptocurrency. People do not understand how it works so they do not trust it. Since they do not trust it, they do not "buy" it, and just laugh at people who are into it.