
Topic: Machine Learning and the Death of Accountability - page 2. (Read 385 times)

legendary
Activity: 3654
Merit: 8909
https://bpip.org
This combination of factors, machine-learning on the one hand, and the computer being unable to explain its reasoning on the other, leads to an absolute removal of all accountability for decisions that can have a profound impact on people's lives.

I'm not so certain that it's really that significant. I'm old enough to remember writing assembly code, so that was obviously the most direct responsibility a person could have over giving instructions to a computer, but you know what - it's a great thing that we generally don't do that anymore, outside of spaceships perhaps. Even friggin' dishwasher software is probably written using some sort of toolkit these days, taking care of mundane stuff like memory allocation, which humans would definitely mess up.

Machine learning can definitely create issues if used by incompetent dolts, but on the other hand - used responsibly, it can solve massive problems that are simply unsolvable otherwise. I think we'll find a balance where we will use proven, well-tested models as building blocks for more complex systems and we'll learn to figure out whom to blame... just like we don't blame Microsoft when we write buggy code in C# or feed garbage data to well-intended code.
member
Activity: 140
Merit: 56
I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had poor performance in class) and just simply handed out good grades to people who may not have performed as well when the big test came.
On the other hand, students who cheated on smaller assessments like assignments, graded homework, quizzes, etc. may have been projected to finish with an exceptional grade, when in reality they would have been knocked down to a satisfactory one after failing their tests.

But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazon's recruiting AI could've truly been amazing; the problem is that they fed it 'bad data', which meant that it was skewed towards men. Here's the quote from the article:
Sure, I think there are lots of great applications of AI/ML, but I don't think those fields should receive as much attention as they do. At the end of the day, most of our modern AI is just curve fitting. Neural networks are turning out to be great with computer vision and image recognition, which gives us cool stuff like level 2 autonomous vehicles, some interesting facial recognition technology, and maybe in the short term, better targeted advertising. My worry is that people will start using 'AI' as a default solution when some other statistical or even pure mathematical approach may work better.

AI shouldn't be a substitute for thinking.
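
To make the 'curve fitting' point concrete, here's a toy sketch (the data is made up, nothing more): the pattern a small neural net would learn from this data falls out of a one-line least-squares fit.

Code:
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 1, 50)   # noisy line: the "training data"

coeffs = np.polyfit(x, y, deg=1)       # "training" = a least-squares curve fit
print(coeffs)                          # roughly [3, 2], recovered from the noise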

So yeah, garbage in is garbage out. They'll be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem.
There's still the open-ended question of when your model is good enough. When have you balanced it out? Your model might give you perfect candidates, but since you can't be certain your model gives you the best candidates, you're still going to have to have a human intervene and provide their own input.

Then there's the question of what makes a good candidate. Suppose you decide the fairest way to treat all genders equally is to completely ignore gender altogether. If you select for people who you would like to work with, and you would prefer to hire people who aren't combative (since they'll do what they're told and won't ask for a raise as often), your model may end up preferring women. It's not that the model is 'flawed'; it's simply what it gave you based on your desire to have agreeable coworkers.

Pretend you can somehow build a model that can accurately rank a person by intelligence. Assume the distribution of applicants follows some type of normal distribution with barely any applicants on the tail ends. Now, when you run your model on your applicant dataset, you're surprised because it seems to skew heavily in favour of male applicants. What went wrong? It wasn't the model; it was the applicants themselves. The male and female intelligence distributions are extremely similar: lots of people in the middle around the average and barely any really stupid or really smart people. Your applicant pool is a subset of the set of all people, and since technology is known to be dominated by men, you get a lot more male applicants than female applicants. The result is that you're more likely to find one of the really smart people in the pool of men than in the pool of women.
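
That thought experiment is easy to simulate (a toy sketch; all the numbers are made up):

Code:
import random

random.seed(42)

# identical ability distribution for both groups: mean 100, sd 15
men   = [("M", random.gauss(100, 15)) for _ in range(800)]
women = [("F", random.gauss(100, 15)) for _ in range(200)]

# "hire" the 10 highest-ranked applicants
top10 = sorted(men + women, key=lambda a: a[1], reverse=True)[:10]
print([g for g, _ in top10])
# typically around 8 of the 10 are "M", purely because the pool is 80% male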

I guess the question to ask is then: how can you trust your model to map your inputs to the desired outputs when you aren't even sure what the desired output is?

There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically very repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. Just one example though.
How long before the lawyers get a law signed saying AI lawyers are illegal? Wink
Vod
legendary
Activity: 3668
Merit: 3010
Licking my boob since 1970
the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case-by-case basis

Maybe last millennium. Tongue In the early 2000s, computer scientists took what we know about human learning and created Machine Learning (it's in the thread title too). Now, with each exam analyzed anywhere by any system, the model and the computer learn - and get better. By 2050 there should be nothing a computer cannot do better than our best humans.
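
In code terms, that incremental updating looks something like this (a minimal sketch using scikit-learn's partial_fit; the pupil features here are invented for illustration):

Code:
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])          # 0 = fail, 1 = pass

# a stream of (features, outcome) pairs, e.g. [attendance, homework score]
stream = [([0.9, 0.8], 1), ([0.2, 0.4], 0), ([0.7, 0.6], 1)]

for features, outcome in stream:
    # each new exam result nudges the model - no retraining from scratch
    model.partial_fit([features], [outcome], classes=classes)

print(model.predict([[0.8, 0.7]]))  # predicted outcome for a new pupil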

What Coyster describes in the quote above is not Machine Learning - ML is more than programming, much like we are more than our instincts.  
legendary
Activity: 1666
Merit: 1285
Flying Hellfish is a Commie
Funny how you always hear about these machine learning experiments going wrong. Does anyone remember Microsoft's Tay bot on Twitter? How about all the stories like 'AI can't recognize black faces' or 'AI converts speech-to-text better when the speaker is white than when they're black', or Amazon's recruiter AI that didn't like women?

I'm not really a fan of machine learning and AI. It isn't that it's not interesting or desirable; I just think it's sad that people think we've reached the limits of human ingenuity and feel that we have to develop something to think for us. I'm not sure that people will ever be ready for a world where AI can make perfectly unbiased decisions; keep in mind most decisions result in winners and losers. When people make the decision, they can come to a compromise; when a machine designed to make a clear-cut choice has to decide, by design it has to avoid letting the loser down easy, otherwise it would be biased in favour of the loser (in this case, the losers might be kids who 'deserve' [i.e. are predicted] to fail their class).

As a side note, assigning students scores based on machine learning predictions is just ridiculous. Whoever thought this would work should be shamed publicly.

I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had poor performance in class) and just simply handed out good grades to people who may not have performed as well when the big test came.

But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazon's recruiting AI could've truly been amazing; the problem is that they fed it 'bad data', which meant that it was skewed towards men. Here's the quote from the article:

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.


So yeah, garbage in is garbage out. They'll be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem.
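
A minimal sketch of what that rebalancing could look like (hypothetical data; real pipelines are more involved):

Code:
import random

random.seed(1)

# hypothetical training set: 900 resumes from men, 100 from women
resumes = [("M", f"resume {i}") for i in range(900)] + \
          [("F", f"resume {i}") for i in range(100)]

men   = [r for r in resumes if r[0] == "M"]
women = [r for r in resumes if r[0] == "F"]

# oversample the under-represented group until the classes match
balanced = men + random.choices(women, k=len(men))
random.shuffle(balanced)

print(len(balanced), sum(1 for g, _ in balanced if g == "F"))
# 1800 resumes, ~900 from women: the model now sees both groups equally often

Duplicating resumes only fixes the headcount, though; if the model has already latched onto proxies for gender in the text itself, oversampling alone won't undo that.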

There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically very repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. Just one example though.
member
Activity: 140
Merit: 56
Funny how you always hear about these machine learning experiments going wrong. Does anyone remember Microsoft's Tay bot on Twitter? How about all the stories like 'AI can't recognize black faces' or 'AI converts speech-to-text better when the speaker is white than when they're black', or Amazon's recruiter AI that didn't like women?

I'm not really a fan of machine learning and AI. It isn't that it's not interesting or desirable; I just think it's sad that people think we've reached the limits of human ingenuity and feel that we have to develop something to think for us. I'm not sure that people will ever be ready for a world where AI can make perfectly unbiased decisions; keep in mind most decisions result in winners and losers. When people make the decision, they can come to a compromise; when a machine designed to make a clear-cut choice has to decide, by design it has to avoid letting the loser down easy, otherwise it would be biased in favour of the loser (in this case, the losers might be kids who 'deserve' [i.e. are predicted] to fail their class).

As a side note, assigning students scores based on machine learning predictions is just ridiculous. Whoever thought this would work should be shamed publicly.
legendary
Activity: 1666
Merit: 1285
Flying Hellfish is a Commie
No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade: previous grades, social life of the student, level of intelligence, IQ level, background, etc.? I've seen good students go into an examination unprepared and fail, and likewise bad students change their mindset, study hard and come out on top to everyone's surprise.

If a student doesn't take an examination themselves, then there's no algorithm or computer that can bring out an accurate result; the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case-by-case basis. But there's no fixed standard for a student in every exam; they can be bad today and excellent tomorrow.

+1 to this.

Not exactly sure why this was even allowed in any school setting, but I would LOVE to see a source on this. There's literally no way that parents / children were happy that this was being used. At first it sounds like a good idea, until you notice that the machine learning technology is HEAVILY reliant on teacher input / past performance to come to a conclusion.

But yeah, I had tons of friends who were horrible in regular class but great test takers -- and I've also seen the opposite. Not sure why this would be used though.

This WON'T be the norm in education though; people aren't happy when you give them a bad grade and then try to blame a computer... lol
legendary
Activity: 1904
Merit: 1277
Just do online exams with video conference apps, no need for this madness.
This presents other issues. For example, the connection dropping, or someone sitting just out of shot feeding the pupil the correct answers.

No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade
In this instance there was bias based on the previous performance of the school, such that good students in badly performing schools were unfairly penalised, and bad students at good schools escaped unscathed. There's a quick overview here.
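
Roughly, the mechanism behind it looked like this (a deliberately simplified sketch, not the actual Ofqual formula):

Code:
def moderate(predictions, school_history):
    # rank pupils best-first by teacher prediction...
    ranked = sorted(predictions, key=lambda p: p[1], reverse=True)
    # ...then hand out the school's historical grades, best grades first
    grades = sorted(school_history)            # "A" sorts before "C"
    return {name: grade for (name, _), grade in zip(ranked, grades)}

pupils  = [("star pupil", 95), ("pupil 2", 60), ("pupil 3", 55), ("pupil 4", 40)]
history = ["C", "C", "D", "D"]                 # the school's past results

print(moderate(pupils, history))
# the star pupil gets a C at best: the school's history, not their own
# ability, sets the ceiling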



the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it
This really is the whole point of my thread. At the moment, in situations like that of the exam grades in my example, an unfair algorithm can be unpicked and assessed for bias, and corrections can be made or the whole thing can be thrown out. There is a degree of transparency to the process. What I am suggesting is that in the very near future, we are likely to lose that transparency, that our lives and life-chances will in part be determined by machine-learning algorithms where there is no accountability, and no possibility of even understanding how the results were arrived at. In the exams example, it was a simple human algorithm. If instead the outcomes had been determined by machine-learning, there may have been a suspicion of unfairness, but no way to prove if that was actually the case, and no evidence available to challenge the decision.
legendary
Activity: 2184
Merit: 1302
No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade: previous grades, social life of the student, level of intelligence, IQ level, background, etc.? I've seen good students go into an examination unprepared and fail, and likewise bad students change their mindset, study hard and come out on top to everyone's surprise.

If a student doesn't take an examination themselves, then there's no algorithm or computer that can bring out an accurate result; the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case-by-case basis. But there's no fixed standard for a student in every exam; they can be bad today and excellent tomorrow.
copper member
Activity: 2324
Merit: 2142
Slots Enthusiast & Expert
Removing exams and replacing them with trend analysis + teacher's opinion? It doesn't sound right!

An exam is the most objective way of measuring academic knowledge at a particular time. Well, judging from my experience, where I only studied hard several months or weeks before the assessment (and spent most of my time as a clown), I'd be a pedicab driver by now if that system had existed back then.

The thing is, the teacher's opinion is subjective; garbage in, garbage out, and the data fed into the algorithm can also be garbage and subjective. The coders are human with limited knowledge and wisdom, and thus the product is far from perfect for determining who gets what. What the algorithm did was forecast, and reality will likely differ from the forecast.

Just do online exams with video conference apps, no need for this madness.
legendary
Activity: 1904
Merit: 1277
Recently in the UK there has been a furore over algorithm-determined school exam results. The pandemic meant that pupils couldn't sit exams, and an algorithm was devised that determined what results each pupil would get. However, many pupils, particularly those from disadvantaged backgrounds, received worse than predicted results, whereas pupils from more affluent backgrounds suffered no ill effects. There were widespread protests at the perceived unfairness, and the algorithm was hauled out into the open and dissected. The formula was quite rudimentary, and the inbuilt bias perfectly clear for anyone with a basic grasp of maths to see. The outcome was that the protests were upheld, and the unfair results overturned.

The reason this could happen is that the algorithm was devised by people. Their assumptions and their methods could be unpicked and understood. However, the trend, now that we are in the era of big data, is towards machine-learning. Computers can devise much more efficient processes than humans can. If the same thing had happened in a few years' time, it is quite likely that the grades would have been determined by machine-learning, with initial data fed in, results coming out, and no human understanding how the processing from input to output works. Indeed, the computer itself would be unable to explain it (because we have not yet reached that level of AI).

This combination of factors, machine-learning on the one hand, and the computer being unable to explain its reasoning on the other, leads to an absolute removal of all accountability for decisions that can have a profound impact on people's lives. Humans can argue convincingly that they have simply input some initial parameters, and had no part in the decision-making. But those machine-learning decisions can't be pulled into the open, can't be dissected, can't be understood. Machine-learning without sentient computers means that all accountability is thrown away. No-one is responsible for anything.

Now this may change once AI reaches a sufficient level that a computer can explain its reasoning in terms that humans can understand... but that is years or decades away, and until we reach that point, the possibilities look quite scary. We live in a competitive world, and the advantages of machine-learning are too tempting for countries and companies to pass up. ML is pursued fervently, no matter the implications. Will we really throw away accountability for (and understanding of) a lot of really important decisions?