I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had a poor performance in class) and simply handed out good grades to people who may not have performed as well when the big test came.
On the other hand, students who cheated on smaller assessments like assignments, graded homework, quizzes, etc. may have been projected to finish with an exceptional grade, when in reality they would have been knocked down to a satisfactory one after failing their tests.
But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazon's recruiting AI could've truly been amazing; the problem is that they fed it 'bad data', which meant it was skewed towards men. Here's the quote from the article:
Sure, I think there are lots of great applications of AI/ML, but I don't think those fields should receive as much attention as they do. At the end of the day, most of our modern AI is just curve fitting. Neural networks are turning out to be great with computer vision and image recognition, which gives us cool stuff like level 2 autonomous vehicles, some interesting facial recognition technology, and maybe in the short term, better targeted advertising. My worry is that people will start using 'AI' as a default solution when some other statistical or even pure mathematical approach may work better.
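To make the "curve fitting" point concrete, here's a rough sketch with made-up data and plain numpy: a tiny one-hidden-layer network trained by gradient descent on noisy 1-D samples, next to an ordinary least-squares polynomial fit that does essentially the same job. Everything here (the data, the network size, the learning rate) is just an illustration, not anyone's actual system.

```python
# Minimal sketch: a tiny neural network fit to 1-D data is literally curve fitting.
# All data here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying curve (the "dataset").
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer with tanh activations, trained by plain gradient descent.
hidden = 16
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)          # (200, hidden)
    pred = h @ W2 + b2                # (200, 1)
    err = pred - y

    # Backward pass (mean squared error)
    grad_pred = 2 * err / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_pre = grad_h * (1 - h ** 2)  # tanh derivative
    grad_W1 = x.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

nn_mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"neural net MSE: {nn_mse:.4f}")

# For comparison, a plain least-squares polynomial fit often does just as well here,
# which is the "some other statistical or pure mathematical approach" point.
coeffs = np.polyfit(x.ravel(), y.ravel(), deg=5)
poly_mse = float(np.mean((np.polyval(coeffs, x.ravel()) - y.ravel()) ** 2))
print(f"degree-5 polynomial MSE: {poly_mse:.4f}")
```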
AI shouldn't be a substitute for thinking.
So yeah, garbage in, garbage out. They might be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem.
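Just to illustrate what 'balancing out' might look like in practice, here's a naive sketch with made-up resume counts: oversampling the underrepresented group evens out the training labels, but it obviously doesn't touch the gendered signals hiding in the resume text itself, which is part of what bit Amazon.

```python
# Rough sketch: naive oversampling of the underrepresented group in a training set.
# Field names and counts are hypothetical; real resume data carries gendered signals
# (word choice, clubs, schools) that survive even after you balance the labels.
import random

random.seed(0)

# Pretend historical "successful hire" resumes, heavily skewed toward men.
training_set = [{"gender": "M", "features": f"resume_{i}"} for i in range(900)] + \
               [{"gender": "F", "features": f"resume_{i}"} for i in range(100)]

by_group = {}
for row in training_set:
    by_group.setdefault(row["gender"], []).append(row)

target = max(len(rows) for rows in by_group.values())

balanced = []
for rows in by_group.values():
    balanced.extend(rows)
    # Duplicate minority examples until every group is the same size.
    balanced.extend(random.choices(rows, k=target - len(rows)))

print({g: sum(1 for r in balanced if r["gender"] == g) for g in by_group})
# {'M': 900, 'F': 900} -- balanced on paper, but only as good as the resumes you fed it
```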
There's still the open-ended question of when your model is good enough. When have you balanced it out? Your model might give you perfect candidates, but since you can't be certain your model gives you the best candidates, you're still going to have to have a human intervene and provide their own input.
Then there's the question of what makes a good candidate. Suppose you decide the fairest way to treat all genders equally is to completely ignore gender altogether. If you select for people you would like to work with, and you would prefer to hire people who aren't combative (since they'll do what they're told and won't ask for a raise as often), your model may end up preferring women. It's not that the model is 'flawed'; it's simply what it gave you based on your desire for agreeable coworkers.
Pretend you can somehow build a model that can accurately rank a person by intelligence. Assume the distribution of applicants follows some kind of normal distribution with barely any applicants at the tail ends. Now, when you run your model on your applicant dataset, you're surprised because it seems to skew heavily in favour of male applicants. What went wrong? It wasn't the model; it was the applicants themselves. The male and female intelligence distributions are extremely similar: lots of people in the middle around the average, and barely any really stupid or really smart people. Your applicant pool is a subset of the set of all people, and since technology is known to be dominated by men, you get a lot more male applicants than female applicants. The result is that you're more likely to find one of the really smart people in the pool of men than in the pool of women.
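If you want to convince yourself of that, here's a quick made-up simulation: both groups drawn from the exact same distribution, pool split 80/20, and the top of the ranking still comes out roughly 80% male purely because of the pool size. The 80/20 split and the score numbers are invented for illustration.

```python
# Quick simulation of the argument above: identical score distributions, unequal pool sizes.
# The 80/20 split and the scores themselves are made-up numbers for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_applicants = 10_000
n_male = int(0.8 * n_applicants)        # assumed 80% male applicant pool
n_female = n_applicants - n_male

# Same normal distribution for both groups -- the model isn't biased, the pool is skewed.
male_scores = rng.normal(100, 15, n_male)
female_scores = rng.normal(100, 15, n_female)

scores = np.concatenate([male_scores, female_scores])
is_male = np.concatenate([np.ones(n_male, dtype=bool), np.zeros(n_female, dtype=bool)])

# Take the top 1% of the ranking.
top = np.argsort(scores)[-n_applicants // 100:]
print(f"share of men in top 1%: {is_male[top].mean():.0%}")
# ~80%: about the same as the pool, so most of the 'really smart' picks are men
# simply because most applicants are men.
```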
I guess the question to ask, then, is: how can you trust your model to map your inputs to the desired outputs when you aren't even sure what the desired output is?
There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. That's just one example, though.
How long before the lawyers get a law signed saying AI lawyers are illegal?