Using People Analytics Fairly: Guarding Against Bias in Machine Learning

Photo by Jason Goodman on Unsplash

On one of our recent Workspan podcasts, we were joined by the perfect person to demystify AI, machine learning and people analytics: Margret Bjarnadottir, Associate Professor of Management Science and Statistics at the University of Maryland. Margret gave us a sneak preview of her forthcoming session at WorldatWork’s Total Rewards show in San Diego this coming June: the explosive growth in the use of people analytics tools, what’s driving this growth, and perhaps most interestingly, how to guard against machine learning bias. Fascinating stuff.

If you’ve not had a chance to play around with ChatGPT (or Google’s generative AI), I highly recommend you set aside some time and check it out. Though be forewarned: you’ll wind up spending far, far more time there once you experience its astonishing power. It’s almost frightening how fast it spits out answers that are also amazingly detailed and polished. I don’t use the word frightening lightly – it won’t take long before you start wondering how many jobs it can do and how many people it can replace, if it – and by it, I mean us – wanted it to. If you’re a young copywriter, paralegal, coder, architect, lab technician – the list goes on and on – I’d be worried. AI and machine learning have gotten very powerful in a relatively short space of time, and they will only get more so.

My apologies, I didn’t mean to go there and cause you any more anxiety than you may already have when this topic comes up. As I said, just be prepared if/when you find yourself hovering at the “event horizon” – not saying it’s a black hole, but it may be a while till you come out the other end, and the shape you’re in when you do come out is anybody’s guess.

But enough of that. Back to Dr. Bjarnadottir and bias in machine learning. In our interview she cites several specific examples, including, of course, the famous Amazon one. I strongly encourage you to give the pod a listen. Following the interview, we took a deeper dive into how to avoid machine learning bias, specifically as it’s applied to evaluating job candidates. Here’s a brief sampling of recommendations/caveats that we found:

  1. Train on unbiased data: The data used to train machine learning models should be free from any bias. It is important to ensure that the training data is representative of the population and that the data used is diverse in terms of gender, race, ethnicity, and other demographic factors.
  2. Feature selection: Feature selection is the process of identifying the most relevant attributes or features that can predict a candidate’s suitability for a job. It is important to ensure that the features selected are relevant and free from any bias.
  3. Algorithm selection: The algorithm used in machine learning should be chosen carefully to avoid any bias. For example, algorithms that use decision trees can introduce bias if the training data is biased.
  4. Regularly audit and monitor the model: Machine learning models should be audited and monitored regularly to identify any bias. This can be done by analyzing the model’s output and comparing it to the actual outcomes.
  5. Human review: Machine learning models can be supplemented with human review to ensure that the model’s decisions are fair and unbiased. This can be done by having a human review the model’s output and making any necessary adjustments.
  6. Continuously improve the model: Machine learning models should be continuously improved by incorporating user feedback and new data to reduce any bias.
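To make step 4 (auditing the model) a little more concrete, here is a minimal sketch of one common audit: comparing selection rates across demographic groups using the “four-fifths rule” from US EEOC guidance, under which any group’s selection rate should be at least 80% of the highest group’s rate. The function names and the candidate data below are invented for illustration; a real audit would run against your model’s actual decisions.

```python
# Hypothetical bias audit: flag groups whose selection rate falls below
# 80% of the best-treated group's rate (the EEOC "four-fifths rule").
from collections import defaultdict

def selection_rates(records):
    """records: list of (group, selected) pairs -> {group: selection rate}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [num selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact(records, threshold=0.8):
    """Return {group: rate ratio} for groups below threshold * best rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Invented model decisions: (demographic group, was the candidate advanced?)
records = (
    [("A", True)] * 60 + [("A", False)] * 40 +  # group A: 60% advanced
    [("B", True)] * 40 + [("B", False)] * 60    # group B: 40% advanced
)
print(adverse_impact(records))  # group B's ratio is 0.4/0.6 ≈ 0.67 -> flagged
```

Run regularly (per step 4), a check like this turns “monitor for bias” from an aspiration into a number you can track over time.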

Stanford prof and AI whisperer Andrew Ng describes machine learning as “the science of getting computers to act without being explicitly programmed.” That’s certainly one way of looking at it, and from an organizational standpoint, a healthy perspective. It defangs the implications of the technology and focuses the mind on how to harness it for good in the context of our work – primarily to improve the evaluative and predictive power of people analytics, which, in this instance means mitigating or eliminating selection bias.  

OK, now that I’ve made my point, and given you several moments to regroup from the nightmare scenarios, I encourage you to take a peek at ChatGPT. Leave a comment, we’d like to know what you think – if we’ve gratuitously sent shivers down your spine, understand that, like AI itself, our intentions are pure…we can’t always be held to account for the outcomes.
