I’ve written a few pieces on AI, robotics and the like, and listening to a podcast about the danger of algorithms and how they can be “weapons of math destruction” really made me stop and think.
Ultimately, my team and I are now in a position where we have easy access to a lot of very interesting data that helps inform the actions we take around attraction. But what does success look like? Should we be measuring clicks and impressions, or applications and hires?
But it could go deeper than that, a lot deeper. What if hires from a particular job board are low, but it provides 80% of your female hires? If you don't see the whole picture or ask the right questions, you could make some horrible mistakes, particularly when it comes to diversity.
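To make that blind spot concrete, here is a minimal sketch with entirely invented numbers (the board names and figures are hypothetical, not real data): judged on hire volume alone, one board looks like a poor source, yet it supplies the large majority of female hires.

```python
# Invented hiring-funnel numbers: BoardB produces few hires overall,
# but most of the female hires come through it.
boards = {
    "BoardA": {"hires": 120, "female_hires": 2},
    "BoardB": {"hires": 10, "female_hires": 8},
}

total_hires = sum(b["hires"] for b in boards.values())
total_female = sum(b["female_hires"] for b in boards.values())

# For each board, compare its share of all hires with its share of female hires.
summary = {
    name: {
        "share_of_all_hires": b["hires"] / total_hires,
        "share_of_female_hires": b["female_hires"] / total_female,
    }
    for name, b in boards.items()
}

for name, s in summary.items():
    print(f"{name}: {s['share_of_all_hires']:.0%} of all hires, "
          f"{s['share_of_female_hires']:.0%} of female hires")
```

A dashboard that only tracks hires per board would flag BoardB for cutting; the second metric shows why that would be a mistake.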
Below is an article about how Amazon tried to use an algorithm to decide whether people were employable or not. Much like the AI controversy in the US judicial system, if the data going in is biased, then the data coming out, and the actions suggested, will be biased too.
Just because algorithms are essentially maths, and maths can't have an agenda, it doesn't mean that something outside the algorithm isn't shaping things in an unexpected or unintended direction.
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.