We’re led to believe that machines are objective. That they simply process inputs to provide an output. But as we saw in the last issue, machines can be trained. And how they’re trained matters.
You can see this for yourself. Try googling the following professions and see what images are shown: doctor, manager, nurse, and secretary.
Although Google changes search results based on location, you were likely shown some pretty stereotypical results. Mostly men for doctor and manager, and mostly women for nurse and secretary.
And most of them were white faces. And some of the secretary images may have been a little offensive.
All of this from a machine.
But the machine has no values or opinions. It has no concept of right or wrong. It doesn’t think.
It just does what it was designed to do. Just as who writes a program matters, so does who trains a machine. What data they use. What they say is the ‘correct’ answer.
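Here’s a toy sketch of what that means in practice (the data is invented, and I’m assuming scikit-learn is available): train a model on biased hiring decisions as the ‘correct’ answers, and it faithfully reproduces the bias.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring data: [years_experience, gender],
# with gender encoded as 0 = male, 1 = female.
X_train = [
    [2, 0], [4, 0], [6, 0], [8, 0],  # men: mostly hired
    [2, 1], [4, 1], [6, 1], [8, 1],  # women, same experience: mostly not
]
y_train = [1, 1, 1, 1, 0, 0, 0, 1]  # the biased 'correct' answers

model = LogisticRegression().fit(X_train, y_train)

# Two candidates, identical except for gender.
print(model.predict_proba([[5, 0]])[0][1])  # hire probability, male
print(model.predict_proba([[5, 1]])[0][1])  # hire probability, female
```

The model isn’t sexist. It has no concept of sexism. It just learned the pattern it was shown.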
If this still seems a bit abstract, consider the stakes: in the next decade, developers will have more impact on the future of healthcare than doctors (ref: ch 7).
I’ve been away, so this newsletter is a bit shorter. I’ve continued to look at AI and how it can affect us. This is the second in the series on AI; next time I’ll look at how AI is used in recruitment. And I promise a topic on candidate experience as well!
P.S. Know someone who’d like to get this email? You can point them to idealrole.com/newsletter to sign up.
A good introduction to the different types of bias that exist, which ones are relevant for machine learning and AI, and why they matter. It’s short and to the point, and the analogies make it easy to follow.
A look at four areas where bias can get into AI:
The article provides context with some great real-world examples.
The article looks at three key stages where AI bias creeps in: how the problem is framed, how the data is collected, and how it’s prepared. It then highlights the four main challenges in fixing this bias.
Most of the challenges seem to come down to the complexity of social norms, specifically how to express them in terms a machine can understand. Perhaps more input from the social sciences could help.
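What does translating a social norm into machine terms actually look like? Here’s one hand-rolled sketch in Python of the ‘four-fifths rule’ that US employment law uses to flag adverse impact. The numbers are made up, and the rule itself is only a crude translation of a much richer norm.

```python
# 1 = the candidate advanced to interview, 0 = rejected (hypothetical data)
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(protected_group, reference_group):
    """Ratio of selection rates; below 0.8 is a common red flag."""
    return selection_rate(protected_group) / selection_rate(reference_group)

women = [1, 0, 0, 1, 0, 0, 0, 0]  # 25% selected
men   = [1, 1, 0, 1, 1, 0, 1, 0]  # 62.5% selected

print(disparate_impact(women, men))  # 0.4 - well below the 0.8 threshold
```

Even this simple metric shows the difficulty: a system can pass the check and still be unfair, or fail it for reasons that have nothing to do with bias.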
Chinese researchers claim their AI can identify criminals from face images. An excellent review of this claim and the naive assumptions behind it. But also a warning of how AI can be misused and sold as superior to human judgement. I’d hate to think of the consequences if this were used by an authoritarian regime...
👉 You can find the written version here
"The foxes are guarding the henhouse". Can companies balance their financial interests with the interests of society? And whose values should they follow?
While the focus is AI, the themes covered are broad. Would you work for a company designing more accurate weapons? What if those weapons saved lives by reducing civilian casualties? There are no easy answers, but it’s an essential debate.
Data can be messy, and some points just don’t fit. By removing these outliers we get a much cleaner data set, one that trains our AI algorithm faster. A great conversation about how removing those data points can lead to exclusion, particularly for people with disabilities and minority groups.
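To make that concrete, here’s a toy sketch in Python (the data and the scenario are entirely made up): a standard two-sigma outlier filter quietly drops the one user whose data doesn’t look like everyone else’s.

```python
import statistics

# Hypothetical typing speeds (words per minute) from a usability study.
# The slow entry at the end isn't noise - imagine it comes from a user
# with a motor disability.
speeds = [62, 68, 71, 65, 70, 66, 73, 69, 64, 67, 18]

mean = statistics.mean(speeds)
stdev = statistics.stdev(speeds)

# A common clean-up step: keep only points within two standard deviations.
kept = [s for s in speeds if abs(s - mean) / stdev < 2]

print(kept)  # the 18 wpm entry is gone - and so is that user's representation
```

The cleaned data set trains faster and scores better, but the model now knows nothing about how that user types.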
That’s all for now. If you have any tips on how I can improve this newsletter, I’d love to hear them.