Many people seem to assume that AI is by definition "neutral" because we are talking about computers. I taught a class on big data to social science students, and we discussed racial bias in facial recognition. Many students were shocked to discover the many ways bias was built in, even when it wasn't out of malice. The issue you describe is similar. I can't believe it's 2025 and we still keep failing to learn from the past!
Yes! That illusion of neutrality is so dangerous, especially in health. What you described in your class is exactly what we’re now seeing in diagnostics and digital health tools. We must learn from the past, because bias in AI isn’t just academic. It’s clinical. And costly. Thank you for adding your voice here.
Thank you, Maryann, for your care and for driving this space. I do my bit to encourage women to be their own health advocates because of the many gaps we have to fill and solutions we have to fight for. I often think about the impact of AI on health practitioners, but not once had I thought about the databank shaping our future, so thank you for opening my eyes to that. I see the conflicts and contradictions, which makes it a lottery when it comes to finding the right practitioners and getting the best solutions. In recent editions of my weekly newsletter I ask women to ask their practitioners what information sources they are using to arrive at their recommendations. I will certainly be leaning into that more with them. And I’ll be following your efforts too. Well done and thank you. Anita xx P.S. I have a podcast here on Substack. What a fantastic topic this would be to talk about, if you’d like to join me. Axx
Anita, thank you so much for this thoughtful note and for the work you’re doing to equip women as their own health advocates. You are absolutely right: there is a lottery effect in how care is delivered, and interrogating the data behind those decisions is a vital piece of the puzzle. I’m glad the piece sparked something for you and I’ll be sure to keep an eye on the work you’re doing as well.