6 Comments
Jan Van den Bulck

Many people seem to assume that AI is by definition "neutral" because we are talking about computers. I taught a class on big data to social science students, and we discussed racial issues in facial recognition. Many students were shocked to find out the many ways bias was built in, even if it wasn't out of malice. The issue you talk about is similar. I can't believe it's 2025 and we still haven't learned from the past!

Maryann

Yes! That illusion of neutrality is so dangerous, especially in health. What you described in your class is exactly what we’re now seeing in diagnostics and digital health tools. We must learn from the past, because bias in AI isn’t just academic. It’s clinical. And costly. Thank you for adding your voice here.

Midlife Unfiltered

Thank you, Maryann, for your care and for driving this space. I do my bit to encourage women to be their own health advocates because of the many gaps we have to fill and solutions to fight for. I often think of the impacts of AI on health practitioners, but not once have I thought about the databank shaping our future. So thank you for opening my eyes to that. I see the conflicts and contradictions, which make it a lottery when it comes to finding the right practitioners to get the best solutions. In recent editions of my weekly newsletter I ask women to ask their practitioners what information sources they are using to arrive at their recommendations. I will certainly be leaning into that more with them. And I’ll be following your efforts too. Well done and thank you. Anita xx

p.s. I have a podcast here on Substack. What a fantastic topic this would be to talk about, if you’d like to join me. Axx

Maryann

Anita, thank you so much for this thoughtful note and for the work you’re doing to equip women as their own health advocates. You are absolutely right: there is a lottery effect in how care is delivered, and interrogating the data behind those decisions is a vital piece of the puzzle. I’m glad the piece sparked something for you and I’ll be sure to keep an eye on the work you’re doing as well.

Cassi Clark

Can/will AI be able to detect flawed/bad studies like the menopause-HRT-cancer study or the breech birth-infant mortality study?

Hyperfocus Femme

Thank you for sharing your work and this important information (and I love your article formatting; it's so easy and enjoyable to read!). I am just a regular woman concerned about her health, like anyone else (I subscribed because you are a unique type of investor, with interesting perspectives I wish I saw more of in business!), but this is helping me stay informed about what to watch out for when I interact with the healthcare system, especially knowing that AI integration is already here and only set to expand.

I'm currently reading “AI Snake Oil” by Arvind Narayanan and Sayash Kapoor (a fascinating read so far!), and this article very much echoes their sentiment about how we need to be critical of WHAT these AI models are being trained on, because “bad AI” products are often a matter of bad inputs leading to bad outputs, not the computational method itself being bad.

However, the book also cautions against over-relying on AI to predict behavior and outcomes, because AI cannot fully account for unpredictable events and so cannot reliably foretell the future at much better than 50% accuracy, let alone close to 100%. I hope this is a limitation that more AI companies accept and take into account, so that they market their products accurately instead of overpromising results they cannot deliver. And to add to that, I strongly believe that AI regulation needs to include independent, rigorous statistical accuracy standards, especially if these products are intended for use in human health outcomes.
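To make the garbage-in, garbage-out point concrete, here is a minimal sketch (purely illustrative, using made-up synthetic data and a toy scikit-learn classifier) of how a model trained on a skewed databank can report strong headline accuracy while quietly failing an underrepresented group, which is exactly why independent, per-group accuracy standards matter:

```python
# Toy illustration (hypothetical data): a model trained on skewed inputs
# can look accurate overall while failing an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature whose relationship to the label differs by group:
    # each group has its own decision threshold at `shift`.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Training databank: 95% group A, 5% group B (the skewed inputs).
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluation: report accuracy per group, not just overall.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=3.0)
print("Group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("Group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
```

On this toy setup the model, dominated by group A, scores near perfect for group A while group B hovers around chance, so an "overall accuracy" figure would hide the failure entirely.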
