Clinical use of AI in Medicine
Reading for Tuesday, 3 February (repeated from Previous Post, but with date updated due to snow day):
- Ethan Goh, Robert J. Gallo, Eric Strong, Yingjie Weng, Hannah Kerman, Jason A. Freed, Joséphine A. Cool, Zahir Kanjee, Kathleen P. Lane, Andrew S. Parsons, Neera Ahuja, Eric Horvitz, Daniel Yang, Arnold Milstein, Andrew P. J. Olson, Jason Hom, Jonathan H. Chen and Adam Rodman. GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial. Nature Medicine, February 2025. [PDF Link] [Web Link]
AI Bias and Interpretability
Our next main topic, which will span the Thursday, 5 February and Tuesday, 10 February classes, is biases in AI systems and how to measure and mitigate them. There is already a vast literature on this topic, with entire courses and research agendas focused on it, so we will only see a small slice of it in these two classes (and may have more classes later that go into more depth or touch on other aspects).
Readings for Thursday, 5 February:
- (Expected for Everyone) Kyra Wilson and Aylin Caliskan. Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval. AAAI/ACM Conference on AI, Ethics, and Society (AIES2024). [PDF Link] [Original Source]
Some additional readings (that are not expected for everyone, but are optional, and potentially readings the Lead Team will include):
- Chapter 3: Classification from Solon Barocas, Moritz Hardt, Arvind Narayanan. Fairness and machine learning: Limitations and Opportunities (“FairML book”). MIT Press, 2023. (The full book is openly available at https://fairmlbook.org.) [Chapter Link]
Readings for Tuesday, 10 February:
- Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, May 2019. [PDF Link] [arXiv version (less nicely formatted, but with fixed equations)]