CS Colloquium - Towards a Future Powered By Safe and Human-Centered AI and ML

Feb 14, 2024

03:30 PM - 04:30 PM

MacLean Hall, 110

2 West Washington Street, Iowa City, IA 52240


Anshuman Chhabra portrait - courtesy of https://dataskeptic.com/blog/episodes/2022/fair-hierarchical-clustering


Anshuman Chhabra


With the growing adoption of ML and AI in society, there is an increasing need to ensure that models (and the systems that employ them) are centered on human safety so as to minimize harm to end users. In this talk, I will motivate this problem through multiple examples and discuss my work on safety risks spanning security, robustness, fairness, and predictive accuracy. I will describe my efforts along two thrusts: (1) advancing the science behind safe ML/AI, and (2) improving real-world systems to minimize harm. These efforts offer both theoretical guarantees and empirical efficacy, illustrating the synergy of theory and practical application. To conclude, I will discuss two exciting directions for future work in this domain: (1) accelerating Generative AI alignment efforts, and (2) utilizing Generative AI for social good.


Anshuman Chhabra received his PhD (September 2023) from the University of California, Davis, where he was advised by Prof. Prasant Mohapatra. His research seeks to safeguard users from harm by curbing the negative behavior of foundation ML/AI models as well as the real-world systems that employ them. He received the UC Davis Graduate Student Fellowship in 2018 and has held research positions at Lawrence Berkeley National Laboratory (2017), the Max Planck Institute for Software Systems, Germany (2020), and the University of Amsterdam, Netherlands (2022). His research has been funded by the NSF, the Army Research Laboratory, and the Robert N. Noyce Foundation.

Individuals with disabilities are encouraged to attend all University of Iowa–sponsored events. If you are a person with a disability who requires a reasonable accommodation in order to participate in this program, please contact us in advance at