Gal Yona

How Fair Can We Be

Computer Science PhD candidate at the Weizmann Institute of Science



Bio

Gal Yona is a Computer Science PhD candidate at the Weizmann Institute of Science. Her research is aimed at making machine learning methods more reliable in human-facing applications, with a focus on defining and promoting fairness and non-discrimination. Before her PhD, Gal worked as a data scientist at the digital forensics company Cellebrite.


Abstract

Machine learning is increasingly used to drive predictions and inform consequential decisions about individuals; examples range from estimating a felon’s recidivism risk to determining whether a patient is a good candidate for a medical treatment. There is, however, a growing concern that these tools may inadvertently (or not) discriminate against individuals or groups.


In this talk, I will give an overview of some of the recent attempts at formally defining when a machine learning procedure is unfair and providing algorithms that provably mitigate such unfairness. My focus in this talk will be on subgroup fairness, a particular type of guarantee that significantly strengthens existing fairness notions by asking that they hold with respect to a rich collection of (possibly intersecting) subgroups of individuals.
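As an informal illustration of the kind of guarantee subgroup fairness strengthens, the sketch below audits a predictor's calibration within a collection of possibly intersecting subgroups rather than only overall. This is a minimal toy example, not code from the talk or from the Clalit collaboration; the function name, the subgroup collection, and the data are all hypothetical.

```python
# Hypothetical sketch: a predictor can be well calibrated on the whole
# population yet badly miscalibrated on an intersecting subgroup.
# Subgroup fairness asks the guarantee to hold on every subgroup in
# a rich collection G, not just on the population as a whole.

def subgroup_calibration_gaps(predictions, outcomes, subgroups):
    """For each named subgroup (given as a list of member indices),
    return the absolute gap between the mean prediction and the
    empirical outcome rate within that subgroup."""
    gaps = {}
    for name, members in subgroups.items():
        if not members:
            continue  # skip empty subgroups to avoid division by zero
        mean_pred = sum(predictions[i] for i in members) / len(members)
        base_rate = sum(outcomes[i] for i in members) / len(members)
        gaps[name] = abs(mean_pred - base_rate)
    return gaps

# Toy data: constant prediction 0.5 is perfectly calibrated overall
# (2 of 4 positives), but on the intersection subgroup {2, 3} the
# true outcome rate is 1.0, a calibration gap of 0.5.
predictions = [0.5, 0.5, 0.5, 0.5]
outcomes    = [0, 0, 1, 1]
subgroups = {
    "all":          [0, 1, 2, 3],
    "intersection": [2, 3],
}
gaps = subgroup_calibration_gaps(predictions, outcomes, subgroups)
```

Here `gaps["all"]` is 0.0 while `gaps["intersection"]` is 0.5, showing why a population-level check can miss discrimination against a subgroup.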


I will give some intuition to the theory behind this approach, and present the results of our recent collaboration with the Clalit Research Institute, demonstrating that this approach can be made practical on real medical risk prediction tasks.


Planned Agenda
