Interpretable Machine Learning: Methods for understanding complex models
4:25pm - 4:55pm on Friday, October 5 in PennTop South
As machine learning models become prevalent, it has become increasingly important to understand how these models arrive at a particular decision. In this talk, I will discuss various methodologies data scientists can use to understand how black-box models produce a particular prediction.
Today, businesses use algorithmic decision-making in applications such as determining who gets a bank loan, evaluating a teacher's performance, and other areas that greatly affect people's livelihoods. In these applications, understanding why a statistical model makes a particular prediction can be as important as its accuracy. However, these models are often complex black boxes that are difficult or impossible for humans to understand. For people whose lives are affected by these algorithms, this lack of interpretability creates serious problems, as these individuals are unable to improve their outcomes. In this talk, I will discuss various definitions of global and local interpretability for machine learning models. Next, I will discuss methodologies for better understanding how a model arrived at a prediction for a particular test instance.
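To give a flavor of the local-interpretation methods the talk surveys, here is a minimal sketch of one simple idea: perturb each feature of a single test instance and measure how much the black-box prediction moves. The model, feature names, and instance below are hypothetical stand-ins invented for illustration, not taken from the talk.

```python
def black_box_model(x):
    # Hypothetical opaque scoring function (e.g., a loan-approval score).
    # In practice this would be a trained model whose internals we cannot inspect.
    income, debt, years_employed = x
    return 0.5 * income - 0.8 * debt + 0.2 * years_employed

def local_sensitivity(model, instance, eps=1e-4):
    """Estimate each feature's local influence on the prediction for one
    instance via finite-difference perturbation around that instance."""
    base = model(instance)
    influences = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += eps
        influences.append((model(perturbed) - base) / eps)
    return influences

# Hypothetical applicant: [income, debt, years_employed]
instance = [3.0, 1.5, 4.0]
print(local_sensitivity(black_box_model, instance))
```

This is only a sensitivity analysis; tools such as LIME and SHAP build more robust local explanations on related perturbation ideas, which the talk covers in more depth.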