About the Book
Machine learning is transforming fields from healthcare diagnostics to climate change prediction thanks to its predictive performance. However, complex machine learning models often lack interpretability, which is more essential than ever for debugging, fostering trust, and communicating model insights.
Introducing SHAP, the Swiss army knife of machine learning interpretability:
- SHAP can be used to explain individual predictions.
- By combining explanations for individual predictions, SHAP lets you study the overall model behavior.
- SHAP is model-agnostic – it works with any model, from simple linear regression to deep learning.
- With its flexibility, SHAP can handle various data formats, whether it’s tabular, image, or text.
- The Python package shap makes applying SHAP for model interpretation easy.
This book will be your comprehensive guide to mastering the theory and application of SHAP. It starts with SHAP's fascinating origins in game theory and explores what splitting taxi costs has to do with explaining machine learning predictions. Beginning with SHAP for a simple linear regression model, the book progressively introduces SHAP for more complex models. You'll learn the ins and outs of the most popular explainable AI method and how to apply it using the shap package.
In a world where interpretability is key, this book is your roadmap to mastering SHAP, for machine learning models that are not only accurate but also interpretable.
Who This Book Is For
This book is for data scientists, statisticians, machine learning practitioners, and anyone who wants to make machine learning models more interpretable. To get the most out of the book, you should already be familiar with machine learning, and you should know your way around Python to follow the code examples.
What's in the Book
Note: Please be aware that the ePub version uses MathML for mathematical notation and may not be compatible with all eReaders. Leanpub has a 60-day "100% Happiness Guarantee", so don't hesitate to just try it out. Your purchase also includes the PDF version, where the equations render correctly.
- A Short History of Shapley Values and SHAP
- Theory of Shapley Values
- From Shapley Values to SHAP
- Estimating SHAP Values
- SHAP for Linear Models
- Classification with Logistic Regression
- SHAP for Additive Models
- Understanding Feature Interactions with SHAP
- The Correlation Problem
- Regression Using a Random Forest
- Image Classification with Partition Explainer
- Image Classification with Deep and Gradient Explainer
- Explaining Language Models
- Limitations of SHAP
- Building SHAP Dashboards with Shapash
- Alternatives to the shap Library
- Extensions of SHAP
- Other Applications of Shapley Values in Machine Learning
- SHAP Estimators
- The Role of Maskers and Background Data
About the Author (Christoph Molnar)
I'm on a mission to make algorithms more interpretable by combining machine learning and statistics. I'm the author of the free online book Interpretable Machine Learning, have a background in both statistics and machine learning, and did my Ph.D. in interpretable machine learning. After a mix of data science jobs and academia, I'm now a full-time machine learning book author.