
Interpreting Machine Learning Models

Learn Model Interpretability and Explainability Methods

Specifications
Paperback | English
Apress | 2021
ISBN13: 9781484278017

Summary

Understand model interpretability methods and apply the most suitable one for your machine learning project. This book details the concepts of machine learning interpretability along with different types of explainability algorithms.

You’ll begin by reviewing the theoretical aspects of machine learning interpretability. In the first few sections you’ll learn what interpretability is, what the common properties of interpretability methods are, the general taxonomy for classifying methods into different categories, and how the methods should be assessed in terms of human factors and technical requirements. Using a holistic approach featuring detailed examples, this book also includes quotes from actual business leaders and technical experts to showcase how real-life users perceive interpretability and its related methods, goals, stages, and properties.

Progressing through the book, you’ll dive deep into the technical details of the interpretability domain. Starting with the general frameworks of the different types of methods, you’ll use a data set to see how each method generates output, with actual code and implementations. The methods are divided into types based on their explanation frameworks, with common categories including feature importance-based methods, rule-based methods, saliency map methods, counterfactuals, and concept attribution. The book concludes by showing how data affects interpretability and some of the pitfalls prevalent when using explainability methods.
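As a hedged taste of the feature importance-based family mentioned above, the sketch below computes permutation importance with scikit-learn; the dataset and model are illustrative assumptions, not examples taken from the book.

```python
# Minimal sketch of a feature importance-based method (permutation
# importance). The breast-cancer dataset and random-forest model are
# illustrative assumptions, not the book's own examples.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out
# accuracy; a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```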

What You’ll Learn

• Understand machine learning model interpretability
• Explore the different properties and selection requirements of various interpretability methods
• Review the different types of interpretability methods used in real life by technical experts
• Interpret the output of various methods and understand the underlying problems

Who This Book Is For 

Machine learning practitioners, data scientists, and statisticians interested in making machine learning models interpretable and explainable; students pursuing courses in data science and business analytics.

Specifications

ISBN13: 9781484278017
Language: English
Binding: paperback
Publisher: Apress

Table of Contents

Chapter 1: Introduction to the Machine Learning Domain
Chapter goal: The book’s opening chapter discusses the journey of machine learning models and why model interpretability has become so important in recent times. It also briefly covers some basic black-box modelling algorithms.
Sub-topics:
• Journey of the machine learning domain
• Journey of machine learning algorithms
• Why reporting only accuracy is not enough for models

Chapter 2: Introduction to Model Interpretability
Chapter goal: This chapter discusses the importance of and need for interpretability, and how businesses employ model interpretability in their decisions.
Sub-topics:
• Why interpretability is needed for machine learning models
• Motivation behind using model interpretability
• Understand social and commercial motivations for machine learning interpretability, fairness, accountability, and transparency
• Get a definition of interpretability and learn about the groups leading interpretability research

Chapter 3: Machine Learning Interpretability Taxonomy
Chapter goal: A machine learning taxonomy is presented in this chapter and used to characterize the interpretability of various popular machine learning techniques.
Sub-topics:
• Understanding and trust
• A scale for interpretability
• Global and local interpretability
• Model-agnostic and model-specific interpretability

Chapter 4: Common Properties of Explanations Generated by Interpretability Methods
Chapter goal: This chapter explains the evaluation metrics for the various interpretability methods, helping readers understand which methods to choose for specific use cases.
Sub-topics:
• Degree of importance
• Stability
• Consistency
• Certainty
• Novelty

Chapter 5: Timeline of Model Interpretability Methods Discovery
Chapter goal: This chapter presents a timeline detailing when the most common interpretability methods were discovered.

Chapter 6: Unified Framework for Model Explanations
Chapter goal: Each method is determined by three choices: how it handles features, what model behavior it analyzes, and how it summarizes feature influence. The chapter covers each step in detail and maps different methods to each step through detailed examples.
Sub-topics:
• Removal-based explanations
• Summarization-based explanations

Chapter 7: Different Types of Removal-Based Explanations
Chapter goal: This chapter covers the different types of removal-based methods and how to implement them, with detailed examples, Python packages, and real-life use cases (a SHAP sketch follows this table of contents).
Sub-topics:
• IME (2009)
• IME (2010)
• QII
• SHAP
• KernelSHAP
• TreeSHAP
• LossSHAP
• SAGE
• Shapley Net Effects
• Shapley Effects
• Permutation tests
• Conditional permutations
• Feature ablation
• Univariate predictors
• L2X
• INVASE
• LIME
• PredDiff
• Occlusion
• CXPlain
• RISE
• MM
• MIR
• MP
• EP
• FIDO-CA

Chapter 8: Different Types of Summarization-Based Explanations
Chapter goal: This chapter covers the different types of summarization-based methods and how to implement them, with detailed examples, Python packages, and real-life use cases.
Sub-topics:
• MAGIX
• Anchor
• Recursive partitioning
• GlocalX

Chapter 9: Model Debugging Using the Output of Interpretability Methods
Chapter goal: This chapter helps readers use the output of interpretability methods and convert it into business-ready text that non-technical business teams can understand.

Chapter 10: Limitations of Popular Methods and the Future of Model Interpretability
Chapter goal: Gives readers a brief understanding of the limitations of some commonly used methods and why business teams find it difficult to deploy models even after using interpretability methods. The chapter also touches on future advances in the interpretability domain.

Chapter 11: Use of Counterfactual Explanations to Better Understand Model Performance and Behaviour
Chapter goal: This chapter introduces readers to the concept of counterfactual explanations, covering both the basics and the advanced workings of the algorithm. It also covers some advanced counterfactual explanation methods in detail (a DiCE sketch follows this table of contents).
Sub-topics:
• Counterfactuals guided by prototypes
• Counterfactual explanations
• MOC (Multi-Objective Counterfactuals)
• DiCE
• CEML

Chapter 12: Limitations and Future Use of Counterfactual Explanations
Chapter goal: This closing chapter on counterfactual explanations discusses the future scope of, and advancements in, the domain of model explainability.
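To make Chapter 7’s removal-based family concrete, below is a hedged sketch of TreeSHAP via the open-source `shap` package; the XGBoost model and dataset are illustrative assumptions, not the book’s own examples.

```python
# Hedged sketch of TreeSHAP (one of Chapter 7's removal-based methods)
# using the `shap` package. Model and data are illustrative choices.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# TreeSHAP computes exact Shapley values for tree ensembles by
# exploiting the tree structure, avoiding an exponential sweep over
# all feature subsets.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature attribution (in log-odds) for the first instance.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```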

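Similarly, Chapter 11’s counterfactual methods include DiCE; the sketch below uses the open-source `dice-ml` package to ask what minimal feature changes would flip a prediction. The dataset, model, and parameters are assumptions for illustration, not taken from the book.

```python
# Hedged sketch of DiCE (Diverse Counterfactual Explanations), one of
# the methods listed in Chapter 11. All concrete choices below are
# illustrative assumptions.
import dice_ml
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
df = data.frame  # features plus a 'target' column

clf = RandomForestClassifier(random_state=0).fit(
    df.drop(columns="target"), df["target"])

# Wrap the data and model in DiCE's interfaces.
d = dice_ml.Data(dataframe=df,
                 continuous_features=list(data.feature_names),
                 outcome_name="target")
m = dice_ml.Model(model=clf, backend="sklearn")
exp = dice_ml.Dice(d, m, method="random")

# Generate counterfactuals that flip the first instance's predicted
# class, showing only the features that changed.
query = df.drop(columns="target").iloc[:1]
cfs = exp.generate_counterfactuals(query, total_CFs=3,
                                   desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```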