
Interpretability and Explainability in AI Using Python

Decrypt AI Decision-Making Using Interpretability and Explainability with Python to Build Reliable Machine Learning Systems (English Edition)

By: Aruna Chakkirala

eText | 15 April 2025 | Edition Number 1

At a Glance

eText


$27.45

Demystify AI Decisions and Master Interpretability and Explainability Today

Key Features

- Master interpretability and explainability in ML, deep learning, Transformers, and LLMs
- Implement XAI techniques using Python for model transparency
- Learn global and local interpretability with real-world examples

Book Description

Interpretability in AI/ML refers to the ability to understand and explain how a model arrives at its predictions. It ensures that humans can follow the model's reasoning, making it easier to debug, validate, and trust.

Interpretability and Explainability in AI Using Python takes you on a structured journey through interpretability and explainability techniques for both white-box and black-box models.

You'll start with foundational concepts in interpretable machine learning, exploring different model types and their transparency levels. As you progress, you'll dive into post-hoc methods, feature effect analysis, anchors, and counterfactuals—powerful tools to decode complex models. The book also covers explainability in deep learning, including Neural Networks, Transformers, and Large Language Models (LLMs), equipping you with strategies to uncover decision-making patterns in AI systems.
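The book's own examples are not reproduced on this page, but as a flavor of the post-hoc methods it covers, here is a minimal sketch of one widely used technique, permutation importance, using scikit-learn. The dataset and model choices here are illustrative, not taken from the book.

```python
# Post-hoc global explanation sketch: permutation importance measures how
# much held-out accuracy drops when one feature's values are shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=5,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: mean importance drop {result.importances_mean[i]:.3f}")
```

Because it only needs predictions, this kind of method applies to any black-box model, which is what makes post-hoc analysis so broadly useful.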

Through hands-on Python examples, you'll learn how to apply these techniques in real-world scenarios. By the end, you'll be well-versed in choosing the right interpretability methods, implementing them efficiently, and ensuring AI models align with ethical and regulatory standards—giving you a competitive edge in the evolving AI landscape.
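As a taste of the counterfactual style of explanation mentioned above, the following is a hypothetical sketch (not code from the book): starting from one sample, it nudges the feature with the largest coefficient until a logistic-regression model flips its prediction. The dataset, feature choice, and step size are all illustrative assumptions.

```python
# Counterfactual sketch: find a small change to one feature that flips
# the model's predicted class for a single sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
original_class = model.predict([x])[0]

# Pick the feature with the largest coefficient magnitude and move it in
# the direction that pushes the decision function toward the other class.
j = int(np.argmax(np.abs(model.coef_[0])))
direction = np.sign(model.coef_[0][j]) * (1 if original_class == 0 else -1)

counterfactual = x.copy()
for _ in range(2000):
    counterfactual[j] += 0.05 * direction
    if model.predict([counterfactual])[0] != original_class:
        break

print(f"class flipped from {original_class} by moving feature {j} "
      f"from {x[j]:.2f} to {counterfactual[j]:.2f}")
```

A counterfactual like this answers the practical question "what is the smallest change that would have changed the decision?", which is often the form of explanation regulators and end users actually want.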

What you will learn

- Dissect key factors influencing model interpretability and its different types.
- Apply post-hoc and inherent techniques to enhance AI transparency.
- Build explainable AI (XAI) solutions using Python frameworks for different models.
- Implement explainability methods for deep learning at global and local levels.
- Explore cutting-edge research on transparency in transformers and LLMs.
- Learn the role of XAI in Responsible AI, including key tools and methods.

Table of Contents

1. Interpreting Interpretable Machine Learning
2. Model Types and Interpretability Techniques
3. Interpretability Taxonomy and Techniques
4. Feature Effects Analysis with Plots
5. Post-Hoc Methods
6. Anchors and Counterfactuals
7. Interpretability in Neural Networks
8. Explainable Neural Networks
9. Explainability in Transformers and Large Language Models
10. Explainability and Responsible AI

Index

About the Author

Aruna Chakkirala is a seasoned technical leader who currently serves as an AI Solutions Architect at Microsoft. She was instrumental in the early adoption of Generative AI and constantly strives to keep pace with the evolving domain. As a Data Scientist, she has built supervised and unsupervised models to address cybersecurity problems. She holds a patent for her pioneering work in community detection for DNS querying. Her technical expertise spans multiple domains, including networks, security, cloud, big data, and AI. She believes that the success of real-world AI applications increasingly depends on well-defined architectures across all the domains involved. Her current interests include Generative AI, applications of LLMs and SLMs, causality, mechanistic interpretability, and explainability tools.
