Get more out of XAI: 10 Tips. Explainable AI is about more than… | by Conor O’Sullivan | Feb, 2024



Explainable AI is about more than applying algorithms

Photo by Marten Newhall on Unsplash

I remember the first time I used SHAP. Well, tried to use it. I wanted to understand an XGBoost model trained on over 40 features, many of which were highly correlated. The plots looked cool! But that was pretty much it.

It wasn’t at all clear how the model was making predictions. And it wasn’t the XAI method’s fault… the underlying data was a mess. This was my first realisation that:

XAI methods are not a silver bullet.

You can’t fire them at complex models and expect reasonable explanations for their inner workings. Yet, if used correctly, they can provide incredible insight.
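One concrete way this goes wrong: when two features carry the same signal, SHAP can split the credit between them arbitrarily, so the explanation depends on noise rather than the model's real logic. A quick sanity check is to flag highly correlated feature pairs before trusting the plots. Below is a minimal sketch of such a check; the `correlated_pairs` helper, the 0.9 threshold, and the toy data are illustrative assumptions, not from the article.

```python
import numpy as np

def correlated_pairs(X, names, threshold=0.9):
    """Return feature pairs whose absolute Pearson correlation exceeds threshold.

    X: (n_samples, n_features) array; names: list of feature names.
    """
    corr = np.corrcoef(X, rowvar=False)  # feature-by-feature correlation matrix
    pairs = []
    n = corr.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], corr[i, j]))
    return pairs

# Toy data: feature "b" is a noisy copy of "a"; "c" is independent.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([
    a,
    a + rng.normal(scale=0.05, size=200),  # near-duplicate of "a"
    rng.normal(size=200),
])
print(correlated_pairs(X, ["a", "b", "c"]))  # flags only the ("a", "b") pair
```

Any pair this flags is a candidate for dropping, combining, or at least interpreting jointly before you read too much into per-feature SHAP values.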

I have learned a lot since that first attempt at understanding a black-box model, and I have narrowed the lessons down to 10 tips. Seen below, they fall roughly into three groups: the first four tips focus on the underlying data used to train models, the next four on you as a user of XAI methods, and the last two on more technical considerations.

(source: author)

You may also enjoy this video on the topic. And, if you want to learn more, check out my course — XAI with Python. You can get free access if you sign up to my newsletter.
