Data Scientist
hard · ds-interpretability
How do you explain model predictions to stakeholders (SHAP, LIME, feature importance)?
Answer
Use explanation methods appropriate to the audience.
- Global importance: which features drive the model's behavior overall (e.g., mean |SHAP| or permutation importance)
- Local explanations: why the model made one specific prediction
- SHAP: consistent, additive attributions that sum to the prediction minus the baseline (see the first sketch after this list)
- LIME: fits a simple, interpretable surrogate model locally around a single prediction (second sketch below)
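A minimal sketch of the global and local views with SHAP, assuming the shap package and an illustrative gradient-boosted regressor on scikit-learn's diabetes dataset (the model and data choices are placeholders, not a prescribed setup):

```python
# Minimal sketch: SHAP global + local attributions for a tree model.
# Assumptions: shap and scikit-learn installed; dataset/model are illustrative.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# TreeExplainer gives exact additive attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)          # (n_samples, n_features)

# Global view: mean |SHAP| per feature -> which features matter overall.
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X_test.columns, global_importance),
                        key=lambda t: -t[1])[:5]:
    print(f"{name:>6s}  mean|SHAP| = {imp:.2f}")

# Local view: per-feature contributions for one prediction;
# they sum to (prediction - expected_value).
i = 0
print("expected value:", explainer.expected_value)
print("prediction:    ", model.predict(X_test.iloc[[i]])[0])
for name, contrib in zip(X_test.columns, shap_values[i]):
    print(f"{name:>6s}  {contrib:+.2f}")
```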
Also validate explanations with domain experts, and make clear that attributions describe the model's behavior, not causal effects (correlation vs. causation).
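When stakeholders want a surrogate they can interrogate feature by feature, LIME's local linear approximation is often easier to discuss than raw attributions. A hedged sketch, assuming the lime package and reusing the model, X_train, and X_test names from the SHAP example above:

```python
# Minimal sketch: LIME local surrogate for one prediction.
# Assumptions: the `lime` package is installed; `model`, `X_train`, `X_test`
# come from the SHAP sketch above (illustrative names only).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    training_data=np.asarray(X_train),
    feature_names=list(X_train.columns),
    mode="regression",   # for classifiers, use mode="classification" with predict_proba
)

# Perturb around the instance, fit a sparse linear surrogate, report its weights.
explanation = lime_explainer.explain_instance(
    data_row=np.asarray(X_test.iloc[0]),
    predict_fn=model.predict,
    num_features=5,
)
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule:>20s}  {weight:+.2f}")
```

Because the surrogate is only valid near the chosen instance, check that its top features agree directionally with SHAP's local attributions before presenting either to stakeholders.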
Related Topics
Interpretability · Machine Learning · Communication