Manipulating and Measuring Model Interpretability

With the increased use of machine learning in decision-making scenarios, there has been a growing interest in creating human-interpretable machine learning models. While many such models have been proposed, there have been relatively few experimental studies of whether these models achieve their intended effects, such as encouraging people to follow the model’s predictions when it is correct and to deviate when it makes a mistake. We present a series of randomized, pre-registered experiments comprising 3,800 participants in which people were shown functionally identical models that varied only in two factors thought to influence interpretability: the number of input features and the model’s transparency (clear or black-box). Predictably, participants who were shown a clear model with a small number of features were better able to simulate the model’s predictions. However, contrary to what one might expect when manipulating interpretability, we found no improvement in the degree to which participants followed the model’s predictions when it was beneficial to do so. Even more surprisingly, increased transparency hampered people’s ability to detect when the model made a sizable mistake and correct for it, seemingly due to information overload. These counterintuitive results suggest that decision scientists creating interpretable models should harbor a healthy skepticism of their intuitions and empirically verify that interpretable models achieve their intended effects.
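
The design described above crosses two factors (number of input features, clear vs. black-box presentation) across functionally identical models. As a purely illustrative sketch, not the authors' code, the snippet below randomly assigns a participant pool of 3,800 to the implied 2x2 conditions; the condition labels, function names, and the use of simple uniform random assignment are assumptions.

```python
import itertools
import random

# Hypothetical condition labels for the two manipulated factors.
FEATURE_COUNTS = ["few-features", "many-features"]   # number of input features shown
TRANSPARENCY = ["clear", "black-box"]                # whether model internals are visible

# The 2x2 factorial conditions implied by the abstract.
CONDITIONS = list(itertools.product(FEATURE_COUNTS, TRANSPARENCY))


def assign_conditions(n_participants, seed=0):
    """Randomly assign each participant to one of the four conditions."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]


if __name__ == "__main__":
    assignments = assign_conditions(3800)
    # Tally how many participants landed in each condition.
    counts = {c: assignments.count(c) for c in CONDITIONS}
    for condition, count in sorted(counts.items()):
        print(condition, count)
```

Because the models shown to participants are functionally identical, any differences in simulation accuracy, prediction-following, or error detection can be attributed to these presentation factors rather than to model performance.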

Focus: Methods or Design
Source: arXiv
Readability: Expert
Type: Website Article
Open Source: No
Keywords: N/A
Learn Tags: AI and Machine Learning
Summary: A series of experiments found that manipulations intended to make decision-making models more interpretable had negligible, and in some cases detrimental, effects on participants' decisions, suggesting that more rigorous empirical work is needed to ensure interpretable models achieve their intended effects.