Alternative Ruleset Discovery to Support Black-Box Model Predictions
Yoichi SASAKI (NEC Corporation)
Yuzuru OKAJIMA (NEC Corporation)
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Citation:
Yoichi SASAKI, Yuzuru OKAJIMA, "Alternative Ruleset Discovery to Support Black-Box Model Predictions," in IEICE TRANSACTIONS on Information and Systems, vol. E106-D, no. 6, pp. 1130-1141, June 2023, doi: 10.1587/transinf.2022EDP7176.
Abstract: The increasing attention to the interpretability of machine learning models has led to the development of methods that explain the behavior of black-box models in a post-hoc manner. However, such post-hoc approaches generate a new explanation for every new input, and these explanations cannot be checked by humans in advance. A method that selects decision rules from a finite ruleset as explanations for neural networks has been proposed, but it cannot be used for other models. In this paper, we propose a model-agnostic explanation method to find a pre-verifiable finite ruleset from which a decision rule is selected to support every prediction made by a given black-box model. First, we define an explanation model that selects, from a ruleset, the rule that gives the closest prediction; this rule works as an alternative explanation or supportive evidence for the prediction of the black-box model. The ruleset should have high coverage so that it gives close predictions for future inputs, but it should also be small enough to be checkable by humans in advance. However, minimizing the ruleset while keeping high coverage leads to a computationally hard combinatorial problem. Hence, we show that this problem can be reduced to a weighted MaxSAT problem composed only of Horn clauses, which can be solved efficiently with modern solvers. Experimental results showed that our method found small rulesets whose selected rules achieve higher accuracy on structured data than an existing method using rulesets of almost the same size. We also experimentally compared the proposed method with two purely rule-based models, CORELS and defragTrees. Furthermore, we examine rulesets constructed for real datasets and discuss the characteristics of the proposed method from different viewpoints, including interpretability, limitations, and possible use cases.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7176/_p
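The abstract describes an explanation model that, for each input, selects from a fixed ruleset the rule whose prediction is closest to the black-box prediction. Below is a minimal Python sketch of that selection step; the representation of rules as callable conditions paired with scalar predictions is our assumption for illustration, not the paper's exact formalization.

# A minimal sketch of closest-rule selection, assuming rules are
# (condition, prediction) pairs. Illustrative only; not the paper's code.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # e.g. lambda x: x["age"] > 30
    prediction: float                  # the rule's own output value

def explain(x: dict, f_x: float, ruleset: List[Rule]) -> Optional[Rule]:
    """Return the rule covering x whose prediction is closest to f(x)."""
    covering = [r for r in ruleset if r.condition(x)]
    if not covering:
        return None  # x is uncovered; this is why ruleset coverage matters
    return min(covering, key=lambda r: abs(r.prediction - f_x))

# Toy usage with a hypothetical black-box prediction f(x) = 0.8:
rules = [Rule(lambda x: x["age"] > 30, 0.9),
         Rule(lambda x: x["income"] < 50_000, 0.2)]
best = explain({"age": 42, "income": 40_000}, f_x=0.8, ruleset=rules)
print(best.prediction)  # 0.9 -- the closest covering rule

The abstract also states that minimizing the ruleset while keeping high coverage reduces to a weighted MaxSAT problem. The sketch below encodes the generic size-versus-coverage trade-off as weighted MaxSAT using the PySAT library (pip install python-sat); it is a plain set-cover-style encoding for intuition only and does not reproduce the paper's specific Horn-clause reduction.

# Size-vs-coverage trade-off as weighted MaxSAT (generic sketch, not the
# paper's Horn-only encoding). Requires PySAT: pip install python-sat
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# covers[i] = indices of candidate rules whose condition matches example i
# (toy data; in practice these come from a mined candidate ruleset)
covers = [[0, 1], [1], [1, 2], [2, 3]]
n_rules = 4

wcnf = WCNF()
# Boolean variable j+1 means "rule j is selected".
for j in range(n_rules):
    wcnf.append([-(j + 1)], weight=1)                  # soft: prefer fewer rules
for rule_ids in covers:
    wcnf.append([j + 1 for j in rule_ids], weight=10)  # soft: cover each example

model = RC2(wcnf).compute()  # optimal assignment as a list of literals
selected = [j for j in range(n_rules) if (j + 1) in model]
print("selected rules:", selected)  # here: [1, 2] covers all four examples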
BibTeX:
@ARTICLE{e106-d_6_1130,
author={Yoichi SASAKI and Yuzuru OKAJIMA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Alternative Ruleset Discovery to Support Black-Box Model Predictions},
year={2023},
volume={E106-D},
number={6},
pages={1130-1141},
doi={10.1587/transinf.2022EDP7176},
ISSN={1745-1361},
month={June}
}
RIS:
TY - JOUR
TI - Alternative Ruleset Discovery to Support Black-Box Model Predictions
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1130
EP - 1141
AU - Yoichi SASAKI
AU - Yuzuru OKAJIMA
PY - 2023
DO - 10.1587/transinf.2022EDP7176
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E106-D
IS - 6
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - June 2023
ER -