"explainable ai" Papers
33 papers found
AI2TALE: An Innovative Information Theory-based Approach for Learning to Localize Phishing Attacks
Van Nguyen, Tingmin Wu, Xingliang Yuan et al.
AIGI-Holmes: Towards Explainable and Generalizable AI-Generated Image Detection via Multimodal Large Language Models
Ziyin Zhou, Yunpeng Luo, Yuanchen Wu et al.
Data-centric Prediction Explanation via Kernelized Stein Discrepancy
Mahtab Sarvmaili, Hassan Sajjad, Ga Wu
Explainably Safe Reinforcement Learning
Sabine Rieder, Stefan Pranger, Debraj Chakraborty et al.
LeapFactual: Reliable Visual Counterfactual Explanation Using Conditional Flow Matching
Zhuo Cao, Xuan Zhao, Lena Krieger et al.
On Logic-based Self-Explainable Graph Neural Networks
Alessio Ragno, Marc Plantevit, Céline Robardet
Provable Gradient Editing of Deep Neural Networks
Zhe Tao, Aditya V. Thakur
Regression-adjusted Monte Carlo Estimators for Shapley Values and Probabilistic Values
R. Teal Witter, Yurong Liu, Christopher Musco
Representational Difference Explanations
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona
Seeing Through Deepfakes: A Human-Inspired Framework for Multi-Face Detection
Juan Hu, Shaojing Fan, Terence Sim
SHAP values via sparse Fourier representation
Ali Gorji, Andisheh Amrollahi, Andreas Krause
Smoothed Differentiation Efficiently Mitigates Shattered Gradients in Explanations
Adrian Hill, Neal McKee, Johannes Maeß et al.
VERA: Explainable Video Anomaly Detection via Verbalized Learning of Vision-Language Models
Muchao Ye, Weiyang Liu, Pan He
Accelerating the Global Aggregation of Local Explanations
Alon Mor, Yonatan Belinkov, Benny Kimelfeld
Attribution-based Explanations that Provide Recourse Cannot be Robust
Hidde Fokkema, Rianne de Heide, Tim van Erven
Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles
Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer et al.
CGS-Mask: Making Time Series Predictions Intuitive for All
Feng Lu, Wei Li, Yifei Sun et al.
Counterfactual Metarules for Local and Global Recourse
Tom Bewley, Salim I. Amoukou, Saumitra Mishra et al.
EiG-Search: Generating Edge-Induced Subgraphs for GNN Explanation in Linear Time
Shengyao Lu, Bang Liu, Keith Mills et al.
Enhance Sketch Recognition’s Explainability via Semantic Component-Level Parsing
Guangming Zhu, Siyuan Wang, Tianci Wu et al.
Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals
Patrick Altmeyer, Mojtaba Farmanbar, Arie van Deursen et al.
Gaussian Process Neural Additive Models
Wei Zhang, Brian Barr, John Paisley
Generating In-Distribution Proxy Graphs for Explaining Graph Neural Networks
Zhuomin Chen, Jiaxing Zhang, Jingchao Ni et al.
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami, Moritz Böhle, Sukrut Rao et al.
Graph Neural Network Explanations are Fragile
Jiate Li, Meng Pang, Yun Dong et al.
Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning
Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann et al.
Learning Performance Maximizing Ensembles with Explainability Guarantees
Vincent Pisztora, Jia Li
Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution
Eslam Zaher, Maciej Trzaskowski, Quan Nguyen et al.
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box
Yi Cai, Gerhard Wunder
Position: Do Not Explain Vision Models Without Context
Paulina Tomaszewska, Przemyslaw Biecek
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang, Shiwei Tan, Hao Wang
Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA
Chengen Lai, Shengli Song, Shiqi Meng et al.
Using Stratified Sampling to Improve LIME Image Explanations
Muhammad Rashid, Elvio G. Amparore, Enrico Ferrari et al.