Thomas Fel
Brown University, ANITI - Artificial and Natural Intelligence Toulouse Institute
Verified email at brown.edu - Homepage
Title
Cited by
Year
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
T Fel, J Colin, R Cadène, T Serre
NeurIPS 2022, Advances in Neural Information Processing Systems, 2021
74 · 2021
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
T Fel, R Cadene, M Chalvidal, M Cord, D Vigouroux, T Serre
NeurIPS 2021, Advances in Neural Information Processing Systems, 2021
64 · 2021
CRAFT: Concept Recursive Activation FacTorization for Explainability
T Fel, A Picard, L Bethune, T Boissin, D Vigouroux, J Colin, R Cadène, ...
CVPR 2023, Proceedings of the IEEE/CVF Conference on Computer Vision and …, 2023
62 · 2023
How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
T Fel, D Vigouroux, R Cadène, T Serre
WACV 2022, Proceedings of the IEEE/CVF Winter Conference on Applications of …, 2022
59* · 2022
Harmonizing the object recognition strategies of deep neural networks with humans
T Fel, I Felipe, D Linsley, T Serre
NeurIPS 2022, Advances in Neural Information Processing Systems, 2022
54 · 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
T Fel, M Ducoffe, D Vigouroux, R Cadène, M Capelle, C Nicodème, ...
CVPR 2023, Proceedings of the IEEE/CVF Conference on Computer Vision and …, 2022
31 · 2022
Xplique: A Deep Learning Explainability Toolbox
T Fel, L Hervier, D Vigouroux, A Poche, J Plakoo, R Cadene, M Chalvidal, ...
CVPR 2022, Workshop on Explainable Artificial Intelligence for Computer …, 2022
26 · 2022
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
P Novello, T Fel, D Vigouroux
NeurIPS 2022, Advances in Neural Information Processing Systems, 2022
21 · 2022
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
T Fel, V Boutin, M Moayeri, R Cadène, L Bethune, M Chalvidal, T Serre
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
16 · 2023
On the Foundations of Shortcut Learning
KL Hermann, H Mobahi, T Fel, MC Mozer
ICLR 2024, International Conference on Learning Representations, 2023
13 · 2023
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
M Serrurier, F Mamalet, T Fel, L Béthune, T Boissin
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
10* · 2023
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
T Fel, T Boissin, V Boutin, A Picard, P Novello, J Colin, D Linsley, ...
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
10 · 2023
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
D Linsley, IF Rodriguez, T Fel, M Arcaro, S Sharma, M Livingstone, ...
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
8 · 2023
Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
V Boutin, T Fel, L Singhal, R Mukherji, A Nagaraj, J Colin, T Serre
ICML 2023, Proceedings of the International Conference on Machine Learning, 2023
8 · 2023
Can we reconcile safety objectives with machine learning performances?
L Alecu, H Bonnin, T Fel, L Gardes, S Gerchinovitz, L Ponsolle, F Mamalet, ...
ERTS 2022, 2022
8 · 2022
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks
F Jourdan, A Picard, T Fel, L Risser, JM Loubes, N Asher
ACL 2023, Proceedings of the Annual Meeting of the Association for …, 2023
7 · 2023
Conviformers: Convolutionally guided vision transformer
M Vaishnav, T Fel, IF Rodríguez, T Serre
arXiv preprint arXiv:2208.08900, 2022
3 · 2022
Influenciæ: A library for tracing the influence back to the data-points
A Picard, L Hervier, T Fel, D Vigouroux
World Conference on Explainable Artificial Intelligence, 193-204, 2024
2 · 2024
Conformal prediction for trustworthy detection of railway signals
L Andéol, T Fel, F de Grancey, L Mossina
AI and Ethics 4 (1), 157-161, 2024
2 · 2024
Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
C Hamblin, T Fel, S Saha, T Konkle, G Alvarez
arXiv preprint arXiv:2402.10039, 2024
1 · 2024
Articles 1–20