Angelos Katharopoulos
Verified email at apple.com
Title · Cited by · Year
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
A Katharopoulos, A Vyas, N Pappas, F Fleuret
International Conference on Machine Learning (ICML), 2020
Cited by 1639 · 2020
Not all samples are created equal: Deep learning with importance sampling
A Katharopoulos, F Fleuret
International Conference on Machine Learning (ICML), 2018
Cited by 594 · 2018
Fast transformers with clustered attention
A Vyas, A Katharopoulos, F Fleuret
Neural Information Processing Systems (NeurIPS), 2020
Cited by 174 · 2020
Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks
D Paschalidou, A Katharopoulos, A Geiger, S Fidler
Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021
Cited by 116 · 2021
Biased importance sampling for deep neural network training
A Katharopoulos, F Fleuret
arXiv preprint arXiv:1706.00043, 2017
Cited by 95 · 2017
Processing megapixel images with deep attention-sampling models
A Katharopoulos, F Fleuret
International Conference on Machine Learning (ICML), 2019
Cited by 71 · 2019
MLX: Efficient and flexible machine learning on Apple silicon
A Hannun, J Digani, A Katharopoulos, R Collobert
Cited by 12 · 2023
Controllable music production with diffusion models and guidance gradients
M Levy, B Di Giorgi, F Weers, A Katharopoulos, T Nickson
arXiv preprint arXiv:2311.00613, 2023
Cited by 11 · 2023
Masked autoencoding does not help natural language supervision at scale
F Weers, V Shankar, A Katharopoulos, Y Yang, T Gunter
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 11 · 2023
Fast supervised lda for discovering micro-events in large-scale video datasets
A Katharopoulos, D Paschalidou, C Diou, A Delopoulos
Proceedings of the 24th ACM international conference on Multimedia, 332-336, 2016
Cited by 6 · 2016
Specialized Language Models with Cheap Inference from Limited Domain Data
D Grangier, A Katharopoulos, P Ablin, A Hannun
arXiv preprint arXiv:2402.01093, 2024
Cited by 4 · 2024
Stop Wasting my FLOPS: Improving the Efficiency of Deep Learning Models
A Katharopoulos
EPFL, 2022
Cited by 2 · 2022
Learning local feature aggregation functions with backpropagation
A Katharopoulos, D Paschalidou, C Diou, A Delopoulos
2017 25th European Signal Processing Conference (EUSIPCO), 748-752, 2017
Cited by 1 · 2017
No Need to Talk: Asynchronous Mixture of Language Models
A Filippova, A Katharopoulos, D Grangier, R Collobert
arXiv preprint arXiv:2410.03529, 2024
2024
Projected Language Models: A Large Model Pre-Segmented Into Smaller Ones
D Grangier, A Katharopoulos, P Ablin, A Hannun
ICML 2024 Workshop on Foundation Models in the Wild, 2024
2024
Processing Megapixel Images with Deep Attention-Sampling Models, Katharopoulos, Angelos and Fleuret, Francois, Idiap-RR-07-2019
A Katharopoulos
2019
Segmenting the Unknown: Discrete Diffusion Models for Non-Deterministic Segmentation
E Courdier, A Katharopoulos, F Fleuret
L'IDIAP Laboratory
C Atanasoaei, SO Ba, S Bengio, JI Biel Tres, R Boghetti, H Bourlard, ...
Supplementary Material for Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks.
D Paschalidou, A Katharopoulos, A Geiger, S Fidler
Not All Samples Are Created Equal Supplementary material
A Katharopoulos, F Fleuret