Amit Daniely
Verified email at mail.huji.ac.il
Title
Cited by
Year
Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity
A Daniely, R Frostig, Y Singer
Advances in Neural Information Processing Systems 29, 2016
Cited by 392 (2016)
SGD learns the conjugate kernel class of the network
A Daniely
Advances in Neural Information Processing Systems 30, 2017
Cited by 197 (2017)
Strongly adaptive online learning
A Daniely, A Gonen, S Shalev-Shwartz
International Conference on Machine Learning, 1405-1411, 2015
Cited by 193 (2015)
Complexity theoretic limitations on learning halfspaces
A Daniely
Proceedings of the forty-eighth annual ACM symposium on Theory of Computing …, 2016
Cited by 153 (2016)
Complexity theoretic limitations on learning DNF's
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 815-830, 2016
Cited by 120 (2016)
From average case complexity to improper learning complexity
A Daniely, N Linial, S Shalev-Shwartz
Proceedings of the forty-sixth annual ACM symposium on Theory of computing …, 2014
Cited by 118 (2014)
Depth separation for neural networks
A Daniely
Conference on Learning Theory, 690-696, 2017
Cited by 100 (2017)
Optimal learners for multiclass problems
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 287-316, 2014
Cited by 100 (2014)
Learning parities with neural networks
A Daniely, E Malach
Advances in Neural Information Processing Systems 33, 20356-20365, 2020
Cited by 95 (2020)
Multiclass learnability and the ERM principle
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
J. Mach. Learn. Res. 16 (1), 2377-2404, 2015
Cited by 95 (2015)
Multiclass learnability and the ERM principle
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
Proceedings of the 24th Annual Conference on Learning Theory, 207-232, 2011
Cited by 90 (2011)
The implicit bias of depth: How incremental learning drives generalization
D Gissin, S Shalev-Shwartz, A Daniely
arXiv preprint arXiv:1909.12051, 2019
Cited by 79 (2019)
Multiclass learning approaches: A theoretical comparison with implications
A Daniely, S Sabato, S Shalev-Shwartz
Advances in Neural Information Processing Systems 25, 2012
Cited by 59 (2012)
A PTAS for agnostically learning halfspaces
A Daniely
Conference on Learning Theory, 484-502, 2015
Cited by 55 (2015)
Learning economic parameters from revealed preferences
MF Balcan, A Daniely, R Mehta, R Urner, VV Vazirani
Web and Internet Economics: 10th International Conference, WINE 2014 …, 2014
Cited by 53 (2014)
Clustering is difficult only when it does not matter
A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4891, 2012
Cited by 49 (2012)
On the practically interesting instances of MAXCUT
Y Bilu, A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4893, 2012
Cited by 47 (2012)
More data speeds up training time in learning halfspaces over sparse vectors
A Daniely, N Linial, S Shalev-Shwartz
Advances in Neural Information Processing Systems 26, 2013
Cited by 43 (2013)
From local pseudorandom generators to hardness of learning
A Daniely, G Vardi
Conference on Learning Theory, 1358-1394, 2021
Cited by 35 (2021)
Most ReLU Networks Suffer from Adversarial Perturbations
A Daniely, H Shacham
Advances in Neural Information Processing Systems 33, 6629-6636, 2020
Cited by 34 (2020)
Articles 1–20