Linqing Liu
Databricks Applied AI
Verified email at ucl.ac.uk
Title · Cited by · Year
Distilling task-specific knowledge from BERT into simple neural networks
R Tang*, Y Lu*, L Liu*, L Mou, O Vechtomova, J Lin
arXiv preprint arXiv:1903.12136, 2019
465 · 2019
Generative Adversarial Network for Abstractive Text Summarization
L Liu, Y Lu, M Yang, Q Qu, J Zhu, H Li
AAAI Conference on Artificial Intelligence, 2018
227 · 2018
PAQ: 65 million probably-asked questions and what you can do with them
P Lewis, Y Wu, L Liu, P Minervini, H Küttler, A Piktus, P Stenetorp, ...
Transactions of the Association for Computational Linguistics 9, 1098-1115, 2021
201 · 2021
What the DAAM: Interpreting Stable Diffusion Using Cross Attention
R Tang*, L Liu*, A Pandey, Z Jiang, G Yang, K Kumar, J Lin, F Ture
arXiv preprint arXiv:2210.04885, 2022
123 · 2022
Bridging the Gap Between Relevance Matching and Semantic Matching for Short Text Similarity Modeling
J Rao, L Liu, Y Tay, W Yang, P Shi, J Lin
EMNLP, 2019
85 · 2019
When do flat minima optimizers work?
J Kaddour, L Liu, R Silva, MJ Kusner
Advances in Neural Information Processing Systems 35, 16577-16595, 2022
74 · 2022
NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned
S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ...
NeurIPS 2020 Competition and Demonstration Track, 86-111, 2021
74 · 2021
Controllable abstractive dialogue summarization with sketch supervision
CS Wu*, L Liu*, W Liu, P Stenetorp, C Xiong
arXiv preprint arXiv:2105.14064, 2021
52 · 2021
Challenges in generalization in open domain question answering
L Liu, P Lewis, S Riedel, P Stenetorp
arXiv preprint arXiv:2109.01156, 2021
47 · 2021
Detecting" Smart" Spammers On Social Network: A Topic Model Approach
L Liu, Y Lu, Y Luo, R Zhang, L Itti, J Lu
Proceedings of the NAACL Student Research Workshop, 2016
47 · 2016
MKD: a multi-task knowledge distillation approach for pretrained language models
L Liu, H Wang, J Lin, R Socher, C Xiong
arXiv preprint arXiv:1911.03588, 2019
46* · 2019
Multi-task knowledge distillation for language model
L Liu, C Xiong
US Patent 11,620,515, 2023
16 · 2023
Incorporating Contextual and Syntactic Structures Improves Semantic Similarity Modeling
L Liu, W Yang, J Rao, R Tang, J Lin
EMNLP, 2019
14 · 2019
Query expansion using contextual clue sampling with language models
L Liu, M Li, J Lin, S Riedel, P Stenetorp
arXiv preprint arXiv:2210.07093, 2022
12 · 2022
Coarse-to-fine abstractive dialogue summarization with controllable granularity
CS Wu, W Liu, C Xiong, L Liu
US Patent App. 17/159,625, 2022
10 · 2022
Scalable content-based analysis of images in web archives with TensorFlow and the Archives Unleashed Toolkit
HW Yang, L Liu, I Milligan, N Ruest, J Lin
2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL), 436-437, 2019
6 · 2019
Systems and methods for multi-scale pre-training with densely connected transformer
L Liu, C Xiong
US Patent 11,941,356, 2024
1 · 2024