mLUKE: The power of entity representations in multilingual pretrained language models R Ri, I Yamada, Y Tsuruoka Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022 | 47 | 2022 |
Pretraining with artificial language: Studying transferable knowledge in language models R Ri, Y Tsuruoka Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022 | 36 | 2022 |
EASE: Entity-Aware Contrastive Learning of Sentence Embedding S Nishikawa, R Ri, I Yamada, Y Tsuruoka, I Echizen Proceedings of the 2022 Conference of the North American Chapter of the …, 2022 | 31 | 2022 |
Designing the business conversation corpus M Rikters, R Ri, T Li, T Nakazawa arXiv preprint arXiv:2008.01940, 2020 | 25 | 2020 |
Document-aligned Japanese-English conversation parallel corpus M Rikters, R Ri, T Li, T Nakazawa arXiv preprint arXiv:2012.06143, 2020 | 11 | 2020 |
Japanese–English conversation parallel corpus for promoting context-aware machine translation research M Rikters, R Ri, T Li, T Nakazawa Journal of Natural Language Processing 28 (2), 380-403, 2021 | 10 | 2021 |
Revisiting the context window for cross-lingual word embeddings R Ri, Y Tsuruoka Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020 | 8 | 2020 |
Zero-pronoun data augmentation for Japanese-to-English translation R Ri, T Nakazawa, Y Tsuruoka arXiv preprint arXiv:2107.00318, 2021 | 6 | 2021 |
Data augmentation with unsupervised machine translation improves the structural similarity of cross-lingual word embeddings S Nishikawa, R Ri, Y Tsuruoka arXiv preprint arXiv:2006.00262, 2020 | 6 | 2020 |
Emergent communication with attention R Ri, R Ueda, J Naradowsky arXiv preprint arXiv:2305.10920, 2023 | 3 | 2023 |
Finding sub-task structure with natural language instruction R Ri, Y Hou, R Marinescu, A Kishimoto Proceedings of the First Workshop on Learning with Natural Language …, 2022 | 2 | 2022 |
Self-preference bias in LLM-as-a-judge K Wataoka, T Takahashi, R Ri arXiv preprint arXiv:2410.21819, 2024 | 1 | 2024 |
Self-translate-train: Enhancing cross-lingual transfer of large language models via inherent capability R Ri, S Kiyono, S Takase arXiv preprint arXiv:2407.00454, 2024 | 1 | 2024 |
LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation I Yamada, R Ri arXiv preprint arXiv:2402.11485, 2024 | 1 | 2024 |
Modeling Target-side Inflection in Placeholder Translation R Ri, T Nakazawa, Y Tsuruoka arXiv preprint arXiv:2107.00334, 2021 | 1 | 2021 |
Large Vocabulary Size Improves Large Language Models S Takase, R Ri, S Kiyono, T Kato arXiv preprint arXiv:2406.16508, 2024 | | 2024 |
The University of Tokyo’s Submissions to the WAT 2020 Shared Task M Rikters, T Nakazawa, R Ri Proceedings of the 7th Workshop on Asian Translation, 147-153, 2020 | | 2020 |