Narutatsu Ri
Hi, my name is Edward, and I'm a rising senior at Columbia University. I'm very fortunate to be advised by Prof. Nakul Verma and Prof. Daniel Hsu.
I'm mainly interested in NLP, specifically the principled understanding of self-supervised learning paradigms (text embeddings), self-attention models (transformers), and the geometric properties of representation learning (the representation degeneration problem). I'm grateful to be supported by the Egleston Scholars Program and the Ezoe Memorial Recruit Foundation.
Email / CV / LinkedIn / Twitter / GitHub
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
arXiv 2023
[paper]
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
Narutatsu Ri, Fei-Tzin Lee, Nakul Verma
ACL 2023 RepL4NLP
[paper] [code] [slides]
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, Dragomir Radev
arXiv 2023
[paper]
TA Experience
COMS 4771: Machine Learning — Summer 2022, Fall 2022 (Head TA), Spring 2023 (Head TA)