Narutatsu Ri

My name is Edward, and I'm a first-year MS Express student at Columbia University, where I'm very fortunate to be advised by Prof. Kathleen McKeown and Prof. Daniel Hsu.

I am broadly interested in making NLP models interpretable and controllable. I'm grateful to be supported by the Egleston Scholars Program and the Ezoe Memorial Recruit Foundation.

Some Publications
Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution
Milad Alshomary, Narutatsu Ri, Marianna Apidianaki, Ajay Patel, Smaranda Muresan, Kathleen McKeown
arXiv 2024
[paper]
The Effect of Model Capacity on the Emergence of In-Context Learning
Berkan Ottlik*, Narutatsu Ri*, Daniel Hsu, Clayton Sanford
ICLR 2024 (ME-FoMo Workshop)
[paper]
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
ICML 2024 (Spotlight)
[paper]
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
Narutatsu Ri, Fei-Tzin Lee, Nakul Verma
ACL 2023 (RepL4NLP Workshop)
[paper] [code] [slides]
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, Dragomir Radev
EMNLP 2023 (Findings)
[paper]
TA Experience

  • COMS 4774: Unsupervised Learning
    • Fall 2024
  • COMS 4771: Machine Learning
    • Summer 2022
    • Fall 2022 (Head TA)
    • Spring 2023 (Head TA)
    • Spring 2024 (Head TA)

Miscellaneous

Website design taken from Jon Barron.