Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution
Milad Alshomary, Narutatsu Ri, Marianna Apidianaki, Ajay Patel, Smaranda Muresan, Kathleen McKeown
arXiv 2024
[paper]
|
The Effect of Model Capacity on the Emergence of In-Context Learning
Berkan Ottlik*, Narutatsu Ri*, Daniel Hsu, Clayton Sanford
ICLR 2024 (ME-FoMo Workshop)
[paper]
|
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
ICML 2024 (Spotlight)
[paper]
|
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
Narutatsu Ri, Fei-Tzin Lee, Nakul Verma
ACL 2023 (RepL4NLP Workshop)
[paper] [code] [slides]
|
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, Dragomir Radev
EMNLP 2023 (Findings)
[paper]
|
TA Experience
- COMS 4774: Unsupervised Learning
- COMS 4771: Machine Learning
  - Summer 2022
  - Fall 2022 (Head TA)
  - Spring 2023 (Head TA)
  - Spring 2024 (Head TA)