My name is Narutatsu, but please feel free to call me Edward. I am an incoming PhD student at Princeton University in the Computer Science Department and Princeton Language + Intelligence (PLI), where I will be fortunate to be advised by Professor Sanjeev Arora. I am gratefully supported by the Gordon Wu Fellowship and the Ezoe Memorial Recruit Foundation.
Previously, I completed my studies at Columbia University, where I was part of the Egleston Scholars Program. I am indebted to Professors Kathleen McKeown, Nakul Verma, and Daniel Hsu, by whom I had the great fortune of being advised.
Understanding Analogy Parallelism in Vector Space Models
Narutatsu Ri, Nakul Verma
(Coming Soon!)
Reranking-based Generation for Unbiased Perspective Summarization
Narutatsu Ri, Nicholas Deas, Kathleen McKeown
(Coming Soon!)
Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions
Yik Siu Chan*, Narutatsu Ri*, Yuxin Xiao*, Marzyeh Ghassemi
arXiv 2025
Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution
Milad Alshomary, Narutatsu Ri, Marianna Apidianaki, Ajay Patel, Smaranda Muresan, Kathleen McKeown
COLING 2025
The Effect of Model Capacity on the Emergence of In-Context Learning
Berkan Ottlik*, Narutatsu Ri*, Daniel Hsu, Clayton Sanford
ICLR 2024 (ME-FoMo Workshop)
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
ICML 2024 (Spotlight)
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
Narutatsu Ri, Fei-Tzin Lee, Nakul Verma
ACL 2023 (RepL4NLP Workshop)
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, Dragomir Radev
EMNLP 2023 (Findings)