(she)
Email: isabelvp at stanford
Hello!
I'm an incoming assistant professor at UBC Linguistics (September 2025) and a Kempner Fellow at Harvard starting September 2024. I am currently finishing up my PhD at Stanford in the Natural Language Processing group, advised by Dan Jurafsky.
I am looking to recruit students this cycle (applying 2024, starting Sept 2025). If you are applying for a PhD in NLP or computational linguistics, apply to UBC and mention that you'd like to work with me!
I work on understanding and defining the capabilities of large language models in relation to the human language system.
I am especially interested in pursuing an interdisciplinary research program, combining empirical machine learning methods with theories of human language. My principal interests include: how language models learn and use generalizable grammatical abstractions, the interaction between structure and meaning representations in high-dimensional vector spaces, and using multilingual settings to test the limits of abstraction in language models.
I am funded by an NSF GRFP fellowship and a Stanford Graduate Fellowship.
I did my undergraduate degree at Berkeley, where I got BAs in Computer Science and in History. My history thesis was based on research in the archives of the League for Democracy in Greece, a London-based solidarity organisation supporting the left in the Greek Civil War. It received the Kirk Underhill Prize.
Georgia Tech Modern Languages Colloquium, Nov 2023 [slides]
I was selected for the 2023 Rising Stars in EECS workshop
Stanford CS 224N, "Insights between NLP and Linguistics" [slides]
Brown Computer Science, Carnegie Mellon LTI, July 2023
Decoding Communication in Nonhuman Species Workshop, June 2023 [slides] [recording]
NYU CAP Lab, Apr 2023
Cornell University C.Psyd Group, Dec 2022
SIGTYP 2022 keynote, July 2022 [slides] [recording]
UT Austin Computational Linguistics Group, April 2022
UC Santa Barbara Computational Linguistics Group, October 2020
Mission: Impossible Language Models - Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, and Christopher Potts (preprint)
Injecting structural hints: Using language models to study inductive biases in language learning - Isabel Papadimitriou and Dan Jurafsky, Findings of EMNLP 2023
Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models - Isabel Papadimitriou*, Kezia Lopez*, and Dan Jurafsky, Findings of EACL 2023, SIGTYP 2023 [slides]
Oolong: Investigating What Makes Crosslingual Transfer Hard with Controlled Studies - Zhengxuan Wu*, Isabel Papadimitriou*, and Alex Tamkin*, EMNLP 2023 [pdf]
The Greek possessive modal eho as a special agentive modality - Isabel Papadimitriou and Cleo Condoravdi, LSA 2023 (poster) [abstract]
When classifying grammatical role, BERT doesn't care about word order... except when it matters - Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald, ACL 2022 (oral presentation) [pdf] [code]
Language, Section 2.1 - Isabel Papadimitriou and Christopher D. Manning
In On the Opportunities and Risks of Foundation Models (full list of co-authors)
Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT - Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, and Kyle Mahowald, EACL 2021 (oral presentation) [pdf] [code]
Learning Music Helps You Read: Using transfer to study linguistic structure in language models - Isabel Papadimitriou and Dan Jurafsky, EMNLP 2020 (oral presentation) [pdf] [code]
I was the TA for CS324H, History of Natural Language Processing, taught by Dan Jurafsky and Chris Manning in Winter 2024
I was a TA for CS224N, Natural Language Processing with Deep Learning, taught by Chris Manning in Winter 2023
I was the TA for the Independent Study in Machine Translation seminar taught by Noah Goodman in Winter 2020
At Berkeley, I TAed CS70, Discrete Mathematics and Probability Theory, in Fall 2015, taught by Satish Rao and Jean Walrand