Colloquium 5/03 - Melanie Subbiah '17, Columbia University PhD Candidate

CS Colloquium
Friday, May 03
2:35pm in Wege
 
How did we get here?: The Rise of Large Language Models and the Problem of Evaluation

Large Language Models (LLMs) have permeated almost every field, but given their rapid rise and the breadth of their applications, we are still figuring out how to evaluate how well they actually perform nuanced language tasks such as summarization. In the first half of the talk, we will walk through the evolution of LLMs, focusing on an intuitive understanding of what makes them different, how they work, and why they work so well. In the second half, we will turn to more recent work on evaluating LLMs in collaboration with domain experts in creative writing and psychology to determine what problems remain.

Melanie Subbiah ’17 is a fourth-year Computer Science PhD student at Columbia University, advised by Kathleen McKeown and working in Natural Language Processing. Her research focuses on narrative understanding and summarization using Large Language Models. Prior to starting graduate school, she worked at OpenAI on ChatGPT’s predecessor, GPT-3. She is one of the first authors on the GPT-3 paper, which won a Best Paper award at NeurIPS 2020 and has been cited over 20,000 times. Melanie found her interest in language models by looking for ways to combine her love of reading and writing fiction with her love of computer science (something she first explored in her undergraduate thesis at Williams College!).