AI Research at OpenAI: Language Models and Unsupervised Learning
Are you interested in working in ML or AI after Williams? In this talk, I will cover my path from Williams to AI research in the tech industry, a variety of projects OpenAI has worked on (robotics, multi-agent environments, music generation, video gameplay, etc.), and a deep dive into what I currently work on: language models. Language modeling is the task of learning to predict the next word in a sequence of words. This task is challenging for AI systems because humans draw on a great deal of world knowledge and reasoning to produce coherent language. Prior to the last couple of years, little progress had been made on AI systems that could convincingly generate long passages of text across a variety of domains. Modern deep learning architectures, such as Transformers, have changed the landscape of the field and brought realistic language generators within reach. I will discuss the state of language models and natural language generation research with a focus on the OpenAI system GPT-2.
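To make the next-word prediction task concrete, here is a minimal sketch (an illustrative toy, not the approach GPT-2 uses): a bigram model that predicts the next word as the one most frequently seen after the current word in a tiny corpus.

```python
from collections import Counter, defaultdict

# Toy corpus for illustration; real language models train on far larger text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word (bigram counts).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Systems like GPT-2 replace these simple counts with a neural network (a Transformer) that conditions on the entire preceding context rather than just one word, which is what enables coherent long-form generation.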
Melanie Subbiah is a member of the language team at OpenAI, an independent AI research lab focused on advancing AI policy as well as technical research. She contributes to research on natural language generation and understanding. Prior to joining OpenAI, Melanie was an AI researcher at Apple, working across many areas, including computer vision and reinforcement learning. Melanie graduated from Williams in 2017 and completed her senior thesis with Professor Andrea Danyluk.