Eduard Hovy is a research professor at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. He also holds adjunct professorships in CMU’s Machine Learning Department and at USC (Los Angeles). Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987 and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). Dr. Hovy’s research focuses on the computational semantics of language and addresses various areas in Natural Language Processing and Data Analytics, including in-depth machine reading of text, information extraction, automated text summarization, question answering, the semi-automated construction of large lexicons and ontologies, and machine translation. In late 2020 his Google h-index was 89, with over 42,000 citations. Dr. Hovy is the author or co-editor of eight books and over 450 technical articles and is a popular invited speaker. From 2003 to 2015 he was co-Director of Research for the Department of Homeland Security’s Center of Excellence for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. In 2001 Dr. Hovy served as President of the ACL, in 2001–03 as President of the International Association for Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society (DGS). Dr. Hovy regularly co-teaches Ph.D.-level courses and has served on Advisory and Review Boards for research institutes and funding organizations in Germany, Italy, the Netherlands, Ireland, Singapore, and the USA.
Title: Discourse processing in the time of DNNs
Eunsol Choi is an assistant professor in the computer science department at the University of Texas at Austin. Her research focuses on natural language processing, in particular on various ways to recover semantics from unstructured text. Recently, her research has focused on question answering and entity analysis. Prior to UT, she was a Ph.D. student at the University of Washington, where she was advised by Luke Zettlemoyer and Yejin Choi. She received an undergraduate degree in mathematics and computer science at Cornell University.
Title: Learning to Understand Language in Context
Abstract: Many applications of natural language processing need to understand text from the rich context in which it occurs and present information in a new context. Interpreting the rich context of a sentence, whether conversation history, social context, or preceding content in the document, is challenging yet crucial to understanding the sentence. In the first part of the talk, we study the context-reduction process by defining the problem of sentence decontextualization: taking a sentence together with its context and rewriting it to be interpretable out of context, while preserving its meaning. Typically a sentence taken out of context is unintelligible, but decontextualization recovers key pieces of information and makes the sentence stand alone. We demonstrate the utility of this process as a preprocessing step for open-domain question answering and for generating an informative and concise answer to an information-seeking query. In the latter half of the talk, we focus on building models that integrate rich context to interpret single utterances more accurately. We study the challenges of interpreting rich context in question answering, first by integrating conversational history and then by integrating entity information. Together, these works show how modeling the interaction between text and the rich context in which it occurs can improve the performance of NLP systems.