Courses in the coming semester
Robustness and generalisation in natural language processing
In natural language processing (NLP), we set out to solve language-related tasks (e.g., machine translation, question answering) but often evaluate on narrow, in-distribution test datasets. With recent advances in deep learning, modern systems have achieved high accuracy on many canonical datasets, but still seem far from solving general tasks. In this class, we will survey recent research on robustness and generalisation that studies this gap between in-distribution accuracy and task competency through out-of-distribution settings. We will learn about different settings in which NLP systems often fail to generalise well, including adversarial perturbations, settings that require compositional reasoning, and domain transfer. Across these topics, we will cover methods both for measuring these robustness and generalisation issues and ways that we can improve model robustness and generalisation.
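As a toy illustration of one topic above, a minimal sketch of a character-level adversarial perturbation: swapping two adjacent characters inside a word, a common way to probe whether a model's prediction is robust to typos. This is an illustrative example only (the function name and setup are hypothetical, not from any course material), and real robustness evaluations would apply such perturbations to a model's actual test inputs.

```python
import random

def typo_perturb(sentence: str, rng: random.Random) -> str:
    """Swap two adjacent inner characters of one randomly chosen word,
    simulating a simple typo-style adversarial perturbation."""
    words = sentence.split()
    # Only perturb words long enough to have two swappable inner characters.
    candidates = [i for i, w in enumerate(words) if len(w) >= 4]
    if not candidates:
        return sentence
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(1, len(w) - 2)  # keep the first and last characters fixed
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

rng = random.Random(0)
original = "the movie was absolutely wonderful"
perturbed = typo_perturb(original, rng)
print(perturbed)  # same characters as the input, with one adjacent pair swapped
```

A robustness study would then compare model accuracy on the original versus perturbed inputs; a large accuracy drop signals brittleness to surface-level noise.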
Familiarity with natural language processing and/or machine learning at the level of
8.3470 Deep learning for natural language processing (students enrolled in 8.3470 this semester are also eligible). Please email me if you would like to enrol but are unsure whether you meet the prerequisites.
Times: Mon. 16:00 - 18:00 (weekly, 13 sessions), Location: 93/E31
Tue. 14:00 - 16:00 (weekly, 14 sessions), Location: 66/E34
First session: Mon., 24.10.2022, 16:00 - 18:00, Location: 93/E31
Course type: Seminar (official course offerings)
- Cognitive Science > Bachelor's programme
- Cognitive Science > Master's programme
- Cognitive Science > Doctoral programme
- Data Science