Courses of the Coming Semester
The course will provide a historical perspective on deep learning for natural language processing (NLP) and will address recent topics such as Transformers (e.g., BERT and GPT), attention-based models and recent models for dialogue. In addition, we will discuss language acquisition, the cognitive plausibility of AI models, and the extraction of semantic structure from raw text. We will take a look at the current revival of linguistic structure in the deep learning community, either through the analysis of attention patterns in Transformers (according to which linguistic structure is a 'by-product' of neural attention) or through diagnostic classifiers.
We will go through some theory in the first part of every lecture and proceed with a discussion of recent literature in the second part, with an active role for students, who will introduce papers from the collective reading list and work in groups on short practicals.
Students will gain knowledge of the historical and current trends in deep learning-based NLP. They will be able to take a critical look at current literature and will have an advanced understanding of the challenges, opportunities, and pitfalls of deep learning applied to language. Furthermore, they will gain practical knowledge of how to instantiate some of the latest NLP models.
Times: Tuesday, 27.09.2022, 14:00–18:00; Wednesday, 28.09.2022, 09:00–12:00; Wednesday, 28.09.2022, 14:00–16:00; Thursday, 29.09.2022, 09:00–13:00
First session: Tue., 27.09.2022, 14:00–18:00, Location: 93/E06
Course type: Seminar (official courses)
- Cognitive Science > Bachelor's Program
- Cognitive Science > Master's Program
- Cognitive Science > Doctoral Program