
This course provides in-depth knowledge of the concepts, methods, and applications of large language models (LLMs) in artificial intelligence (AI), with natural language processing as the running example. Beyond the theoretical foundations of LLM architectures and training methods, it covers key topics such as prompting, fine-tuning, and evaluation.
A particular focus is placed on modern application paradigms such as retrieval-augmented generation (RAG), the integration of large language models into information systems such as search engines, and the use of LLMs in interactive applications.
The content is supplemented by practical exercises in which students independently implement and evaluate LLM-based methods.
- Lecturer: Thomas Arnold
- Lecturer: Frank Niu