We offer a number of courses each semester that revolve around machine learning and security. These include lectures on learning algorithms in security systems and adversarial machine learning, as well as labs where students can experiment with attacks and malicious code. Teaching is fun for us, and we have even won awards for our lectures and practical courses.
AML — Adversarial Machine Learning
This integrated lecture explores various attacks on learning algorithms, including white-box and black-box adversarial examples, poisoning, backdoors, membership inference, and model extraction. It also examines the security and privacy implications of these attacks and discusses defensive strategies, ranging from threat modeling to integrated countermeasures.
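To give a flavor of the white-box attacks covered in the lecture, the following is a minimal, self-contained sketch of the fast gradient sign method (FGSM) on a toy linear classifier. The model, weights, and perturbation budget are illustrative assumptions, not material from the course itself.

```python
import numpy as np

# Toy linear classifier: score = w . x; a positive score means class "benign".
# Weights and input are made up purely for illustration.
w = np.array([0.6, -0.4, 0.8])
x = np.array([1.0, 1.0, 1.0])       # clean input, classified as benign

# White-box adversarial example via FGSM: since the attacker knows w, the
# gradient of the score w.r.t. x is simply w, and stepping against its sign
# decreases the score as fast as possible under an L-infinity budget eps.
eps = 1.2
grad = w
x_adv = x - eps * np.sign(grad)

print(np.dot(w, x) > 0)      # clean input: score is positive (benign)
print(np.dot(w, x_adv) > 0)  # adversarial input: score flips sign
```

In practice the same idea is applied to deep networks, where the gradient comes from backpropagation rather than a closed form, and the perturbation is kept small enough to be imperceptible.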
This lab is a hands-on course that explores machine learning in computer security. Students design and develop intelligent systems for security problems such as attack detection, malware clustering, and vulnerability discovery. The developed systems are trained and evaluated on real-world data, providing insight into their strengths and weaknesses in practice. The lab is a continuation of the lecture "Machine Learning for Computer Security" and thus knowledge from that course is expected.
MONSOON — LLM-based Network Scanning
This project explores the use of large language models (LLMs) for automated vulnerability detection. Participants will extend Google’s Tsunami network scanner using LLM-generated plugins. The course involves experimenting with LLMs to translate vulnerability reports into scanner logic, evaluating the quality and reliability of generated plugins, and designing improvements. The overall goal is to assess the feasibility of LLM-driven network scanning and to gain hands-on experience with AI in security.
RAID — Reproducing AI Attacks and Defense
This project puts recent AI research to the test. Participants will re-implement current attack and defense techniques that utilize machine learning, evaluate their capabilities, and design improvements. Possible techniques include attacks and defenses for large language models and computer vision systems. The overall goal is to learn about the state of the art in AI security and reproduce results where possible.
CARE — Code Analysis and Reverse Engineering
This block seminar is concerned with the analysis and reverse engineering of code. We will cover different techniques for program analysis of source code and binary code. In addition, we will look at concepts for understanding unknown software, reverse engineering its functionality, and discovering security vulnerabilities. The seminar is intended for Master students.
SEPA — Security and Privacy of AI
This block seminar focuses on security and privacy in artificial intelligence and machine learning. We will examine recent attacks on learning algorithms and discuss their impact on practical security and privacy. We will also look at possible defenses and countermeasures to protect learning algorithms and the underlying data. The seminar is intended for Bachelor students.
Below is a list of all the courses we have offered in recent years. Note that some courses are not offered regularly, while others are planned and not yet available. Please consult the respective pages on the ISIS platform of TU Berlin.
Are you looking for an exciting topic for your Bachelor or Master thesis? We offer research-oriented thesis topics at the intersection of machine learning and computer security. The full list of topics is available exclusively through the STROD portal of TU Berlin.
As we have only a limited number of thesis slots, we require successful participation in relevant courses to ensure a good match. Please read the topic descriptions and requirements carefully. If you have any questions, feel free to contact the supervisors listed for each topic.