Learn the fundamentals of large language model (LLM) security by exploring real-world attacks on LLMs, defense strategies and security frameworks.
Intermediate
$960 inc. tax per learner
Prerequisites
The rise of large language models (LLMs) has not only transformed how we build and interact with AI systems but has also introduced new and complex security challenges.
As LLMs become increasingly integrated into real-world applications, understanding how they can be attacked, exploited, and protected is no longer optional — it’s essential.
Led by Vladislav Tushkanov, Group Manager at Kaspersky AI Technology Research Center, this course is built on his expertise in adversarial machine learning and, more recently, LLM security. Through a mix of engaging video lectures, hands-on labs and interactive checkpoints, you'll dive into how LLMs can be exploited using techniques like jailbreaks, prompt injections and more. You'll also gain practical skills in defending against these threats at the model, prompt, system and service levels, using structured frameworks to assess and strengthen LLM security.
Vladislav Tushkanov, Group Manager at Kaspersky AI Technology Research Center
Vladislav has been with Kaspersky since 2015. He and his team apply data science and machine learning techniques to detect threats such as malware, phishing and spam faster and more effectively, and research cutting-edge AI technologies to predict threats that are yet to come. Vladislav holds a Master of Arts in Computational Linguistics from the National Research University Higher School of Economics.
Today his research focuses on practical applications of various AI technologies, such as language models, to threat detection, as well as adversarial machine learning – finding vulnerabilities in AI-enabled applications.
– Course intro
– LLM security intro
– Jailbreaks: manual attacks and automated jailbreaks
– Alignment removal
– Prompt injections and prompt extraction
– Token smuggling and sponge attacks
– Why and how to protect LLMs
– Model-level defenses: alignment and unlearning
– Prompt-level, system-level and service-level defenses
– LLM Security Toolbox
– Approaches to LLM Security Analysis
– LLM Security: cases
– LLM Security: further study and recap
Beginner AI pentesters
For those starting their career in offensive AI cybersecurity.
Security consultants
Ideal for consultants looking to understand and mitigate risks in LLM-based systems.
Developers
For engineers building or integrating LLMs who want to grasp potential vulnerabilities and secure their AI applications.
AI/LLM architects
Designed for those shaping AI infrastructure, offering insight into emerging threats and defense strategies for large language models.
Prompt engineers
Perfect for specialists working closely with LLMs who need to recognize attack surfaces and improve the robustness of prompt-based systems.
Guided video lectures
Learn from Vladislav Tushkanov, Group Manager at Kaspersky AI Technology Research Center, who shares insights from real-world LLM security research and adversarial AI work.
Practical virtual laboratory
Apply your knowledge in a virtual lab environment designed for hands-on experience with LLM attack and defense techniques.
Iterative learning
Follow a structured, modular approach that combines theory, checkpoint quizzes and practical assignments.
6 months to complete your course
Course delivered in English
Self-guided learning that fits around your life
100 hours in a browser-based virtual lab with hands-on training
Browser-based via desktop, mobile or tablet
20 videos to guide you through the course
A PDF document on Kaspersky letterhead certifying completion of the course, signed by the course leader(s)