X

Enroll your team

I agree to provide KASPERSKY LAB SWITZERLAND GmbH, Bahnhofstrasse 69, 8001 Zürich, Switzerland with the following information about me (First Name, Last Name, email) order to allow Kaspersky Lab Switzerland GmbH to contact me to participate in surveys and to send me information via email about Kaspersky Lab's products and services including personalized promotional offers and premium assets like white papers, webcasts, videos, events and other marketing materials. I confirm that I have been provided with this Privacy Policy for Web Sites. I understand that my consent is optional and I can withdraw this consent at any time via e-mail by clicking the “unsubscribe” link that I find at the bottom of any e-mail sent to me for the purposes mentioned above”. Web privacy policy https://xtraining.kaspersky.com/privacy/

X

Register

I agree to provide KASPERSKY LAB SWITZERLAND GmbH, Bahnhofstrasse 69, 8001 Zürich, Switzerland with the following information about me (First Name, Last Name, email) order to allow Kaspersky Lab Switzerland GmbH to contact me to participate in surveys and to send me information via email about Kaspersky Lab's products and services including personalized promotional offers and premium assets like white papers, webcasts, videos, events and other marketing materials. I confirm that I have been provided with this Privacy Policy for Web Sites. I understand that my consent is optional and I can withdraw this consent at any time via e-mail by clicking the “unsubscribe” link that I find at the bottom of any e-mail sent to me for the purposes mentioned above”. Web privacy policy https://xtraining.kaspersky.com/privacy/

X

Request Access

I agree to provide KASPERSKY LAB SWITZERLAND GmbH, Bahnhofstrasse 69, 8001 Zürich, Switzerland with the following information about me (First Name, Last Name, email) order to allow Kaspersky Lab Switzerland GmbH to contact me to participate in surveys and to send me information via email about Kaspersky Lab's products and services including personalized promotional offers and premium assets like white papers, webcasts, videos, events and other marketing materials. I confirm that I have been provided with this Privacy Policy for Web Sites. I understand that my consent is optional and I can withdraw this consent at any time via e-mail by clicking the “unsubscribe” link that I find at the bottom of any e-mail sent to me for the purposes mentioned above”. Web privacy policy https://xtraining.kaspersky.com/privacy/

X

Pre-register

I agree to provide KASPERSKY LAB SWITZERLAND GmbH, Bahnhofstrasse 69, 8001 Zürich, Switzerland with the following information about me (First Name, Last Name, email) order to allow Kaspersky Lab Switzerland GmbH to contact me to participate in surveys and to send me information via email about Kaspersky Lab's products and services including personalized promotional offers and premium assets like white papers, webcasts, videos, events and other marketing materials. I confirm that I have been provided with this Privacy Policy for Web Sites. I understand that my consent is optional and I can withdraw this consent at any time via e-mail by clicking the “unsubscribe” link that I find at the bottom of any e-mail sent to me for the purposes mentioned above”. Web privacy policy https://xtraining.kaspersky.com/privacy/

X

New course

Large language models security

Enroll here

 

Large language models security

Built for Tier 2 Analysts

Intermediate

$960 inc. tax per learner

Intermediate

$960 inc. tax per learner

Enroll my team
Request demo access

Background

The rise of large language models (LLMs) has not only transformed how we build and interact with AI systems but has also introduced new and complex security challenges.
As LLMs become increasingly integrated into real-world applications, understanding how they can be attacked, exploited, and protected is no longer optional — it’s essential.

Led by Vladislav Tushkanov,  Group Manager at Kaspersky AI Technology Research Center, this course is built on his expertise in adversarial machine learning and, recently, large
language model (LLM) security. Through a mix of engaging video lectures, hands-on labs and interactive checkpoints, you’ll dive into how LLMs can be exploited using techniques like jailbreaks, prompt injections and more. You’ll also gain practical skills in defending against these threats at the model, prompt, system and service levels, using structured frameworks to assess and strengthen LLM security.

Course leaders

Vladislav Tushkanov

Group Manager at Kaspersky AI Technology Research Center

Vladislav has been with Kaspersky since 2015. He and his team work at applying data science and machine learning techniques to detect threats — such as malware, phishing and spam — faster and better, as, well as researching cutting-edge AI technologies to predict threats that are yet to come. Vladislav holds a Master of Arts in Computational Linguistics from National Research University Higher School of Economics.

Today his research focuses on practical applications of various AI technologies, such as language models, to threat detection, as well as adversarial machine learning – finding vulnerabilities in AI-enabled applications.

Overview & Objectives

  • Gain a solid foundation in the emerging field of LLM security.
  • Understand key attack methods such as jailbreaks, prompt injections and token smuggling.
  • Learn practical defense techniques across model, prompt, system and service levels.
  • Apply structured frameworks to analyze and assess LLM security.
  • Develop the skills to evaluate, secure and design robust LLM-based systems using real-world cases and hands-on assignments.

Syllabus

Whos it for?

Beginner AI pentesters

For those starting their career in offensive AI cybersecurity.

Security consultants

Ideal for consultants looking to understand and mitigate risks in LLM-based systems.

Developers

For engineers building or integrating LLMs who want to grasp potential vulnerabilities and secure their AI applications.

AI/LLM architects

Designed for those shaping AI infrastructure, offering insight into emerging threats and defense strategies for large language models.

Prompt engineers

Perfect for specialists working closely with LLMs who need to recognize attack surfaces and improve the robustness of prompt-based systems.

How you'll learn

Guided video lectures

Learn from Vladislav Tushkanov, Group Manager at Kaspersky AI Technology Research Center, who shares insights from real-world LLM security research and adversarial AI work.

Practical virtual laboratory

Apply your knowledge in a virtual lab environment designed for hands-on experience with LLM attacks and defenses techniques.

Iterative learning 

Follow a structured, modular approach which combines theory, checkpoint quizzes and practical assignments.

Benefits

Access

6 months to complete your course

Language

Course delivered in English

Pace

Self guided learning that fits around your life

Access to Virtual lab

100 hours in a browser based Virtual lab with hands on training

Learning environment

Browser based via desktop, mobile or tablet

Guided videos

20 videos to guide you through the course

Certification of completion

PDF document on a Kaspersky letterhead certifying the completion of the course, signed by the course leader(s)

Course author

Group Manager at Kaspersky AI Technology Research Center