AIcademy

AI Security: Deploy AI safely and responsibly
Introduction
In this knowledge session, you will delve into the key security risks when using AI systems. You will learn how language and image models can be vulnerable to manipulation, how attackers abuse these systems, and what measures you can take to protect them. Consider techniques such as prompt injection, adversarial examples, model inversion, and data poisoning. You will also gain insight into the responsible and legally correct use of AI according to guidelines such as the AI Act and the AVG.
This session offers a combination of current real-world examples, technical explanations and organizational recommendations to use AI safely and responsibly.
What you will learn during this session
1. Recognize vulnerabilities in AI systems
You will gain insight into how AI models can be abused or manipulated.
- Prompt injection and jailbreak prompts at language models
- Output leaking and unwanted extraction of sensitive data from models
- Model inversion: retrieving training data from a model
- Data poisoning: influencing model behavior by incorrect input data
- Misuse of open models and APIs by unauthorized users
2. Adversarial attacks at language and image models.
You will learn about different forms of adversarial attacks and how they affect models.
- Evasion attacks: manipulating inputs so that a model makes incorrect predictions (e.g., images that are identical to the eye but misclassified)
- Poisoning attacks: influencing the learning process through erroneous or manipulated training data
- Inference attacks: infer sensitive information from the output of the model
- Backdoor attacks: intentionally built-in vulnerabilities that can be activated by malicious actors
3. Security measures and responsible use of AI
You get practical guidance on how to make AI systems safe(r).
- Input sanitization and validation to limit abuse via prompts or input
- Output assessment, filtering and scoring mechanisms on model answers
- Logging and monitoring of AI interactions for detection of misuse or anomalies
- Access control based on context, role or risk analysis
- Continual testing with adversarial test cases and red-teaming
4. Legal frameworks and compliance
In addition to the technical side, you will also learn what legal and ethical frameworks are relevant.
- Overview of the AI Act: risk categories, transparency requirements and obligations
- AVG in relation to AI: data minimization, explainability and data subjects' rights
- Application of Responsible AI principles in design and use
- Governance and accountability within your organization
Approach and forms of work
This knowledge session is built around practical examples, actual cases and interaction. You will gain insight into both technical and policy aspects of AI security. We combine explanations with concrete recommendations and offer space to discuss risks and solutions in the context of your organization.
For whom
This session is intended for AI developers, security professionals, compliance officers and policy makers who are involved in the deployment of AI and want to manage the associated risks. A basic understanding of AI and security is recommended.
Interested in this session?
Please feel free to contact us. We would be happy to work with you on a session that meets the needs of your team or organization.
AI Security: Deploy AI safely and responsibly
Introduction
In this knowledge session, you will delve into the key security risks when using AI systems. You will learn how language and image models can be vulnerable to manipulation, how attackers abuse these systems, and what measures you can take to protect them. Consider techniques such as prompt injection, adversarial examples, model inversion, and data poisoning. You will also gain insight into the responsible and legally correct use of AI according to guidelines such as the AI Act and the AVG.
This session offers a combination of current real-world examples, technical explanations and organizational recommendations to use AI safely and responsibly.
What you will learn during this session
1. Recognize vulnerabilities in AI systems
You will gain insight into how AI models can be abused or manipulated.
- Prompt injection and jailbreak prompts at language models
- Output leaking and unwanted extraction of sensitive data from models
- Model inversion: retrieving training data from a model
- Data poisoning: influencing model behavior by incorrect input data
- Misuse of open models and APIs by unauthorized users
2. Adversarial attacks at language and image models.
You will learn about different forms of adversarial attacks and how they affect models.
- Evasion attacks: manipulating inputs so that a model makes incorrect predictions (e.g., images that are identical to the eye but misclassified)
- Poisoning attacks: influencing the learning process through erroneous or manipulated training data
- Inference attacks: infer sensitive information from the output of the model
- Backdoor attacks: intentionally built-in vulnerabilities that can be activated by malicious actors
3. Security measures and responsible use of AI
You get practical guidance on how to make AI systems safe(r).
- Input sanitization and validation to limit abuse via prompts or input
- Output assessment, filtering and scoring mechanisms on model answers
- Logging and monitoring of AI interactions for detection of misuse or anomalies
- Access control based on context, role or risk analysis
- Continual testing with adversarial test cases and red-teaming
4. Legal frameworks and compliance
In addition to the technical side, you will also learn what legal and ethical frameworks are relevant.
- Overview of the AI Act: risk categories, transparency requirements and obligations
- AVG in relation to AI: data minimization, explainability and data subjects' rights
- Application of Responsible AI principles in design and use
- Governance and accountability within your organization
Approach and forms of work
This knowledge session is built around practical examples, actual cases and interaction. You will gain insight into both technical and policy aspects of AI security. We combine explanations with concrete recommendations and offer space to discuss risks and solutions in the context of your organization.
For whom
This session is intended for AI developers, security professionals, compliance officers and policy makers who are involved in the deployment of AI and want to manage the associated risks. A basic understanding of AI and security is recommended.
Interested in this session?
Please feel free to contact us. We would be happy to work with you on a session that meets the needs of your team or organization.

Description:
Learn how to protect AI systems from risks such as data breaches, prompt injection and adversarial attacks. Gain insight into best practices for safe deployment of AI and how to meet the requirements of the AI Act, among others.
Learning objectives:
Recognizing vulnerabilities such as prompt injection and model misuse.
Understanding adversarial attacks on image and language models.
Applying security measures such as input filters, output assessment and logging.
Understanding legal frameworks: AI Act, GDPR and responsible use of generative AI.
For whom: Developers, security professionals, IT professionals and policy makers who want to learn about the vulnerabilities of AI systems.
AI Security: Deploy AI safely and responsibly
Introduction
In this knowledge session, you will delve into the key security risks when using AI systems. You will learn how language and image models can be vulnerable to manipulation, how attackers abuse these systems, and what measures you can take to protect them. Consider techniques such as prompt injection, adversarial examples, model inversion, and data poisoning. You will also gain insight into the responsible and legally correct use of AI according to guidelines such as the AI Act and the AVG.
This session offers a combination of current real-world examples, technical explanations and organizational recommendations to use AI safely and responsibly.
What you will learn during this session
1. Recognize vulnerabilities in AI systems
You will gain insight into how AI models can be abused or manipulated.
- Prompt injection and jailbreak prompts at language models
- Output leaking and unwanted extraction of sensitive data from models
- Model inversion: retrieving training data from a model
- Data poisoning: influencing model behavior by incorrect input data
- Misuse of open models and APIs by unauthorized users
2. Adversarial attacks at language and image models.
You will learn about different forms of adversarial attacks and how they affect models.
- Evasion attacks: manipulating inputs so that a model makes incorrect predictions (e.g., images that are identical to the eye but misclassified)
- Poisoning attacks: influencing the learning process through erroneous or manipulated training data
- Inference attacks: infer sensitive information from the output of the model
- Backdoor attacks: intentionally built-in vulnerabilities that can be activated by malicious actors
3. Security measures and responsible use of AI
You get practical guidance on how to make AI systems safe(r).
- Input sanitization and validation to limit abuse via prompts or input
- Output assessment, filtering and scoring mechanisms on model answers
- Logging and monitoring of AI interactions for detection of misuse or anomalies
- Access control based on context, role or risk analysis
- Continual testing with adversarial test cases and red-teaming
4. Legal frameworks and compliance
In addition to the technical side, you will also learn what legal and ethical frameworks are relevant.
- Overview of the AI Act: risk categories, transparency requirements and obligations
- AVG in relation to AI: data minimization, explainability and data subjects' rights
- Application of Responsible AI principles in design and use
- Governance and accountability within your organization
Approach and forms of work
This knowledge session is built around practical examples, actual cases and interaction. You will gain insight into both technical and policy aspects of AI security. We combine explanations with concrete recommendations and offer space to discuss risks and solutions in the context of your organization.
For whom
This session is intended for AI developers, security professionals, compliance officers and policy makers who are involved in the deployment of AI and want to manage the associated risks. A basic understanding of AI and security is recommended.
Interested in this session?
Please feel free to contact us. We would be happy to work with you on a session that meets the needs of your team or organization.

How It All Started
This is a space to share more about the business: who's behind it, what it does and what this site has to offer. It's an opportunity to tell the story behind the business or describe a special service or product it offers. You can use this section to share the company history or highlight a particular feature that sets it apart from competitors.
Let the writing speak for itself. Keep a consistent tone and voice throughout the website to stay true to the brand image and give visitors a taste of the company's values and personality.