AIcademy

Ethical AI: Responsible Design and Application
Introduction
In this interactive knowledge session, you will delve into the ethical, social and legal issues associated with the use of AI. You will learn how to design and apply AI systems responsibly, with attention to bias, explainability, transparency, inclusion and human control.
Through topical examples and discussions, you will gain insight into the ethical risks of AI, from discrimination to over-automation. You will be introduced to leading ethical frameworks such as the EU AI Act, UNESCO AI Ethics Framework and ELSA (Ethical, Legal and Societal Aspects), and learn how to translate these into concrete strategies for designing, implementing and monitoring AI solutions.
This session combines theory with practical guidelines and is applicable to both developers and policy makers.
Learning objectives
1. Gain insight into ethical risks:
Bias and exclusion in datasets or algorithms
Unintentional discrimination or stereotyping
Overconfidence in AI systems (automation bias)
Loss of human control and responsibility
2. Get to know ethical frameworks and guidelines:
EU AI Act – risk-based approach to AI systems
UNESCO Recommendation on the Ethics of AI – global ethical standards
ELSA – focus on Ethical, Legal and Societal Aspects
Fundamental principles such as:
Transparency
Proportionality
Explainability
Fairness
3. Develop practical strategies for:
Ethical design of AI systems (ethics by design)
Involving stakeholders in development and application
Integrating explainability into AI decision systems
Testing and documenting risks and decision-making
Implementing guidelines within governance and compliance
Approach and working methods
The session consists of short knowledge blocks interspersed with interactive exercises, case studies and group discussions. You reflect on practical examples or AI projects from your own organization and learn which ethical considerations come into play.
The training can be tailored to your sector or to specific issues within your organization, for example:
Use of AI in public services
Commercial applications with customer data
AI in recruitment and selection, decision-making or surveillance
For whom
This session is intended for policy makers, AI developers, ethics advisors, data specialists, researchers and administrators who want to use AI in a way that is responsible, transparent and socially acceptable. Suitable for both public and private organizations.
Interested in this session?
Feel free to contact us. We are happy to work with you to find a format that suits your team, sector or current issues within your organization.

Description:
Explore the ethical aspects of AI applications. Learn how to responsibly handle bias, transparency, explainability and human control, and how to use AI in a way that is fair, inclusive and trustworthy.
Learning objectives:
Insight into ethical risks such as bias, exclusion, over-automation and overtrust.
Learning to work with ethical frameworks (such as the EU AI Act, UNESCO AI Ethics, ELSA).
Practical strategies for ethical design, testing, and implementation of AI systems.
For whom: Policy makers, AI developers, ethics advisors and organizations that want to use AI responsibly.
