Responsible AI & Bias in AI

Audience: Business users, educators, and technical professionals seeking awareness or a foundational understanding of ethical AI.

About: This 3-hour course provides a comprehensive understanding of Responsible AI and the critical role ethics plays in the development and use of artificial intelligence. Participants will explore foundational concepts, key principles from leading organizations such as Microsoft and Google, and the legal frameworks that shape AI practices. Through engaging discussions and real-world examples, the course examines the types of bias that can arise in AI systems, from data and algorithms to societal influences, and their practical implications.

Module 1: Foundations of Responsible AI

  • What is Responsible AI?
  • Overview of Microsoft’s and Google’s AI principles
  • Why ethics in AI matters: real-world examples
  • Legal and regulatory context (e.g., AODA, GDPR, OHRC)

Module 2: Understanding Bias in AI

  • Types of bias: data, algorithmic, societal
  • Case studies: gender and racial bias in language models
  • How bias manifests in AI systems
  • Interactive activity: Spot the bias
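One way bias manifests in practice, as covered in this module, is through under-representation in training data: a model tuned to the majority group performs noticeably worse on a minority group. The toy sketch below (entirely synthetic data and hypothetical names, not taken from any of the course's case studies) makes this concrete with a naive majority-label "model":

```python
from collections import Counter

def majority_label(labels):
    """Return the most frequent label in the training set."""
    return Counter(labels).most_common(1)[0][0]

def group_accuracy(data, group, prediction):
    """Accuracy of a constant prediction for one group's samples."""
    truths = [y for g, y in data if g == group]
    return sum(y == prediction for y in truths) / len(truths)

# Synthetic training data: group A dominates (90 samples, mostly label 1);
# group B is under-represented (10 samples, mostly label 0).
train = [("A", 1)] * 80 + [("A", 0)] * 10 + [("B", 0)] * 8 + [("B", 1)] * 2

# A naive model that predicts the overall majority label for everyone.
prediction = majority_label([y for _, y in train])

acc_a = group_accuracy(train, "A", prediction)  # high: matches the majority
acc_b = group_accuracy(train, "B", prediction)  # low: minority pattern ignored
```

Here overall accuracy looks acceptable, yet group B's accuracy is far lower than group A's, which is exactly the kind of disparity a "spot the bias" exercise is designed to surface.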

Module 3: Mitigating Bias

  • Data collection and curation strategies
  • Fairness-aware modelling techniques
  • Tools for bias detection (e.g., What-If Tool, TFDV)
  • Inclusive design and accessibility standards
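Bias-detection tools such as those listed above typically report fairness metrics over model outputs. As a minimal sketch (all data and function names below are our own illustrative assumptions, not the API of the What-If Tool or TFDV), here is one common metric, the demographic parity difference, computed by hand:

```python
def selection_rate(outcomes):
    """Fraction of positive (1) decisions within one group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups.
    0.0 indicates parity; larger values flag potential bias."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Synthetic model decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 2/8 = 0.25

gap = demographic_parity_difference(group_a, group_b)  # 0.375
```

A gap of this size would prompt the mitigation steps this module covers: revisiting data collection, applying fairness-aware modelling, or auditing the decision threshold per group.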

Module 4: Responsible AI in Practice

  • AI governance frameworks
  • Transparency and explainability
  • Accountability and auditability
  • Microsoft’s Responsible AI lifecycle