Course Outline

Foundations of Ethics in Autonomous Systems

  • Defining autonomy in AI agents
  • Key ethical theories applied to machine behavior
  • Stakeholder perspectives and value-sensitive design

Societal Risks and High-Stakes Use Cases

  • Autonomous agents in public safety, health, and defense
  • Human-AI collaboration and trust boundaries
  • Scenarios of unintended consequences and risk amplification

Legal and Regulatory Landscape

  • Overview of AI legislation and policy trends (EU AI Act, NIST, OECD)
  • Accountability, liability, and legal personhood of AI agents
  • Global governance initiatives and gaps

Explainability and Decision Transparency

  • Challenges of black-box autonomous decision making
  • Designing for explainable and auditable agents
  • Transparency tools and frameworks (e.g., model cards, datasheets)

Alignment, Control, and Moral Responsibility

  • AI alignment strategies for agent behavior
  • Human-in-the-loop vs. human-on-the-loop control paradigms
  • Shared responsibility between designers, users, and institutions

Ethical Risk Assessment and Mitigation

  • Risk mapping and critical failure analysis in agent design
  • Safeguards and off-switch mechanisms
  • Bias, discrimination, and fairness auditing

Governance Design and Institutional Oversight

  • Principles of responsible AI governance
  • Multistakeholder oversight models and audits
  • Designing compliance frameworks for autonomous agents

Summary and Next Steps

Requirements

  • Understanding of AI systems and machine learning fundamentals
  • Familiarity with autonomous agents and their applications
  • Knowledge of ethical and legal frameworks in technology policy

Audience

  • AI ethicists
  • Policy makers and regulators
  • Advanced AI practitioners and researchers
Duration: 14 Hours
