COMP1944 Artificial Intelligence Ethics, Spring 2024, HKUST

Dekai Wu

Course organization



Until further notice, due to university coronavirus measures, all lectures will be held in the classroom, as well as online live interactively, at the regularly scheduled times. The recurring Zoom meetings for the lectures can be found in Canvas. Note that these Zoom meetings admit only authenticated users with ITSC accounts.

Times and places

Lecture 1: WF 16:30-17:50, Rm 4582 (Lift 27-28).

Office hours: W 18:00-19:00. The TA's office hours are posted at


Course: the master home page for the course.

TA: all information from the TAs.

Forum: where all discussion outside class should take place. Always read the forum before asking/posting/emailing your question. Note that you must register for your forum account at the first lecture, tutorial, or lab, following the Discussion Forum User Guide.


Course description

This course critically surveys the fast-moving, urgent, emerging area of AI ethics. AI is explosively disrupting every sphere of our work and lives. Cambridge Analytica and fake news bots. AI-driven social media displacing traditional journalism. Drone warfare. Elimination of traditional jobs. Privacy-violating advertising. Biased AI decision/recognition algorithms. Deepfakes. Autonomous vehicles. Automated hedge fund trading. No area remains untouched. Policy think tanks, governments, and tech companies around the world have started paying serious attention to AI ethics. How will human civilization survive the rise of AI? What are the new rules? What are the ethical frameworks needed to avoid extinction? What are engineers’ and entrepreneurs’ ethical responsibilities?

Course objectives

This course aims to bring together young aspiring engineers and scientists as well as business/management and HSS students into the rapidly emerging, urgent area of AI ethics. AI is disrupting every sphere of our work and lives, bringing unprecedented risks to society. How will human civilization survive the rise of AI? What are the new rules? What are the ethical frameworks needed to avoid extinction? What are engineers’ and entrepreneurs’ ethical responsibilities? The objective of this course is to equip the next generation with the conceptual tools and AI background to recognize and analyze the key risks of newly arising AI technology to society, articulate the social impact within various ethics frameworks, and design policies and/or systems that explicitly address the ethical issues.

Alignment with Common Core program goals
  1. Broadening horizons: to allow students to gain intellectual breadth and an appreciation of intellectual achievements across and beyond the main academic disciplines of their studies. Typically, engineers and scientists get far too little exposure to thinking deeply about the (unintended as well as intended) social consequences of what they build, while business and HSS students get far too little exposure to deep, real understanding of the new technology and where it is going. This course will necessarily push students from both ends toward the sometimes-uncomfortable territory at the other end.
  2. A passion for learning: to spark students’ passion for learning and enhance their higher order intellectual abilities: analysis and evaluation; judgment and critical thinking; defining and solving problems. The disrupted societies we live in are familiar to everyone, as are many of the social problems we experience. At the same time, the radical new AI-powered technologies on our phones, internet, factories, and businesses are ubiquitous. The pilot run of the course showed students to be highly engaged in pushing the bounds of their thinking on how these problems interact and intersect in projecting the near future, sparking new levels of cross-disciplinary interest.
  3. A lifelong pursuit of excellence: to provide a foundation for students’ life-long development through personal growth, preparation for future careers and opportunities to make contributions to the community. AI is not only one of today’s most exciting fields, but also cuts deeply to questions of what it means to be human, as machines suddenly challenge us in so many areas where humans have always been indisputably dominant until now. This course prepares students to be ahead of the curve as these concerns grow in society and in business, and excites students to seek paths to develop healthy new AI-based societies within their lifetimes.

Course intended learning outcomes

At the end of the Artificial Intelligence Ethics course, you will have achieved the following outcomes. Codes indicate alignment with Common Core Areas and the Area ILOs.

  1. Compare different frameworks for AI ethics, including IEEE Ethically Aligned Design, and be familiar with the many types of misuse of AI technology — 20%
  2. Evaluate the limits of deontological rule-based AI ethics; analyze intended and unintended consequences in line with consequentialist AI ethics, social well-being metrics, and AI for social good; and analyze the societal role of virtue AI ethics — 10%
  3. Propose designs for embedding values into autonomous systems and artificial moral cognition, particularly with regard to the ethics of emotional AI, empathetic AI, and affective computing — 10%
  4. Analyze fairness, responsibility, and accountability in AI and machine learning, and evaluate what degree of transparency and explainability is possible — 10%
  5. Recognize weaponization of information and exploitation of unconscious biases, and propose training dataset design policies to avoid algorithmic bias and discriminatory outcomes — 20%
  6. Analyze the social consequences of alternative approaches of personal data rights and individual access control, and recognize the risks of surveillance capitalism — 10%
  7. Explain the risks of autonomous weapons, analyze the tradeoffs, and contrast policy proposals — 10%
  8. Analyze AI safety options in the coming eras of strong AI and artificial superintelligence — 10%



Honor policy

To receive a passing grade, you are required to sign an honor statement acknowledging that you understand and will uphold all policies on plagiarism and collaboration.


All materials submitted for grading must be your own work. You are advised against being involved in any form of copying (either copying other people's work or allowing others to copy yours). If you are found to be involved in an incident of plagiarism, you will receive a failing grade for the course and the incident will be reported for appropriate disciplinary actions.

University policy requires that students who cheat more than once be expelled. Please review the cheating topic from your UST Student Guide.

Warning: sophisticated plagiarism detection systems are in operation!


You are encouraged to collaborate in study groups. However, you must write up solutions on your own. You must also acknowledge your collaborators in the write-up for each problem, whether or not they are classmates. Unacknowledged collaboration will be dealt with as plagiarism.


Course grading will be adjusted to the difficulty of assignments and exams. Moreover, I guarantee you the following.

Grade guarantees
If you achieve 85%, you will receive at least an A grade.
75% → at least a B
65% → at least a C
55% → at least a D

Your grade will be determined by a combination of factors:

Grade weighting
Class participation ~20%
In-class exercises/quizzes ~15%
Assignments ~20%
Midterm exam ~20%
Final presentation ~25%
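As a rough illustration of how the two tables above fit together, the sketch below combines the approximate component weights into an overall score and looks up the guaranteed minimum grade. The weights and thresholds come from the tables, but note they are marked "~" (approximate) and grading is adjusted to difficulty, so the component scores here are made up and the result is only illustrative:

```python
# Approximate weights from the "Grade weighting" table (subject to adjustment).
WEIGHTS = {
    "participation": 0.20,
    "quizzes": 0.15,
    "assignments": 0.20,
    "midterm": 0.20,
    "final_presentation": 0.25,
}

# Thresholds from the "Grade guarantees" table: minimum guaranteed grade.
GUARANTEES = [(85, "A"), (75, "B"), (65, "C"), (55, "D")]

def weighted_score(scores: dict) -> float:
    """Combine component scores (each on a 0-100 scale) by their weights."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def guaranteed_grade(score: float) -> str:
    """Return the minimum grade guaranteed for a given overall score."""
    for threshold, grade in GUARANTEES:
        if score >= threshold:
            return grade
    return "F"  # no guarantee below 55% (the actual grade may still be higher)

# Hypothetical component scores for one student:
scores = {"participation": 90, "quizzes": 80, "assignments": 85,
          "midterm": 70, "final_presentation": 88}
overall = weighted_score(scores)  # 18 + 12 + 17 + 14 + 22 = 83.0
```

With these made-up scores the overall is 83.0, which clears the 75% threshold but not the 85% one, so the guaranteed minimum grade is a B.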


No reading material is allowed during the examinations. No make-ups will be given unless prior approval is granted by the instructor, or you have a medical condition on the day of the examination with a physician's documentation. In addition, absence from the final examination results in automatic failure of the course according to university regulations, unless prior approval is obtained from the department head.


Science and engineering, not only ethics and humanities, are about communication between people. Good participation in class and in the online forum will count for approximately 25%.


All assignments must be submitted by 23:00 on the due date. Assignments will be collected electronically using the automated CASS assignment collection system. Late assignments cannot be accepted. Sorry, in the interest of fairness, exceptions cannot be made.

Assignments will account for a total of approximately 65%.

Required readings

Any linked material (unless labeled "Supplementary references") is required reading that you are responsible for.


Each week we cover a different aspect of AI ethics and society, with the class structured in two halves.

The first half begins with a short provocation such as a TED talk, to create a controversial context for discussion and debate. We then review the relevant literature, illuminate major concepts, and critique them. The first half finishes with a questionnaire/quiz that tests your understanding and poses self-reflection challenges.

The second half exercises these concepts. We may do case studies, breakout groups, collaborative mind mapping, and other interactive work to help concretize, explore, and internalize the issues and challenges. Individual contributions to these exercises over the course of the semester are a large part of the assessment for the course.


wk date topics description, IEEE goals, course ILOs assignment reading, notes
1 20240131 Trolley problems everywhere
Overview and orientation to topics of fairness, accountability, and transparency in society, AI and machine learning, the impact of AI and automation upon labor and the job market (IEEE foundation of methodologies to guide ethical research and design) CILO-1, 5, 8 (in-class) Moral Machine EAD p9-35, "From Principles to Practice", "General Principles" ● Moral Machine ● "The Trolley Problem", The Good Place, s02e05 ● The Good Place and Philosophy: Everything is Forking Fine! ● "The Forking Trolley: An Ethical Journey to The Good Place"
1 20240202 Orientation
Course organization
Embedding ethics into AIs themselves (IEEE foundation of embedding values into autonomous systems) CILO-1, 4, 7 EAD p169-197, "Embedding Values into Autonomous and Intelligent Systems"
2 20240207 Artificial children
Artificial moral cognition
Embedding ethics into AIs themselves (IEEE foundation of embedding values into autonomous systems) CILO-1, 4, 7 EAD p169-197, "Embedding Values into Autonomous and Intelligent Systems" ● "Surprise! You already have kids and they're AIs" De Kai @ TEDxXi'an
2 20240209 A world of fear
AI and social disruption
Deepfakes, chatbots, and drones: how AI democratizes weapons of mass destruction and disrupts civilization with information disorder and lethal autonomous weapons CILO-1, 5, 6 EAD p68-89, "Well-being" ● "Slaughterbots" ● “Panicdemic: The biggest fear in AI Ethics is fear itself” ● Social disruption
3 20240214 Artificial gossips
Privacy, safety, security
Inclusion and respect: surveillance capitalism, identity theft, artificial gossips (IEEE objective of personal data rights and individual access control) CILO-1, 6 EAD p110-123, "Personal Data and Individual Agency" ● "Artificial Gossips" De Kai @ TEDxKlagenfurt
4 20240221 Is our AI neurotypical?
Weak AI, strong AI, and superintelligence
Contrasts between different senses and levels of “AI” that impact human-machine interaction and society in very different ways CILO-1, 8 "Can an AI Really Relate? What's Universal in Language and Music" [alternate] De Kai @ TEDxBeijing
4 20240223 The three Rs
Regurgitation, routine, remixing
How today's AI falls short of human intelligence CILO-1, 8
5 20240228 Toward mindfulness
Of two minds about AI
Consciousness, sentience, self-awareness, mindfulness, metacognition CILO-1, 5
6 20240306 Human, mathematical, and artificial biases
Unconscious vs inductive vs algorithmic bias
Relationships between three different foundational kinds of bias, contrasting human evolutionary cognitive psychological biases that are useful but dangerous, vs mathematical biases that are required to do any learning or generalization, vs intended and unintended data-driven biases that have serious social consequences CILO-1, 5 "How you can help transform the internet into a place of trust" Claire Wardle @ TED 2019 [transcript]
7 20240313 Artificial storytellers
The grand cycle of intelligence
Learning to talk, learning to think; role of social media, recommendation engines, and search engines in computational propaganda and artificial storytellers CILO-1, 3, 5 "The Grand Cycle of Intelligence"
7 20240315 Neginformation
Algorithmic negligence
Sound, informed judgment: misinformation, disinformation, malinformation, and neginformation; collective intelligence CILO-? "Conspirituality: How Wellness Became a Gateway for Misinformation" @ CBS Reports ● "Why is YouTube Broadcasting Climate Misinformation to Millions?" @ Avaaz [pdf]
8 20240320 Algorithmic censorship
Information disorder
Open minded diversity of opinion: Catering to the id: key challenges for social media, recommendation engines, and search engines CILO-1, 3, 5 Universal Masking ● "The disastrous consequences of information disorder: AI is preying upon our unconscious cognitive biases" De Kai @ Boma COVID-19 Summit (Session 2)
9 20240327 Nurturing mindfulness
Conscious AI
Consciousness, sentience, self-awareness, mindfulness, metacognition CILO-1, 5 "Why AI is impossible without mindfulness" De Kai @ TEDxOakland
10 20240410 Midterm (in class, closed book)
10 20240412 Artificial intimacy
Empathetic AI
Affective computing and artificial intimacy (IEEE future technology concern of affective computing) CILO-1, 3 EAD p90-109, "Affective Computing"
11 20240417 The illusion of explainability
Explainable AI
Explainable AI and its challenges; mindful AI and its societal impact (IEEE goal of transparency; IEEE objective of transparency and individual rights) CILO-1, 3, 4 "Why Meaningful AI is Musical" [alternate] De Kai @ TEDxZhujiangNewTown
11 20240419 Are machines more creative than humans?
Creative AI
Computational creativity and its impact on society and culture CILO-1, 8 "Do You Speak Pentatonic? The Multilinguality of Music" De Kai @ TEDxWanChai
12 20240424 Schooling our artificial children
AI ethics methodologies
The roles and responsibilities of AI/ML scientists, tech companies, modelers, think tanks, regulators and governments; approaches to formulating AI ethics methodologies (IEEE goal of accountability “what's the responsibility and accountability of an ML designer, an ML professional teacher, an ML end user teacher, and an ML end user operator?”; IEEE objective of legal frameworks) CILO-1 EAD p124-139, "Methods to Guide Ethical Research and Design"
12 20240426 How to be descriptive, predictive, and prescriptive
AI ethics paradigms
Descriptive versus prescriptive and predictive ethics; relates classic philosophy of normative/comparative ethics and deontological/consequentialist/virtue ethics to the problem of AI ethics, and discusses why purely rule-based AI ethics will fail (IEEE goal of human rights; IEEE objective of legal frameworks) CILO-1, 2 EAD p36-67, "Classical Ethics in A/IS" ● "The Paradox of AI Ethics: Why Rule-based AI Ethics Will Fail" De Kai @ TEDxChiangMai
13 20240503 Extinction, zoo, upload, merge?
AGI safety
Is the future of humans extinction, to be zoo or pets, to upload, or to merge with AIs? (IEEE future technology concern of safety and beneficence of artificial general intelligence (AGI) and artificial superintelligence; IEEE future technology concern of mixed reality) CILO-1, 7, 8 EAD p198-281, "Policy", "Law"

Introduction and orientation

In-class exercise

The Trolley Problem

We're going to try tackling the trolley problem ourselves. We'll all look together at 13 scenarios randomly generated by MIT's Moral Machine site, and you'll enter your choices into the Google Form we give you in class.