All lectures and tutorials will be held ONLINE LIVE INTERACTIVELY at the regularly scheduled times until further notice due to university coronavirus measures.
You can find the recurring Zoom meetings for the lectures in Canvas, and you are highly recommended to join the meetings from there. Note that these Zoom meetings admit only authenticated users with ITSC accounts (with domain connect.ust.hk or ust.hk). Alternatively, you may register for the lectures at the following links; after registering, you will receive a confirmation email containing information about joining the meeting. You can join the meetings only via one of these two paths (Canvas or registration).
After you are registered, you may use the following links to join the lectures:
If you haven’t done so, please watch this video to get your HKUST Zoom account ready as soon as possible, not just for this course but also for all other courses at HKUST:
Lecture 1: WF 13:30-14:50, Rm TBA.
Office hours: W 15:00-16:00. The TA's office hours are posted at http://course.cs.ust.hk/comp4901m/ta/.
Course: http://www.cs.ust.hk/~dekai/4901M/ is the master home page for the course.
TA: http://course.cs.ust.hk/comp4901m/ta/ contains all information from the TAs.
Forum: http://comp151.cse.ust.hk/~dekai/content/?q=forum/1 is where all discussion outside class should be done. Always read before asking/posting/emailing your question. Note that you must register for your account at the first lecture, tutorial, or lab.
This course critically surveys the fast-moving, urgent, emerging area of AI ethics. AI is explosively disrupting every sphere of our work and lives. Cambridge Analytica and fake news bots. AI-driven social media displacing traditional journalism. Drone warfare. Elimination of traditional jobs. Privacy-violating advertising. Biased AI decision/recognition algorithms. Deepfakes. Autonomous vehicles. Automated hedge fund trading. No area remains untouched. Policy think tanks, governments, and tech companies around the world have started paying serious attention to AI ethics. How will human civilization survive the rise of AI? What are the new rules? What are the ethical frameworks needed to avoid extinction? What are engineers’ and entrepreneurs’ ethical responsibilities?
At the end of the Artificial Intelligence Ethics course, you will have achieved the following outcomes.
To receive a passing grade, you are required to sign an honor statement acknowledging that you understand and will uphold all policies on plagiarism and collaboration.
All materials submitted for grading must be your own work. You are advised against being involved in any form of copying (either copying other people's work or allowing others to copy yours). If you are found to be involved in an incident of plagiarism, you will receive a failing grade for the course and the incident will be reported for appropriate disciplinary actions.
University policy requires that students who cheat more than once be expelled. Please review the cheating topic from your UST Student Guide.
Warning: sophisticated plagiarism detection systems are in operation!
You are encouraged to collaborate in study groups. However, you must write up solutions on your own. You must also acknowledge your collaborators in the write-up for each problem, whether or not they are classmates. Any other form of collaboration will be treated as plagiarism.
Course grading will be adjusted to the difficulty of assignments and exams. Moreover, I guarantee you the following.
If you achieve at least | you will receive at least |
85% | A |
75% | B |
65% | C |
55% | D |
Your grade will be determined by a combination of factors:
Exams | 0% (due to university coronavirus measures) |
In-class exercises/quizzes | ~30% |
Class participation | ~20% |
Forum participation | ~10% |
Assignments | ~40% |
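Purely as an illustration (this sketch is not part of the course materials; the weights above are approximate and grading is adjusted to the difficulty of assignments and exams), here is a minimal Python sketch of how the component weights combine into an overall percentage and how the guaranteed minimum-grade thresholds apply:

```python
# Illustrative sketch only: weights are the approximate figures quoted above,
# and thresholds are the guaranteed minimum grades; actual grading may be
# adjusted, so treat this as an example of the arithmetic, not a promise.

# Approximate weight of each assessed component (fractions of the final score).
WEIGHTS = {
    "exams": 0.00,                 # 0% due to university coronavirus measures
    "quizzes": 0.30,               # in-class exercises/quizzes, ~30%
    "class_participation": 0.20,   # ~20%
    "forum_participation": 0.10,   # ~10%
    "assignments": 0.40,           # ~40%
}

# Guaranteed minimum grade for a given overall percentage, highest threshold first.
GUARANTEED_GRADES = [(85, "A"), (75, "B"), (65, "C"), (55, "D")]


def overall_percentage(scores: dict[str, float]) -> float:
    """Weighted combination of component scores, each given on a 0-100 scale."""
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)


def guaranteed_minimum_grade(percentage: float) -> str:
    """Lowest grade the syllabus guarantees for this overall percentage."""
    for threshold, grade in GUARANTEED_GRADES:
        if percentage >= threshold:
            return grade
    return "no guarantee"  # below 55%: depends entirely on grade adjustment


if __name__ == "__main__":
    # Hypothetical component scores for one student.
    example = {
        "quizzes": 80,
        "class_participation": 90,
        "forum_participation": 70,
        "assignments": 85,
    }
    total = overall_percentage(example)
    print(f"overall: {total:.1f}% -> at least {guaranteed_minimum_grade(total)}")
```

For the hypothetical scores shown, the weighted total is 83.0%, which the table above guarantees is at least a B.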
No reading material is allowed during the examinations. No make-ups will be given unless prior approval is granted by the instructor, or you have a documented medical condition (with a physician's note) on the day of the examination. In addition, being absent from the final examination results in automatic failure of the course according to university regulations, unless prior approval is obtained from the department head.
Science and engineering — not only ethics and humanities — is about communication between people. Good participation in class will count for approximately 20%, and good participation in the online forum will count for approximately 10%.
All assignments must be submitted by 23:00 on the due date. Assignments will be collected electronically using the automated CASS assignment collection system. Late assignments cannot be accepted. Sorry, in the interest of fairness, exceptions cannot be made.
Assignments will account for a total of approximately 40%, as shown in the breakdown above.
Any linked material (unless labeled "Supplementary references") is required reading that you are responsible for.
Each week we cover a different aspect of AI ethics and society, with the class structured in two halves.
The first half begins with a short provocation such as a TED talk, to create a controversial context for discussion and debate. We then review the relevant literature, illuminate major concepts, and critique them. The first half finishes with a questionnaire/quiz that tests your understanding and poses self-reflection challenges.
The second half exercises these concepts. We may do case studies, breakout groups, collaborative mind mapping, and other interactive work to help concretize, explore, and internalize the issues and challenges. Individual contributions to these exercises over the course of the semester are a large part of the assessment for the course.
wk | date | topics | IEEE goals | assignment | notes |
1 | 20210203 | The soft side of software: Introduction and course organization | Overview and orientation to topics of fairness, accountability, and transparency in society, AI and machine learning, and the impact of AI and automation upon labor and the job market (IEEE goal of accountability: “what's the responsibility and accountability of an ML designer, an ML professional teacher, an ML end user teacher, and an ML end user operator?”; IEEE objective of legal frameworks) | Gamestop, Timnit Gebru, WhatsApp | |
2 | 20210210 | The biggest fear in AI Ethics is fear itself: Information disorder and social disruption | Misinformation vs disinformation vs malinformation, deepfakes, chatbots, and how they disrupt society | IEEE Ethically Aligned Design [pdf] | |
3 | 20210219 | How does AI know good from evil? Unconscious bias, inductive bias, and algorithmic bias | Relationships between three different foundational kinds of bias, contrasting human evolutionary cognitive psychological biases that are useful but dangerous, vs mathematical biases that are required to do any learning or generalization, vs intended and unintended data-driven biases that have serious social consequences | | |
4 | 20210226 | Artificial children: AI ethics methodologies | Survey of approaches to formulating AI ethics methodologies (IEEE foundation of methodologies to guide ethical research and design) | | |
5 | 20210305 | Can an AI really relate? Weak AI, strong AI, and superintelligence | Contrasts between different senses and levels of “AI” that impact human-machine interaction and society in very different ways | | |
6 | 20210312 | Artificial mindfulness: Conscious AI | Mindful AI and its societal impact (IEEE goal of transparency; IEEE objective of transparency and individual rights) | | |
7 | 20210319 | Why rule-based AI ethics will fail: Prescriptive/descriptive and deontological/consequentialist/virtue AI ethics | Relates classic ethics philosophy to the problem of AI ethics, and discusses why purely rule-based AI ethics will fail (IEEE goal of human rights; IEEE objective of legal frameworks) | | |
8 | 20210326 | The illusion of explainability: Artificial moral cognition | Embedding ethics into AIs themselves, and the illusion of explainability (IEEE foundation of embedding values into autonomous systems) | | |
9 | 20210409 | Artificial gossips: Privacy, safety, security | Surveillance capitalism, identity theft, artificial gossips (IEEE objective of personal data rights and individual access control) | | |
10 | 20210416 | Artificial storytellers: AI framing and narratives | The role of social media, recommendation engines, and search engines in computational propaganda and artificial storytellers | | |
11 | 20210423 | Are machines more creative than humans? Constructive and creative AI | Computational creativity and its impact on society and culture | | |
12 | 20210430 | Artificial intimacy: Empathetic AI | Affective computing and artificial intimacy (IEEE future technology concern of affective computing) | | |
13 | 20210507 | Extinction, zoo, upload, merge? AGI safety | Is the future of humans extinction, to be kept as zoo animals or pets, to upload, or to merge with AIs? (IEEE future technology concern of safety and beneficence of artificial general intelligence (AGI) and artificial superintelligence; IEEE future technology concern of mixed reality) | | |