Due to university coronavirus measures, all lectures and tutorials will be held ONLINE, LIVE, and INTERACTIVE at the regularly scheduled times until further notice.
You can find the recurring Zoom meetings for the lectures in Canvas; you are highly recommended to join the meetings from there. Note that these Zoom meetings only admit authenticated users with ITSC accounts (with domain connect.ust.hk or ust.hk). Otherwise, you must register for the lectures at the following links. After registering, you will receive a confirmation email containing information about joining the meeting. You can only join the meetings via one of these two paths.
After you are registered, you may use the following links to join the lectures:
If you haven’t done so, please watch this video to get your HKUST Zoom account ready as soon as possible, not just for this course but also for all other courses at HKUST:
Lecture 1: M 12:00-14:50, G009B (CYT Bldg).
Office hours: M 15:00-16:00. The TA's office hours are posted at http://course.cs.ust.hk/comp4901m/ta/.
Course: http://www.cs.ust.hk/~dekai/4901M/ is the master home page for the course.
TA: http://course.cs.ust.hk/comp4901m/ta/ contains all information from the TAs.
Forum: http://comp151.cse.ust.hk/~dekai/content/?q=forum/1 is where all discussion outside class should be done. Always read the forum before asking/posting/emailing your question. Note that you must register for your account at the first lecture, tutorial, or lab.
This course critically surveys the fast-moving, urgent, emerging area of AI ethics. AI is explosively disrupting every sphere of our work and lives. Cambridge Analytica and fake news bots. AI-driven social media displacing traditional journalism. Drone warfare. Elimination of traditional jobs. Privacy-violating advertising. Biased AI decision/recognition algorithms. Deepfakes. Autonomous vehicles. Automated hedge fund trading. No area remains untouched. Policy think tanks, governments, and tech companies around the world have started paying serious attention to AI ethics. How will human civilization survive the rise of AI? What are the new rules? What are the ethical frameworks needed to avoid extinction? What are engineers’ and entrepreneurs’ ethical responsibilities?
At the end of the Artificial Intelligence Ethics course, you will have achieved the following outcomes.
To receive a passing grade, you are required to sign an honor statement acknowledging that you understand and will uphold all policies on plagiarism and collaboration.
All materials submitted for grading must be your own work. You are advised against being involved in any form of copying (either copying other people's work or allowing others to copy yours). If you are found to be involved in an incident of plagiarism, you will receive a failing grade for the course and the incident will be reported for appropriate disciplinary actions.
University policy requires that students who cheat more than once be expelled. Please review the cheating topic from your UST Student Guide.
Warning: sophisticated plagiarism detection systems are in operation!
You are encouraged to collaborate in study groups. However, you must write up solutions on your own. You must also acknowledge your collaborators in the write-up for each problem, whether or not they are classmates. Other cases will be dealt with as plagiarism.
Course grading will be adjusted to the difficulty of assignments and exams. Moreover, I guarantee you the following.
If you achieve | you will receive at least a grade of |
85% | A |
75% | B |
65% | C |
55% | D |
Your grade will be determined by a combination of factors:
Exams | 0% (due to university coronavirus measures) |
Pop quizzes | ~10% |
Class participation | ~15% |
Forum participation | ~10% |
Assignments | ~65% |
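As a rough illustration of how the approximate weights above combine with the guaranteed-grade thresholds, here is a minimal sketch assuming a simple weighted sum; the component scores and the calculation itself are illustrative assumptions, not the official grading script.

```python
# A minimal sketch, assuming the approximate weights and guarantee thresholds
# listed above; illustrative only, not the official grading script.

# Approximate component weights (from the table above).
WEIGHTS = {
    "exams": 0.00,              # waived due to university coronavirus measures
    "pop_quizzes": 0.10,
    "class_participation": 0.15,
    "forum_participation": 0.10,
    "assignments": 0.65,
}

# Guaranteed minimum grades (threshold percentage, grade) from the table above.
GUARANTEES = [(85, "A"), (75, "B"), (65, "C"), (55, "D")]

def overall_percentage(scores):
    """Weighted combination of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)

def guaranteed_grade(percentage):
    """Lowest grade guaranteed for a given overall percentage."""
    for threshold, grade in GUARANTEES:
        if percentage >= threshold:
            return f"at least {grade}"
    return "no guarantee (set by the adjusted curve)"

# Hypothetical example scores, purely for illustration.
example = {
    "pop_quizzes": 70,
    "class_participation": 90,
    "forum_participation": 80,
    "assignments": 88,
}
total = overall_percentage(example)            # 7.0 + 13.5 + 8.0 + 57.2 = 85.7
print(round(total, 1), guaranteed_grade(total))  # 85.7 at least A
```

Keep in mind that the weights are approximate and grading is adjusted to the difficulty of assignments and exams, as stated above; only the minimum-grade guarantees are fixed.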
No reading material is allowed during the examinations. No make-ups will be given unless prior approval is granted by the instructor, or you have a medical condition on the day of the examination documented by a physician. In addition, being absent from the final examination results in automatic failure of the course according to university regulations, unless prior approval is obtained from the department head.
Science and engineering — not only ethics and humanities — is about communication between people. Good participation in class will count for approximately 15%, and good participation in the online forum will count for approximately 10%.
All assignments must be submitted by 23:00 on the due date. Assignments will be collected electronically using the automated CASS assignment collection system. Late assignments cannot be accepted. Sorry, in the interest of fairness, exceptions cannot be made.
Assignments will account for a total of approximately 65%.
Any linked material (unless labeled "Supplementary references") is required reading that you are responsible for.
Each week we cover a different aspect of AI ethics and society, with the class structured in two halves.
The first half begins with a short provocation such as a TED talk, to create a controversial context for discussion and debate. We then review the relevant literature, illuminate major concepts, and critique them. The first half finishes with a questionnaire/quiz that tests your understanding and poses self-reflection challenges.
The second half exercises these concepts. We may do case studies, breakout groups, collaborative mind mapping, and other interactive work to help concretize, explore, and internalize the issues and challenges. Individual contributions to these exercises over the course of the semester are a large part of the assessment for the course.
wk | date | topics | IEEE goals | assignment | notes |
1 | 20200224 | The soft side of software / Course organization | IEEE goal of accountability: what's the responsibility and accountability of an ML designer, an ML professional teacher, an ML end user teacher, and an ML end user operator? IEEE objective of legal frameworks for Stuart Russell, Don Howard, Patrick Lin and George Bekey | | |
2 | 20200302 | The biggest fear in AI Ethics is fear itself / Information disorder and social disruption | | | |
3 | 20200309 | How does AI know good from evil? / Unconscious bias, inductive bias, and algorithmic bias | | | |
4 | 20200316 | Artificial children / AI ethics methodologies | IEEE foundation of methodologies to guide ethical research and design | | |
5 | 20200323 | Can an AI really relate? / Weak AI, strong AI, and superintelligence | | | |
6 | 20200330 | Artificial mindfulness / Conscious AI | IEEE goal of transparency; IEEE objective of transparency and individual rights | | |
7 | 20200406 | Why rule-based AI ethics will fail / Prescriptive/descriptive and deontological/consequentialist/virtue AI ethics | IEEE goal of human rights | | |
8 | 20200413 | The illusion of explainability / Artificial moral cognition | IEEE foundation of embedding values into autonomous systems | | |
9 | 20200420 | Artificial gossips / Privacy, safety, security | IEEE objective of personal data rights and individual access control | | |
10 | 20200427 | Artificial storytellers / AI framing and narratives | | | |
11 | 20200504 | Are machines more creative than humans? / Constructive and creative AI | | | |
12 | 20200511 | Artificial intimacy / Empathetic AI | IEEE future technology concern of affective computing | | |
13 | 20200518 | Extinction, zoo, upload, merge? / AGI safety | IEEE future technology concern of safety and beneficence of artificial general intelligence (AGI) and artificial superintelligence; IEEE future technology concern of mixed reality | | |