CS980 Fall 2022
CS980 - Advanced Topics: Fair, Accountable and Transparent Machine Learning
Class information
| Start Date | End Date | Days | Time | Location |
|---|---|---|---|---|
| 8/29/2022 | 12/12/2022 | Tue/Thu | 5:10pm - 6:30pm | KING N233 |
Prerequisite: CS750/CS850: Machine Learning
Description: Machine learning is increasingly all around us, determining who gets loans, parole, educational and career opportunities, medical care, and so on. With this increasing ubiquity, and with the increasing power and complexity of modern machine learning, have come concerns about the fairness, accountability, and transparency of these models and the systems that rely on them. ML fairness seeks to detect and mitigate situations where models learn to unethically (and often illegally) rely on protected attributes like race, gender and sexual orientation when making high-stakes decisions like those in justice and finance. ML accountability seeks to deal properly with model mistakes. Who is at fault when a model screws up? How do we “fire” a model? How do we ensure that a given mistake doesn’t happen again? Finally, ML transparency seeks to expose the internal logic of these complicated, nonlinear models in order to help humans use them in more effective, ethical and accountable ways. These three concerns are heavily intertwined, and have given rise to the Fair, Accountable and Transparent (FAccT) movement in machine learning. This movement has become a very popular sub-area of AI, bringing together researchers, businesses and policymakers to think through the implications of an AI-reliant society.
This course will be a seminar consisting of roughly 50% reading and 50% coding assignments, including a final project exploring some aspect of FAccT ML. The coding parts of the course will be taught in Python, and will require CS750/CS850: Machine Learning (or equivalent) as a prerequisite.
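To give a flavor of the coding side, here is a minimal, illustrative sketch of one of the simplest fairness checks in this space: comparing a model's positive-prediction rates across two groups (demographic parity). Everything here (`y_pred`, `group`, the function name) is a placeholder of my own, not course-provided code.

```python
# A minimal sketch of a demographic-parity check: compare the rate of
# positive predictions a model gives to two groups.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between group 1 and group 0."""
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    return float(rate_1 - rate_0)

# Toy example: a model that "approves" 3 of 4 people in one group
# but only 1 of 4 in the other.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A difference of 0 means both groups receive positive predictions at the same rate; several of the fairness readings below examine when (and whether) that is the right criterion.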
Syllabus
Updated: 8/24/2022
Important announcements
There are a couple of key things that students in this course need to know.
Syllabus subject to change
As this is a new course, the syllabus is subject to change. I’ll make previous versions available via version control, and send out an email if/when I make changes, but it’s on you to keep up.
Mandatory reporter
I am a mandatory reporter. That means that if you tell me about a situation where you are being or have been sexually harassed, assaulted, stalked, or otherwise endangered, I am legally obligated to file a report with the UNH DEI office. That said, please do let me know if any of these things are happening or have happened to you. I am more than happy to work with you on figuring out what resources the university can provide to help deal with the issue.
Overview/structure
This course is a seminar on fair, accountable and transparent (FAT, or FAccT) machine learning. This movement in ML has its own annual conference (ACM FAccT), but its work also shows up frequently at machine learning and HCI conferences, as well as in domain-specific communities.
This being a seminar, I won’t be lecturing much. Every other class session (and approximately half the course grade) will consist of student-led discussions of papers. For each of these sessions, one student will prepare a lightweight presentation on the paper(s) and facilitate the discussion; everyone else will write a short response after the class discussion, due the day of the next class session.
The other half of class sessions will be skill workshops, where students will learn relevant coding tools and toolkits and work on the project-based assignments that make up the other half of the course grade. I’ll also use these sessions for overflow discussion as needed.
Grading distribution
The grading distribution is as follows:
45% - Project milestones
45% - Reading presentations and responses
10% - Attendance and discussion
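For concreteness, here is a toy sketch of how those weights combine into a final grade (the component scores are made up):

```python
# Toy example of the course grading formula; component scores are hypothetical.
weights = {"project": 0.45, "reading": 0.45, "attendance": 0.10}
scores = {"project": 88.0, "reading": 92.0, "attendance": 100.0}  # out of 100

final = sum(weights[k] * scores[k] for k in weights)
print(final)  # 0.45*88 + 0.45*92 + 0.10*100 = 91.0
```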
Late policy
Mini-projects/project milestones will be penalized 10% per day late, up to a maximum penalty of 50%, provided you turn the work in before the end of the semester.
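In code terms, one reading of that rule looks like this (the function and numbers are illustrative, not an official grade calculator):

```python
# Illustrative sketch of the milestone late policy: 10% off per day late,
# capped at a 50% deduction.
def late_penalty(score: float, days_late: int) -> float:
    deduction = min(0.10 * days_late, 0.50)  # 10% per day, capped at 50%
    return score * (1.0 - deduction)

print(late_penalty(90.0, 3))  # three days late: 90 * 0.7 = 63.0
print(late_penalty(90.0, 8))  # cap applies: 90 * 0.5 = 45.0
```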
Presentations that aren’t given on time will receive a 0. Late paper responses will likewise receive a 0, but students may drop their lowest two weeks’ worth of response grades.
Attendance policy
In-person attendance is important for making a small, discussion-based seminar a good experience for everyone involved. So I do ask for in-person attendance at every class session, and part of your final grade (10%) will be based on attendance and participation in discussion.
I am happy to accommodate life events. If you get COVID or have some other pressing reason to attend remotely, I can move the class to a hybrid format. I’m also happy to grant excused absences for medical or family problems, or various other kinds of emergencies. Let me know as early as possible if you need either of these kinds of accommodation.
Academic honesty policy
Your work is expected to be your own. I will follow the UNH academic honesty policy, which lays out what is considered cheating and what the process is for dealing with cases of reported academic dishonesty.
Schedule
| Week | Block | Day | Date | Description | Reading | Assignment Due |
|---|---|---|---|---|---|---|
| 1 | Intro | T | 8/30 | Overview of ML and intro to FATML | | |
| | | Th | 9/1 | | | |
| 2 | | T | 9/6 | Overview of transparency | A Survey of Methods for Explaining Black Box Models | |
| | | Th | 9/8 | | | Reading response |
| 3 | | T | 9/13 | Overview of fairness | A Review on Fairness in Machine Learning | |
| | | Th | 9/15 | | | Reading response, intro mini-project proposal |
| 4 | Transparency | T | 9/20 | Motivating transparency | The Mythos of Model Interpretability; Toward a Rigorous Science of Interpretable Machine Learning | |
| | | Th | 9/22 | | | Reading response |
| 5 | | T | 9/27 | Perturbation methods | “Why Should I Trust You?”: Explaining the Predictions of Any Classifier; A Unified Approach to Interpreting Model Predictions | |
| | | Th | 9/29 | | | Reading response |
| 6 | | T | 10/4 | Saliency methods | Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps; Axiomatic Attribution for Deep Networks | |
| | | Th | 10/6 | | | Reading response, intro mini-project |
| 7 | | T | 10/11 | Attention methods | Attention is Not Explanation; Rationalizing Neural Predictions | |
| | | Th | 10/13 | | | Reading response |
| 8 | | T | 10/18 | Example-based methods | Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning; Understanding Black-box Predictions via Influence Functions | |
| | | Th | 10/20 | | | Reading response |
| 9 | Fairness | T | 10/25 | Bias in NLP | Risk of Racial Bias in Hate Speech Detection; Gender Bias in Contextualized Word Embeddings | |
| | | Th | 10/27 | | | Reading response |
| 10 | | T | 11/1 | Addressing bias | Certifying and Removing Disparate Impact; Equality of Opportunity in Supervised Learning | |
| | | Th | 11/3 | | | Reading response, mini-project 2 or final project milestone |
| 11 | | T | 11/8 | Election day. No class. | | |
| | | Th | 11/10 | Addressing bias II | Learning Fair Representations; Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment | |
| 12 | | T | 11/15 | | | Reading response |
| | Accountability | Th | 11/17 | Defining accountability | Machine Learning Techniques for Accountability; Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems | |
| 13 | | T | 11/22 | | | Reading response |
| | | Th | 11/24 | Thanksgiving day. No class. | | |
| 14 | | T | 11/29 | Human-model collaboration | Human Decisions and Machine Predictions; Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance | |
| | | Th | 12/1 | | | Reading response |
| 15 | | T | 12/6 | Explanations for human-model collaboration | On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection; Attention-Based Explanations Don’t Help Humans Detect Misclassifications of Online Toxicity | |
| | | Th | 12/8 | | | Reading response, mini-project 3 or final project |