An introductory philosophy class this term has been giving students the chance to engage in high-level conversations with industry and academic experts about a thorny, and timely, issue—the ethics of IT.
“Ethics and Information Technology,” with Susan Brison, the Eunice and Julian Cohen Professor for the Study of Ethics and Human Values, explores an array of topics in tech ethics. The course and a related workshop Brison co-organized are part of her ongoing effort to ensure that as new technology is being created, ethics and social justice are central to the process.
A vast and emerging field, tech ethics is “kind of like where bioethics was 30 years ago,” says Brison, whose areas of expertise include free speech and social media. “My own view is, it’s really pressing.”
A Broad-Ranging Conversation
In the largely discussion-based class, students apply ethical, political, and legal theories to current issues such as free speech, cyber-harassment, and digital surveillance.
They also delve into the problem of algorithmic injustice, which Brison defines as a range of harms involved in the use of artificial intelligence, including machine learning, to automate decisions across various social spheres.
“The use of computer systems employing algorithms to, for example, perform online searches, predict crimes, identify the ‘best’ job applicants, and allocate medical care may initially seem to be a way of avoiding human bias,” she says.
“However, in practice, algorithmic decision-making has often proven to be far from neutral and objective and can, in fact, amplify biases and reinforce stereotypes based on race, gender, and other social categories.”
On the syllabus were two public lectures co-sponsored by the Leslie Center for the Humanities, the Nelson A. Rockefeller Center for Public Policy, and the Wright Center for the Study of Computation and Just Communities in conjunction with the class.
UCLA Professor Safiya Noble, author of the bestselling book Algorithms of Oppression: How Search Engines Reinforce Racism, and Anita Allen, professor of law and philosophy at the University of Pennsylvania and an expert in privacy and data protection law, spoke in Filene Auditorium. Noble and Allen joined the class for hourlong conversations following their talks, which took place on July 25 and Aug. 1, respectively.
And students recently heard presentations by participants in the three-day “Dartmouth Workshop on Ethics and Information Technology,” co-organized by Brison and Steven Kelts, a lecturer in the Princeton University Center for Human Values, with grants from Dartmouth’s Ethics Institute, Neukom Institute for Computational Science, and the Lu Family Academic Enrichment Fund in Philosophy.
With sessions such as “Affective Computing and Human-AI Interaction,” “Fairness In/And Machine Learning,” and “Shame is the Name of the Game: A Value-Sensitive Design Approach to Combating Uncivil Speech Online,” the workshop at the Hanover Inn earlier this month explored many of the same issues as the course. It drew more than a dozen ethicists, researchers, and professors from Google, Meta, and universities across the country.
Sophia Rubens ’24, a physics major from Stratham, N.H., says meeting with the lecturers and workshop participants has been “really valuable.”
“Most of us use email, social media, lots of things that have predictive algorithms, predictive advertising,” says Rubens, who chose the course based on its everyday relevance. She says that it has been illuminating to see how leaders in the field “are communicating with high-ranking officials in these companies.”
On a recent Wednesday afternoon, Kelts and several other workshop participants spoke to the class and answered questions during an “ask me anything” session.
The discussion kicked off with a question from Allison Zhuang ’25 that elicited nods from many of her classmates. “Do people in charge of product development listen to ethical concerns you bring up?”
Geoff Keeling, an AI ethicist at Google, said he’s had “nothing but positive engagement” with the bioethics team he works on. When Zhuang asked for an example, Keeling described a model he built to work accurately across different skin types, prompted by his concerns about a new product then in development.
The conversation also touched on various challenges involved with applying standards to IT, such as promoting an interdisciplinary approach among researchers who are unaccustomed to thinking outside of their fields, and keeping pace with new developments.
Often, by the time standards are in place, technology has changed, Keeling said.
A Lasting Effect
Looking ahead, it appears that the effects of the class and the workshop will be felt long after the term ends.
Brison says the workshop generated what she expects will be an ongoing collaboration among industry and academic leaders.
Despite some constraints, people working for Google and Meta can talk quite freely about areas they’re researching and problems they’re running into that they would like to get advice about from ethicists, Brison says. Since they often play a role in shaping their companies’ research agendas, “it’s exciting to be able to be in communication with them at the earliest stages.”
And Brian Zheng ’24, a government major from Naperville, Ill., says he expects to apply what he learned in class to a career in the military.
After graduating, he plans to be commissioned as a second lieutenant through the ROTC program, says Zheng, a member of South House. He hopes to work in either communications or intelligence, fields in which technology is a factor in “decisions that definitely impact people’s lives across the world.”
Kamil Salame ’24, a politics, philosophy, and economics major from Greenwich, Conn., says the class has changed the way he views technology.
Going forward, he’ll be more aware of how his data is being used to identify his interests, passions, and “what I might want to buy,” says Salame, a member of School House. “It’s enabled me to think critically about the ways in which I use technology, and the ways in which technology uses me.”