Abstract: Catastrophic and existential risks from AI must be taken seriously. But the current existential risk discourse is dominated by speculation. In this talk, I’ll dissect a series of fallacies that have led to alarmist conclusions, including a misapplication of probability, a mischaracterization of human intelligence, and a conflation of AI models with AI systems. I will argue that we already have the tools to address risks calmly and collectively, and provide a set of recommendations for how to do so.
This talk is based on a chapter of the forthcoming book AI Snake Oil by Arvind Narayanan and Sayash Kapoor.

When

7/22/2024

3:30 pm - 5:00 pm


Location

Haldeman Hall 41 (Kreindler Conference Hall)

Sponsored by

Philosophy Department, Wright Center

Audience

Public

Arvind Narayanan: “How to Think about AI and Existential Risk”