Why Should We Care About Understanding AI?
“What Makes Us Human in the Age of A.I.?”
A joint series sponsored by The Center for Digital Ethics and the Georgetown Humanities Initiative
presents
Will Fleisher
Assistant Professor of Philosophy
and Research Assistant Professor, Center for Digital Ethics
“Why Should We Care About Understanding AI?”
Wednesday, October 30, 2024
4:00 pm
Old North 205
Please register to reserve your seat
The most sophisticated AI tools use models that are deeply opaque. They are so large, so complex, and trained on so much data that not even their developers fully understand why they function the way they do. These models are even more opaque to the general public, since much of the knowledge of how they are developed is kept secret by their developers. And even when models are open source, this does little to aid the comprehension of those without advanced technical training.
This opacity has raised concerns about the use of complex AI tools in a democratic society. Transparency is a requirement for democratic legitimacy. Moreover, some have argued that people have a right to an explanation of how they are treated.
I think there is something fundamentally right about these concerns: we often do need to understand a tool before it is permissible to use it. However, explaining why AI opacity is a problem for legitimacy and the right to explanation is more complicated than it seems. There is a great deal of opacity in our understanding of existing, non-AI technologies: drugs, for instance, are commonly prescribed even when their mechanism of action is not understood. Moreover, our governments currently operate with a great deal of opacity, and in some cases this opacity does not seem problematic.
If we are to ground the importance of understanding AI, we need a better account of what that understanding does for us and why we should care.