Explainable AI
April 1, 2024
The following is a short essay I had to write for the course Ethics in AI. I briefly summarize an article on the topic (which I can’t find…), propose a question, and very succinctly try to answer and discuss it.
This text discusses the current difficulties in the field of Explainable AI. First, it highlights the two primary approaches to AI: symbolic and statistical. Symbolic AI offers a higher degree of explainability thanks to its logic-based processes, though it is not free of challenges either. Statistical techniques, on the other hand, are considerably harder to explain: they leverage enormous amounts of data, and their inner workings are inherently statistical and difficult to interpret. The text then outlines two main ways to improve the explainability of AI systems: develop inherently explainable systems, or examine existing ‘black box’ systems to provide insights into how they work. Which method is appropriate often depends on the type of explanation we are seeking. Users, for example, often require a ‘local’ explanation, where a specific decision is explained; developers, by contrast, might need a ‘global’ explanation of how the system works as a whole.

The text then notes that scientists have been using statistical techniques for quite some time and that machine learning (ML) has many helpful applications. Sometimes good accuracy is enough, but for much of scientific discovery we are not only interested in what the answer is, but also in why. Lastly, the text mentions some applications where such systems are used. Risk assessment tools and ML in healthcare, for example, can both be of tremendous value, but it is also important that we understand the reasoning behind their outputs, so that we stay in the loop on decisions that affect people’s lives.
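To make the ‘local’ versus ‘global’ distinction a bit more concrete, here is a minimal sketch that is not from the article or the essay: it assumes scikit-learn is available, uses an arbitrary built-in dataset and a random forest purely for illustration, and stands in for the many real explainability methods (permutation importance for the ‘global’ view, a crude single-instance perturbation for the ‘local’ view).

```python
# Minimal sketch of a 'global' vs. a 'local' explanation of a black-box model.
# The dataset, model, and both explanation techniques are illustrative choices,
# not taken from the essay or the summarized article.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# 'Global' view (developer-oriented): which features drive the model overall?
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:3]
print("Globally most influential features:", list(X.columns[top]))

# 'Local' view (user-oriented): why was *this* particular case classified this way?
# Crude approach: move each feature of one instance to the dataset mean and
# measure how much the predicted probability shifts.
x0 = X.iloc[[0]]
base = model.predict_proba(x0)[0, 1]
shifts = {}
for col in X.columns:
    x_pert = x0.copy()
    x_pert[col] = X[col].mean()
    shifts[col] = abs(base - model.predict_proba(x_pert)[0, 1])
most_local = max(shifts, key=shifts.get)
print(f"For this single case, '{most_local}' mattered most "
      f"(probability shift {shifts[most_local]:.3f})")
```

The first printout is roughly the kind of answer a developer asking ‘how does the system work?’ is after; the second is closer to what a user affected by one specific decision would want.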
There are numerous challenges in the field of explainable AI. We are broadly concerned with ensuring the transparency of ML decisions, especially in critical fields such as healthcare and criminal justice. However, while we develop various methods, we must also consider the need for accountability in how these techniques are deployed. This is especially the case when they have profound implications for individuals’ lives, as in the aforementioned fields.
Given that explainability and transparency should play an essential role in the deployment of ML systems in sensitive areas such as healthcare and criminal justice, who should bear primary responsibility for ensuring that these systems do not operate as opaque ‘black boxes’?
There are three main candidates for bearing primary responsibility: the government, the institutions that develop these ML systems, and the institutions that apply them. Of the three, I believe the government stands out as the most appropriate, and also the most capable, entity. The threat of opaque systems being deployed in sensitive areas is not local but regional, national, or even global. Governments, by virtue of their oversight capabilities, are therefore positioned to set and enforce standards that ensure transparency and accountability in AI applications. Moreover, governments have a general overview of societal needs and are, or at least should be, in a position to make the most informed decisions on legislation. Although we might also wish for the developers and users of ML systems to bear responsibility, we realistically cannot expect it from them, given the many financial incentives by which they might be swayed. By legislating transparency requirements and overseeing their implementation, governments can ensure that ML systems developed and deployed in sensitive areas are understandable to everyone involved. Only then will ‘black boxes’ become slightly more ‘white’.