Responsible Data Science Against Black Boxes

Romayssa Bedjaoui
3 min read · Feb 22, 2021

The artificial intelligence (AI) era is here: intelligent systems that perform tasks beyond human competence are growing in capability and in their impact on our lives. They shape our work, leisure, education, and more. And as machine learning algorithms solve ever more complex problems in ever less time, they are also becoming more opaque and dynamic.

Black-box algorithms are everywhere

Algorithms are becoming black boxes that produce results that are very hard, sometimes impossible, to inspect or understand.

Algorithms make decisions that shape our lives and futures, yet we don't know how these crucial matters are being handled.

A call for transparency has been made. In this article, I try to explain what transparency in AI means, why it is required, and the limits we face when trying to build transparent systems.

What is transparency?

According to the Cambridge Dictionary, transparency is the quality of being done in an open way, without secrets. Albert Meijer, in his paper "Understanding Modern Transparency", said:

Transparency is lifting the veil of secrecy or the ability to look clearly through the windows of an institution. Everything is out in the open and can be scrutinized. Transparency is contrasted with opaque policy measures, where it is hard to discover who takes the decisions, what they are, and who gains and who loses.

Transparency can also be interpreted as a means to battle corruption, as openness creates security.

Why transparency in AI?

Even if algorithms may appear fairer and more reliable than humans, they are made by us. More importantly, they mimic our mistakes by learning from our past, since they are fed real-life data. The goal of technology is to improve our social standards, not to replicate our dark history.

Through transparent systems, we want to check whether they are:

  • Fair and unbiased: no discrimination against a minority in favor of a majority (a minimal check is sketched after this list).
  • Reliable: they generalize well beyond their training errors and are robust against attacks.
  • Privacy-respecting: data and users are protected.
  • Causal: predictions are inferred from causal associations, not spurious correlations.
  • Trusted: the end user needs to feel safe using the system.
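
To make the fairness check concrete, here is a minimal sketch of one common audit, demographic parity: comparing the rate of positive predictions across two groups. The function and the toy data are illustrative assumptions on my part, not taken from any particular library or from the references below.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) from the model under audit.
    group:  binary group membership (e.g. 0 = majority, 1 = minority).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_majority = y_pred[group == 0].mean()  # positive rate in group 0
    rate_minority = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_majority - rate_minority)

# Hypothetical predictions for eight applicants from two groups.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5: a large gap
```

A value near zero only says the two groups receive positive predictions at similar rates; it is one narrow signal, not proof of fairness on its own.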

Transparent systems are about building a responsible and ethical technological baseline. This can't be accomplished with the current black boxes: we need to open them, understand them, learn from them, and prevent them from repeating our mistakes. In short, to put it all together:

We seek interpretability through transparency, and interpretability is understandability.

Limits of Transparency in AI

Unfortunately, opening these black boxes is not an easy task; in some cases, understanding what these systems are doing can be impossible. Cyberspace is infinite, while cybertime and human cognitive capacity are limited. Moreover, the algorithms in use keep growing more dynamic and complex. Even experts in the field cannot fully understand what some advanced deep learning algorithms are doing.

Of course, there is growing interest in machine learning interpretability to address these problems. This has led to a new research area, AI interpretability and explainability, where researchers develop methods for explaining the predictions of intelligent systems; one simple example is sketched below.
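
To give a flavor of these methods, here is a minimal sketch of permutation importance, a model-agnostic technique: shuffle one feature at a time and measure how much the model's test score drops. The random forest and the synthetic data are assumptions for the sake of the example; the scikit-learn call itself (sklearn.inspection.permutation_importance) is a real API.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real-life data: 3 informative features out of 5.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box": hundreds of trees with no single rule to read off.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and record how much the score drops;
# a larger drop means the model relied more on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score_drop in enumerate(result.importances_mean):
    print(f"feature {i}: {score_drop:.3f}")
```

A probe like this does not open the box, but it tells us which inputs the model's decisions hinge on, which is often the first step of an audit.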

But should we open all the black boxes? Is total transparency required, and is it safe? Or should we keep Pandora's box closed?

References:

  1. Albert Meijer, "Understanding Modern Transparency".
  2. Mike Ananny & Kate Crawford, "Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability".
  3. Carolyn Ball, "What Is Transparency?"
  4. Finale Doshi-Velez & Been Kim, "Towards a Rigorous Science of Interpretable Machine Learning".
  5. "Transparent to Whom? No Algorithmic Accountability without a Critical Audience", Information, Communication & Society.
  6. Christoph Molnar, "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable." https://christophm.github.io/interpretable-ml-book/storytime.html#storytime
