IIT Jodhpur scientist analyses the explainability of black-box machine learning models

JODHPUR: A researcher at the Indian Institute of Technology Jodhpur works in the cutting-edge field of Artificial Intelligence and analyses the problem of explainability of black-box machine learning models. The research shows how the current philosophy of explainable machine learning suffers from certain limitations, which have contributed to a proliferation of black-box models. Machine learning is one of the most sought-after areas of study today, and in that context the explainability of machine learning models represents a fundamental problem. The main aim of this research was to develop more transparent (explainable) machine learning models that can be deployed in practical applications where intelligent prediction and analysis are required.

The research was carried out by Dr. Manish Narwaria, Assistant Professor, Department of Electrical Engineering, IIT Jodhpur. Is it possible to retain the accuracy of black-box models while enhancing their explainability? This is the question the IIT Jodhpur researcher seeks to answer. In a recent paper published in Image and Vision Computing (https://doi.org/10.1016/j.imavis.2021.104353), Dr. Narwaria provided his perspective on the issues associated with the accuracy-explainability trade-off and possible ways of mitigating them.

As part of the Artificial Intelligence ensemble, Machine Learning (ML) and Deep Learning (DL) methods are increasingly being used in computer vision applications such as automatic object detection, segmentation, and tracking of objects in static images and videos. ML- and DL-enabled computer vision applications are found in self-driving cars, in banking operations for automated ID checks, and in healthcare for the automated diagnosis of diseases from medical images.

ML and DL methods are fundamentally algorithms that learn patterns from existing data and use this knowledge for predictions and other intelligent operations, thereby enabling continuous learning, much like the human brain. These automated learning systems can be black-box or white-box models, depending on how transparent the internal algorithm is to scrutiny on aspects such as generalization, robustness, performance, and scalability to related applications. Black-box models such as neural networks rely almost exclusively on data for algorithm development and tend to be accurate; however, the inner workings of such models are difficult to interpret in the context of the practical problem being analyzed. White-box models such as linear regression and decision trees, on the other hand, rely on explicit modeling of application-specific domain knowledge and may sacrifice accuracy for greater interpretability.
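
To make the trade-off concrete, the following is a minimal illustrative sketch, not taken from the paper: it contrasts a white-box model (a shallow decision tree, whose learned rules can be printed and read) with a black-box model (a small neural network) on the same classification task. The use of scikit-learn and the Iris dataset here is purely an assumption for illustration.

```python
# A minimal sketch (illustrative only, not from the paper): contrast a
# white-box model, whose learned rules can be read directly, with a
# black-box model, whose weights resist such inspection.
# Assumptions: scikit-learn is installed; the Iris dataset stands in
# for a real application.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

# White-box: a shallow decision tree; its if-then rules are printable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=iris.feature_names))

# Black-box: a small neural network; often at least as accurate, but its
# learned weights do not map to human-readable rules.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("Neural network accuracy:", mlp.score(X_test, y_test))
```

The printed decision rules can be checked directly against domain knowledge, whereas the network's weights cannot; this gap is the essence of the accuracy-explainability trade-off the paper examines.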

“Explainability should be accorded high priority in the ML algorithm design process and not left merely as a post-hoc exercise,” emphasized Dr. Narwaria. He added, “The main objective of my lab is to enable the development of more transparent (explainable) machine learning models that can be deployed in several practical applications where intelligent prediction and analysis are required.”

Dr. Narwaria highlighted and addressed fundamental issues with algorithm implementation, the interpretation of results, and meaningful extension to practical use cases. His analyses and findings provide a rigorous perspective on the conceptualization and design of explainable machine learning algorithms, and could eventually enable a more general framework for developing meaningfully explainable machine learning models, irrespective of the application or practical use case.

The findings of the IIT Jodhpur researcher provide a solid basis for the design and development of more transparent (i.e., explainable) machine learning models. Future work includes more precise computational modelling and refinement, which could enable the real-time deployment of such models in practical use cases.