Journal of the National Science Foundation of Sri Lanka (2022),
Vol 50, 263-276
Artificial Intelligence (AI) and its data-centric branch, machine learning
(ML), have evolved greatly over the last few decades. However, as AI is used
increasingly in real-world applications, the interpretability of and
accessibility to AI systems have become major research areas. The lack of
interpretability of ML-based systems is a major hindrance to the widespread
adoption of these powerful algorithms, owing to many factors including
ethical and regulatory concerns, and has led to poorer adoption of ML
in some areas. The recent past has seen a surge in research on interpretable
ML. Generally, designing an ML system requires good domain understanding
combined with expert knowledge. New techniques are emerging to improve ML
accessibility through automated model design. This paper reviews work on
improving the interpretability and accessibility of machine learning in the
context of global problems, with particular relevance to developing
countries. We review work under multiple levels of interpretability,
including scientific and mathematical interpretation, statistical
interpretation, and partial semantic interpretation. The review covers
applications in three areas: food processing, agriculture, and health.
N. Ranasinghe, A. Ramanan, S. Fernando, P. N. Hameed, D. Herath, T. Malepathirana, P. Suganthan, M. Niranjan, S. Halgamuge
2022-11-30