Explainable artificial intelligence in medicine: symbolic reasoning, machine learning, and hybrid approaches
Date: 2023-12
Author: Theocharous, Chara
Publisher: Πανεπιστήμιο Κύπρου, Σχολή Θετικών και Εφαρμοσμένων Επιστημών / University of Cyprus, Faculty of Pure and Applied Sciences
Place of publication: Cyprus
Abstract
As the use of Artificial Intelligence (AI) grows exponentially, it has been incorporated into medical diagnosis as well as other domains. Although Machine Learning (ML) models have been widely adopted, many of them remain largely black boxes, meaning that their reasoning and/or their results are not understandable to users. In addition, instances of inaccurate or unfair results produced by these systems, combined with legal regulations, have led to the need for explainable AI. Moreover, AI comprises distinct subfields, each with its own advantages and disadvantages. On the one hand, modern ML and Deep Learning are characterised by high performance but limited interpretability. On the other hand, early symbolic AI approaches are more interpretable but also more costly, as their rules are created through human intervention. Modernising symbolic reasoning by incorporating ML may therefore help improve the explainability of AI outcomes in medicine.
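One common hybrid pattern of the kind the abstract alludes to can be sketched with surrogate rule extraction: a high-performing black-box model is trained first, and a shallow decision tree is then fitted to its predictions so that its branches read as symbolic IF-THEN rules. The sketch below is a generic illustration of this pattern, not the method developed in the thesis; the dataset, model choices, and parameters (e.g. `max_depth=3`) are illustrative assumptions.

```python
# Minimal sketch: distil a black-box classifier into a small,
# rule-like decision tree (a generic neuro-symbolic XAI pattern).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Public medical dataset (breast-cancer diagnosis), chosen for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. "Black-box" learner: accurate but hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# 2. Symbolic surrogate: a shallow tree trained to mimic the black-box
#    predictions, trading some fidelity for readability.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. The surrogate's branches are human-readable IF-THEN rules.
print(export_text(surrogate, feature_names=list(X.columns)))
print("black-box accuracy:", black_box.score(X_test, y_test))
print("surrogate fidelity:", surrogate.score(X_test, black_box.predict(X_test)))
```

In this sketch the depth of the surrogate controls the trade-off the abstract describes: a deeper tree mimics the black box more faithfully, while a shallower one yields shorter, more interpretable rules.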