Show simple item record

dc.contributor.advisor  Pattichis, Constantinos  en
dc.contributor.advisor  Kakas, Antonis  en
dc.contributor.author  Prentzas, Nicoletta F.  en
dc.creator  Petkov, Nicolai [0000-0003-2163-8647]
dc.date.accessioned  2024-05-27T09:03:33Z
dc.date.available  2024-05-27T09:03:33Z
dc.date.issued  2024-05-24
dc.identifier.uri  http://gnosis.library.ucy.ac.cy/handle/7/66220  en
dc.description  Includes bibliographical references.  en
dc.description  Number of sources in the bibliography: 220.  en
dc.description  Thesis (Ph. D.) -- University of Cyprus, Faculty of Pure and Applied Sciences, Department of Computer Science, 2024.  en
dc.description  The University of Cyprus Library holds the printed form of the thesis.  en
dc.description.abstract  The advantages of Artificial Intelligence (AI) in the medical domain have been extensively discussed in the literature, with applications in medical imaging, disease diagnosis, treatment recommendation, and patient care in general. Yet the current adoption of AI solutions in medicine suggests that, despite its undeniable potential, AI is not being taken up at the expected level. The opacity of many AI models creates uncertainty about how they operate, which raises concerns about their trustworthiness and, in turn, major barriers to their adoption in life-critical domains, where their value could be most significant. Explainable AI (XAI) has emerged as an important area of research addressing these issues through new methods and techniques that enable AI systems to provide clear, human-understandable explanations of their results and decision-making process. This thesis contributes to this line of research by exploring a hybrid AI approach that integrates machine learning with logical methods of argumentation to provide explainable solutions to learning problems. The main study of this thesis is the proposal of the ArgEML approach and methodology for Explainable Machine Learning (ML) via Argumentation. In ArgEML, argumentation serves both as a naturally suitable target language for ML and its explanations, and as the foundation for new notions that drive the machine learning process. The flexibility of argumentation-based reasoning in the face of unknown and incomplete information, together with the direct link of argumentation to justification and explanation, enables the development of a natural form of explainable ML. We have implemented ArgEML in the context of the structured argumentation framework of Gorgias; other choices are equally possible, as ArgEML at the conceptual and theoretical level is independent of the technical details of the argumentation framework. Explanations in ArgEML are constructed from argumentation: arguments supporting a conclusion provide the attributive part of the explanation, giving the reasons why a conclusion would hold, while defending arguments provide the contrastive part, justifying why a particular conclusion is a good choice in relation to other possible and contradictory conclusions. Additionally, ArgEML offers an explanation-based partitioning of the learning space, identifying subgroups of the problem characterized by a unique pattern of explanation; this helps us understand and improve the learner. ArgEML is a form of symbolic supervised learning that can also be applied in hybrid mode on top of other symbolic or non-symbolic learners. The ArgEML methodology was evaluated on standard datasets and tested on real-life medical data for the prediction of stroke, endometrial cancer, and intracranial aneurysm. In terms of accuracy, the results were comparable with baseline ML models such as Random Forest and Support Vector Machine. The ArgEML framework puts an emphasis on the form of the explanations that accompany predictions, containing both an attributive and a contrastive part. Compared to other explainability techniques, the ArgEML approach provides highly informative "peer explanations" of the form that human experts in the medical domain would give.  en
dc.language.iso  eng  en
dc.publisher  Πανεπιστήμιο Κύπρου, Σχολή Θετικών και Εφαρμοσμένων Επιστημών / University of Cyprus, Faculty of Pure and Applied Sciences
dc.rights  info:eu-repo/semantics/openAccess  en
dc.title  Advancing AI explainability with an argumentation-based machine learning approach and its application in the medical domain  en
dc.type  info:eu-repo/semantics/doctoralThesis  en
dc.contributor.committeemember  Dimopoulos, Yannis  en
dc.contributor.committeemember  Keravnou-Papailiou, Elpida  en
dc.contributor.committeemember  Bassiliades, Nick  en
dc.contributor.committeemember  Petkov, Nicolai  en
dc.contributor.department  Τμήμα Πληροφορικής / Department of Computer Science
dc.subject.uncontrolledterm  ARGUMENTATION IN MACHINE LEARNING  en
dc.subject.uncontrolledterm  EXPLAINABLE MACHINE LEARNING  en
dc.author.faculty  Σχολή Θετικών και Εφαρμοσμένων Επιστημών / Faculty of Pure and Applied Sciences
dc.author.department  Τμήμα Πληροφορικής / Department of Computer Science
dc.type.uhtype  Doctoral Thesis  en
dc.rights.embargodate  2025-05-24
dc.contributor.orcid  Prentzas, Nicoletta F. [0000-0003-0843-1086]
dc.contributor.orcid  Pattichis, Constantinos [0000-0003-1271-8151]
dc.contributor.orcid  Kakas, Antonis [0000-0001-6773-3944]
dc.contributor.orcid  Dimopoulos, Yannis [0000-0001-9583-9754]
dc.contributor.orcid  Keravnou-Papailiou, Elpida [0000-0002-8980-4253]
dc.contributor.orcid  Bassiliades, Nick [0000-0001-6035-1038]
dc.gnosis.orcid  0000-0003-0843-1086
dc.gnosis.orcid  0000-0003-1271-8151
dc.gnosis.orcid  0000-0001-6773-3944
dc.gnosis.orcid  0000-0001-9583-9754
dc.gnosis.orcid  0000-0002-8980-4253
dc.gnosis.orcid  0000-0001-6035-1038
dc.gnosis.orcid  0000-0003-2163-8647


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)
