Enhancing deep learning model explainability in brain tumor datasets using post-heuristic approaches

Master's thesis--University of Macedonia, Thessaloniki, 2024.

First author: Πασβάντης, Κωνσταντίνος
Supervising Professor: Πρωτοπαπαδάκης, Ευτύχιος
Format: Electronic Thesis or Dissertation
Language: English
Other Publication Details: University of Macedonia, 2024
Department: Postgraduate Studies Programme in Artificial Intelligence and Data Analytics
Subjects/Keywords:
Available Online: http://dspace.lib.uom.gr/handle/2159/30853
Abstract: The application of deep learning models in medical diagnosis has demonstrated considerable efficacy in recent years. Nevertheless, a notable limitation is the inherent lack of explainability in their decision-making processes. This study addresses that constraint by enhancing the robustness of the model's interpretability. The primary focus is on refining the explanations generated by the LIME library and its image explainer. This is achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments were conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis.
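
For context, the sketch below illustrates the kind of pipeline the abstract describes: a standard LIME image explanation followed by a rule-based post-processing step. It is an illustration under stated assumptions, not the thesis's implementation. The classifier function is a random stand-in for the trained brain tumor model, the input image is a placeholder for a preprocessed MRI slice, and the weight-threshold filter is a hypothetical example of a post-processing rule; the actual scenario-specific rules are not detailed in this abstract.

```python
# Minimal sketch, assuming the `lime` and `numpy` packages are installed.
import numpy as np
from lime import lime_image

def classifier_fn(images):
    # Stand-in for the trained model's batch prediction, e.g. model.predict(images).
    # Returns pseudo-probabilities for two classes: (no tumor, tumor).
    rng = np.random.default_rng(0)
    return rng.random((len(images), 2))

# Placeholder for a preprocessed MRI slice (RGB, values in [0, 1]).
image = np.random.rand(224, 224, 3)

# Standard LIME image explanation of the single top-scoring label.
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=200
)

# Post-processing: keep only superpixels whose LIME weight exceeds a threshold.
# This is an illustrative rule in place of the thesis's scenario-specific rules.
label = explanation.top_labels[0]
superpixel_weights = dict(explanation.local_exp[label])  # {segment_id: weight}
kept_segments = [seg for seg, w in superpixel_weights.items() if w > 0.01]
refined_mask = np.isin(explanation.segments, kept_segments)  # boolean mask of retained regions
print(f"Retained {refined_mask.sum()} of {refined_mask.size} pixels after filtering")
```

In practice, the refined mask would be overlaid on the original MRI slice so that only the highlighted regions judged relevant by the chosen rules remain visible to the clinician.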