RESEARCH KEYNOTE SERIES

                                                        Madjid Fathi

                                  Professor & Director (University of Siegen, Germany)

Bio: Madjid Fathi is a professor in the Department of Electrical Engineering and Computer Science at the University of Siegen, where he is Chair and Director of the Institute of Knowledge-Based Systems and Knowledge Management. Before joining Siegen, he was a visiting professor at UNM, Florida State, and Georgia Tech in the USA. He established the research center KMIS (Knowledge Management & Intelligent System) in 2007. He spent his 2012–2013 sabbatical at U.C. Berkeley at BISC (Berkeley Initiative in Soft Computing), led by Professor Zadeh, the father of fuzzy logic.
His research interests focus on AI, knowledge-based systems and knowledge management applications in medicine and engineering, computational intelligence, and knowledge discovery from text (KDT). Dr. Fathi has published four textbooks and eight edited books and, with his students, more than 250 papers, receiving five best paper awards. He received the European ‘Qute’ award in 2015. He published the book “Computer Aided Writing” with Dr. Klahold (Springer, December 2019).
He is also the editor of the book series “Integrated Systems: Innovations and Applications”, published by Springer. Most recently, Dr. Fathi, together with Dr. Reza Alam of U.C. Berkeley, published the book “Integrated Systems: Data Driven Engineering” (August 2024).

Title for Talk: Explainable AI as an Intellectual Paradigm – Data-Driven Intelligent Decision Support through Integration of Knowledge Graphs

Abstract: Explainable AI (XAI) has become an essential aspect of AI, emphasizing the need for transparency and understanding in artificial intelligence models. XAI isn’t just a model; it embodies the intelligence required to describe and uncover how solutions to complex tasks are achieved. By utilizing cognitive technologies and deep learning, XAI can offer solutions through informed decision-making processes. Crucial to the effective application of AI is the ability to search for the right knowledge and to develop strategies for collecting and using facts.
The success of explainable AI relies heavily on the extraction and representation of human knowledge, which is then used to train intelligent agents. In this talk, I will discuss the pivotal role of explainable AI models in analyzing and supporting intelligent algorithms, particularly in the context of creating AI systems based on knowledge graphs. Knowledge graphs are advanced knowledge modeling methods capable of organizing and representing information in a general context. They interlink networks of information, reflecting the contextual relationships between various parts of the extracted knowledge.
When combined with well-designed AI algorithms, knowledge graphs have shown great potential in enhancing data-driven intelligent decision-making processes. This synergy is particularly evident in fields such as medical applications, smart Industry 4.0, smart cities, and other problem-solving and optimization domains. By leveraging the strengths of XAI and knowledge graphs, we can achieve more transparent, accurate, and effective AI systems that drive innovation and efficiency across various industries.
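The interlinked structure described above can be sketched as a store of (subject, predicate, object) triples whose links can be traversed and rendered as human-readable statements. This is a minimal illustration only; the `KnowledgeGraph` class and the medical facts below are hypothetical examples, not part of the talk or any real system.

```python
# Minimal knowledge-graph sketch: facts are (subject, predicate, object)
# triples; queries follow the contextual links between entities.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # index triples by subject for fast outgoing-edge lookup
        self._out = defaultdict(list)

    def add(self, subject, predicate, obj):
        """Store one triple, e.g. ("Aspirin", "treats", "Headache")."""
        self._out[subject].append((predicate, obj))

    def neighbors(self, subject):
        """Return all (predicate, object) pairs linked from `subject`."""
        return list(self._out[subject])

    def explain(self, subject):
        """Render stored relations as readable statements, mirroring how
        an XAI system might surface the facts behind a decision."""
        return [f"{subject} --{p}--> {o}" for p, o in self._out[subject]]

# Illustrative (made-up) medical facts
kg = KnowledgeGraph()
kg.add("Aspirin", "treats", "Headache")
kg.add("Aspirin", "interactsWith", "Warfarin")
kg.add("Headache", "symptomOf", "Migraine")

print(kg.explain("Aspirin"))
# ['Aspirin --treats--> Headache', 'Aspirin --interactsWith--> Warfarin']
```

A decision-support system built this way can justify a recommendation by listing the traversed triples, which is exactly the kind of transparency the abstract attributes to combining XAI with knowledge graphs.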

Important Deadlines

Full Paper Submission: 16th August 2024
Acceptance Notification: 30th August 2024
Final Paper Submission: 15th September 2024
Early Bird Registration: 16th September 2024
Presentation Submission: 29th September 2024
Conference: 24 - 26 October 2024

Announcements

  • A Best Paper Award will be given for each track.
  • Conference Record No.: 59035