Master’s Thesis Presentation • Data Systems • A Framework for Explaining LLM Reasoning with Knowledge Graphs

Tuesday, December 2, 2025 1:00 pm - 2:00 pm EST (GMT -05:00)

Please note: This master’s thesis presentation will take place online.

Moein Shirdel, Master’s candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Lukasz Golab

Large Language Models (LLMs) have demonstrated remarkable question-answering (QA) capabilities, yet their decision processes and outputs often remain opaque and prone to factual inconsistencies. While existing methods evaluate or ground LLM outputs after generation, they typically lack mechanisms for aligning LLM reasoning with external knowledge sources.

This thesis introduces AprèsCoT, a lightweight, model-agnostic framework that validates LLM reasoning by grounding it in an external knowledge graph (KG). AprèsCoT operates through three main components: Subgraph Retrieval, which extracts a KG subgraph relevant to a given query; Triple Extraction and Parsing, which converts the LLM’s output into factual triples; and Matching, which aligns these triples with entities and relations in the extracted subgraph. Together, these modules align LLM reasoning with structured knowledge, producing traceable, structured explanations alongside model outputs. We evaluate alternative retrieval and matching strategies, analyze their trade-offs, and demonstrate how AprèsCoT helps users surface reasoning gaps, hallucinations, and missing facts. Experiments across multiple domains, including large-scale KGs, highlight AprèsCoT’s effectiveness in advancing trustworthy and explainable AI.
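The three-stage pipeline in the abstract can be sketched roughly as follows. This is a minimal illustrative mock-up, not the thesis's actual implementation: the toy knowledge graph, the pipe-delimited triple format, and the exact-match alignment are all assumptions made for the example.

```python
# Stage 1 (Subgraph Retrieval): a real system would query a large KG;
# here we hard-code a tiny set of (subject, relation, object) triples.
def retrieve_subgraph(query_entities):
    kg = {
        ("Ottawa", "capital_of", "Canada"),
        ("Canada", "located_in", "North America"),
    }
    return {t for t in kg if t[0] in query_entities or t[2] in query_entities}

# Stage 2 (Triple Extraction and Parsing): parse the LLM's reasoning into
# factual triples; a real system would use an extraction model, not a
# fixed "subject | relation | object" line format.
def extract_triples(llm_output):
    triples = []
    for line in llm_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

# Stage 3 (Matching): align each extracted triple with the retrieved
# subgraph; triples with no support are flagged as potential
# hallucinations or missing facts.
def match(triples, subgraph):
    return {t: (t in subgraph) for t in triples}

llm_reasoning = """
Ottawa | capital_of | Canada
Ottawa | located_in | Europe
"""
subgraph = retrieve_subgraph({"Ottawa", "Canada"})
report = match(extract_triples(llm_reasoning), subgraph)
for triple, supported in report.items():
    print(triple, "supported" if supported else "UNSUPPORTED")
```

In this toy run, the first triple is grounded in the subgraph while the second is flagged as unsupported, mirroring how the framework surfaces hallucinated claims alongside the model's output.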


Attend this master’s thesis presentation virtually on Zoom.