As organizations race to architect scalable AI solutions, a fundamental question emerges: How do we best organize the interplay between structured knowledge and contextual intelligence?
A compelling analogy has emerged that reframes how we should think about knowledge graphs and large language models in enterprise AI systems.
Knowledge graphs are the fact tables of AI architecture. LLMs are the dimensions.
This mental model, borrowed from dimensional modeling principles, offers a surprisingly robust framework for understanding how these technologies complement each other in production systems.
The Dimensional Modeling Foundation
In traditional data warehousing, fact tables store measurable, quantitative data—the core business events and metrics. Dimension tables provide the context, the "who, what, when, where, why" that makes those facts meaningful. The beauty of this separation is clear: facts remain stable and queryable, while dimensions provide rich, flexible context for interpretation.
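The fact/dimension split can be made concrete with a minimal sketch. The table names, keys, and figures below are invented for illustration; a real warehouse would express this join in SQL over properly keyed tables.

```python
# A minimal star-schema sketch: fact rows hold measures plus foreign keys;
# dimension tables supply the descriptive context that makes them meaningful.

# Fact table: one row per sale event (measures + dimension keys)
fact_sales = [
    {"date_key": 20240115, "customer_key": 1, "amount": 250.0},
    {"date_key": 20240116, "customer_key": 2, "amount": 99.0},
]

# Dimension table: rich, flexible context for each customer key
dim_customer = {
    1: {"name": "Acme Corp", "segment": "Healthcare", "region": "EMEA"},
    2: {"name": "Globex", "segment": "Retail", "region": "NA"},
}

def sales_by_segment(facts, customers):
    """Join facts to the customer dimension and aggregate a measure."""
    totals = {}
    for row in facts:
        segment = customers[row["customer_key"]]["segment"]
        totals[segment] = totals.get(segment, 0.0) + row["amount"]
    return totals

print(sales_by_segment(fact_sales, dim_customer))
# {'Healthcare': 250.0, 'Retail': 99.0}
```

Note that the facts stay stable and queryable while the dimension can be enriched (new attributes, new segments) without touching a single fact row.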
Now consider how this maps to modern AI architecture:
- Knowledge graphs serve as fact tables: They store discrete, verifiable relationships between entities—the fundamental assertions about your domain
- LLMs function as dimension tables: They provide contextual understanding, semantic interpretation, and the ability to reason about those facts in natural language
This is not just a metaphor—it is an architectural principle with profound implications for how we design AI systems.
Knowledge Graphs: The Structured Core
Like fact tables, knowledge graphs excel at storing what we know to be true at a given point in time:
- Entity relationships that can be verified and version-controlled
- Domain-specific ontologies that remain consistent across applications
- Structured data that supports precise queries and logical inference
- Provenance and lineage, critical for enterprise governance
Knowledge graphs provide the bedrock of verifiable truth that enterprise AI systems require. They answer questions like "What products did customer X purchase?" or "Which regulatory requirements apply to process Y?" with precision and auditability.
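The "What products did customer X purchase?" question above is exactly the kind of precise, auditable lookup a triple store answers. Here is a toy sketch using plain subject-predicate-object tuples; entity and predicate names are invented, and a production system would use a real graph database or an RDF library instead.

```python
# A toy knowledge graph as subject-predicate-object triples, answering an
# exact query with full precision (every result traces to a stored assertion).

triples = [
    ("customer:X", "purchased", "product:A"),
    ("customer:X", "purchased", "product:B"),
    ("customer:Y", "purchased", "product:A"),
    ("product:A", "subject_to", "regulation:GDPR"),
]

def query(graph, subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern (None acts as a wildcard)."""
    return [
        (s, p, o) for (s, p, o) in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

purchases = [o for _, _, o in query(triples, subject="customer:X", predicate="purchased")]
print(purchases)  # ['product:A', 'product:B']
```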
However, unlike traditional fact tables, knowledge graphs must accommodate the reality that facts evolve. Customer relationships change, regulatory requirements update, and domain knowledge expands. The key is maintaining versioned, auditable changes while preserving the semantic consistency that makes knowledge graphs valuable.
Just as fact tables in a data warehouse maintain referential integrity, knowledge graphs enforce semantic consistency across your AI ecosystem, even as the underlying facts evolve over time.
LLMs: The Contextual Dimensions
LLMs, meanwhile, excel at providing contextual interpretation of those facts:
- Natural language understanding that makes structured data accessible
- Semantic reasoning that can infer relationships not explicitly stored
- Contextual adaptation based on user intent, domain, and situation
- The ability to synthesize information across multiple knowledge domains
When a user asks, "Why did our customer satisfaction drop in Q3?", the LLM does not need to store every possible causal relationship. Instead, it queries the knowledge graph for relevant facts and applies its dimensional understanding to generate contextual explanations.
Architectural Patterns That Emerge
This dimensional approach suggests several practical architectural patterns:
Pattern 1: Fact-First Retrieval
Query the knowledge graph first for authoritative facts, then use LLMs to interpret and contextualize those results. This ensures responses are grounded in verified data while remaining conversational and accessible.
Example: A customer service agent asks "Why did our Q3 sales drop in the healthcare segment?" The system queries the knowledge graph for healthcare sales data, competitive moves, and regulatory changes in Q3, then uses an LLM to synthesize these facts into a coherent explanation that considers market dynamics and seasonal patterns.
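The flow in this example can be sketched as a two-step orchestration. The `call_llm` function below is a stand-in for whatever model client you use, and the graph query is reduced to a dictionary lookup; the point is the ordering, with facts fetched first and the LLM constrained to them.

```python
# Sketch of Pattern 1 (fact-first retrieval): query the graph for verified
# facts, then ask the LLM to interpret ONLY those facts.

def fetch_facts(graph, segment, quarter):
    """Graph step: return authoritative facts scoped to the question."""
    return graph.get((segment, quarter), [])

def call_llm(prompt):
    # Placeholder for a real model call (e.g. an API client).
    return f"[LLM summary of: {prompt}]"

def answer(question, graph, segment, quarter):
    facts = fetch_facts(graph, segment, quarter)
    prompt = (
        f"Question: {question}\n"
        "Answer using ONLY these verified facts:\n- " + "\n- ".join(facts)
    )
    return call_llm(prompt)

graph = {("healthcare", "Q3"): ["sales fell 12% QoQ",
                                "competitor launched rival product"]}
print(answer("Why did Q3 healthcare sales drop?", graph, "healthcare", "Q3"))
```

Because the prompt carries only graph-sourced facts, the response stays grounded in governed data while the LLM supplies the conversational framing.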
Pattern 2: Dimensional Enrichment
Allow LLMs to suggest additional context or related concepts that should be queried from the knowledge graph, creating a feedback loop between structured and contextual knowledge.
Example: When analyzing a supply chain disruption, an LLM identifies that "geopolitical tensions" might be relevant and suggests querying the knowledge graph for related entities like "trade sanctions," "shipping routes," and "alternative suppliers" that were not in the original query scope.
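One way to keep this feedback loop safe is to validate every LLM suggestion against the graph before widening the query scope, so hallucinated concepts never enter it. The sketch below stubs the LLM call; all entity names are illustrative.

```python
# Sketch of Pattern 2 (dimensional enrichment): the LLM proposes related
# concepts, and only those that exist as graph entities expand the scope.

def suggest_related(topic):
    # Placeholder: a real system would ask an LLM for related concepts,
    # and the answer may include things the graph has never heard of.
    return {"trade sanctions", "shipping routes", "alternative suppliers", "weather"}

def expand_query_scope(graph_entities, topic):
    """Keep only LLM suggestions that resolve to real entities in the graph."""
    suggestions = suggest_related(topic)
    return sorted(suggestions & graph_entities)

known_entities = {"trade sanctions", "shipping routes",
                  "alternative suppliers", "ports"}
scope = expand_query_scope(known_entities, "supply chain disruption")
print(scope)  # ['alternative suppliers', 'shipping routes', 'trade sanctions']
```

Here "weather" is dropped because no matching entity exists, which keeps the subsequent graph queries grounded in verifiable nodes.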
Pattern 3: Hybrid Reasoning
Combine graph traversal algorithms with LLM reasoning to answer complex questions that require both factual accuracy and semantic understanding.
Example: "Which customers are at risk of churn based on recent product changes?" The system uses graph traversal to find customers who purchased affected products, then applies LLM reasoning to analyze support ticket sentiment, usage patterns, and communication history to assess churn probability.
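The division of labor in this example can be sketched as an exact traversal step followed by a soft scoring step. The data, the sentiment values, and the thresholding rule below are all invented; in practice the second step would be an LLM analyzing tickets and usage history rather than a one-line stub.

```python
# Sketch of Pattern 3 (hybrid reasoning): graph traversal finds the exact
# affected set; an LLM-style scorer (stubbed) then ranks churn risk.

edges = {  # adjacency: product -> customers who purchased it
    "product:A": ["cust:1", "cust:2"],
    "product:B": ["cust:3"],
}
sentiment = {"cust:1": -0.8, "cust:2": 0.2, "cust:3": -0.5}  # ticket sentiment

def affected_customers(changed_products):
    """Graph step: exact traversal from changed products to purchasers."""
    found = set()
    for p in changed_products:
        found.update(edges.get(p, []))
    return found

def churn_risk(customer):
    """LLM step (stubbed): map soft signals to a risk label."""
    return "high" if sentiment.get(customer, 0) < -0.5 else "low"

at_risk = {c: churn_risk(c) for c in sorted(affected_customers(["product:A"]))}
print(at_risk)  # {'cust:1': 'high', 'cust:2': 'low'}
```

The traversal guarantees no affected customer is missed or invented; the scorer handles the fuzzy judgment the graph cannot express.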
Pattern 4: Version-Controlled Context
Just as dimension tables can be slowly changing dimensions (SCDs), LLM behavior can be versioned and controlled while the underlying facts remain stable.
Example: As regulatory interpretation evolves, LLM prompts for compliance analysis can be updated and versioned while the underlying regulatory text in the knowledge graph remains unchanged, allowing teams to track how interpretation guidance has evolved over time.
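A minimal sketch of this separation: the regulatory text lives in the graph and is never mutated, while interpretation guidance lives in versioned prompt templates. Version IDs and texts below are invented for illustration.

```python
# Sketch of Pattern 4 (version-controlled context): immutable fact,
# versioned and auditable interpretation prompts.

REGULATION = "Article 5: personal data shall be processed lawfully."  # stable fact

prompt_versions = {
    "v1": "Summarize compliance obligations in: {text}",
    "v2": "List concrete engineering controls implied by: {text}",  # updated guidance
}

def build_prompt(version, fact_text):
    """Pair an immutable fact with a specific, auditable prompt version."""
    return prompt_versions[version].format(text=fact_text)

p1 = build_prompt("v1", REGULATION)
p2 = build_prompt("v2", REGULATION)
assert REGULATION in p1 and REGULATION in p2  # the fact survives every version
print(p2)
```

Diffing `prompt_versions` over time shows exactly how interpretation guidance evolved, while the fact layer stays byte-for-byte stable.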
Why This Mental Model Matters
Separation of concerns. Knowledge graphs handle what traditional databases do best: storing, indexing, and querying structured relationships. LLMs handle what they do best: understanding context, intent, and natural language nuance.
Governance and compliance. Facts stored in knowledge graphs can be audited, versioned, and governed using existing data management practices. LLM outputs become interpretations of those governed facts, not sources of truth themselves.
Scalability and performance. Graph queries are optimized for relationship traversal. LLM inference is optimized for contextual reasoning. Separating these concerns allows each to be scaled and optimized independently.
Maintainability. Domain experts can update knowledge graphs using structured processes. LLM behavior can be refined through prompt engineering and fine-tuning without touching the underlying facts.
Implementation Considerations
This architectural approach requires thoughtful implementation:
Design your schema carefully. Knowledge graphs, like fact tables, benefit from well-designed schemas that anticipate query patterns and maintain consistency over time.
Establish clear interfaces. The boundary between structured facts and contextual interpretation should be explicit and well-documented, much like the interface between OLTP systems and data warehouses.
Plan for evolution. Knowledge graphs should support schema evolution and versioning. LLM behavior should be parameterizable and version-controlled.
Monitor both layers. Track the accuracy of facts in your knowledge graph separately from the quality of LLM interpretations of those facts.
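Separate monitoring can be as simple as tracking two independent metrics: one for the graph layer, one for the interpretation layer. The metrics below are toy placeholders, not a production evaluation harness.

```python
# Sketch of dual-layer monitoring: fact accuracy in the graph is measured
# independently from the groundedness of LLM interpretations.

def fact_accuracy(audited_triples):
    """Share of sampled graph triples confirmed correct by human audit."""
    checked = [ok for _, ok in audited_triples]
    return sum(checked) / len(checked)

def groundedness(answers):
    """Share of LLM answers whose cited facts actually exist in the graph."""
    return sum(1 for a in answers if a["facts_found_in_graph"]) / len(answers)

graph_sample = [("t1", True), ("t2", True), ("t3", False), ("t4", True)]
llm_sample = [{"facts_found_in_graph": True}, {"facts_found_in_graph": False}]

print(fact_accuracy(graph_sample), groundedness(llm_sample))  # 0.75 0.5
```

Keeping the two numbers separate tells you whether a bad answer came from a wrong fact or from an ungrounded interpretation, which determines who fixes it.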
Looking Forward
As AI systems become more sophisticated, this dimensional thinking becomes increasingly valuable. We are moving toward architectures where multiple specialized models (retrieval, reasoning, generation) work together in orchestrated workflows.
The knowledge-graph-as-fact-table, LLM-as-dimension model provides a mental framework for organizing these complex systems. It suggests a clear separation of responsibilities, established patterns for data flow, and proven approaches to governance and scalability.
Just as dimensional modeling transformed how we think about analytics databases, this approach may transform how we architect intelligent systems that are both powerful and trustworthy.
The goal is not to choose between structured knowledge and contextual intelligence. It is to architect systems where each does what it does best.
Addressing the Limitations
Knowledge graphs require continuous evolution and face challenges with entity ambiguity and relation complexity. The analogy may also oversimplify LLM capabilities, which extend beyond contextual dimensions. Integration complexity and scalability challenges remain significant in practice.
Current State of the Field
Recent research focuses heavily on hybrid approaches that combine knowledge graphs with LLMs. The roadmap for unifying LLMs and KGs includes three frameworks: KG-enhanced LLMs, LLM-augmented KGs, and synergized approaches where both play equal roles. This suggests the field recognizes the complementary nature of these technologies, though not necessarily through the dimensional modeling lens proposed here.
Enterprise implementations often aim to use LLMs for automated report generation while using knowledge graphs to ensure accuracy and provide the right context. This practical application aligns with the fact table/dimension separation, where structured facts inform generated content.
As organizations accelerate their AI adoption, the architectural decisions we make today will determine whether these systems scale sustainably or create technical debt that compounds over time. Thinking dimensionally about AI architecture is one way to ensure we build systems that serve both immediate needs and long-term strategic objectives.