LadybugDB for Edge Agent AI Memory
Vector stores don't think — they search. They find fragments that sound like your query, then forget they ever looked. Every session starts from nothing. Every context window is a memory that dissolves at sunset.
But the deeper problem isn't amnesia. It's that when agents do remember, they remember in someone else's house — on servers you don't control, in formats you can't inspect, under terms you didn't write.
Memory Graph is a book about building something different: persistent, structured, queryable memory that lives inside your application — no external servers, no data leaving your process, no infrastructure you don't own. An embedded graph database that travels with your agent the way a nervous system travels with a body.
You'll learn how to model not just facts, but relationships between facts. Causality. Temporal ordering. The layered structure of meaning that makes memory more than a search index. You'll build ontologies that enforce what can be known and how. You'll combine graph traversal with semantic search — so your agents find not just what's similar, but what's connected.
The result is an agent that remembers the way you do: structurally, contextually, privately — with memory that belongs to you.
Minimum price: $22.97
Suggested price: $35.97
You pay: $35.97
Author earns: $28.77
Buying multiple copies for your team? See below for a discount!
About the Book
Who This Book Is For
This book is for AI engineers, developers, and researchers who are building systems that need to *remember*. If you are working on AI agents, personal assistants, knowledge management tools, or any application where an AI needs persistent, structured, queryable memory — this book gives you the architecture and the implementation.
You do not need prior experience with graph databases. The book starts from first principles and builds up. Familiarity with any programming language and basic database concepts is sufficient.
What This Book Is About
AI agents today have a memory problem. Vector stores give them semantic search — the ability to find things that *sound like* a query. But human memory is not a search engine. It is a layered, structured system that captures relationships between things, causality between events, temporal ordering of experiences, and hierarchical composition of meaning.
This book shows you how to build that kind of memory using LadybugDB — an embedded graph database that speaks Cypher, runs inside your application process, and provides the structural foundation for knowledge that goes beyond retrieval.
We start with the property graph model and the Cypher query language, establishing the tools you need to work with graph data. We then move into typed schemas and ontologies — how table declarations in LadybugDB become a formal specification of what your knowledge graph can express, enforced by the database itself.
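To give a flavour of what a schema-as-ontology looks like, here is a minimal sketch. The DDL follows the Kùzu convention (LadybugDB is built on the Kùzu engine); the table and property names are illustrative, not taken from the book:

```cypher
// Node tables declare which entity types may exist and what they carry
CREATE NODE TABLE Entity(id STRING, name STRING, PRIMARY KEY(id));
CREATE NODE TABLE Fact(id STRING, text STRING, learned_at TIMESTAMP, PRIMARY KEY(id));

// A rel table constrains which node types an edge may connect:
// the database rejects an ABOUT edge between anything else
CREATE REL TABLE ABOUT(FROM Fact TO Entity, confidence DOUBLE);
```

The point is that the schema is not documentation; it is enforced. An attempt to connect two Fact nodes with ABOUT fails at the database level, which is what makes the schema an ontology rather than a convention.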
The middle chapters tackle the hard problems of knowledge representation. How do you model relationships that involve more than two participants? How do you create relationships *about* relationships? How do you reason about causality, temporal ordering, and hierarchical meaning? We solve these problems through hypergraphs, metagraphs, and the bipartite layered graph pattern — all implemented in standard Cypher without requiring specialized databases.
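The core trick behind the bipartite pattern is reification: an n-ary relationship becomes a node of its own, so ordinary binary edges attach participants to it, and other relationships can point at it. A hypothetical sketch in plain Cypher (labels, properties, and the example scenario are illustrative):

```cypher
// "Alice sold the house to Bob in 2021" as a relationship node
CREATE (s:Sale {year: 2021}),
       (a:Person {name: 'Alice'}),
       (b:Person {name: 'Bob'}),
       (h:Asset {name: 'house'}),
       (a)-[:ROLE {as: 'seller'}]->(s),
       (b)-[:ROLE {as: 'buyer'}]->(s),
       (h)-[:ROLE {as: 'item'}]->(s);

// Because the sale is a node, later knowledge can reference it directly,
// giving you relationships about relationships
MATCH (s:Sale {year: 2021})
CREATE (d:Event {kind: 'dispute'})-[:CAUSED_BY]->(s);
```

This is why no specialized hypergraph database is needed: the expressiveness comes from the modeling pattern, not the engine.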
We then build the semantic spacetime framework, where every piece of knowledge has both a meaning and a lifespan, and where the graph captures not just what is known but *when* it was learned and *when* it expires.
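In practice this means every node carries temporal properties alongside its content, and retrieval filters on them. A minimal sketch, with illustrative property names (function names follow common Cypher/SQL conventions and may differ in LadybugDB):

```cypher
// A fact with both a meaning and a lifespan
CREATE (f:Fact {
  text: 'Server atlas-01 is the primary database host',
  learned_at: timestamp('2024-03-01 09:00:00'),
  valid_until: timestamp('2024-09-01 00:00:00')
});

// Ask only for what is still believed at query time
MATCH (f:Fact)
WHERE f.valid_until > current_timestamp()
RETURN f.text;
```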
The culminating chapter constructs a complete agentic memory ontology — a production-ready schema with entity hierarchies (Entity, Fact, Event, Memory), twelve typed edge node tables for relationships, a time tree for temporal anchoring, and vector indexes for semantic search. This is not a toy example. It is a working architecture for AI systems that need to understand context, causality, and the passage of time.
The final chapters cover practical integration: reading from relational databases, bulk data import, and graph algorithms that reveal hidden structure in the knowledge graph — importance rankings, community clusters, connectivity patterns, and causal cycles.
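Bulk import in that last step is typically a one-liner. A sketch following the Kùzu COPY convention (the file name and target table are illustrative):

```cypher
// Load a CSV straight into a typed node table;
// columns are validated against the table schema on the way in
COPY Entity FROM 'entities.csv' (HEADER=true);
```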
What You Will Learn
- How the labeled property graph model works and why it is better suited for AI memory than RDF triples
- The Cypher query language — creating, querying, and manipulating graph data in LadybugDB
- How typed schemas create enforceable ontologies, and how polymorphic relations reduce schema complexity by 5x
- Subgraphs for isolating knowledge by user, domain, or context
- The progression from triples to hypergraphs to metagraphs, and the bipartite pattern that makes metagraphs practical
- Semantic spacetime — organizing knowledge along axes of meaning and time
- Vector indexes and how to combine semantic search with graph traversal for retrieval that is both relevant and structurally grounded
- A complete, step-by-step construction of an agentic memory ontology
- Importing data from PostgreSQL, SQLite, DuckDB, CSV, and Parquet
- Graph algorithms for discovering importance, communities, and structural patterns in knowledge
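Several of these threads meet in hybrid retrieval: find what is semantically close, then expand along the graph to what is structurally connected. A hypothetical sketch, assuming a Kùzu-style vector index procedure (the exact procedure and index names in LadybugDB may differ, and the ABOUT relation is illustrative):

```cypher
// 1. Nearest neighbours of the query embedding
CALL QUERY_VECTOR_INDEX('Fact', 'fact_embedding_idx', $query_vector, 5)
WITH node AS fact, distance
// 2. One hop of graph context around each hit
MATCH (fact)-[:ABOUT]->(e:Entity)
RETURN fact.text, e.name, distance
ORDER BY distance;
```

The result set is ranked by semantic similarity but grounded in structure: each hit arrives with the entities it is actually about, not just text that sounds similar.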
How This Book Is Structured
The book follows a deliberate arc from foundations to architecture:
Foundations (Chapters 0–2) introduce the problem space, the property graph model, and the Cypher query language.
Structure (Chapters 3–4) show how typed schemas and subgraphs create the organizational framework for knowledge.
Expressiveness (Chapters 5–7) tackle increasingly powerful graph structures — hypergraphs, metagraphs, and the semantic spacetime framework — that model the complexity of human-like memory.
Application (Chapters 8–9) combine vector search with graph structure and build the complete agentic memory ontology.
Integration (Chapters 10–11) cover practical concerns: relational database interoperability and graph algorithms for knowledge analysis.
Each chapter builds on the previous ones. Code examples are executable against LadybugDB. The ontology constructed in Chapter 9 draws on every concept introduced in the preceding chapters.
Why LadybugDB
LadybugDB is an embedded property graph database built on the Kùzu engine. It runs inside your application process — no server, no network latency, no deployment infrastructure. It stores data in typed, schema-enforced tables with columnar storage and vectorized query processing. It supports ACID transactions, HNSW vector indexes, and polymorphic relationship tables.
For AI memory, these properties matter: the database travels with the agent, queries respond in microseconds, the schema enforces ontological rules, and vector search integrates with graph traversal in a single query. It is the right tool for building memory systems that need to be fast, portable, structured, and semantically searchable.
Team Discounts
Get a team discount on this book!
Up to 3 members: minimum price $57.00, suggested price $89.00
Up to 5 members: minimum price $91.00, suggested price $143.00
Up to 10 members: minimum price $160.00, suggested price $251.00
Up to 15 members: minimum price $229.00, suggested price $359.00
Up to 25 members: minimum price $344.00, suggested price $539.00
About the Author
Hi, I'm Volodymyr.
A Seasoned Developer's Journey from COBOL to Web 3.0, SSI, Privacy-First Edge AI, and Beyond
As a seasoned developer with over 20 years of experience, I have worked with various programming languages, including some that are considered "dead," such as COBOL and Smalltalk. However, my passion for innovation and embracing cutting-edge technology has led me to focus on the emerging fields of Web 5.0, Self-Sovereign Identity (SSI), AI agents, knowledge graphs, agentic memory systems, and the architecture of a decentralized world that empowers data democratization.
A firm believer in the potential of agent systems and the concept of a "soft" internet, I am dedicated to exploring and promoting these transformative ideas. In addition to writing, I also enjoy sharing my knowledge and insights through videoblogging. Most of my Medium posts serve as supplementary content to the videos on my YouTube channel, which you can explore here: https://www.youtube.com/c/VolodymyrPavlyshyn.
Join me on this exciting journey as we delve into the future of technology and the possibilities it holds.
Table of Contents
About This Book
- Who This Book Is For
- What This Book Is About
- What You Will Learn
- How This Book Is Structured
- Why LadybugDB
Why an Embedded Graph Database Matters for AI Memory
- The Memory Problem in AI
- Why Graphs?
- Why Embedded?
- LadybugDB: An Embedded Graph Database for AI Workloads
- What This Book Covers
Edge AI - Intelligence at the Source
- Hardware revolution enabling local intelligence
- Model optimization making AI practical on constrained devices
- Privacy advantages of local processing
- Business transformation through real-time intelligence
Personal and Private AI - Intelligence Under User Control
- Technical architectures for privacy preservation
- Privacy-first business models creating value
- User control and data sovereignty
- Real-world implementations proving viability
Agent Autonomy and Knowledge Graphs: Tools, Reasoning, and Memory with Graph Empowerment
- Beyond the Magic of LLMs
- The Three Pillars of Agent Autonomy
- Pillar 1: Actions and Tools — Giving Agents Hands
- Pillar 2: Memory — The Foundation of Context
- Pillar 3: Reasoning and Decision Making
- The Synergy of Graph-Empowered Autonomy
- Integrated Decision Loop
- Practical Implementation Strategies
- Advanced Graph Empowerment Techniques
- Hybrid Reasoning Architectures
- Distributed Graph Processing
- Evolutionary and Adaptive Mechanisms
- Future Directions and Challenges
- Scalability Concerns
- Interpretability Benefits and Challenges
- Standardization Needs
- Ethical and Safety Considerations
- Conclusion: The Path to True Autonomy
Why AI Desperately Needs Knowledge Graphs
- The Foundation of Intelligent Behavior
- Knowledge Graphs as AI Memory Architecture
- Intelligent Tool Selection and Orchestration
- Enhanced Reasoning and Decision Making
- The Integration Challenge: Making It All Work Together
- The Path Forward: Toward More Intelligent AI
An Introduction to Knowledge Graphs: Making Sense of Connected Information
- What Are Knowledge Graphs?
- The Building Blocks: Entities, Relationships, and Attributes
- Why Knowledge Graphs Matter: Beyond Simple Data Storage
- Real-World Applications: Where Knowledge Graphs Shine
- The Journey from Data to Insight: How Knowledge Graphs Work
- Challenges and Considerations: The Complexities of Connected Data
- The Future of Knowledge Graphs: Emerging Possibilities
Beyond Directed Graphs: Why AI Needs More and Why We Are Not There Yet
Metagraphs and Hypergraphs for Complex AI Agent Memory and RAG
- Knowledge graphs, not just graphs
- Strings, not things
- Knowledge graphs are not enough, and triples are not enough
- Temporally aware semantic and episodic memory for AI agents
- Hypergraphs to the rescue
- Named graphs and graphs of graphs for multi-model and multilingual data
- Human-like memories for AI agents with metagraphs
Getting Started with LadybugDB
- What Is LadybugDB?
- Installation
- Database Files
- Python Bindings
- The CLI Shell
- Ladybug Explorer
- Bugscope
- Project Structure
- What Comes Next
The Property Graph Model
- What Is a Graph Database?
- The Labeled Property Graph (LPG) Model
- A Simple Example
- How LPG Differs from RDF
- The Structured Property Graph in LadybugDB
Cypher: The Graph Query Language
- What Is Cypher?
- Creating the Schema
- CREATE: Inserting Data
- MATCH: Finding Patterns
- WHERE: Filtering Results
- RETURN: Projecting Results
- WITH: Chaining Query Stages
- SET: Updating Properties
- DELETE: Removing Data
- MERGE: Idempotent Upserts
- Variable-Length Paths
- Shortest Path Queries
- Named Paths
- UNWIND: Expanding Lists
- UNION: Combining Results
- Putting It Together
Typed Graphs and Ontologies
- Schema as Ontology
- CREATE NODE TABLE
- CREATE REL TABLE
- The Power of Polymorphic Relations
- Building an Ontology Step by Step
Subgraphs: Partitioning Knowledge
- What Are Subgraphs?
- Strictly Typed Subgraphs
- Open Type Subgraphs (Untyped)
- Managing Subgraphs
- Typed vs. Untyped: When to Use Which
- Practical Patterns for AI Memory
From Triples to Metagraphs
- The Problem Starts with Triples
- How Human Memory Actually Works
- Hypergraphs: When One Edge Is Not Enough
- The Wall: You Cannot Reference a Hyperedge
- Metagraphs: Graphs All the Way Down
- The Metagraph Problem: No Native Tooling
- Bipartite Graphs: The Elegant Compromise
- Hyperedges in Practice
- Multipartite Graphs: Extending the Model
- Layered Graphs: The Practical Implementation
- What We Gain
Metagraph Modeling and the Contains Pattern
- From Theory to Practice
- The Edge-Node Schema
- The Contains Pattern
- Meta-Reasoning: Relationships About Relationships
- The Four Kinds of Containment
- Composing Complex Scenes
Semantic Spacetime
- What Is Semantic Spacetime?
- The Four Fundamental Relations
- The Full Semantic Spacetime Schema
- Graph Clustering with Layer and Kind
- Temporality and Dynamic Memory
- Extending Semantic Spacetime with Temporal Relations
- Extending with Extra Causality
- What This Gives an AI Agent
Vector Indexes
- Why Vectors Matter for AI Agents
- Vector Indexes in LadybugDB
- Querying with Vector Search
- The Retrieval Pipeline for AI Agents
- Which Nodes Get Embeddings?
- Embedding Strategy
- Vector Search vs. Full-Text Search
Memory Design
- The Time Tree - Modeling Temporal Ambiguity
- The Problem with Precise Timestamps
- The Ontology of Memory — Building a Semantic Foundation
- Layered Visibility: Controlling Complexity
- Relations
Designing Agentic Memory
- The Complete Memory Ontology
- Design Principles
- The Entity Hierarchy
- Universal Node Columns
- Part 1: Entity Node Tables
- Part 2: Edge Node Tables
- Part 3: Polymorphic REL Tables
- Part 4: The Time Tree
- Part 5: Wiring a Complete Example
- Part 6: Query Patterns
- Summary
Integrating with Relational Databases
- Why Relational Integration Matters
- The Extension System
- Reading from External Databases
- Bulk Import with COPY FROM
- Schema Inference from External Data
- Practical Patterns
- Choosing the Right Import Method
- Running Cypher Queries Against Relational Databases
Graph Algorithms and Extensions
- The Extension Ecosystem
- Graph Algorithms
- Full-Text Search (BM25)
- LLM Text Embeddings
- Practical Algorithm Workflows
- Configuration
Ladybug Memory: A Working AI Memory System
- From Theory to Practice
- The Simplest Possible Example
- Architecture Overview
- The Graph Schema
- The Memory Interface
- Entity Extraction and Knowledge Graphs
- H-GLUE: Intelligent Document Chunking
- Dynamic Schema Discovery
- A Complete Example: Healthcare Article
- How It All Connects
- Practical Integration Patterns
- What This Demonstrates