Most agentic AI stacks live in Python. That is fine for a notebook and
painful for anything you want to ship next to a user. Local agents need
three things Python makes hard and Rust makes easy: a single static
binary, predictable memory, and an embedded database that rides inside
the process — no JVM, no Docker sidecar, no server to babysit.
Graphs are the other half of the story. A language model hallucinates
plausibly; a knowledge graph tells you what is *true* about your world —
the tickets, the documents, the promises between agents, the provenance
of every belief. Vector search tells you what is similar. A graph tells
you what is real.
This book teaches you to build that graph in Rust, end to end. You will
work through five pillars of the graph toolbox in the order a
working agent actually needs them:
- RDF — the data model: a graph of facts made of triples.
- RDFS and OWL — the schema and the reasoner.
- SHACL — the constraints that guard the boundary of your system.
- SPARQL — the query language that ties it all together.
- LPG — labeled property graphs, covered as a contrast to the RDF model.
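Before any library appears, the triple model is worth seeing concretely. Here is a toy, std-only sketch — not the `oxigraph` API, and `ex:paper42`-style names are hypothetical — of a graph of facts as a set of subject–predicate–object triples:

```rust
use std::collections::HashSet;

/// A triple: subject, predicate, object — IRIs and literals, kept as plain strings here.
type Triple = (String, String, String);

/// Everything the graph asserts about one subject — the pattern-matching
/// core that SPARQL generalizes into a full query language.
fn about<'a>(graph: &'a HashSet<Triple>, subject: &str) -> Vec<&'a Triple> {
    graph.iter().filter(|(s, _, _)| s == subject).collect()
}

fn main() {
    let mut graph: HashSet<Triple> = HashSet::new();
    // Two facts about a paper Ares has observed (hypothetical IRIs).
    graph.insert(("ex:paper42".into(), "rdf:type".into(), "ex:Paper".into()));
    graph.insert(("ex:paper42".into(), "ex:citedBy".into(), "ex:paper99".into()));

    for (s, p, o) in about(&graph, "ex:paper42") {
        println!("{s} {p} {o}");
    }
}
```

Everything else in the book — schemas, reasoning, validation — is machinery layered on exactly this structure: an unordered set of atomic facts.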
Everything happens inside a single Cargo workspace, driven by one
running example: **Ares**, a research-assistant agent whose memory
graph observes papers, forms beliefs, makes promises to other agents,
and tracks the provenance of every claim. By the final chapter you
will have a working binary that loads its own schema, runs an OWL 2 RL
reasoner over its own data, validates the result against SHACL shapes,
and emits a JSON report — all in under a thousand lines of Rust.
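The shape of that pipeline can be previewed in a std-only sketch. Every name here (`load`, `reason`, `validate`, `Graph`, `Report`) is a placeholder for what later chapters build on `oxigraph`, `reasonable`, and `rudof_lib` — the stage bodies are stand-ins, but the control flow is the one the book converges on:

```rust
// Hypothetical stage outputs; later chapters replace these with real graph types.
struct Graph { triples: usize }
struct Report { violations: Vec<String> }

fn load(turtle: &str) -> Graph {
    // Stand-in: count non-empty lines as "triples" so the sketch runs end to end.
    Graph { triples: turtle.lines().filter(|l| !l.trim().is_empty()).count() }
}

fn reason(g: Graph) -> Graph {
    // Stand-in for OWL 2 RL materialization: a real reasoner adds inferred triples.
    g
}

fn validate(g: &Graph) -> Report {
    // Stand-in for SHACL validation.
    if g.triples == 0 {
        Report { violations: vec!["empty graph".into()] }
    } else {
        Report { violations: vec![] }
    }
}

fn run(turtle: &str, strict: bool) -> Result<Report, String> {
    let report = validate(&reason(load(turtle)));
    if strict && !report.violations.is_empty() {
        return Err(format!("{} violation(s)", report.violations.len()));
    }
    Ok(report)
}

fn main() {
    // `--strict` turns violations into a non-zero exit — what CI wants.
    let strict = std::env::args().any(|a| a == "--strict");
    match run("ex:a ex:p ex:b .", strict) {
        Ok(r) => println!("ok: {} violation(s)", r.violations.len()),
        Err(e) => {
            eprintln!("{e}");
            std::process::exit(1);
        }
    }
}
```

Load, reason, validate, report: four functions composed in one direction, which is why the whole thing fits in a single small binary.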
## What makes this book different
- **Rust-native, not a Java port.** Every example uses `oxigraph`,
`reasonable`, and `rudof_lib` — three maintained crates that cover
enough of the W3C stack to build a real agent memory. Where the
Rust ecosystem is thin, the book says so plainly and shows the
workaround. No hand-waving.
- **One example, cumulative.** Ares grows chapter by chapter. You
learn each new concept by adding one axiom to a schema you already
understand, not by re-reading a fresh `foo`/`bar` example every time.
- **Agent-first framing.** Every concept is introduced through the
problem it solves for a local agent: identity, trust, provenance,
validation, hybrid retrieval.
- **Hybrid RAG, done right.** The final chapter wires the graph into
a hybrid retrieval pipeline that combines vector search with
SPARQL-grounded answers — the architecture you actually want in
production.
## Who this book is for
You write Rust. You have shipped at least one non-trivial Cargo
project. You have heard of RDF and SPARQL but never used them in
anger — or you have used them in Java or Python and want the same
power without the JVM. You are building something with the word
"agent" in its design document.
You do **not** need a background in description logic, linked data,
or the semantic web. Every term is defined in plain English the first
time it appears, and every concept is motivated by Ares before any
formal definition shows up.
## What you will build
- A Cargo workspace with three library crates (`graph`, `reasoner`,
`validator`) and a binary (`ares`) that drives them.
- A Turtle schema for agents, observations, beliefs, promises, and
provenance.
- An OWL 2 RL ontology that infers trust transitivity, inverse
observations, and identity across DOI and arXiv URLs.
- A SHACL shape file that rejects malformed observations before they
reach the store.
- A full pipeline — load, reason, validate, report — with a `--strict`
flag for CI.
- A hybrid RAG layer that grounds LLM output against the graph.
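A workspace matching that list could start like this — the crate names come from the list above, but the `crates/` layout is a suggested convention, not something later chapters prescribe:

```toml
# Cargo.toml at the workspace root — three libraries, one binary.
[workspace]
members = ["crates/graph", "crates/reasoner", "crates/validator", "ares"]
resolver = "2"
```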
## About the author
Volodymyr Pavlyshyn works at the intersection of knowledge graphs,
identity, and AI agents. He has spent years building systems where
trust, provenance, and structured memory matter more than raw model
size — and this book distills that experience into a Rust stack you
can ship.