
Integrating Data Fusion and Cognitive Architectures

Volume I – Foundations and Mechanisms

If machines are ever going to think, they have to do more than process data — they have to understand it.

Integrating Data Fusion and Cognitive Architectures: Volume I – Foundations and Mechanisms is the blueprint for that transformation. It bridges two worlds that have lived apart for decades: the hard mathematics of sensor fusion and the structured reasoning of cognitive science. The result? A unified system that can perceive, reason, and adapt — not someday, but now.

About the Book

This is where raw data becomes intelligence. You’ll learn how to connect Bayesian inference, control theory, and symbolic cognition into a single continuous process. How a Kalman filter and a production rule can live in the same architecture. How a machine can explain not just what it did, but why.
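To make that pairing concrete, here is a deliberately tiny sketch of the idea: a scalar Kalman update produces a numeric estimate with quantified uncertainty, and a production rule reasons over that estimate and reports why it fired. All names, thresholds, and numbers are illustrative assumptions, not the book's implementation.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman update: blend prediction x (variance P)
    with measurement z (measurement variance R)."""
    K = P / (P + R)            # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)    # corrected estimate
    P_new = (1 - K) * P        # reduced uncertainty after fusing
    return x_new, P_new

def rule_close_approach(state):
    """A toy production rule over the filter's output.
    Condition, threshold, and rationale text are hypothetical."""
    if state["range_m"] < 110.0 and state["var"] < 25.0:
        return {"action": "alert",
                "why": "range < 110 m, estimated with high confidence"}
    return None

# Fuse a noisy range measurement, then let the symbolic layer decide.
x, P = 130.0, 50.0                          # predicted range (m) and variance
x, P = kalman_update(x, P, z=95.0, R=20.0)  # -> x = 105.0, P ~= 14.3
decision = rule_close_approach({"range_m": x, "var": P})
print(decision)
```

The point is not the arithmetic: it is that the numeric estimator and the symbolic rule share one state, and the rule's output carries a human-readable rationale.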

Inside you’ll discover:

  • The full integration of the JDL Data Fusion Model with modern cognitive architectures like SOAR, ACT-R, and LIDA.
  • The mathematical backbone of cognition — state estimation, uncertainty propagation, and feedback control.
  • How to transform perception pipelines into reasoning systems that learn, plan, and self-correct.
  • Design patterns for real-world, explainable, and auditable intelligent systems.
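On the last point, one common pattern for auditable systems is to log every decision together with the fused evidence and the rule that produced it, so any action can be replayed and explained later. The sketch below is a hypothetical illustration of that pattern, not an API from the book.

```python
import json
import time

class AuditLog:
    """Minimal decision audit trail: each entry records what was
    decided, on what evidence, and which rule fired (illustrative)."""

    def __init__(self):
        self.records = []

    def record(self, decision, evidence, rule):
        entry = {
            "ts": time.time(),     # when the decision was made
            "decision": decision,  # the action taken
            "evidence": evidence,  # the fused state that triggered it
            "rule": rule,          # which production fired, and why
        }
        self.records.append(entry)
        return entry

    def explain(self, index=-1):
        # Render a stored rationale as human-readable JSON.
        return json.dumps(self.records[index], indent=2, default=str)

log = AuditLog()
log.record("alert", {"range_m": 105.0, "var": 14.3},
           "close_approach: range < 110 m")
print(log.explain())
```

A trail like this is what turns "the machine did X" into "the machine did X because of Y", which is the auditability property the list above is gesturing at.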

This book isn’t for hobbyists. It’s for engineers, researchers, and systems thinkers who want to build machines that go beyond computation — machines that understand context, anticipate change, and make decisions you can trust.

You won’t find fluff or hype here. You’ll find rigor, architecture, and the missing link between sensing and thought.

Because intelligence isn’t an algorithm. It’s an integration.


About the Author

Gareth Morgan Thomas

Gareth Morgan Thomas is a technical specialist with experience across multiple STEM fields. He holds six university diplomas in electronics, software development, web development, and project management, along with qualifications in computer networking, CAD, diesel engineering, well drilling, and welding, giving him a robust foundation of technical knowledge.

Educated in Auckland, New Zealand, he also spent three years serving in the New Zealand Army, where he honed his discipline and problem-solving skills. He is now dedicated to sharing his understanding of science, technology, engineering, and mathematics through a series of specialized books aimed at both beginners and advanced learners.

Table of Contents

Chapter 1. The Fusion–Cognition Convergence

Section 1. Why Now? Historical Context and Drivers

  • Sensor proliferation, cheap compute, and ubiquitous connectivity since 2000
  • Maturation of the JDL model alongside cognitive architectures (SOAR, ACT-R, etc.)
  • Mission, safety, and compliance pressures demanding traceable decisions
  • Data volume/velocity/variety outpacing classical pipelines

Section 2. The Separate Worlds: Fusion vs. Cognition

  • Fusion focus: estimation, association, and uncertainty management (L0–L3)
  • Cognition focus: goals, memory, reasoning, and learning loops
  • Tooling and culture split: signal processing vs. cognitive modeling
  • Pain points at the boundary: semantics, temporal abstraction, intent

Section 3. Convergence Forces: Technology, Markets, and Theory

  • Deep nets + probabilistic inference + symbolic reasoning (neuro-symbolic)
  • Edge/cloud orchestration enabling L0–L4 + user refinement at scale
  • Market demand for explainability, assurance, and adaptive behavior
  • Theoretical links: POMDPs, control, and cognitive control/attention

Section 4. The Fusion–Cognition Continuum Framework

  • Levels mapping: JDL (L0–L4) ↔ perception, working memory, control
  • Data structures: tracks/graphs/ontologies ↔ episodes/chunks/embeddings
  • Control surfaces: sensor management, policy switching, attention
  • Hand-off patterns across L0–L4: when cognition should override fusion

Chapter 2. The JDL Data Fusion Model

Section 1. Origins and Evolution

  • Early DARPA lineage and the Level 0–4 schema with later Level 5 addition
  • Design goals, scope limits, and assumptions about uncertainty and structure
  • Relationship to OODA, control theory, and enterprise decision loops

Section 2. Level 0: Sub-Object Assessment

  • Detection, denoising, calibration, and registration at the signal/feature layer
  • SNR, clutter, and false-alarm control; windowing and CFAR patterns
  • Examples: beamforming, STAP, voxel filtering, spectral unmixing

Section 3. Level 1: Object Assessment

  • Tracking, classification, and identification of entities/objects
  • Filters and smoothers: KF/EKF/UKF/PF; data models and observability
  • Track management: initiation, maintenance, termination, and identity fusion

Section 4. Level 2: Situation Assessment

  • Relational inference: spatial, temporal, and semantic context
  • Graphs, ontologies, and event calculus for situation hypotheses
  • Multi-entity interactions, formations, and activity patterns

Section 5. Level 3: Impact Assessment

  • Threat, risk, and mission impact scoring under uncertainty
  • Utility, costs, and effect prediction; counterfactual reasoning
  • Escalation logic, alerts, and decision support outputs

Section 6. Level 4: Process Refinement

  • Sensor/asset management and algorithm-policy selection
  • Closed-loop adaptation using performance feedback and priors
  • Scheduling under compute, bandwidth, and energy constraints

Section 7. Level 5: User Refinement

  • Human-in-the-loop preferences, intent, and interactive query
  • Active learning and relevance feedback for model updating
  • Explainers, summaries, and trust calibration

Section 8. Critiques and Extensions

  • Ambiguities between levels and temporal treatment gaps
  • From pipeline to loop: control-theoretic and cybernetic views
  • Data-centric AI, neuro-symbolic fusion, and ontology-grounded updates

Chapter 3. Cognitive Architectures: The Reasoning Layer

Section 1. What Is a Cognitive Architecture?

  • Core subsystems: perception, working memory, long-term memory, learning, control
  • Representations: symbolic, subsymbolic, and hybrid encodings
  • Task models, bounded rationality, and performance metrics

Section 2. SOAR: State, Operator, And Result

  • Production rules and working memory organization
  • Goal hierarchies, impasses, and subgoaling for problem solving
  • Chunking as learning; implications for latency and scale

Section 3. ACT-R: Adaptive Control of Thought—Rational

  • Declarative vs procedural memory and activation dynamics
  • Retrieval latency, noise, and utility-based choice
  • Learning mechanisms and parameterization for real-time behavior

Section 4. LIDA, Global Workspace, and Related Models

  • Broadcast/competition dynamics and attentional mechanisms
  • Episodic memory, affect, and salience in control loops
  • Relevance to situation understanding and intent recognition

Section 5. Comparative Criteria for Fusion Integration

  • Real-time suitability, explainability, and verification needs
  • Mapping to data structures (tracks, graphs, chunks, embeddings)
  • Middleware fit (ROS 2/DDS), failure modes, and mitigation patterns

Chapter 4. Mapping Fusion to Cognition: A Unified Blueprint

Section 1. Level Mappings (JDL ↔ Cognitive Primitives)

  • L0 ↔ perception; L1 ↔ object memory; L2 ↔ situation model
  • L3 ↔ evaluation/utility; L4 ↔ meta-control; L5 ↔ user model
  • Consistent interfaces and timing assumptions across levels

Section 2. Data Structures and Messages

  • Tracks, graphs, ontologies ↔ chunks, schemas, and embeddings
  • Time, uncertainty, and provenance annotations as first-class fields
  • ROS 2/DDS message patterns and QoS profiles for fusion-cognition IO

Section 3. Control Surfaces and Policies

  • Sensor management, scheduler hooks, and attentional gating
  • Algorithm switching, resource budgets, and safety envelopes
  • Conflict resolution and arbitration policies

Section 4. Hand-off Patterns Across L0–L4

  • When cognition should override or veto fusion outputs
  • Hypothesis promotion/demotion and cross-level feedback
  • Audit trails, logging, and traceability for assurance

Section 5. Minimal Viable Cognitive Fusion Stack

  • Core nodes/services, buffers, and shared memory abstractions
  • Training/validation loop and golden-trace reuse
  • Metrics to prove value: accuracy, latency, robustness, and trust

Chapter 5. Level 0–1 Sensor & Signal Fusion

Section 1. Modalities, Calibration, and Synchronization

  • Camera/LiDAR/Radar/IMU/GNSS characteristics and complementary error modes
  • Intrinsic/extrinsic calibration and temporal alignment (PTP/GPSDO/IMU sync)
  • Registration spaces: pixel, range–bearing, ego frame, and world frame

Section 2. Preprocessing and Featurization

  • Denoising, deblurring, deskewing, and motion compensation
  • Spectral/temporal filtering, CFAR, and background subtraction
  • Feature stacks: keypoints, descriptors, learned embeddings, and uncertainty

Section 3. Bayesian State-Space Foundations

  • Process/measurement models; observability and identifiability
  • Noise models (Gaussian, heavy-tailed, mixture) and robustness
  • Latency, jitter, and out-of-sequence measurements (OOSM) handling

Section 4. Filters and Smoothers (L0→L1)

  • KF/EKF/UKF/PF selection criteria and stability considerations
  • Smoothing (RTS/fixed-lag) and multi-rate fusion pipelines
  • Practical tuning: covariances, gating radii, and innovation tests

Section 5. Multi-Sensor Registration and Initialization

  • Frame-to-frame, map-to-frame, and sensor-to-sensor alignment
  • Bootstrapping tracks: detection logic, N-scan confirmation, and seeds
  • Failure modes: miscalibration drift, time skew, and false-lock traps

Chapter 6. Data Association

Section 1. Gating and Scoring

  • Elliptical/Mahalanobis gating and missed-detection control
  • Likelihood, NLL, and learned similarity scores
  • Clutter modeling and density estimation for realistic scenes

Section 2. Assignment Algorithms

  • Hungarian, auction, and min-cost flow formulations
  • Multi-frame association (N-scan, tracklet stitching) and complexity trade-offs
  • Deferred decisions vs immediate commits under latency budgets

Section 3. Probabilistic Approaches

  • JPDA/JIPDA for dense scenes; track coalescence mitigation
  • MHT/LMHT and hypothesis tree pruning strategies
  • RFS/GLMB/δ-GLMB and labeled multi-Bernoulli tracking

Section 4. Identity and Re-Identification

  • Appearance models, re-id embeddings, and cross-modal cues
  • Identity persistence across occlusions and handoffs
  • Open-set, long-tail, and look-alike management

Section 5. Diagnostics and Metrics

  • CLEAR MOT, HOTA, IDF1, and OSPA for association quality
  • Ablations: gating radius, clutter rate, frame rate, and sensor mix
  • Common pitfalls: overfitting to benchmarks and label leakage

Chapter 7. Level 2 Situation Assessment

Section 1. Relational Models and Graphs

  • Entities, relations, and interaction patterns over space–time
  • Dynamic graphs, message passing, and attention over neighborhoods
  • Hypothesis generation vs confirmation and resource-aware search

Section 2. Ontologies and Semantic Lifting

  • Domain vocabularies, OWL/RDF schemas, and provenance tags
  • Reasoners (DL-Lite) and rule engines for constraint checks
  • Bridging numeric tracks to symbolic events and roles

Section 3. Activities, Events, and Patterns

  • Compositional event models and temporal templates
  • Multi-agent activities (formations, pursuit, rendezvous)
  • Anomaly categories: contextual, collective, and behavioral

Section 4. Evaluation and Assurance at L2

  • Situation accuracy, latency-to-detect, and false-context rates
  • Calibration, counterfactual probes, and stress scenarios
  • Hand-off artifacts for L3 impact assessment

Chapter 8. Temporal Reasoning and Dynamics

Section 1. Temporal Logics and Constraints

  • Interval/point algebra, STL/LTL for safety and mission rules
  • Temporal joins, windowing, and event alignment operators
  • Consistency checks under jitter, dropouts, and resampling

Section 2. Sequence Models

  • HMM/HSMM and CRF for structured temporal labeling
  • RNN/LSTM/GRU vs TCN for long-range dependencies
  • Transformers with causal masks and streaming attention

Section 3. Time Alignment and Uncertainty Propagation

  • Asynchronous sensor fusion and skew-tolerant buffers
  • Fixed-lag smoothing vs online filters under compute caps
  • Propagating covariance and epistemic/aleatoric terms over time

Section 4. Temporal Evaluation

  • Detection delay, time-to-stability, and robustness-to-gaps
  • Segment- and event-level metrics; calibration over horizons
  • Failure analysis: drift, covariate shift, and non-stationarity

Chapter 9. Level 3 Cognitive Scenario Recognition

Section 1. From Situations to Scenarios

  • Elevating L2 contexts into hypotheses about plans, goals, and outcomes
  • Evidence graphs: chaining events, preconditions, and causal links
  • Hypothesis lifecycle: spawn, compete, merge, retire

Section 2. Intent and Goal Inference

  • Inverse planning/POMDP formulations under partial observability
  • Utility models, constraints, and tactic libraries per domain
  • Ambiguity management: multi-intent posteriors and tie-breaking

Section 3. Neuro-Symbolic Composition

  • Learned perception + symbolic rules/templates over event graphs
  • Program induction and differentiable logic for scenario scoring
  • Robustness via invariants, unit checks, and counterexamples

Section 4. LLM-Assisted Reasoning (Guardrailed)

  • Text grounding: schema-constrained prompting over structured context
  • Retrieval windows, tool use, and refusal policies for safety
  • Latency/SLO budgeting and deterministic fallbacks

Section 5. Scenario Evaluation

  • Precision/recall over scenarios, timeliness, and false-escalation rates
  • Stress suites: rare events, concept drift, and adversarial behavior
  • Auditables: rationale traces, rule hits, and sensitivity slices

Chapter 10. Level 4 Adaptive Fusion Control

Section 1. Control Surfaces and Levers

  • Sensor tasking, frame rates, beam steering, and region-of-interest selection
  • Algorithm selection, parameter tuning, and model hot-swap
  • Budget partitions: compute, bandwidth, energy, and operator attention

Section 2. Policy Learning and Selection

  • Contextual bandits vs RL; off-policy evaluation under safety bounds
  • Meta-learning across environments; cold-start strategies
  • Confidence-aware switching with hysteresis and dwell times

Section 3. Schedulers and Resource Arbitration

  • Multi-queue priority schemes and deadline monotonic patterns
  • Graceful degradation modes and bounded staleness
  • Admission control and overload protection

Section 4. Safety, Assurance, and Governance

  • Guarded actions behind formal checks and simulators-in-the-loop
  • Canarying, rollback, and blast-radius containment
  • Policy provenance, approvals, and SOX/ISO audit hooks

Section 5. Telemetry and Feedback

  • KPIs for control: reward, regret, duty cycle, and SLA adherence
  • Telemetry design: counters, histograms, traces, and exemplars
  • Online A/B of control policies with interleaving

Chapter 11. Memory Systems and Fusion Buffers

Section 1. Working Memory and Temporal Buffers

  • Sliding windows, fixed-lag stores, and time-indexed access
  • Ordering, watermarking, and out-of-order reconciliation
  • Eviction policies tuned to scenario horizons

Section 2. Long-Term Memory and Knowledge Stores

  • Tracks→episodes→schemas; retention and compaction strategies
  • Graph/columnar hybrids for queries at scale
  • Provenance, lineage, and signed attestations

Section 3. Caching and Materialization

  • View caches for common joins and cross-level lookups
  • Hot-path materialization to cut p99 latency
  • Consistency models: eventual, bounded-stale, and read-your-writes

Section 4. Persistence, Checkpointing, and Recovery

  • Snapshotting stateful operators and log-structured storage
  • Crash-only design with idempotent replays
  • Disaster recovery RPO/RTO targets and drills

Section 5. Failure Modes and Hardening

  • Memory leaks, unbounded growth, and thundering replays
  • Skewed keys, hotspot entities, and shard imbalance
  • Data corruption detection and auto-quarantine

Chapter 12. Software Architecture Foundations

Section 1. Process Topology and Decomposition

  • Node composition vs nodelets; pipes-and-filters vs actors
  • Bounded contexts and anti-corruption layers around legacy
  • State isolation to contain faults and ease upgrades

Section 2. Messaging and QoS (ROS 2/DDS)

  • QoS profiles: reliability, durability, history, and deadline
  • Back-pressure, flow control, and zero-copy pathways
  • IDL/schema discipline, versioning, and compatibility

Section 3. Real-Time and Determinism

  • Priority inheritance, CPU pinning, and NUMA awareness
  • Executor models, timer jitter control, and schedulability
  • Clock sources, time synchronization, and monotonicity

Section 4. Dataflow Patterns and Interfaces

  • Pub/sub, request/response, and async task orchestration
  • Sidecar services for validation, rate-limit, and explainers
  • Interface contracts: schemas, SLAs, and health endpoints

Section 5. Packaging, Testing, and Release

  • Build graph hygiene, reproducible containers, and SBOMs
  • Unit→prop→HIL→field gates; golden-trace regression
  • Rollout plans: blue/green, canary, and staged geos

Chapter 13. ROS 2 Cognitive Fusion Framework

Section 1. Node Graph and Composition

  • Core fusion nodes (L0–L4), cognition services, and shared buffers
  • Composition vs separate processes for isolation and latency control
  • Lifecycle nodes, startup order, and health-check dependencies

Section 2. Interfaces, Schemas, and Namespacing

  • Message definitions for tracks, situations, and rationales
  • Namespaces, remapping, and tf trees across multi-robot fleets
  • Versioning, deprecation windows, and schema compatibility tests

Section 3. Launch, Parameters, and Configuration

  • Launch files for topology, QoS, and per-environment overrides
  • Parameter servers, dynamic reconfigure, and safety locks
  • Secrets handling and environment-variable discipline

Section 4. QoS Discipline and Reliability

  • Reliability, history depth, and deadline settings by stream criticality
  • Back-pressure, loss recovery, and bounded-staleness handshakes
  • Recording/replay hooks for golden-trace debugging

Section 5. Observability and Debugging

  • Structured logs, traces, and metrics for fusion–cognition paths
  • Introspection tools, message sniffers, and event timelines
  • Offline notebooks and dashboards for issue triage

Chapter 14. Simulation and Testing

Section 1. Simulation Stack and Fidelity

  • Sensor, dynamics, and environment models aligned to target domains
  • Abstraction layers to swap simulators without code churn
  • Calibration of sim-to-real gaps with measured artifacts

Section 2. Scenario Generation and Coverage

  • Handcrafted edge cases, fuzzed scenes, and counterfactuals
  • Distributional coverage: weather, density, behaviors, and faults
  • Seeds, determinism, and reproducible randomization

Section 3. Hardware-in-the-Loop and SIL/MIL

  • Signal injection, timing closure, and latency budgets
  • Golden-trace loops for regression and drift detection
  • Safe fault-insertion and rollback procedures

Section 4. CI Pipelines and Gates

  • Unit → property → integration → system → HIL stages
  • Flake control, quarantine lanes, and rerun economics
  • Performance baselines and release-blocking thresholds

Section 5. Test Oracles, Labels, and Metrics

  • Auto-oracles from constraints and invariants
  • Label strategies: weak, synthetic, active, and human-in-the-loop
  • Quality dashboards for accuracy, latency, robustness, and safety

Chapter 15. Edge Deployment and Optimization

Section 1. Targets, Constraints, and Budgets

  • Embedded SOCs, GPUs/NPUs, and thermal/energy ceilings
  • Latency SLOs, frame budgets, and memory footprints
  • Degradation modes and minimum viable perception

Section 2. Model and Graph Optimization

  • Pruning, distillation, quantization, and operator fusion
  • ONNX/TensorRT conversion and calibration workflows
  • Mixed precision, sparsity, and kernel autotuning

Section 3. Runtime Scheduling and Priorities

  • Real-time executors, stream priorities, and pinned cores
  • Co-scheduling perception, association, and cognition loops
  • Deadline monitors and watchdog resets with safe fallbacks

Section 4. I/O, Storage, and Networking

  • DMA/zero-copy paths, ring buffers, and lock-free queues
  • Bounded logging, local caching, and writeback policies
  • Link adaptation, retries, and congestion control

Section 5. Profiling, Telemetry, and Field Debug

  • Hot-path tracing, flame graphs, and percentile latency
  • On-device logging/telemetry budgets and sampling
  • Remote diagnostics, snapshots, and redaction discipline

Chapter 16. Cloud-Native Fusion Systems

Section 1. Streaming, Storage, and Serving

  • Ingest (pub/sub), feature stores, and long-horizon archives
  • Batch vs streaming analytics for model and policy updates
  • Retrieval APIs for audit, replay, and counterfactuals

Section 2. Microservices and Dataflow

  • Stateless vs stateful operators and scaling patterns
  • Exactly-once/at-least-once semantics and idempotent design
  • Schema registry, contracts, and backward compatibility

Section 3. Orchestration and Reliability

  • Workload placement, autoscaling, and bin-packing
  • Circuit breakers, bulkheads, and graceful degradation
  • Chaos testing for network, node, and dependency failures

Section 4. Security, Privacy, and Compliance

  • AuthN/Z, key management, and encrypted channels at rest/in transit
  • PII minimization, provenance, and tamper-evident logs
  • Policy enforcement and region/tenant isolation

Section 5. Cost, SLOs, and Governance

  • Cost per scenario/decision and efficiency scorecards
  • SLO drafting: latency, availability, freshness, and explainability
  • Change management, approvals, and audit-ready releases
