
Experimental Robotics Projects Volume One

Simulation-Based Design, Analysis, and Evaluation of Robotic Systems


About the Book

“you just can't differentiate between a robot and the very best of humans.”
― Isaac Asimov, I, Robot

“One man’s ‘magic’ is another man’s engineering.”
― Robert Heinlein

This is not a beginner’s robotics book.
And it is absolutely not for hobbyists.

This book was written for engineers, researchers, and advanced practitioners who already understand one hard truth:

Robotic systems don’t fail because of missing algorithms — they fail because assumptions go untested.

Most robotics books show you what to build.
Very few show you how systems break, why results don’t reproduce, or how to prove performance before hardware ever moves.

That is exactly what this book does.

Why simulation matters (and why most people misuse it)

Real robots are expensive, noisy, and misleading.
Simulation, when used correctly, is the opposite — it exposes weaknesses clearly and repeatably.

Volume 1 treats simulation as a scientific instrument, not a demo environment.

You’ll learn how to:

  • Design perception, planning, control, and learning systems that survive stress
  • Quantify uncertainty instead of ignoring it
  • Detect drift, instability, and divergence early
  • Run controlled experiments where every assumption is visible

This is robotics as systems engineering, not optimism.

What this book actually covers

Every topic is treated as an experiment, not a tutorial:

  • Visual–inertial SLAM with noise, degeneracy, and drift exposed
  • EKF vs UKF under real uncertainty
  • Sampling-based and optimization-based planning at scale
  • MPC, nonlinear control, and hybrid systems under failure conditions
  • Reinforcement learning examined for instability and overfitting
  • Multi-robot coordination pushed to communication limits
  • Safety, fault injection, adversarial attacks, and verification
  • Physics engines, numerical stability, and simulator bias
  • Large-scale parallel simulation and reproducible benchmarking

If something fails, the book shows how to measure it and why it failed.
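To give a flavor of that measurement-first mindset, here is a minimal sketch (not taken from the book; function names and the toy trajectory are illustrative) of how one might quantify SLAM drift in simulation, using root-mean-square trajectory error and drift per meter traveled:

```python
import math

def trajectory_rmse(estimated, ground_truth):
    """Root-mean-square positional error between two time-aligned 2D trajectories."""
    assert len(estimated) == len(ground_truth)
    squared = [(ex - gx) ** 2 + (ey - gy) ** 2
               for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(squared) / len(squared))

def drift_per_meter(estimated, ground_truth):
    """Final-position error normalized by ground-truth path length."""
    path_length = sum(math.dist(a, b)
                      for a, b in zip(ground_truth, ground_truth[1:]))
    final_error = math.dist(estimated[-1], ground_truth[-1])
    return final_error / path_length

# Toy example: a 10 m straight line, with an estimate drifting 1 cm laterally per meter
gt = [(float(x), 0.0) for x in range(11)]
est = [(float(x), 0.01 * x) for x in range(11)]
print(round(trajectory_rmse(est, gt), 4))   # 0.0592
print(round(drift_per_meter(est, gt), 4))   # 0.01
```

Reporting a normalized number like drift per meter, rather than a single "it worked" run, is the kind of habit the book builds throughout.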

Who this book is for

This book is for:

  • Robotics engineers working on real systems
  • Graduate students and researchers who need reproducible results
  • Professionals building autonomy stacks, not demos

It assumes comfort with math, control theory, probability, and robotics software.
If you want shortcuts, this book will frustrate you.
If you want clarity, it will save you years.

Why Volume 1 matters — even if you “know robotics”

Most failures happen between disciplines:

  • Estimation breaking control
  • Learning exploiting simulator artifacts
  • Multi-robot systems collapsing under scale

Volume 1 forces these interactions into the open.

You’ll finish the book thinking less in modules — and more in coupled dynamical systems.

About the two-volume structure

Volume 1 focuses on rigorous simulation-based design, analysis, and verification.
Volume 2 moves into interaction, embodiment, and large-scale simulated worlds.

Volume 1 gives you the discipline required to do Volume 2 correctly.

Final word

This is a serious engineering text.

It does not motivate.
It does not simplify.
It does not pretend robotics is easy.

It teaches you how to think clearly, test honestly, and fail safely — before hardware does.

If that’s what you want,
Experimental Robotics Projects — Volume 1 belongs on your desk.


About the Author

Gareth Morgan Thomas

Gareth Morgan Thomas is an experienced practitioner across multiple STEM fields. He holds six university diplomas in electronics, software development, web development, and project management, along with qualifications in computer networking, CAD, diesel engineering, well drilling, and welding.

Educated in Auckland, New Zealand, he also spent three years serving in the New Zealand Army, where he honed his discipline and problem-solving skills. With years of technical training behind him, he is now dedicated to sharing his understanding of science, technology, engineering, and mathematics through a series of specialized books aimed at both beginners and advanced learners.

Table of Contents

Chapter 1. Perception & State Estimation

Section 1. Visual–Inertial SLAM in Simulation

  • Simulation environment and sensor configuration
  • Camera and IMU noise modeling
  • Feature extraction and tracking
  • Visual–inertial fusion pipeline
  • Map construction and keyframe management
  • Loop closure detection
  • Drift accumulation and correction analysis
  • Performance evaluation metrics

Section 2. Multi-Sensor Kalman Filtering (EKF vs UKF)

  • System state definition and motion model
  • Sensor observation models
  • Linearization assumptions and limitations
  • EKF implementation and tuning
  • UKF implementation and sigma-point selection
  • Filter consistency and divergence cases
  • Comparative accuracy and stability analysis

Section 3. Classical Geometry vs Learned Perception

  • Problem formulation and benchmark tasks
  • Classical geometric perception pipeline
  • Neural perception model selection
  • Training data generation in simulation
  • Inference latency and resource usage
  • Accuracy, robustness, and failure analysis

Section 4. Loop Closure and Drift Correction

  • Drift sources in long-horizon SLAM
  • Place recognition methods
  • Pose graph construction
  • Optimization and constraint solving
  • Consistency enforcement
  • Quantitative drift reduction results

Section 5. Domain Randomization for Perception Robustness

  • Parameter randomization design
  • Dataset generation across domains
  • Training and validation procedures
  • Generalization to unseen environments
  • Sensitivity analysis
  • Failure case taxonomy

Section 6. Observability and Degeneracy in Localization

  • State observability analysis
  • Degenerate motion patterns
  • Rank-deficient estimation scenarios
  • Simulation-based validation
  • Mitigation strategies
  • Implications for real-world deployment

Chapter 2. Planning & Decision-Making

Section 1. Sampling-Based Motion Planning (RRT, RRT*, PRM)

  • Problem definition and configuration space modeling
  • Collision checking and environment representation
  • Tree and roadmap construction strategies
  • Sampling bias and heuristic guidance
  • Path optimization and convergence behavior
  • Computational complexity and scalability analysis

Section 2. Trajectory Optimization Methods

  • Trajectory parameterization and discretization
  • Cost function formulation
  • Constraint handling and feasibility enforcement
  • Gradient-based vs sampling-based optimization
  • Initialization sensitivity and local minima
  • Performance and solution quality evaluation

Section 3. Belief-Space Planning Under Uncertainty

  • State uncertainty representation
  • Propagation of belief through dynamics
  • Observation models and information gain
  • Planning in belief space
  • Approximation and tractability trade-offs
  • Evaluation under partial observability

Section 4. Task-and-Motion Planning Integration

  • Symbolic task representation
  • Continuous motion planning interfaces
  • Hierarchical decomposition strategies
  • Failure recovery and replanning
  • Temporal and resource constraints
  • End-to-end execution evaluation

Section 5. Risk-Aware and Chance-Constrained Planning

  • Risk metrics and probability models
  • Chance constraint formulation
  • Uncertainty propagation in planning
  • Trade-offs between safety and optimality
  • Scenario-based evaluation
  • Robustness assessment

Section 6. Planning in Dynamic and Adversarial Environments

  • Modeling dynamic obstacles and agents
  • Prediction of environment evolution
  • Online replanning strategies
  • Adversarial scenario generation
  • Stability and responsiveness analysis
  • Comparative performance metrics

Chapter 3. Control & Dynamics

Section 1. Rigid-Body Dynamics Modeling in Simulation

  • System modeling assumptions and scope
  • Coordinate frames and kinematic chains
  • Inertia, mass, and center-of-mass specification
  • Forward and inverse dynamics computation
  • Numerical integration and stability considerations
  • Model validation against reference behaviors

Section 2. Inverse Dynamics Control

  • Control objectives and formulation
  • Feedforward and feedback components
  • Constraint handling and actuator limits
  • Sensitivity to modeling errors
  • Performance under varying loads
  • Tracking accuracy evaluation

Section 3. Model Predictive Control for Robotics

  • Prediction model formulation
  • Horizon selection and discretization
  • Cost function and constraint design
  • Real-time optimization strategies
  • Computational burden analysis
  • Closed-loop performance assessment

Section 4. Nonlinear and Adaptive Control Methods

  • Nonlinear system characterization
  • Controller design techniques
  • Parameter adaptation mechanisms
  • Robustness to disturbances
  • Convergence and stability analysis
  • Comparative control performance

Section 5. Hybrid and Switched Control Systems

  • Hybrid system modeling
  • Mode switching logic
  • Stability across switching events
  • Guard conditions and transitions
  • Failure scenarios and recovery
  • Validation in multi-mode tasks

Section 6. Stability Analysis and Failure Modes

  • Stability criteria and definitions
  • Lyapunov-based analysis
  • Numerical stability testing
  • Failure mode identification
  • Stress testing under extreme conditions
  • Interpretation of stability margins

Chapter 4. Learning-Based Robotics

Section 1. Reinforcement Learning for Continuous Control

  • Problem formulation and reward design
  • State and action space definition
  • Policy and value function representation
  • Training in simulation environments
  • Convergence behavior and instability cases
  • Policy evaluation and benchmarking

Section 2. Sample Efficiency and Exploration Strategies

  • Exploration–exploitation trade-offs
  • On-policy vs off-policy learning
  • Replay buffers and data reuse
  • Curriculum learning and task shaping
  • Measuring sample efficiency
  • Failure cases and overfitting analysis

Section 3. Imitation Learning and Behavioral Cloning

  • Expert data generation
  • Dataset aggregation methods
  • Supervised policy training
  • Distribution shift and compounding errors
  • Performance comparison to reinforcement learning
  • Generalization assessment

Section 4. Learning Forward and Inverse Dynamics Models

  • Model structure and representation
  • Training data collection
  • Prediction accuracy evaluation
  • Integration with control and planning
  • Error accumulation and stability issues
  • Comparative model performance

Section 5. Sim-to-Real Gap Quantification

  • Sources of simulation mismatch
  • Parameter sensitivity analysis
  • Robust training strategies
  • Transfer performance metrics
  • Failure diagnosis
  • Implications for real-world deployment

Section 6. Policy Generalization and Overfitting

  • Generalization criteria
  • Training diversity requirements
  • Regularization techniques
  • Evaluation on unseen tasks
  • Overfitting indicators
  • Mitigation strategies

Chapter 5. Multi-Robot & Swarm Systems

Section 1. Decentralized Formation Control

  • Problem formulation and formation specifications
  • Relative vs absolute positioning schemes
  • Control laws for formation maintenance
  • Scalability with increasing agent count
  • Disturbance and noise sensitivity
  • Formation stability evaluation

Section 2. Consensus and Distributed Optimization

  • Consensus problem definition
  • Communication graph modeling
  • Distributed update rules
  • Convergence conditions
  • Effects of delays and packet loss
  • Performance and convergence analysis

Section 3. Multi-Agent Reinforcement Learning

  • Joint vs decentralized learning formulations
  • Credit assignment strategies
  • Non-stationarity and training instability
  • Cooperative and competitive task setups
  • Policy convergence behavior
  • Emergent coordination analysis

Section 4. Communication-Constrained Coordination

  • Communication bandwidth modeling
  • Information sharing strategies
  • Local vs global coordination trade-offs
  • Robustness to communication failures
  • Performance under constrained channels
  • Comparative coordination outcomes

Section 5. Cooperative Task Allocation

  • Task representation and decomposition
  • Auction-based allocation mechanisms
  • Distributed negotiation protocols
  • Dynamic task reassignment
  • Efficiency and optimality measures
  • Scalability assessment

Section 6. Emergent Behaviors in Large-Scale Swarms

  • Local interaction rule design
  • Self-organization mechanisms
  • Pattern formation and phase transitions
  • Robustness to agent loss
  • Sensitivity to parameter variation
  • Quantitative emergence metrics

Chapter 6. Safety, Robustness & Verification

Section 1. Failure Injection and Fault Simulation

  • Identification of critical failure modes
  • Sensor fault modeling and injection
  • Actuator degradation and delay simulation
  • Partial observability scenarios
  • Cascading failure analysis
  • System response characterization

Section 2. Robust Control Under Uncertainty

  • Uncertainty modeling in dynamics and sensing
  • Robust control objectives
  • Worst-case disturbance analysis
  • Robust controller design methods
  • Performance degradation assessment
  • Robustness margin evaluation

Section 3. Safety-Constrained Planning and Control

  • Safety specification and constraint definition
  • Hard vs soft safety constraints
  • Constraint enforcement mechanisms
  • Trade-offs between safety and performance
  • Runtime constraint monitoring
  • Safety violation analysis

Section 4. Adversarial Attacks on Robotic Systems

  • Threat model definition
  • Adversarial perception perturbations
  • Policy manipulation and spoofing attacks
  • System resilience testing
  • Detection and mitigation strategies
  • Impact assessment on task performance

Section 5. Formal Verification of Robotic Controllers

  • Formal model abstraction
  • Reachability and invariance analysis
  • Verification toolchain setup
  • Scalability limitations
  • Counterexample generation
  • Verification outcome interpretation

Section 6. Runtime Monitoring and Safety Envelopes

  • Runtime state monitoring architectures
  • Safety envelope definition
  • Anomaly detection mechanisms
  • Intervention and override strategies
  • Latency and responsiveness analysis
  • Evaluation under stress scenarios
