Concurrency with Modern C++

What every professional C++ programmer should know about concurrency.

About the Book

  • C++11 and C++14 have the basic building blocks for creating concurrent or parallel programs.
  • With C++17, we got the parallel algorithms of the Standard Template Library (STL): most STL algorithms can now be executed sequentially, in parallel, or vectorized (a short sketch follows this list).
  • The concurrency story in C++ goes on. With C++20, we got coroutines, atomic smart pointers, semaphores, latches, and barriers.
  • C++23 supports the first concrete coroutine type: std::generator.
  • With future C++ standards, we can hope for executors, extended futures, transactional memory, and more.
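
As a small taste of the parallel algorithms mentioned above, here is a minimal sketch (not taken from the book) of sorting a vector with an execution policy. It assumes a C++17 compiler with parallel-algorithm support, such as MSVC, or GCC/Clang with the TBB backend installed; the file name is made up.

    // parallelSort.cpp (hypothetical example)
    #include <algorithm>
    #include <execution>
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        std::vector<int> values(1'000'000);

        // Fill the vector with pseudo-random numbers.
        std::mt19937 engine(2011);
        std::uniform_int_distribution<int> dist(0, 1'000'000);
        std::generate(values.begin(), values.end(),
                      [&] { return dist(engine); });

        // std::execution::par asks the library to sort in parallel;
        // std::execution::par_unseq additionally allows vectorization,
        // and std::execution::seq forces sequential execution.
        std::sort(std::execution::par, values.begin(), values.end());

        std::cout << "front: " << values.front()
                  << ", back: " << values.back() << '\n';
    }

With GCC, such a program typically also needs to be linked against TBB, e.g. g++ -std=c++17 parallelSort.cpp -ltbb.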

This book explains the details of concurrency in modern C++ and gives you nearly 200 running code examples, so you can combine theory with practice and get the most out of both.

Because this book is about concurrency, I present many pitfalls and show you how to overcome them.
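
To give a flavor of such a pitfall, here is a minimal, hypothetical sketch (not one of the book's examples) of the classic data race: two threads increment a shared counter. Without synchronization, the program has undefined behavior; protecting the counter with a std::mutex (or using std::atomic<int>) fixes it.

    // counterPitfall.cpp (hypothetical example)
    #include <iostream>
    #include <mutex>
    #include <thread>

    int main() {
        int counter = 0;          // shared, mutable state
        std::mutex counterMutex;  // protects counter

        auto increment = [&] {
            for (int i = 0; i < 100'000; ++i) {
                // Without this lock, both threads would write to counter
                // concurrently: a data race and therefore undefined behavior.
                std::lock_guard<std::mutex> guard(counterMutex);
                ++counter;
            }
        };

        std::thread t1(increment);
        std::thread t2(increment);
        t1.join();
        t2.join();

        std::cout << "counter: " << counter << '\n';  // always 200000
    }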

The book is 100% finished, but I will update it regularly. The next update will probably cover C++26. Furthermore, I will write about lock-free concurrent data structures and patterns for parallelization.

About the Author

Rainer Grimm

I've worked as a software architect, team lead, and instructor since 1999. In 2002, I started an in-house training program at my company and have given training courses ever since. My first tutorials were about proprietary management software, but soon after, I began teaching Python and C++. In my spare time, I like to write articles about C++, Python, and Haskell, and I also like to speak at conferences. I publish weekly on my English blog https://www.modernescpp.com.

Since 2016, I have been an independent instructor giving seminars about modern C++ and Python. I have published several books, in various languages, about modern C++ and, in particular, concurrency. Because of my profession, I am always searching for the best way to teach modern C++.

My books "C++11 für Programmierer", "C++", and "C++ Standardbibliothek kurz & gut" (for the "kurz & gut" series) were published by Pearson and O'Reilly. They are available in German, English, Korean, and Persian. In summer 2018, I published a new book on Leanpub: "Concurrency with Modern C++". This book is also available in German: "Modernes C++: Concurrency meistern".

Packages

The Book

Includes:

  • Extras: Source Code
  • PDF
  • EPUB
  • WEB
  • English

$33.00
Minimum price
$41.00
Suggested price
Concurrency with Modern C++ Team Edition: Five Copies

Get five copies for the price of three. This package includes all code examples.

Includes:

  • Extras: Source Code
  • PDF
  • EPUB
  • WEB
  • English

$99.00
Minimum price
$123.00
Suggested price


Reader Testimonials

Bart Vandewoestyne

Senior Development Engineer Software at Esterline

'Concurrency with Modern C++' is your practical guide to getting familiar with concurrent programming in Modern C++. Starting with the C++ Memory Model and using many ready-to-run code examples, the book covers a good deal of what you need to improve your C++ multithreading skills. Next to the enlightening case studies that will bring you up to speed, the overview of upcoming concurrency features might even whet your appetite for more!

Ian Reeve

Senior Storage Software Engineer for Dell Inc.

Rainer Grimm's Concurrency with Modern C++ is a well-written book covering the theory and practice of working with concurrency per the existing C++ standards, as well as addressing the potential changes for the upcoming C++20 standard. He provides a conversational discussion of the applications and best practices for concurrency, along with example code to reinforce the details of each topic. An informative and worthwhile read!

Robert Badea

Technical Team Leader

Concurrency with Modern C++ is the easiest way to become an expert in the multithreading environment. This book contains both simple and advanced topics, and it has everything a developer needs in order to become an expert in this field: lots of content, a large number of running code examples, great explanations, and a whole chapter on pitfalls. I enjoyed reading it, and I highly recommend it for everyone working with C++.

Zeshuang Mi

Postgraduate

Concurrency with Modern C++ makes the concepts of multithreading very clear. Right at the beginning, the book gave me a clear understanding of the memory model, which is essential for multithreading. It gives me confidence when I write concurrent programs, because I know what I should and shouldn't do. The author's elaboration of multithreading patterns is very helpful to me. I love it.

Table of Contents

  • Reader Testimonials
  • Introduction
    • Conventions
      • Special Fonts
      • Special Symbols
      • Special Boxes
    • Source Code
      • Run the Programs
    • How should you read the book?
    • Personal Notes
      • Acknowledgment
      • About Me
  • A Quick Overview
    • 1. Concurrency with Modern C++
      • 1.1 C++11 and C++14: The Foundation
        • 1.1.1 Memory Model
        • 1.1.2 Multithreading
      • 1.2 C++17: Parallel Algorithms of the Standard Template Library
        • 1.2.1 Execution Policy
        • 1.2.2 New Algorithms
      • 1.3 Coroutines
      • 1.4 Case Studies
        • 1.4.1 Calculating the Sum of a Vector
        • 1.4.2 The Dining Philosophers Problem by Andre Adrian
        • 1.4.3 Thread-Safe Initialization of a Singleton
        • 1.4.4 Ongoing Optimization with CppMem
        • 1.4.5 Fast Synchronization of Threads
      • 1.5 Variations of Futures
      • 1.6 Modification and Generalization of a Generator
      • 1.7 Various Job Workflows
      • 1.8 The Future of C++
        • 1.8.1 Executors
        • 1.8.2 Extended futures
        • 1.8.3 Transactional Memory
        • 1.8.4 Task Blocks
        • 1.8.5 Data-Parallel Vector Library
      • 1.9 Patterns and Best Practices
        • 1.9.1 Synchronization
        • 1.9.2 Concurrent Architecture
        • 1.9.3 Best Practices
      • 1.10 Data Structures
      • 1.11 Challenges
      • 1.12 Time Library
      • 1.13 CppMem
      • 1.14 Glossary
  • The Details
    • 2. Memory Model
      • 2.1 Basics of the Memory Model
        • 2.1.1 What is a memory location?
        • 2.1.2 What happens if two threads access the same memory location?
      • 2.2 The Contract
        • 2.2.1 The Foundation
        • 2.2.2 The Challenges
      • 2.3 Atomics
        • 2.3.1 Strong versus Weak Memory Model
        • 2.3.2 The Atomic Flag
        • 2.3.3 std::atomic
        • 2.3.4 All Atomic Operations
        • 2.3.5 Free Atomic Functions
        • 2.3.6 std::atomic_ref (C++20)
      • 2.4 The Synchronization and Ordering Constraints
        • 2.4.1 The Six Variants of Memory Orderings in C++
        • 2.4.2 Sequential Consistency
        • 2.4.3 Acquire-Release Semantic
        • 2.4.4 std::memory_order_consume
        • 2.4.5 Relaxed Semantics
      • 2.5 Fences
        • 2.5.1 std::atomic_thread_fence
        • 2.5.2 std::atomic_signal_fence
    • 3. Multithreading
      • 3.1 The Basic Thread std::thread
        • 3.1.1 Thread Creation
        • 3.1.2 Thread Lifetime
        • 3.1.3 Thread Arguments
        • 3.1.4 Member Functions
      • 3.2 The Improved Thread std::jthread (C++20)
        • 3.2.1 Automatically Joining
        • 3.2.2 Cooperative Interruption of a std::jthread
      • 3.3 Shared Data
        • 3.3.1 Mutexes
        • 3.3.2 Locks
        • 3.3.3 std::lock
        • 3.3.4 Thread-safe Initialization
      • 3.4 Thread-Local Data
      • 3.5 Condition Variables
        • 3.5.1 The Predicate
        • 3.5.2 Lost Wakeup and Spurious Wakeup
        • 3.5.3 The Wait Workflow
      • 3.6 Cooperative Interruption (C++20)
        • 3.6.1 std::stop_source
        • 3.6.2 std::stop_token
        • 3.6.3 std::stop_callback
        • 3.6.4 A General Mechanism to Send Signals
        • 3.6.5 Additional Functionality of std::jthread
        • 3.6.6 New wait Overloads for the condition_variable_any
      • 3.7 Semaphores (C++20)
      • 3.8 Latches and Barriers (C++20)
        • 3.8.1 std::latch
        • 3.8.2 std::barrier
      • 3.9 Tasks
        • 3.9.1 Tasks versus Threads
        • 3.9.2 std::async
        • 3.9.3 std::packaged_task
        • 3.9.4 std::promise and std::future
        • 3.9.5 std::shared_future
        • 3.9.6 Exceptions
        • 3.9.7 Notifications
      • 3.10 Synchronized Outputstreams (C++20)
    • 4. Parallel Algorithms of the Standard Template Library
      • 4.1 Execution Policies
        • 4.1.1 Parallel and Vectorized Execution
        • 4.1.2 Exceptions
        • 4.1.3 Hazards of Data Races and Deadlocks
      • 4.2 Algorithms
      • 4.3 The New Algorithms
        • 4.3.1 More overloads
        • 4.3.2 The functional Heritage
      • 4.4 Compiler Support
        • 4.4.1 Microsoft Visual Compiler
        • 4.4.2 GCC Compiler
        • 4.4.3 Further Implementations of the Parallel STL
      • 4.5 Performance
        • 4.5.1 Microsoft Visual Compiler
        • 4.5.2 GCC Compiler
    • 5. Coroutines (C++20)
      • 5.1 A Generator Function
      • 5.2 Characteristics
        • 5.2.1 Typical Use Cases
        • 5.2.2 Underlying Concepts
        • 5.2.3 Design Goals
        • 5.2.4 Becoming a Coroutine
      • 5.3 The Framework
        • 5.3.1 Promise Object
        • 5.3.2 Coroutine Handle
        • 5.3.3 Coroutine Frame
      • 5.4 Awaitables and Awaiters
        • 5.4.1 Awaitables
        • 5.4.2 The Concept Awaiter
        • 5.4.3 std::suspend_always and std::suspend_never
        • 5.4.4 initial_suspend
        • 5.4.5 final_suspend
        • 5.4.6 Awaiter
      • 5.5 The Workflows
        • 5.5.1 The Promise Workflow
        • 5.5.2 The Awaiter Workflow
      • 5.6 co_return
        • 5.6.1 A Future
      • 5.7 co_yield
        • 5.7.1 An Infinite Data Stream
      • 5.8 co_await
        • 5.8.1 Starting a Job on Request
        • 5.8.2 Thread Synchronization
      • 5.9 std::generator (C++23)
    • 6. Case Studies
      • 6.1 Calculating the Sum of a Vector
        • 6.1.1 Single-Threaded addition of a Vector
        • 6.1.2 Multi-threaded Summation with a Shared Variable
        • 6.1.3 Thread-Local Summation
        • 6.1.4 Summation of a Vector: The Conclusion
      • 6.2 The Dining Philosophers Problem by Andre Adrian
        • 6.2.1 Multiple Resource Use
        • 6.2.2 Multiple Resource Use with Logging
        • 6.2.3 Erroneous Busy Waiting without Resource Hierarchy
        • 6.2.4 Erroneous Busy Waiting with Resource Hierarchy
        • 6.2.5 Still Erroneous Busy Waiting with Resource Hierarchy
        • 6.2.6 Correct Busy Waiting with Resource Hierarchy
        • 6.2.7 Good low CPU load Busy Waiting with Resource Hierarchy
        • 6.2.8 std::mutex with Resource Hierarchy
        • 6.2.9 std::lock_guard with Resource Hierarchy
        • 6.2.10 std::lock_guard and Synchronized Output with Resource Hierarchy
        • 6.2.11 std::lock_guard and Synchronized Output with Resource Hierarchy and a count
        • 6.2.12 A std::unique_lock using deferred locking
        • 6.2.13 A std::scoped_lock with Resource Hierarchy
        • 6.2.14 The Original Dining Philosophers Problem using Semaphores
        • 6.2.15 A C++20 Compatible Semaphore
      • 6.3 Thread-Safe Initialization of a Singleton
        • 6.3.1 Double-Checked Locking Pattern
        • 6.3.2 Performance Measurement
        • 6.3.3 Thread-Safe Meyers Singleton
        • 6.3.4 std::lock_guard
        • 6.3.5 std::call_once with std::once_flag
        • 6.3.6 Atomics
        • 6.3.7 Performance Numbers of the various Thread-Safe Singleton Implementations
      • 6.4 Ongoing Optimization with CppMem
        • 6.4.1 CppMem: Non-Atomic Variables
        • 6.4.2 CppMem: Locks
        • 6.4.3 CppMem: Atomics with Sequential Consistency
        • 6.4.4 CppMem: Atomics with Acquire-Release Semantics
        • 6.4.5 CppMem: Atomics with Non-atomics
        • 6.4.6 CppMem: Atomics with Relaxed Semantic
        • 6.4.7 Conclusion
      • 6.5 Fast Synchronization of Threads
        • 6.5.1 Condition Variables
        • 6.5.2 std::atomic_flag
        • 6.5.3 std::atomic<bool>
        • 6.5.4 Semaphores
        • 6.5.5 All Numbers
      • 6.6 Variations of Futures
        • 6.6.1 A Lazy Future
        • 6.6.2 Execution on Another Thread
      • 6.7 Modification and Generalization of a Generator
        • 6.7.1 Modifications
        • 6.7.2 Generalization
      • 6.8 Various Job Workflows
        • 6.8.1 The Transparent Awaiter Workflow
        • 6.8.2 Automatically Resuming the Awaiter
        • 6.8.3 Automatically Resuming the Awaiter on a Separate Thread
      • 6.9 Thread-Safe Queue
    • 7. The Future of C++
      • 7.1 Executors
        • 7.1.1 A long Way
        • 7.1.2 What is an Executor?
        • 7.1.3 First Examples
        • 7.1.4 Goals of an Executor Concept
        • 7.1.5 Terminology
        • 7.1.6 Execution Functions
        • 7.1.7 A Prototype Implementation
      • 7.2 Extended Futures
        • 7.2.1 Concurrency TS v1
        • 7.2.2 Unified Futures
      • 7.3 Transactional Memory
        • 7.3.1 ACI(D)
        • 7.3.2 Synchronized and Atomic Blocks
        • 7.3.3 transaction_safe versus transaction_unsafe Code
      • 7.4 Task Blocks
        • 7.4.1 Fork and Join
        • 7.4.2 define_task_block versus define_task_block_restore_thread
        • 7.4.3 The Interface
        • 7.4.4 The Scheduler
      • 7.5 Data-Parallel Vector Library
        • 7.5.1 Data-Parallel Vectors
        • 7.5.2 The Interface of the Data-Parallel Vectors
  • Patterns
    • 8. Patterns and Best Practices
      • 8.1 History
      • 8.2 Invaluable Value
      • 8.3 Pattern versus Best Practices
      • 8.4 Anti-Pattern
    • 9. Synchronization Patterns
      • 9.1 Dealing with Sharing
        • 9.1.1 Copied Value
        • 9.1.2 Thread-Specific Storage
        • 9.1.3 Future
      • 9.2 Dealing with Mutation
        • 9.2.1 Scoped Locking
        • 9.2.2 Strategized Locking
        • 9.2.3 Thread-Safe Interface
        • 9.2.4 Guarded Suspension
    • 10. Concurrent Architecture
      • 10.1 Active Object
        • 10.1.1 Challenges
        • 10.1.2 Solution
        • 10.1.3 Components
        • 10.1.4 Dynamic Behavior
        • 10.1.5 Advantages and Disadvantages
        • 10.1.6 Implementation
      • 10.2 Monitor Object
        • 10.2.1 Challenges
        • 10.2.2 Solution
        • 10.2.3 Components
        • 10.2.4 Dynamic Behavior
        • 10.2.5 Advantages and Disadvantages
        • 10.2.6 Implementation
      • 10.3 Half-Sync/Half-Async
        • 10.3.1 Challenges
        • 10.3.2 Solution
        • 10.3.3 Components
        • 10.3.4 Dynamic Behavior
        • 10.3.5 Advantages and Disadvantages
        • 10.3.6 Example
      • 10.4 Reactor
        • 10.4.1 Challenges
        • 10.4.2 Solution
        • 10.4.3 Components
        • 10.4.4 Dynamic Behavior
        • 10.4.5 Advantages and Disadvantages
        • 10.4.6 Example
      • 10.5 Proactor
        • 10.5.1 Challenges
        • 10.5.2 Solution
        • 10.5.3 Components
        • 10.5.4 Advantages and Disadvantages
        • 10.5.5 Example
      • 10.6 Further Information
    • 11. Best Practices
      • 11.1 General
        • 11.1.1 Code Reviews
        • 11.1.2 Minimize Sharing of Mutable Data
        • 11.1.3 Minimize Waiting
        • 11.1.4 Prefer Immutable Data
        • 11.1.5 Use pure functions
        • 11.1.6 Look for the Right Abstraction
        • 11.1.7 Use Static Code Analysis Tools
        • 11.1.8 Use Dynamic Enforcement Tools
      • 11.2 Multithreading
        • 11.2.1 Threads
        • 11.2.2 Data Sharing
        • 11.2.3 Condition Variables
        • 11.2.4 Promises and Futures
      • 11.3 Memory Model
        • 11.3.1 Don’t use volatile for synchronization
        • 11.3.2 Don’t program Lock Free
        • 11.3.3 If you program Lock-Free, use well-established patterns
        • 11.3.4 Don’t build your abstraction, use guarantees of the language
        • 11.3.5 Don’t reinvent the wheel
  • Data Structures
    • 12. General Considerations
      • 12.1 Concurrent Stack
      • 12.2 Locking Strategy
      • 12.3 Granularity of the Interface
      • 12.4 Typical Usage Pattern
        • 12.4.1 Linux (GCC)
        • 12.4.2 Windows (cl.exe)
      • 12.5 Avoidance of Loopholes
      • 12.6 Contention
        • 12.6.1 Single-Threaded Summation without Synchronization
        • 12.6.2 Single-Threaded Summation with Synchronization (lock)
        • 12.6.3 Single-Threaded Summation with Synchronization (atomic)
        • 12.6.4 The Comparison
      • 12.7 Scalability
      • 12.8 Invariants
      • 12.9 Exceptions
    • 13. Lock-Based Data Structures
      • 13.1 Concurrent Stack
        • 13.1.1 A Stack
      • 13.2 Concurrent Queue
        • 13.2.1 A Queue
        • 13.2.2 Coarse-Grained Locking
        • 13.2.3 Fine-Grained Locking
    • 14. Lock-Free Data Structures
      • 14.1 General Considerations
        • 14.1.1 The Next Evolutionary Step
        • 14.1.2 Sequential Consistency
      • 14.2 Concurrent Stack
        • 14.2.1 A Simplified Implementation
        • 14.2.2 A Complete Implementation
      • 14.3 Concurrent Queue
  • Further Information
    • 15. Challenges
      • 15.1 ABA Problem
      • 15.2 Blocking Issues
      • 15.3 Breaking of Program Invariants
      • 15.4 Data Races
      • 15.5 Deadlocks
      • 15.6 False Sharing
      • 15.7 Lifetime Issues of Variables
      • 15.8 Moving Threads
      • 15.9 Race Conditions
    • 16. The Time Library
      • 16.1 The Interplay of Time Point, Time Duration, and Clock
      • 16.2 Time Point
        • 16.2.1 From Time Point to Calendar Time
        • 16.2.2 Cross the valid Time Range
      • 16.3 Time Duration
        • 16.3.1 Calculations
      • 16.4 Clocks
        • 16.4.1 Accuracy and Steadiness
        • 16.4.2 Epoch
      • 16.5 Sleep and Wait
    • 17. CppMem - An Overview
      • 17.1 The simplified Overview
        • 17.1.1 1. Model
        • 17.1.2 2. Program
        • 17.1.3 3. Display Relations
        • 17.1.4 4. Display Layout
        • 17.1.5 5. Model Predicates
        • 17.1.6 The Examples
    • 18. Glossary
      • 18.1 address_free
      • 18.2 ACID
      • 18.3 CAS
      • 18.4 Callable Unit
      • 18.5 Complexity
      • 18.6 Concepts
      • 18.7 Concurrency
      • 18.8 Critical Section
      • 18.9 Deadlock
      • 18.10 Eager Evaluation
      • 18.11 Executor
      • 18.12 Function Objects
      • 18.13 Lambda Functions
      • 18.14 Lazy evaluation
      • 18.15 Lock-free
      • 18.16 Lock-based
      • 18.17 Lost Wakeup
      • 18.18 Math Laws
      • 18.19 Memory Location
      • 18.20 Memory Model
      • 18.21 Modification Order
      • 18.22 Monad
      • 18.23 Non-blocking
      • 18.24 obstruction-free
      • 18.25 Parallelism
      • 18.26 Predicate
      • 18.27 Pattern
      • 18.28 RAII
      • 18.29 Release Sequence
      • 18.30 Sequential Consistency
      • 18.31 Sequence Point
      • 18.32 Spurious Wakeup
      • 18.33 Thread
      • 18.34 Total order
      • 18.35 TriviallyCopyable
      • 18.36 Undefined Behavior
      • 18.37 volatile
      • 18.38 wait-free
    • Index
