
POSIX Threads & Synchronization in C: Mutexes, Condition Variables & Atomic Operations

TopicTrick Team


Why Threads? The Concurrency Model


Threads share:

  • Heap: All malloc'd memory visible to all threads.
  • Global variables: Any static or global-scope variable.
  • File descriptors: Open files, sockets.

Threads have their own:

  • Stack: Local variables in each function, function call chain.
  • Registers: Current instruction pointer, register file.
  • Signal mask.

The shared heap is both the power and the danger. Multiple threads can cooperate on the same data — but without explicit coordination, they corrupt each other's work.


Creating and Joining Threads


Compile with: gcc -pthread main.c -o main


Race Conditions: The Core Problem

A race condition occurs when two threads access shared data concurrently, and at least one access is a write. The outcome depends on how the threads' instructions happen to interleave — non-deterministic behavior:


The bug: counter++ compiles to three machine instructions (load, increment, store). Another thread can execute its load between your increment and store, causing one increment to be lost.


Mutexes: Protecting Critical Sections

A mutex (mutual exclusion lock) ensures that only one thread executes the critical section at a time:


Best practices:

  • Hold the lock for the shortest possible time (minimize the critical section).
  • Never call blocking functions while holding a lock.
  • Use PTHREAD_MUTEX_INITIALIZER for static mutexes, pthread_mutex_init for dynamic (heap-allocated) mutexes.
  • Always check return values for pthread_mutex_lock in production code.

Deadlock: The Concurrency Trap

Deadlock occurs when two threads each hold a lock the other needs:


Deadlock prevention rules:

  1. Lock ordering: Always acquire multiple locks in the same global order.
  2. Lock timeout: Use pthread_mutex_timedlock with a timeout.
  3. Try-lock: Use pthread_mutex_trylock to detect potential deadlock.
  4. Minimize lock nesting: Avoid holding one lock while acquiring another.

Condition Variables: Thread Signaling

Mutexes are for mutual exclusion. Condition variables allow threads to wait until a condition is true without busy-polling:


pthread_cond_wait atomically releases the mutex and puts the thread to sleep — no CPU is consumed while waiting. When pthread_cond_signal wakes it, the thread re-acquires the mutex before returning.


Read-Write Locks: Concurrent Reads

Many data structures are read frequently but written rarely. Using a mutex blocks all concurrent reads unnecessarily. A read-write lock allows multiple concurrent readers but exclusive write access:


Use rwlocks when: readers far outnumber writers (in-memory caches, configuration databases, routing tables).


C11 Atomic Operations: Lock-Free Programming

C11 introduced <stdatomic.h> for hardware-level atomic operations — faster than mutexes for simple shared counters and flags:


When to use atomics vs mutexes:

  • Atomics: Simple flags, counters, single-value updates.
  • Mutexes: Complex multi-step operations on multiple variables, condition waiting.
  • Never use regular variables for inter-thread communication without synchronization.

Building a Thread Pool

A thread pool pre-creates a fixed number of threads that persist and pick up work from a shared queue — avoiding the overhead of creating/destroying threads for each task:


Thread pools are the core mechanism behind Java's ExecutorService, the Linux kernel's kworker threads, and (with worker processes instead of threads) Nginx's worker model.


Frequently Asked Questions

What is the difference between a mutex and a semaphore? A mutex is owned by one thread — only the locking thread can unlock it. A semaphore (from <semaphore.h>) is a counter — any thread can increment or decrement it, making it suitable for signaling between threads and limiting concurrent access to a resource (e.g., a connection pool of size 10).

How do I detect race conditions? Use ThreadSanitizer (TSan): gcc -fsanitize=thread -g -pthread main.c. TSan instruments every memory access and reports the data races that actually occur during a test run. It adds roughly 5-10× runtime overhead but is the most reliable way to find race conditions.

Is volatile sufficient for thread safety? No. volatile prevents the compiler from caching a variable in a register and forces re-reading from memory on each access — but it does NOT create atomic read-modify-write operations and provides NO synchronization. Use _Atomic types from <stdatomic.h> for safe inter-thread variable access.

When should I use pthread_cond_broadcast vs pthread_cond_signal? pthread_cond_signal wakes exactly one waiting thread. pthread_cond_broadcast wakes all waiting threads. Use broadcast when: the condition is true for all waiters (e.g., a "shutdown" event), or you can't determine which specific waiter should proceed.


Key Takeaway

POSIX threads turn C programs into multi-core engines. With the right synchronization — mutexes for exclusion, condition variables for signaling, atomics for simple shared state — you can safely parallelize workloads and approach linear scaling with the number of CPU cores.

The fundamental discipline: never access shared data without synchronization. The type of synchronization (mutex, rwlock, atomic) depends on the access pattern and performance requirements. ThreadSanitizer makes race conditions visible — run it on every concurrent codebase.

Read next: Processes, Fork & Exec: Process-Level Isolation →


Part of the C Mastery Course — 30 modules from C basics to expert-level systems engineering.