# Using HDF5 as an In-Memory Circular Buffer

## Context
The HDF5 library is a powerful solution for structured data storage, but its default usage assumes durable file-backed I/O. What if we want to use the same layout and tooling for in-memory circular buffers, especially across multiple processes?
This idea came up in a mailing list thread posted by David Schneider in 2016. The question was simple but practical:
> “Can I use HDF5 like a circular buffer in memory, with live updates and multiple consumers, using the same schema we already use for on-disk archival?”
That hit close to home: we’d solved a similar problem in our trading systems.
## Perspective
We faced the same challenges in building real-time market data pipelines. We needed:

- A buffer of recent events in memory (circular structure)
- Multi-process access (writer + readers)
- A clean way to flush or archive data to disk
- The same schema shared across both memory and persistent storage
Instead of twisting HDF5 into a fully-fledged circular buffer, we used HDF5’s virtual file driver (VFD) to great effect.
## Solution

### 1. Boost Ring Buffer + IPC
In systems where latency and determinism matter, we use:

- Boost's `circular_buffer` for the in-memory structure
- ZeroMQ + Protocol Buffers (or Thrift) for pub/sub messaging
- A fallback mechanism that flushes the buffer’s tail into disk-backed HDF5 for audit or recovery
This gives us:

- One writer, many readers (process-safe)
- Fault tolerance (via tail-dump or WAL-like shadowing)
- Interop with Python, Julia, R via schema-consistent I/O
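To make the tail-dump concrete, here is a minimal sketch, not our production code: a fixed-capacity `boost::circular_buffer` holding recent events, with a hypothetical `flush_tail()` helper that copies the ring into a disk-backed HDF5 dataset via the C API. The payload type (`double`), dataset name, and file path are all illustrative; a real pipeline would use a compound type matching the archival schema.

```cpp
// Sketch: bounded ring of recent events, with a tail-dump into HDF5 for audit.
#include <boost/circular_buffer.hpp>
#include <hdf5.h>
#include <vector>

void flush_tail(const boost::circular_buffer<double>& ring, const char* path) {
    // Copy the ring into a contiguous staging vector; iteration order is
    // oldest-first, and this also handles the case where the ring has wrapped
    // and its storage is split into two segments.
    std::vector<double> staging(ring.begin(), ring.end());

    hsize_t dims[1] = { staging.size() };
    hid_t file  = H5Fcreate(path, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, nullptr);
    hid_t dset  = H5Dcreate2(file, "tail", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT,
             staging.data());
    H5Dclose(dset); H5Sclose(space); H5Fclose(file);
}

int main() {
    boost::circular_buffer<double> ring(1024);   // fixed capacity, overwrites oldest
    for (int i = 0; i < 4096; ++i) ring.push_back(i * 0.5);
    flush_tail(ring, "audit_tail.h5");           // hypothetical dump-on-demand hook
}
```

The staging copy is deliberate: the ring's storage may be non-contiguous after wrap-around, so linearizing before the single `H5Dwrite` keeps the I/O path simple.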
### 2. Experimental Mode: One HDF5 File Per Node
In distributed computation (e.g., HPC or large-scale simulations), I skip shared memory entirely:

- Run each task independently using serial HDF5
- Use MPI + grid engine orchestration
- Merge or reduce results later
The simplicity here avoids shared memory complexity and works well for experimental setups or large batch jobs.
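A minimal sketch of the pattern, assuming MPI and serial (non-parallel) HDF5 are available; the filename scheme and the `result` dataset are illustrative stand-ins for real per-task output:

```cpp
// Sketch: one serial HDF5 file per MPI rank — no shared state, merge offline.
#include <mpi.h>
#include <hdf5.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each rank writes its own file; no coordination needed at write time.
    char path[64];
    std::snprintf(path, sizeof(path), "result_rank%04d.h5", rank);

    double local_result = rank * 1.0;            // stand-in for real computation
    hsize_t dims[1] = { 1 };
    hid_t file  = H5Fcreate(path, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, nullptr);
    hid_t dset  = H5Dcreate2(file, "result", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT,
             &local_result);
    H5Dclose(dset); H5Sclose(space); H5Fclose(file);

    MPI_Finalize();
}
```

The merge step can then be a separate post-processing pass, for instance with standard HDF5 tools such as `h5copy`, since all files share one schema.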
## Could You Do It In Pure HDF5?
Yes. Using `H5Pset_fapl_core()`, you can instruct HDF5 to keep the file image in memory rather than on disk. In theory, you could:
- `mmap` or `shm_open()` a fixed-size region
- Initialize an HDF5 file layout in that region
- Write with wrap-around logic (circular overwrite)
- Map other readers to the same memory block
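Here is a minimal sketch of the core-VFD piece only, assuming the HDF5 C API. Note that the core driver allocates its own process-local memory; wiring it to a shared `mmap`/`shm_open()` region is not shown and would need extra machinery (e.g., file-image callbacks), which is exactly where the caveats below bite. The dataset name and sizes are illustrative.

```cpp
// Sketch: a fixed-size "ring" dataset inside an in-memory HDF5 file (core VFD).
#include <hdf5.h>

int main() {
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    // 64 KiB growth increment; backing_store = 0 keeps the file entirely in
    // RAM (pass 1 to have it flushed to disk on close instead).
    H5Pset_fapl_core(fapl, 64 * 1024, 0);

    hid_t file = H5Fcreate("inmem.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    hsize_t dims[1] = { 4096 };                  // fixed capacity of the ring
    hid_t fspace = H5Screate_simple(1, dims, nullptr);
    hid_t dset   = H5Dcreate2(file, "ring", H5T_NATIVE_DOUBLE, fspace,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    // Circular overwrite is manual: each write selects a one-element
    // hyperslab at head % capacity, so old entries are silently replaced.
    hsize_t count[1] = { 1 };
    hid_t mspace = H5Screate_simple(1, count, nullptr);
    for (unsigned long head = 0; head < 10000; ++head) {
        double value = head * 0.5;
        hsize_t offset[1] = { head % dims[0] };
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, offset, nullptr,
                            count, nullptr);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, &value);
    }

    H5Sclose(mspace); H5Dclose(dset); H5Sclose(fspace);
    H5Fclose(file); H5Pclose(fapl);
}
```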
But beware:

- HDF5 won’t enforce concurrency guarantees
- You must handle locking or versioning externally
- Reader/writer separation needs care
## Summary
You can use HDF5 as part of a circular buffer system, especially by leveraging the core virtual file driver with a shared memory mapping. But in practice:
| Feature | Viable with HDF5? |
|---|---|
| In-memory datasets | ✅ (core VFD) |
| Shared memory usage | ✅ with `mmap`/IPC |
| Circular overwrite | ❌ manual logic |
| Multi-process safety | ⚠️ external sync |
| Schema reuse | ✅ seamless |
For production pipelines, I prefer:

- Boost + ZMQ + Protobuf/Thrift for live data
- HDF5 for archival and structured persistence
The two worlds meet cleanly if you manage the boundary carefully.
— Steven Varga