
Optimizing HDF5 for Event-Based Time Series Access

Original Question (Tamas, Mar 30, 2017)

I am currently using HDF5 to store my data, but my current structure is too slow for reading single events.
My first attempt was a single table of all hits, with the event_id included as a field, i.e. a simple table of:

hit_id | event_id | dom_id | time | tot | triggered | pmt_id

This is extremely slow for reading single events, as I have to go through all hits to extract one event.
My second attempt was to create a group for each event and place its hits in a dataset inside that group. This also turned out to be slow.
So far the only solution I have found is to go back to ROOT, which is a bit of a shame.
Has anyone worked with similar data structures in HDF5? What is the fastest way of reading a single event?

Response (Steven Varga, Mar 31, 2017)

Hello Tamas,

I use HDF5 to store irregular time series (IRTS) data in the financial domain, where performance and storage density both matter. Here’s the approach I’ve been using successfully for the past 5 years:

🧱 Dataset Structure

  • Data is partitioned by day, with each day stored as a variable-length (VLEN) dataset containing a stream of events.
  • The dataset holds a custom compound datatype, tailored for compactness and compatibility.

For example, the record type might look like:

[event_id, asset, price, timestamp, flags, ...]

This layout reduces overhead and improves both sequential read throughput and storage density.
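A minimal sketch of what such a per-day dataset could look like with h5py; the field names, types, and the use of a plain extendible 1-D compound dataset (rather than VLEN) are my assumptions, not the exact schema from the post:

```python
import numpy as np
import h5py

# Illustrative compound record; field names and widths are assumptions.
event_dtype = np.dtype([
    ("event_id",  np.uint32),
    ("asset",     np.uint32),   # hashed/encoded instrument id
    ("price",     np.float64),
    ("timestamp", np.int64),    # e.g. nanoseconds since epoch
    ("flags",     np.uint8),
])

with h5py.File("2017-03-30.h5", "w") as f:
    # One chunked, compressed event stream per daily file
    events = f.create_dataset("events", shape=(0,), maxshape=(None,),
                              dtype=event_dtype, chunks=(65536,),
                              compression="gzip")

    batch = np.zeros(1000, dtype=event_dtype)      # placeholder batch of events
    events.resize((events.shape[0] + batch.size,))
    events[-batch.size:] = batch                   # append sequentially
```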

🔧 Access Pattern

  • The system is designed for write-once, read-many, with sequential reads being the dominant access mode.
  • For processing, I use C++ iterators to walk through the event stream efficiently.
  • This low-level backend is then exposed to Julia, R, and Python bindings for analysis pipelines.

Because the datatype is stored directly in the file, it’s also viewable with HDFView for inspection or debugging.

🚀 Performance Context

  • Runs on an MPI cluster using C++, Julia, and Rcpp.
  • Delivers excellent performance under both full-stream and filtered-access scenarios.
  • Chunk size significantly affects access time; tuning it for your use case is essential (see the sketch below).
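As a rough illustration of the chunk-size point, reusing the hypothetical daily file from the earlier sketch (the cache size and slice bounds are placeholders, not recommendations): reads always fetch and decompress whole chunks, so the chunk length and the chunk cache should be matched to the typical request.

```python
import h5py

# Open with an enlarged per-dataset chunk cache (rdcc_nbytes is a standard h5py option).
with h5py.File("2017-03-30.h5", "r", rdcc_nbytes=64 * 1024 * 1024) as f:
    events = f["events"]
    print(events.chunks)                  # chunk shape chosen at creation time
    window = events[100_000:165_536]      # touches only the chunks covering this range
```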

🧠 Optimization Trade-Off

To your question ("how to access all or only some events efficiently"): that is inherently a bi-objective optimization:

  • You can gain speed by trading off space, e.g. pre-sorting the hits, duplicating metadata, or chunking strategically (a sketch of the pre-sort-plus-index idea follows below).
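One concrete way to spend space for speed, sketched with h5py; the dataset names, fields, and index layout are my own illustration, not from the thread. The idea: keep the hits sorted by event_id and store a small per-event index, so reading one event becomes a single contiguous slice.

```python
import numpy as np
import h5py

# Assumed layout: "hits" is a compound dataset already sorted by event_id,
# and "hit_index" holds one (event_id, start, count) row per event.
def build_index(hits):
    """Compute (event_id, start, count) for hits already sorted by event_id."""
    ids, starts, counts = np.unique(hits["event_id"],
                                    return_index=True, return_counts=True)
    return np.rec.fromarrays([ids, starts, counts],
                             names="event_id,start,count")

def read_event(f, event_id):
    """Read one event as a single contiguous slice of the hits dataset."""
    idx = f["hit_index"][...]
    row = idx[idx["event_id"] == event_id][0]
    return f["hits"][row["start"]: row["start"] + row["count"]]
```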

So while HDF5 might need more careful tuning than ROOT out of the box, it gives you portability and schema clarity across toolchains.

Hope this helps, Steve

HDF5 for In-Memory Circular Buffers: Reuse Schema in RAM

The Original Ask (David Schneider, Dec 10, 2016)

David from SLAC/LCLS posed this neat challenge:

“Is it possible to implement an in-memory circular buffer using HDF5? We'd like both offline (on-disk append) and online (shared-memory overwrite) access via the same HDF5 schema and API, possibly using SWMR for shared-memory consumption.”

Basically, a single schema that works for both archival and real-time consumption: elegant.

1. Werner's Insight: The Virtual File Driver

Werner dropped the elegant solution:

“There is a virtual file driver that operates on a memory image of an HDF5 file. It should be no problem to have this one also operate on shared memory.”

That’s referencing HDF5’s core VFD—you can treat a pointer to memory (including shared memory via mmap or shm_open) as if it were an HDF5 file. The same dataset API (H5Dcreate, H5Dwrite, etc.) applies, so you can reuse your schema seamlessly.
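In h5py this is exposed as the "core" driver. The sketch below only shows the in-memory part; getting such an image into shared memory would have to be driven through the C-level file-image routines (H5Fget_file_image / H5LTopen_file_image) or low-level bindings, which this sketch does not attempt.

```python
import numpy as np
import h5py

# Core VFD sketch: the entire HDF5 "file" lives in memory, but the dataset API
# is unchanged; backing_store=False means nothing is ever written to disk.
with h5py.File("buffer.h5", "w", driver="core", backing_store=False) as f:
    ring = f.create_dataset("buffer", shape=(1024,), dtype=np.float64)
    ring[0:16] = np.arange(16.0)
```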

2. Steve’s Real‑World Twist (HFT-inspired)

Steve Varga chimed in with a production-grade twist:

“Boost’s circular/ring buffer handles one-writer-many-readers; tail flushing can be channeled to the writer or fault-tolerant hosts. Combine with ZeroMQ + Protocol Buffers or Thrift.”
“For experiments—where failure isn't critical—you can just access HDF5 locally on cluster nodes using MPI + Grid Engine + serial HDF5.”

So if you need industrial-strength durability, go with a ring buffer plus messaging middleware. For HPC experiments where speed and simplicity triumph, stick with HDF5 + MPI.

Quick API Sketch (Julia‑Flavored)

```julia
using HDF5                    # SharedMemory / h5open_sharedmem below are hypothetical names

# Hypothetical: open an HDF5 image that lives in a shared-memory region
fid = h5open_sharedmem(shm_address, "r+")

# From here on, use the HDF5 API as if working on a real file
dset = create_dataset(fid, "/buffer", Float64, ((N,), (-1,)), chunk=(1024,))
write(dset, new_chunk)
close(fid)
```

Summary Table

| Scenario | Approach | Notes |
|---|---|---|
| On-disk appendable buffer | HDF5 datasets (append mode) | Standard functionality |
| In-memory circular buffer | HDF5 via core VFD over a memory region | Shared schema/API in RAM |
| High-throughput, production-grade | Boost ring buffer + messaging (ZeroMQ, ProtoBuf) | More robust, fault-tolerant |
| Experimental/distributed HPC | HDF5 per node + MPI/scheduler (serial HDF5) | Simple, performance-focused |

Using HDF5 as an In-Memory Circular Buffer

Context

The HDF5 library is a powerful solution for structured data storage, but its default usage assumes durable file-backed I/O. What if we want to use the same layout and tooling for in-memory circular buffers, especially across multiple processes?

This idea came up in a mailing list thread posted by David Schneider in 2016. The question was simple but practical:

“Can I use HDF5 like a circular buffer in memory, with live updates and multiple consumers, using the same schema we already use for on-disk archival?”

That hit close to home—we’d solved a similar problem in our trading systems.

Perspective

We faced the same challenges in building real-time market data pipelines. We needed:

  • A buffer of recent events in memory (circular structure)
  • Multi-process access (writer + readers)
  • A clean way to flush or archive data to disk
  • The same schema shared across both memory and persistent storage

Instead of twisting HDF5 into a fully-fledged circular buffer, we used HDF5’s virtual file driver (VFD) to great effect.

Solution

1. Boost Ring Buffer + IPC

In systems where latency and determinism matter, we use:

  • Boost's circular_buffer for the in-memory structure
  • ZeroMQ + Protocol Buffers (or Thrift) for pub/sub messaging
  • A fallback mechanism that flushes the buffer’s tail into disk-backed HDF5 for audit or recovery

This gives us:

  • One writer, many readers (process-safe)
  • Fault tolerance (via tail-dump or WAL-like shadowing)
  • Interop with Python, Julia, and R via schema-consistent I/O
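A minimal sketch of the tail-flush idea mentioned above, with Python's collections.deque standing in for Boost's circular_buffer and an extendible h5py dataset as the on-disk archive (names and sizes are placeholders):

```python
from collections import deque
import numpy as np
import h5py

ring = deque(maxlen=100_000)   # bounded in-memory buffer (stand-in for circular_buffer)

def flush_tail(ring, path="archive.h5", name="events"):
    """Drain the buffered events and append them to a disk-backed HDF5 dataset."""
    batch = np.asarray(ring, dtype=np.float64)
    ring.clear()
    if batch.size == 0:
        return
    with h5py.File(path, "a") as f:
        if name in f:
            ds = f[name]
        else:
            ds = f.create_dataset(name, shape=(0,), maxshape=(None,),
                                  dtype=np.float64, chunks=(65536,))
        n = ds.shape[0]
        ds.resize((n + batch.size,))
        ds[n:] = batch
```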

2. Experimental Mode: One HDF5 File Per Node

In distributed computation (e.g., HPC or large-scale simulations), I skip shared memory entirely:

  • Run each task independently using serial HDF5
  • Use MPI + grid engine orchestration
  • Merge or reduce results later

The simplicity here avoids shared memory complexity and works well for experimental setups or large batch jobs.
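A sketch of that mode, under the assumption of mpi4py plus serial h5py with one output file per rank; the workload and file names are placeholders:

```python
from mpi4py import MPI
import numpy as np
import h5py

# One serial-HDF5 file per MPI rank: no shared state and no parallel HDF5 required.
rank = MPI.COMM_WORLD.Get_rank()
local = np.random.default_rng(rank).random(1_000_000)   # placeholder per-rank workload

with h5py.File(f"result_rank{rank:04d}.h5", "w") as f:
    f.create_dataset("result", data=local, compression="gzip")

# A later merge/reduce step walks the per-rank files.
```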

Could You Do It In Pure HDF5?

Yes—using H5Pset_fapl_core() you can instruct HDF5 to treat a memory region as the backing store. In theory, you could:

  1. mmap or shm_open() a fixed-size region
  2. Initialize an HDF5 file layout in that region
  3. Write with wrap-around logic (circular overwrite)
  4. Map other readers to the same memory block

But beware:

  • HDF5 won't enforce concurrency guarantees
  • You must handle locking or versioning externally
  • Reader/writer separation needs care
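A toy version of steps 2 and 3 using h5py's core driver, deliberately ignoring the concurrency caveats above (single writer, no locking; dataset names and sizes are illustrative):

```python
import numpy as np
import h5py

CAPACITY = 4096

# In-memory HDF5 image holding a fixed-size buffer plus a write cursor.
f = h5py.File("ring.h5", "w", driver="core", backing_store=False)
buf = f.create_dataset("buffer", shape=(CAPACITY,), dtype=np.float64)
head = f.create_dataset("head", shape=(), dtype=np.int64)   # next logical write position

def push(values):
    """Write values with wrap-around overwrite; single writer, no locking."""
    pos = int(head[()])
    for v in np.asarray(values, dtype=np.float64):
        buf[pos % CAPACITY] = v       # element-wise for clarity, not speed
        pos += 1
    head[()] = pos

# f.close() when done; readers mapping the same region would need external sync.
```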

Summary

You can use HDF5 as part of a circular buffer system, especially by leveraging the core virtual file driver with a shared memory mapping. But in practice:

| Feature | Viable with HDF5? |
|---|---|
| In-memory datasets | ✅ (core VFD) |
| Shared memory usage | ✅ with mmap/IPC |
| Circular overwrite | ❌ manual logic |
| Multi-process safety | ⚠️ external sync |
| Schema reuse | ✅ seamless |

For production pipelines, I prefer:

  • Boost + ZMQ + Protobuf/Thrift for live data
  • HDF5 for archival and structured persistence

The two worlds meet cleanly if you manage the boundary carefully.

— Steven Varga

Structuring Historical Options Data in HDF5—H5CPP Tips

The Problem (Dan E, Oct 27, 2014)

I have equity options historical data in daily CSVs—one file per day with around 700k rows for ~4k symbols. Each row includes a dozen fields like symbol, strike, maturity, open/high/low/close, volume, etc. I want fast access for both:

  1. All options on a given day for a symbol or list of symbols
  2. Time series for a specific option across multiple dates

My current approach builds a Pandas DataFrame for each day and stores it under an 'OPTIONS' group in HDF5. But accessing a few symbols loads the entire day’s worth of data—huge overhead. And fetching a specific contract across many days means loading many files.

How should I structure this? Use one big file or many? Hierarchy? And any recommendations for Python access (like Pandas)?

H5CPP Wisdom (Steven Varga, Oct 27, 2014)

Steve offers a templated, high-performance structure—tailored for daily partitioning and fast indexing:

  1. Hash symbols to numeric indices, and use those for indexing instead of strings.
  2. Name data blocks by date—e.g., 2014-01-01.mat—so each day's data is self-contained.
  3. Enable chunking and high compression to balance I/O throughput and file size (a sketch follows after this list).
  4. Treat irregular time series (e.g., tick-by-tick events) differently from regular ones: use HDF5 custom datatypes with "pocket tables" for compact, sequential access of irregular data.
  5. For regular time series (e.g., OHLC candles), dense N-dimensional slabs (e.g. [instrument, time, OHLC]) of floats are efficient.
  6. If you’re running this on a parallel filesystem with MPI and PHDF5, you can achieve throughput and storage efficiency that rivals—and may surpass—SQL systems.
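A rough h5py sketch of points 1–3 plus a selective read for one trading day; the field list, dtypes, chunk shape, and compression level are assumptions, and the rows of `table` are assumed to be aligned with `symbols`:

```python
import numpy as np
import h5py

FIELDS = ["strike", "open", "high", "low", "close", "volume"]   # illustrative field set

def write_day(date, symbols, table):
    """symbols: sequence of contract ids; table: float array (len(symbols), len(FIELDS))."""
    table = np.asarray(table, dtype=np.float32)
    with h5py.File(f"{date}.mat", "w") as f:
        # Numeric row indices replace string lookups; keep the string table for reference.
        f.create_dataset("symbols", data=np.array(symbols, dtype="S24"))
        f.create_dataset("options", data=table,
                         chunks=(min(256, table.shape[0]), len(FIELDS)),
                         compression="gzip", compression_opts=6)

def read_rows(date, row_indices):
    """Selective read: fetch only the requested contracts for one day."""
    with h5py.File(f"{date}.mat", "r") as f:
        return f["options"][np.sort(row_indices), :]
```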

H5CPP Approach in Practice

Layout Strategy

/options/
    2014-01-01.mat       ← daily file
    ...
    <date>.mat           ← parameterized by date

Inside a daily file:

  • Datasets keyed by hashed symbol IDs or structured arrays.
  • Regular series: stored as compact multidimensional slabs.
  • Irregular data: structured as compact “pocket tables” with custom datatypes.

Python Access Pattern

```python
# Pseudocode using h5py (or H5CPP's Python bindings)
import h5py

with h5py.File("2014-01-01.mat", "r") as f:
    data = f["options"][symbol_id]  # fast direct index into the day's table
```

For a time-series across dates:

```python
import h5py

df_series = []
for date in dates:
    with h5py.File(f"{date}.mat", "r") as f:
        df_series.append(f["options"][symbol_id])

# Combine the per-day slices into one timeline
```

Benefits

  • Selective reads—fetch only what's needed, e.g. a single symbol per day.
  • Efficient storage—chunked and compressed format minimizes disk footprint.
  • Scalable throughput—especially when using MPI + PHDF5 on parallel filesystems.
  • Language-agnostic—H5CPP’s type mappings and structuring make it accessible from C++, Python, Julia, etc.