Data Exporters
Functions for exporting spike train data and analysis results to HDF5, NWB, KiloSort/Phy, and pickle formats. These exporters mirror data_loaders, writing SpikeData to common formats.
Provided exporters:
- Generic HDF5 with one of four styles: raster (units x time matrix with a specified bin size in ms), ragged (flat spike_times plus spike_times_index), group (one HDF5 group per unit), or paired (parallel idces and times arrays).
- NWB Units table (spike_times/spike_times_index) via h5py.
- KiloSort/Phy (spike_times.npy + spike_clusters.npy).
- Pickle (export_to_pickle) for full-fidelity export of any supported spikelab data object.
All exporters accept SpikeData times in milliseconds and convert to the target time units as needed.
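The raster style above can be illustrated with plain NumPy. This is a sketch of the layout only, not the library's implementation; the spike trains and duration are hypothetical, and SpikeData internals are not used:

```python
import numpy as np

# Hypothetical per-unit spike times in milliseconds (stand-in for SpikeData.train).
spike_trains_ms = [[1.0, 12.5, 48.0], [5.0, 5.5, 90.0]]
bin_size_ms = 10.0
duration_ms = 100.0  # assumed recording length

n_bins = int(np.ceil(duration_ms / bin_size_ms))
edges = np.arange(n_bins + 1) * bin_size_ms

# units x time matrix of per-bin spike counts, as the 'raster' style stores it.
raster = np.stack([np.histogram(t, bins=edges)[0] for t in spike_trains_ms])
print(raster.shape)  # (2, 10)
```

Each row is one unit; each column is one bin of raster_bin_size_ms milliseconds.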
- spikelab.data_loaders.data_exporters.export_spikedata_to_hdf5(sd, filepath, *, style='ragged', raster_dataset='raster', raster_bin_size_ms=None, spike_times_dataset='spike_times', spike_times_index_dataset='spike_times_index', spike_times_unit='s', fs_Hz=None, group_per_unit='units', group_time_unit='s', idces_dataset='idces', times_dataset='times', times_unit='ms', raw_dataset=None, raw_time_dataset=None, raw_time_unit='ms')[source]
Export a SpikeData to a generic HDF5 file using a chosen style.
- Parameters:
sd (SpikeData) – The SpikeData object to export.
filepath (str) – Path where the HDF5 file will be created (overwrites existing).
style (Literal["raster", "ragged", "group", "paired"]) – Export format style; see the module docstring for what each style produces.
raster_dataset (str) – HDF5 dataset name for the raster matrix.
raster_bin_size_ms (float | None) – Bin size in milliseconds for rasterization. Required for raster style.
spike_times_dataset (str) – Dataset name for concatenated spike times.
spike_times_index_dataset (str) – Dataset name for cumulative spike count indices.
spike_times_unit (TimeUnit) – Time unit for spike times (‘ms’, ‘s’, ‘samples’).
fs_Hz (float | None) – Sampling frequency in Hz. Required when any unit is ‘samples’.
group_per_unit (str) – HDF5 group name containing per-unit datasets.
group_time_unit (TimeUnit) – Time unit for individual unit datasets.
idces_dataset (str) – Dataset name for unit indices array.
times_dataset (str) – Dataset name for spike times array.
times_unit (TimeUnit) – Time unit for spike times.
raw_dataset (str | None) – Dataset name for raw analog data (if present in sd).
raw_time_dataset (str | None) – Dataset name for raw data time vector.
raw_time_unit (TimeUnit) – Time unit for raw data timestamps.
- Raises:
ImportError – If h5py is not available.
ValueError – For invalid styles, missing required parameters, or missing fs_Hz when needed.
- Return type: None
Notes
Spike times are automatically converted from milliseconds to the requested unit.
The function creates or overwrites the target HDF5 file.
Raw data is only written if both raw_dataset and raw_time_dataset are provided and the SpikeData contains raw_data and raw_time attributes.
For raster style, the bin size is stored as an attribute for provenance.
Parameters mirror the corresponding loader function to ease round-tripping.
The generic HDF5 format does not persist neuron_attributes or metadata; use AnalysisWorkspace.save (workspace HDF5) or export_to_pickle for full-fidelity round-trips.
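The ragged style stores one concatenated spike_times array plus a cumulative spike_times_index. A minimal NumPy sketch of that layout, and of recovering a per-unit train from it (the trains are hypothetical, and the NWB-style cumulative-count convention without a leading zero is assumed; the library may differ):

```python
import numpy as np

# Hypothetical per-unit spike times in milliseconds.
trains_ms = [np.array([1.0, 4.0]), np.array([2.5]), np.array([0.5, 3.0, 9.0])]

# Flat concatenated times plus cumulative spike counts, as 'ragged' stores them.
spike_times = np.concatenate(trains_ms)
spike_times_index = np.cumsum([len(t) for t in trains_ms])  # [2, 3, 6]

# Recover unit 2's train by slicing between consecutive index entries.
unit2 = spike_times[spike_times_index[1]:spike_times_index[2]]
print(unit2)  # [0.5 3.  9. ]
```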
- spikelab.data_loaders.data_exporters.export_spikedata_to_nwb(sd, filepath, *, spike_times_dataset='spike_times', spike_times_index_dataset='spike_times_index', group='units')[source]
Export SpikeData to a minimal NWB-like file using h5py.
- Parameters:
sd (SpikeData) – The SpikeData object to export.
filepath (str) – Path where the NWB file will be created (overwrites existing).
spike_times_dataset (str) – Name of the dataset containing concatenated spike times. Default is “spike_times” per NWB convention.
spike_times_index_dataset (str) – Name of the dataset containing cumulative indices. Default is “spike_times_index” per NWB convention.
group (str) – Name of the HDF5 group to contain the datasets. Default is “units” per NWB convention.
- Raises:
ImportError – If h5py is not available.
- Return type: None
Notes
Spike times are automatically converted from milliseconds to seconds.
The output file structure follows NWB conventions but is minimal (does not include full NWB metadata or schema validation).
Empty units (no spikes) are handled correctly in the index array.
This is compatible with the load_spikedata_from_nwb function when prefer_pynwb=False.
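The note on empty units can be seen directly in the index construction: a unit with no spikes repeats the previous cumulative count, so its slice is empty. A small sketch with hypothetical trains (times already in seconds, per the NWB convention):

```python
import numpy as np

# Unit 1 has no spikes; its cumulative count equals unit 0's.
trains_s = [np.array([0.1, 0.4]), np.array([]), np.array([0.2])]
spike_times = np.concatenate(trains_s)
spike_times_index = np.cumsum([len(t) for t in trains_s])  # [2, 2, 3]

# Slicing unit 1 between index[0] and index[1] yields an empty array.
unit1 = spike_times[spike_times_index[0]:spike_times_index[1]]
print(len(unit1))  # 0
```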
- spikelab.data_loaders.data_exporters.export_spikedata_to_kilosort(sd, folder, *, fs_Hz, spike_times_file='spike_times.npy', spike_clusters_file='spike_clusters.npy', time_unit='samples', cluster_ids=None)[source]
Export SpikeData to a KiloSort/Phy-like folder.
- Parameters:
sd (SpikeData) – The SpikeData object to export.
folder (str) – Directory path where the .npy files will be created. Created if it doesn’t exist.
fs_Hz (float) – Sampling frequency in Hz. Required for time unit conversion, especially when time_unit=’samples’.
spike_times_file (str) – Filename for the spike times array. Default is “spike_times.npy”.
spike_clusters_file (str) – Filename for the spike clusters array. Default is “spike_clusters.npy”.
time_unit (TimeUnit) – Time unit for output spike times. ‘samples’: integer sample indices (default, KiloSort standard). ‘ms’: milliseconds (float). ‘s’: seconds (float).
cluster_ids (Sequence[int] | None) – Custom cluster IDs for each unit. If None, uses sequential integers 0, 1, 2, … Length must match sd.N.
- Returns:
Paths to the created spike_times.npy and spike_clusters.npy files.
- Return type:
Notes
The output arrays have the same length (one entry per spike across all units).
Spike times are sorted by unit order, not chronologically.
Empty units (no spikes) don’t contribute entries to the output arrays.
The ‘samples’ time unit produces integer arrays suitable for KiloSort/Phy.
Cluster IDs can be arbitrary integers and don’t need to be sequential.
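The 'samples' conversion amounts to scaling millisecond times by the sampling rate and casting to integers. A sketch under stated assumptions: the exact rounding convention (round vs. floor) used by the exporter is not documented here, so rounding is an assumption:

```python
import numpy as np

fs_Hz = 30000.0  # hypothetical sampling rate
times_ms = np.array([0.5, 12.3, 100.0])

# ms -> integer sample indices; np.round is an assumption, the library may floor.
samples = np.round(times_ms * fs_Hz / 1000.0).astype(np.int64)
print(samples)  # [  15  369 3000]
```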
- spikelab.data_loaders.data_exporters.export_to_pickle(obj, filepath, *, protocol=None, s3_upload=False, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None, region_name=None)[source]
Export a spikelab data object to a pickle file.
Supported types:
SpikeData, RateData, PairwiseCompMatrix, PairwiseCompMatrixStack, RateSliceStack, SpikeSliceStack.
- Parameters:
obj – The spikelab data object to export.
filepath (str) – Path where the pickle file will be created (overwrites existing). If s3_upload=True, this should be an S3 URL (s3://bucket/key).
protocol (int | None) – Pickle protocol version. If None, uses the highest protocol available. Lower protocols (e.g., 2, 3) may be needed for compatibility with older Python versions.
s3_upload (bool) – If True, upload to S3 URL specified in filepath.
aws_access_key_id (str | None) – AWS access key ID for S3 uploads.
aws_secret_access_key (str | None) – AWS secret access key for S3 uploads.
aws_session_token (str | None) – AWS session token for temporary credentials.
region_name (str | None) – AWS region name for S3 access.
- Returns:
Path to the created pickle file (local path or S3 URL).
- Return type:
path (str)
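The local (non-S3) path amounts to a standard pickle dump with the highest protocol when protocol is None. A minimal sketch using a stand-in dict in place of a SpikeData object (export_to_pickle itself is not called here):

```python
import os
import pickle
import tempfile

# Stand-in object; the exporter would receive a SpikeData (or other listed type).
obj = {"train": [1.0, 2.5], "unit": 0}

path = os.path.join(tempfile.mkdtemp(), "export.pkl")
with open(path, "wb") as f:
    # protocol=None in the exporter maps to the highest available protocol.
    pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored == obj)  # True: lossless round-trip
```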