General data management
DataManager is a simple class for managing data produced by multiple runs of a simulation carried out in separate processes or on separate machines. Each process is assigned a unique ID and a Python Shelf object to write its data to. Each shelf is a dictionary whose keys must be strings. The DataManager can collate information across multiple shelves using the get(key) method, which returns a dictionary whose keys are the unique session names and whose values are the values written in those sessions (typically only the values will be of interest). If each value is a tuple or list, you can use get_merged(key) to get a single concatenated list. If the data type is more complicated, use the get(key) method and merge by hand. The idea is that each process generates files with names that do not interfere with each other, so that there are no file concurrency issues, and then in the data analysis phase the data generated separately by each process is merged together.
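This write-then-merge pattern can be sketched with plain shelve files (a minimal illustration of the idea only, not the DataManager implementation; the naming scheme and the helper functions open_session, get and get_merged here are assumptions for the example):

```python
import os
import shelve
import tempfile
from uuid import uuid4

basepath = tempfile.mkdtemp()   # stand-in for the manager's data directory
session_paths = []              # remembered so the collation step can find the files

def open_session():
    # Each process gets a shelf with a random, collision-free name,
    # so there are no file concurrency issues while writing.
    path = os.path.join(basepath, 'session_' + uuid4().hex)
    session_paths.append(path)
    return shelve.open(path)

# Two "processes" store a list under the same key in their own shelves.
for counts in ([3, 1, 4], [1, 5, 9]):
    s = open_session()
    s['spike_counts'] = counts
    s.close()

def get(key):
    # Collate across sessions: session name -> value stored for `key`.
    result = {}
    for path in session_paths:
        with shelve.open(path) as s:
            result[os.path.basename(path)] = s[key]
    return result

def get_merged(key):
    # If every value is a list or tuple, concatenate them into one list.
    merged = []
    for value in get(key).values():
        merged.extend(value)
    return merged
```

In the analysis phase, get('spike_counts') maps each session name to its list, and get_merged('spike_counts') flattens them into one list.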
- get(key): Return a dictionary with keys the session names, and values the values stored in that session for the given key.
- get_merged(key): If each value for every session is a list or tuple, return a single list made by concatenating them.
- get_matching(match): Return a dictionary with keys the keys matching match and values get(key). If match is a string, a matching key has to start with that string; if match is a function, a key matches if match(key) is true.
- get_merged_matching(match): Like get_merged(key), but across all keys that match.
- get_flat_matching(match): Return a flat list of every value session[key], for all sessions and all keys matching match.
- iteritems(): Return all (key, value) pairs, for each Shelf file, as an iterator (useful for large files with too much data to be loaded into memory).
- itervalues(): Return all values, for each Shelf file, as an iterator.
- values(): As for itervalues, but returns a list rather than an iterator.
- itemcount(): Return the total number of items across all the Shelf files.
- keys(): A list of all the keys across all sessions.
- session(): Return a randomly named session Shelf; multiple processes can write to these without worrying about concurrency issues.
- computer_session(): Return a consistently named Shelf specific to the user and computer; only one process at a time can write to it without concurrency issues.
- locking_session(): Return a LockingSession object, a limited proxy to the underlying Shelf which acquires and releases a lock before and after every operation, making it safe for concurrent access.
- session_filenames(): A list of all the shelf filenames for all sessions.
- make_unique_key(): Generate a unique key (using uuid4) for inserting an element into a session without overwriting data.
- basepath: The base path for data files.
- computer_name: A (hopefully) unique identifier for the user and computer, consisting of the username and the computer's network name.
- computer_session_filename: The filename of the computer-specific session file. This file should only be accessed by one process at a time; there is no way to protect against concurrent writes corrupting it.
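The locking-session idea, acquiring a lock before and releasing it after every operation, can be sketched as a small proxy class (an illustrative re-implementation over a plain dict; the real LockingSession proxies a Shelf, and the class name here is an assumption):

```python
import threading

class LockingProxy:
    """Limited proxy that takes a lock around every operation on the
    underlying mapping, so concurrent writers cannot interleave
    mid-operation (illustrative sketch over a plain dict)."""

    def __init__(self, target):
        self._target = target
        self._lock = threading.Lock()

    def __setitem__(self, key, value):
        with self._lock:
            self._target[key] = value

    def __getitem__(self, key):
        with self._lock:
            return self._target[key]

shared = LockingProxy({})
# Four concurrent writers, each storing under its own key.
threads = [threading.Thread(target=shared.__setitem__, args=('t%d' % i, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```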
The following function loads Address-Event Representation (AER) files.
See also AERSpikeMonitor for saving spikes in that format, and SpikeGeneratorGroup for reusing them in a simulation.
load_aer(filename, check_sorted=False, reinit_time=False)
Loads Address-Event Representation (AER) data files for use in Brian. Files contain spikes as a binary representation of an address (i.e. a neuron identifier) and a timestamp.
This function returns two arrays: an array of addresses (neuron indices) and an array of spike times (in seconds).
Note: for index files (which point to multiple .(ae)dat files, typically aeidx files), it returns a list of (addr, time) tuples, one per file, each as for a single file.
ids, times = load_aer('/path/to/file.aedat')
- reinit_time: If True, sets the first spike time to zero and all others relative to that one.
- check_sorted: If True, checks whether timestamps are sorted, and sorts them if necessary.
To use the spikes recorded in the AER file filename in a Brian NeuronGroup, one can do:

addr, timestamps = load_aer(filename, reinit_time=True)
G = AERSpikeGeneratorGroup((addr, timestamps))
An example script can be found in examples/misc/spikes_io.py.
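For reference, .aedat files typically start with '#'-prefixed ASCII header lines followed by fixed-size big-endian event records; in the common 32-bit format each event is a 4-byte address and a 4-byte timestamp in microseconds. A minimal reader under those assumptions (a sketch only, not Brian's load_aer; the record layout differs between AER format versions) might look like:

```python
import struct

def read_aer32(data):
    """Parse raw .aedat bytes, assuming the 32-bit event format:
    '#'-prefixed header lines, then repeated (uint32 address,
    uint32 timestamp) pairs, big-endian, timestamps in microseconds.
    Other AER versions use a different record layout."""
    # Skip ASCII header lines starting with b'#'.
    offset = 0
    while data[offset:offset + 1] == b'#':
        offset = data.index(b'\n', offset) + 1
    addrs, times = [], []
    for addr, ts in struct.iter_unpack('>II', data[offset:]):
        addrs.append(addr)
        times.append(ts * 1e-6)  # convert microseconds to seconds
    return addrs, times

# Synthetic file: one header line and two events.
raw = b'#!AER-DAT2.0\n' + struct.pack('>IIII', 7, 1000, 9, 2500)
addrs, times = read_aer32(raw)
```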