Brian hears is an auditory modelling library for Python. It is part of the neural network simulator package Brian, but can also be used on its own. To download Brian hears, simply download Brian: Brian hears is included as part of the package.
Brian hears is primarily designed for generating and manipulating sounds, and applying large banks of filters. We import the package by writing:
from brian import *
from brian.hears import *
Then, for example, to generate a tone or a white noise we would write:
sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)
These sounds can then be manipulated in various ways, for example:
sound = sound1+sound2
sound = sound.ramp()
If you have the pygame package installed, you can also play these sounds:
sound.play()
We can filter these sounds through a bank of 3000 gammatone filters covering the human auditory range as follows:
cf = erbspace(20*Hz, 20*kHz, 3000)
fb = Gammatone(sound, cf)
output = fb.process()
The output of this would look something like this (zoomed into one region):
Alternatively, if we’re interested in modelling auditory nerve fibres, we could feed the output of this filterbank directly into a group of neurons defined with Brian:
# Half-wave rectification and compression [x]^(1/3)
ihc = FunctionFilterbank(fb, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
This model would give output something like this:
The human cochlea applies the equivalent of 3000 auditory filters, which poses a technical problem for modellers that this package is designed to address. At a typical sample rate, the output of 3000 filters would saturate the computer's RAM in a few seconds. To deal with this, we use online computation: we only ever keep in memory the output of the filters for a relatively short duration (say, the most recent 20 ms), do our modelling with these values, and then discard them. Although this requires that some models be rewritten for online rather than offline computation, it allows us to easily handle models with very large numbers of channels. 3000 or 6000 channels for human monaural or binaural processing is straightforward, and even much larger banks of filters can be used (for example, around 30,000 in Goodman DFM, Brette R (2010). Spike-timing-based computation in sound localization. PLoS Comput. Biol. 6(11): e1000993. doi:10.1371/journal.pcbi.1000993). Techniques for online computation are discussed below in the section Online computation.
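The RAM figure is easy to verify: at a 44.1 kHz sample rate, 3000 channels of double-precision output cost about 1 GB per second of sound. A back-of-the-envelope check (plain arithmetic, independent of Brian hears):

```python
# Rough memory cost of storing the full output of a large filterbank
# (illustrative arithmetic only; not part of the Brian hears API).
samplerate = 44100      # samples per second
nchannels = 3000        # one output channel per filter
bytes_per_sample = 8    # float64

bytes_per_second = samplerate * nchannels * bytes_per_sample
gb_per_second = bytes_per_second / 1024.0**3
# roughly 1 GB of RAM per second of sound, hence the need for online,
# buffered computation when using thousands of channels
```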
Brian hears consists of classes and functions for defining sounds, filter chains, cochlear models, neuron models and head-related transfer functions. These classes are designed to be modular and easily extendable. Typically, a model will consist of a chain starting with a sound which is plugged into a chain of filter banks, which are then plugged into a neuron model.
The two main classes in Brian hears are Sound and Filterbank, which function very similarly. Each consists of multiple channels (typically just 1 or 2 in the case of sounds, and many in the case of filterbanks, but in principle any number of channels is possible for either). The difference is that a filterbank has an input source, which can be either a sound or another filterbank.
All scripts using Brian hears should start by importing the Brian and Brian hears packages as follows:
from brian import *
from brian.hears import *
See also
Reference documentation for Brian hears, which covers everything in this overview in detail, and more.
List of examples of using Brian hears.
Sounds can be loaded from a WAV or AIFF file with the loadsound() function (and saved with the savesound() function or Sound.save() method), or by initialising with a filename:
sound = loadsound('test.wav')
sound = Sound('test.aif')
sound.save('test.wav')
Various standard types of sounds can also be constructed, e.g. pure tones, white noise, clicks and silence:
sound = tone(1*kHz, 1*second)
sound = whitenoise(1*second)
sound = click(1*ms)
sound = silence(1*second)
You can pass a function of time or an array to initialise a sound:
# Equivalent to Sound.tone
sound = Sound(lambda t:sin(50*Hz*2*pi*t), duration=1*second)
# Equivalent to Sound.whitenoise
sound = Sound(randn(int(1*second*44.1*kHz)), samplerate=44.1*kHz)
Multiple channel sounds can be passed as a list or tuple of filenames, arrays or Sound objects:
sound = Sound(('left.wav', 'right.wav'))
sound = Sound((randn(44100), randn(44100)), samplerate=44.1*kHz)
sound = Sound((Sound.tone(1*kHz, 1*second),
               Sound.tone(2*kHz, 1*second)))
A multi-channel sound is also a numpy array of shape (nsamples, nchannels), and can be initialised this way (or converted to a standard numpy array):
sound = Sound(randn(44100, 2), samplerate=44.1*kHz)
arr = array(sound)
Sounds can be added and multiplied:
sound = Sound.tone(1*kHz, 1*second)+0.1*Sound.whitenoise(1*second)
For more details on combining and operating on sounds, including shifting them in time, repeating them, resampling them, ramping them, finding and setting intensities, plotting spectrograms, etc., see Sound.
Sounds can be played using the play() function or Sound.play() method:
play(sound)
sound.play()
Sequences of sounds can be played as:
play(sound1, sound2, sound3)
The number of channels in a sound can be found using the nchannels attribute, and individual channels can be extracted using the Sound.channel() method, or using the left and right attributes in the case of stereo sounds:
print sound.nchannels
print amax(abs(sound.left-sound.channel(0)))
As an example of using this, the following swaps the channels in a stereo sound:
sound = Sound('test_stereo.wav')
swappedsound = Sound((sound.right, sound.left))
swappedsound.play()
The level of a sound can be computed and changed with the sound.level attribute. Levels are returned in dB, which is a special unit in Brian hears. For example, 10*dB+10 will raise an error because 10 does not have units of dB. The multiplicative gain corresponding to a level in dB can be computed with the function gain(level). All dB values are measured as RMS dB SPL assuming that the values of the sound object are measured in Pascals. Some examples:
sound = whitenoise(100*ms)
print sound.level
sound.level = 60*dB
sound.level += 10*dB
sound *= gain(-10*dB)
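The level and gain arithmetic can be reproduced with plain numpy to see what is being computed. This sketch assumes the convention stated above (RMS dB SPL with sound values in Pascals, reference pressure 2e-5 Pa); the functions here are illustrative reimplementations, not the Brian hears ones:

```python
import numpy as np

p0 = 2e-5  # reference pressure in Pascals for dB SPL

def level_dB_SPL(x):
    # RMS level of a signal in dB SPL, assuming values in Pascals
    return 20 * np.log10(np.sqrt(np.mean(x**2)) / p0)

def gain(level_dB):
    # multiplicative gain corresponding to a change of level_dB decibels
    return 10 ** (level_dB / 20.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(44100)
x *= gain(60 - level_dB_SPL(x))   # set the level to 60 dB SPL
x *= gain(-10)                    # attenuate by 10 dB, giving 50 dB SPL
```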
The standard way to set up a model based on filterbanks is to start with a sound and then construct a chain of filterbanks that modify it, for example a common model of cochlear filtering is to apply a bank of gammatone filters, and then half wave rectify and compress it (for example, with a 1/3 power law). This can be achieved in Brian hears as follows (for 3000 channels in the human hearing range from 20 Hz to 20 kHz):
cfmin, cfmax, cfN = 20*Hz, 20*kHz, 3000
cf = erbspace(cfmin, cfmax, cfN)
sound = Sound('test.wav')
gfb = Gammatone(sound, cf)
ihc = FunctionFilterbank(gfb, lambda x: clip(x, 0, Inf)**(1.0/3.0))
The erbspace() function constructs an array of centre frequencies on the ERB scale. The Gammatone(source, cf) class creates a bank of gammatone filters with inputs coming from source and centre frequencies given by the array cf. FunctionFilterbank(source, func) creates a bank of filters that applies the given function func to the inputs in source.
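The ERB-scale spacing can be sketched with plain numpy using the standard Glasberg & Moore (1990) formula. Whether erbspace() uses exactly these constants is an assumption; this is an illustrative reimplementation, not the library code:

```python
import numpy as np

def erbspace_sketch(low, high, N):
    # centre frequencies (Hz) uniformly spaced on the ERB-rate scale,
    # using the Glasberg & Moore (1990) conversion formulas
    erb = lambda f: 21.4 * np.log10(0.00437 * f + 1)   # Hz -> ERB rate
    inv = lambda e: (10 ** (e / 21.4) - 1) / 0.00437   # ERB rate -> Hz
    return inv(np.linspace(erb(low), erb(high), N))

cf = erbspace_sketch(20.0, 20000.0, 3000)
# endpoints are hit exactly; spacing is narrow at low cf, wide at high cf
```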
Filterbanks can be added and multiplied, for example to combine a linear and a nonlinear path:
sum_path_fb = 0.1*linear_path_fb+0.2*nonlinear_path_fb
The input to a filterbank must have either a single channel or the same number of channels as the filterbank. In the former case, the single channel is duplicated for each of the output channels. However, you might want to apply gammatone filters to a stereo sound, for example; in this case it's not clear how to duplicate the channels, so you have to specify it explicitly. You can do this using the Repeat, Tile, Join and Interleave filterbanks. For example, if the input is a stereo sound with channels LR then you can get an output with channels LLLRRR or LRLRLR by writing (respectively):
fb = Repeat(sound, 3)
fb = Tile(sound, 3)
To combine multiple filterbanks into one, you can either join them in series or interleave them, as follows:
fb = Join(source1, source2)
fb = Interleave(source1, source2)
For a more general (but more complicated) approach, see RestructureFilterbank.
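The channel orderings produced by Repeat and Tile correspond to numpy's repeat and tile along the channel axis. A minimal sketch on a plain (nsamples, nchannels) array standing in for a stereo sound:

```python
import numpy as np

# column 0 = left channel, column 1 = right channel
stereo = np.array([[1, 2],
                   [3, 4]])

repeated = np.repeat(stereo, 3, axis=1)   # channel order LLLRRR
tiled = np.tile(stereo, (1, 3))           # channel order LRLRLR
```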
Two of the most important generic filterbanks (upon which many of the others are based) are LinearFilterbank and FIRFilterbank. The former is a generic digital filter for FIR and IIR filters. The latter is specifically for FIR filters. These can be implemented with the former, but the implementation is optimised using FFTs with the latter (which can often be hundreds of times faster, particularly for long impulse responses). IIR filter banks can be designed using IIRFilterbank which is based on the syntax of the iirdesign scipy function.
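The reason the FFT-based implementation wins for long impulse responses is standard: multiplying zero-padded FFTs computes the same linear convolution in O(n log n) instead of O(n·m). A minimal numpy illustration (not the FIRFilterbank code itself):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)   # input signal
h = rng.standard_normal(64)    # FIR impulse response

# direct linear convolution: O(len(x)*len(h))
direct = np.convolve(x, h)

# FFT-based: zero-pad both to length len(x)+len(h)-1, multiply spectra,
# transform back: O(n log n), identical result up to rounding
n = len(x) + len(h) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
```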
You can change the input source to a Filterbank by modifying its source attribute, e.g. to change the input sound of a filterbank fb you might do:
fb.source = newsound
Note that the new source should have the same number of channels.
You can implement control paths (using the output of one filter chain path to modify the parameters of another filter chain path) using ControlFilterbank (see reference documentation for more details). For examples of this in action, see the following:
To create spiking neuron models based on filter chains, you use the FilterbankGroup class. This acts exactly like a standard Brian NeuronGroup except that you give a source filterbank and choose a state variable in the target equations for the output of the filterbank. A simple auditory nerve fibre model would take the inner hair cell model from earlier, and feed it into a noisy leaky integrate-and-fire model as follows:
# Inner hair cell model as before
cfmin, cfmax, cfN = 20*Hz, 20*kHz, 3000
cf = erbspace(cfmin, cfmax, cfN)
sound = Sound.whitenoise(100*ms)
gfb = Gammatone(sound, cf)
ihc = FunctionFilterbank(gfb, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
G = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
# Run, and raster plot of the spikes
M = SpikeMonitor(G)
run(sound.duration)
raster_plot(M)
show()
And here’s the output (after 6 seconds of computation on a 2GHz laptop):
Often, you want to use log-scaled frequency axes in plots, but the built-in matplotlib labelling for log-scaled axes doesn't work well for frequencies. We provide two functions (log_frequency_xaxis_labels() and log_frequency_yaxis_labels()) to automatically set useful axis labels. For example:
cf = erbspace(100*Hz, 10*kHz)
...
semilogx(cf, response)
axis('tight')
log_frequency_xaxis_labels()
Typically in auditory modelling, we precompute the entire output of each channel of the filterbank (“offline computation”), and then work with that. This is straightforward, but puts a severe limit on the number of channels we can use or the length of time we can work with (otherwise the RAM would be quickly exhausted). Brian hears allows us to use a very large number of channels in filterbanks, but at the cost of only storing the output of the filterbanks for a relatively short period of time (“online computation”). This requires a slight change in the way we use the output of the filterbanks, but is actually not too difficult. For example, suppose we wanted to compute the vector of RMS values for each channel of the output of the filterbank. Traditionally, or if we just use the syntax output = fb.process() in Brian hears, we have an array output of shape (nsamples, nchannels). We could compute the vector of RMS values as:
rms = sqrt(mean(output**2, axis=0))
To do the same thing with online computation, we simply store a vector of the running sum of squares, and update it for each buffered segment as it is computed. At the end of the processing, we divide the sum of squares by the number of samples and take the square root.
The Filterbank.process() method allows us to pass an optional function f(output, running) of two arguments. In this case, process() will first call running = f(output, 0) for the first buffered segment output. It will then call running = f(output, running) for each subsequent segment. In other words, it will “accumulate” the output of f, passing the output of each call to the subsequent call. To compute the vector of RMS values then, we simply do:
def sum_of_squares(input, running):
    return running + sum(input**2, axis=0)
rms = sqrt(fb.process(sum_of_squares)/nsamples)
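The same accumulation pattern can be mocked with plain numpy arrays to see exactly what process(f) does with its optional argument (mock_process here is an illustrative stand-in, not the library method):

```python
import numpy as np

def mock_process(segments, f):
    # call f once per buffered segment, threading the running value
    # through the calls, exactly as described for process(f)
    running = 0
    for output in segments:
        running = f(output, running)
    return running

data = np.arange(12.0).reshape(6, 2)          # (nsamples, nchannels)
segments = [data[0:2], data[2:4], data[4:6]]  # three buffered chunks

def sum_of_squares(input, running):
    return running + np.sum(input**2, axis=0)

rms = np.sqrt(mock_process(segments, sum_of_squares) / len(data))
# identical to the offline result np.sqrt(np.mean(data**2, axis=0))
```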
If the computation you wish to perform is more complicated than can be achieved with the process() method, you can derive a class from Filterbank (see that class’ reference documentation for more details on this).
The Sound, OnlineSound and Filterbank classes (and all classes derived from them) all implement the same buffering mechanism. The purpose of this is to allow for efficient processing of multiple channels in buffers. Rather than precomputing the application of filters to all channels (which for large numbers of channels or long sounds would not fit in memory), we process small chunks at a time. The entire design of these classes is based on the idea of buffering, as defined by the base class Bufferable (see section Options). Each class has two methods, buffer_init() to initialise the buffer, and buffer_fetch(start, end) to fetch the portion of the buffer from samples with indices from start to end (not including end as standard for Python). The buffer_fetch(start, end) method should return a 2D array of shape (end-start, nchannels) with the buffered values.
From the user's point of view, all you need to do, having set up a chain of Sound and Filterbank objects, is to call buffer_fetch(start, end) repeatedly. If the output of a Filterbank is being plugged into a FilterbankGroup object, everything is handled automatically. For cases where the number of channels is small or the length of the input source is short, you can use the Filterbank.fetch(duration) method to automatically handle the initialisation and repeated application of buffer_fetch.
To extend Filterbank, it is often sufficient just to implement the buffer_apply(input) method. See the documentation for Filterbank for more details.
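The buffer_init()/buffer_fetch(start, end) contract described above can be illustrated with a toy class (an illustrative sketch only; the real Bufferable base class does considerably more):

```python
import numpy as np

class ToyBufferable(object):
    # minimal object obeying the buffering interface described above:
    # buffer_fetch(start, end) returns an (end-start, nchannels) array
    def __init__(self, data):
        self.data = data                  # shape (nsamples, nchannels)

    def buffer_init(self):
        pass                              # nothing to reset in this toy

    def buffer_fetch(self, start, end):
        # samples start..end-1 (end exclusive, as standard for Python),
        # zero-padded past the end of the underlying data
        out = np.zeros((end - start, self.data.shape[1]))
        valid = self.data[start:min(end, len(self.data))]
        out[:len(valid)] = valid
        return out

src = ToyBufferable(np.ones((10, 3)))
src.buffer_init()
chunk = src.buffer_fetch(8, 12)   # two valid samples plus two of padding
```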
Brian hears comes with a collection of predefined filter classes to be used as building blocks by the user. All of them are implemented as filterbanks.
First, a series of standard filters widely used in audio processing are available:
Class | Description | Example |
---|---|---|
IIRFilterbank | Bank of lowpass, highpass, bandpass or bandstop filters of type Chebyshev, elliptic, etc. | Example: IIRfilterbank (hears) |
Butterworth | Bank of lowpass, highpass, bandpass or bandstop Butterworth filters | Example: butterworth (hears) |
LowPass | Bank of lowpass filters of order 1 | Example: cochleagram (hears) |
Second, the library provides linear auditory filters developed to model the middle ear transfer function and the frequency analysis of the cochlea:
Class | Description | Example |
---|---|---|
MiddleEar | Linear bandpass filter, based on middle-ear frequency response properties | Example: tan_carney_simple_test (hears/tan_carney_2003) |
Gammatone | Bank of IIR gammatone filters (based on Slaney implementation) | Example: gammatone (hears) |
ApproximateGammatone | Bank of IIR gammatone filters (based on Hohmann implementation) | Example: approximate_gammatone (hears) |
LogGammachirp | Bank of IIR gammachirp filters with logarithmic sweep (based on Irino implementation) | Example: log_gammachirp (hears) |
LinearGammachirp | Bank of FIR chirp filters with linear sweep and gamma envelope | Example: linear_gammachirp (hears) |
LinearGaborchirp | Bank of FIR chirp filters with linear sweep and Gaussian envelope | |
Finally, Brian hears comes with a series of complex nonlinear cochlear models developed to capture nonlinear effects such as level-dependent filter bandwidth, two-tone suppression, level-dependent peak position, etc.
Class | Description | Example |
---|---|---|
DRNL | Dual resonance nonlinear filter as described in Lopez-Poveda and Meddis, JASA 2001 | Example: drnl (hears) |
DCGC | Compressive gammachirp auditory filter as described in Irino and Patterson, JASA 2001 | Example: dcgc (hears) |
TanCarney | Auditory phenomenological model as described in Tan and Carney, JASA 2003 | Example: tan_carney_simple_test (hears/tan_carney_2003) |
ZhangSynapse | Model of an inner hair cell – auditory nerve synapse (Zhang et al., JASA 2001) | Example: tan_carney_simple_test (hears/tan_carney_2003) |
Figs. 1 and 3 (spiking output without spiking/refractory period) should reproduce the output of the AN3_test_tone.m and AN3_test_click.m scripts, available in the code accompanying the paper Tan & Carney (2003). This MATLAB code is available from http://www.urmc.rochester.edu/labs/Carney-Lab/publications/auditory-models.cfm
Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.
import numpy as np
import matplotlib.pyplot as plt
from brian.stdunits import kHz, Hz, ms
from brian.network import Network
from brian.monitor import StateMonitor, SpikeMonitor
from brian.globalprefs import set_global_preferences
#set_global_preferences(useweave=True)
from brian.hears import (Sound, get_samplerate, set_default_samplerate, tone,
                         click, silence, dB, TanCarney, MiddleEar, ZhangSynapse)
from brian.clock import reinit_default_clock
set_default_samplerate(50*kHz)
sample_length = 1 / get_samplerate(None)
cf = 1000 * Hz
print 'Testing click response'
duration = 25*ms
levels = [40, 60, 80, 100, 120]
# a click of two samples
tones = Sound([Sound.sequence([click(sample_length*2, peak=level*dB),
                               silence(duration=duration - sample_length)])
               for level in levels])
ihc = TanCarney(MiddleEar(tones), [cf] * len(levels), update_interval=1)
syn = ZhangSynapse(ihc, cf)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
R_mon = StateMonitor(syn, 'R', record=True, clock=syn.clock)
spike_mon = SpikeMonitor(syn)
net = Network(syn, s_mon, R_mon, spike_mon)
net.run(duration * 1.5)
for idx, level in enumerate(levels):
    plt.figure(1)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(s_mon.times / ms, s_mon[idx])
    plt.xlim(0, 25)
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(15, np.nanmax(s_mon[idx])/2., 'Peak SPL=%s SPL' % str(level*dB))
    ymin, ymax = plt.ylim()
    if idx == 0:
        plt.title('Click responses')
    plt.figure(2)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(R_mon.times / ms, R_mon[idx])
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(15, np.nanmax(s_mon[idx])/2., 'Peak SPL=%s SPL' % str(level*dB))
    plt.ylim(ymin, ymax)
    if idx == 0:
        plt.title('Click responses (with spikes and refractoriness)')
    plt.plot(spike_mon.spiketimes[idx] / ms,
             np.ones(len(spike_mon.spiketimes[idx])) * np.nanmax(R_mon[idx]), 'rx')
print 'Testing tone response'
reinit_default_clock()
duration = 60*ms
levels = [0, 20, 40, 60, 80]
tones = Sound([Sound.sequence([tone(cf, duration).atlevel(level*dB).ramp(when='both',
                                                                         duration=10*ms,
                                                                         inplace=False),
                               silence(duration=duration/2)])
               for level in levels])
ihc = TanCarney(MiddleEar(tones), [cf] * len(levels), update_interval=1)
syn = ZhangSynapse(ihc, cf)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
R_mon = StateMonitor(syn, 'R', record=True, clock=syn.clock)
spike_mon = SpikeMonitor(syn)
net = Network(syn, s_mon, R_mon, spike_mon)
net.run(duration * 1.5)
for idx, level in enumerate(levels):
    plt.figure(3)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(s_mon.times / ms, s_mon[idx])
    plt.xlim(0, 120)
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(1.25 * duration/ms, np.nanmax(s_mon[idx])/2., '%s SPL' % str(level*dB))
    ymin, ymax = plt.ylim()
    if idx == 0:
        plt.title('CF=%.0f Hz - Response to Tone at CF' % cf)
    plt.figure(4)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(R_mon.times / ms, R_mon[idx])
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(1.25 * duration/ms, np.nanmax(R_mon[idx])/2., '%s SPL' % str(level*dB))
    plt.ylim(ymin, ymax)
    if idx == 0:
        plt.title('CF=%.0f Hz - Response to Tone at CF (with spikes and refractoriness)' % cf)
    plt.plot(spike_mon.spiketimes[idx] / ms,
             np.ones(len(spike_mon.spiketimes[idx])) * np.nanmax(R_mon[idx]), 'rx')
plt.show()
This script demonstrates how to save/load spikes in AER format from inside Brian.
from brian import *
####################### SAVING #########################
# First we need to generate some spikes
N = 1000
g = PoissonGroup(N, 200*Hz)
# And set up a monitor to record those spikes to the disk
Maer = AERSpikeMonitor(g, './dummy.aedat')
# Now we can run
run(100*ms)
# The file is closed automatically when the script ends, but here we
# need to close it explicitly because we re-use it from within the same script
Maer.close()
clear(all = True)
reinit_default_clock()
####################### LOADING ########################
# Now we can re-load the spikes
addr, timestamps = load_aer('./dummy.aedat')
# Feed them to a SpikeGeneratorGroup
group = SpikeGeneratorGroup(N, (addr, timestamps))
# The group can now be used as any other, here we choose to monitor
# the spikes
newM = SpikeMonitor(group, record = True)
run(100*ms)
# And plot the result
raster_plot(newM)
show()
Network (CUBA) with short-term synaptic plasticity for excitatory synapses (Depressing at long timescales, facilitating at short timescales)
from brian import *
from time import time
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(4000, model=eqs, threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + rand(4000) * 10 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=.02)
stp = STP(Ce, taud=200 * ms, tauf=20 * ms, U=.2)
M = SpikeMonitor(P)
rate = PopulationRateMonitor(P)
t1 = time()
run(1 * second)
t2 = time()
print "Simulation time:", t2 - t1, "s"
print M.nspikes, "spikes"
subplot(211)
raster_plot(M)
subplot(212)
plot(rate.times / ms, rate.smooth_rate(5 * ms))
show()
These examples cover some basic topics in writing Brian scripts in Python. The complete source code for the examples is available in the examples folder in the extras package.
Implementation example of the dual resonance nonlinear (DRNL) filter with parameters fitted for humans, as described in Lopez-Poveda, E. and Meddis, R., A human nonlinear cochlear filterbank, JASA 2001.
A class called DRNL implementing this model is available in the library.
The entire pathway consists of the sum of a linear and a nonlinear pathway.
The linear path consists of a bank of bandpass filters (second order gammatone), a low pass function, and a gain/attenuation factor, g, in a cascade.
The nonlinear path is a cascade consisting of a bank of gammatone filters, a compression function, a second bank of gammatone filters, and a low pass function, in that order.
The parameters are given in the form 10**(p0 + m*log10(cf)).
from brian import *
from brian.hears import *
simulation_duration = 50*ms
samplerate = 50*kHz
level = 50*dB # level of the input sound in rms dB SPL
sound = whitenoise(simulation_duration, samplerate).ramp()
sound.level = level
nbr_cf = 50 #number of centre frequencies
#center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz,1000*Hz, nbr_cf)
#conversion to stapes velocity (the units required by the subsequent stages)
sound = sound*0.00014
#### Linear Pathway ####
#bandpass filter (second order gammatone filter)
center_frequencies_linear = 10**(-0.067+1.016*log10(center_frequencies))
bandwidth_linear = 10**(0.037+0.785*log10(center_frequencies))
order_linear = 3
gammatone = ApproximateGammatone(sound, center_frequencies_linear,
                                 bandwidth_linear, order=order_linear)
#linear gain
g = 10**(4.2-0.48*log10(center_frequencies))
func_gain = lambda x:g*x
gain = FunctionFilterbank(gammatone, func_gain)
#low pass filter(cascade of 4 second order lowpass butterworth filters)
cutoff_frequencies_linear = center_frequencies_linear
order_lowpass_linear = 2
lp_l = LowPass(gain, cutoff_frequencies_linear)
lowpass_linear = Cascade(gain, lp_l, 4)
#### Nonlinear Pathway ####
#bandpass filter (third order gammatone filters)
center_frequencies_nonlinear = center_frequencies
bandwidth_nonlinear = 10**(-0.031+0.774*log10(center_frequencies))
order_nonlinear = 3
bandpass_nonlinear1 = ApproximateGammatone(sound, center_frequencies_nonlinear,
                                           bandwidth_nonlinear,
                                           order=order_nonlinear)
#compression (linear at low level, compress at high level)
a = 10**(1.402+0.819*log10(center_frequencies)) #linear gain
b = 10**(1.619-0.818*log10(center_frequencies))
v = .2 #compression exponent
func_compression = lambda x: sign(x)*minimum(a*abs(x), b*abs(x)**v)
compression = FunctionFilterbank(bandpass_nonlinear1, func_compression)
#bandpass filter (third order gammatone filters)
bandpass_nonlinear2 = ApproximateGammatone(compression,
                                           center_frequencies_nonlinear,
                                           bandwidth_nonlinear,
                                           order=order_nonlinear)
#low pass filter
cutoff_frequencies_nonlinear = center_frequencies_nonlinear
order_lowpass_nonlinear = 2
lp_nl = LowPass(bandpass_nonlinear2, cutoff_frequencies_nonlinear)
lowpass_nonlinear = Cascade(bandpass_nonlinear2, lp_nl, 3)
#adding the two pathways
drnl_filter = lowpass_linear+lowpass_nonlinear
drnl = drnl_filter.process()
figure()
imshow(flipud(drnl.T), aspect='auto')
show()
A very simple example Brian script showing how to implement an integrate-and-fire model. In this example, we also drive the single integrate-and-fire neuron with regularly spaced spikes from a SpikeGeneratorGroup.
from brian import *
tau = 10 * ms
Vr = -70 * mV
Vt = -55 * mV
G = NeuronGroup(1, model='V:volt', threshold=Vt, reset=Vr)
input = SpikeGeneratorGroup(1, [(0, t * ms) for t in linspace(10, 100, 25)])
C = Connection(input, G)
C[0, 0] = 2 * mV
M = StateMonitor(G, 'V', record=True)
G.V = Vr
run(100 * ms)
plot(M.times / ms, M[0] / mV)
show()
Example with named threshold and reset variables
from brian import *
eqs = '''
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
dx/dt = (ge+gi-(x+49*mV))/(20*ms) : volt
'''
P = NeuronGroup(4000, model=eqs, threshold='x>-50*mV',
                reset=Refractoriness(-60 * mV, 5 * ms, state='x'))
#P=NeuronGroup(4000,model=eqs,threshold=Threshold(-50*mV,state='x'),\
# reset=Reset(-60*mV,state='x')) # without refractoriness
P.x = -60 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=0.02)
M = SpikeMonitor(P)
run(1 * second)
raster_plot(M)
show()
Pierre Yger’s winning entry for the 2012 Brian twister.
from brian import *
import numpy, os, pylab
"""
An implementation of a simple topographical network, like those used in Mehring 2005 or Yger 2011.
Cells are arranged randomly on a 2D plane and connected according to a Gaussian profile
P(r) = exp(-d**2/(2*sigma**2)), with delays depending linearly on the distances.
Note that the exact number of synapses per neuron is not fixed here.
To avoid any border conditions, the plane is considered to be toroidal.
The script will generate a Synchronous Irregular (SI) slow regime with propagating
waves that spread in various directions, wandering over the network.
In addition, an external layer of Poisson sources stimulates some cells on the network, with
a wiring scheme such that the word BRIAN will pop up. External rates can be turned off to observe the
spontaneous activity of the 2D layer. One can observe that although the input is constant, the network
is not always responding to it.
The script will display, while running, the spikes and Vm of the excitatory cells.
Varying sigma will show the various activity structures, from a random network (s_lat > 1) to a very
locally connected one (s_lat < 0.1).
"""
### We are setting the global timestep of the simulation
Clock(0.1 * ms)
### Cell parameters ###
tau_m = 20. * ms # Membrane time constant
c_m = 0.2 * nF # Capacitance
tau_exc = 3. * ms # Synaptic time constant (excitatory)
tau_inh = 7. * ms # Synaptic time constant (inhibitory)
tau_ref = 5. * ms # Refractory period
El = -80 * mV # Leak potential
Ee = 0. * mV # Reversal potential (excitation)
Ei = -70.* mV # Reversal potential (inhibition)
Vt = -50 * mV # Spike Threshold
Vr = -60 * mV # Spike Reset
### Equation for a Conductance-based IAF ####
eqs = Equations('''
dv/dt = (El-v)/tau_m + (ge*(Ee-v)+gi*(Ei-v))/c_m : volt
dge/dt = -ge*(1./tau_exc) : uS
dgi/dt = -gi*(1./tau_inh) : uS
''')
n_cells = 12500 # Total number of cells
n_exc = int(0.8 * n_cells) # 4:1 ratio for exc/inh
size = 1. # Size of the network
simtime = 1000 * ms # Simulation time
sim_step = 1 * ms # Display snapshots every sim_step ms
epsilon = 0.02 # Probability density
s_lat = 0.2 # Spread of the lateral connections
g_exc = 4. * nS # Excitatory conductance
g_inh = 64. * nS # Inhibitory conductance
g_ext = 200 * nS # External drive
velocity = 0.3 * mm/ms # velocity
ext_rate = 100 * Hz # Rate of the external source
max_distance = size * mm/numpy.sqrt(2) # Since this is a torus
max_delay = max_distance/velocity # Needed for the connectors
### Generate the images with the letters B, R, I, A, N
### To do that, we create a png image and read it as a matrix
pylab.figure()
pylab.text(0.125, 0.4, "B R I A N", size=80)
pylab.setp(gca(), xticks=[], yticks=[])
pylab.savefig("BRIAN.png")
brian_letters = imread("BRIAN.png")
os.remove("BRIAN.png")
brian_letters = numpy.flipud(mean(brian_letters,2)).T
pylab.close()
### We create the cells and generate random positions in [0, size]x[0, size]
all_cells = NeuronGroup(n_cells, model=eqs, threshold=Vt, reset=Vr, refractory=tau_ref)
all_cells.position = size*numpy.random.rand(n_cells, 2)
exc_cells = all_cells[0:n_exc]
inh_cells = all_cells[n_exc:n_cells]
### We initialize v values slightly above Vt, to have initial spikes
all_cells.v = El + 1.1*numpy.random.rand(n_cells) * (Vt - El)
### Now we create the source that will write the word BRIAN
sources = PoissonGroup(1, ext_rate)
sources.position = array([[0.5, 0.5]])
### Function to get the distance between one position and an array of positions
### This is needed to use the vectorized form of the connections in the brian.Connection objects
### Note that the plane is wrapped, to avoid any border effects.
def get_distance(x, y):
    d1 = abs(x - y)
    min_d = numpy.minimum(d1, size - d1)
    return numpy.sqrt(numpy.sum(min_d**2, 1))
### Function returning the probabilities of connections as a functions of distances
def probas(i, j, x, y):
    distance = get_distance(x[i], y[j])
    return epsilon * numpy.exp(-distance**2/(2*s_lat**2))
### Function returning linear delays as function of distances
def delays(i, j, x, y):
    distance = get_distance(x[i], y[j])
    return 0.1*ms + (distance * mm) / velocity
### Function assessing if a cell is located in a particular letter of the word BRIAN
### Return 0 if not, and 1 if yes.
def is_in_brian(i, j, x, y):
    a, b = brian_letters.shape
    tmp_x, tmp_y = (y[j][:, 0]*a).astype(int), (y[j][:, 1]*b).astype(int)
    return 1 - brian_letters[tmp_x, tmp_y]
print "Building network with wrapped 2D gaussian profiles..."
Ce = Connection(exc_cells, all_cells, 'ge', weight=g_exc, max_delay=max_delay,
                sparseness=lambda i, j: probas(i, j, exc_cells.position, all_cells.position),
                delay=lambda i, j: delays(i, j, exc_cells.position, all_cells.position))
Ci = Connection(inh_cells, all_cells, 'gi', weight=g_inh, max_delay=max_delay,
                sparseness=lambda i, j: probas(i, j, inh_cells.position, all_cells.position),
                delay=lambda i, j: delays(i, j, inh_cells.position, all_cells.position))
Cext = Connection(sources, all_cells, 'ge', weight=g_ext, max_delay=max_delay,
                  sparseness=lambda i, j: is_in_brian(i, j, sources.position, all_cells.position))
print "--> mean probability from excitatory synapses:", Ce.W.getnnz()/float(n_exc*n_cells) * 100, "%"
print "--> mean probability from inhibitory synapses:", Ci.W.getnnz()/float((n_cells - n_exc)*n_cells) * 100, "%"
print "Setting the recorders..."
v_exc = RecentStateMonitor(exc_cells, 'v', record=True)
s_exc = SpikeCounter(exc_cells)
ion() # To enter the interactive mode
print "Initializing the plots..."
fig1 = pylab.subplot(211)
im = fig1.scatter(all_cells.position[:n_exc, 0], all_cells.position[:n_exc, 1], c=[0]*n_exc)
im.set_clim(0, 1)
fig1.set_ylabel("spikes")
pylab.colorbar(im)
fig2 = pylab.subplot(212)
im = fig2.scatter(all_cells.position[:n_exc, 0], all_cells.position[:n_exc, 1], c=[0]*n_exc)
im.set_clim(El, Vt)
fig2.set_ylabel("v")
pylab.colorbar(im)
manager = pylab.get_current_fig_manager()
print "Running network ..."
for time in xrange(int(simtime/sim_step)):
run(sim_step)
fig1.cla()
fig1.set_title("t = %g s" %((sim_step * time)/second))
idx = s_exc.count > 0
if numpy.sum(idx) > 0:
im = fig1.scatter(all_cells.position[:n_exc, 0][idx], all_cells.position[:n_exc, 1][idx], c=[0]*int(numpy.sum(idx)))
s_exc.count = numpy.zeros(n_exc) ## We reset the spike counter
fig1.set_xlim(0, size)
fig1.set_ylim(0, size)
fig1.set_ylabel("spikes")
im.set_clim(0, 1)
setp(fig1, xticks=[], yticks=[])
fig2.cla()
im = fig2.scatter(all_cells.position[:n_exc, 0], all_cells.position[:n_exc, 1], c=v_exc.values[-1])
fig2.set_xlim(0, size)
fig2.set_ylim(0, size)
fig2.set_ylabel("v")
im.set_clim(El, Vt)
setp(fig2, xticks=[], yticks=[])
manager.canvas.draw()
manager.canvas.flush_events()
ioff() # To leave the interactive mode
CUBA example with delays.
Connection (no delay): 3.5 s
DelayConnection: 5.7 s
Synapses (with precomputed offsets): 6.6 s # 6.9 s
Synapses with weave: 6.4 s
Synapses with zero delays: 5.2 s
from brian import *
import time
start_time = time.time()
taum = 20 * ms
taue = 5 * ms
taui = 10 * ms
Vt = -50 * mV
Vr = -60 * mV
El = -49 * mV
eqs = Equations('''
dv/dt = (ge+gi-(v-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
P = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr, refractory=5 * ms)
P.v = Vr
P.ge = 0 * mV
P.gi = 0 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight (voltage)
wi = (-20 * 4.5 / 10) * mV # inhibitory synaptic weight
Se = Synapses(Pe, P, model = 'w : 1', pre = 'ge += we')
Si = Synapses(Pi, P, model = 'w : 1', pre = 'gi += wi')
Se[:,:]=0.02
Si[:,:]=0.02
Se.delay='rand()*ms'
Si.delay='rand()*ms'
P.v = Vr + rand(len(P)) * (Vt - Vr)
# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)
# A population rate monitor
M = PopulationRateMonitor(P)
print "Network construction time:", time.time() - start_time, "seconds"
print len(P), "neurons in the network"
print "Simulation running..."
run(1 * msecond)
start_time = time.time()
run(1 * second)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"
plot(M.times / ms, M.smooth_rate(2 * ms, 'gaussian'))
show()
The first thing to do in any Brian program is to load Brian and the names of its functions and classes. The standard way to do this is to use the Python from ... import * statement.
from brian import *
The neuron model we will use in this tutorial is the simplest possible leaky integrate and fire neuron, defined by the differential equation:
tau dV/dt = -(V-El)
and with a threshold value Vt and reset value Vr.
Brian has a system for defining physical quantities (quantities with a physical dimension such as time). The code below illustrates how to use this system, which (mostly) works just as you’d expect.
tau = 20 * msecond # membrane time constant
Vt = -50 * mvolt # spike threshold
Vr = -60 * mvolt # reset value
El = -60 * mvolt # resting potential (same as the reset)
The built-in standard units in Brian consist of all the fundamental SI units like second and metre, along with a selection of derived SI units such as volt, farad, coulomb. All names are lowercase following the SI standard. In addition, there are scaled versions of these units using the standard SI prefixes m=1/1000, k=1000, etc.
The simplest way to define a neuron model in Brian is to write a list of the differential equations that define it. For the moment, we’ll just give the simplest possible example, a single differential equation. You write it in the following form:
dx/dt = f(x) : unit
where x is the name of the variable, f(x) can be any valid Python expression, and unit is the physical units of the variable x. In our case we will write:
dV/dt = -(V-El)/tau : volt
to define the variable V with units volt.
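As a sanity check on this equation, here is a minimal sketch of the same dynamics in plain NumPy rather than Brian, using forward Euler with the tutorial's parameter values: starting from threshold, V decays exponentially back towards El with time constant tau.

```python
import numpy as np

# Plain-Python sketch of dV/dt = -(V-El)/tau with forward Euler
# (values from this tutorial; Brian itself handles units and integration)
tau = 20e-3   # membrane time constant, in seconds
El = -60e-3   # resting potential, in volts
Vt = -50e-3   # spike threshold
dt = 0.1e-3   # Brian's default timestep

V = Vt                              # start at threshold
for _ in range(int(0.1 / dt)):      # integrate for 100 ms = 5*tau
    V += dt * (-(V - El) / tau)
# V has now decayed almost all the way back to El
```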
To complete the specification of the model, we also define a threshold and reset value and create a group of 40 neurons with this model.
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt', threshold=Vt, reset=Vr)
The statement creates a new object ‘G’ which is an instance of the Brian class NeuronGroup, initialised with the values in the line above and 40 neurons. In Python, you can call a function or initialise a class using keyword arguments as well as ordered arguments, so if I defined a function f(x,y) I could call it as f(1,2) or as f(y=2,x=1) and get the same effect. See the Python tutorial for more information on this.
For the moment, we leave the neurons in this group unconnected to each other: each evolves separately from the others.
Finally, we run the simulation for 1 second of simulated time. By default, the simulator uses a timestep dt = 0.1 ms.
run(1 * second)
And that’s it! To see some of the output of this network, go to the next part of the tutorial.
The units system of Brian is useful for ensuring that everything is consistent, and that you don't make hard-to-find mistakes in your code by using the wrong units. Try changing the units of one of the parameters and see what happens.
You should see an error message with a Python traceback (telling you which functions were being called when the error happened), ending in a line something like:
brian.units.DimensionMismatchError: The differential equations
are not homogeneous!, dimensions were (m^2 kg s^-3 A^-1)
(m^2 kg s^-4 A^-1)
Dynamics of a network of sparsely connected inhibitory current-based integrate-and-fire neurons. Individual neurons fire irregularly at low rate but the network is in an oscillatory global activity regime where neurons are weakly synchronized.
from brian import *
N = 5000
Vr = 10 * mV
theta = 20 * mV
tau = 20 * ms
delta = 2 * ms
taurefr = 2 * ms
duration = .1 * second
C = 1000
sparseness = float(C)/N
J = .1 * mV
muext = 25 * mV
sigmaext = 1 * mV
eqs = """
dV/dt = (-V+muext + sigmaext * sqrt(tau) * xi)/tau : volt
"""
group = NeuronGroup(N, eqs, threshold=theta,
reset=Vr, refractory=taurefr)
group.V = Vr
conn = Connection(group, group, state='V', delay=delta,
weight = -J,
sparseness=sparseness)
M = SpikeMonitor(group)
LFP = PopulationRateMonitor(group, bin=0.4 * ms)
run(duration)
subplot(211)
raster_plot(M)
xlim(0, duration/ms)
subplot(212)
plot(LFP.times_/ms, LFP.rate)
xlim(0, duration/ms)
show()
A Connection object has an attribute W which is its connection matrix.
Brian’s system for connection matrices can be slightly confusing. The way it works is roughly as follows. There are two types of connection matrix data structures, ConstructionMatrix and ConnectionMatrix. The construction matrix types are used for building connectivity, and are optimised for insertion and deletion of elements, but access is slow. The connection matrix types are used when the simulation is running, and are optimised for fast access, but not for adding/removing or modifying elements. When a Connection object is created, it is given a construction matrix data type, and when the network is run, this matrix is converted to its corresponding connection matrix type. As well as this construction/connection matrix type distinction, there is also the distinction between dense/sparse/dynamic matrices, each of which have their own construction and connection versions.
The dense matrix structure is very simple, both the construction and connection types are basically just 2D numpy arrays.
The sparse and dynamic matrix structures are very different for construction and connection. Both the sparse and dynamic construction matrices are essentially the scipy.sparse.lil_matrix sparse matrix type, with some slight improvements to scipy's data type to make it more efficient for our case.
The sparse and dynamic connection matrix structures are documented in more detail in the reference pages for SparseConnectionMatrix and DynamicConnectionMatrix.
For customised run-time modifications to sparse and dense connection matrices you have two options: modify the data structures directly using the information in the reference pages linked above, or use the methods defined in the ConnectionMatrix class, which work for dense, sparse and dynamic matrix structures and do not depend on implementation-specific details. These methods provide element, row and column access. The row and column access methods use either DenseConnectionVector or SparseConnectionVector objects. The dense connection vector is just a 1D numpy array whose length is the size of the row/column. The sparse connection vector is slightly more complicated (but not much); see its documentation for details. The idea is that in most cases, both dense and sparse connection vectors can be operated on without having to know how they work, so for example if v is a ConnectionVector then 2*v is of the same type. So for a ConnectionMatrix W, the following should work whatever the structure of W:
W.set_row(i, 2*W.get_row(i))
Or equivalently:
W[i,:] = 2*W[i,:]
The syntax W[i,:], W[:,i] and W[i,j] is supported for integers i and j for (respectively) row, column and element access.
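The same construction/run-time trade-off can be sketched with scipy's own sparse types (the sparse construction matrix is essentially a lil_matrix): build with lil_matrix, which is cheap to insert into, then convert to CSR for fast row access during the run. This is only an illustration of the design, not Brian's actual classes.

```python
from scipy import sparse

# Construction phase: lil_matrix is fast for inserting elements
W = sparse.lil_matrix((4, 4))
W[0, 1] = 0.5
W[0, 3] = 0.25
W[2, 2] = 1.0

# Run phase: CSR is fast for row access but slow for insertion
Wc = W.tocsr()

# Double row 0 in place, analogous to W.set_row(i, 2*W.get_row(i));
# CSR stores row i's nonzero values in data[indptr[i]:indptr[i+1]]
Wc.data[Wc.indptr[0]:Wc.indptr[1]] *= 2
```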
Library models are defined using the MembraneEquation class. This is a subclass of Equations which is defined by a capacitance C and a sum of currents. The following instruction:
eqs=MembraneEquation(200*pF)
defines the equation C*dvm/dt=0*amp, with the membrane capacitance C=200 pF. The name of the membrane potential variable can be changed as follows:
eqs=MembraneEquation(200*pF,vm='V')
The main interest of this class is that one can use it to build models by adding currents to a membrane equation. The Current class is a subclass of Equations which defines a current to be added to a membrane equation. For example:
eqs=MembraneEquation(200*pF)+Current('I=(V0-vm)/R : amp',current_name='I')
defines the same equation as:
eqs=Equations('''
dvm/dt=I/(200*pF) : volt
I=(V0-vm)/R : amp
''')
The keyword current_name is optional if there is no ambiguity, i.e., if there is only one variable or only one variable with amp units. As for standard equations, Current objects can be initialised with a multiline string (several equations). By default, the convention for the current direction is the one for injected current. For the ionic current convention, use the IonicCurrent class:
eqs=MembraneEquation(200*pF)+IonicCurrent('I=(vm-V0)/R : amp')
Compartmental neuron models can be created by merging several MembraneEquation objects, with the compartments module. If soma and dendrite are two compartments defined as MembraneEquation objects, then a neuron with those 2 compartments can be created as follows:
neuron_eqs=Compartments({'soma':soma,'dendrite':dendrite})
neuron_eqs.connect('soma','dendrite',Ra)
neuron=NeuronGroup(1,model=neuron_eqs)
The Compartments object is initialised with a dictionary of MembraneEquation objects. The returned object neuron_eqs is also a MembraneEquation object, where the name of each compartment has been appended to variable names (separated by an underscore). For example, neuron.vm_soma refers to variable vm of the somatic compartment. The connect method adds a coupling current between the two named compartments, with the given resistance Ra.
A few standard Integrate-and-Fire models are implemented in the IF library module:
from brian.library.IF import *
All these functions return Equations objects (more precisely, MembraneEquation objects).
Leaky integrate-and-fire model (dvm/dt=(El-vm)/tau : volt):
eqs=leaky_IF(tau=10*ms,El=-70*mV)
Perfect integrator (dvm/dt=Im/tau : volt):
eqs=perfect_IF(tau=10*ms)
Quadratic integrate-and-fire model (C*dvm/dt=a*(vm-EL)*(vm-VT) : volt):
eqs=quadratic_IF(C=200*pF,a=10*nS/mV,EL=-70*mV,VT=-50*mV)
Exponential integrate-and-fire model (C*dvm/dt=gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT) :volt):
eqs=exp_IF(C=200*pF,gL=10*nS,EL=-70*mV,VT=-55*mV,DeltaT=3*mV)
In general, it is possible to define a neuron group with different parameter values for each neuron, by passing strings at initialisation. For example, the following code defines leaky integrate-and-fire models with heterogeneous resting potential values:
eqs=leaky_IF(tau=10*ms,El='V0')+Equations('V0:volt')
group=NeuronGroup(100,model=eqs,reset=0*mV,threshold=15*mV)
Integrate-and-fire models with two variables can display a very rich set of electrophysiological behaviours. In Brian, two such models have been implemented: the Izhikevich model and the Brette-Gerstner adaptive exponential integrate-and-fire model (also included in the IF module). The equations are obtained in the same way as for one-dimensional models:
eqs=Izhikevich(a=0.02/ms,b=0.2/ms)
eqs=Brette_Gerstner(C=281*pF,gL=30*nS,EL=-70.6*mV,VT=-50.4*mV,DeltaT=2*mV,tauw=144*ms,a=4*nS)
eqs=aEIF(C=281*pF,gL=30*nS,EL=-70.6*mV,VT=-50.4*mV,DeltaT=2*mV,tauw=144*ms,a=4*nS) # equivalent
and two state variables are defined: vm (membrane potential) and w (adaptation variable). The equivalent equations for the Izhikevich model are:
dvm/dt=(0.04/ms/mV)*vm**2+(5/ms)*vm+140*mV/ms-w : volt
dw/dt=a*(b*vm-w) : volt/second
and for the Brette-Gerstner model:
C*dvm/dt=gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT)-w :volt
dw/dt=(a*(vm-EL)-w)/tauw : amp
To simulate these models, one needs to specify a threshold value; a good choice is VT+4*DeltaT. The reset is special in these models since it is two-dimensional: vm->Vr and w->w+b. A specific reset class has been implemented for this purpose: AdaptiveReset, initialised with Vr and b. Thus, a typical construction of a group of such models is:
eqs=Brette_Gerstner(C=281*pF,gL=30*nS,EL=-70.6*mV,VT=-50.4*mV,DeltaT=2*mV,tauw=144*ms,a=4*nS)
group=NeuronGroup(100,model=eqs,threshold=-43*mV,reset=AdaptiveReset(Vr=-70.6*mvolt,b=0.0805*nA))
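To see what this two-variable dynamics and reset rule produce, here is a forward-Euler sketch in plain NumPy (not Brian) using the parameter values from the example above; the 1 nA injected current and 200 ms duration are assumptions for illustration.

```python
import numpy as np

# Forward-Euler sketch of the adaptive exponential IF model with the
# AdaptiveReset rule (vm -> Vr, w -> w + b); parameters from the example
# above, the 1 nA step current is an assumption for illustration
C, gL, EL = 281e-12, 30e-9, -70.6e-3
VT, DeltaT, tauw, a = -50.4e-3, 2e-3, 144e-3, 4e-9
Vr, b, threshold = -70.6e-3, 0.0805e-9, -43e-3
I, dt = 1e-9, 1e-5                    # 1 nA input, 0.01 ms timestep

vm, w, spikes = EL, 0.0, []
for step in range(int(0.2 / dt)):     # simulate 200 ms
    dvm = (gL*(EL - vm) + gL*DeltaT*np.exp((vm - VT)/DeltaT) - w + I) / C
    dw = (a*(vm - EL) - w) / tauw
    vm += dt * dvm
    w += dt * dw
    if vm > threshold:                # crossing: apply the two-dimensional reset
        vm, w = Vr, w + b
        spikes.append(step * dt)
# spikes now contains the spike times of an adapting spike train
```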
A few simple synaptic models are implemented in the module synapses:
from brian.library.synapses import *
All the following functions need to be passed the name of the variable upon which the received spikes will act, and the name of the variable representing the current or conductance. The simplest one is the exponential synapse:
eqs=exp_synapse(input='x',tau=10*ms,unit=amp,output='x_current')
It is equivalent to:
eqs=Equations('''
dx/dt=-x/tau : amp
x_current=x
''')
Here, x is the variable which receives the spikes and x_current is the variable to be inserted in the membrane equation (since it is a one-dimensional synaptic model, the variables are the same). If the output variable name is not defined, then it will be automatically generated by adding the suffix _out to the input name.
Two other types of synapses are implemented. The alpha synapse (x(t)=alpha*(t/tau)*exp(1-t/tau), where alpha is a normalising factor) is defined with the same syntax by:
eqs=alpha_synapse(input='x',tau=10*ms,unit=amp)
and the bi-exponential synapse is defined by (x(t)=(tau2/(tau2-tau1))*(exp(-t/tau1)-exp(-t/tau2)), up to a normalising factor):
eqs=biexp_synapse(input='x',tau1=10*ms,tau2=5*ms,unit=amp)
For all types of synapses, the normalising factor is such that the maximum of x(t) is 1. These functions can be used as in the following example:
eqs=MembraneEquation(C=200*pF)+Current('I=gl*(El-vm)+ge*(Ee-vm):amp')
eqs+=alpha_synapse(input='ge_in',tau=10*ms,unit=siemens,output='ge')
where alpha conductances have been inserted in the membrane equation.
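The normalisation of these waveforms can be checked numerically in plain NumPy (this is an illustration of the formulas above, not Brian's code): the alpha function (t/tau)*exp(1-t/tau) peaks at exactly 1 at t=tau, and the bi-exponential difference can be normalised to a maximum of 1.

```python
import numpy as np

tau = 10e-3
t = np.linspace(0, 100e-3, 100001)

# Alpha synapse: (t/tau)*exp(1 - t/tau) peaks at exactly 1, at t = tau
x_alpha = (t / tau) * np.exp(1 - t / tau)

# Bi-exponential synapse, normalised numerically so its maximum is 1
tau1, tau2 = 10e-3, 5e-3
x_biexp = np.exp(-t / tau1) - np.exp(-t / tau2)
x_biexp = x_biexp / x_biexp.max()
```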
One can directly insert synaptic currents with the functions exp_current, alpha_current and biexp_current:
eqs=MembraneEquation(C=200*pF)+Current('I=gl*(El-vm):amp')+\
alpha_current(input='ge',tau=10*ms)
(the units is amp by default), or synaptic conductances with the functions exp_conductance, alpha_conductance and biexp_conductance:
eqs=MembraneEquation(C=200*pF)+Current('I=gl*(El-vm):amp')+\
alpha_conductance(input='ge',E=0*mV,tau=10*ms)
where E is the reversal potential.
A few standard ionic currents have been implemented in the module ionic_currents:
from brian.library.ionic_currents import *
When the current name is not specified, a unique name is generated automatically. Models can be constructed by adding currents to a MembraneEquation.
Leak current (gl*(El-vm)):
current=leak_current(gl=10*nS,El=-70*mV,current_name='I')
Hodgkin-Huxley K+ current:
current=K_current_HH(gmax,EK,current_name='IK')
Hodgkin-Huxley Na+ current:
current=Na_current_HH(gmax,ENa,current_name='INa')
Bridge experiment (current-clamp)
from brian import *
from brian.library.electrophysiology import *
defaultclock.dt = .01 * ms
#log_level_debug()
taum = 20 * ms
gl = 20 * nS
Cm = taum * gl
Re = 50 * Mohm
Ce = 0.5 * ms / Re
N = 10
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
#Rbridge:ohm
CC:farad
I:amp
''')
eqs += electrode(.6 * Re, Ce)
#eqs+=current_clamp(vm='v_el',i_inj='i_cmd',i_cmd='I',Re=.4*Re,Ce=Ce,
# bridge='Rbridge')
eqs += current_clamp(vm='v_el', i_inj='i_cmd', i_cmd='I', Re=.4 * Re, Ce=Ce,
bridge=Re, capa_comp='CC')
setup = NeuronGroup(N, model=eqs)
setup.I = 0 * nA
setup.v = 0 * mV
#setup.Rbridge=linspace(0*Mohm,60*Mohm,N)
setup.CC = linspace(0 * Ce, Ce, N)
recording = StateMonitor(setup, 'v_rec', record=True)
run(50 * ms)
setup.I = .5 * nA
run(200 * ms)
setup.I = 0 * nA
run(150 * ms)
for i in range(N):
plot(recording.times / ms + i * 400, recording[i] / mV, 'k')
show()
Example of the use of the class LinearGammachirp available in the library. It implements a filterbank of FIR gammatone filters with linear frequency sweeps as described in Wagner et al. 2009, “Auditory responses in the barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential”, J. Neurophysiol. In this example, a white noise is filtered by a gammachirp filterbank and the resulting cochleogram is plotted. The different impulse responses are also plotted.
from brian import *
from brian.hears import *
sound = whitenoise(100*ms).ramp()
sound.level = 50*dB
nbr_center_frequencies = 10 #number of frequency channels in the filterbank
#center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz, 1000*Hz, nbr_center_frequencies)
c = 0.0 #glide slope
time_constant = linspace(3, 0.3, nbr_center_frequencies)*ms
gamma_chirp = LinearGammachirp(sound, center_frequencies, time_constant, c)
gamma_chirp_mon = gamma_chirp.process()
figure()
imshow(gamma_chirp_mon.T, aspect='auto')
figure()
plot(gamma_chirp.impulse_response.T)
show()
The Brian source code repository is broken into the following directories:
The main package, documented above, with the following additional directories:
Package for storing tests, composed of:
The main development folder, for works in progress, debugging stuff, tools, etc. Consists of:
Dumping ground for files used for debugging a problem.
The main folder for developer tools.
See also
User guide for Brian hears.
Sets the default samplerate for Brian hears objects, by default 44.1 kHz.
Class for working with sounds, including loading/saving, manipulating and playing.
For an overview, see Sounds.
Initialisation
The following arguments are used to initialise a sound object
Loading, saving and playing
Loads the file given by filename and returns a Sound object. The sound file can be either a .wav or an .aif file.
Save the sound as a WAV.
If the normalise keyword is set to True, the amplitude of the sound will be normalised to 1. The samplewidth keyword can be 1 or 2 to save the data as 8 or 16 bit samples.
Plays the sound (normalised to avoid clipping if required). If sleep=True then the function will wait until the sound has finished playing before returning.
Properties
The length of the sound in seconds.
The number of samples in the sound.
The number of channels in the sound.
An array of times (in seconds) corresponding to each sample.
The left channel for a stereo sound.
The right channel for a stereo sound.
Returns the nth channel of the sound.
Generating sounds
All sound-generating methods can be used with duration arguments in samples (int) or units (e.g. 500*ms). One can also set the number of channels by setting the keyword argument nchannels to the desired value. Note that for noise the channels will be generated independently.
Returns a pure tone at frequency for duration, using the default samplerate or the given one. The frequency and phase parameters can be single values, in which case multiple channels can be specified with the nchannels argument, or they can be sequences (lists/tuples/arrays) in which case there is one frequency or phase for each channel.
Returns a white noise. If the samplerate is not specified, the global default value will be used.
Returns a power-law noise for the given duration. Spectral density per unit of bandwidth scales as 1/(f**alpha).
Sample usage:
noise = powerlawnoise(200*ms, 1, samplerate=44100*Hz)
Arguments:
Returns brown noise, i.e. powerlawnoise() with alpha=2
Returns pink noise, i.e. powerlawnoise() with alpha=1
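The spectral shaping behind these noises can be sketched in plain NumPy by scaling the Fourier amplitudes of white noise by f**(-alpha/2), so that power goes as 1/f**alpha; this illustrates the idea only and is not Brian hears' exact algorithm.

```python
import numpy as np

def powerlaw_noise(n, alpha, seed=0):
    # Shape white Gaussian noise so that spectral power ~ 1/f**alpha
    # (a sketch of the idea, not Brian hears' implementation)
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid dividing by zero at DC
    spec *= f ** (-alpha / 2.0)      # amplitude ~ f^(-alpha/2)
    x = np.fft.irfft(spec, n)
    return x / x.std()               # normalise to unit variance

pink = powerlaw_noise(44100, 1.0)    # alpha=1, as in pinknoise()
brown = powerlaw_noise(44100, 2.0)   # alpha=2, as in brownnoise()
```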
Returns a silent, zero sound for the given duration. Set nchannels to set the number of channels.
Returns a click of the given duration.
If peak is not specified, the amplitude will be 1, otherwise peak refers to the peak dB SPL of the click, according to the formula 28e-6*10**(peak/20.).
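Worked through for a 60 dB SPL click, the formula above gives:

```python
# Peak amplitude for a click at a given dB SPL, per the formula above:
# amplitude = 28e-6 * 10**(peak/20)
def click_amplitude(peak_db):
    return 28e-6 * 10 ** (peak_db / 20.0)

amp = click_amplitude(60.0)   # 28e-6 * 1000 = 0.028
```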
Returns a harmonic complex composed of pure tones at integer multiples of the fundamental frequency f0. The amplitude and phase keywords can be set to either a single value or an array of values. In the former case the value is set for all harmonics, and harmonics up to the sampling frequency are generated. In the latter each harmonic parameter is set separately, and the number of harmonics generated corresponds to the length of the array.
Returns an artificially created spoken vowel sound (following the source-filter model of speech production) with a given pitch.
The vowel can be specified by either providing vowel as a string (‘a’, ‘i’ or ‘u’) or by setting formants to a sequence of formant frequencies.
The returned sound is normalized to a maximum amplitude of 1.
The implementation is based on the MakeVowel function written by Richard O. Duda, part of the Auditory Toolbox for Matlab by Malcolm Slaney: http://cobweb.ecn.purdue.edu/~malcolm/interval/1998-010/
Timing and sequencing
Returns the sequence of sounds in the list sounds joined together
Repeats the sound n times
Returns the Sound with length extended by the given duration, which can be the number of samples or a length of time in seconds.
Returns the sound delayed by duration, which can be the number of samples or a length of time in seconds. Normally, only integer numbers of samples will be used, but if fractional=True then the filtering method from http://www.labbookpages.co.uk/audio/beamforming/fractionalDelay.html will be used (introducing some small numerical errors). With this method, you can specify the filter_length, larger values are slower but more accurate, especially at higher frequencies. The large default value of 2048 samples provides good accuracy for sounds with frequencies above 20 Hz, but not for lower frequency sounds. If you are restricted to high frequency sounds, a smaller value will be more efficient. Note that if fractional=True then duration is assumed to be a time not a number of samples.
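A fractional delay of this kind is usually built from a windowed-sinc interpolation filter. The sketch below shows the general technique in plain NumPy; it is not Brian hears' exact implementation, and the filter length and Hamming window are assumptions for illustration.

```python
import numpy as np

def fractional_delay(x, delay, filter_length=64):
    # Windowed-sinc fractional delay: a sketch of the general technique,
    # not Brian hears' exact code
    centre = filter_length // 2
    n = np.arange(filter_length)
    h = np.sinc(n - centre - delay) * np.hamming(filter_length)
    h = h / h.sum()                    # unit gain at DC
    y = np.convolve(x, h)
    return y[centre:centre + len(x)]   # remove the integer part of the filter delay

# A constant signal passes through unchanged (away from the edges)
y = fractional_delay(np.ones(200), 0.3)
```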
Returns the Sound with length extended (or contracted) to have L samples.
Slicing
One can slice sound objects in various ways, for example sound[100*ms:200*ms] returns the part of the sound between 100 ms and 200 ms (not including the right hand end point). If the sound is less than 200 ms long it will be zero padded. You can also set values using slicing, e.g. sound[:50*ms] = 0 will silence the first 50 ms of the sound. The syntax is the same as usual for Python slicing. In addition, you can select a subset of the channels by doing, for example, sound[:, -5:] would be the last 5 channels. For time indices, either times or samples can be given, e.g. sound[:100] gives the first 100 samples. In addition, steps can be used for example to reverse a sound as sound[::-1].
Arithmetic operations
Standard arithmetical operations and numpy functions work as you would expect with sounds, e.g. sound1+sound2, 3*sound or abs(sound).
Level
Can be used to get or set the level of a sound, which should be in dB. For single channel sounds a value in dB is used, for multiple channel sounds a value in dB can be used for setting the level (all channels will be set to the same level), or a list/tuple/array of levels. It is assumed that the unit of the sound is Pascals.
Returns the sound at the given level in dB SPL (RMS) assuming array is in Pascals. level should be a value in dB, or a tuple of levels, one for each channel.
Can be used to set or get the maximum level of a sound. For mono sounds, this is the same as the level, but for multichannel sounds it is the maximum level across the channels. Relative level differences will be preserved. The specified level should be a value in dB, and it is assumed that the unit of the sound is Pascals.
Returns the sound with the maximum level across channels set to the given level. Relative level differences will be preserved. The specified level should be a value in dB and it is assumed that the unit of the sound is Pascals.
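The dB SPL convention used here (RMS level relative to the standard reference pressure of 20 µPa, with the signal in pascals) can be sketched in plain NumPy; these helper names are hypothetical, for illustration only.

```python
import numpy as np

P0 = 2e-5   # reference pressure, 20 uPa = 0 dB SPL (signal assumed in pascals)

def get_level(x):
    # RMS level in dB SPL
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) / P0)

def at_level(x, level_db):
    # Rescale the signal so its RMS level is level_db
    gain = 10 ** (level_db / 20.0) * P0 / np.sqrt(np.mean(x ** 2))
    return x * gain

t = np.linspace(0, 0.1, 4410, endpoint=False)
s = at_level(np.sin(2 * np.pi * 1000 * t), 50.0)   # a 1 kHz tone at 50 dB SPL
```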
Ramping
Adds a ramp on/off to the sound
Returns a ramped version of the sound (see Sound.ramp()).
Plotting
Plots a spectrogram of the sound
Arguments:
Returns the values returned by pylab’s specgram, namely (pxx, freqs, bins, im) where pxx is a 2D array of powers, freqs is the corresponding frequencies, bins are the time bins, and im is the image axis.
Returns the spectrum of the sound and optionally plots it.
Arguments:
Returns (Z, freqs, phase) where Z is a 1D array of powers, freqs is the corresponding frequencies, phase is the unwrapped phase of spectrum.
Save the sound as a WAV.
If the normalise keyword is set to True, the amplitude of the sound will be normalised to 1. The samplewidth keyword can be 1 or 2 to save the data as 8 or 16 bit samples.
Loads the file given by filename and returns a Sound object. The sound file can be either a .wav or an .aif file.
Plays the sound (normalised to avoid clipping if required). If sleep=True then the function will wait until the sound has finished playing before returning.
Returns a white noise. If the samplerate is not specified, the global default value will be used.
Returns a power-law noise for the given duration. Spectral density per unit of bandwidth scales as 1/(f**alpha).
Sample usage:
noise = powerlawnoise(200*ms, 1, samplerate=44100*Hz)
Arguments:
Returns brown noise, i.e. powerlawnoise() with alpha=2
Returns pink noise, i.e. powerlawnoise() with alpha=1
Returns an IRN_S noise. The iterated ripple noise is obtained through a cascade of gain and delay filtering. For more details see Yost (1996) or chapter 15 of Hartmann, Signals, Sounds, and Sensation.
Returns an IRN_O noise. The iterated ripple noise is obtained by adding many attenuated and delayed versions of the original broadband noise. For more details see Yost (1996) or chapter 15 of Hartmann, Signals, Sounds, and Sensation.
Returns a pure tone at frequency for duration, using the default samplerate or the given one. The frequency and phase parameters can be single values, in which case multiple channels can be specified with the nchannels argument, or they can be sequences (lists/tuples/arrays) in which case there is one frequency or phase for each channel.
Returns a click of the given duration.
If peak is not specified, the amplitude will be 1, otherwise peak refers to the peak dB SPL of the click, according to the formula 28e-6*10**(peak/20.).
Returns a series of n clicks (see click()) separated by interval.
Returns a harmonic complex composed of pure tones at integer multiples of the fundamental frequency f0. The amplitude and phase keywords can be set to either a single value or an array of values. In the former case the value is set for all harmonics, and harmonics up to the sampling frequency are generated. In the latter each harmonic parameter is set separately, and the number of harmonics generated corresponds to the length of the array.
Returns a silent, zero sound for the given duration. Set nchannels to set the number of channels.
Returns the sequence of sounds in the list sounds joined together
Generalised linear filterbank
Initialisation arguments:
The filter parameters are stored in the modifiable attributes filt_b, filt_a and filt_state (the variable z in the section below).
Has one method:
Reduces cascades of low order filters into smaller cascades of high order filters.
ncascade is the number of cascaded filters to use, which should be a divisor of the original number.
Note that higher order filters are often numerically unstable.
Notes
These notes adapted from scipy’s lfilter() function.
The filterbank is implemented as a direct II transposed structure. This means that for a single channel and element of the filter cascade, the output y for an input x is defined by:
a[0]*y[m] = b[0]*x[m] + b[1]*x[m-1] + ... + b[m]*x[0]
- a[1]*y[m-1] - ... - a[m]*y[0]
using the following difference equations:
y[i] = b[0]*x[i] + z[0,i-1]
z[0,i] = b[1]*x[i] + z[1,i-1] - a[1]*y[i]
...
z[m-3,i] = b[m-2]*x[i] + z[m-2,i-1] - a[m-2]*y[i]
z[m-2,i] = b[m-1]*x[i] - a[m-1]*y[i]
where i is the output sample number.
The rational transfer function describing this filter in the z-transform domain is:
           b[0] + b[1] z^-1 + ... + b[m] z^-m
    Y(z) = ----------------------------------- X(z)
           a[0] + a[1] z^-1 + ... + a[m] z^-m
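These update rules can be written out for a single channel in plain NumPy and checked against scipy.signal.lfilter, which implements the same direct form II transposed structure. This is a readability sketch, not the (vectorised, multi-channel) filterbank code.

```python
import numpy as np
from scipy.signal import butter, lfilter

def df2t(b, a, x):
    # Direct form II transposed, single channel, following the
    # difference equations above
    b = np.asarray(b, float) / a[0]
    a = np.asarray(a, float) / a[0]
    z = np.zeros(len(b) - 1)          # filter state
    y = np.empty(len(x))
    for i, xi in enumerate(x):
        yi = b[0] * xi + z[0]
        z[:-1] = b[1:-1] * xi + z[1:] - a[1:-1] * yi
        z[-1] = b[-1] * xi - a[-1] * yi
        y[i] = yi
    return y

b, a = butter(4, 0.2)                  # an arbitrary 4th-order lowpass
x = np.sin(0.05 * np.arange(256))
y = df2t(b, a, x)                      # matches scipy.signal.lfilter(b, a, x)
```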
Finite impulse response filterbank
Initialisation parameters:
Filterbank used to restructure channels, including repeating and interleaving.
Standard forms of usage:
Repeat mono source N times:
RestructureFilterbank(source, N)
For a stereo source, N copies of the left channel followed by N copies of the right channel:
RestructureFilterbank(source, N)
For a stereo source, N copies of the channels tiled as LRLRLR...LR:
RestructureFilterbank(source, numtile=N)
For two stereo sources AB and CD, join them together in serial to form the output channels in order ABCD:
RestructureFilterbank((AB, CD))
For two stereo sources AB and CD, join them together interleaved to form the output channels in order ACBD:
RestructureFilterbank((AB, CD), type='interleave')
These arguments can also be combined together, for example to turn AB and CD into output channels AABBCCDDAABBCCDDAABBCCDD:
RestructureFilterbank((AB, CD), 2, 'serial', 3)
The three arguments are the number of repeats before joining, the joining type (‘serial’ or ‘interleave’) and the number of tilings after joining. See below for details.
Initialisation arguments:
Filterbank that joins the channels of its inputs in series, e.g. with two input sources with channels AB and CD respectively, the output would have channels ABCD. You can initialise with multiple sources separated by commas, or by passing a list of sources.
Filterbank that interleaves the channels of its inputs, e.g. with two input sources with channels AB and CD respectively, the output would have channels ACBD. You can initialise with multiple sources separated by commas, or by passing a list of sources.
Filterbank that repeats each channel from its input, e.g. with 3 repeats channels ABC would map to AAABBBCCC.
Filterbank that tiles the channels from its input, e.g. with 3 tiles channels ABC would map to ABCABCABC.
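The repeat and tile channel orderings correspond directly to numpy's repeat and tile functions, which is a quick way to check what ordering to expect:

```python
import numpy as np

channels = np.array(list("ABC"))
repeated = "".join(np.repeat(channels, 3))   # 'AAABBBCCC', as RepeatFilterbank
tiled = "".join(np.tile(channels, 3))        # 'ABCABCABC', as TileFilterbank
```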
Filterbank that just applies a given function. The function should take as many arguments as there are sources.
For example, to half-wave rectify inputs:
FunctionFilterbank(source, lambda x: clip(x, 0, Inf))
The syntax lambda x: clip(x, 0, Inf) defines a function object that takes a single argument x and returns clip(x, 0, Inf). The numpy function clip(x, low, high) returns the values of x clipped between low and high (so if x<low it returns low, if x>high it returns high, otherwise it returns x). The symbol Inf means infinity, i.e. no clipping of positive values.
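A quick numpy check of the rectification (values below 0 are clipped to 0, positive values pass through unchanged):

```python
import numpy as np

# Half-wave rectification via clip, as in the FunctionFilterbank example
# above: negative values become 0, positive values are left unchanged.
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
rectified = np.clip(x, 0, np.inf)   # [0, 0, 0, 0.5, 2]
```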
Technical details
Note that functions should operate on arrays, in particular on 2D buffered segments, which are arrays of shape (bufsize, nchannels). Typically, most standard functions from numpy will work element-wise.
If you want a filterbank that changes the shape of the input (e.g. changes the number of channels), set the nchannels keyword argument to the number of output channels.
Sum filterbanks together with given weight vectors.
For example, to take the sum of two filterbanks:
SumFilterbank((fb1, fb2))
To take the difference:
SumFilterbank((fb1, fb2), (1, -1))
Filterbank that does nothing to its input.
Useful for removing a set of filters without having to rewrite your code. Can also be used for simply writing compound derived classes. For example, if you want a compound Filterbank that does AFilterbank and then BFilterbank, but you want to encapsulate that into a single class, you could do:
class ABFilterbank(DoNothingFilterbank):
def __init__(self, source):
a = AFilterbank(source)
b = BFilterbank(a)
DoNothingFilterbank.__init__(self, b)
However, a more general way of writing compound filterbanks is to use CombinedFilterbank.
Filterbank that can be used for controlling behaviour at runtime
Typically, this class is used to implement a control path in an auditory model, modifying some filterbank parameters based on the output of other filterbanks (or the same ones).
The controller has a set of input filterbanks whose output values are used to modify a set of output filterbanks. The update is done by a user specified function or class which is passed these output values. The controller should be inserted as the last bank in a chain.
Initialisation arguments:
The updater
The updater argument can be either a function or class instance. If it is a function, it should have a form like:
# A single input
def updater(input):
...
# Two inputs
def updater(input1, input2):
...
# Arbitrary number of inputs
def updater(*inputs):
...
Each argument input to the function is a numpy array of shape (numsamples, numchannels) where numsamples is the number of samples just computed, and numchannels is the number of channels in the corresponding filterbank. The function is not restricted in what it can do with these inputs.
Functions can be used to implement relatively simple controllers, but for more complicated situations you may want to maintain some state variables for example, and in this case you can use a class. The object updater should be an instance of a class that defines the __call__ method (with the same syntax as above for functions). In addition, you can define a reinitialisation method reinit() which will be called when the buffer_init() method is called on the filterbank, although this is entirely optional.
Example
The following will do a simple form of gain control, where the gain parameter will drift exponentially towards target_rms/rms with a given time constant:
# This class implements the gain (see Filterbank for details)
class GainFilterbank(Filterbank):
def __init__(self, source, gain=1.0):
Filterbank.__init__(self, source)
self.gain = gain
def buffer_apply(self, input):
return self.gain*input
# This is the class for the updater object
class GainController(object):
def __init__(self, target, target_rms, time_constant):
self.target = target
self.target_rms = target_rms
self.time_constant = time_constant
def reinit(self):
self.sumsquare = 0
self.numsamples = 0
def __call__(self, input):
T = input.shape[0]/self.target.samplerate
self.sumsquare += sum(input**2)
self.numsamples += input.size
rms = sqrt(self.sumsquare/self.numsamples)
g = self.target.gain
g_tgt = self.target_rms/rms
tau = self.time_constant
self.target.gain = g_tgt+exp(-T/tau)*(g-g_tgt)
And an example of using this with an input source, a target RMS of 0.2 and a time constant of 50 ms, updating every 10 ms:
gain_fb = GainFilterbank(source)
updater = GainController(gain_fb, 0.2, 50*ms)
control = ControlFilterbank(gain_fb, source, gain_fb, updater, 10*ms)
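The last line of __call__ is one discrete step of exponential relaxation: over a buffer of duration T, the gain moves from g towards g_tgt with time constant tau. A standalone numeric check of that update rule:

```python
from math import exp

tau = 0.050           # time constant: 50 ms
T = 0.010             # update interval: 10 ms
g, g_tgt = 1.0, 0.25  # current gain and target gain

for _ in range(100):  # 1 second of updates
    g = g_tgt + exp(-T / tau) * (g - g_tgt)

# After many time constants the gain has relaxed to the target.
assert abs(g - g_tgt) < 1e-8
```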
Filterbank that encapsulates a chain of filterbanks internally.
This class should mostly be used by people writing extensions to Brian hears rather than by users directly. The purpose is to take an existing chain of filterbanks and wrap them up so they appear to the user as a single filterbank which can be used exactly as any other filterbank.
In order to do this, derive from this class and in your initialisation follow this pattern:
class RectifiedGammatone(CombinedFilterbank):
def __init__(self, source, cf):
CombinedFilterbank.__init__(self, source)
source = self.get_modified_source()
# At this point, insert your chain of filterbanks acting on
# the modified source object
gfb = Gammatone(source, cf)
rectified = FunctionFilterbank(gfb,
lambda input: clip(input, 0, Inf))
# Finally, set the output filterbank to be the last in your chain
self.set_output(rectified)
This combination of a Gammatone and a rectification via a FunctionFilterbank can now be used as a single filterbank, for example:
x = whitenoise(100*ms)
fb = RectifiedGammatone(x, [1*kHz, 1.5*kHz])
y = fb.process()
Details
The reason for the get_modified_source() call is that the source attribute of a filterbank can be changed after creation. The modified source provides a buffer (in fact, a DoNothingFilterbank) so that the input to the chain of filters defined by the derived class doesn’t need to be changed.
Bank of gammatone filters.
They are implemented as cascades of four 2nd-order IIR filters (this 8th-order digital filter corresponds to a 4th-order gammatone filter).
The approximated impulse response IR is defined as follows: IR(t) = t^3 exp(-2*pi*b*ERB(f)*t) cos(2*pi*f*t), where ERB(f) = 24.7 + 0.108*f [Hz] is the equivalent rectangular bandwidth of the filter centred at f.
It comes from Slaney’s exact gammatone implementation (Slaney, M., 1993, “An Efficient Implementation of the Patterson-Holdsworth Auditory Filter Bank”. Apple Computer Technical Report #35). The code is based on Slaney’s Matlab implementation.
Initialised with arguments:
Bank of approximate gammatone filters implemented as a cascade of order IIR gammatone filters.
The filter is derived from the sampled version of the complex analog gammatone impulse response: a gamma envelope of the given order multiplying a complex exponential oscillating at the centre frequency cf, with a bandwidth parameter setting the decay rate.
The design is based on the Hohmann implementation as described in Hohmann, V., 2002, “Frequency analysis and synthesis using a Gammatone filterbank”, Acta Acustica United with Acustica. The code is based on the Matlab gammatone implementation from Meddis’ toolbox.
Initialised with arguments:
Bank of gammachirp filters with a logarithmic frequency sweep.
The approximated impulse response IR is defined as follows: IR(t) = t^3 exp(-2*pi*b*ERB(f)*t) cos(2*pi*f*t + c*ln(t)), where ERB(f) = 24.7 + 0.108*f [Hz] is the equivalent rectangular bandwidth of the filter centred at f.
The implementation is a cascade of 4 2nd-order IIR gammatone filters followed by a cascade of ncascades 2nd-order asymmetric compensation filters as introduced in Unoki et al. 2001, “Improvement of an IIR asymmetric compensation gammachirp filter”.
Initialisation parameters:
Bank of gammachirp filters with linear frequency sweeps and gamma envelope as described in Wagner et al. 2009, “Auditory responses in the barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential”, J. Neurophysiol.
The impulse response is defined as follows: IR(t) = t^3 exp(-t/sigma) cos(2*pi*(f*t + c/2*t^2) + phi), where sigma corresponds to time_constant and phi to phase (see definition of parameters).
These filters are implemented as FIR filters using truncated time representations of gammachirp functions as the impulse response. The impulse responses, which need to have the same length for every channel, have a duration of 15 times the biggest time constant. The length of the impulse response is therefore 15*max(time_constant)*sampling_rate. The impulse responses are normalized with respect to the transmitted power, i.e. the rms of the filter taps is 1.
Initialisation parameters:
Has attributes:
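As a rough illustration (not Brian hears code), the truncated, power-normalized impulse response described above can be built with numpy, assuming the gamma-envelope chirp form IR(t) = t^3 exp(-t/sigma) cos(2*pi*(f*t + c/2*t^2) + phase):

```python
import numpy as np

samplerate = 44100.0
time_constant = 1e-3           # sigma: 1 ms
f, c, phase = 1000.0, 0.0, 0.0 # c = 0 gives no chirp, for simplicity

# Duration of 15 times the biggest time constant, as described above.
n = int(15 * time_constant * samplerate)
t = np.arange(n) / samplerate

# Assumed form: IR(t) = t^3 exp(-t/sigma) cos(2*pi*(f*t + c/2*t^2) + phase)
ir = t**3 * np.exp(-t / time_constant) \
     * np.cos(2 * np.pi * (f * t + c / 2 * t**2) + phase)

# Normalize so the rms of the filter taps is 1.
ir /= np.sqrt(np.mean(ir**2))
```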
Bank of gammachirp filters with linear frequency sweeps and gaussian envelope as described in Wagner et al. 2009, “Auditory responses in the barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential”, J. Neurophysiol.
The impulse response is defined as follows: IR(t) = exp(-t^2/(2*sigma^2)) cos(2*pi*(f*t + c/2*t^2) + phi), where sigma corresponds to time_constant and phi to phase (see definition of parameters).
These filters are implemented as FIR filters using truncated time representations of gammachirp functions as the impulse response. The impulse responses, which need to have the same length for every channel, have a duration of 12 times the biggest time constant. The length of the impulse response is therefore 12*max(time_constant)*sampling_rate. The envelope is a gaussian function (Gabor filter). The impulse responses are normalized with respect to the transmitted power, i.e. the rms of the filter taps is 1.
Initialisation parameters:
Has attributes:
Filterbank of IIR filters. The filters can be low, high, bandstop or bandpass and be of type Elliptic, Butterworth, Chebyshev etc. The passband and stopband can be scalars (for low or high pass) or pairs of parameters (for stopband and passband) yielding similar filters for every channel. They can also be arrays of shape (1, nchannels) for low and high pass or (2, nchannels) for stopband and passband yielding different filters along channels. This class uses the scipy iirdesign function to generate filter coefficients for every channel.
See the documentation for scipy.signal.iirdesign for more details.
Initialisation parameters:
Filterbank of low, high, bandstop or bandpass Butterworth filters. The cut-off frequencies or the band frequencies can either be the same for each channel or different along channels.
Initialisation parameters:
Cascade of a linear filterbank, applied n times.
Initialised with arguments:
Bank of 1st-order lowpass filters
The code is based on the code found in the Meddis toolbox. It was implemented here to be used in the DRNL cochlear model implementation.
Initialised with arguments:
Bank of asymmetric compensation filters.
These filters are meant to be used in cascade with gammatone filters to approximate gammachirp filters (Unoki et al., 2001, "Improvement of an IIR asymmetric compensation gammachirp filter", Acoust. Sci. & Tech.). They are implemented as a cascade of low-order filters. The code is based on the implementation found in the AIM-MAT toolbox.
Initialised with arguments:
Implementation of the dual resonance nonlinear (DRNL) filter as described in Lopez-Poveda, E. and Meddis, R., "A human nonlinear cochlear filterbank", JASA 2001.
The entire pathway consists of the sum of a linear and a nonlinear pathway.
The linear path consists of a bank of bandpass filters (second order gammatone), a low pass function, and a gain/attenuation factor, g, in a cascade.
The nonlinear path is a cascade consisting of a bank of gammatone filters, a compression function, a second bank of gammatone filters, and a low pass function, in that order.
Initialised with arguments:
defines the parameter set corresponding to a certain fit. It can be either:
The possible parameters to change, and their default values for humans (see Lopez-Poveda, E. and Meddis, R., "A human nonlinear cochlear filterbank", JASA 2001, for notation), are:
param['stape_scale']=0.00014
param['order_linear']=3
param['order_nonlinear']=3
From there on, the parameters are given in the form 10^(p0 + m*log10(cf)), where cf is the centre frequency:
param['cf_lin_p0']=-0.067
param['cf_lin_m']=1.016
param['bw_lin_p0']=0.037
param['bw_lin_m']=0.785
param['cf_nl_p0']=-0.052
param['cf_nl_m']=1.016
param['bw_nl_p0']=-0.031
param['bw_nl_m']=0.774
param['a_p0']=1.402
param['a_m']=0.819
param['b_p0']=1.619
param['b_m']=-0.818
param['c_p0']=-0.602
param['c_m']=0
param['g_p0']=4.2
param['g_m']=0.48
param['lp_lin_cutoff_p0']=-0.067
param['lp_lin_cutoff_m']=1.016
param['lp_nl_cutoff_p0']=-0.052
param['lp_nl_cutoff_m']=1.016
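Each of these frequency-dependent parameters is evaluated as 10^(p0 + m*log10(cf)). A quick check of this form, using the default human values for the linear-path centre frequency (drnl_param is an illustrative helper, not part of Brian hears):

```python
from math import log10

def drnl_param(p0, m, cf):
    # Frequency-dependent DRNL parameter: 10**(p0 + m*log10(cf))
    return 10 ** (p0 + m * log10(cf))

# Linear-path centre frequency at cf = 1000 Hz, with the default
# human values cf_lin_p0 = -0.067 and cf_lin_m = 1.016:
cf_lin = drnl_param(-0.067, 1.016, 1000.0)  # roughly 957 Hz
```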
The compressive gammachirp auditory filter as described in Irino, T. and Patterson R., “A compressive gammachirp auditory filter for both physiological and psychophysical data”, JASA 2001.
Technical implementation details and notation can be found in Irino, T. and Patterson R., “A Dynamic Compressive Gammachirp Auditory Filterbank”, IEEE Trans Audio Speech Lang Processing.
The model consists of a control pathway and a signal pathway in parallel.
The control pathway consists of a bank of bandpass filters followed by a bank of highpass filters (this chain yields a bank of gammachirp filters).
The signal pathway consists of a bank of fixed bandpass filters followed by a bank of highpass filters with variable cutoff frequencies (this chain yields a bank of gammachirp filters with a level-dependent bandwidth). The highpass filters of the signal pathway are controlled by the output levels of the two stages of the control pathway.
Initialised with arguments:
The possible parameters to change and their default values (see Irino, T. and Patterson R., “A Dynamic Compressive Gammachirp Auditory Filterbank”, IEEE Trans Audio Speech Lang Processing) are:
param['b1'] = 1.81
param['c1'] = -2.96
param['b2'] = 2.17
param['c2'] = 2.2
param['decay_tcst'] = .5*ms
param['lev_weight'] = .5
param['level_ref'] = 50.
param['level_pwr1'] = 1.5
param['level_pwr2'] = .5
param['RMStoSPL'] = 30.
param['frat0'] = .2330
param['frat1'] = .005
param['lct_ERB'] = 1.5 #value of the shift in ERB frequencies
param['frat_control'] = 1.08
param['order_gc']=4
param['ERBrate']= 21.4*log10(4.37*cf/1000+1) # cf is the center frequency
param['ERBwidth']= 24.7*(4.37*cf/1000 + 1)
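The last two entries are the standard Glasberg and Moore ERB formulas; evaluated directly (plain Python, for illustration):

```python
from math import log10

cf = 1000.0  # centre frequency in Hz
ERBrate = 21.4 * log10(4.37 * cf / 1000 + 1)
ERBwidth = 24.7 * (4.37 * cf / 1000 + 1)
# At 1 kHz the ERB width is about 132.6 Hz and the ERB rate about 15.6.
```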
Implements the middle ear model from Tan & Carney (2003): a linear filter with two pole pairs and one double zero. The gain is normalized for the response of the analog filter at 1000 Hz, as in the model of Tan & Carney (their actual C code does, however, result in a slightly different normalization; the difference in overall level is about 0.33 dB; to get exactly the same output as their model, set the gain parameter to 0.962512703689).
Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.
Class implementing the nonlinear auditory filterbank model as described in Tan, Q. and Carney, L., "A phenomenological model for the responses of auditory-nerve fibers. II. Nonlinear tuning with a frequency glide", JASA 2003.
The model consists of a control path and a signal path. The control path controls both its own bandwidth via a feedback loop and also the bandwidth of the signal path.
Initialised with arguments:
A FilterbankGroup that represents an IHC-AN synapse according to the Zhang et al. (2001) model. The source should be a filterbank producing V_ihc (e.g. TanCarney). CF specifies the characteristic frequencies of the AN fibers. params overrides any parameter values given in the publication.
The group emits spikes according to a time-varying Poisson process with absolute and relative refractoriness (probability of spiking is given by state variable R). The continuous probability of spiking without refractoriness is available in the state variable s.
The n_per_channel argument can be used to generate multiple spike trains for every channel.
If all you need is the state variable s, you can use the class ZhangSynapseRate instead which does not simulate the spike-generating Poisson process.
For details see: Zhang, X., M. G. Heinz, I. C. Bruce, and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers: I. Nonlinear Tuning with Compression and Suppression”. The Journal of the Acoustical Society of America 109 (2001): 648.
Allows a Filterbank object to be used as a NeuronGroup
Initialised as a standard NeuronGroup object, but with two additional arguments at the beginning, and no N (number of neurons) argument. The number of neurons in the group will be the number of channels in the filterbank. (TODO: add reference to interleave/serial channel stuff here.)
One additional keyword is available beyond that of NeuronGroup:
Note that if you specify your own Clock, it should have 1/dt=samplerate.
Returns the centre frequencies on an ERB scale.
This function is used to generate the coefficients of the asymmetric compensation filters used in the gammachirp implementation.
Sets tick positions for log-scale frequency x-axis at sensible locations.
Also uses scalar representation rather than exponential (i.e. 100 rather than 10^2).
See also: log_frequency_yaxis_labels().
Note: with log scaled axes, it can be useful to call axis('tight') before setting the ticks.
Sets tick positions for log-scale frequency y-axis at sensible locations.
Also uses scalar representation rather than exponential (i.e. 100 rather than 10^2).
See also: log_frequency_xaxis_labels().
Note: with log scaled axes, it can be useful to call axis('tight') before setting the ticks.
Head related transfer function.
Attributes
Methods
Returns a stereo Sound object formed by applying the pair of HRTFs to the mono sound input. Equivalently, you can write hrtf(sound) for hrtf an HRTF object.
Returns an FIRFilterbank object that can be used to apply the HRTF as part of a chain of filterbanks.
You can get the number of samples in the impulse response with len(hrtf).
A collection of HRTFs, typically for a single individual.
Normally this object is created automatically by an HRTFDatabase.
Attributes
Methods
Generates the subset of the set of HRTFs whose coordinates satisfy the condition. This should be one of: a boolean array of length the number of HRTFs in the set, with values of True/False to indicate if the corresponding HRTF should be included or not; an integer array with the indices of the HRTFs to keep; or a function whose argument names are names of the parameters of the coordinate system, e.g. condition=lambda azim:azim<pi/2.
Returns an FIRFilterbank object which applies all of the HRTFs in the set. If interleaved=False then the channels are arranged in the order LLLL...RRRR..., otherwise they are arranged in the order LRLRLR....
You can access an HRTF by index via hrtfset[index], or by its coordinates via hrtfset(coord1=val1, coord2=val2).
Initialisation
Base class for databases of HRTFs
Should have an attribute ‘subjects’ giving a list of available subjects, and a method load_subject(subject) which returns an HRTFSet for that subject.
The initialiser should take (optional) keywords:
Creates a numpy record array from the keywords passed to the function. Each keyword/value pair should be the name of a coordinate and the array of values of that coordinate for each location. For example:
coords = make_coordinates(azimuth=[0, 30, 60, 0, 30, 60],
elevation=[0, 0, 0, 30, 30, 30])
print coords['azimuth']
HRTFDatabase for the IRCAM LISTEN public HRTF database.
For details on the database, see the website.
The database object can be initialised with the following arguments:
The coordinates are pairs (azim, elev) where azim ranges from 0 to 345 degrees in steps of 15 degrees, and elev ranges from -45 to 90 in steps of 15 degrees. After loading the database, the attribute ‘subjects’ gives all the subject numbers that were detected as installed.
Obtaining the database
The database can be downloaded here. Each subject archive should be extracted to a folder (e.g. IRCAM) with the names of the subject, e.g. IRCAM/IRC_1002, etc.
Database for creating HRTFSet with artificial interaural time-differences
Initialisation keywords:
To get the HRTFSet, the simplest thing to do is just:
hrtfset = HeadlessDatabase(13).load_subject()
The generated ITDs can be returned using the itd attribute of the HeadlessDatabase object.
If fractional_itds=False then the delays induced in the left and right channels are not symmetric, as making them so would waste half the samplerate (the delay to the left channel would be itd/2 and the delay to the right channel -itd/2). Instead, for each channel, either the left channel delay is 0 and the right channel delay is -itd (if itd<0), or the left channel delay is itd and the right channel delay is 0 (if itd>0).
If fractional_itds=True then delays in the left and right channels will be symmetric around a global offset of delay_offset.
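With fractional_itds=False, the asymmetric delay assignment can be sketched as follows (pure Python on sample counts; itd_delays is a hypothetical helper, not part of Brian hears):

```python
def itd_delays(itd_samples):
    # Asymmetric assignment: one ear always has zero delay, so the
    # full sample resolution is spent on the other ear.
    if itd_samples > 0:
        return itd_samples, 0       # (left delay, right delay)
    return 0, -itd_samples

assert itd_delays(3) == (3, 0)   # positive itd: delay the left channel
assert itd_delays(-3) == (0, 3)  # negative itd: delay the right channel
```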
Useful for understanding more about the internals.
Base class for Brian.hears classes
Defines a buffering interface of two methods:
In addition, bufferable objects should define attributes:
By default, the class will define a default buffering mechanism which can easily be extended. To extend the default buffering mechanism, simply implement the method:
The default methods for buffer_init() and buffer_fetch() will define a buffer cache which will get larger if it needs to accommodate a buffer_fetch(start, end) where end-start is larger than the current cache. If the filterbank has a minimum_buffer_size attribute, the internal cache will always have at least this size, and the buffer_fetch_next(samples) method will always get called with samples>=minimum_buffer_size. This can be useful to ensure that the buffering is done efficiently internally, even if the user requests buffered chunks that are too small. If the filterbank has a maximum_buffer_size attribute then buffer_fetch_next(samples) will always be called with samples<=maximum_buffer_size - this can be useful either for memory consumption reasons or for implementing time-varying filters that need to update on a shorter time window than the overall buffer size.
The following attributes will automatically be maintained:
Generalised filterbank object
Documentation common to all filterbanks
Filterbanks all share a few basic attributes:
The source of the filterbank, a Bufferable object, e.g. another Filterbank or a Sound. It can also be a tuple of sources. Can be changed after the object is created, although note that for some filterbanks this may cause problems if they do make assumptions about the input based on the first source object they were passed. If this is causing problems, you can insert a dummy filterbank (DoNothingFilterbank) which is guaranteed to work if you change the source.
The number of channels.
The sample rate.
The duration of the filterbank. If it is not specified by the user, it is computed by finding the maximum of its source durations. If these are not specified a KeyError will be raised (for example, using OnlineSound as a source).
To process the output of a filterbank, the following method can be used:
Returns the output of the filterbank for the given duration.
For example, to compute the RMS of each channel in a filterbank, you would do:
def sum_of_squares(input, running_sum_of_squares):
return running_sum_of_squares+sum(input**2, axis=0)
rms = sqrt(fb.process(sum_of_squares)/nsamples)
Alternatively, the buffer interface can be used, which is described in more detail below.
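The same running-reduction pattern can be checked with plain numpy, independently of Brian hears: accumulating sums of squares over buffered chunks gives exactly the RMS of the whole signal.

```python
import numpy as np

rng = np.random.RandomState(0)
signal = rng.randn(1000, 4)        # shape (nsamples, nchannels)

# Accumulate the sum of squares chunk by chunk, as sum_of_squares does.
running = np.zeros(4)
for start in range(0, 1000, 128):
    chunk = signal[start:start + 128]
    running = running + np.sum(chunk**2, axis=0)

rms_chunked = np.sqrt(running / 1000)
rms_full = np.sqrt(np.mean(signal**2, axis=0))
assert np.allclose(rms_chunked, rms_full)
```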
Filterbank also defines arithmetical operations for +, -, *, / where the other operand can be a filterbank or scalar.
Details on the class
This class is a base class not designed to be instantiated. A Filterbank object should define the interface of Bufferable, as well as defining a source attribute. This is normally a Bufferable object, but could be an iterable of sources (for example, for filterbanks that mix or add multiple inputs).
The buffer_fetch_next(samples) method has a default implementation that fetches the next input, and calls the buffer_apply(input) method on it, which can be overridden by a derived class. This is typically the easiest way to implement a new filterbank. Filterbanks with multiple sources will need to override this default implementation.
There is a default __init__ method that can be called by a derived class that sets the source, nchannels and samplerate from that of the source object. For multiple sources, the default implementation will check that each source has the same number of channels and samplerate and will raise an error if not.
There is a default buffer_init() method that calls buffer_init() on the source (or list of sources).
Example of deriving a class
The following class takes N input channels and sums them to a single output channel:
class AccumulateFilterbank(Filterbank):
def __init__(self, source):
Filterbank.__init__(self, source)
self.nchannels = 1
def buffer_apply(self, input):
return reshape(sum(input, axis=1), (input.shape[0], 1))
Note that the default Filterbank.__init__ will set the number of channels equal to the number of source channels, but we want to change it to have a single output channel. We use the buffer_apply method, which automatically handles the efficient caching of the buffer for us. The method receives the array input, which has shape (bufsize, nchannels), and sums over the channels (axis=1). It's important to reshape the output so that it has shape (bufsize, outputnchannels) so that it can be used as the input to subsequent filterbanks.
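What buffer_apply computes can be seen on a plain array (numpy only, no Brian hears):

```python
import numpy as np

buf = np.arange(12.0).reshape(4, 3)  # (bufsize=4, nchannels=3)

# Sum over channels and reshape to (bufsize, 1), as in buffer_apply above.
out = np.reshape(np.sum(buf, axis=1), (buf.shape[0], 1))
assert out.shape == (4, 1)           # one output channel per sample
```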
Base class for Sound and OnlineSound
There are several relevant global options for Brian hears. In particular, activating scipy Weave support with the useweave=True preference will give considerably faster linear filtering. See Preferences and Compiled code for more details.
The basic principles of developing Brian are:
Coding conventions. We use the PEP-8 coding conventions for our code. Syntax is chosen as much as possible from the user point of view, to reflect the concepts as directly as possible. Ideally, a Brian script should be readable by someone who doesn’t know Python or Brian, although this isn’t always possible. Function and class names should be explicit rather than abbreviated.
Documentation. It is very important to maintain documentation. We use the Sphinx documentation generator tools. The documentation is all hand written. Sphinx source files are stored in the docs_sphinx folder in the repository, and compiled HTML files are stored in the docs folder. Most of the documentation is stored directly in the Sphinx source text files, but reference documentation for important Brian classes and functions are kept in the documentation strings of those classes themselves. This is automatically pulled from these classes for the reference manual section of the documentation. The idea is to keep the definitive reference documentation near the code that it documents, serving as both a comment for the code itself, and to keep the documentation up to date with the code.
In the code, every class or function should start with an explanation of what it does, unless it is trivial. A good idea is to use explicit names rather than abbreviations, so that you instantly understand what it is about. Inside a function, important chunks should also be commented.
Testing. Brian uses the nose package for its testing framework. Tests are stored in the brian/tests directory. Tests associated to a Brian module are stored in brian/tests/testinterface and tests of the mathematical correctness of Brian’s algorithms are stored in brian/tests/testcorrectness.
Errors. It is a good idea to start an important function (e.g. object initialisation) with a check of the arguments, and possibly issue errors. This way errors are more understandable by the user.
Enhancements. Brian uses a system parallel to the Python Enhancement Proposals (PEPs) system for Python, called Brian Enhancement Proposals (BEPs). These are stored in dev/BEPs. Ideas for new functionality for Brian are put in here for comment and discussion. A BEP typically includes:
We also use the Brian development mailing list.
First of all, you should register to the developers mailing list. If you want to modify existing modules, you should make sure that you work on the latest SVN version. We use the Eclipse IDE because it has a nice Python plugin (Pydev) and SVN plugin, but of course you can use your preferred IDE. The next step is to carefully read the guidelines in this guide.
Now that you wrote your code:
From that point, your patch will either be directly included in the svn or (more likely) will be first discussed in the mailing list.
New modules. New Brian modules typically start in the dev/ideas folder, then go to brian/experimental when they start looking like modules. They move to the main folder when they are stable (especially the user syntax).
M. Diesmann et al. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529-533.
from brian import *
# Neuron model parameters
Vr = -70 * mV
Vt = -55 * mV
taum = 10 * ms
taupsp = 0.325 * ms
weight = 4.86 * mV
# Neuron model
eqs = '''
dV/dt=(-(V-Vr)+x)*(1./taum) : volt
dx/dt=(-x+y)*(1./taupsp) : volt
dy/dt=-y*(1./taupsp)+25.27*mV/ms+\
(39.24*mV/ms**0.5)*xi : volt
'''
# Neuron groups
P = NeuronGroup(N=1000, model=eqs,
threshold=Vt, reset=Vr, refractory=1 * ms)
Pinput = PulsePacket(t=50 * ms, n=85, sigma=1 * ms)
# The network structure
Pgp = [ P.subgroup(100) for i in range(10)]
C = Synapses(P, P, model='w:volt', pre='y+=w')
for i in range(9):
C[Pgp[i], Pgp[i + 1]]=True
C.w[Pgp[i], Pgp[i + 1]]=weight
Cinput = Synapses(Pinput, Pgp[0], model='w:volt', pre='y+=w')
Cinput[:,:]=True
Cinput.w[:,:]=weight
# Record the spikes
Mgp = [SpikeMonitor(p) for p in Pgp]
Minput = SpikeMonitor(Pinput)
monitors = [Minput] + Mgp
# Setup the network, and run it
P.V = Vr + rand(len(P)) * (Vt - Vr)
run(100 * ms)
# Plot result
raster_plot(showgrouplines=True, *monitors)
show()
Brian is a clock-based simulator: operations are done synchronously at each tick of a clock.
Many Brian objects store a clock object, passed in the initialiser with the optional keyword clock. For example, to simulate a neuron group with time step dt=1 ms:
myclock=Clock(dt=1*ms)
group=NeuronGroup(100,model='dx/dt=1*mV/ms : volt',clock=myclock)
If no clock is specified, the program uses the global default clock. When Brian is initially imported, this is the object defaultclock, and it has a default time step of 0.1 ms. In a simple script, you can override this by writing (for example):
defaultclock.dt = 1*ms
You may wish to use multiple clocks in your program. In this case, for each object which requires one, you have to pass a copy of its Clock object. The network run function automatically handles objects with different clocks, updating them all at the appropriate time according to their time steps (value of dt).
Multiple clocks can be useful, for example, for defining a simulation that runs with a very small dt, but with some computationally expensive operation running at a lower frequency. In the following example, the model is simulated with dt=0.01 ms and the variable x is recorded every ms:
simulation_clock=Clock(dt=0.01*ms)
record_clock=Clock(dt=1*ms)
group=NeuronGroup(100,model='dx/dt=-x/tau : volt',clock=simulation_clock)
M=StateMonitor(group,'x',record=True,clock=record_clock)
The current time of a clock is stored in the attribute t (simulation_clock.t) and the timestep is stored in the attribute dt.
When using multiple clocks, it can be important to specify the order in which they are evaluated, which you can do using the order keyword of the Clock object, e.g.:
clock_first = Clock(dt=1*ms, order=0)
clock_second = Clock(dt=5*ms, order=1)
Every 5ms, these two clocks will coincide, and the order attribute means that clock_first will always be evaluated before clock_second.
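The coincidence pattern of the two clocks above can be checked directly (plain Python, times in ms):

```python
# Tick times over 20 ms of simulation for dt=1 ms and dt=5 ms clocks.
ticks_first = set(range(0, 20, 1))   # clock_first: every 1 ms
ticks_second = set(range(0, 20, 5))  # clock_second: every 5 ms

# The clocks coincide every 5 ms; at those times the order attribute
# determines that clock_first is evaluated before clock_second.
coincide = sorted(ticks_first & ticks_second)
assert coincide == [0, 5, 10, 15]
```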
The default clock uses an underlying integer representation. This behaviour was changed in Brian 1.3 from earlier versions which used a float representation. To recover the earlier behaviour if it is important, you can use FloatClock or NaiveClock.
You may want to have events that happen at regular times, but still want to use the default clock for all other objects, in which case you can use the EventClock for a network_operation() and it will not create any clock ambiguities, e.g.:
from brian import *
...
G = NeuronGroup(N, eqs, ...)
...
@network_operation(clock=EventClock(dt=1*second))
def do_something():
...
Tests if two scalar values have the same dimensions, returns a bool.
Note that the syntax may change in later releases of Brian, with tighter integration of scalar and array valued quantities.
Tests if a scalar value is dimensionless or not, returns a bool.
Note that the syntax may change in later releases of Brian, with tighter integration of scalar and array valued quantities.
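The semantics of these dimension checks can be sketched in plain Python (an illustration of what they test, not Brian's implementation): represent a dimension as a tuple of exponents of the SI base units and compare tuples.

```python
# Hedged sketch of dimension bookkeeping (the semantics of the checks above,
# not Brian's implementation): a dimension is a tuple of exponents of the
# seven SI base units (m, kg, s, A, K, mol, cd).
VOLT = (2, 1, -3, -1, 0, 0, 0)   # m^2 kg s^-3 A^-1
AMP = (0, 0, 0, 1, 0, 0, 0)
ONE = (0, 0, 0, 0, 0, 0, 0)      # dimensionless

def have_same_dims(d1, d2):
    # consistent quantities have identical exponent tuples
    return d1 == d2

def is_dimensionless(d):
    # dimensionless means every exponent is zero
    return all(e == 0 for e in d)

have_same_dims(VOLT, AMP)   # False: adding volts to amps is an error
is_dimensionless(ONE)       # True
```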
Exception class for attempted operations with inconsistent dimensions
For example, 3*mvolt + 2*amp raises this exception. The purpose of this class is to help catch errors based on incorrect units. The exception will print a representation of the dimensions of the two inconsistent objects that were operated on. If you want to check for inconsistent units in your code, do something like:
try:
...
your code here
...
except DimensionMismatchError, inst:
...
cleanup code here, e.g.
print "Found dimension mismatch, details:", inst
...
Decorator to check units of arguments passed to a function
Sample usage:
@check_units(I=amp,R=ohm,wibble=metre,result=volt)
def getvoltage(I,R,**k):
return I*R
You don’t have to check the units of every variable in the function, and you can define what the units should be for variables that aren’t explicitly named in the definition of the function. For example, the code above checks that the variable wibble should be a length, so writing:
getvoltage(1*amp,1*ohm,wibble=1)
would fail, but:
getvoltage(1*amp,1*ohm,wibble=1*metre)
would pass. String arguments are not checked (e.g. getvoltage(wibble='hello') would pass).
The special name result is for the return value of the function.
An error in the input value raises a DimensionMismatchError, and an error in the return value raises an AssertionError (because it is a code problem rather than a value problem).
Notes
This decorator will destroy the signature of the original function, and replace it with the signature (*args, **kwds). Other decorators will do the same thing, and this decorator critically needs to know the signature of the function it is acting on, so it is important that it is the first decorator to act on a function. It cannot be used in combination with another decorator that also needs to know the signature of the function.
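A minimal sketch of how such a decorator can work (illustration only, not Brian's code; for simplicity it models quantities as hypothetical (value, unit-name) pairs and only handles keyword arguments, whereas the real decorator inspects the function signature to handle positional arguments too):

```python
# Sketch of a check_units-style decorator (illustration, NOT Brian's code).
# Quantities are modelled as (value, unit-name) pairs; an input mismatch
# raises a DimensionMismatch-style error, a result mismatch an AssertionError.
class DimensionMismatch(Exception):
    pass

def check_units_sketch(**expected):
    def decorator(f):
        def wrapper(**kwds):
            for name, value in kwds.items():
                if name in expected and value[1] != expected[name]:
                    raise DimensionMismatch(name)
            result = f(**kwds)
            if 'result' in expected:
                # a bad return value is a code problem, hence an assertion
                assert result[1] == expected['result'], 'bad return units'
            return result
        return wrapper
    return decorator

@check_units_sketch(I='amp', R='ohm', result='volt')
def getvoltage(I, R):
    return (I[0] * R[0], 'volt')

v = getvoltage(I=(1.0, 'amp'), R=(2.0, 'ohm'))   # passes: (2.0, 'volt')
```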
Typically, you shouldn’t need to use any details about the following two classes, and their implementations are subject to change in future releases of Brian.
A number with an associated physical dimension.
In most cases, it is not necessary to create a Quantity object by hand; instead, use the constant unit names second, kilogram, etc. The details of how Quantity objects work are subject to change in future releases of Brian, as we plan to reimplement them in a more efficient manner, more tightly integrated with numpy. The following can be safely used:
A physical unit
Normally, you do not need to worry about the implementation of units. They are derived from the Quantity object with some additional information (name and string representation). You can define new units which will be used when generating string representations of quantities simply by doing an arithmetical operation with only units, for example:
Nm = newton * metre
Note that operations with units are slower than operations with Quantity objects, so for efficiency if you do not need the extra information that a Unit object carries around, write 1*second in preference to second.
Adaptive exponential integrate-and-fire model. http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model
Introduced in Brette R. and Gerstner W. (2005), Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity, J. Neurophysiol. 94: 3637 - 3642.
from brian import *
# Parameters
C = 281 * pF
gL = 30 * nS
taum = C / gL
EL = -70.6 * mV
VT = -50.4 * mV
DeltaT = 2 * mV
Vcut = VT + 5 * DeltaT
# Pick an electrophysiological behaviour
tauw, a, b, Vr = 144 * ms, 4 * nS, 0.0805 * nA, -70.6 * mV # Regular spiking (as in the paper)
#tauw,a,b,Vr=20*ms,4*nS,0.5*nA,VT+5*mV # Bursting
#tauw,a,b,Vr=144*ms,2*C/(144*ms),0*nA,-70.6*mV # Fast spiking
eqs = """
dvm/dt=(gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT)+I-w)/C : volt
dw/dt=(a*(vm-EL)-w)/tauw : amp
I : amp
"""
neuron = NeuronGroup(1, model=eqs, threshold=Vcut, reset="vm=Vr;w+=b", freeze=True)
neuron.vm = EL
trace = StateMonitor(neuron, 'vm', record=0)
spikes = SpikeMonitor(neuron)
run(20 * ms)
neuron.I = 1 * nA
run(100 * ms)
neuron.I = 0 * nA
run(20 * ms)
# We draw nicer spikes
vm = trace[0]
for _, t in spikes.spikes:
i = int(t / defaultclock.dt)
vm[i] = 20 * mV
plot(trace.times / ms, vm / mV)
show()
Realtime plotting example
# These lines are necessary for interactive plotting when launching from the
# Eclipse IDE, they may not be necessary in every environment.
import matplotlib
matplotlib.use('WXAgg') # You may need to experiment, try WXAgg, GTKAgg, QTAgg, TkAgg
from brian import *
###### Set up the standard CUBA example ######
N = 4000
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(N, eqs, threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + 10 * mV * rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=0.02)
M = SpikeMonitor(P)
trace = RecentStateMonitor(P, 'v', record=range(5), duration=200 * ms)
ion()
subplot(211)
raster_plot(M, refresh=10 * ms, showlast=200 * ms, redraw=False)
subplot(212)
trace.plot(refresh=10 * ms, showlast=200 * ms)
run(1 * second)
ioff() # switch interactive mode off
show() # and wait for user to close the window before shutting down
This section is intended as a guide to how Brian functions internally for people developing Brian itself, or extensions to Brian. It may also be of some interest to others wishing to better understand how Brian works internally.
Example of the use of the class Butterworth available in the library. In this example, a white noise is filtered by a bank of Butterworth bandpass and lowpass filters, with different parameters for each channel. The centre (or cutoff) frequencies of the filters are spaced linearly between 100 Hz and 1000 Hz, and the bandwidth increases linearly with frequency.
from brian import *
from brian.hears import *
level = 50*dB # level of the input sound in rms dB SPL
sound = whitenoise(100*ms).ramp()
sound = sound.atlevel(level)
order = 2 #order of the filters
#### example of a bank of bandpass filter ################
nchannels = 50
center_frequencies = linspace(100*Hz, 1000*Hz, nchannels)
bw = linspace(50*Hz, 300*Hz, nchannels) # bandwidth of the filters
#arrays of shape (2 x nchannels) defining the passband frequencies (Hz)
fc = vstack((center_frequencies-bw/2, center_frequencies+bw/2))
filterbank = Butterworth(sound, nchannels, order, fc, 'bandpass')
filterbank_mon = filterbank.process()
figure()
subplot(211)
imshow(flipud(filterbank_mon.T), aspect='auto')
### example of a bank of lowpass filter ################
nchannels = 50
cutoff_frequencies = linspace(200*Hz, 1000*Hz, nchannels)
filterbank = Butterworth(sound, nchannels, order, cutoff_frequencies, 'low')
filterbank_mon = filterbank.process()
subplot(212)
imshow(flipud(filterbank_mon.T), aspect='auto')
show()
Some standard types of NeuronGroup have already been defined: PoissonGroup and PoissonInput generate spikes with Poisson statistics, PulsePacket generates pulse packets with specified parameters, and SpikeGeneratorGroup generates spikes which fire at prespecified times.
A group that generates independent Poisson spike trains.
Initialised as:
PoissonGroup(N,rates[,clock])
with arguments:
Adds a Poisson input to a NeuronGroup. This allows a large number of independent Poisson inputs to a NeuronGroup variable to be simulated efficiently, without simulating every synapse individually. The synaptic events are generated randomly during the simulation and are not preloaded and stored in memory (unless record=True is used). All the inputs must target the same variable and have the same frequency and synaptic weight. You can use as many PoissonInput objects as you want, even targeting the same NeuronGroup. Time jitter in the presynaptic spikes and synaptic unreliability can also be modelled, and the inputs can be recorded if needed. Finally, all neurons in the NeuronGroup receive independent realizations of the Poisson spike trains, unless the keyword freeze=True is used, in which case all neurons receive the same Poisson input.
Initialised as:
PoissonInput(target[, N][, rate][, weight][, state][, jitter][, reliability][, copies][, record][, freeze])
with arguments:
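The efficiency gain behind PoissonInput can be sketched in plain Python (an illustration of the principle, not Brian's implementation): instead of simulating N synapses individually, draw the total number of presynaptic events per timestep from a binomial distribution and apply the summed weight in one operation.

```python
import random

# Sketch of the principle behind PoissonInput (NOT Brian's code):
# the number of presynaptic events per timestep follows Binomial(N, rate*dt),
# so one draw replaces the simulation of N individual synapses.
def poisson_input_step(N, rate_hz, dt_s, weight, rng):
    p = rate_hz * dt_s  # spike probability per synapse per timestep
    count = sum(1 for _ in range(N) if rng.random() < p)  # Binomial(N, p)
    return weight * count  # single update of the target variable

rng = random.Random(42)
dt = 0.1e-3            # 0.1 ms timestep
v = 0.0
for _ in range(1000):  # simulate 100 ms of input
    v += poisson_input_step(N=1000, rate_hz=5.0, dt_s=dt, weight=0.01, rng=rng)
# expected total drive: N * rate * duration * weight = 1000*5*0.1*0.01 = 5
```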
Fires a Gaussian distributed packet of n spikes with given spread
Initialised as:
PulsePacket(t,n,sigma[,clock])
with arguments:
Methods
This class is derived from SpikeGeneratorGroup and has all its methods as well as one additional method:
Change the parameters and/or generate a new pulse packet.
Emits spikes at given times
Initialised as:
SpikeGeneratorGroup(N,spiketimes[,clock[,period]])
with arguments:
Has an attribute:
Sample usages
The simplest usage would be a list of pairs (i,t):
spiketimes = [(0,1*ms), (1,2*ms)]
SpikeGeneratorGroup(N,spiketimes)
A more complicated example would be to pass a generator:
import random
def nextspike():
nexttime = random.uniform(0*ms,10*ms)
while True:
yield (random.randint(0,9),nexttime)
nexttime = nexttime + random.uniform(0*ms,10*ms)
P = SpikeGeneratorGroup(10,nextspike())
This would give a neuron group P with 10 neurons, where a random one of the neurons fires at an average rate of one spike every 5 ms. Note that as of Brian 1.3.1 this behaviour is preserved, but initializing with a generator runs more slowly than initializing with arrays or lists.
Notes
Note that if a neuron fires more than one spike in a given interval dt, additional spikes will be discarded. A warning will be issued if this is detected.
Also, if you want to use a SpikeGeneratorGroup with many spikes and/or neurons, please use an initialization with arrays.
Also note that if you pass a generator, then reinitialising the group will not have the expected effect because a generator object cannot be reinitialised. Instead, you should pass a callable object which returns a generator. In the example above, that would be done by calling:
P = SpikeGeneratorGroup(10,nextspike)
Whenever P is reinitialised, it will call nextspike() to create the required spike container.
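The reason a generator cannot be reinitialised, while a callable returning a generator can, is plain Python behaviour (hypothetical spike times for illustration):

```python
# Plain-Python demonstration of why a generator cannot be reinitialised,
# while a callable that returns a generator can.
def nextspike_sketch():
    t = 0.0
    for i in range(3):
        t += 1.0
        yield (i, t)

g = nextspike_sketch()
first = list(g)                   # three (neuron, time) pairs
second = list(g)                  # [] : the generator is exhausted for good
fresh = list(nextspike_sketch())  # calling the callable gives a fresh one
```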
Emits spikes at given times
Warning
This function has been deprecated after Brian 1.3.1 and will be removed in a future release. Use SpikeGeneratorGroup instead. To convert spiketimes for MultipleSpikeGeneratorGroup into a form suitable for SpikeGeneratorGroup, do:
N = len(spiketimes)
spiketimes = [(i, t) for i in xrange(N) for t in spiketimes[i]]
Initialised as:
MultipleSpikeGeneratorGroup(spiketimes[,clock[,period]])
with arguments:
Note that if two or more spike times fall within the same dt, spikes will stack up and come out one per dt until the stack is exhausted. A warning will be generated if this happens.
Also note that if you pass a generator, then reinitialising the group will not have the expected effect because a generator object cannot be reinitialised. Instead, you should pass a callable object which returns a generator, this will be called each time the object is reinitialised by calling the reinit() method.
Sample usage:
spiketimes = [[1*msecond, 2*msecond]]
P = MultipleSpikeGeneratorGroup(spiketimes)
Figure 1F. (simulation takes about 10 mins)
Coincidence detection is seen as a signal detection problem, that of detecting a given depolarization in a background of noise, within one characteristic time constant. The choice of the spike threshold implements a particular trade-off between false alarms (depolarization was due to noise) and misses (depolarization was not detected).
Caption (Fig 1F). Receiver-operating characteristic (ROC) for one level of noise, obtained by varying the threshold (black curve). The hit rate is the probability that the neuron fires within one integration time constant t when depolarized by Dv, and the false alarm rate is the firing probability without depolarization. The corresponding theoretical curve, with sensitivity index d' = Dv/sigma, is shown in red.
from brian import *
from scipy.special import erf
def spike_probability(x): # firing probability for unit variance and zero mean, and threshold = x
return .5*(1-erf(x/sqrt(2.)))
tau_cd=5*ms # membrane time constant (cd for coincidence detector)
tau_n=tau_cd # input is an Ornstein-Uhlenbeck process with the same time constant as the membrane time constant
T=3*tau_n # neurons are depolarized by w at regular intervals, T is the spacing
Nspikes=10000 # number of input spikes
T0=T*Nspikes # initial period without inputs, to calculate the false alarm rate
N=500 # number of neurons, each neuron has a different threshold between 0. and 3.
w=1 # synaptic weight (depolarization)
sigma=1. # input noise s.d.
sigmav=sigma*sqrt(tau_n/(tau_n+tau_cd)) # noise s.d. on the membrane potential
print "d'=",1./sigmav # discriminability index
# Integrate-and-fire neurons
eqs='''
dv/dt=(sigma*n-v)/tau_cd : 1
dn/dt=-n/tau_n+(2/tau_n)**.5*xi : 1
vt : 1 # spike threshold
'''
neurons=NeuronGroup(N,model=eqs,threshold='v>vt',reset='v=0',refractory=tau_cd)
neurons.vt=linspace(0.,3,N) # spike threshold varies across neurons
counter=SpikeCounter(neurons)
# Inputs are regular spikes, starting at T0
input=SpikeGeneratorGroup(1,[(0,n*T+T0) for n in range(Nspikes)])
C=Connection(input,neurons,'v',weight=w)
# Calculate the false alarm rate
run(T0,report='text')
FR=tau_cd*counter.count*1./T0
# Calculate the hit rate
counter.reinit()
run(Nspikes*T,report='text')
HR=counter.count*1./Nspikes-FR*(T-tau_cd)/tau_cd
# Prediction based on Gaussian statistics
FRpred=spike_probability(neurons.vt/sigmav)
HRpred=spike_probability((neurons.vt-w)/sigmav)
# Figure
plot(FR*100,HR*100,'k') # simulations
plot(FRpred*100,HRpred*100,'r') # theoretical predictions
plot([0,100],[0,100],'k--')
plot([0,100],[50,50],'k--')
xlim(0,100)
ylim(0,100)
xlabel('False alarm rate (%)')
ylabel('Hit rate (%)')
show()
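The theoretical prediction used above (red curve) can be checked in isolation with a few lines of plain Python; spike_probability is the same Gaussian tail formula as in the script, and sigmav is taken as 1 for the sketch:

```python
from math import erf, sqrt

# The red theoretical ROC curve follows from Gaussian statistics alone;
# this is the same tail-probability formula as in the script above.
def spike_probability(x):
    return .5 * (1 - erf(x / sqrt(2.)))

sigmav = 1.0   # membrane noise s.d. (unit variance assumed for the sketch)
w = 1.0        # depolarization Dv, so d' = w/sigmav = 1
roc = [(spike_probability(vt / sigmav),          # false alarm rate
        spike_probability((vt - w) / sigmav))    # hit rate
       for vt in (0.0, 0.5, 1.0, 2.0, 3.0)]
# the hit rate exceeds the false alarm rate at every threshold when w > 0
```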
Example of the use of the function spike_triggered_average. A white noise is filtered by a Gaussian filter (a low-pass filter), whose output is used to generate spikes (a Poisson process). Those spikes are used in conjunction with the input signal to recover the filter function.
from brian import *
from brian.hears import *
from numpy.random import randn
from numpy.linalg import norm
from matplotlib import pyplot
dt = 0.1*ms
defaultclock.dt = dt
stimulus_duration = 15000*ms
stimulus = randn(int(stimulus_duration/ dt))
#filter
n=200
filt = exp(-((linspace(0.5,n,n))-(n+5)/2)**2/(n/3))
filt = filt/norm(filt)*1000
filtered_stimulus = convolve(stimulus,filt)
neuron = PoissonGroup(1,lambda t:filtered_stimulus[int(t/dt)])
spikes = SpikeMonitor(neuron)
run(stimulus_duration,report='text')
spikes = spikes[0] #resulting spikes
max_interval = 20*ms #window duration of the spike triggered average
onset = 10*ms
sta,time_axis = spike_triggered_average(spikes,stimulus,max_interval,dt,onset=onset,display=True)
figure()
plot(time_axis,filt/max(filt))
plot(time_axis,sta/max(sta))
xlabel('time axis')
ylabel('sta')
legend(('real filter','estimated filter'))
show()
Fig. 14 from:
Rossant C, Leijon S, Magnusson AK, Brette R (2011). “Sensitivity of noisy neurons to coincident inputs”. Journal of Neuroscience, 31(47).
5000 independent E/I Poisson inputs are injected into a leaky integrate-and-fire neuron. Synchronous events, following an independent Poisson process at 40 Hz, are considered, where 15 E Poisson spikes are randomly shifted to be synchronous at those events. The output firing rate is then significantly higher, showing that the spike timing of less than 1% of the excitatory synapses has an important impact on the postsynaptic firing.
from brian import *
# neuron parameters
theta = -55*mV
El = -65*mV
vmean = -65*mV
taum = 5*ms
taue = 3*ms
taui = 10*ms
eqs = Equations("""
dv/dt = (ge+gi-(v-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
""")
# input parameters
p = 15
ne = 4000
ni = 1000
lambdac = 40*Hz
lambdae = lambdai = 1*Hz
# synapse parameters
we = .5*mV/(taum/taue)**(taum/(taue-taum))
wi = (vmean-El-lambdae*ne*we*taue)/(lambdae*ni*taui)
# NeuronGroup definition
group = NeuronGroup(N=2, model=eqs, reset=El, threshold=theta, refractory=5*ms)
group.v = El
group.ge = group.gi = 0
# independent E/I Poisson inputs
p1 = PoissonInput(group[0], N=ne, rate=lambdae, weight=we, state='ge')
p2 = PoissonInput(group[0], N=ni, rate=lambdai, weight=wi, state='gi')
# independent E/I Poisson inputs + synchronous E events
p3 = PoissonInput(group[1], N=ne, rate=lambdae-(p*1.0/ne)*lambdac, weight=we, state='ge')
p4 = PoissonInput(group[1], N=ni, rate=lambdai, weight=wi, state='gi')
p5 = PoissonInput(group[1], N=1, rate=lambdac, weight=p*we, state='ge')
# run the simulation
reinit_default_clock()
M = SpikeMonitor(group)
SM = StateMonitor(group, 'v', record=True)
run(1*second)
# plot trace and spikes
for i in [0,1]:
spikes = M.spiketimes[i]-.0001
val = SM.values[i]
subplot(2,1,i+1)
plot(SM.times, val)
plot(tile(spikes, (2,1)),
vstack((val[array(spikes*10000, dtype=int)],
zeros(len(spikes)))), 'b')
title("%s: %d spikes/second" % (["uncorrelated inputs", "correlated inputs"][i],
len(M.spiketimes[i])))
show()
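The rate bookkeeping in the correlated condition can be verified arithmetically: the background rate is reduced by (p/ne)*lambdac per synapse exactly so that the p*lambdac synchronous events leave the total excitatory event rate unchanged.

```python
# total excitatory event rate, uncorrelated condition (events/second)
ne, p = 4000, 15
lambdae, lambdac = 1.0, 40.0                     # Hz
uncorrelated_rate = ne * lambdae
# correlated condition: reduced background rate + synchronous events
correlated_rate = ne * (lambdae - (p / ne) * lambdac) + p * lambdac
# both equal 4000 events/s: only the timing statistics differ
```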
This example shows how to efficiently simulate neurons with a large number of Poisson inputs targeting arbitrarily complex synapses. The approach is very similar to what the PoissonInput class does internally, but PoissonInput cannot be combined with the Synapses class. You could also just use many PoissonGroup objects as inputs, but this is very slow and memory-consuming.
from brian import *
# Poisson inputs
M = 1000 # number of Poisson inputs
max_rate = 100
# Neurons
N = 50 # number of neurons
tau = 10 * ms
E_exc = 0 * mV
E_L = -70 * mV
G = NeuronGroup(N, model='dvm/dt = -(vm - E_L)/tau : mV')
G.rest()
# Dummy neuron group
P = NeuronGroup(1, 'v : 1', threshold= -1, reset=0) # spikes every timestep
# time varying rate
def varying_rate(t):
return defaultclock.dt * max_rate * (0.5 + 0.5 * sin(2 * pi * 5 * t))
# Synaptic connections: binomial(cellM, varying_rate(t)) gives the number of
# events per timestep. The synapse model is a conductance-based instantaneous
# jump in postsynaptic membrane potential
S = Synapses(P, G, model='''
J : 1
cellM : 1
''',
pre='vm += binomial(cellM, varying_rate(t)) * J * (E_exc - vm)')
S[:, :] = True
S.cellM = M #we need one value for M per cell, so that binomial is vectorized
S.J = 0.0005
mon = StateMonitor(G, 'vm', record=True)
run(1 * second, report='text')
mon.plot()
show()
In the previous part of the tutorial, all the neurons start at the same values and proceed deterministically, so they all spike at exactly the same times. In this part, we introduce some randomness by initialising all the membrane potentials to uniform random values between the reset and threshold values.
We start as before:
from brian import *
tau = 20 * msecond # membrane time constant
Vt = -50 * mvolt # spike threshold
Vr = -60 * mvolt # reset value
El = -49 * mvolt # resting potential (same as the reset)
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt', threshold=Vt, reset=Vr)
M = SpikeMonitor(G)
But before we run the simulation, we set the values of the membrane potentials directly. The notation G.V refers to the array of values for the variable V in group G. In our case, this is an array of length 40. We set its values using Brian's rand function: rand(size) generates an array of length size consisting of uniformly distributed random numbers in the interval [0, 1).
G.V = Vr + rand(40) * (Vt - Vr)
And now we run the simulation as before.
run(1 * second)
print M.nspikes
But this time we get a varying number of spikes each time we run it, roughly between 800 and 850 spikes. In the next part of this tutorial, we introduce a bit more interest into this network by connecting the neurons together.
Setting and getting global preferences is done with the following functions:
Set global preferences for Brian
Usage:
set_global_preferences(...)
where ... is a list of keyword assignments.
Get the value of the named global preference
If you have a module named brian_global_config anywhere on your Python path, Brian will attempt to import it to define global preferences. For example, to automatically enable weave compilation for all your Brian projects, create a file brian_global_config.py somewhere in the Python path with the following contents:
from brian.globalprefs import *
set_global_preferences(useweave=True)
The following global preferences have been defined:
Input-frequency curve of a neuron (cortical RS type). Network: 1000 unconnected integrate-and-fire neurons (Brette-Gerstner) with an input parameter I, which is set differently for each neuron. Spikes are sent to a 'neuron' group of the same size with variable n, which plays the role of a spike counter.
from brian import *
from brian.library.IF import *
N = 1000
eqs = Brette_Gerstner() + Current('I:amp')
print eqs
group = NeuronGroup(N, model=eqs, threshold= -20 * mV, reset=AdaptiveReset())
group.vm = -70 * mV
group.I = linspace(0 * nA, 1 * nA, N)
counter = NeuronGroup(N, model='n:1')
C = IdentityConnection(group, counter, 'n')
i = N * 8 / 10
trace = StateMonitor(group, 'vm', record=i)
duration = 5 * second
run(duration)
subplot(211)
plot(group.I / nA, counter.n / duration)
xlabel('I (nA)')
ylabel('Firing rate (Hz)')
subplot(212)
plot(trace.times / ms, trace[i] / mV)
xlabel('Time (ms)')
ylabel('Vm (mV)')
show()
Figure 1C,D. Duration selectivity. (takes about 1 min)
Caption (Fig. 1C,D). A postsynaptic neuron receives inputs from A and B. It is more likely to fire when the stimulus is in the synchrony receptive field of A and B.
from brian import *
# Parameters and equations of the rebound neurons
Vt=-55*mV
Vr=-70*mV
El=-35*mV
EK=-90*mV
Va=Vr
ka=5*mV
gmax2=2
tau=20*ms
ginh_max=5.
tauK2=100*ms
N=10000 # number of neurons (= different durations)
rest_time=1*second # initial time
tmin=rest_time-20*ms #for plots
tmax=rest_time+600*ms
eqs='''
dv/dt=(El-v+(gmax*gK+gmax2*gK2+ginh)*(EK-v))/tau : volt
dgK/dt=(gKinf-gK)/tauK : 1 # IKLT
dgK2/dt=-gK2/tauK2 : 1 # Delayed rectifier
gKinf=1./(1+exp((Va-v)/ka)) : 1
duration : second
ginh = ginh_max*((t>rest_time) & (t<(rest_time+duration))) : 1
tauK : ms
gmax : 1
theta : volt # threshold
'''
neurons=NeuronGroup(2*N,model=eqs,threshold='v>theta',reset='v=Vr;gK2=1')
neurons.v=Vr
neurons.theta=Vt
neurons.gK=1./(1+exp((Va-El)/ka))
# Neuron A, duplicated to simulate multiple input durations simultaneously
neuronsA=neurons[:N]
neuronsA.tauK=400*ms
neuronsA.gmax=1
neuronsA.theta=-55*mV
neuronsA.duration=linspace(100*ms,1*second,N)
# Neuron B, duplicated to simulate multiple input durations simultaneously
neuronsB=neurons[N:]
neuronsB.tauK=100*ms
neuronsB.gmax=1.5
neuronsB.theta=-54*mV
neuronsB.duration=linspace(100*ms,1*second,N)
# Noisy coincidence detectors
tau_cd=5*ms
tau_n=tau_cd
sigma=0.2 # noise s.d. in units of the threshold
eqs_post='''
dv/dt=(n-v)/tau_cd : 1
dn/dt=-n/tau_n+sigma*(2/tau_n)**.5*xi : 1
'''
postneurons=NeuronGroup(N,model=eqs_post,threshold=1,reset=0)
CA=IdentityConnection(neuronsA,postneurons,'v',weight=0.5)
CB=IdentityConnection(neuronsB,postneurons,'v',weight=0.5)
spikes=SpikeCounter(postneurons)
M=StateMonitor(postneurons,'v',record=N/3)
run(rest_time+1.1*second,report='text')
# Figure
subplot(121) # Fig. 1C, example trace
plot(M.times/ms,M[N/3],'k')
xlim(1350,1500)
ylim(-.3,1)
xlabel('Time (ms)')
ylabel('V')
subplot(122) # Fig. 1D, duration tuning curve
count=spikes.count
# Smooth the tuning curve
window=200
rate=zeros(len(count)-window)
for i in range(0,len(count)-window):
rate[i]=mean(count[i:i+window])
plot((neuronsA.duration[window/2:-window/2]/ms)[::10],rate[::10],'k')
xlim(0,1000)
ylim(0,0.5)
xlabel('Duration (ms)')
ylabel('Spiking probability')
show()
This is a Brian script implementing a benchmark described in the following review paper:
Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschlager, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98
Benchmark 1: random network of integrate-and-fire neurons with exponential synaptic conductances
Clock-driven implementation with Euler integration (no spike time interpolation)
Brian is a simulator for spiking neural networks written in Python, developed by R. Brette and D. Goodman. http://brian.di.ens.fr
from brian import *
import time
# Time constants
taum = 20 * msecond
taue = 5 * msecond
taui = 10 * msecond
# Reversal potentials
Ee = (0. + 60.) * mvolt
Ei = (-80. + 60.) * mvolt
start_time = time.time()
eqs = Equations('''
dv/dt = (-v+ge*(Ee-v)+gi*(Ei-v))*(1./taum) : volt
dge/dt = -ge*(1./taue) : 1
dgi/dt = -gi*(1./taui) : 1
''')
# NB 1: conductances are in units of the leak conductance
# NB 2: multiplication is faster than division
P = NeuronGroup(4000, model=eqs, threshold=10 * mvolt, \
reset=0 * mvolt, refractory=5 * msecond,
order=1, compile=True)
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = 6. / 10. # excitatory synaptic weight (voltage)
wi = 67. / 10. # inhibitory synaptic weight
Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.02)
# Initialization
P.v = (randn(len(P)) * 5 - 5) * mvolt
P.ge = randn(len(P)) * 1.5 + 4
P.gi = randn(len(P)) * 12 + 20
# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)
print "Network construction time:", time.time() - start_time, "seconds"
print "Simulation running..."
start_time = time.time()
run(1 * second)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"
Example with a short-term plasticity model: neurons with regular inputs and depressing synapses.
from brian import *
tau_e = 3 * ms
taum = 10 * ms
A_SE = 250 * pA
Rm = 100 * Mohm
N = 10
eqs = '''
dx/dt=rate : 1
rate : Hz
'''
input = NeuronGroup(N, model=eqs, threshold=1., reset=0)
input.rate = linspace(5 * Hz, 30 * Hz, N)
eqs_neuron = '''
dv/dt=(Rm*i-v)/taum:volt
di/dt=-i/tau_e:amp
'''
neuron = NeuronGroup(N, model=eqs_neuron)
C = Connection(input, neuron, 'i')
C.connect_one_to_one(weight=A_SE)
stp = STP(C, taud=1 * ms, tauf=100 * ms, U=.1) # facilitation
#stp=STP(C,taud=100*ms,tauf=10*ms,U=.6) # depression
trace = StateMonitor(neuron, 'v', record=[0, N - 1])
run(1000 * ms)
subplot(211)
plot(trace.times / ms, trace[0] / mV)
title('Vm')
subplot(212)
plot(trace.times / ms, trace[N - 1] / mV)
title('Vm')
show()
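The depression and facilitation dynamics behind the STP object can be sketched as an event-based update in plain Python (a Tsodyks-Markram style sketch under stated assumptions, not Brian's implementation; published variants differ in whether facilitation is applied before or after a spike uses resources):

```python
from math import exp

# Tsodyks-Markram style sketch (NOT Brian's STP code). x: available
# resources, u: utilization. Between spikes x relaxes to 1 with taud and
# u relaxes to U with tauf; each spike releases u*x, then depresses x and
# facilitates u. Times in seconds.
def stp_weights(spike_times, taud, tauf, U):
    x, u, last = 1.0, U, 0.0
    weights = []
    for t in spike_times:
        dt = t - last
        x = 1.0 - (1.0 - x) * exp(-dt / taud)  # recovery toward 1
        u = U + (u - U) * exp(-dt / tauf)      # decay toward U
        weights.append(u * x)                  # effective synaptic efficacy
        x *= (1.0 - u)                         # depression: resources used
        u += U * (1.0 - u)                     # facilitation: utilization up
        last = t
    return weights

regular = [0.1 * i for i in range(1, 6)]       # 10 Hz regular input
depressing = stp_weights(regular, taud=0.5, tauf=0.05, U=0.6)
facilitating = stp_weights(regular, taud=0.001, tauf=0.1, U=0.1)
```

With slow recovery and fast utilization decay the successive weights shrink (depression); with fast recovery and slow utilization decay they grow (facilitation), matching the two parameter sets in the script above.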
The modelfitting library is used for fitting a neuron model to data.
The library provides a single function modelfitting(), which accepts the model and the data as arguments and returns the model parameters that best fit the data. The model is a spiking neuron model, whereas the data consist of both an input (a time-varying signal, for example an injected current) and a set of spike trains. Only spikes are considered for the fitness. Several target spike trains can be specified in order to fit several data sets independently. In this case, the modelfitting() function returns as many parameter sets as there are target spike trains.
The model is defined as any spiking neuron model in Brian, by giving the equations as mathematical equations, and the reset and threshold values. The free parameters of the model that shall be fitted by the library are also specified. The data is specified by the input (a vector containing the time-varying injected current), the timestep of the input, and the data as a list of spike times.
Fitting a spiking neuron model to electrophysiological data is performed by maximizing a fitness function measuring how well the model matches the data. This function is the gamma factor, which is based on the number of coincidences between the model spikes and the experimentally-recorded spikes: the number of spikes in the experimental train for which there is at least one spike in the model train within plus or minus delta, where delta is the size of the temporal window (typically a few milliseconds). For more details on the gamma factor, see Jolivet et al. 2008, "A benchmark test for a quantitative assessment of simple neuron models", J. Neurosci. Methods.
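The coincidence count at the heart of the gamma factor can be sketched directly from the definition above (an illustration, not Brian's implementation; spike times are hypothetical):

```python
from bisect import bisect_left

# Coincidence count from the definition in the text (illustration only):
# a data spike counts when the model train has at least one spike within
# +/- delta of it.
def coincidences(model_spikes, data_spikes, delta):
    model_spikes = sorted(model_spikes)
    count = 0
    for t in data_spikes:
        i = bisect_left(model_spikes, t - delta)  # first candidate >= t-delta
        if i < len(model_spikes) and model_spikes[i] <= t + delta:
            count += 1
    return count

# hypothetical spike times in seconds, delta = 2 ms:
# 2 of the 3 data spikes have a model spike within the window
coincidences([0.010, 0.052, 0.107], [0.011, 0.060, 0.106], delta=0.002)
```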
The optimization procedure is performed by an optimization algorithm. The optimization toolbox used by modelfitting is implemented in the external Python package Playdoh, which also supports distributed and parallel optimization across CPUs and machines. Different optimization algorithms are supported; the default one is CMAES. All these algorithms require the evaluation of the fitness function for a large number of parameter sets. Each iteration of the algorithm involves the simulation of a large number of neurons (one neuron corresponding to one parameter set) as well as the computation of the gamma factor for each neuron. The quality of the result depends on the number of neurons used, which is specified in the modelfitting() function.
Playdoh supports the use of graphics processing units (GPUs) to accelerate the convergence of the algorithm. If multiple cores are detected, the library will use all of them by default. Also, if a CUDA-enabled GPU is present on the system, and if PyCUDA is installed, the library will automatically use the GPU by default. In addition, several computers can be networked over IP, see Clusters.
To import the library, use
from brian.library.modelfitting import *
To fit the parameters of a neuron model with respect to some data, use the modelfitting() function
results = modelfitting(model = equations, reset = 0, threshold = 1,
data = spikes,
input = input, dt = .1*ms,
popsize = 1000, maxiter = 10,
R = [1.0e9, 1.0e10], tau = [1*ms, 50*ms])
print_table(results)
Warning
Windows users should read the section Important note for Windows users.
The model is defined by equations (an Equations object), reset (a scalar value or a set of equations as a string) and threshold (a scalar value or a set of equations as a string).
The target spike trains are defined by data (a list of pairs (neuron index, spike time) or a list of spike times if there is only one target spike train).
The input is specified with input (a vector containing the time-varying signal) and dt (the time step of the signal). The input variable should be I in the equations, although the input variable name can be specified with input_var.
The number of particles per target train used in the optimization algorithm is specified with popsize. The total number of neurons is popsize multiplied by the number of target spike trains. The number of iterations in the algorithm is specified with maxiter.
Each free parameter of the model that shall be fitted is defined by two values
param_name = [min, max]
param_name should correspond to the parameter name in the model equations. min and max specify the initial interval from which the parameter values will be uniformly sampled at the beginning of the optimization algorithm. A boundary interval can also be specified by giving four values
param_name = [bound_min, min, max, bound_max]
The parameter values will be forced to stay inside the interval [bound_min, bound_max] during the optimization.
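The clipping behaviour can be sketched in one line (an illustration of the stated rule, not Playdoh's code; the bounds are hypothetical):

```python
# Clipping rule for bounded parameters: values pushed outside
# [bound_min, bound_max] during optimization are forced back inside.
def clip_param(value, bound_min, bound_max):
    return min(max(value, bound_min), bound_max)

# hypothetical bounds for tau = [0.1*ms, 1*ms, 50*ms, 100*ms], in seconds
clip_param(150e-3, 0.1e-3, 100e-3)  # above bound_max: clipped to 100 ms
clip_param(20e-3, 0.1e-3, 100e-3)   # inside bounds: unchanged
```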
The complete list of arguments can be found in the reference section of the modelfitting() function.
The best parameters and the corresponding best fitness values found by the optimization procedure are returned in the OptimizationResult object results.
The model fitting library uses the Python multiprocessing package to distribute fitting across processors in a single computer or across multiple computers. However, the Windows version of multiprocessing has a limitation, described in the Python multiprocessing programming guidelines, which means that a script like this:
from brian.library.modelfitting import *
...
results = modelfitting(...)
will crash, going into an endless loop and creating hundreds of Python processes that have to be shut down by hand. Instead, you have to do this:
from brian.library.modelfitting import *
...
if __name__=='__main__':
    results = modelfitting(...)
The model fitting package can be used with a cluster of computers connected over IP. Every computer must have Brian and Playdoh installed and must run the Playdoh server: see the Playdoh documentation. Then, you can call the modelfitting() function with the machines keyword, which is the list of the IP addresses of the machines to use in parallel for the fitting procedure. You must also specify the unit_type keyword, either 'CPU' or 'GPU', to indicate whether you want to use CPUs or GPUs on these computers. You cannot mix CPUs and GPUs in the same optimization.
To connect several machines via IP, pass a list of host names or IP addresses as strings to the machines keyword of the modelfitting() function. To specify a specific port, use a tuple (IP, port) instead of a string. You can also specify a default port in the Playdoh user preferences, see the Playdoh documentation.
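For instance (the host name, address and port below are made up):

```python
# A plain string addresses a Playdoh server on its default port; a
# (host, port) tuple selects a specific port on that machine
machines = ['bobs-machine.university.com',   # default port
            ('192.168.0.42', 2718)]          # explicit port
```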
You can specify an authentication string on all the computers running the Playdoh server to secure communications. See the Playdoh documentation.
The following script launches a fitting procedure in parallel on two machines:
from brian import loadtxt, ms, Equations
from brian.library.modelfitting import *
if __name__ == '__main__':
    # List of machine host names or IP addresses
    machines = ['bobs-machine.university.com',
                'jims-machine.university.com']
    equations = Equations('''
        dV/dt=(R*I-V)/tau : 1
        I : 1
        R : 1
        tau : second
        ''')
    input = loadtxt('current.txt')
    spikes = loadtxt('spikes.txt')
    results = modelfitting(model = equations,
                           reset = 0,
                           threshold = 1,
                           data = spikes,
                           input = input,
                           dt = .1*ms,
                           popsize = 1000,
                           maxiter = 3,
                           delta = 4*ms,
                           unit_type = 'CPU',
                           machines = machines,
                           R = [1.0e9, 9.0e9],
                           tau = [10*ms, 40*ms],
                           refractory = [0*ms, 10*ms])
    print_table(results)
The two remote machines would run the Playdoh server.
Example of a simple auditory nerve fibre model with Brian hears.
from brian import *
from brian.hears import *
sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)
sound = sound1+sound2
sound = sound.ramp()
cf = erbspace(20*Hz, 20*kHz, 3000)
cochlea = Gammatone(sound, cf)
# Half-wave rectification and compression [x]^(1/3)
ihc = FunctionFilterbank(cochlea, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
M = SpikeMonitor(anf)
run(sound.duration)
raster_plot(M)
show()
Fig. 4 from:
Rossant C, Leijon S, Magnusson AK, Brette R (2011). “Sensitivity of noisy neurons to coincident inputs”. Journal of Neuroscience, 31(47).
Two spikes, either separated in time or coincident, are injected into a noisy balanced leaky integrate-and-fire neuron. The PSTH of the neuron in response to these inputs is calculated, along with the number of extra spikes in each case. This number is higher for coincident spikes, showing the sensitivity of a noisy neuron to coincident inputs.
from brian import *
import matplotlib.patches as patches
import matplotlib.path as path
def histo(bins, cc, ax):
    # get the corners of the rectangles for the histogram
    left = array(bins[:-1])
    right = array(bins[1:])
    bottom = zeros(len(left))
    top = bottom + cc
    # we need a (numrects x numsides x 2) numpy array for the path helper
    # function to build a compound path
    XY = array([[left, left, right, right], [bottom, top, top, bottom]]).T
    # get the Path object
    barpath = path.Path.make_compound_path_from_polys(XY)
    # make a patch out of it
    patch = patches.PathPatch(barpath, facecolor='blue', edgecolor='gray',
                              alpha=0.8)
    ax.add_patch(patch)
    # update the view limits
    ax.set_xlim(left[0], right[-1])
    ax.set_ylim(bottom.min(), top.max())
# neuron parameters
theta = -55*mV
vmean = -65*mV
taum = 5*ms
taue = 3*ms
taun = 15*ms
sigma = 4*mV
# input times
t1 = 100*ms
t2 = 120*ms
# simulation duration
dur = 200*ms
# number of neurons per group
N = 10000
bin = 2*ms
# EPSP size
int_EPSP=taue
int_EPSP2=taue*taue/(2*(taum+taue))
max_EPSP=(taum/taue)**(taum/(taue-taum))
we = 3.0*mV/max_EPSP
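The normalization factor max_EPSP above is the closed-form peak of the unit EPSP obtained by solving dV0/dt = (-V0 + psp)/taum with psp(t) = exp(-t/taue), i.e. V0(t) = taue/(taue - taum) * (exp(-t/taue) - exp(-t/taum)). A quick numerical check of this (a sketch in plain floats with times in ms, using the same taum and taue as above):

```python
import numpy as np

taum, taue = 5.0, 3.0    # membrane and synaptic time constants (ms)

# Unit EPSP: response of V0 to a unit jump in psp at t = 0
t = np.linspace(0.0, 50.0, 100001)
V0 = taue / (taue - taum) * (np.exp(-t / taue) - np.exp(-t / taum))

# Closed-form peak used in the script to normalize the EPSP amplitude
max_EPSP = (taum / taue) ** (taum / (taue - taum))
```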
# model equations
eqs = '''
V=V0+noise : volt
dV0/dt=(-V0+psp)/taum : volt
dpsp/dt=-psp/taue : volt
dnoise/dt=(vmean-noise)/taun+sigma*(2./taun)**.5*xi : volt
'''
threshold = 'V>theta'
reset = vmean
# initialization of the NeuronGroup
reinit_default_clock()
group = NeuronGroup(2*N, model=eqs, reset=reset, threshold=threshold)
group.V0 = group.psp = 0*volt
group.noise = vmean + sigma * randn(2*N)
# input spikes
input_spikes = [(0, t1), (0, t2), (1, t1)]
input = SpikeGeneratorGroup(2, array(input_spikes))
# connections
C = Connection(input, group, 'psp')
C.connect_full(input[0], group[:N], weight=we)
C.connect_full(input[1], group[N:], weight=2*we)
# monitors
prM1 = PopulationRateMonitor(group[:N], bin=bin)
prM2 = PopulationRateMonitor(group[N:], bin=bin)
# launch simulation
run(dur)
# PSTH plot
figure(figsize=(10,10))
prMs = [prM1, prM2]
for i in [0, 1]:
    prM = prMs[i]
    r = prM.rate[:-1]*bin
    m = mean(r[:len(r)//2])
    ax = subplot(211+i)
    histo(prM.times, r, ax)
    plot([0, dur], [m, m], '--r')
    title("%.2f extra spikes" % sum(r[int(t1/bin):int((t2+20*ms)/bin)] - m))
    xlim(.05, .2)
    ylim(0, .125)
show()