PKE�B��4�(�(brian-1.4.1/hears.html Brian hears — Brian 1.4.1 documentation

# Brian hears

Brian hears is an auditory modelling library for Python. It is part of the neural network simulator package Brian, but can also be used on its own. To download Brian hears, simply download Brian: Brian hears is included as part of the package.

Brian hears is primarily designed for generating and manipulating sounds, and applying large banks of filters. We import the package by writing:

from brian import *
from brian.hears import *


Then, for example, to generate a tone or white noise we would write:

sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)


These sounds can then be manipulated in various ways, for example:

sound = sound1+sound2
sound = sound.ramp()


If you have the pygame package installed, you can also play these sounds:

sound.play()


We can filter these sounds through a bank of 3000 gammatone filters covering the human auditory range as follows:

cf = erbspace(20*Hz, 20*kHz, 3000)
fb = Gammatone(sound, cf)
output = fb.process()


The output of this is an array of the filtered sound, one row per sample and one column per channel. Alternatively, if we’re interested in modelling auditory nerve fibres, we could feed the output of this filterbank directly into a group of neurons defined with Brian:

# Half-wave rectification and compression [x]^(1/3)
ihc = FunctionFilterbank(fb, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)


This model gives a raster of spikes across the auditory nerve fibre channels.

The human cochlea applies the equivalent of 3000 auditory filters, which poses a technical problem for modellers that this package is designed to address: at a typical sample rate, the output of 3000 filters would saturate the computer’s RAM in a few seconds. To deal with this, we use online computation, that is, we only ever keep in memory the output of the filters for a relatively short duration (say, the most recent 20 ms), do our modelling with these values, and then discard them. Although this requires that some models be rewritten for online rather than offline computation, it allows us to easily handle models with very large numbers of channels. 3000 or 6000 channels for human monaural or binaural processing is straightforward, and even much larger banks of filters can be used (for example, around 30,000 in Goodman DFM, Brette R (2010). Spike-timing-based computation in sound localization. PLoS Comput. Biol. 6(11): e1000993. doi:10.1371/journal.pcbi.1000993). Techniques for online computation are discussed below in the section Online computation.
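To get a sense of the scale involved, here is a quick back-of-envelope calculation in plain Python (the 44.1 kHz sample rate and 8-byte float64 sample size are our illustrative assumptions, not values fixed by Brian hears):

```python
# Back-of-envelope RAM cost of storing a full filterbank output offline.
nchannels = 3000          # one channel per auditory filter
samplerate = 44100        # samples per second (assumed)
bytes_per_sample = 8      # float64 (assumed)

bytes_per_second = nchannels * samplerate * bytes_per_sample
gb_per_second = bytes_per_second / float(2 ** 30)
print('%.2f GB of RAM per second of audio' % gb_per_second)
```

At roughly a gigabyte per simulated second, storing minutes of output offline is clearly impractical, which is what motivates the buffered online approach.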

Brian hears consists of classes and functions for defining sounds, filter chains, cochlear models, neuron models and head-related transfer functions. These classes are designed to be modular and easily extendable. Typically, a model will consist of a chain starting with a sound which is plugged into a chain of filter banks, which are then plugged into a neuron model.

The two main classes in Brian hears are Sound and Filterbank, which function very similarly. Each consists of multiple channels (typically just 1 or 2 in the case of sounds, and many in the case of filterbanks, but in principle any number of channels is possible for either). The difference is that a filterbank has an input source, which can be either a sound or another filterbank.

All scripts using Brian hears should start by importing the Brian and Brian hears packages as follows:

from brian import *
from brian.hears import *


The reference documentation for Brian hears covers everything in this overview in detail, and more; there is also a list of examples of using Brian hears.

## Sounds

Sounds can be loaded from a WAV or AIFF file with the loadsound() function (and saved with the savesound() function or Sound.save() method), or by initialising with a filename:

sound = loadsound('test.wav')
sound = Sound('test.aif')
sound.save('test.wav')


Various standard types of sounds can also be constructed, e.g. pure tones, white noise, clicks and silence:

sound = tone(1*kHz, 1*second)
sound = whitenoise(1*second)
sound = click(1*ms)
sound = silence(1*second)


You can pass a function of time or an array to initialise a sound:

# Equivalent to Sound.tone
sound = Sound(lambda t:sin(50*Hz*2*pi*t), duration=1*second)

# Equivalent to Sound.whitenoise
sound = Sound(randn(int(1*second*44.1*kHz)), samplerate=44.1*kHz)


Multiple channel sounds can be passed as a list or tuple of filenames, arrays or Sound objects:

sound = Sound(('left.wav', 'right.wav'))
sound = Sound((randn(44100), randn(44100)), samplerate=44.1*kHz)
sound = Sound((Sound.tone(1*kHz, 1*second),
               Sound.tone(2*kHz, 1*second)))


A multi-channel sound is also a numpy array of shape (nsamples, nchannels), and can be initialised directly from such an array (or converted to a standard numpy array):

sound = Sound(randn(44100, 2), samplerate=44.1*kHz)
arr = array(sound)


Sounds can be added and multiplied:

sound = Sound.tone(1*kHz, 1*second)+0.1*Sound.whitenoise(1*second)


For more details on combining and operating on sounds, including shifting them in time, repeating them, resampling them, ramping them, finding and setting intensities, plotting spectrograms, etc., see Sound.

Sounds can be played using the play() function or Sound.play() method:

play(sound)
sound.play()


Sequences of sounds can be played as:

play(sound1, sound2, sound3)


The number of channels in a sound can be found using the nchannels attribute, and individual channels can be extracted using the Sound.channel() method, or using the left and right attributes in the case of stereo sounds:

print sound.nchannels
print amax(abs(sound.left-sound.channel(0)))


As an example of using this, the following swaps the channels in a stereo sound:

sound = Sound('test_stereo.wav')
swappedsound = Sound((sound.right, sound.left))
swappedsound.play()


The level of the sound can be computed and changed with the sound.level attribute. Levels are returned in dB which is a special unit in Brian hears. For example, 10*dB+10 will raise an error because 10 does not have units of dB. The multiplicative gain of a value in dB can be computed with the function gain(level). All dB values are measured as RMS dB SPL assuming that the values of the sound object are measured in Pascals. Some examples:

sound = whitenoise(100*ms)
print sound.level
sound.level = 60*dB
sound.level += 10*dB
sound *= gain(-10*dB)
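
For reference, the level and gain arithmetic can be reproduced in plain NumPy. The formulas below are the standard audio definitions (RMS level re 20 µPa, gain = 10^(dB/20)); this is a sketch for intuition, not Brian hears’ actual implementation:

```python
import numpy as np

P0 = 2e-5  # standard reference pressure in Pascals (20 uPa)

def level_dB_SPL(samples):
    # RMS dB SPL, assuming samples are in Pascals
    rms = np.sqrt(np.mean(np.asarray(samples) ** 2))
    return 20 * np.log10(rms / P0)

def gain_factor(level_dB):
    # multiplicative gain corresponding to a change of level_dB decibels
    return 10.0 ** (level_dB / 20.0)

x = 0.1 * np.sin(2 * np.pi * np.linspace(0, 1, 44100))  # a 0.1 Pa amplitude tone
print('%.1f dB SPL' % level_dB_SPL(x))
print(gain_factor(-10.0))  # multiplying the samples by this lowers the level 10 dB
```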


## Filter chains

The standard way to set up a model based on filterbanks is to start with a sound and then construct a chain of filterbanks that modify it. For example, a common model of cochlear filtering applies a bank of gammatone filters, then half-wave rectifies and compresses the result (for example, with a 1/3 power law). This can be achieved in Brian hears as follows (for 3000 channels in the human hearing range from 20 Hz to 20 kHz):

cfmin, cfmax, cfN = 20*Hz, 20*kHz, 3000
cf = erbspace(cfmin, cfmax, cfN)
sound = Sound('test.wav')
gfb = Gammatone(sound, cf)
ihc = FunctionFilterbank(gfb, lambda x: clip(x, 0, Inf)**(1.0/3.0))


The erbspace() function constructs an array of centre frequencies on the ERB scale. The Gammatone(source, cf) class creates a bank of gammatone filters with inputs coming from source and centre frequencies given by the array cf. FunctionFilterbank(source, func) creates a bank of filters that applies the given function func to the inputs in source.
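For intuition, the ERB spacing can be sketched in plain NumPy using the commonly cited Glasberg & Moore (1990) ERB-rate formula; the constants here are illustrative, and the actual Brian hears implementation of erbspace() may differ in detail:

```python
import numpy as np

# ERB-rate scale (Glasberg & Moore 1990): frequencies equally spaced on this
# scale are spaced roughly like the filters along the human cochlea.
def erb_rate(f_hz):
    return 21.4 * np.log10(1 + 4.37 * f_hz / 1000.0)

def inverse_erb_rate(r):
    return (10 ** (r / 21.4) - 1) * 1000.0 / 4.37

def erbspace_sketch(low_hz, high_hz, n):
    # n centre frequencies uniformly spaced on the ERB-rate scale
    return inverse_erb_rate(np.linspace(erb_rate(low_hz), erb_rate(high_hz), n))

cf = erbspace_sketch(20.0, 20000.0, 3000)
print(cf[0], cf[-1])  # endpoints match the requested range
```

Note that the spacing in Hz between neighbouring centre frequencies grows with frequency, which is the point of the ERB scale.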

Filterbanks can be added and multiplied, for example for creating a linear and nonlinear path, e.g.:

sum_path_fb = 0.1*linear_path_fb+0.2*nonlinear_path_fb


A filterbank’s input must have either a single channel or the same number of channels as the filterbank itself. In the former case, the single channel is duplicated for each of the output channels. However, you might want to apply gammatone filters to a stereo sound, for example, and in this case it’s not clear how to duplicate the channels, so you have to specify it explicitly. You can do this using the Repeat, Tile, Join and Interleave filterbanks. For example, if the input is a stereo sound with channels LR then you can get an output with channels LLLRRR or LRLRLR by writing (respectively):

fb = Repeat(sound, 3)
fb = Tile(sound, 3)


To combine multiple filterbanks into one, you can either join them in series or interleave them, as follows:

fb = Join(source1, source2)
fb = Interleave(source1, source2)


For a more general (but more complicated) approach, see RestructureFilterbank.

Two of the most important generic filterbanks (upon which many of the others are based) are LinearFilterbank and FIRFilterbank. The former is a generic digital filter for both FIR and IIR filters; the latter is specifically for FIR filters. FIR filters can also be implemented with LinearFilterbank, but FIRFilterbank’s implementation is optimised using FFTs, which can often be hundreds of times faster, particularly for long impulse responses. IIR filterbanks can be designed using IIRFilterbank, which is based on the syntax of scipy’s iirdesign function.
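The FFT trick rests on the standard identity that linear convolution can be computed via zero-padded FFTs in O(n log n) rather than O(n·m) time. A single-channel plain-NumPy sketch (this demonstrates the identity, not Brian hears’ actual implementation):

```python
import numpy as np

def fft_fir(x, h):
    # linear convolution of x with FIR impulse response h, via zero-padded FFTs
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

rng = np.random.RandomState(0)
x = rng.randn(4096)   # input signal
h = rng.randn(512)    # FIR impulse response
direct = np.convolve(x, h)   # direct O(n*m) convolution
fast = fft_fir(x, h)         # FFT-based, same result
print(np.allclose(direct, fast))
```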

You can change the input source to a Filterbank by modifying its source attribute, e.g. to change the input sound of a filterbank fb you might do:

fb.source = newsound


Note that the new source should have the same number of channels.

You can implement control paths (using the output of one filter chain path to modify the parameters of another filter chain path) using ControlFilterbank; see the reference documentation for more details and for examples of this in action.

## Connecting with Brian

To create spiking neuron models based on filter chains, you use the FilterbankGroup class. This acts exactly like a standard Brian NeuronGroup except that you give a source filterbank and choose a state variable in the target equations for the output of the filterbank. A simple auditory nerve fibre model would take the inner hair cell model from earlier, and feed it into a noisy leaky integrate-and-fire model as follows:

# Inner hair cell model as before
cfmin, cfmax, cfN = 20*Hz, 20*kHz, 3000
cf = erbspace(cfmin, cfmax, cfN)
sound = Sound.whitenoise(100*ms)
gfb = Gammatone(sound, cf)
ihc = FunctionFilterbank(gfb, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
G = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
# Run, and raster plot of the spikes
M = SpikeMonitor(G)
run(sound.duration)
raster_plot(M)
show()


On a 2 GHz laptop, this takes around 6 seconds to compute.

## Plotting

Often you want to use log-scaled axes for frequency in plots, but the built-in matplotlib axis labelling for log-scaled axes doesn’t work well for frequencies. We provide two functions (log_frequency_xaxis_labels() and log_frequency_yaxis_labels()) to automatically set useful axis labels. For example:

cf = erbspace(100*Hz, 10*kHz)
...
semilogx(cf, response)
axis('tight')
log_frequency_xaxis_labels()


## Online computation

Typically in auditory modelling, we precompute the entire output of each channel of the filterbank (“offline computation”), and then work with that. This is straightforward, but puts a severe limit on the number of channels we can use or the length of time we can work with (otherwise the RAM would be quickly exhausted). Brian hears allows us to use a very large number of channels in filterbanks, but at the cost of only storing the output of the filterbanks for a relatively short period of time (“online computation”). This requires a slight change in the way we use the output of the filterbanks, but is actually not too difficult. For example, suppose we wanted to compute the vector of RMS values for each channel of the output of the filterbank. Traditionally, or if we just use the syntax output = fb.process() in Brian hears, we have an array output of shape (nsamples, nchannels). We could compute the vector of RMS values as:

rms = sqrt(mean(output**2, axis=0))


To do the same thing with online computation, we simply store a vector of the running sum of squares, and update it for each buffered segment as it is computed. At the end of the processing, we divide the sum of squares by the number of samples and take the square root.

The Filterbank.process() method allows us to pass an optional function f(output, running) of two arguments. In this case, process() will first call running = f(output, 0) for the first buffered segment output. It will then call running = f(output, running) for each subsequent segment. In other words, it will “accumulate” the output of f, passing the output of each call to the subsequent call. To compute the vector of RMS values then, we simply do:

def sum_of_squares(input, running):
    return running + sum(input**2, axis=0)

rms = sqrt(fb.process(sum_of_squares)/nsamples)
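
To see that the online accumulation gives the same answer as the offline computation, here is a plain-NumPy sketch of the same pattern (the buffer size and the fake filterbank output are arbitrary illustrations, no Brian hears required):

```python
import numpy as np

def sum_of_squares(segment, running):
    # accumulator passed between buffered segments, as in fb.process(func)
    return running + np.sum(segment ** 2, axis=0)

rng = np.random.RandomState(42)
output = rng.randn(44100, 8)   # stand-in for a full offline filterbank output

# Offline: everything in memory at once
offline_rms = np.sqrt(np.mean(output ** 2, axis=0))

# Online: process one buffered segment at a time
buffersize = 1024
running = 0
for start in range(0, output.shape[0], buffersize):
    running = sum_of_squares(output[start:start + buffersize], running)
online_rms = np.sqrt(running / output.shape[0])

print(np.allclose(offline_rms, online_rms))
```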


If the computation you wish to perform is more complicated than can be achieved with the process() method, you can derive a class from Filterbank (see that class’ reference documentation for more details on this).

## Buffering interface

The Sound, OnlineSound and Filterbank classes (and all classes derived from them) all implement the same buffering mechanism, whose purpose is to allow efficient processing of multiple channels in buffers. Rather than precomputing the application of filters to all channels (which for large numbers of channels or long sounds would not fit in memory), we process small chunks at a time. The entire design of these classes is based on the idea of buffering, as defined by the base class Bufferable. Each class has two methods: buffer_init() to initialise the buffer, and buffer_fetch(start, end) to fetch the portion of the buffer from samples with indices from start to end (not including end, as standard in Python). The buffer_fetch(start, end) method should return a 2D array of shape (end-start, nchannels) with the buffered values.

From the user’s point of view, all you need to do, having set up a chain of Sound and Filterbank objects, is to call buffer_fetch(start, end) repeatedly. If the output of a Filterbank is being plugged into a FilterbankGroup object, everything is handled automatically. For cases where the number of channels is small or the length of the input source is short, you can use the Filterbank.fetch(duration) method to automatically handle the initialisation and repeated application of buffer_fetch.

To extend Filterbank, it is often sufficient just to implement the buffer_apply(input) method. See the documentation for Filterbank for more details.
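To make the two-method interface concrete, here is a hypothetical minimal source written in plain NumPy. It follows the buffer_init()/buffer_fetch(start, end) pattern described above; the real Bufferable base class does considerably more (caching, chaining, etc.), and ToneSource is our own illustrative name:

```python
import numpy as np

class ToneSource(object):
    """Toy multi-channel source: one pure tone per channel."""
    def __init__(self, freqs_hz, samplerate=44100.0):
        self.freqs = np.asarray(freqs_hz, dtype=float)
        self.samplerate = samplerate

    def buffer_init(self):
        pass  # no state to reset in this toy example

    def buffer_fetch(self, start, end):
        # return shape (end-start, nchannels), as the interface requires
        t = np.arange(start, end) / self.samplerate
        return np.sin(2 * np.pi * t[:, None] * self.freqs)

source = ToneSource([440.0, 880.0])
source.buffer_init()
buf = source.buffer_fetch(0, 256)
print(buf.shape)  # (256, 2): samples x channels
```

Because buffer_fetch is indexed by absolute sample position, consecutive fetches join up seamlessly into one continuous signal.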

## Library

Brian hears comes with a package of predefined filter classes to be used as basic building blocks. All of them are implemented as filterbanks.

First, a series of standard filters widely used in audio processing are available:

| Class | Description | Example |
| --- | --- | --- |
| IIRFilterbank | Bank of lowpass, highpass, bandpass or bandstop filters of type Chebyshev, elliptic, etc. | IIRfilterbank (hears) |
| Butterworth | Bank of lowpass, highpass, bandpass or bandstop Butterworth filters | butterworth (hears) |
| LowPass | Bank of lowpass filters of order 1 | cochleagram (hears) |

Second, the library provides linear auditory filters developed to model the middle ear transfer function and the frequency analysis of the cochlea:

| Class | Description | Example |
| --- | --- | --- |
| MiddleEar | Linear bandpass filter, based on middle-ear frequency response properties | tan_carney_simple_test (hears/tan_carney_2003) |
| Gammatone | Bank of IIR gammatone filters (based on Slaney implementation) | gammatone (hears) |
| ApproximateGammatone | Bank of IIR gammatone filters (based on Hohmann implementation) | approximate_gammatone (hears) |
| LogGammachirp | Bank of IIR gammachirp filters with logarithmic sweep (based on Irino implementation) | log_gammachirp (hears) |
| LinearGammachirp | Bank of FIR chirp filters with linear sweep and gamma envelope | linear_gammachirp (hears) |
| LinearGaborchirp | Bank of FIR chirp filters with linear sweep and Gaussian envelope | |

Finally, Brian hears comes with a series of complex nonlinear cochlear models, developed to capture nonlinear effects such as level-dependent filter bandwidth, two-tone suppression, level-dependent peak position, etc.

| Class | Description | Example |
| --- | --- | --- |
| DRNL | Dual resonance nonlinear filter as described in Lopez-Poveda and Meddis, JASA 2001 | drnl (hears) |
| DCGC | Compressive gammachirp auditory filter as described in Irino and Patterson, JASA 2001 | dcgc (hears) |
| TanCarney | Auditory phenomenological model as described in Tan and Carney, JASA 2003 | tan_carney_simple_test (hears/tan_carney_2003) |
| ZhangSynapse | Model of an inner hair cell – auditory nerve synapse (Zhang et al., JASA 2001) | tan_carney_simple_test (hears/tan_carney_2003) |


# Example: tan_carney_simple_test (hears/tan_carney_2003)

Figs. 1 and 3 (the synapse output, without spikes/refractoriness) should reproduce the output of the AN3_test_tone.m and AN3_test_click.m scripts, available in the code accompanying the paper Tan & Carney (2003). This MATLAB code is available from http://www.urmc.rochester.edu/labs/Carney-Lab/publications/auditory-models.cfm

Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.

import numpy as np
import matplotlib.pyplot as plt

from brian.stdunits import kHz, Hz, ms
from brian.network import Network
from brian.monitor import StateMonitor, SpikeMonitor
from brian.globalprefs import set_global_preferences

#set_global_preferences(useweave=True)
from brian.hears import (Sound, get_samplerate, set_default_samplerate, tone,
                         click, silence, dB, TanCarney, MiddleEar, ZhangSynapse)
from brian.clock import reinit_default_clock

set_default_samplerate(50*kHz)
sample_length = 1 / get_samplerate(None)
cf = 1000 * Hz

print 'Testing click response'
duration = 25*ms
levels = [40, 60, 80, 100, 120]
# a click of two samples
tones = Sound([Sound.sequence([click(sample_length*2, peak=level*dB),
                               silence(duration=duration - sample_length)])
               for level in levels])
ihc = TanCarney(MiddleEar(tones), [cf] * len(levels), update_interval=1)
syn = ZhangSynapse(ihc, cf)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
R_mon = StateMonitor(syn, 'R', record=True, clock=syn.clock)
spike_mon = SpikeMonitor(syn)
net = Network(syn, s_mon, R_mon, spike_mon)
net.run(duration * 1.5)
for idx, level in enumerate(levels):
    plt.figure(1)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(s_mon.times / ms, s_mon[idx])
    plt.xlim(0, 25)
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(15, np.nanmax(s_mon[idx])/2., 'Peak SPL=%s SPL' % str(level*dB))
    ymin, ymax = plt.ylim()
    if idx == 0:
        plt.title('Click responses')

    plt.figure(2)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(R_mon.times / ms, R_mon[idx])
    plt.xlabel('Time (msec)')
    plt.text(15, np.nanmax(s_mon[idx])/2., 'Peak SPL=%s SPL' % str(level*dB))
    plt.ylim(ymin, ymax)
    if idx == 0:
        plt.title('Click responses (with spikes and refractoriness)')
    plt.plot(spike_mon.spiketimes[idx] / ms,
             np.ones(len(spike_mon.spiketimes[idx])) * np.nanmax(R_mon[idx]), 'rx')

print 'Testing tone response'
reinit_default_clock()
duration = 60*ms
levels = [0, 20, 40, 60, 80]
tones = Sound([Sound.sequence([tone(cf, duration).atlevel(level*dB).ramp(when='both',
                                                                         duration=10*ms,
                                                                         inplace=False),
                               silence(duration=duration/2)])
               for level in levels])
ihc = TanCarney(MiddleEar(tones), [cf] * len(levels), update_interval=1)
syn = ZhangSynapse(ihc, cf)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
R_mon = StateMonitor(syn, 'R', record=True, clock=syn.clock)
spike_mon = SpikeMonitor(syn)
net = Network(syn, s_mon, R_mon, spike_mon)
net.run(duration * 1.5)
for idx, level in enumerate(levels):
    plt.figure(3)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(s_mon.times / ms, s_mon[idx])
    plt.xlim(0, 120)
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(1.25 * duration/ms, np.nanmax(s_mon[idx])/2., '%s SPL' % str(level*dB))
    ymin, ymax = plt.ylim()
    if idx == 0:
        plt.title('CF=%.0f Hz - Response to Tone at CF' % cf)

    plt.figure(4)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(R_mon.times / ms, R_mon[idx])
    plt.xlabel('Time (msec)')
    plt.text(1.25 * duration/ms, np.nanmax(R_mon[idx])/2., '%s SPL' % str(level*dB))
    plt.ylim(ymin, ymax)
    if idx == 0:
        plt.title('CF=%.0f Hz - Response to Tone at CF (with spikes and refractoriness)' % cf)
    plt.plot(spike_mon.spiketimes[idx] / ms,
             np.ones(len(spike_mon.spiketimes[idx])) * np.nanmax(R_mon[idx]), 'rx')

plt.show()



# Example: spikes_io (misc)

This script demonstrates how to save/load spikes in AER format from inside Brian.

from brian import *

####################### SAVING #########################

# First we need to generate some spikes
N = 1000
g = PoissonGroup(N, 200*Hz)

# And set up a monitor to record those spikes to the disk
Maer = AERSpikeMonitor(g, './dummy.aedat')

# Now we can run
run(100*ms)

# This happens automatically when the script ends, but here we need to
# close the file explicitly because we re-use it from within the same script
Maer.close()

clear(all = True)
reinit_default_clock()

# Now we can re-load the spikes
addr, timestamps = load_aer('./dummy.aedat')

# Feed them to a SpikeGeneratorGroup
group = SpikeGeneratorGroup(N, (addr, timestamps))

# The group can now be used as any other, here we choose to monitor
# the spikes
newM = SpikeMonitor(group, record = True)

run(100*ms)

# And plot the result
raster_plot(newM)
show()



# Example: short_term_plasticity2 (plasticity)

Network (CUBA) with short-term synaptic plasticity for excitatory synapses (Depressing at long timescales, facilitating at short timescales)

from brian import *
from time import time

eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''

P = NeuronGroup(4000, model=eqs, threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + rand(4000) * 10 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=.02)
stp = STP(Ce, taud=200 * ms, tauf=20 * ms, U=.2)
M = SpikeMonitor(P)
rate = PopulationRateMonitor(P)
t1 = time()
run(1 * second)
t2 = time()
print "Simulation time:", t2 - t1, "s"
print M.nspikes, "spikes"
subplot(211)
raster_plot(M)
subplot(212)
plot(rate.times / ms, rate.smooth_rate(5 * ms))
show()



# Example: CUBA (synapses)

CUBA example with delays.

Benchmarks for this script with different connection structures:

- Connection (no delay): 3.5 s
- DelayConnection: 5.7 s
- Synapses (with precomputed offsets): 6.6 s (6.9 s)
- Synapses with weave: 6.4 s
- Synapses with zero delays: 5.2 s

from brian import *
import time

start_time = time.time()
taum = 20 * ms
taue = 5 * ms
taui = 10 * ms
Vt = -50 * mV
Vr = -60 * mV
El = -49 * mV

eqs = Equations('''
dv/dt  = (ge+gi-(v-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')

P = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr, refractory=5 * ms)
P.v = Vr
P.ge = 0 * mV
P.gi = 0 * mV

Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight (voltage)
wi = (-20 * 4.5 / 10) * mV # inhibitory synaptic weight

Se = Synapses(Pe, P, model = 'w : 1', pre = 'ge += we')
Si = Synapses(Pi, P, model = 'w : 1', pre = 'gi += wi')
Se[:,:]=0.02
Si[:,:]=0.02
Se.delay='rand()*ms'
Si.delay='rand()*ms'

P.v = Vr + rand(len(P)) * (Vt - Vr)

# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)
# A population rate monitor
M = PopulationRateMonitor(P)

print "Network construction time:", time.time() - start_time, "seconds"
print len(P), "neurons in the network"
print "Simulation running..."
run(1 * msecond)
start_time = time.time()

run(1 * second)

duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"
plot(M.times / ms, M.smooth_rate(2 * ms, 'gaussian'))
show()



# Tutorial 1a: The simplest Brian program

## Importing the Brian module

The first thing to do in any Brian program is to load Brian and the names of its functions and classes. The standard way to do this is to use the Python from ... import * statement.

from brian import *


## Integrate and Fire model

The neuron model we will use in this tutorial is the simplest possible leaky integrate and fire neuron, defined by the differential equation:

tau dV/dt = -(V-El)

and with a threshold value Vt and reset value Vr.

## Parameters

Brian has a system for defining physical quantities (quantities with a physical dimension such as time). The code below illustrates how to use this system, which (mostly) works just as you’d expect.

tau = 20 * msecond        # membrane time constant
Vt = -50 * mvolt          # spike threshold
Vr = -60 * mvolt          # reset value
El = -60 * mvolt          # resting potential (same as the reset)


The built-in standard units in Brian consist of all the fundamental SI units like second and metre, along with a selection of derived SI units such as volt, farad and coulomb. All names are lowercase, following the SI standard. In addition, there are scaled versions of these units using the standard SI prefixes m=1/1000, k=1000, etc.

## Neuron model and equations

The simplest way to define a neuron model in Brian is to write a list of the differential equations that define it. For the moment, we’ll just give the simplest possible example, a single differential equation. You write it in the following form:

dx/dt = f(x) : unit

where x is the name of the variable, f(x) can be any valid Python expression, and unit is the physical units of the variable x. In our case we will write:

dV/dt = -(V-El)/tau : volt

to define the variable V with units volt.

To complete the specification of the model, we also define a threshold and reset value and create a group of 40 neurons with this model.

G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)


The statement creates a new object G, an instance of the Brian class NeuronGroup, initialised with the values given above and containing 40 neurons. In Python, you can call a function or initialise a class using keyword arguments as well as ordered arguments, so if I defined a function f(x,y) I could call it as f(1,2) or as f(y=2,x=1) and get the same effect. See the Python tutorial for more information on this.

For the moment, we leave the neurons in this group unconnected to each other; each evolves separately from the others.
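As an illustration of what the simulator will do with this model, here is a hand-written forward-Euler sketch of the same equation for a single neuron, in plain Python rather than Brian (same parameter values as above, expressed in volts and seconds; the starting voltage is our own choice):

```python
# Forward Euler integration of tau dV/dt = -(V - El), with threshold/reset
tau, Vt, Vr, El = 20e-3, -50e-3, -60e-3, -60e-3   # seconds / volts
dt = 0.1e-3                                       # Brian's default timestep
V = -55e-3                                        # start above rest (assumed)
for _ in range(int(0.1 / dt)):                    # simulate 100 ms
    V += dt * (-(V - El) / tau)
    if V > Vt:
        V = Vr                                    # spike: reset the voltage
print(V)  # decays toward El = -60 mV
```

With no input current, V simply relaxes exponentially toward El and never reaches the threshold, which is why this first tutorial network produces no spikes.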

## Simulation

Finally, we run the simulation for 1 second of simulated time. By default, the simulator uses a timestep dt = 0.1 ms.

run(1 * second)


And that’s it! To see some of the output of this network, go to the next part of the tutorial.

## Exercise

The units system of Brian is useful for ensuring that everything is consistent, and that you don’t make hard-to-find mistakes in your code by using the wrong units. Try changing the units of one of the parameters and see what happens.

## Solution

You should see an error message with a Python traceback (telling you which functions were being called when the error happened), ending in a line something like:

Brian.units.DimensionMismatchError: The differential equations
are not homogeneous!, dimensions were (m^2 kg s^-3 A^-1)
(m^2 kg s^-4 A^-1)


# Example: Brunel_Hakim_1999 (frompapers)

Dynamics of a network of sparsely connected inhibitory current-based integrate-and-fire neurons. Individual neurons fire irregularly at low rate but the network is in an oscillatory global activity regime where neurons are weakly synchronized.

Reference: “Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates”, Nicolas Brunel & Vincent Hakim, Neural Computation 11, 1621-1671 (1999).

from brian import *

N = 5000
Vr = 10 * mV
theta = 20 * mV
tau = 20 * ms
delta = 2 * ms
taurefr = 2 * ms
duration = .1 * second
C = 1000
sparseness = float(C)/N
J = .1 * mV
muext = 25 * mV
sigmaext = 1 * mV

eqs = """
dV/dt = (-V+muext + sigmaext * sqrt(tau) * xi)/tau : volt
"""

group = NeuronGroup(N, eqs, threshold=theta,
                    reset=Vr, refractory=taurefr)
group.V = Vr
conn = Connection(group, group, state='V', delay=delta,
                  weight = -J,
                  sparseness=sparseness)
M = SpikeMonitor(group)
LFP = PopulationRateMonitor(group, bin=0.4 * ms)

run(duration)

subplot(211)
raster_plot(M)
xlim(0, duration/ms)

subplot(212)
plot(LFP.times_/ms, LFP.rate)
xlim(0, duration/ms)

show()



# Connection matrices

A Connection object has an attribute W which is its connection matrix.

Brian’s system for connection matrices can be slightly confusing. The way it works is roughly as follows. There are two types of connection matrix data structures, ConstructionMatrix and ConnectionMatrix. The construction matrix types are used for building connectivity, and are optimised for insertion and deletion of elements, but access is slow. The connection matrix types are used when the simulation is running, and are optimised for fast access, but not for adding/removing or modifying elements. When a Connection object is created, it is given a construction matrix data type, and when the network is run, this matrix is converted to its corresponding connection matrix type. As well as this construction/connection matrix type distinction, there is also the distinction between dense/sparse/dynamic matrices, each of which have their own construction and connection versions.

The dense matrix structure is very simple, both the construction and connection types are basically just 2D numpy arrays.

The sparse and dynamic matrix structures are very different for construction and connection. Both the sparse and dynamic construction matrices are essentially just the scipy.sparse.lil_matrix sparse matrix type, although we make some slight modifications to scipy’s data type to make it more efficient for our case.

The sparse and dynamic connection matrix structures are documented in more detail in the reference pages for SparseConnectionMatrix and DynamicConnectionMatrix.

For customised run-time modifications to sparse and dense connection matrices you have two options. You can modify the data structures directly using the information in the reference pages linked to above, or you can use the methods defined in the ConnectionMatrix class, which work for dense, sparse and dynamic matrix structures and do not depend on implementation-specific details. These methods provide element, row and column access.

The row and column access methods use either DenseConnectionVector or SparseConnectionVector objects. The dense connection vector is just a 1D numpy array whose length is the size of the row or column. The sparse connection vector is slightly more complicated (but not much); see its documentation for details. The idea is that in most cases, both dense and sparse connection vectors can be operated on without having to know how they work, so for example if v is a ConnectionVector then 2*v is of the same type. So for a ConnectionMatrix W, the following should work whatever the structure of W:

W.set_row(i, 2*W.get_row(i))


Or equivalently:

W[i,:] = 2*W[i,:]


The syntax W[i,:], W[:,i] and W[i,j] is supported for integers i and j for (respectively) row, column and element access.
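The construction/runtime split can be sketched without Brian, using scipy’s sparse types directly (a minimal illustration under stated assumptions, not Brian’s actual implementation; the matrix shape and values here are made up):

```python
import numpy as np
from scipy import sparse

# Construction phase: lil_matrix is cheap to build element by element,
# mirroring the role of Brian's ConstructionMatrix.
W_build = sparse.lil_matrix((4, 4))
W_build[0, 1] = 0.5
W_build[0, 3] = 1.5
W_build[2, 2] = 2.0

# "Run" phase: convert to CSR, which is fast for row access,
# mirroring the ConstructionMatrix -> ConnectionMatrix conversion.
W = W_build.tocsr()

# Analogue of W[i,:] = 2*W[i,:]: scale an existing row in place.
# Only existing nonzeros are touched, as with a SparseConnectionVector.
row = 0
start, end = W.indptr[row], W.indptr[row + 1]
W.data[start:end] *= 2

print(W[0, 1], W[0, 3])
```

The same in-place restriction applies as in Brian: the CSR phase cannot cheaply add new nonzero entries, which is exactly why construction and runtime use different structures.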


# Library models¶

## Membrane equations¶

Library models are defined using the MembraneEquation class. This is a subclass of Equations which is defined by a capacitance C and a sum of currents. The following instruction:

eqs=MembraneEquation(200*pF)


defines the equation C*dvm/dt=0*amp, with the membrane capacitance C=200 pF. The name of the membrane potential variable can be changed as follows:

eqs=MembraneEquation(200*pF,vm='V')


The main interest of this class is that one can use it to build models by adding currents to a membrane equation. The Current class is a subclass of Equations which defines a current to be added to a membrane equation. For example:

eqs=MembraneEquation(200*pF)+Current('I=(V0-vm)/R : amp',current_name='I')


defines the same equation as:

eqs=Equations('''
dvm/dt=I/(200*pF) : volt
I=(V0-vm)/R : amp
''')


The keyword current_name is optional if there is no ambiguity, i.e., if there is only one variable or only one variable with amp units. As for standard equations, Current objects can be initialised with a multiline string (several equations). By default, the convention for the current direction is the one for injected current. For the ionic current convention, use the IonicCurrent class:

eqs=MembraneEquation(200*pF)+IonicCurrent('I=(vm-V0)/R : amp')


## Compartmental modelling¶

Compartmental neuron models can be created by merging several MembraneEquation objects, with the compartments module. If soma and dendrite are two compartments defined as MembraneEquation objects, then a neuron with those 2 compartments can be created as follows:

neuron_eqs=Compartments({'soma':soma,'dendrite':dendrite})
neuron_eqs.connect('soma','dendrite',Ra)
neuron=NeuronGroup(1,model=neuron_eqs)


The Compartments object is initialised with a dictionary of MembraneEquation objects. The returned object neuron_eqs is also a MembraneEquation object, where the name of each compartment has been appended to variable names (with a leading underscore). For example, neuron.vm_soma refers to variable vm of the somatic compartment. The connect method adds a coupling current between the two named compartments, with the given resistance Ra.

## Integrate-and-Fire models¶

A few standard Integrate-and-Fire models are implemented in the IF library module:

from brian.library.IF import *


All these functions return Equations objects (more precisely, MembraneEquation objects).

• Leaky integrate-and-fire model (dvm/dt=(El-vm)/tau : volt):

eqs=leaky_IF(tau=10*ms,El=-70*mV)

• Perfect integrator (dvm/dt=Im/tau : volt):

eqs=perfect_IF(tau=10*ms)

• Quadratic integrate-and-fire model (C*dvm/dt=a*(vm-EL)*(vm-VT) : volt):

eqs=quadratic_IF(C=200*pF,a=10*nS/mV,EL=-70*mV,VT=-50*mV)

• Exponential integrate-and-fire model (C*dvm/dt=gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT) :volt):

eqs=exp_IF(C=200*pF,gL=10*nS,EL=-70*mV,VT=-55*mV,DeltaT=3*mV)


In general, it is possible to define a neuron group with different parameter values for each neuron, by passing strings at initialisation. For example, the following code defines leaky integrate-and-fire models with heterogeneous resting potential values:

eqs=leaky_IF(tau=10*ms,El='V0')+Equations('V0:volt')
group=NeuronGroup(100,model=eqs,reset=0*mV,threshold=15*mV)
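The subthreshold dynamics of the leaky model above, dvm/dt=(El-vm)/tau, can be checked with a plain forward-Euler sketch (no Brian needed; the timestep and duration are arbitrary illustrative choices):

```python
# Forward-Euler sketch of the leaky IF subthreshold equation
# dvm/dt = (El - vm)/tau, using the parameter values quoted above.
# Illustration only: Brian integrates this for you.
tau = 10e-3      # 10 ms, in seconds
El = -70e-3      # -70 mV, in volts
dt = 1e-5        # 0.01 ms timestep (an arbitrary choice)

vm = 0.0         # start at 0 mV
for _ in range(5000):            # 50 ms = 5 time constants
    vm += dt * (El - vm) / tau

print(vm)        # has relaxed close to El = -70e-3
```

After five time constants the membrane potential has decayed to within about 1% of the resting potential, as the closed-form solution vm(t) = El + (vm(0)-El)*exp(-t/tau) predicts.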


## Two-dimensional IF models¶

Integrate-and-fire models with two variables can display a very rich set of electrophysiological behaviours. In Brian, two such models have been implemented: Izhikevich model and Brette-Gerstner adaptive exponential integrate-and-fire model (also included in the IF module). The equations are obtained in the same way as for one-dimensional models:

eqs=Izhikevich(a=0.02/ms,b=0.2/ms)
eqs=Brette_Gerstner(C=281*pF,gL=30*nS,EL=-70.6*mV,VT=-50.4*mV,DeltaT=2*mV,tauw=144*ms,a=4*nS)
eqs=aEIF(C=281*pF,gL=30*nS,EL=-70.6*mV,VT=-50.4*mV,DeltaT=2*mV,tauw=144*ms,a=4*nS) # equivalent


and two state variables are defined: vm (membrane potential) and w (adaptation variable). The equivalent equations for Izhikevich model are:

dvm/dt=(0.04/ms/mV)*vm**2+(5/ms)*vm+140*mV/ms-w : volt
dw/dt=a*(b*vm-w) : volt/second

and for Brette-Gerstner model:

C*dvm/dt=gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT)-w : volt
dw/dt=(a*(vm-EL)-w)/tauw : amp

To simulate these models, one needs to specify a threshold value, and a good choice is VT+4*DeltaT. The reset is particular in these models since it is bidimensional: vm->Vr and w->w+b. A specific reset class has been implemented for this purpose: AdaptiveReset, initialised with Vr and b. Thus, a typical construction of a group of such models is:

eqs=Brette_Gerstner(C=281*pF,gL=30*nS,EL=-70.6*mV,VT=-50.4*mV,DeltaT=2*mV,tauw=144*ms,a=4*nS)
group=NeuronGroup(100,model=eqs,threshold=-42.4*mV,reset=AdaptiveReset(Vr=-70.6*mV,b=0.0805*nA))
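The adaptive dynamics of the Brette-Gerstner model above, together with the bidimensional reset vm->Vr, w->w+b, can be sketched with plain forward-Euler integration (Brian-independent; the input current I, the reset parameters Vr and b, the timestep and the duration are illustrative assumptions, not values from the text):

```python
from math import exp

# Forward-Euler sketch of the aEIF model with the parameter values above,
# including the two-dimensional reset that AdaptiveReset implements.
C, gL, EL = 281e-12, 30e-9, -70.6e-3
VT, DeltaT = -50.4e-3, 2e-3
tauw, a = 144e-3, 4e-9
Vr, b = -70.6e-3, 0.0805e-9        # hypothetical reset parameters
threshold = VT + 4 * DeltaT        # the suggested threshold choice
I = 1e-9                           # 1 nA constant drive (hypothetical)
dt = 1e-5

vm, w, spikes = EL, 0.0, 0
for _ in range(30000):             # 300 ms
    dvm = (gL*(EL - vm) + gL*DeltaT*exp((vm - VT)/DeltaT) - w + I) / C
    dw = (a*(vm - EL) - w) / tauw
    vm += dt * dvm
    w += dt * dw
    if vm > threshold:             # spike: apply the 2D reset
        vm, w = Vr, w + b
        spikes += 1

print(spikes)                      # spike count over 300 ms
```

Each spike increments w by b, so the interspike intervals lengthen over time: this is the spike-frequency adaptation that the second variable provides.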


## Synapses¶

A few simple synaptic models are implemented in the module synapses:

from brian.library.synapses import *


All the following functions need to be passed the name of the variable upon which the received spikes will act, and the name of the variable representing the current or conductance. The simplest one is the exponential synapse:

eqs=exp_synapse(input='x',tau=10*ms,unit=amp,output='x_current')


It is equivalent to:

eqs=Equations('''
dx/dt=-x/tau : amp
x_current=x
''')


Here, x is the variable which receives the spikes and x_current is the variable to be inserted in the membrane equation (since it is a one-dimensional synaptic model, the variables are the same). If the output variable name is not defined, then it will be automatically generated by adding the suffix _out to the input name.

Two other types of synapses are implemented. The alpha synapse (x(t)=alpha*(t/tau)*exp(1-t/tau), where alpha is a normalising factor) is defined with the same syntax by:

eqs=alpha_synapse(input='x',tau=10*ms,unit=amp)


and the bi-exponential synapse is defined by (x(t)=(tau2/(tau2-tau1))*(exp(-t/tau1)-exp(-t/tau2)), up to a normalising factor):

eqs=biexp_synapse(input='x',tau1=10*ms,tau2=5*ms,unit=amp)


For all types of synapses, the normalising factor is such that the maximum of x(t) is 1. These functions can be used as in the following example:

eqs=MembraneEquation(C=200*pF)+Current('I=gl*(El-vm)+ge*(Ee-vm):amp')
eqs+=alpha_synapse(input='ge_in',tau=10*ms,unit=siemens,output='ge')


where alpha conductances have been inserted in the membrane equation.
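The normalisation claim can be checked numerically: in the form x(t)=alpha*(t/tau)*exp(1-t/tau), the factor exp(1-t/tau) already makes the peak exactly 1 at t=tau, so alpha=1 (a Brian-independent check; the time grid is an arbitrary choice):

```python
import numpy as np

# The alpha synapse waveform, evaluated on a fine time grid.
tau = 10e-3
t = np.linspace(0, 100e-3, 10001)
x = (t / tau) * np.exp(1 - t / tau)

print(x.max(), t[np.argmax(x)])    # peak value 1, reached at t = tau
```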

One can directly insert synaptic currents with the functions exp_current, alpha_current and biexp_current:

eqs=MembraneEquation(C=200*pF)+Current('I=gl*(El-vm):amp')+\
alpha_current(input='ge',tau=10*ms)


(the units is amp by default), or synaptic conductances with the functions exp_conductance, alpha_conductance and biexp_conductance:

eqs=MembraneEquation(C=200*pF)+Current('I=gl*(El-vm):amp')+\
alpha_conductance(input='ge',E=0*mV,tau=10*ms)


where E is the reversal potential.

## Ionic currents¶

A few standard ionic currents have been implemented in the module ionic_currents:

from brian.library.ionic_currents import *


When the current name is not specified, a unique name is generated automatically. Models can be constructed by adding currents to a MembraneEquation.

• Leak current (gl*(El-vm)):

current=leak_current(gl=10*nS,El=-70*mV,current_name='I')

• Hodgkin-Huxley K+ current:

current=K_current_HH(gmax,EK,current_name='IK')

• Hodgkin-Huxley Na+ current:

current=Na_current_HH(gmax,ENa,current_name='INa')


# Example: bridge (electrophysiology)¶

Bridge experiment (current-clamp)

from brian import *
from brian.library.electrophysiology import *

defaultclock.dt = .01 * ms

#log_level_debug()

taum = 20 * ms
gl = 20 * nS
Cm = taum * gl
Re = 50 * Mohm
Ce = 0.5 * ms / Re
N = 10

eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
#Rbridge:ohm
I:amp
''')
eqs += electrode(.6 * Re, Ce)
#eqs+=current_clamp(vm='v_el',i_inj='i_cmd',i_cmd='I',Re=.4*Re,Ce=Ce,
#                   bridge='Rbridge')
eqs += current_clamp(vm='v_el', i_inj='i_cmd', i_cmd='I', Re=.4 * Re, Ce=Ce,
                     bridge=Re, capa_comp='CC')
setup = NeuronGroup(N, model=eqs)
setup.I = 0 * nA
setup.v = 0 * mV
#setup.Rbridge=linspace(0*Mohm,60*Mohm,N)
setup.CC = linspace(0 * Ce, Ce, N)
recording = StateMonitor(setup, 'v_rec', record=True)

run(50 * ms)
setup.I = .5 * nA
run(200 * ms)
setup.I = 0 * nA
run(150 * ms)
for i in range(N):
    plot(recording.times / ms + i * 400, recording[i] / mV, 'k')
show()



# Example: linear_gammachirp (hears)¶

Example of the use of the class LinearGammachirp available in the library. It implements a filterbank of FIR gammatone filters with linear frequency sweeps as described in Wagner et al. 2009, “Auditory responses in the barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential”, J. Neurophysiol. In this example, a white noise is filtered by a gammachirp filterbank and the resulting cochleogram is plotted. The different impulse responses are also plotted.

from brian import *
from brian.hears import *

sound = whitenoise(100*ms).ramp()
sound.level = 50*dB

nbr_center_frequencies = 10  #number of frequency channels in the filterbank
#center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz, 1000*Hz, nbr_center_frequencies)

c = 0.0 #glide slope
time_constant = linspace(3, 0.3, nbr_center_frequencies)*ms

gamma_chirp = LinearGammachirp(sound, center_frequencies, time_constant, c)

gamma_chirp_mon = gamma_chirp.process()

figure()

imshow(gamma_chirp_mon.T, aspect='auto')
figure()
plot(gamma_chirp.impulse_response.T)
show()



# Repository structure¶

The Brian source code repository is broken into the following directories:

brian

The main package, documented above, with the following additional directories:

deprecated
For code that is no longer up to date, but that we keep for backwards compatibility.
experimental
Package for storing experimental code that can be used but whose syntax and functionality may change.
library
Modules where specific models are defined (e.g. neuron and synaptic models).
tests

Package for storing tests, composed of:

testcorrectness
Package for tests of mathematical correctness of algorithms, etc.
testinterface
Package for tests of individual Brian modules. Module names are the names of the module being tested prepended by ‘test’.
unused
Old stuff
utils
Modules that are not Brian-specific, for example circular.py defines circular arrays used for storing spiking events.
dev

The main development folder, for works in progress, debugging stuff, tools, etc. Consists of:

benchmarking
Code for benchmarking performance against other languages and simulators.
BEPs
The Brian Enhancement Proposals.
debugging

Dumping ground for files used for debugging a problem.

troubleshooting
Used for debugging problems from the brian-support mailing list.
ideas
For ideas for new features, incomplete implementations, etc. This is where new things go before going into the main Brian package or the experimental package.
logo
The Brian logo in various sizes.
optimising
Ideas for making Brian faster.
speedtracking
A sort of testing framework which tracks, over time, the speed of various Brian features.
tests
A few scripts to run Brian’s tests.
tools

The main folder for developer tools.

docs
Scripts for invoking Sphinx and building the documentation. Includes script to automatically generate documentation for examples and tutorials, and to build index entries for these.
newrelease
Tools for creating a new public release of Brian.
searchreplace
Some tools for doing global changes to the code (e.g. syntax changes).
dist
Automatically generated distribution files.
docs
Automatically generated documentation files in HTML/PDF format.
docs_sphinx
Sources for Sphinx documentation.
examples
Examples of Brian’s use. Documentation is automatically generated from all of these examples.
tutorials
Source files for the tutorials, documentation is automatically generated from these. Each tutorial has a directory, possibly containing an introduction.txt Sphinx source, followed by a series of files in alphabetical order (e.g. 1a, 1b, 1c, etc.). Multi-line strings are treated as Sphinx source code (take a look at a few examples to get the idea).


# Brian hears¶

User guide for Brian hears.

brian.hears.set_default_samplerate(samplerate)

Sets the default samplerate for Brian hears objects, by default 44.1 kHz.

## Sounds¶

class brian.hears.Sound

For an overview, see Sounds.

Initialisation

The following arguments are used to initialise a sound object

data
Can be a filename, an array, a function or a sequence (list or tuple). If it is a filename, the sound file (WAV or AIFF) will be loaded. If it is an array, it should have shape (nsamples, nchannels). If it is a function, it should be a function f(t). If it is a sequence, the items in the sequence can be filenames, functions, arrays or Sound objects. The output will be a multi-channel sound whose channels are the corresponding sounds for each element of the sequence.
samplerate=None
If not specified, the default samplerate will be used (for an array or function) or the samplerate of the file (for a filename).
duration=None
The duration of the sound, if initialising with a function.

static load(filename)

Load the file given by filename and return a Sound object. The sound file can be either a .wav or a .aif file.

save(filename, normalise=False, samplewidth=2)

Save the sound as a WAV.

If the normalise keyword is set to True, the amplitude of the sound will be normalised to 1. The samplewidth keyword can be 1 or 2 to save the data as 8 or 16 bit samples.

play(normalise=False, sleep=False)

Plays the sound (normalised to avoid clipping if required). If sleep=True then the function will wait until the sound has finished playing before returning.

Properties

duration

The length of the sound in seconds.

nsamples

The number of samples in the sound.

nchannels

The number of channels in the sound.

times

An array of times (in seconds) corresponding to each sample.

left

The left channel for a stereo sound.

right

The right channel for a stereo sound.

channel(n)

Returns the nth channel of the sound.

Generating sounds

All sound generating methods can be used with duration arguments in samples (int) or units (e.g. 500*ms). One can also set the number of channels by setting the keyword argument nchannels to the desired value. Note that for noise the channels will be generated independently.

static tone(*args, **kwds)

Returns a pure tone at frequency for duration, using the default samplerate or the given one. The frequency and phase parameters can be single values, in which case multiple channels can be specified with the nchannels argument, or they can be sequences (lists/tuples/arrays) in which case there is one frequency or phase for each channel.

static whitenoise(*args, **kwds)

Returns a white noise. If the samplerate is not specified, the global default value will be used.

static powerlawnoise(*args, **kwds)

Returns a power-law noise for the given duration. Spectral density per unit of bandwidth scales as 1/(f**alpha).

Sample usage:

noise = powerlawnoise(200*ms, 1, samplerate=44100*Hz)


Arguments:

duration
Duration of the desired output.
alpha
Power law exponent.
samplerate
Desired output samplerate
static brownnoise(*args, **kwds)

Returns brown noise, i.e. powerlawnoise() with alpha=2.

static pinknoise(*args, **kwds)

Returns pink noise, i.e. powerlawnoise() with alpha=1.

static silence(*args, **kwds)

Returns a silent, zero sound for the given duration. Set nchannels to set the number of channels.

static click(*args, **kwds)

Returns a click of the given duration.

If peak is not specified, the amplitude will be 1, otherwise peak refers to the peak dB SPL of the click, according to the formula 28e-6*10**(peak/20.).
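The peak-amplitude formula quoted above can be spelled out as a one-liner (a sketch, not the library’s code; note that 28e-6 is approximately sqrt(2) times the 20 µPa reference pressure, so a 94 dB SPL click peaks near sqrt(2) Pa):

```python
# Peak dB SPL -> peak amplitude in Pascals, per the formula above.
def click_peak(peak_db):
    return 28e-6 * 10 ** (peak_db / 20.0)

print(click_peak(94))   # close to 1.4 Pa
```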

static clicks(*args, **kwds)

Returns a series of n clicks (see click()) separated by interval.

static harmoniccomplex(*args, **kwds)

Returns a harmonic complex composed of pure tones at integer multiples of the fundamental frequency f0. The amplitude and phase keywords can be set to either a single value or an array of values. In the former case the value is set for all harmonics, and harmonics up to the sampling frequency are generated. In the latter each harmonic parameter is set separately, and the number of harmonics generated corresponds to the length of the array.

static vowel(*args, **kwds)

Returns an artifically created spoken vowel sound (following the source-filter model of speech production) with a given pitch.

The vowel can be specified by either providing vowel as a string (‘a’, ‘i’ or ‘u’) or by setting formants to a sequence of formant frequencies.

The returned sound is normalized to a maximum amplitude of 1.

The implementation is based on the MakeVowel function written by Richard O. Duda, part of the Auditory Toolbox for Matlab by Malcolm Slaney: http://cobweb.ecn.purdue.edu/~malcolm/interval/1998-010/

Timing and sequencing

static sequence(*sounds, samplerate=None)

Returns the sequence of sounds in the list sounds joined together.

repeat(n)

Repeats the sound n times

extended(duration)

Returns the Sound with length extended by the given duration, which can be the number of samples or a length of time in seconds.

shifted(duration, fractional=False, filter_length=2048)

Returns the sound delayed by duration, which can be the number of samples or a length of time in seconds. Normally, only integer numbers of samples will be used, but if fractional=True then the filtering method from http://www.labbookpages.co.uk/audio/beamforming/fractionalDelay.html will be used (introducing some small numerical errors). With this method, you can specify the filter_length, larger values are slower but more accurate, especially at higher frequencies. The large default value of 2048 samples provides good accuracy for sounds with frequencies above 20 Hz, but not for lower frequency sounds. If you are restricted to high frequency sounds, a smaller value will be more efficient. Note that if fractional=True then duration is assumed to be a time not a number of samples.

resized(L)

Returns the Sound with length extended (or contracted) to have L samples.

Slicing

One can slice sound objects in various ways, for example sound[100*ms:200*ms] returns the part of the sound between 100 ms and 200 ms (not including the right hand end point). If the sound is less than 200 ms long it will be zero padded. You can also set values using slicing, e.g. sound[:50*ms] = 0 will silence the first 50 ms of the sound. The syntax is the same as usual for Python slicing. In addition, you can select a subset of the channels; for example, sound[:, -5:] would be the last 5 channels. For time indices, either times or samples can be given, e.g. sound[:100] gives the first 100 samples. Steps can also be used, for example to reverse a sound with sound[::-1].
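Since a Sound is backed by an array of shape (nsamples, nchannels), the slicing above maps directly onto array slicing (a Brian-independent sketch; the samplerate, channel count and zero data are assumptions for illustration):

```python
import numpy as np

# What the slice syntax means on the underlying (nsamples, nchannels) array.
samplerate = 44100
data = np.zeros((samplerate, 8))            # 1 second, 8 channels (made up)
start, end = int(0.1 * samplerate), int(0.2 * samplerate)
segment = data[start:end, :]                # cf. sound[100*ms:200*ms]
last5 = data[:, -5:]                        # cf. sound[:, -5:]
reversed_sound = data[::-1, :]              # cf. sound[::-1]

print(segment.shape, last5.shape)
```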

Arithmetic operations

Standard arithmetic operations and numpy functions work as you would expect with sounds, e.g. sound1+sound2, 3*sound or abs(sound).

Level

level

Can be used to get or set the level of a sound, which should be in dB. For single channel sounds a value in dB is used, for multiple channel sounds a value in dB can be used for setting the level (all channels will be set to the same level), or a list/tuple/array of levels. It is assumed that the unit of the sound is Pascals.
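The dB SPL convention described here (RMS level re 20 µPa, waveform in Pascals) can be sketched without Brian; the tone frequency, duration and target level below are arbitrary choices:

```python
import numpy as np

# RMS dB SPL of a waveform assumed to be in Pascals.
def level_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) / 2e-5)

target_rms = 2e-5 * 10 ** (60 / 20.0)       # RMS pressure of a 60 dB tone
t = np.arange(44100) / 44100.0              # 1 second at 44.1 kHz
x = np.sqrt(2) * target_rms * np.sin(2 * np.pi * 1000 * t)

print(level_db(x))    # recovers 60 dB
```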

atlevel(level)

Returns the sound at the given level in dB SPL (RMS) assuming array is in Pascals. level should be a value in dB, or a tuple of levels, one for each channel.

maxlevel

Can be used to set or get the maximum level of a sound. For mono sounds, this is the same as the level, but for multichannel sounds it is the maximum level across the channels. Relative level differences will be preserved. The specified level should be a value in dB, and it is assumed that the unit of the sound is Pascals.

atmaxlevel(level)

Returns the sound with the maximum level across channels set to the given level. Relative level differences will be preserved. The specified level should be a value in dB and it is assumed that the unit of the sound is Pascals.

Ramping

ramp(when='onset', duration=10.0 * msecond, envelope=None, inplace=True)

Adds a ramp on/off to the sound

when='onset'
Can take values ‘onset’, ‘offset’ or ‘both’
duration=10*ms
The time over which the ramping happens
envelope
A ramping function, if not specified uses sin(pi*t/2)**2. The function should be a function of one variable t ranging from 0 to 1, and should increase from f(0)=0 to f(1)=1. The reverse is applied for the offset ramp.
inplace
Whether to apply ramping to current sound or return a new array.
ramped(*args, **kwds)

Returns a ramped version of the sound (see Sound.ramp()).

Plotting

spectrogram(low=None, high=None, log_power=True, other=None, **kwds)

Plots a spectrogram of the sound

Arguments:

low=None, high=None
If these are left unspecified, it shows the full spectrogram, otherwise it shows only between low and high in Hz.
log_power=True
If True the colour represents the log of the power.
**kwds
Are passed to Pylab’s specgram command.

Returns the values returned by pylab’s specgram, namely (pxx, freqs, bins, im) where pxx is a 2D array of powers, freqs is the corresponding frequencies, bins are the time bins, and im is the image axis.

spectrum(*args, **kwds)

Returns the spectrum of the sound and optionally plots it.

Arguments:

low, high
If these are left unspecified, it shows the full spectrum, otherwise it shows only between low and high in Hz.
log_power=True
If True it returns the log of the power.
display=False
Whether to plot the output.

Returns (Z, freqs, phase) where Z is a 1D array of powers, freqs is the corresponding frequencies, phase is the unwrapped phase of spectrum.

brian.hears.savesound(sound, filename, normalise=False, samplewidth=2)

Save the sound as a WAV.

If the normalise keyword is set to True, the amplitude of the sound will be normalised to 1. The samplewidth keyword can be 1 or 2 to save the data as 8 or 16 bit samples.

brian.hears.loadsound(filename)

Load the file given by filename and return a Sound object. The sound file can be either a .wav or a .aif file.

brian.hears.play(*sounds, normalise=False, sleep=False)

Plays the sound (normalised to avoid clipping if required). If sleep=True then the function will wait until the sound has finished playing before returning.

brian.hears.whitenoise(*args, **kwds)

Returns a white noise. If the samplerate is not specified, the global default value will be used.

brian.hears.powerlawnoise(*args, **kwds)

Returns a power-law noise for the given duration. Spectral density per unit of bandwidth scales as 1/(f**alpha).

Sample usage:

noise = powerlawnoise(200*ms, 1, samplerate=44100*Hz)


Arguments:

duration
Duration of the desired output.
alpha
Power law exponent.
samplerate
Desired output samplerate
brian.hears.brownnoise(*args, **kwds)

Returns brown noise, i.e. powerlawnoise() with alpha=2.

brian.hears.pinknoise(*args, **kwds)

Returns pink noise, i.e. powerlawnoise() with alpha=1.

brian.hears.irns(*args, **kwds)

Returns an IRN_S noise. The iterated ripple noise is obtained through a cascade of gain and delay filtering. For more details see Yost 1996 or chapter 15 of Hartmann, Signals, Sounds, and Sensation.

brian.hears.irno(*args, **kwds)

Returns an IRN_O noise. The iterated ripple noise is obtained by adding many attenuated and delayed versions of the original broadband noise. For more details see Yost 1996 or chapter 15 of Hartmann, Signals, Sounds, and Sensation.

brian.hears.tone(*args, **kwds)

Returns a pure tone at frequency for duration, using the default samplerate or the given one. The frequency and phase parameters can be single values, in which case multiple channels can be specified with the nchannels argument, or they can be sequences (lists/tuples/arrays) in which case there is one frequency or phase for each channel.

brian.hears.click(*args, **kwds)

Returns a click of the given duration.

If peak is not specified, the amplitude will be 1, otherwise peak refers to the peak dB SPL of the click, according to the formula 28e-6*10**(peak/20.).

brian.hears.clicks(*args, **kwds)

Returns a series of n clicks (see click()) separated by interval.

brian.hears.harmoniccomplex(*args, **kwds)

Returns a harmonic complex composed of pure tones at integer multiples of the fundamental frequency f0. The amplitude and phase keywords can be set to either a single value or an array of values. In the former case the value is set for all harmonics, and harmonics up to the sampling frequency are generated. In the latter each harmonic parameter is set separately, and the number of harmonics generated corresponds to the length of the array.

brian.hears.silence(*args, **kwds)

Returns a silent, zero sound for the given duration. Set nchannels to set the number of channels.

brian.hears.sequence(*sounds, samplerate=None)

Returns the sequence of sounds in the list sounds joined together.

### dB¶

class brian.hears.dB_type

The type of values in dB.

dB values are assumed to be RMS dB SPL assuming that the sound source is measured in Pascals.

class brian.hears.dB_error

Error raised when values in dB are used inconsistently with other units.

## Filterbanks¶

class brian.hears.LinearFilterbank(source, b, a)

Generalised linear filterbank

Initialisation arguments:

source
The input to the filterbank, must have the same number of channels or just a single channel. In the latter case, the channels will be replicated.
b, a
The coeffs b, a must be of shape (nchannels, m) or (nchannels, m, p). Here m is the order of the filters, and p is the number of filters in a chain (first you apply [:, :, 0], then [:, :, 1], etc.).

The filter parameters are stored in the modifiable attributes filt_b, filt_a and filt_state (the variable z in the section below).

Has one method:

decascade(ncascade=1)

Reduces cascades of low order filters into smaller cascades of high order filters.

ncascade is the number of cascaded filters to use, which should be a divisor of the original number.

Note that higher order filters are often numerically unstable.

Notes

These notes adapted from scipy’s lfilter() function.

The filterbank is implemented as a direct II transposed structure. This means that for a single channel and element of the filter cascade, the output y for an input x is defined by:

a[0]*y[i] = b[0]*x[i] + b[1]*x[i-1] + ... + b[m-1]*x[i-m+1]
          - a[1]*y[i-1] - ... - a[m-1]*y[i-m+1]

using the following difference equations:

y[i] = b[0]*x[i] + z[0,i-1]
z[0,i] = b[1]*x[i] + z[1,i-1] - a[1]*y[i]
...
z[m-3,i] = b[m-2]*x[i] + z[m-2,i-1] - a[m-2]*y[i]
z[m-2,i] = b[m-1]*x[i] - a[m-1]*y[i]


where i is the output sample number.

The rational transfer function describing this filter in the z-transform domain is:

       b[0] + b[1] z^-1 + ... + b[m-1] z^-(m-1)
Y(z) = ----------------------------------------- X(z)
       a[0] + a[1] z^-1 + ... + a[m-1] z^-(m-1)
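The (nchannels, m, p) coefficient layout can be sketched with scipy’s lfilter, which implements the same direct form II transposed structure (a Brian-independent illustration; the identical first-order Butterworth lowpasses and the random buffer are arbitrary choices):

```python
import numpy as np
from scipy.signal import butter, lfilter

# Each channel is filtered by a chain of p filters applied in sequence,
# using coefficient arrays of shape (nchannels, m, p).
nchannels, p = 2, 3
b1, a1 = butter(1, 0.2)                       # m = 2 coefficients each
b = np.tile(b1, (nchannels, 1))[:, :, None].repeat(p, axis=2)
a = np.tile(a1, (nchannels, 1))[:, :, None].repeat(p, axis=2)

x = np.random.default_rng(0).standard_normal((1000, nchannels))
y = x.copy()
for chan in range(nchannels):
    for k in range(p):                        # apply [:, :, 0], then [:, :, 1], ...
        y[:, chan] = lfilter(b[chan, :, k], a[chan, :, k], y[:, chan])

print(y.shape)            # same (bufsize, nchannels) shape, filtered per channel
```

Cascading the lowpass three times removes noise power above the cutoff, which is why the output variance drops below the input variance.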
class brian.hears.FIRFilterbank(source, impulse_response, use_linearfilterbank=False, minimum_buffer_size=None)

Finite impulse response filterbank

Initialisation parameters:

source
Source sound or filterbank.
impulse_response
Either a 1D array providing a single impulse response applied to every input channel, or a 2D array of shape (nchannels, ir_length) for ir_length the number of samples in the impulse response. Note that if you are using a multichannel sound x as a set of impulse responses, the array should be impulse_response=array(x.T).
minimum_buffer_size=None
If specified, gives a minimum size to the buffer. By default, for the FFT convolution based implementation of FIRFilterbank, the minimum buffer size will be 3*ir_length. For maximum efficiency with FFTs, buffer_size+ir_length should be a power of 2 (otherwise there will be some zero padding), and buffer_size should be as large as possible.
class brian.hears.RestructureFilterbank(source, numrepeat=1, type='serial', numtile=1, indexmapping=None)

Filterbank used to restructure channels, including repeating and interleaving.

Standard forms of usage:

Repeat mono source N times:

RestructureFilterbank(source, N)


For a stereo source, N copies of the left channel followed by N copies of the right channel:

RestructureFilterbank(source, N)


For a stereo source, N copies of the channels tiled as LRLRLR...LR:

RestructureFilterbank(source, numtile=N)


For two stereo sources AB and CD, join them together in serial to form the output channels in order ABCD:

RestructureFilterbank((AB, CD))


For two stereo sources AB and CD, join them together interleaved to form the output channels in order ACBD:

RestructureFilterbank((AB, CD), type='interleave')


These arguments can also be combined, for example to turn AB and CD into output channels AABBCCDDAABBCCDDAABBCCDD:

RestructureFilterbank((AB, CD), 2, 'serial', 3)


The three arguments are the number of repeats before joining, the joining type (‘serial’ or ‘interleave’) and the number of tilings after joining. See below for details.

Initialise arguments:

source
Input source or list of sources.
numrepeat=1
Number of times each channel in each of the input sources is repeated before mixing the source channels. For example, with repeat=2 an input source with channels AB will be repeated to form AABB
type='serial'
The method for joining the source channels, the options are 'serial' to join the channels in series, or 'interleave' to interleave them. In the case of 'interleave', each source must have the same number of channels. An example of serial, if the input sources are abc and def the output would be abcdef. For interleave, the output would be adbecf.
numtile=1
The number of times the joined channels are tiled, so if the joined channels are ABC and numtile=3 the output will be ABCABCABC.
indexmapping=None
Instead of specifying the restructuring via numrepeat, type, numtile you can directly give the mapping of input indices to output indices. So for a single stereo source input, indexmapping=[1,0] would reverse left and right. Similarly, with two mono sources, indexmapping=[1,0] would have channel 0 of the output correspond to source 1 and channel 1 of the output corresponding to source 0. This is because the indices are counted in order of channels starting from the first source and continuing to the last. For example, suppose you had two sources, each consisting of a stereo sound, say source 0 was AB and source 1 was CD then indexmapping=[1, 0, 3, 2] would swap the left and right of each source, but leave the order of the sources the same, i.e. the output would be BADC.
class brian.hears.Join(*sources)

Filterbank that joins the channels of its inputs in series, e.g. with two input sources with channels AB and CD respectively, the output would have channels ABCD. You can initialise with multiple sources separated by commas, or by passing a list of sources.

class brian.hears.Interleave(*sources)

Filterbank that interleaves the channels of its inputs, e.g. with two input sources with channels AB and CD respectively, the output would have channels ACBD. You can initialise with multiple sources separated by commas, or by passing a list of sources.

class brian.hears.Repeat(source, numrepeat)

Filterbank that repeats each channel from its input, e.g. with 3 repeats channels ABC would map to AAABBBCCC.

class brian.hears.Tile(source, numtile)

Filterbank that tiles the channels from its input, e.g. with 3 tiles channels ABC would map to ABCABCABC.

class brian.hears.FunctionFilterbank(source, func, nchannels=None, **params)

Filterbank that just applies a given function. The function should take as many arguments as there are sources.

For example, to half-wave rectify inputs:

FunctionFilterbank(source, lambda x: clip(x, 0, Inf))


The syntax lambda x: clip(x, 0, Inf) defines a function object that takes a single argument x and returns clip(x, 0, Inf). The numpy function clip(x, low, high) returns the values of x clipped between low and high (so if x<low it returns low, if x>high it returns high, otherwise it returns x). The symbol Inf means infinity, i.e. no clipping of positive values.
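As a minimal standalone illustration (using numpy only, outside of any filterbank), half-wave rectification of a buffered segment of shape (bufsize, nchannels) looks like this:

```python
import numpy as np

# A tiny buffered segment: 3 samples x 2 channels
segment = np.array([[-0.5,  0.2],
                    [ 0.3, -0.1],
                    [-0.2,  0.4]])
rectified = np.clip(segment, 0, np.inf)  # negative values are set to 0
print(rectified[:, 0])  # first channel: negative samples become 0
```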

Technical details

Note that functions should operate on arrays, in particular on 2D buffered segments, which are arrays of shape (bufsize, nchannels). Typically, most standard functions from numpy will work element-wise.

If you want a filterbank that changes the shape of the input (e.g. changes the number of channels), set the nchannels keyword argument to the number of output channels.

class brian.hears.SumFilterbank(source, weights=None)

Sum filterbanks together with given weight vectors.

For example, to take the sum of two filterbanks:

SumFilterbank((fb1, fb2))


To take the difference:

SumFilterbank((fb1, fb2), (1, -1))

class brian.hears.DoNothingFilterbank(source)

Filterbank that does nothing to its input.

Useful for removing a set of filters without having to rewrite your code. Can also be used for simply writing compound derived classes. For example, if you want a compound Filterbank that does AFilterbank and then BFilterbank, but you want to encapsulate that into a single class, you could do:

class ABFilterbank(DoNothingFilterbank):
    def __init__(self, source):
        a = AFilterbank(source)
        b = BFilterbank(a)
        DoNothingFilterbank.__init__(self, b)


However, a more general way of writing compound filterbanks is to use CombinedFilterbank.

class brian.hears.ControlFilterbank(source, inputs, targets, updater, max_interval=None)

Filterbank that can be used for controlling behaviour at runtime

Typically, this class is used to implement a control path in an auditory model, modifying some filterbank parameters based on the output of other filterbanks (or the same ones).

The controller has a set of input filterbanks whose output values are used to modify a set of output filterbanks. The update is done by a user specified function or class which is passed these output values. The controller should be inserted as the last bank in a chain.

Initialisation arguments:

source
The source filterbank, the values from this are used unmodified as the output of this filterbank.
inputs
Either a single filterbank, or sequence of filterbanks which are used as inputs to the updater.
targets
The filterbank or sequence of filterbanks that are modified by the updater.
updater
The function or class which does the updating, see below.
max_interval
If specified, ensures that the updater is called at least as often as this interval (but it may be called more often). Can be specified as a time or a number of samples.

The updater

The updater argument can be either a function or class instance. If it is a function, it should have a form like:

# A single input
# A single input
def updater(input):
    ...

# Two inputs
def updater(input1, input2):
    ...

# Arbitrary number of inputs
def updater(*inputs):
    ...


Each argument input to the function is a numpy array of shape (numsamples, numchannels) where numsamples is the number of samples just computed, and numchannels is the number of channels in the corresponding filterbank. The function is not restricted in what it can do with these inputs.

Functions can be used to implement relatively simple controllers, but for more complicated situations (for example, if you need to maintain state variables between updates) you can use a class. The object updater should then be an instance of a class that defines the __call__ method (with the same syntax as above for functions). In addition, you can optionally define a reinitialisation method reinit(), which will be called when the buffer_init() method is called on the filterbank.

Example

The following will do a simple form of gain control, where the gain parameter will drift exponentially towards target_rms/rms with a given time constant:

# This class implements the gain (see Filterbank for details)
class GainFilterbank(Filterbank):
    def __init__(self, source, gain=1.0):
        Filterbank.__init__(self, source)
        self.gain = gain
    def buffer_apply(self, input):
        return self.gain*input

# This is the class for the updater object
class GainController(object):
    def __init__(self, target, target_rms, time_constant):
        self.target = target
        self.target_rms = target_rms
        self.time_constant = time_constant
    def reinit(self):
        self.sumsquare = 0
        self.numsamples = 0
    def __call__(self, input):
        # Duration of this buffered segment (number of samples / samplerate)
        T = input.shape[0]/self.target.samplerate
        self.sumsquare += sum(input**2)
        self.numsamples += input.size
        rms = sqrt(self.sumsquare/self.numsamples)
        g = self.target.gain
        g_tgt = self.target_rms/rms
        tau = self.time_constant
        self.target.gain = g_tgt+exp(-T/tau)*(g-g_tgt)


And an example of using this with an input source, a target RMS of 0.2 and a time constant of 50 ms, updating every 10 ms:

gain_fb = GainFilterbank(source)
updater = GainController(gain_fb, 0.2, 50*ms)
control = ControlFilterbank(gain_fb, source, gain_fb, updater, 10*ms)
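The update rule in GainController is an exponential drift towards the target gain; a standalone sketch in pure Python (with hypothetical values for the target gain, update interval and time constant) shows the convergence behaviour:

```python
import math

def drift(g, g_tgt, T, tau):
    # One update of duration T moves gain g towards g_tgt with time constant tau
    return g_tgt + math.exp(-T / tau) * (g - g_tgt)

g = 1.0
for _ in range(100):               # 100 updates of 10 ms each
    g = drift(g, g_tgt=2.0, T=0.01, tau=0.05)
print(round(g, 6))                 # converges to 2.0
```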

class brian.hears.CombinedFilterbank(source)

Filterbank that encapsulates a chain of filterbanks internally.

This class should mostly be used by people writing extensions to Brian hears rather than by users directly. The purpose is to take an existing chain of filterbanks and wrap them up so they appear to the user as a single filterbank which can be used exactly as any other filterbank.

In order to do this, derive from this class and in your initialisation follow this pattern:

class RectifiedGammatone(CombinedFilterbank):
    def __init__(self, source, cf):
        CombinedFilterbank.__init__(self, source)
        source = self.get_modified_source()
        # At this point, insert your chain of filterbanks acting on
        # the modified source object
        gfb = Gammatone(source, cf)
        rectified = FunctionFilterbank(gfb,
                            lambda input: clip(input, 0, Inf))
        # Finally, set the output filterbank to be the last in your chain
        self.set_output(rectified)


This combination of a Gammatone and a rectification via a FunctionFilterbank can now be used as a single filterbank, for example:

x = whitenoise(100*ms)
fb = RectifiedGammatone(x, [1*kHz, 1.5*kHz])
y = fb.process()


Details

The reason for the get_modified_source() call is that the source attribute of a filterbank can be changed after creation. The modified source provides a buffer (in fact, a DoNothingFilterbank) so that the input to the chain of filters defined by the derived class doesn’t need to be changed.

## Filterbank library¶

class brian.hears.Gammatone(source, cf, b=1.019, erb_order=1, ear_Q=9.26449, min_bw=24.7)

Bank of gammatone filters.

They are implemented as cascades of four 2nd-order IIR filters (this 8th-order digital filter corresponds to a 4th-order gammatone filter).

The approximated impulse response is IR(t) = t^3 exp(-2*pi*b*ERB(f)*t) cos(2*pi*f*t), where ERB(f) = 24.7 + 0.108*f [Hz] is the equivalent rectangular bandwidth of the filter centered at f.

It comes from Slaney’s exact gammatone implementation (Slaney, M., 1993, “An Efficient Implementation of the Patterson-Holdsworth Auditory Filter Bank”. Apple Computer Technical Report #35). The code is based on Slaney’s Matlab implementation.

Initialised with arguments:

source
Source of the filterbank.
cf
List or array of center frequencies.
b=1.019
Parameter which determines the bandwidth of the filters (and reciprocally the duration of their impulse responses). In particular, the bandwidth = b*ERB(cf), where ERB(cf) is the equivalent rectangular bandwidth at frequency cf. The default value of b corresponds to a best fit (Patterson et al., 1992). b can either be a scalar (the same for every channel) or an array of the same length as cf.
erb_order=1, ear_Q=9.26449, min_bw=24.7
Parameters used to compute the ERB bandwidth: ERB = ((cf/ear_Q)**erb_order + min_bw**erb_order)**(1/erb_order). Their default values are the ones recommended by Glasberg and Moore, 1990.
cascade=None
Specify 1 or 2 to use a cascade of one 8th-order or two 4th-order filters instead of four 2nd-order filters. Note that this is more efficient but may induce numerical stability issues.
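The ERB bandwidth used by these filters can be evaluated directly; a sketch assuming the Glasberg and Moore default parameters:

```python
def erb(cf, ear_Q=9.26449, min_bw=24.7, erb_order=1):
    # Equivalent rectangular bandwidth in Hz at centre frequency cf
    return ((cf / ear_Q) ** erb_order + min_bw ** erb_order) ** (1.0 / erb_order)

print(round(erb(1000.0), 1))   # about 132.6 Hz at 1 kHz
```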
class brian.hears.ApproximateGammatone(source, cf, bandwidth, order=4)

Bank of approximate gammatone filters implemented as a cascade of order IIR gammatone filters.

The filter is derived from the sampled version of the complex analog gammatone impulse response g_gamma(t) = t**(gamma-1) * (lambda * exp(i*eta*t))**gamma, where gamma corresponds to order, eta defines the oscillation frequency cf, and lambda defines the bandwidth parameter.

The design is based on the Hohmann implementation as described in Hohmann, V., 2002, “Frequency analysis and synthesis using a Gammatone filterbank”, Acta Acustica United with Acustica. The code is based on the Matlab gammatone implementation from Meddis’ toolbox.

Initialised with arguments:

source
Source of the filterbank.
cf
List or array of center frequencies.
bandwidth
List or array of filter bandwidths, one for each cf.
order=4
The number of 1st-order gammatone filters put in cascade, and therefore the order of the resulting gammatone filters.
class brian.hears.LogGammachirp(source, f, b=1.019, c=1, ncascades=4)

Bank of gammachirp filters with a logarithmic frequency sweep.

The approximated impulse response is IR(t) = t^3 exp(-2*pi*b*ERB(f)*t) cos(2*pi*(f*t + c*ln(t))), where ERB(f) [Hz] is the equivalent rectangular bandwidth of the filter centered at f.

The implementation is a cascade of 4 2nd-order IIR gammatone filters followed by a cascade of ncascades 2nd-order asymmetric compensation filters as introduced in Unoki et al. 2001, “Improvement of an IIR asymmetric compensation gammachirp filter”.

Initialisation parameters:

source
Source sound or filterbank.
f
List or array of the sweep ending frequencies.
b=1.019
Parameters which determine the duration of the impulse response. b can either be a scalar and will be the same for every channel or an array with the same length as f.
c=1
The glide slope (or sweep rate) given in Hz/second. The trajectory of the instantaneous frequency towards f is an upchirp when c<0 and a downchirp when c>0. c can either be a scalar and will be the same for every channel or an array with the same length as f.
ncascades=4
Number of times the asymmetric compensation filter is put in cascade. The default value comes from Unoki et al. 2001.
class brian.hears.LinearGammachirp(source, f, time_constant, c, phase=0)

Bank of gammachirp filters with linear frequency sweeps and gamma envelope as described in Wagner et al. 2009, “Auditory responses in the barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential”, J. Neurophysiol.

The impulse response is defined as IR(t) = t^3 exp(-t/sigma) cos(2*pi*(f*t + c/2*t**2) + phi), where sigma corresponds to time_constant and phi to phase (see definition of parameters).

These filters are implemented as FIR filters, using truncated time representations of gammachirp functions as the impulse responses. The impulse responses, which must have the same length for every channel, have a duration of 15 times the largest time constant; the length of the impulse response is therefore 15*max(time_constant)*sampling_rate. The impulse responses are normalised with respect to the transmitted power, i.e. the rms of the filter taps is 1.
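The length rule stated above is easy to check numerically (the sample rate and time constants here are hypothetical example values):

```python
samplerate = 44100.0                     # Hz (example value)
time_constant = [0.001, 0.0015, 0.002]   # seconds, one per channel
# Impulse response length = 15 * largest time constant * sample rate
length_impulse_response = int(15 * max(time_constant) * samplerate)
print(length_impulse_response)           # 1323 samples
```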

Initialisation parameters:

source
Source sound or filterbank.
f
List or array of the sweep starting frequencies ( ).
time_constant
Determines the duration of the envelope and consequently the length of the impulse response.
c=1
The glide slope (or sweep rate) given in Hz/second. The time-dependent instantaneous frequency is f+c*t and is therefore going upward when c>0 and downward when c<0. c can either be a scalar and will be the same for every channel or an array with the same length as f.
phase=0
Phase shift of the carrier.

Has attributes:

length_impulse_response
Number of samples in the impulse responses.
impulse_response
Array of shape (nchannels, length_impulse_response) with each row being an impulse response for the corresponding channel.
class brian.hears.LinearGaborchirp(source, f, time_constant, c, phase=0)

Bank of gammachirp filters with linear frequency sweeps and gaussian envelope as described in Wagner et al. 2009, “Auditory responses in the barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential”, J. Neurophysiol.

The impulse response is defined as IR(t) = exp(-t**2/(2*sigma**2)) cos(2*pi*(f*t + c/2*t**2) + phi), where sigma corresponds to time_constant and phi to phase (see definition of parameters).

These filters are implemented as FIR filters using truncated time representations of gammachirp functions as the impulse response. The impulse responses, which need to have the same length for every channel, have a duration of 12 times the biggest time constant. The length of the impulse response is therefore 12*max(time_constant)*sampling_rate. The envelope is a gaussian function (Gabor filter). The impulse responses are normalized with respect to the transmitted power, i.e. the rms of the filter taps is 1.

Initialisation parameters:

source
Source sound or filterbank.
f
List or array of the sweep starting frequencies ( ).
time_constant
Determines the duration of the envelope and consequently the length of the impulse response.
c=1
The glide slope (or sweep rate) given in Hz/second. The time-dependent instantaneous frequency is f+c*t and is therefore going upward when c>0 and downward when c<0. c can either be a scalar and will be the same for every channel or an array with the same length as f.
phase=0
Phase shift of the carrier.

Has attributes:

length_impulse_response
Number of samples in the impulse responses.
impulse_response
Array of shape (nchannels, length_impulse_response) with each row being an impulse response for the corresponding channel.
class brian.hears.IIRFilterbank(source, nchannels, passband, stopband, gpass, gstop, btype, ftype)

Filterbank of IIR filters. The filters can be low, high, bandstop or bandpass and be of type Elliptic, Butterworth, Chebyshev etc. The passband and stopband can be scalars (for low or high pass) or pairs of parameters (for stopband and passband) yielding similar filters for every channel. They can also be arrays of shape (1, nchannels) for low and high pass or (2, nchannels) for stopband and passband yielding different filters along channels. This class uses the scipy iirdesign function to generate filter coefficients for every channel.

See the documentation for scipy.signal.iirdesign for more details.

Initialisation parameters:

samplerate
The sample rate in Hz.
nchannels
The number of channels in the bank
passband, stopband
The edges of the pass and stop bands in Hz. For lowpass and highpass filters, in the case of similar filters for each channel, they are scalars, with passband<stopband for a lowpass and passband>stopband for a highpass. For a bandpass or bandstop filter, in the case of similar filters for each channel, make passband and stopband lists with two elements, e.g. for a bandpass have passband=[200*Hz, 500*Hz] and stopband=[100*Hz, 600*Hz]. passband and stopband can also be arrays of shape (1, nchannels) for lowpass and highpass, or (2, nchannels) for bandpass and bandstop, yielding different filters along channels.
gpass
The maximum loss in the passband in dB. Can be a scalar or an array of length nchannels.
gstop
The minimum attenuation in the stopband in dB. Can be a scalar or an array of length nchannels.
btype
One of ‘low’, ‘high’, ‘bandpass’ or ‘bandstop’.
ftype
The type of IIR filter to design: ‘ellip’ (elliptic), ‘butter’ (Butterworth), ‘cheby1’ (Chebyshev I), ‘cheby2’ (Chebyshev II), ‘bessel’ (Bessel).
class brian.hears.Butterworth(source, nchannels, order, fc, btype='low')

Filterbank of low, high, bandstop or bandpass Butterworth filters. The cut-off frequencies or the band frequencies can either be the same for each channel or different along channels.

Initialisation parameters:

samplerate
Sample rate.
nchannels
Number of filters in the bank.
order
Order of the filters.
fc
Cutoff parameter(s) in Hz. For the case of a lowpass or highpass filterbank, fc is either a scalar (thus the same value for all of the channels) or an array of length nchannels. For the case of a bandpass or bandstop, fc is either a pair of scalars defining the bandpass or bandstop (thus the same values for all of the channels) or an array of shape (2, nchannels) to define a pair for every channel.
btype
One of ‘low’, ‘high’, ‘bandpass’ or ‘bandstop’.

class brian.hears.Cascade(source, filterbank, n)

Cascade of n times a linear filterbank.

Initialised with arguments:

source
Source of the new filterbank.
filterbank
Filterbank object to be put in cascade.
n
Number of times the filterbank is put in cascade.
class brian.hears.LowPass(source, fc)

Bank of 1st-order lowpass filters

The code is based on the code found in the Meddis toolbox. It was implemented here to be used in the DRNL cochlear model implementation.

Initialised with arguments:

source
Source of the filterbank.
fc
Value, list or array (with length = number of channels) of cutoff frequencies.
class brian.hears.AsymmetricCompensation(source, f, b=1.019, c=1, ncascades=4)

Bank of asymmetric compensation filters.

These filters are meant to be used in cascade with gammatone filters to approximate gammachirp filters (Unoki et al., 2001, Improvement of an IIR asymmetric compensation gammachirp filter, Acoust. Sci. & Tech.). They are implemented as a cascade of low-order filters. The code is based on the implementation found in the AIM-MAT toolbox.

Initialised with arguments:

source
Source of the filterbank.
f
List or array of the cut off frequencies.
b=1.019
Determines the duration of the impulse response. Can either be a scalar and will be the same for every channel or an array with the same length as f.
c=1
The glide slope when this filter is used to implement a gammachirp. Can either be a scalar and will be the same for every channel or an array with the same length as f.
ncascades=4
The number of times the basic filter is put in cascade.

## Auditory model library¶

class brian.hears.DRNL(source, cf, type='human', param={})

Implementation of the dual resonance nonlinear (DRNL) filter as described in Lopez-Poveda, E. and Meddis, R., “A human nonlinear cochlear filterbank”, JASA 2001.

The filter output is the sum of a linear and a nonlinear pathway.

The linear path consists of a bank of bandpass filters (second order gammatone), a low pass function, and a gain/attenuation factor, g, in a cascade.

The nonlinear path is a cascade consisting of a bank of gammatone filters, a compression function, a second bank of gammatone filters, and a low pass function, in that order.

Initialised with arguments:

source
Source of the cochlear model.
cf
List or array of center frequencies.
type

defines the parameters set corresponding to a certain fit. It can be either:

type='human'
The parameters come from Lopez-Poveda, E. and Meddis, R., “A human nonlinear cochlear filterbank”, JASA 2001.
type ='guinea pig'
The parameters come from Sumner et al., “A nonlinear filter-bank model of the guinea-pig cochlear nerve: Rate responses”, JASA 2003.
param
Dictionary used to overwrite the default parameters given in the original papers.

The possible parameters to change and their default values for humans (see Lopez-Poveda, E. and Meddis, R., “A human nonlinear cochlear filterbank”, JASA 2001, for notation) are:

param['stape_scale']=0.00014
param['order_linear']=3
param['order_nonlinear']=3


From there on, the parameters are given in the form 10**(p0 + m*log10(cf)), where cf is the center frequency:

param['cf_lin_p0']=-0.067
param['cf_lin_m']=1.016
param['bw_lin_p0']=0.037
param['bw_lin_m']=0.785
param['cf_nl_p0']=-0.052
param['cf_nl_m']=1.016
param['bw_nl_p0']=-0.031
param['bw_nl_m']=0.774
param['a_p0']=1.402
param['a_m']=0.819
param['b_p0']=1.619
param['b_m']=-0.818
param['c_p0']=-0.602
param['c_m']=0
param['g_p0']=4.2
param['g_m']=0.48
param['lp_lin_cutoff_p0']=-0.067
param['lp_lin_cutoff_m']=1.016
param['lp_nl_cutoff_p0']=-0.052
param['lp_nl_cutoff_m']=1.016
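Assuming the 10**(p0 + m*log10(cf)) form described above, any of these parameters can be evaluated per channel; a sketch (the helper name is hypothetical):

```python
import math

def drnl_param(p0, m, cf):
    # Evaluate a DRNL parameter of the form 10**(p0 + m*log10(cf))
    return 10 ** (p0 + m * math.log10(cf))

# e.g. the linear-path centre frequency at cf = 1000 Hz,
# using cf_lin_p0 = -0.067 and cf_lin_m = 1.016 from the table above:
print(round(drnl_param(-0.067, 1.016, 1000.0), 1))
```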

class brian.hears.DCGC(source, cf, update_interval=1, param={})

The compressive gammachirp auditory filter as described in Irino, T. and Patterson R., “A compressive gammachirp auditory filter for both physiological and psychophysical data”, JASA 2001.

Technical implementation details and notation can be found in Irino, T. and Patterson R., “A Dynamic Compressive Gammachirp Auditory Filterbank”, IEEE Trans Audio Speech Lang Processing.

The model consists of a control pathway and a signal pathway in parallel.

The control pathway consists of a bank of bandpass filters followed by a bank of highpass filters (this chain yields a bank of gammachirp filters).

The signal pathway consists of a bank of fixed bandpass filters followed by a bank of highpass filters with variable cutoff frequencies (this chain yields a bank of gammachirp filters with a level-dependent bandwidth). The highpass filters of the signal pathway are controlled by the output levels of the two stages of the control pathway.

Initialised with arguments:

source
Source of the cochlear model.
cf
List or array of center frequencies.
update_interval
Interval in samples controlling how often the band pass filter of the signal pathway is updated. Smaller values are more accurate, but give longer computation times.
param
Dictionary used to overwrite the default parameters given in the original paper.

The possible parameters to change and their default values (see Irino, T. and Patterson R., “A Dynamic Compressive Gammachirp Auditory Filterbank”, IEEE Trans Audio Speech Lang Processing) are:

param['b1'] = 1.81
param['c1'] = -2.96
param['b2'] = 2.17
param['c2'] = 2.2
param['decay_tcst'] = .5*ms
param['lev_weight'] = .5
param['level_ref'] = 50.
param['level_pwr1'] = 1.5
param['level_pwr2'] = .5
param['RMStoSPL'] = 30.
param['frat0'] = .2330
param['frat1'] = .005
param['lct_ERB'] = 1.5  #value of the shift in ERB frequencies
param['frat_control'] = 1.08
param['order_gc']=4
param['ERBrate']= 21.4*log10(4.37*cf/1000+1) # cf is the center frequency
param['ERBwidth']= 24.7*(4.37*cf/1000 + 1)

class brian.hears.MiddleEar(source, gain=1, **kwds)

Implements the middle ear model from Tan & Carney (2003): a linear filter with two pole pairs and one double zero. The gain is normalised for the response of the analog filter at 1000 Hz, as in the model of Tan & Carney. (Their actual C code results in a slightly different normalisation; the difference in overall level is about 0.33 dB. To get exactly the same output as in their model, set the gain parameter to 0.962512703689.)

Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.

class brian.hears.TanCarney(source, cf, update_interval=1, param=None)

Class implementing the nonlinear auditory filterbank model as described in Tan, Q. and Carney, L., “A phenomenological model for the responses of auditory-nerve fibers. II. Nonlinear tuning with a frequency glide”, JASA 2003.

The model consists of a control path and a signal path. The control path controls both its own bandwidth via a feedback loop and also the bandwidth of the signal path.

Initialised with arguments:

source
Source of the cochlear model.
cf
List or array of center frequencies.
update_interval
Interval in samples controlling how often the band pass filter of the signal pathway is updated. Smaller values are more accurate but increase the computation time.
param
Dictionary used to overwrite the default parameters given in the original paper.
class brian.hears.ZhangSynapse(source, CF, n_per_channel=1, params=None)

A FilterbankGroup that represents an IHC-AN synapse according to the Zhang et al. (2001) model. The source should be a filterbank, producing V_ihc (e.g. TanCarney). CF specifies the characteristic frequencies of the AN fibers. params overwrites any parameters values given in the publication.

The group emits spikes according to a time-varying Poisson process with absolute and relative refractoriness (probability of spiking is given by state variable R). The continuous probability of spiking without refractoriness is available in the state variable s.

The n_per_channel argument can be used to generate multiple spike trains for every channel.

If all you need is the state variable s, you can use the class ZhangSynapseRate instead which does not simulate the spike-generating Poisson process.

For details see: Zhang, X., M. G. Heinz, I. C. Bruce, and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers: I. Nonlinear Tuning with Compression and Suppression”. The Journal of the Acoustical Society of America 109 (2001): 648.

## Filterbank group¶

class brian.hears.FilterbankGroup(filterbank, targetvar, *args, **kwds)

Allows a Filterbank object to be used as a NeuronGroup

Initialised as a standard NeuronGroup object, but with two additional arguments at the beginning, and no N (number of neurons) argument. The number of neurons in the group will be the number of channels in the filterbank. (TODO: add reference to interleave/serial channel stuff here.)

filterbank
The Filterbank object to be used by the group. In fact, any Bufferable object can be used.
targetvar
The target variable to put the filterbank output into.

One additional keyword is available beyond that of NeuronGroup:

buffersize=32
The size of the buffered segments to fetch each time. The efficiency depends on this in an unpredictable way, larger values mean more time spent in optimised code, but are worse for the cache. In many cases, the default value is a good tradeoff. Values can be given as a number of samples, or a length of time in seconds.

Note that if you specify your own Clock, it should have 1/dt=samplerate.

## Functions¶

brian.hears.erbspace(*args, **kwds)

Returns the centre frequencies on an ERB scale.

low, high
Lower and upper frequencies
N
Number of channels
earQ=9.26449, minBW=24.7, order=1
Default Glasberg and Moore parameters.
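A sketch of how ERB-scale spacing can be computed with numpy, using the Glasberg and Moore ERB-rate conversion (the actual erbspace implementation may differ in detail):

```python
import numpy as np

def hz_to_erbrate(f):
    # Glasberg & Moore ERB-rate (number of ERBs below frequency f in Hz)
    return 21.4 * np.log10(4.37 * f / 1000.0 + 1)

def erbrate_to_hz(e):
    # Inverse of the conversion above
    return (10 ** (e / 21.4) - 1) * 1000.0 / 4.37

def erbspace_sketch(low, high, N):
    # N centre frequencies linearly spaced on the ERB-rate scale
    return erbrate_to_hz(np.linspace(hz_to_erbrate(low), hz_to_erbrate(high), N))

cf = erbspace_sketch(20.0, 20000.0, 5)
print(cf.round(1))  # endpoints are 20 Hz and 20 kHz, denser at low frequencies
```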
brian.hears.asymmetric_compensation_coeffs(samplerate, fr, filt_b, filt_a, b, c, p0, p1, p2, p3, p4)

This function is used to generate the coefficients of the asymmetric compensation filters used for the gammachirp implementation.

## Plotting¶

brian.hears.log_frequency_xaxis_labels(ax=None, freqs=None)

Sets tick positions for log-scale frequency x-axis at sensible locations.

Also uses scalar representation rather than exponential (i.e. 100 rather than 10^2).

ax=None
The axis to set, or uses gca() if None.
freqs=None
Override the default frequency locations with your preferred tick locations.

Note: with log scaled axes, it can be useful to call axis('tight') before setting the ticks.

brian.hears.log_frequency_yaxis_labels(ax=None, freqs=None)

Sets tick positions for log-scale frequency y-axis at sensible locations.

Also uses scalar representation rather than exponential (i.e. 100 rather than 10^2).

ax=None
The axis to set, or uses gca() if None.
freqs=None
Override the default frequency locations with your preferred tick locations.

Note: with log scaled axes, it can be useful to call axis('tight') before setting the ticks.

## HRTFs¶

class brian.hears.HRTF(hrir_l, hrir_r=None)

Attributes

impulse_response
The pair of impulse responses (as stereo Sound objects)
fir
The impulse responses in a format suitable for using with FIRFilterbank (the transpose of impulse_response).
left, right
The two HRTFs (mono Sound objects)
samplerate
The sample rate of the HRTFs.

Methods

apply(sound)

Returns a stereo Sound object formed by applying the pair of HRTFs to the mono sound input. Equivalently, you can write hrtf(sound) for hrtf an HRTF object.

filterbank(source, **kwds)

Returns an FIRFilterbank object that can be used to apply the HRTF as part of a chain of filterbanks.

You can get the number of samples in the impulse response with len(hrtf).

class brian.hears.HRTFSet(data, samplerate, coordinates)

A collection of HRTFs, typically for a single individual.

Normally this object is created automatically by an HRTFDatabase.

Attributes

hrtf
A list of HRTF objects for each index.
num_indices
The number of HRTF locations. You can also use len(hrtfset).
num_samples
The sample length of each HRTF.
fir_serial, fir_interleaved
The impulse responses in a format suitable for using with FIRFilterbank, in serial (LLLLL...RRRRR....) or interleaved (LRLRLR...).

Methods

subset(condition)

Generates the subset of the set of HRTFs whose coordinates satisfy the condition. This should be one of: a boolean array of length the number of HRTFs in the set, with values of True/False to indicate if the corresponding HRTF should be included or not; an integer array with the indices of the HRTFs to keep; or a function whose argument names are names of the parameters of the coordinate system, e.g. condition=lambda azim:azim<pi/2.

filterbank(source, interleaved=False, **kwds)

Returns an FIRFilterbank object which applies all of the HRTFs in the set. If interleaved=False then the channels are arranged in the order LLLL...RRRR..., otherwise they are arranged in the order LRLRLR....

You can access an HRTF by index via hrtfset[index], or by its coordinates via hrtfset(coord1=val1, coord2=val2).

Initialisation

data
An array of shape (2, num_indices, num_samples) where data[0,:,:] is the left ear and data[1,:,:] is the right ear, num_indices is the number of HRTFs for each ear, and num_samples is the length of the HRTF.
samplerate
The sample rate for the HRTFs (should have units of Hz).
coordinates
A record array of length num_indices giving the coordinates of each HRTF. You can use make_coordinates() to help with this.
class brian.hears.HRTFDatabase(samplerate=None)

Base class for databases of HRTFs

Should have an attribute ‘subjects’ giving a list of available subjects, and a method load_subject(subject) which returns an HRTFSet for that subject.

The initialiser should take (optional) keywords:

samplerate
The intended samplerate (resampling will be used if it is wrong). If left unset, the natural samplerate of the data set will be used.
brian.hears.make_coordinates(**kwds)

Creates a numpy record array from the keywords passed to the function. Each keyword/value pair should be the name of a coordinate and the array of values of that coordinate for each location. Returns a numpy record array. For example:

coords = make_coordinates(azimuth=[0, 30, 60, 0, 30, 60],
                          elevation=[0, 0, 0, 30, 30, 30])
print coords['azimuth']

class brian.hears.IRCAM_LISTEN(basedir, compensated=False, samplerate=None)

HRTFDatabase for the IRCAM LISTEN public HRTF database.

For details on the database, see the website.

The database object can be initialised with the following arguments:

basedir
The directory where the database has been downloaded and extracted, e.g. r'D:\HRTF\IRCAM'. Multiple directories in a list can be provided as well (e.g IRCAM and IRCAM New).
compensated=False
Whether to use the raw or compensated impulse responses.
samplerate=None
If specified, the impulse responses will be resampled to this samplerate; otherwise the default of 44.1 kHz is used.

The coordinates are pairs (azim, elev) where azim ranges from 0 to 345 degrees in steps of 15 degrees, and elev ranges from -45 to 90 in steps of 15 degrees. After loading the database, the attribute subjects gives the numbers of all subjects that were detected as installed.

Obtaining the database

The database can be downloaded here. Each subject archive should be extracted into a folder (e.g. IRCAM), giving subfolders named after the subjects, e.g. IRCAM/IRC_1002, etc.

class brian.hears.HeadlessDatabase(n=None, azim_max=1.5707963267948966, diameter=0.22308 * metre, itd=None, samplerate=None, fractional_itds=False)

Database for creating HRTFSet with artificial interaural time-differences

Initialisation keywords:

n, azim_max, diameter
Specify the ITDs for two ears separated by distance diameter with no head. ITDs corresponding to n angles equally spaced between -azim_max and azim_max are used. The default diameter is that which gives the maximum ITD as 650 microseconds. The ITDs are computed with the formula diameter*sin(azim)/speed_of_sound_in_air. In this case, the generated HRTFSet will have coordinates of azim and itd.
itd
Instead of specifying the keywords above, just give the ITDs directly. In this case, the generated HRTFSet will have coordinates of itd only.
fractional_itds=False
Set this to True to allow ITDs with a fractional multiple of the timestep 1/samplerate. Note that the filters used to do this are not perfect and so this will introduce a small amount of numerical error, and so shouldn’t be used unless this level of timing precision is required. See FractionalDelay for more details.

To get the HRTFSet, the simplest thing to do is just:

hrtfset = HeadlessDatabase(13).load_subject()


The generated ITDs can be returned using the itd attribute of the HeadlessDatabase object.
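The ITD formula quoted above can be evaluated directly. A plain numpy sketch (not Brian hears' internal code; the speed of sound is assumed here to be 343.2 m/s, the value consistent with the quoted default diameter and 650 microsecond maximum ITD):

```python
import numpy as np

# Sketch of the stated ITD formula: diameter * sin(azim) / speed_of_sound,
# evaluated for n angles equally spaced between -azim_max and azim_max.
speed_of_sound = 343.2       # m/s, assumed value; chosen so that the default
                             # diameter of 0.22308 m gives a 650 us maximum ITD
diameter = 0.22308           # metres
n, azim_max = 13, np.pi / 2
azim = np.linspace(-azim_max, azim_max, n)
itd = diameter * np.sin(azim) / speed_of_sound   # seconds, one value per angle

print(itd.max())  # about 650e-6 s, at azim = +90 degrees
```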

If fractional_itds=False, note that the delays induced in the left and right channels are not symmetric, as making them so would waste half the samplerate (the delay to the left channel would be itd/2 and the delay to the right channel -itd/2). Instead, for each channel, either the left channel delay is 0 and the right channel delay is -itd (if itd<0), or the left channel delay is itd and the right channel delay is 0 (if itd>0).

If fractional_itds=True then delays in the left and right channels will be symmetric around a global offset of delay_offset.
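The non-fractional delay assignment described above can be sketched in a few lines of plain Python (a minimal sketch of the rule, not Brian hears' internal code):

```python
def channel_delays(itd):
    # Delay assignment as described above: one channel always gets zero delay,
    # so no samplerate headroom is wasted on a symmetric +/- itd/2 split.
    if itd > 0:
        return itd, 0.0          # (left_delay, right_delay)
    else:
        return 0.0, -itd

print(channel_delays(300e-6))    # (0.0003, 0.0)
print(channel_delays(-300e-6))   # (0.0, 0.0003)
```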

## Base classes¶

Useful for understanding more about the internals.

class brian.hears.Bufferable

Base class for Brian.hears classes

Defines a buffering interface of two methods:

buffer_init()
Initialise the buffer, should set the time pointer to zero and do any other initialisation that the object needs.
buffer_fetch(start, end)
Fetch the next samples start:end from the buffer. The value returned should be an array of shape (end-start, nchannels). May raise an IndexError if the requested samples are outside the possible range.

In addition, bufferable objects should define attributes:

nchannels
The number of channels in the buffer.
samplerate
The sample rate in Hz.

By default, the class will define a default buffering mechanism which can easily be extended. To extend the default buffering mechanism, simply implement the method:

buffer_fetch_next(samples)
Returns the next samples from the buffer.

The default methods for buffer_init() and buffer_fetch() define a buffer cache which grows if necessary to accommodate a buffer_fetch(start, end) call where end-start is larger than the current cache. If the filterbank has a minimum_buffer_size attribute, the internal cache will always have at least this size, and buffer_fetch_next(samples) will always be called with samples>=minimum_buffer_size. This can be useful to ensure that buffering is done efficiently internally, even if the user requests buffered chunks that are too small. If the filterbank has a maximum_buffer_size attribute, then buffer_fetch_next(samples) will always be called with samples<=maximum_buffer_size. This can be useful for limiting memory consumption, or for implementing time-varying filters that need to update on a shorter time window than the overall buffer size.
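As a rough illustration of this caching scheme, here is a toy Bufferable-like class in plain numpy. It is deliberately simplified relative to the real implementation (for instance, the cache here only ever grows and is never discarded):

```python
import numpy as np

class MinimalBufferable(object):
    # Toy sketch of the default buffering mechanism described above.
    def __init__(self, nchannels):
        self.nchannels = nchannels

    def buffer_init(self):
        self.cached_buffer_start = 0
        self.cached_buffer_end = 0
        self.cached_buffer_output = np.zeros((0, self.nchannels))

    def buffer_fetch(self, start, end):
        if end > self.cached_buffer_end:
            # grow the cache by generating the missing samples
            new = self.buffer_fetch_next(end - self.cached_buffer_end)
            self.cached_buffer_output = np.vstack((self.cached_buffer_output, new))
            self.cached_buffer_end = end
        ofs = self.cached_buffer_start
        return self.cached_buffer_output[start - ofs:end - ofs]

    def buffer_fetch_next(self, samples):
        # derived classes would override this; here: a ramp, one value per sample
        t0 = self.cached_buffer_end
        return np.tile(np.arange(t0, t0 + samples).reshape(-1, 1),
                       (1, self.nchannels))

b = MinimalBufferable(2)
b.buffer_init()
chunk = b.buffer_fetch(0, 4)    # shape (4, 2); cache grows to hold samples 0..3
```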

The following attributes will automatically be maintained:

self.cached_buffer_start, self.cached_buffer_end
The start and end of the cached segment of the buffer
self.cached_buffer_output
An array of shape (cached_buffer_end-cached_buffer_start, nchannels) with the current cached segment of the buffer. Note that this array can change size.
class brian.hears.Filterbank(source)

Generalised filterbank object

Documentation common to all filterbanks

Filterbanks all share a few basic attributes:

source

The source of the filterbank, a Bufferable object, e.g. another Filterbank or a Sound. It can also be a tuple of sources. It can be changed after the object is created, although note that for some filterbanks this may cause problems if they make assumptions about the input based on the first source object they were passed. If this is causing problems, you can insert a dummy filterbank (DoNothingFilterbank), which is guaranteed to work if you change the source.

nchannels

The number of channels.

samplerate

The sample rate.

duration

The duration of the filterbank. If it is not specified by the user, it is computed by finding the maximum of its source durations. If these are not specified either, a KeyError will be raised (for example, when using OnlineSound as a source).

To process the output of a filterbank, the following method can be used:

process(func=None, duration=None, buffersize=32)

Returns the output of the filterbank for the given duration.

func
If a function is specified, it should be a function of one or two arguments that will be called on each filtered buffered segment (of shape (buffersize, nchannels)) in order. If the function has one argument, that argument is the buffered segment. If it has two arguments, the second argument is the value returned by the previous application of the function (or 0 for the first application), and the method will return the final value returned by the function. See example below.
duration=None
The length of time (in seconds) or number of samples to process. If no func is specified, the method will return an array of shape (duration, nchannels) with the filtered outputs. Note that in many cases this will be too large to fit in memory, in which case you will want to process the filtered outputs online by providing a function func (see example below). If no duration is specified, the maximum duration of the inputs to the filterbank will be used, or an error will be raised if they do not have durations (e.g. in the case of OnlineSound).
buffersize=32
The size of the buffered segments to fetch, as a length of time or number of samples. 32 samples typically gives reasonably good performance.

For example, to compute the RMS of each channel in a filterbank, you would do:

def sum_of_squares(input, running_sum_of_squares):
    return running_sum_of_squares+sum(input**2, axis=0)
rms = sqrt(fb.process(sum_of_squares)/nsamples)


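The same running-reduction pattern can be seen in plain numpy, independent of Brian hears. Here random numbers stand in for the filterbank output, and the data is processed in buffered chunks rather than all at once:

```python
import numpy as np

nsamples, nchannels, buffersize = 1000, 4, 32
rng = np.random.RandomState(0)
data = rng.randn(nsamples, nchannels)     # fake "filterbank output"

running = 0.0
for start in range(0, nsamples, buffersize):
    chunk = data[start:start + buffersize]           # shape (<=buffersize, nchannels)
    running = running + np.sum(chunk ** 2, axis=0)   # accumulate, as in the example
rms = np.sqrt(running / nsamples)

# identical to computing the RMS on the whole array in one go
assert np.allclose(rms, np.sqrt(np.mean(data ** 2, axis=0)))
```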
Alternatively, the buffer interface can be used, which is described in more detail below.

Filterbank also defines arithmetical operations for +, -, *, / where the other operand can be a filterbank or scalar.

Details on the class

This class is a base class not designed to be instantiated. A Filterbank object should define the interface of Bufferable, as well as defining a source attribute. This is normally a Bufferable object, but could be an iterable of sources (for example, for filterbanks that mix or add multiple inputs).

The buffer_fetch_next(samples) method has a default implementation that fetches the next input, and calls the buffer_apply(input) method on it, which can be overridden by a derived class. This is typically the easiest way to implement a new filterbank. Filterbanks with multiple sources will need to override this default implementation.

There is a default __init__ method that can be called by a derived class that sets the source, nchannels and samplerate from that of the source object. For multiple sources, the default implementation will check that each source has the same number of channels and samplerate and will raise an error if not.

There is a default buffer_init() method that calls buffer_init() on the source (or list of sources).

Example of deriving a class

The following class takes N input channels and sums them to a single output channel:

class AccumulateFilterbank(Filterbank):
    def __init__(self, source):
        Filterbank.__init__(self, source)
        self.nchannels = 1
    def buffer_apply(self, input):
        return reshape(sum(input, axis=1), (input.shape[0], 1))


Note that the default Filterbank.__init__ will set the number of channels equal to the number of source channels, but we want to change it to have a single output channel. We use the buffer_apply method, which automatically handles the efficient caching of the buffer for us. The method receives the array input, which has shape (bufsize, nchannels), and sums over the channels (axis=1). It's important to reshape the output so that it has shape (bufsize, outputnchannels) so that it can be used as the input to subsequent filterbanks.
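A quick numpy check of that shape handling (nothing Brian-specific here):

```python
import numpy as np

# Why the reshape matters: summing over axis=1 drops a dimension, and the
# buffer interface expects 2D (bufsize, nchannels) arrays throughout.
input = np.ones((32, 10))                  # bufsize=32, nchannels=10
summed = np.sum(input, axis=1)             # shape (32,) -- 1D, not usable directly
output = np.reshape(summed, (input.shape[0], 1))
print(output.shape)                        # (32, 1): one output channel
```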

class brian.hears.BaseSound

Base class for Sound and OnlineSound

class brian.hears.OnlineSound

## Options¶

There are several relevant global options for Brian hears. In particular, activating SciPy Weave support with useweave=True will give considerably faster linear filtering. See Preferences and Compiled code for more details.

## Class diagram¶

PKE�B<����?�?%brian-1.4.1/developer-guidelines.html Guidelines — Brian 1.4.1 documentation

# Guidelines¶

The basic principles of developing Brian are:

1. For the user, the emphasis is on making the package flexible, readable and easy to use. See the paper “The Brian simulator” in Frontiers in Neuroscience for more details.
2. For the developer, the emphasis is on keeping the package maintainable by a small number of people. To this end, we use stable, well maintained, existing open source packages whenever possible, rather than writing our own code.

Coding conventions. We use the PEP-8 coding conventions for our code. Syntax is chosen as much as possible from the user's point of view, to reflect the concepts as directly as possible. Ideally, a Brian script should be readable by someone who doesn't know Python or Brian, although this isn't always possible. Function and class names should be explicit rather than abbreviated.

Documentation. It is very important to maintain documentation. We use the Sphinx documentation generator tools. The documentation is all hand written. Sphinx source files are stored in the docs_sphinx folder in the repository, and compiled HTML files are stored in the docs folder. Most of the documentation is stored directly in the Sphinx source text files, but reference documentation for important Brian classes and functions is kept in the documentation strings of those classes themselves. It is automatically pulled from these classes for the reference manual section of the documentation. The idea is to keep the definitive reference documentation near the code that it documents, serving as both a comment for the code itself, and to keep the documentation up to date with the code.

In the code, every class or function should start with an explanation of what it does, unless it is trivial. A good idea is to use explicit names rather than abbreviations, so that you instantly understand what it is about. Inside a function, important chunks should also be commented.

Testing. Brian uses the nose package for its testing framework. Tests are stored in the brian/tests directory. Tests associated with a Brian module are stored in brian/tests/testinterface, and tests of the mathematical correctness of Brian's algorithms are stored in brian/tests/testcorrectness.

Errors. It is a good idea to start an important function (e.g. object initialisation) with a check of the arguments, and possibly issue errors. This way errors are more understandable by the user.

Enhancements. Brian uses a system parallel to the Python Enhancement Proposals (PEPs) system for Python, called Brian Enhancement Proposals (BEPs). These are stored in dev/BEPs. Ideas for new functionality for Brian are put in here for comment and discussion. A BEP typically includes:

• How the feature will look from the user's point of view, with example scripts.
• Detailed implementation ideas and options.

We also use the Brian development mailing list.

## Contributing code¶

First of all, you should register with the developers mailing list. If you want to modify existing modules, you should make sure that you work on the latest SVN version. We use the Eclipse IDE because it has a nice Python plugin (Pydev) and an SVN plugin, but of course you can use your preferred IDE. The next step is to carefully read the guidelines in this guide.

Now that you wrote your code:

• Write a test for it in brian/tests/testinterface. If it is a new module, create a new file test_mymodule.py;
• Write documentation, both in the file (see how it’s done in existing modules) and, if appropriate, in the relevant file in docs_sphinx. We use the Sphinx documentation generator tools. If you want to see how it looks, generate the html docs by executing dev/tools/docs/build_html.py. The html files will then be in docs.
• If it is a significant feature, write an example script in examples and insert a line in examples/examples_guide.txt.
• Create a patch file. For example with Eclipse, right-click on the Brian project, then Team > Create Patch > Save in filesystem, then Next > Project.
• Send your patch as an attachment to the developers mailing list and make sure the subject of your message starts with [PATCH]. Then describe your patch in your message.

From that point, your patch will either be directly included in the svn or (more likely) will be first discussed in the mailing list.

New modules. New Brian modules typically start in the dev/ideas folder, then move to brian/experimental when they start looking like modules. They move to the main folder when they are stable (especially the user syntax).


PKE�B�x 9�D�D6brian-1.4.1/examples-synapses_Diesmann_et_al_1999.html Example: Diesmann_et_al_1999 (synapses) — Brian 1.4.1 documentation

# Example: Diesmann_et_al_1999 (synapses)¶

## Synfire chains¶

M. Diesmann et al. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529-533.

from brian import *

# Neuron model parameters
Vr = -70 * mV
Vt = -55 * mV
taum = 10 * ms
taupsp = 0.325 * ms
weight = 4.86 * mV
# Neuron model
eqs = '''
dV/dt=(-(V-Vr)+x)*(1./taum) : volt
dx/dt=(-x+y)*(1./taupsp) : volt
dy/dt=-y*(1./taupsp)+25.27*mV/ms+\
(39.24*mV/ms**0.5)*xi : volt
'''
# Neuron groups
P = NeuronGroup(N=1000, model=eqs,
                threshold=Vt, reset=Vr, refractory=1 * ms)
Pinput = PulsePacket(t=50 * ms, n=85, sigma=1 * ms)
# The network structure
Pgp = [ P.subgroup(100) for i in range(10)]
C = Synapses(P, P, model='w:volt', pre='y+=w')
for i in range(9):
    C[Pgp[i], Pgp[i + 1]] = True
    C.w[Pgp[i], Pgp[i + 1]] = weight
Cinput = Synapses(Pinput, Pgp[0], model='w:volt', pre='y+=w')
Cinput[:,:]=True
Cinput.w[:,:]=weight
# Record the spikes
Mgp = [SpikeMonitor(p) for p in Pgp]
Minput = SpikeMonitor(Pinput)
monitors = [Minput] + Mgp
# Setup the network, and run it
P.V = Vr + rand(len(P)) * (Vt - Vr)
run(100 * ms)
# Plot result
raster_plot(showgrouplines=True, *monitors)
show()



PKE�B"���/F/Fbrian-1.4.1/clocks.html Clocks — Brian 1.4.1 documentation

# Clocks¶

Brian is a clock-based simulator: operations are done synchronously at each tick of a clock.

Many Brian objects store a clock object, passed in the initialiser with the optional keyword clock. For example, to simulate a neuron group with time step dt=1 ms:

myclock=Clock(dt=1*ms)
group=NeuronGroup(100,model='dx/dt=1*mV/ms : volt',clock=myclock)


If no clock is specified, the program uses the global default clock. When Brian is initially imported, this is the object defaultclock, and it has a default time step of 0.1 ms. In a simple script, you can override this by writing (for example):

defaultclock.dt = 1*ms


You may wish to use multiple clocks in your program. In this case, for each object which requires one, you have to pass a copy of its Clock object. The network run function automatically handles objects with different clocks, updating them all at the appropriate time according to their time steps (value of dt).

Multiple clocks can be useful, for example, for defining a simulation that runs with a very small dt, but with some computationally expensive operation running at a lower frequency. In the following example, the model is simulated with dt=0.01 ms and the variable x is recorded every ms:

simulation_clock=Clock(dt=0.01*ms)
record_clock=Clock(dt=1*ms)
group=NeuronGroup(100,model='dx/dt=-x/tau : volt',clock=simulation_clock)
M=StateMonitor(group,'x',record=True,clock=record_clock)


The current time of a clock is stored in the attribute t (simulation_clock.t) and the timestep is stored in the attribute dt.

When using multiple clocks, it can be important to specify the order in which they are evaluated, which you can do using the order keyword of the Clock object, e.g.:

clock_first = Clock(dt=1*ms, order=0)
clock_second = Clock(dt=5*ms, order=1)


Every 5 ms, these two clocks will coincide, and the order attribute ensures that clock_first is always evaluated before clock_second.
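Brian's actual scheduler lives inside the Network machinery; purely as an illustration of the tie-breaking rule, a toy scheduler that picks the clock with the smallest (time, order) pair might look like this (dt values here are in milliseconds, and all names are hypothetical):

```python
# Sketch of how updates from several clocks interleave: at each step the
# clock with the smallest time runs, and equal times are broken by `order`.
def schedule(clocks, tmax):
    # clocks: list of (name, dt, order) tuples; returns the update sequence
    t = {name: 0.0 for name, dt, order in clocks}
    events = []
    while min(t.values()) < tmax:
        name, dt, order = min(clocks, key=lambda c: (t[c[0]], c[2]))
        events.append((t[name], name))
        t[name] += dt
    return events

seq = schedule([('first', 1, 0), ('second', 5, 1)], 6)
# at t=0 and t=5 both clocks tick; 'first' always precedes 'second'
```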

## Other clocks¶

The default clock uses an underlying integer representation. This behaviour was changed in Brian 1.3 from earlier versions which used a float representation. To recover the earlier behaviour if it is important, you can use FloatClock or NaiveClock.

You may want to have events that happen at regular times, but still want to use the default clock for all other objects, in which case you can use the EventClock for a network_operation() and it will not create any clock ambiguities, e.g.:

from brian import *

...

G = NeuronGroup(N, eqs, ...)

...

@network_operation(clock=EventClock(dt=1*second))
def do_something():
    ...



PKE�B=#BKBK brian-1.4.1/reference-units.html Units system — Brian 1.4.1 documentation

# Units system¶

brian.have_same_dimensions(obj1, obj2)

Tests if two scalar values have the same dimensions, returns a bool.

Note that the syntax may change in later releases of Brian, with tighter integration of scalar and array valued quantities.

brian.is_dimensionless(obj)

Tests if a scalar value is dimensionless or not, returns a bool.

Note that the syntax may change in later releases of Brian, with tighter integration of scalar and array valued quantities.

exception brian.DimensionMismatchError(description, *dims)

Exception class for attempted operations with inconsistent dimensions

For example, 3*mvolt + 2*amp raises this exception. The purpose of this class is to help catch errors based on incorrect units. The exception will print a representation of the dimensions of the two inconsistent objects that were operated on. If you want to check for inconsistent units in your code, do something like:

try:
    ...
    ...
except DimensionMismatchError, inst:
    ...
    # cleanup code here, e.g.
    print "Found dimension mismatch, details:", inst
    ...
brian.check_units(**au)

Decorator to check units of arguments passed to a function

Sample usage:

@check_units(I=amp,R=ohm,wibble=metre,result=volt)
def getvoltage(I,R,**k):
    return I*R


You don’t have to check the units of every variable in the function, and you can define what the units should be for variables that aren’t explicitly named in the definition of the function. For example, the code above checks that the variable wibble should be a length, so writing:

getvoltage(1*amp,1*ohm,wibble=1)


would fail, but:

getvoltage(1*amp,1*ohm,wibble=1*metre)


would pass. String arguments are not checked (e.g. getvoltage(wibble='hello') would pass).

The special name result is for the return value of the function.

An error in the input value raises a DimensionMismatchError, and an error in the return value raises an AssertionError (because it is a code problem rather than a value problem).

Notes

This decorator will destroy the signature of the original function, and replace it with the signature (*args, **kwds). Other decorators will do the same thing, and this decorator critically needs to know the signature of the function it is acting on, so it is important that it is the first decorator to act on a function. It cannot be used in combination with another decorator that also needs to know the signature of the function.
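To see the decorator pattern at work without Brian's unit machinery, here is a toy version of the same idea. Everything here is hypothetical: plain (value, dimension) tuples stand in for real Brian quantities, and check_dims is an invented name, not the library's API:

```python
import functools

def check_dims(**expected):
    # Toy unit-checking decorator: expected maps argument names to a
    # "dimension" string that each (value, dimension) argument must carry.
    def decorator(f):
        @functools.wraps(f)   # unlike this toy, the real decorator loses the signature
        def wrapper(*args, **kwds):
            # match positional arguments to parameter names
            names = f.__code__.co_varnames[:f.__code__.co_argcount]
            bound = dict(zip(names, args))
            bound.update(kwds)
            for name, dim in expected.items():
                if name in bound and bound[name][1] != dim:
                    raise TypeError('%s should have dimension %s' % (name, dim))
            return f(*args, **kwds)
        return wrapper
    return decorator

@check_dims(I='amp', R='ohm')
def getvoltage(I, R):
    return (I[0] * R[0], 'volt')   # (value, dimension) pairs stand in for quantities

v = getvoltage((2.0, 'amp'), (3.0, 'ohm'))   # passes the dimension check
```

Calling getvoltage((2.0, 'amp'), (3.0, 'metre')) would raise a TypeError, the analogue of Brian's DimensionMismatchError in this sketch.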

Typically, you shouldn’t need to use any details about the following two classes, and their implementations are subject to change in future releases of Brian.

class brian.Quantity(value)

A number with an associated physical dimension.

In most cases, it is not necessary to create a Quantity object by hand, instead use the constant unit names second, kilogram, etc. The details of how Quantity objects work is subject to change in future releases of Brian, as we plan to reimplement it in a more efficient manner, more tightly integrated with numpy. The following can be safely used:

• Quantity, this name will not change, and the usage isinstance(x,Quantity) should be safe.
• The standard unit objects, second, kilogram, etc. documented in the main documentation will not be subject to change (as they are based on SI standardisation).
• Scalar arithmetic will work with future implementations.
class brian.Unit(value)

A physical unit

Normally, you do not need to worry about the implementation of units. They are derived from the Quantity object with some additional information (name and string representation). You can define new units which will be used when generating string representations of quantities simply by doing an arithmetical operation with only units, for example:

Nm = newton * metre


Note that operations with units are slower than operations with Quantity objects, so for efficiency if you do not need the extra information that a Unit object carries around, write 1*second in preference to second.


PKE�B�<�Ե3�30brian-1.4.1/examples-misc_correlated_inputs.html Example: correlated_inputs (misc) — Brian 1.4.1 documentation

# Example: correlated_inputs (misc)¶

An example with correlated spike trains, from: Brette, R. (2007). Generation of correlated spike trains.

from brian import *

N = 100
#input = HomogeneousCorrelatedSpikeTrains(N, r=10 * Hz, c=0.1, tauc=10 * ms)

c = .2
nu = linspace(1*Hz, 10*Hz, N)
P = c*dot(nu.reshape((N,1)), nu.reshape((1,N)))/mean(nu**2)
tauc = 5*ms

spikes = mixture_process(nu, P, tauc, 1*second)
#spikes = [(i,t*second) for i,t in spikes]
input = SpikeGeneratorGroup(N, spikes)

S = SpikeMonitor(input)
#S2 = PopulationRateMonitor(input)
#M = StateMonitor(input, 'rate', record=0)
run(1000 * ms)

#subplot(211)
raster_plot(S)
#subplot(212)
#plot(S2.times / ms, S2.smooth_rate(5 * ms))
#plot(M.times / ms, M / Hz)
show()



PKE�B��!�<@<@9brian-1.4.1/examples-frompapers_Brette_Gerstner_2005.html Example: Brette_Gerstner_2005 (frompapers) — Brian 1.4.1 documentation

# Example: Brette_Gerstner_2005 (frompapers)¶

Introduced in Brette R. and Gerstner W. (2005), Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity, J. Neurophysiol. 94: 3637 - 3642.

from brian import *

# Parameters
C = 281 * pF
gL = 30 * nS
taum = C / gL
EL = -70.6 * mV
VT = -50.4 * mV
DeltaT = 2 * mV
Vcut = VT + 5 * DeltaT

# Pick an electrophysiological behaviour
tauw, a, b, Vr = 144 * ms, 4 * nS, 0.0805 * nA, -70.6 * mV # Regular spiking (as in the paper)
#tauw,a,b,Vr=20*ms,4*nS,0.5*nA,VT+5*mV # Bursting
#tauw,a,b,Vr=144*ms,2*C/(144*ms),0*nA,-70.6*mV # Fast spiking

eqs = """
dvm/dt=(gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT)+I-w)/C : volt
dw/dt=(a*(vm-EL)-w)/tauw : amp
I : amp
"""

neuron = NeuronGroup(1, model=eqs, threshold=Vcut, reset="vm=Vr;w+=b", freeze=True)
neuron.vm = EL
trace = StateMonitor(neuron, 'vm', record=0)
spikes = SpikeMonitor(neuron)

run(20 * ms)
neuron.I = 1 * nA
run(100 * ms)
neuron.I = 0 * nA
run(20 * ms)

# We draw nicer spikes
vm = trace[0]
for _, t in spikes.spikes:
    i = int(t / defaultclock.dt)
    vm[i] = 20 * mV

plot(trace.times / ms, vm / mV)
show()



PKE�B�\�u�>�>0brian-1.4.1/examples-misc_realtime_plotting.html Example: realtime_plotting (misc) — Brian 1.4.1 documentation

# Example: realtime_plotting (misc)¶

Realtime plotting example

# These lines are necessary for interactive plotting when launching from the
# Eclipse IDE, they may not be necessary in every environment.
import matplotlib
matplotlib.use('WXAgg') # You may need to experiment, try WXAgg, GTKAgg, QTAgg, TkAgg

from brian import *
###### Set up the standard CUBA example ######
N = 4000
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(N, eqs, threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + 10 * mV * rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=0.02)

M = SpikeMonitor(P)
trace = RecentStateMonitor(P, 'v', record=range(5), duration=200 * ms)

ion()
subplot(211)
raster_plot(M, refresh=10 * ms, showlast=200 * ms, redraw=False)
subplot(212)
trace.plot(refresh=10 * ms, showlast=200 * ms)

run(1 * second)

ioff() # switch interactive mode off
show() # and wait for user to close the window before shutting down



PKE�B:w�w�1�1brian-1.4.1/developer.html Developer’s guide — Brian 1.4.1 documentation

# Developer’s guide¶

This section is intended as a guide to how Brian functions internally for people developing Brian itself, or extensions to Brian. It may also be of some interest to others wishing to better understand how Brian works internally.


PKE�B^� 7>>+brian-1.4.1/examples-hears_butterworth.html Example: butterworth (hears) — Brian 1.4.1 documentation

# Example: butterworth (hears)¶

Example of the use of the class Butterworth available in the library. In this example, a white noise is filtered by a bank of Butterworth bandpass filters and lowpass filters which are different for every channel. The centre or cutoff frequencies of the filters are linearly spaced between 100 Hz and 1000 Hz, and the bandwidth increases linearly with frequency.

from brian import *
from brian.hears import *

level = 50*dB  # level of the input sound in rms dB SPL
sound = whitenoise(100*ms).ramp()
sound = sound.atlevel(level)
order = 2 #order of the filters

#### example of a bank of bandpass filter ################
nchannels = 50
center_frequencies = linspace(100*Hz, 1000*Hz, nchannels)
bw = linspace(50*Hz, 300*Hz, nchannels) # bandwidth of the filters
#arrays of shape (2 x nchannels) defining the passband frequencies (Hz)
fc = vstack((center_frequencies-bw/2, center_frequencies+bw/2))

filterbank = Butterworth(sound, nchannels, order, fc, 'bandpass')

filterbank_mon = filterbank.process()

figure()
subplot(211)
imshow(flipud(filterbank_mon.T), aspect='auto')

### example of a bank of lowpass filter ################
nchannels = 50
cutoff_frequencies = linspace(200*Hz, 1000*Hz, nchannels)

filterbank = Butterworth(sound, nchannels, order, cutoff_frequencies, 'low')

filterbank_mon = filterbank.process()

subplot(212)
imshow(flipud(filterbank_mon.T), aspect='auto')
show()



PKE�Bw��x�x*brian-1.4.1/reference-standard-groups.html Standard Groups — Brian 1.4.1 documentation

# Standard Groups¶

Some standard types of NeuronGroup have already been defined: PoissonGroup and PoissonInput to generate spikes with Poisson statistics, PulsePacket to generate pulse packets with specified parameters, and SpikeGeneratorGroup to generate spikes which fire at prespecified times.

class brian.PoissonGroup(N, rates=0.0 * hertz, clock=None)

A group that generates independent Poisson spike trains.

Initialised as:

PoissonGroup(N,rates[,clock])

with arguments:

N
The number of neurons in the group
rates
A scalar, array or function returning a scalar or array. The array should have the same length as the number of neurons in the group. The function should take one argument t, the current simulation time.
clock
The clock which the group will update with, do not specify to use the default clock.
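The principle behind such a group is simple: at each timestep, every neuron fires independently with probability rate*dt (valid when rate*dt is much less than 1). A plain numpy sketch of that principle, not Brian's implementation:

```python
import numpy as np

rng = np.random.RandomState(42)
N, rate, dt = 1000, 10.0, 1e-3        # 1000 neurons at 10 Hz, dt = 1 ms
nsteps = 1000                         # simulate 1 second

spike_counts = np.zeros(N, dtype=int)
for _ in range(nsteps):
    fired = rng.rand(N) < rate * dt   # boolean spike vector for this timestep
    spike_counts += fired

mean_rate = spike_counts.mean() / (nsteps * dt)   # should be close to 10 Hz
```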
class brian.PoissonInput(target, N=None, rate=None, weight=None, state=None, jitter=None, reliability=None, copies=1, record=False, freeze=False)

Adds a Poisson input to a NeuronGroup. This allows you to efficiently simulate a large number of independent Poisson inputs to a NeuronGroup variable, without simulating every synapse individually. The synaptic events are generated randomly during the simulation and are not preloaded and stored in memory (unless record=True is used). All the inputs must target the same variable, and have the same frequency and synaptic weight. You can use as many PoissonInput objects as you want, even targeting the same NeuronGroup. It is possible to add time jitter to the presynaptic spikes, and to make synapses unreliable. The inputs can also be recorded if needed. Finally, all neurons of the NeuronGroup receive independent realizations of the Poisson spike trains, except if the keyword freeze=True is used, in which case all neurons receive the same Poisson input.

Initialised as:

PoissonInput(target[, N][, rate][, weight][, state][, jitter][, reliability][, copies][, record][, freeze])

with arguments:

target
The target NeuronGroup
N
The number of independent Poisson inputs
rate
The rate of each Poisson process
weight
The synaptic weight
state
The name or the index of the synaptic variable of the NeuronGroup
jitter
None by default. If set, each Poisson event generates copies presynaptic spikes, randomly shifted according to an exponential law with parameter jitter=taujitter (in seconds).
reliability
None by default. If set, each Poisson event generates copies presynaptic spikes, each of which is unreliable, i.e. occurs with probability reliability=alpha (between 0 and 1).
copies
The number of copies of each Poisson event. This is identical to weight=copies*w, except if jitter or reliability are specified.
record
True if the input has to be recorded. In this case, the recorded events are stored in the recorded_events attribute, as a list of pairs (i,t) where i is the neuron index and t is the event time.
freeze
True if the input must be the same for all neurons of the NeuronGroup
class brian.PulsePacket(*args, **kwds)

Fires a Gaussian distributed packet of n spikes with given spread

Initialised as:

PulsePacket(t,n,sigma[,clock])

with arguments:

t
The mean firing time
n
The number of spikes in the packet
sigma
The standard deviation of the firing times.
clock
The clock to use (omit to use default or local clock)

Methods

This class is derived from SpikeGeneratorGroup and has all its methods as well as one additional method:

generate(t, n, sigma)

Change the parameters and/or generate a new pulse packet.

class brian.SpikeGeneratorGroup(N, spiketimes, clock=None, period=None, sort=True, gather=None)

Emits spikes at given times

Initialised as:

SpikeGeneratorGroup(N,spiketimes[,clock[,period]])

with arguments:

N
The number of neurons in the group.
spiketimes
An object specifying which neurons should fire and when. It can be a container such as a list, containing tuples (i,t) meaning neuron i fires at time t, or a callable object which returns such a container (which allows you to use generator objects even though this is slower, see below). i can be an integer or an array (list of neurons that spike at the same time). If spiketimes is not a list or tuple, the pairs (i,t) need to be sorted in time. You can also pass a numpy array spiketimes where the first column of the array is the neuron indices, and the second column is the times in seconds. Alternatively you can pass a tuple with two arrays, the first one being the neuron indices and the second one times. WARNING: units are not checked in this case, the time array should be in seconds.
clock
An optional clock to update with (omit to use the default clock).
period
Optionally makes the spikes recur periodically with the given period. Note that iterator objects cannot be used as the spikelist with a period as they cannot be reinitialised.
gather=False
Set to True if you want to gather spike events that fall in the same timestep. (Deprecated since Brian 1.3.1)
sort=True
Set to False if your spike times are already sorted in time, to avoid re-sorting them.

Has an attribute:

spiketimes
This can be used to reset the list of spike times, however the values of N, clock and period cannot be changed.
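The accepted spiketimes formats are interchangeable. A small sketch showing the same two spikes expressed as a list of pairs, a two-column array, and a tuple of arrays (plain floats stand in for times in seconds, since units are not checked for arrays anyway):

```python
import numpy as np

# neuron 0 fires at 1 ms, neuron 1 at 2 ms (times in seconds)
as_pairs = [(0, 0.001), (1, 0.002)]
as_array = np.array([[0, 0.001],
                     [1, 0.002]])   # column 0: indices, column 1: times
as_tuple = (np.array([0, 1]), np.array([0.001, 0.002]))

# All three carry the same (index, time) information:
indices = as_array[:, 0].astype(int)
times = as_array[:, 1]
```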

Sample usages

The simplest usage would be a list of pairs (i,t):

spiketimes = [(0,1*ms), (1,2*ms)]
SpikeGeneratorGroup(N,spiketimes)


A more complicated example would be to pass a generator:

import random
def nextspike():
    nexttime = random.uniform(0*ms, 10*ms)
    while True:
        yield (random.randint(0, 9), nexttime)
        nexttime = nexttime + random.uniform(0*ms, 10*ms)
P = SpikeGeneratorGroup(10, nextspike())


This would give a neuron group P with 10 neurons, where a random one of the neurons fires at an average rate of one spike every 5 ms. Please note that as of 1.3.1, this behavior is preserved but will run slower than initializing with arrays or lists.

Notes

Note that if a neuron fires more than one spike in a given interval dt, additional spikes will be discarded. A warning will be issued if this is detected.

Also, if you want to use a SpikeGeneratorGroup with many spikes and/or neurons, please use an initialization with arrays.

Also note that if you pass a generator, then reinitialising the group will not have the expected effect because a generator object cannot be reinitialised. Instead, you should pass a callable object which returns a generator. In the example above, that would be done by calling:

P = SpikeGeneratorGroup(10,nextspike)


Whenever P is reinitialised, it will call nextspike() to create the required spike container.

class brian.MultipleSpikeGeneratorGroup(spiketimes, clock=None, period=None)

Emits spikes at given times

Warning

This class has been deprecated since Brian 1.3.1 and will be removed in a future release. Use SpikeGeneratorGroup instead. To convert spiketimes for MultipleSpikeGeneratorGroup into a form suitable for SpikeGeneratorGroup, do:

N = len(spiketimes)
spiketimes = [(i, t) for i in xrange(N) for t in spiketimes[i]]
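For example, applying this conversion to a two-neuron spiketimes list (plain floats standing in for times with units, and range in place of the Python 2 xrange):

```python
spiketimes = [[0.001, 0.003],   # neuron 0 fires at 1 ms and 3 ms
              [0.002]]          # neuron 1 fires at 2 ms

N = len(spiketimes)
converted = [(i, t) for i in range(N) for t in spiketimes[i]]
# converted == [(0, 0.001), (0, 0.003), (1, 0.002)]
```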


Initialised as:

MultipleSpikeGeneratorGroup(spiketimes[,clock[,period]])

with arguments:

spiketimes
a list of spike time containers, one for each neuron in the group, although note that elements of spiketimes can also be callable objects which return spike time containers if you want to be able to reinitialise (see below). At it’s simplest, spiketimes could be a list of lists, where spiketimes contains the firing times for neuron 0, spiketimes for neuron 1, etc. But, any iterable object can be passed, so spiketimes could be a generator for example. Each spike time container should be sorted in time. If the containers are numpy arrays units will not be checked (times should be in seconds).
clock
A clock, if omitted the default clock will be used.
period
Optionally makes the spikes recur periodically with the given period. Note that iterator objects cannot be used as the spikelist with a period as they cannot be reinitialised.

Note that if two or more spike times fall within the same dt, spikes will stack up and come out one per dt until the stack is exhausted. A warning will be generated if this happens.

Also note that if you pass a generator, then reinitialising the group will not have the expected effect because a generator object cannot be reinitialised. Instead, you should pass a callable object which returns a generator, this will be called each time the object is reinitialised by calling the reinit() method.

Sample usage:

spiketimes = [[1*msecond, 2*msecond]]
P = MultipleSpikeGeneratorGroup(spiketimes)



# Example: Fig1F_ROC (frompapers/computing with neural synchrony/coincidence detection and synchrony)¶

## Brette R (2012). Computing with neural synchrony. PLoS Comp Biol. 8(6): e1002561. doi:10.1371/journal.pcbi.1002561¶

Figure 1F. (simulation takes about 10 mins)

Coincidence detection is seen as a signal detection problem, that of detecting a given depolarization in a background of noise, within one characteristic time constant. The choice of the spike threshold implements a particular trade-off between false alarms (depolarization was due to noise) and misses (depolarization was not detected).

Caption (Fig 1F). Receiver-operating characteristic (ROC) for one level of noise, obtained by varying the threshold (black curve). The hit rate is the probability that the neuron fires within one integration time constant when depolarized by Dv, and the false alarm rate is the firing probability without depolarization. The corresponding theoretical curve, with sensitivity index d' = Dv/sigma, is shown in red.
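The theoretical curve follows directly from Gaussian statistics: for a threshold theta (in units of the membrane noise s.d.), the false alarm rate is the probability that a standard Gaussian exceeds theta, and the hit rate is the same probability for a threshold shifted down by d'. A sketch of this construction, using the same spike_probability helper as the script below:

```python
import numpy as np
from scipy.special import erf

def spike_probability(x):
    # P(standard Gaussian > x)
    return 0.5 * (1 - erf(x / np.sqrt(2.0)))

dprime = 1.0
thresholds = np.linspace(0.0, 3.0, 7)
fa = spike_probability(thresholds)            # false alarm rate
hit = spike_probability(thresholds - dprime)  # hit rate

# For d' > 0 the ROC lies above the diagonal: hit >= fa at every threshold.
```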

from brian import *
from scipy.special import erf

def spike_probability(x):
    # firing probability for unit variance and zero mean, and threshold = x
    return .5*(1-erf(x/sqrt(2.)))

tau_cd=5*ms     # membrane time constant (cd for coincidence detector)
tau_n=tau_cd    # input is an Ornstein-Uhlenbeck process with the same time constant as the membrane time constant
T=3*tau_n       # neurons are depolarized by w at regular intervals, T is the spacing
Nspikes=10000   # number of input spikes
T0=T*Nspikes    # initial period without inputs, to calculate the false alarm rate
N=500           # number of neurons, each neuron has a different threshold between 0. and 3.
w=1             # synaptic weight (depolarization)
sigma=1.        # input noise s.d.
sigmav=sigma*sqrt(tau_n/(tau_n+tau_cd)) # noise s.d. on the membrane potential
print "d'=",1./sigmav # discriminability index

# Integrate-and-fire neurons
eqs='''
dv/dt=(sigma*n-v)/tau_cd : 1
dn/dt=-n/tau_n+(2/tau_n)**.5*xi : 1
vt : 1 # spike threshold
'''
neurons=NeuronGroup(N,model=eqs,threshold='v>vt',reset='v=0',refractory=tau_cd)
neurons.vt=linspace(0.,3,N) # spike threshold varies across neurons
counter=SpikeCounter(neurons)

# Inputs are regular spikes, starting at T0
input=SpikeGeneratorGroup(1,[(0,n*T+T0) for n in range(Nspikes)])
C=Connection(input,neurons,'v',weight=w)

# Calculate the false alarm rate
run(T0,report='text')
FR=tau_cd*counter.count*1./T0
# Calculate the hit rate
counter.reinit()
run(Nspikes*T,report='text')
HR=counter.count*1./Nspikes-FR*(T-tau_cd)/tau_cd

# Prediction based on Gaussian statistics
FRpred=spike_probability(neurons.vt/sigmav)
HRpred=spike_probability((neurons.vt-w)/sigmav)

# Figure
plot(FR*100,HR*100,'k')          # simulations
plot(FRpred*100,HRpred*100,'r')  # theoretical predictions
plot([0,100],[0,100],'k--')
plot([0,100],[50,50],'k--')
xlim(0,100)
ylim(0,100)
xlabel('False alarm rate (%)')
ylabel('Hit rate (%)')

show()



# Example: spike_triggered_average (misc)¶

Example of the use of the function spike_triggered_average. White noise is filtered by a Gaussian low-pass filter, whose output is used to generate spikes (a Poisson process). Those spikes are used in conjunction with the input signal to retrieve the filter function.
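The spike-triggered average itself is just the mean of the stimulus windows preceding each spike. A minimal numpy sketch of that computation, independent of Brian's spike_triggered_average helper (the spike indices here are arbitrary placeholders; in the example below they come from a Poisson process):

```python
import numpy as np

rng = np.random.default_rng(1)
stimulus = rng.standard_normal(100000)

# hypothetical spike positions (sample indices into the stimulus)
spike_indices = np.arange(500, 100000, 997)

window = 200  # number of samples before each spike to average
sta = np.zeros(window)
for k in spike_indices:
    sta += stimulus[k - window:k]   # stimulus segment preceding the spike
sta /= len(spike_indices)
```

Because these spikes are uncorrelated with the stimulus, this sta stays near zero; when the spikes are driven by a filtered version of the stimulus, as in the example, the average recovers the filter shape.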

from brian import *
from brian.hears import *
from numpy.random import randn
from numpy.linalg import norm
from matplotlib import pyplot

dt = 0.1*ms
defaultclock.dt = dt
stimulus_duration = 15000*ms
stimulus = randn(int(stimulus_duration / dt))

#filter
n=200
filt = exp(-((linspace(0.5, n, n)) - (n + 5) / 2)**2 / (n / 3))
filt = filt / norm(filt) * 1000
filtered_stimulus = convolve(stimulus,filt)

neuron = PoissonGroup(1,lambda t:filtered_stimulus[int(t/dt)])

spikes = SpikeMonitor(neuron)
run(stimulus_duration,report='text')
spikes = spikes[0] # resulting spike times of the neuron

max_interval = 20*ms #window duration of the spike triggered average
onset = 10*ms
sta,time_axis = spike_triggered_average(spikes,stimulus,max_interval,dt,onset=onset,display=True)

figure()
plot(time_axis,filt/max(filt))
plot(time_axis,sta/max(sta))
xlabel('time axis')
ylabel('sta')
legend(('real filter','estimated filter'))

show()



# Distributed synchrony example¶

Fig. 14 from:

Rossant C, Leijon S, Magnusson AK, Brette R (2011). “Sensitivity of noisy neurons to coincident inputs”. Journal of Neuroscience, 31(47).

5000 independent E/I Poisson inputs are injected into a leaky integrate-and-fire neuron. Synchronous events, following an independent Poisson process at 40 Hz, are considered, where 15 E Poisson spikes are randomly shifted to be synchronous at those events. The output firing rate is then significantly higher, showing that the spike timing of less than 1% of the excitatory synapses has an important impact on the postsynaptic firing.
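Note how the rate of the independent excitatory input in the correlated condition (p3 in the script) is reduced by (p/ne)*lambdac: the synchronous generator (p5) contributes lambdac events of p spikes each, i.e. p*lambdac extra spikes per second, so the total excitatory event rate is identical in both conditions. A quick arithmetic check (plain floats, rates in Hz):

```python
ne = 4000       # number of excitatory inputs
p = 15          # spikes per synchronous event
lambdae = 1.0   # rate of each independent E input (Hz)
lambdac = 40.0  # rate of synchronous events (Hz)

uncorrelated_rate = ne * lambdae
correlated_rate = ne * (lambdae - (p * 1.0 / ne) * lambdac) + p * lambdac
# both equal 4000 excitatory spikes per second
```

Only the spike timing differs between the two conditions, which is exactly the point of the figure.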

from brian import *

# neuron parameters
theta = -55*mV
El = -65*mV
vmean = -65*mV
taum = 5*ms
taue = 3*ms
taui = 10*ms
eqs = Equations("""
dv/dt  = (ge+gi-(v-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
""")

# input parameters
p = 15
ne = 4000
ni = 1000
lambdac = 40*Hz
lambdae = lambdai = 1*Hz

# synapse parameters
we = .5*mV/(taum/taue)**(taum/(taue-taum))
wi = (vmean-El-lambdae*ne*we*taue)/(lambdae*ni*taui)

# NeuronGroup definition
group = NeuronGroup(N=2, model=eqs, reset=El, threshold=theta, refractory=5*ms)
group.v = El
group.ge = group.gi = 0

# independent E/I Poisson inputs
p1 = PoissonInput(group, N=ne, rate=lambdae, weight=we, state='ge')
p2 = PoissonInput(group, N=ni, rate=lambdai, weight=wi, state='gi')

# independent E/I Poisson inputs + synchronous E events
p3 = PoissonInput(group, N=ne, rate=lambdae-(p*1.0/ne)*lambdac, weight=we, state='ge')
p4 = PoissonInput(group, N=ni, rate=lambdai, weight=wi, state='gi')
p5 = PoissonInput(group, N=1, rate=lambdac, weight=p*we, state='ge')

# run the simulation
reinit_default_clock()
M = SpikeMonitor(group)
SM = StateMonitor(group, 'v', record=True)
run(1*second)

# plot trace and spikes
for i in [0, 1]:
    spikes = M.spiketimes[i] - .0001
    val = SM.values[i]
    subplot(2, 1, i + 1)
    plot(SM.times, val)
    plot(tile(spikes, (2, 1)),
         vstack((val[array(spikes * 10000, dtype=int)],
                 zeros(len(spikes)))), 'b')
    title("%s: %d spikes/second" % (["uncorrelated inputs", "correlated inputs"][i],
                                    len(M.spiketimes[i])))
show()



# Example: poisson_synapses (synapses)¶

This example shows how to efficiently simulate neurons with a large number of Poisson inputs targeting arbitrarily complex synapses. The approach is very similar to what the PoissonInput class does internally, but PoissonInput cannot be combined with the Synapses class. You could also just use many PoissonGroup objects as inputs, but this is very slow and memory consuming.
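The trick used in the pre code below can be checked outside Brian: drawing binomial(M, rate(t)*dt) events per timestep reproduces the expected event count of a time-inhomogeneous Poisson input. A numpy sketch with the same sinusoidally modulated rate as the example:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 1000          # number of Poisson inputs
max_rate = 100.0  # Hz
dt = 1e-4         # timestep (s)
duration = 1.0    # s
t = np.arange(0, duration, dt)

# per-input firing probability in each timestep
p_t = dt * max_rate * (0.5 + 0.5 * np.sin(2 * np.pi * 5 * t))
counts = rng.binomial(M, p_t)   # events per timestep, vectorized over time

# the sinusoid averages to max_rate/2 over the 5 full periods in 1 s
expected_total = M * max_rate * 0.5 * duration
```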

from brian import *

# Poisson inputs
M = 1000 # number of Poisson inputs
max_rate = 100

# Neurons
N = 50 # number of neurons
tau = 10 * ms
E_exc = 0 * mV
E_L = -70 * mV
G = NeuronGroup(N, model='dvm/dt = -(vm - E_L)/tau : mV')
G.rest()

# Dummy neuron group
P = NeuronGroup(1, 'v : 1', threshold= -1, reset=0) # spikes every timestep

# time varying rate
def varying_rate(t):
    return defaultclock.dt * max_rate * (0.5 + 0.5 * sin(2 * pi * 5 * t))

# Synaptic connections: binomial(cellM, varying_rate(t)) gives the number of
# events per timestep. The synapse model is a conductance-based instantaneous
# jump in postsynaptic membrane potential
S = Synapses(P, G, model='''
J : 1
cellM : 1
''',
pre='vm += binomial(cellM, varying_rate(t)) * J * (E_exc - vm)')
S[:, :] = True
S.cellM = M #we need one value for M per cell, so that binomial is vectorized
S.J = 0.0005

mon = StateMonitor(G, 'vm', record=True)
run(1 * second, report='text')

mon.plot()
show()



# Tutorial 1d: Introducing randomness¶

In the previous part of the tutorial, all the neurons start at the same values and proceed deterministically, so they all spike at exactly the same times. In this part, we introduce some randomness by initialising all the membrane potentials to uniform random values between the reset and threshold values.

We start as before:

from brian import *

tau = 20 * msecond        # membrane time constant
Vt = -50 * mvolt          # spike threshold
Vr = -60 * mvolt          # reset value
El = -49 * mvolt          # resting potential (same as the reset)

G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)

M = SpikeMonitor(G)


But before we run the simulation, we set the values of the membrane potentials directly. The notation G.V refers to the array of values for the variable V in group G. In our case, this is an array of length 40. We set its values by generating an array of random numbers using Brian’s rand function. The syntax rand(size) generates an array of length size consisting of uniformly distributed random numbers in the interval [0, 1).
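The same initialisation can be checked with plain numpy (values in volts, no Brian units): every membrane potential lands strictly between reset and threshold.

```python
from numpy.random import rand

Vt = -0.050  # spike threshold (V)
Vr = -0.060  # reset value (V)

V = Vr + rand(40) * (Vt - Vr)
# each entry is uniformly distributed in [Vr, Vt)
```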

G.V = Vr + rand(40) * (Vt - Vr)


And now we run the simulation as before.

run(1 * second)

print M.nspikes


But this time we get a varying number of spikes each time we run it, roughly between 800 and 850 spikes. In the next part of this tutorial, we introduce a bit more interest into this network by connecting the neurons together.
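The 800-850 figure can be recovered analytically: between spikes each neuron relaxes deterministically towards El, so it charges from Vr to Vt in time T = tau*ln((El-Vr)/(El-Vt)), and 40 neurons each firing every T seconds give about 40/T spikes per second. A quick check (this ignores the randomised starting values, which only shift the firing phases):

```python
from math import log

tau = 0.020                        # membrane time constant (s)
Vt, Vr, El = -0.050, -0.060, -0.049

T = tau * log((El - Vr) / (El - Vt))  # time from reset to threshold
nspikes = 40 / T                      # expected spikes in a 1 second run
# about 834 spikes, consistent with the observed 800-850 range
```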


# Preferences¶

## Functions¶

Setting and getting global preferences is done with the following functions:

brian.set_global_preferences(**kwds)

Set global preferences for Brian

Usage:

set_global_preferences(...)


where ... is a list of keyword assignments.

brian.get_global_preference(k)

Get the value of the named global preference

## Global configuration file¶

If you have a module named brian_global_config anywhere on your Python path, Brian will attempt to import it to define global preferences. For example, to automatically enable weave compilation for all your Brian projects, create a file brian_global_config.py somewhere in the Python path with the following contents:

from brian.globalprefs import *
set_global_preferences(useweave=True)


## Global preferences for Brian¶

The following global preferences have been defined:

defaultclock = Clock(dt=0.1*msecond)
The default clock to use if none is provided or defined in any enclosing scope.
useweave_linear_diffeq = False
Whether to use weave C++ acceleration for the solution of linear differential equations. Note that on some platforms, typically older ones, this is faster and on some platforms, typically new ones, this is actually slower.
useweave = False
Defines whether or not functions should use inlined compiled C code where defined. Requires a compatible C++ compiler. The gcc and g++ compilers are probably the easiest option (use Cygwin on Windows machines). See also the weavecompiler global preference.
weavecompiler = gcc
Defines the compiler to use for weave compilation. On Windows machines, installing Cygwin is the easiest way to get access to the gcc compiler.
gcc_options = ['-ffast-math']
Defines the compiler switches passed to the gcc compiler. For gcc versions 4.2+ we recommend using -march=native. By default, the -ffast-math optimisations are turned on - if you need IEEE guaranteed results, turn this switch off.
openmp = False
Whether or not to use OpenMP pragmas in generated C code. If supported on your compiler (gcc 4.2+) it will use multiple CPUs and can run substantially faster. However, if you are already running several simulations in parallel this will not improve the speed and may even slow it down. In addition, for smaller networks or for simpler neuron models the parallelisation overheads can make it take longer.
usecodegen = False
Whether or not to use experimental code generation support.
usecodegenweave = False
Whether or not to use C with experimental code generation support.
usecodegenstateupdate = True
Whether or not to use experimental code generation support on state updaters.
usecodegenreset = False
Whether or not to use experimental code generation support on resets. Typically slower due to weave overheads, so usually leave this off.
usecodegenthreshold = True
Whether or not to use experimental code generation support on thresholds.
usenewpropagate = False
Whether or not to use experimental new C propagation functions.
usecstdp = False
Whether or not to use experimental new C STDP.
brianhears_usegpu = False
Whether or not to use the GPU (if available) in Brian.hears. Support is experimental at the moment, and requires the PyCUDA package to be installed.
magic_useframes = True
Defines whether or not the magic functions should search for objects defined only in the calling frame or if they should find all objects defined in any frame. This should be set to False if you are using Brian from an interactive shell like IDLE or IPython where each command has its own frame, otherwise set it to True.


# Example: I-F_curve (misc)¶

Input-frequency curve of a neuron (cortical RS type). Network: 1000 unconnected integrate-and-fire neurons (Brette-Gerstner) with an input parameter I. The input is set differently for each neuron. Spikes are sent to a ‘neuron’ group with the same size and variable n, which plays the role of a spike counter.

from brian import *
from brian.library.IF import *

N = 1000
eqs = Brette_Gerstner() + Current('I:amp')
print eqs
group = NeuronGroup(N, model=eqs, threshold= -20 * mV, reset=AdaptiveReset())
group.vm = -70 * mV
group.I = linspace(0 * nA, 1 * nA, N)

counter = NeuronGroup(N, model='n:1')
C = IdentityConnection(group, counter, 'n')

i = N * 8 / 10
trace = StateMonitor(group, 'vm', record=i)

duration = 5 * second
run(duration)
subplot(211)
plot(group.I / nA, counter.n / duration)
xlabel('I (nA)')
ylabel('Firing rate (Hz)')
subplot(212)
plot(trace.times / ms, trace[i] / mV)
xlabel('Time (ms)')
ylabel('Vm (mV)')
show()



# Example: Fig1D_duration_selectivity (frompapers/computing with neural synchrony/duration selectivity)¶

## Brette R (2012). Computing with neural synchrony. PLoS Comp Biol. 8(6): e1002561. doi:10.1371/journal.pcbi.1002561¶

Figure 1C,D. Duration selectivity. (takes about 1 min)

Caption (Fig. 1C,D). A postsynaptic neuron receives inputs from A and B. It is more likely to fire when the stimulus is in the synchrony receptive field of A and B.

from brian import *

# Parameters and equations of the rebound neurons
Vt=-55*mV
Vr=-70*mV
El=-35*mV
EK=-90*mV
Va=Vr
ka=5*mV
gmax2=2
tau=20*ms
ginh_max=5.
tauK2=100*ms
N=10000 # number of neurons (= different durations)
rest_time=1*second # initial time
tmin=rest_time-20*ms #for plots
tmax=rest_time+600*ms

eqs='''
dv/dt=(El-v+(gmax*gK+gmax2*gK2+ginh)*(EK-v))/tau : volt
dgK/dt=(gKinf-gK)/tauK : 1 # IKLT
dgK2/dt=-gK2/tauK2 : 1 # Delayed rectifier
gKinf=1./(1+exp((Va-v)/ka)) : 1
duration : second
ginh = ginh_max*((t>rest_time) & (t<(rest_time+duration))) : 1
tauK : ms
gmax : 1
theta : volt # threshold
'''

neurons=NeuronGroup(2*N,model=eqs,threshold='v>theta',reset='v=Vr;gK2=1')
neurons.v=Vr
neurons.theta=Vt
neurons.gK=1./(1+exp((Va-El)/ka))
# Neuron A, duplicated to simulate multiple input durations simultaneously
neuronsA=neurons[:N]
neuronsA.tauK=400*ms
neuronsA.gmax=1
neuronsA.theta=-55*mV
neuronsA.duration=linspace(100*ms,1*second,N)
# Neuron B, duplicated to simulate multiple input durations simultaneously
neuronsB=neurons[N:]
neuronsB.tauK=100*ms
neuronsB.gmax=1.5
neuronsB.theta=-54*mV
neuronsB.duration=linspace(100*ms,1*second,N)

# Noisy coincidence detectors
tau_cd=5*ms
tau_n=tau_cd
sigma=0.2 # noise s.d. in units of the threshold
eqs_post='''
dv/dt=(n-v)/tau_cd : 1
dn/dt=-n/tau_n+sigma*(2/tau_n)**.5*xi : 1
'''
postneurons=NeuronGroup(N,model=eqs_post,threshold=1,reset=0)
CA=IdentityConnection(neuronsA,postneurons,'v',weight=0.5)
CB=IdentityConnection(neuronsB,postneurons,'v',weight=0.5)

spikes=SpikeCounter(postneurons)
M=StateMonitor(postneurons,'v',record=N/3)

run(rest_time+1.1*second,report='text')

# Figure
subplot(121) # Fig. 1C, example trace
plot(M.times/ms,M[N/3],'k')
xlim(1350,1500)
ylim(-.3,1)
xlabel('Time (ms)')
ylabel('V')

subplot(122) # Fig. 1D, duration tuning curve
count=spikes.count
# Smooth the tuning curve
window=200
rate=zeros(len(count)-window)
for i in range(0, len(count) - window):
    rate[i] = mean(count[i:i + window])
plot((neuronsA.duration[window/2:-window/2]/ms)[::10],rate[::10],'k')
xlim(0,1000)
ylim(0,0.5)
xlabel('Duration (ms)')
ylabel('Spiking probability')
show()



# Example: COBA (misc)¶

This is a Brian script implementing a benchmark described in the following review paper:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschlager, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98

Benchmark 1: random network of integrate-and-fire neurons with exponential synaptic conductances

Clock-driven implementation with Euler integration (no spike time interpolation)

## R. Brette - Dec 2007¶

Brian is a simulator for spiking neural networks written in Python, developed by R. Brette and D. Goodman. http://brian.di.ens.fr

from brian import *
import time

# Time constants
taum = 20 * msecond
taue = 5 * msecond
taui = 10 * msecond
# Reversal potentials
Ee = (0. + 60.) * mvolt
Ei = (-80. + 60.) * mvolt

start_time = time.time()
eqs = Equations('''
dv/dt = (-v+ge*(Ee-v)+gi*(Ei-v))*(1./taum) : volt
dge/dt = -ge*(1./taue) : 1
dgi/dt = -gi*(1./taui) : 1
''')
# NB 1: conductances are in units of the leak conductance
# NB 2: multiplication is faster than division

P = NeuronGroup(4000, model=eqs, threshold=10 * mvolt, \
reset=0 * mvolt, refractory=5 * msecond,
order=1, compile=True)
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = 6. / 10. # excitatory synaptic weight (voltage)
wi = 67. / 10. # inhibitory synaptic weight
Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.02)
# Initialization
P.v = (randn(len(P)) * 5 - 5) * mvolt
P.ge = randn(len(P)) * 1.5 + 4
P.gi = randn(len(P)) * 12 + 20

# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)

print "Network construction time:", time.time() - start_time, "seconds"
print "Simulation running..."
start_time = time.time()

run(1 * second)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"



# Example: short_term_plasticity (plasticity)¶

Example with a short-term plasticity model: neurons with regular inputs and depressing synapses.

from brian import *

tau_e = 3 * ms
taum = 10 * ms
A_SE = 250 * pA
Rm = 100 * Mohm
N = 10

eqs = '''
dx/dt=rate : 1
rate : Hz
'''

input = NeuronGroup(N, model=eqs, threshold=1., reset=0)
input.rate = linspace(5 * Hz, 30 * Hz, N)

eqs_neuron = '''
dv/dt=(Rm*i-v)/taum:volt
di/dt=-i/tau_e:amp
'''
neuron = NeuronGroup(N, model=eqs_neuron)

C = Connection(input, neuron, 'i')
C.connect_one_to_one(weight=A_SE)
stp = STP(C, taud=1 * ms, tauf=100 * ms, U=.1) # facilitation
#stp=STP(C,taud=100*ms,tauf=10*ms,U=.6) # depression
trace = StateMonitor(neuron, 'v', record=[0, N - 1])

run(1000 * ms)
subplot(211)
plot(trace.times / ms, trace[0] / mV)
title('Vm')
subplot(212)
plot(trace.times / ms, trace[N - 1] / mV)
title('Vm')
show()



# Model fitting¶

The modelfitting library is used for fitting a neuron model to data.

The library provides a single function modelfitting(), which accepts the model and the data as arguments and returns the model parameters that best fit the data. The model is a spiking neuron model, whereas the data consists of both an input (a time-varying signal, for example an injected current) and a set of spike trains. Only spikes are considered for the fitness. Several target spike trains can be specified in order to fit several data sets independently. In this case, the modelfitting() function returns as many parameter sets as there are target spike trains.

The model is defined as any spiking neuron model in Brian, by giving the equations as mathematical equations, and the reset and threshold values. The free parameters of the model that shall be fitted by the library are also specified. The data is specified by the input (a vector containing the time-varying injected current), the timestep of the input, and the data as a list of spike times.

## How it works¶

Fitting a spiking neuron model to electrophysiological data is performed by maximizing a fitness function measuring the adequacy of the model to the data. This function is defined as the gamma factor, which is based on the number of coincidences between the model spikes and the experimentally-recorded spikes, i.e. the number of spikes in the experimental train such that there is at least one spike in the model train within plus or minus delta, where delta is the size of the temporal window (typically a few milliseconds). For more details on the gamma factor, see Jolivet et al. 2008, “A benchmark test for a quantitative assessment of simple neuron models”, J. Neurosci. Methods.
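The coincidence count at the heart of the gamma factor can be sketched independently of the library: a data spike counts as a hit if at least one model spike falls within plus or minus delta of it. This is only the counting step, not the full normalised gamma factor (which also corrects for chance coincidences):

```python
import numpy as np

def coincidences(data_spikes, model_spikes, delta):
    # number of data spikes with at least one model spike within +/- delta
    model = np.asarray(model_spikes)
    return sum(1 for t in data_spikes if np.any(np.abs(model - t) <= delta))

data = [0.010, 0.050, 0.120]    # experimental spike times (s)
model = [0.011, 0.080, 0.121]   # model spike times (s)
n = coincidences(data, model, delta=0.002)
# n == 2: the spikes near 10 ms and 120 ms coincide, the one at 50 ms does not
```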

The optimization procedure is performed by an optimization algorithm. The optimization toolbox used by modelfitting is implemented in the external Python package Playdoh. It also supports distributed and parallel optimization across CPUs and machines. Different optimization algorithms are supported, the default one is CMAES. All those algorithms require the evaluation of the fitness function for a large number of parameter sets. Each iteration of the algorithm involves the simulation of a large number of neurons (one neuron corresponding to one parameter set) as well as the computation of the gamma factor for each neuron. The quality of the result depends on the number of neurons used, which is specified in the modelfitting() function.

Playdoh supports the use of graphical processing units (GPUs) in order to accelerate the speed of convergence of the algorithm. If multiple cores are detected, the library will use all of them by default. Also, if a CUDA-enabled GPU is present on the system, and if PyCUDA is installed, the library will automatically use the GPU by default. In addition, several computers can be networked over IP, see Clusters.

## Usage example¶

To import the library, use

from brian.library.modelfitting import *


To fit the parameters of a neuron model with respect to some data, use the modelfitting() function

results = modelfitting(model=equations, reset=0, threshold=1,
                       data=spikes,
                       input=input, dt=.1*ms,
                       popsize=1000, maxiter=10,
                       R=[1.0e9, 1.0e10], tau=[1*ms, 50*ms])

print_table(results)


Warning

Windows users should read the section Important note for Windows users.

The model is defined by equations (an Equations object), reset (a scalar value or a set of equations as a string) and threshold (a scalar value or a set of equations as a string).

The target spike trains are defined by data (a list of pairs (neuron index, spike time) or a list of spike times if there is only one target spike train).

The input is specified with input (a vector containing the time-varying signal) and dt (the time step of the signal). The input variable should be I in the equations, although the input variable name can be specified with input_var.

The number of particles per target train used in the optimization algorithm is specified with popsize. The total number of neurons is popsize multiplied by the number of target spike trains. The number of iterations in the algorithm is specified with maxiter.

Each free parameter of the model that shall be fitted is defined by two values

param_name = [min, max]


param_name should correspond to the parameter name in the model equations. min and max specify the initial interval from which the parameter values will be uniformly sampled at the beginning of the optimization algorithm. A boundary interval can also be specified by giving four values

param_name = [bound_min, min, max, bound_max]


The parameter values are forced to stay inside the interval [bound_min, bound_max] throughout the optimization.
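The sampling and clipping behaviour described above can be sketched as follows (a simplified illustration of the idea, not Playdoh's actual implementation; the function name is made up):

```python
from numpy import clip
from numpy.random import uniform

def init_parameter(bound_min, min_val, max_val, bound_max, popsize):
    # initial values are sampled uniformly from [min_val, max_val] ...
    values = uniform(min_val, max_val, popsize)
    # ... and any value produced during the optimization is forced
    # back into [bound_min, bound_max]
    return clip(values, bound_min, bound_max)

# e.g. tau = [0.5*ms, 1*ms, 50*ms, 100*ms], expressed here in seconds
tau = init_parameter(0.5e-3, 1e-3, 50e-3, 100e-3, popsize=1000)
```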

The complete list of arguments can be found in the reference section of the modelfitting() function.

The best parameters and the corresponding best fitness values found by the optimization procedure are returned in the OptimizationResult object results.

## Important note for Windows users¶

The model fitting library uses the Python multiprocessing package to distribute fitting across processors in a single computer or across multiple computers. However, the Windows version of multiprocessing has a limitation, described in the Python multiprocessing documentation: child processes re-import the main module on startup. The end result is that a script like this:

from brian.library.modelfitting import *
...
results = modelfitting(...)


will crash, going into an endless loop and creating hundreds of Python processes that have to be shut down by hand. Instead, you have to do this:

from brian.library.modelfitting import *
...
if __name__=='__main__':
    results = modelfitting(...)


## Clusters¶

The model fitting package can be used with a cluster of computers connected over IP. Every computer must have Brian and Playdoh installed, and they must run the Playdoh server: see the Playdoh documentation. Then, you can launch the modelfitting function with the machines keyword, which is the list of the IP addresses of the machines to use in parallel for the fitting procedure. You must also specify the unit_type keyword, which is CPU or GPU, to indicate whether you want to use CPUs or GPUs on these computers. You can’t mix CPUs and GPUs for the same optimization.

### IP¶

To connect several machines via IP, pass a list of host names or IP addresses as strings to the machines keyword of the modelfitting() function. To specify a specific port, use a tuple (IP, port) instead of a string. You can also specify a default port in the Playdoh user preferences, see the Playdoh documentation.
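For example, plain hostnames and (host, port) tuples can be mixed in the same list (the addresses and ports below are placeholders, and the helper that normalizes them is made up for illustration):

```python
machines = ['bobs-machine.university.com',  # default port
            ('192.168.0.5', 6000)]          # explicit port

def normalize(machines, default_port=2718):
    """Turn a mixed list of hostnames and (host, port) tuples into
    uniform (host, port) pairs, filling in an assumed default port."""
    return [m if isinstance(m, tuple) else (m, default_port)
            for m in machines]

print(normalize(machines))
# [('bobs-machine.university.com', 2718), ('192.168.0.5', 6000)]
```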

### Authentication¶

You can specify an authentication string on all the computers running the Playdoh server to secure communications. See the Playdoh documentation.

### Example¶

The following script launches a fitting procedure in parallel on two machines:

from brian import loadtxt, ms, Equations
from brian.library.modelfitting import *

if __name__ == '__main__':
    # List of machines IP addresses
    machines = ['bobs-machine.university.com',
                'jims-machine.university.com']

    equations = Equations('''
        dV/dt=(R*I-V)/tau : 1
        I : 1
        R : 1
        tau : second
    ''')

    # the input current and the target spike train are assumed to be
    # loaded from files (hypothetical file names)
    input = loadtxt('current.txt')
    spikes = loadtxt('spikes.txt')

    results = modelfitting(model = equations,
                           reset = 0,
                           threshold = 1,
                           data = spikes,
                           input = input,
                           dt = .1*ms,
                           popsize = 1000,
                           maxiter = 3,
                           delta = 4*ms,
                           unit_type = 'CPU',
                           machines = machines,
                           R = [1.0e9, 9.0e9],
                           tau = [10*ms, 40*ms],
                           refractory = [0*ms, 10*ms])
    print_table(results)


The two remote machines must be running the Playdoh server when the script is launched.


# Example: simple_anf (hears)¶

Example of a simple auditory nerve fibre model with Brian hears.

from brian import *
from brian.hears import *

sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)

sound = sound1+sound2
sound = sound.ramp()

cf = erbspace(20*Hz, 20*kHz, 3000)
cochlea = Gammatone(sound, cf)

# Half-wave rectification and compression [x]^(1/3)
ihc = FunctionFilterbank(cochlea, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))

# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)

M = SpikeMonitor(anf)
run(sound.duration)
raster_plot(M)
show()



# Coincidence detection example¶

Fig. 4 from:

Rossant C, Leijon S, Magnusson AK, Brette R (2011). “Sensitivity of noisy neurons to coincident inputs”. Journal of Neuroscience, 31(47).

Two distant or coincident spikes are injected into a noisy balanced leaky integrate-and-fire neuron. The PSTH of the neuron in response to these inputs is calculated along with the extra number of spikes in the two cases. This number is higher for the coincident spikes, showing the sensitivity of a noisy neuron to coincident inputs.

from brian import *
import matplotlib.patches as patches
import matplotlib.path as path

def histo(bins, cc, ax):
    # get the corners of the rectangles for the histogram
    left = array(bins[:-1])
    right = array(bins[1:])
    bottom = zeros(len(left))
    top = bottom + cc

    # we need a (numrects x numsides x 2) numpy array for the path helper
    # function to build a compound path
    XY = array([[left,left,right,right], [bottom,top,top,bottom]]).T

    # get the Path object
    barpath = path.Path.make_compound_path_from_polys(XY)

    # make a patch out of it and attach it to the axes
    patch = patches.PathPatch(barpath, facecolor='blue', edgecolor='gray', alpha=0.8)
    ax.add_patch(patch)

    # update the view limits
    ax.set_xlim(left[0], right[-1])
    ax.set_ylim(bottom.min(), top.max())

# neuron parameters
theta = -55*mV
vmean = -65*mV
taum = 5*ms
taue = 3*ms
taun = 15*ms
sigma = 4*mV

# input times
t1 = 100*ms
t2 = 120*ms

# simulation duration
dur = 200*ms

# number of neurons
N = 10000
bin = 2*ms

# EPSP size
int_EPSP=taue
int_EPSP2=taue*taue/(2*(taum+taue))
max_EPSP=(taum/taue)**(taum/(taue-taum))
we = 3.0*mV/max_EPSP

# model equations
eqs = '''
V=V0+noise : volt
dV0/dt=(-V0+psp)/taum : volt
dpsp/dt=-psp/taue : volt
dnoise/dt=(vmean-noise)/taun+sigma*(2./taun)**.5*xi : volt
'''
threshold = 'V>theta'
reset = vmean

# initialization of the NeuronGroup
reinit_default_clock()
group = NeuronGroup(2*N, model=eqs, reset=reset, threshold=threshold)
group.V0 = group.psp = 0*volt
group.noise = vmean + sigma * randn(2*N)

# input spikes
input_spikes = [(0, t1), (0, t2), (1, t1)]
input = SpikeGeneratorGroup(2, array(input_spikes))

# connections
C = Connection(input, group, 'psp')
C.connect_full(input, group[:N], weight=we)
C.connect_full(input, group[N:], weight=2*we)

# monitors
prM1 = PopulationRateMonitor(group[:N], bin=bin)
prM2 = PopulationRateMonitor(group[N:], bin=bin)

# launch simulation
run(dur)

# PSTH plot
figure(figsize=(10,10))
prMs = [prM1, prM2]
for i in [0, 1]:
    prM = prMs[i]
    r = prM.rate[:-1]*bin
    m = mean(r[:len(r)/2])

    ax = subplot(211+i)
    histo(prM.times, r, ax)
    plot([0, dur], [m, m], '--r')
    title("%.2f extra spikes" % sum(r[t1/bin:(t2+20*ms)/bin]-m))
    xlim(.05, .2)
    ylim(0, .125)

show()