How to write efficient Brian code
There are a few keys to writing fast and efficient Brian code. The first is to use Brian itself efficiently. The second is to write good vectorised code, which means using Python and NumPy efficiently. For more performance tips, see also Compiled code.
You can switch off Brian's entire unit checking module by including the line import brian_no_units before importing Brian itself. Good practice is to leave unit checking on most of the time while developing and debugging a model, and to switch it off for long runs once the basic model is stable.
Another way to speed up code is to store references to arrays rather
than extracting them from Brian objects each time you need them. For
example, if you know the custom reset object in the code above is
only ever applied to a group, custom_group say, then you could
do something like this:
```python
def myreset(P, spikes):
    custom_group_V_[spikes] = 0*mV
    custom_group_Vt_[spikes] = 2*mV

custom_group = ...
custom_group_V_ = custom_group.V_
custom_group_Vt_ = custom_group.Vt_
```
In this case, the speed increase will be quite small, and probably
not worth doing because it makes the code less readable, but in more
complicated examples where you repeatedly refer to these arrays
it could add up.
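As a self-contained sketch of this pattern (using a hypothetical Group class as a stand-in for a Brian NeuronGroup — only the attribute access matters here):

```python
import numpy as np

class Group(object):
    """Hypothetical stand-in for a Brian NeuronGroup."""
    def __init__(self, n):
        self.V_ = np.zeros(n)
        self.Vt_ = np.ones(n)

custom_group = Group(1000)

# Cache the underlying arrays once, outside the reset function...
custom_group_V_ = custom_group.V_
custom_group_Vt_ = custom_group.Vt_

def myreset(P, spikes):
    # ...so each call avoids two attribute lookups on the group object.
    # The cached names refer to the same arrays, so the group sees the update.
    custom_group_V_[spikes] = 0.0
    custom_group_Vt_[spikes] = 2.0

myreset(custom_group, np.array([1, 5, 7]))
```

Because the cached names and the group attributes refer to the same underlying arrays, updating one is the same as updating the other.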
Python is a fast language, but each line of Python code has an associated overhead. Sometimes you can get considerable increases in speed by writing a vectorised version of your code. A good general guide to this is the Performance Python page. Here we will work through a single example in Brian.
Suppose you wanted to multiplicatively depress the connection strengths by some amount every time step. You might do something like this:
```python
C = Connection(G1, G2, 'V', structure='dense')
...
@network_operation(when='end')
def depress_C():
    for i in range(len(G1)):
        for j in range(len(G2)):
            C[i,j] = C[i,j]*depression_factor
```
This will work, but it will be very, very slow.
The first thing to note is that the Python expression range(N)
actually constructs a list [0,1,2,...,N-1] each time it is called,
which is not really necessary if you are only iterating over the list.
Instead, use the xrange iterator, which doesn't construct the list:
```python
for i in xrange(len(G1)):
    for j in xrange(len(G2)):
        C[i,j] = C[i,j]*depression_factor
```
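For readers on Python 3, note that xrange was renamed to range, so the built-in range is already lazy there. A small self-contained sketch of the difference in allocation (runnable under Python 3):

```python
import sys

N = 100000
eager = list(range(N))  # what Python 2's range(N) effectively constructed
lazy = range(N)         # lazy: a constant-size object, values produced on demand

assert sys.getsizeof(lazy) < sys.getsizeof(eager)
assert list(lazy) == eager  # the same values are produced either way
```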
The next thing to note is that when you call C[i,j] you are doing an
operation on the
Connection object, not directly on the underlying
matrix. Instead, do something like this:
```python
C = Connection(G1, G2, 'V', structure='dense')
C_matrix = asarray(C.W)
...
@network_operation(when='end')
def depress_C():
    for i in xrange(len(G1)):
        for j in xrange(len(G2)):
            C_matrix[i,j] *= depression_factor
```
What’s going on here? First of all,
C.W refers to the ConnectionMatrix
object, which is a 2D NumPy array with some extra stuff - we don’t need the extra
stuff, so we convert it to a straight NumPy array with
asarray(C.W). We also store
a reference to this as the variable
C_matrix so we don’t need to do the conversion every
time step. The other thing we do is to use the
*= operator instead of the full expression
C_matrix[i,j] = C_matrix[i,j]*depression_factor, which is slightly faster
because it updates the array in place.
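The point about asarray is that it strips the subclass without copying the data, so in-place updates through C_matrix are still seen by the connection object. A minimal sketch, using a hypothetical ndarray subclass in place of Brian's ConnectionMatrix:

```python
import numpy as np

class FakeConnectionMatrix(np.ndarray):
    """Hypothetical stand-in for Brian's ConnectionMatrix."""
    pass

W = np.ones((3, 3)).view(FakeConnectionMatrix)
C_matrix = np.asarray(W)   # plain ndarray sharing W's data buffer, no copy

C_matrix *= 0.5            # in-place update through the plain view...
assert np.all(np.asarray(W) == 0.5)   # ...is visible via the original object
```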
The most important step of all, though, is to vectorise the entire operation. You
don’t need to loop over i and
j at all; you can manipulate the C_matrix array
with a single NumPy expression:
```python
C = Connection(G1, G2, 'V', structure='dense')
C_matrix = asarray(C.W)
...
@network_operation(when='end')
def depress_C():
    C_matrix *= depression_factor
```
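As a quick sanity check (plain NumPy, no Brian needed), the single vectorised expression produces exactly the same result as the double loop:

```python
import numpy as np

depression_factor = 0.9
rng = np.random.default_rng(0)
C_loop = rng.random((4, 5))
C_vec = C_loop.copy()

# the explicit double loop from above
for i in range(C_loop.shape[0]):
    for j in range(C_loop.shape[1]):
        C_loop[i, j] = C_loop[i, j] * depression_factor

# the single vectorised NumPy expression
C_vec *= depression_factor

assert np.allclose(C_loop, C_vec)
```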
This final version will probably be hundreds if not thousands of times faster than the original. It’s usually possible to work out a way to do whatever you want in a vectorised fashion using only NumPy expressions, but in some very rare instances a loop might be necessary. In that case, if the loop is slowing your code down, you might want to try writing it in inline C++ using the SciPy Weave package. See the documentation at that link for more details, but as an example we could rewrite the code above using inline C++ as follows:
```python
from scipy import weave
...
C = Connection(G1, G2, 'V', structure='dense')
C_matrix = asarray(C.W)
...
@network_operation(when='end')
def depress_C():
    n = len(G1)
    m = len(G2)
    code = '''
           for(int i=0;i<n;i++)
               for(int j=0;j<m;j++)
                   C_matrix(i,j) *= depression_factor;
           '''
    weave.inline(code,
                 ['C_matrix', 'n', 'm', 'depression_factor'],
                 type_converters=weave.converters.blitz,
                 compiler='gcc',
                 extra_compile_args=['-O3'])
```
The first time you run this it will be slower because it compiles the
C++ code and stores a copy, but subsequent runs will be much faster as
it just loads the saved copy. The way it works is that Weave converts
the listed Python and NumPy variables (C_matrix, n, m and
depression_factor) into C++ compatible data types: n and
m are turned into ints, depression_factor is turned into a
double, and C_matrix is turned into a Weave
Array class. The only thing you need to know about this is that
elements of a Weave array are referenced with parentheses rather than
brackets, i.e. C_matrix(i,j) rather than C_matrix[i,j]. In
this example, I have used the gcc compiler and added the optimisation
flag -O3 for maximum optimisations. Again, in this case it’s much
simpler to just use the
C_matrix *= depression_factor NumPy expression,
but in some cases using inline C++ might be necessary, and as you can see
above, it’s very easy to do this with Weave, and the C++ code for a
snippet like this is often almost as simple as the Python code would be.