Alexandre Gravier

Week 6: Networks of neurons

Modeling synapses

Modifying the HH model

We want to model the effect of a chemical synapse on the membrane potential.

Q: How do you think we can form a computational model of the effects of inputs on the neuron’s electrical potential V?

In the RC circuit model of the membrane, with an input current per unit area $i_e$ and total input current $I_e = i_e A$, specific membrane resistance $r_m$, specific membrane capacitance $c_m$, total membrane resistance $R_m = r_m / A$, total membrane capacitance $C_m = c_m A$, membrane voltage $V$, voltage at rest $E_L$, and membrane time constant $\tau_m = R_m C_m$, we have

$$C_m \frac{dV}{dt} = -\frac{V - E_L}{R_m} + I_e$$

equivalent to

$$\tau_m \frac{dV}{dt} = -(V - E_L) + I_e R_m$$

Q: What is the effect of τm, the “membrane time constant”, on how fast the cell’s voltage changes in response to an input?

  • As τm increases, it takes longer for the cell to reach steady state when an input is turned on, and longer to decrease to equilibrium when it is turned off.
  • As τm increases, it takes longer for the cell to reach steady state when an input is turned on, but falls back to equilibrium state more quickly.
  • As τm decreases, it takes longer for the cell to reach steady state when an input is turned on, and longer to decrease to equilibrium when it is turned off.
  • As τm decreases, it takes longer for the cell to reach steady state when an input is turned on, but falls back to equilibrium state more quickly.

Explanation: If you divide both sides of the membrane potential equation by τm, you can see that if the time constant increases, the terms on the right hand side governing the rate of change of voltage decrease.
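A minimal Euler-integration sketch of the step response for a fast and a slow membrane. The constants ($E_L$, the drive $R_m I_e$, the two $\tau_m$ values) are illustrative choices, not values from the lecture:

```python
import numpy as np

def membrane_step_response(tau_m, t_on=30.0, t_total=100.0, dt=0.1,
                           E_L=-70.0, RI=10.0):
    """Euler-integrate tau_m * dV/dt = -(V - E_L) + Rm*Ie,
    with the drive Rm*Ie = RI on for t < t_on and off afterwards."""
    steps = int(t_total / dt)
    V = np.empty(steps)
    V[0] = E_L
    for k in range(1, steps):
        drive = RI if k * dt < t_on else 0.0
        V[k] = V[k - 1] + dt * (-(V[k - 1] - E_L) + drive) / tau_m
    return V

for tau_m in (10.0, 40.0):  # ms: a fast and a slow membrane
    V = membrane_step_response(tau_m)
    # sample at the end of the input (t = 30 ms) and 20 ms after it stops:
    # the larger tau_m is both slower to rise and slower to decay,
    # consistent with the first option above
    print(f"tau_m={tau_m:4.0f} ms: V(30 ms)={V[299]:.2f} mV, "
          f"V(50 ms)={V[500]:.2f} mV")
```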

Q: One more question before we proceed… Going back to the mathematical definition of τm, what then can we say is the relationship between the cell’s surface area and how fast the cell reacts to an input?

  • As surface area increases, the cell reaches steady state or equilibrium state more quickly.
  • As surface area increases, the cell reaches steady state or equilibrium state more slowly.
  • As surface area increases, the cell reaches steady state more quickly, but takes longer to fall back to equilibrium.
  • The cell’s surface area does not affect how fast it reacts.

Explanation: $\tau_m = R_m C_m = (r_m / A)(c_m A) = r_m c_m$, so the surface area does not affect the time constant (which determines how fast the cell reacts)!

In the passive HH model ($c_m \frac{dV}{dt} = -i_m + \frac{I_e}{A}$), the passive membrane channels are modeled by the leak current $i_m = \bar{g}_L (V - E_L)$.

Similarly, an active (synaptic) ion channel $s$ with equilibrium potential $E_s$ is modeled by a current term $g_s (V - E_s)$, where the synaptic conductance $g_s$ is a function of the inputs received by the synapse:

$$g_s = \bar{g}_s \, P_\text{rel} \, P_s$$

where $\bar{g}_s$ is the maximum conductance, $P_\text{rel}$ is the probability that transmitter is released given that an input spike happens, and $P_s$ is the probability that post-synaptic channels are open given that neurotransmitter has been released into the synaptic cleft (the fraction of channels open).

Assuming $P_\text{rel} = 1$, the effect of a spike on $P_s$ is modeled by the following kinetic model:

$$\frac{dP_s}{dt} = \alpha_s (1 - P_s) - \beta_s P_s$$

In English: the rate of change of the fraction of channels open equals the opening rate times the fraction of channels closed, minus the closing rate times the fraction of channels open.

The effect of one spike on synaptic conductance

Q: What is the value of Ps at equilibrium? That is, what is the value of Ps such that dPs/dt is 0?

  • αs/(βs−αs)
  • αs/(αs+βs)
  • αs(1−Ps)/βs
  • None of these
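Explanation: setting $\frac{dP_s}{dt} = 0$ in the kinetic model and solving for $P_s$ gives the second option:

$$0 = \alpha_s (1 - P_s) - \beta_s P_s \;\Rightarrow\; \alpha_s = (\alpha_s + \beta_s)\, P_s \;\Rightarrow\; P_s = \frac{\alpha_s}{\alpha_s + \beta_s}$$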

In recorded data, $P_s$ (normalized to have a maximum value of 1 by dividing it by its recorded maximum $P_\text{max}$) as a function of time after a spike is reasonably modeled by:

  • an exponential function for GABA(A) and AMPA synapses: $P_s(t) = e^{-t/\tau_s}$,
  • an alpha function for NMDA synapses: $P_s(t) = \frac{t}{\tau_s}\, e^{1 - t/\tau_s}$ (both profiles are sketched below)
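A small numerical sketch of the two normalized profiles, with an arbitrary illustrative $\tau_s$:

```python
import numpy as np

tau_s = 5.0                      # ms, illustrative time constant
t = np.arange(0.0, 30.0, 0.1)    # time after the spike, ms

P_exp = np.exp(-t / tau_s)                     # GABA(A)/AMPA-like decay
P_alpha = (t / tau_s) * np.exp(1 - t / tau_s)  # NMDA-like alpha function

# The exponential peaks immediately; the alpha function rises first
# and peaks at t = tau_s with value 1.
print("exponential peak: t =", t[np.argmax(P_exp)], "ms, value", P_exp.max())
print("alpha peak:       t =", t[np.argmax(P_alpha)], "ms, value",
      round(P_alpha.max(), 4))
```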

The effect of several spikes on synaptic conductance: the linear filter model of a synapse

Modeling the input spike train to synapse $b$ as a sum of Dirac delta functions:

$$\rho_b(t) = \sum_i \delta(t - t_i)$$

We can apply the expression of $P_s$ as a filter (kernel) $K$ to the input spike train. For instance, for an AMPA synapse with filter $K(t) = e^{-t/\tau_s}$, the synaptic conductance changes as:

$$g_b(t) = \bar{g}_b \sum_{t_i < t} K(t - t_i)$$

or, in the continuous case:

$$g_b(t) = \bar{g}_b \int_{-\infty}^{t} K(t - \tau)\, \rho_b(\tau)\, d\tau$$

Graphically, it looks like we are stacking the graph of K each time a spike arrives.
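A sketch of that stacking with an exponential kernel, using made-up spike times and constants:

```python
import numpy as np

dt, T = 0.1, 100.0               # ms
t = np.arange(0.0, T, dt)
tau_s, g_max = 5.0, 1.0          # illustrative kernel constants

# Input spike train rho(t) as a sum of (discretized) Dirac deltas
spike_times = [10.0, 30.0, 34.0, 38.0, 70.0]
rho = np.zeros_like(t)
rho[np.round(np.array(spike_times) / dt).astype(int)] = 1.0 / dt

# Causal kernel K(t) = exp(-t/tau_s); the discrete convolution approximates
# g(t) = g_max * integral of K(t - tau) * rho(tau) dtau
K = np.exp(-t / tau_s)
g = g_max * np.convolve(rho, K)[: len(t)] * dt

# Each spike stacks another copy of K onto the running conductance:
# g is highest just after the 30/34/38 ms burst, and decays afterwards
for probe in (11.0, 40.0, 60.0):
    print(f"g({probe:4.1f} ms) = {g[int(round(probe / dt))]:.3f}")
```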

Synapses in action in a minimal network model of two IaF neurons

Each neuron is connected to the other via a synapse $s$, and receives a constant input current $I_e$. Each neuron's equation is:

$$\tau_m \frac{dV}{dt} = -(V - E_L) - r_m \bar{g}_s P_s (V - E_s) + R_m I_e$$

And synapses are modeled by alpha functions:

$$P_s(t) = \frac{P_\text{max}\, t}{\tau_s}\, e^{1 - t/\tau_s}$$

with the membrane and synaptic constants fixed; only the synaptic reversal potential $E_s$ is varied.

When $E_s = 0$ mV (excitatory synapses), the neurons fire in alternation.

When $E_s = -80$ mV (inhibitory synapses), the neurons fire in synchrony.
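A rough simulation sketch of this two-neuron loop. All constants here (thresholds, drive, synaptic strength) are illustrative guesses rather than the lecture's values, and each synapse only tracks its partner's most recent spike, so the alternation/synchrony outcome should be read qualitatively from the printed spike times:

```python
import numpy as np

def run_pair(E_s, T=500.0, dt=0.1):
    """Two IaF neurons, each driving the other through an alpha synapse:
    tau_m dV/dt = -(V - E_L) - rm_gs * P_s * (V - E_s) + Rm*Ie,
    where each neuron's P_s is triggered by its partner's last spike."""
    tau_m, E_L, V_th, V_reset = 20.0, -70.0, -54.0, -80.0   # ms, mV
    RI, rm_gs, tau_s = 18.0, 0.15, 10.0                     # mV, -, ms
    V = np.array([-65.0, -70.0])           # slightly different start
    last_spike = np.array([-1e4, -1e4])    # "long ago"
    spikes = ([], [])
    for k in range(int(T / dt)):
        t = k * dt
        s = t - last_spike
        P = (s / tau_s) * np.exp(1.0 - s / tau_s)  # alpha function
        P_in = P[::-1]                # each cell sees its partner's synapse
        V += dt * (-(V - E_L) - rm_gs * P_in * (V - E_s) + RI) / tau_m
        for i in (0, 1):
            if V[i] >= V_th:
                V[i], last_spike[i] = V_reset, t
                spikes[i].append(t)
    return spikes

for E_s, label in ((0.0, "excitatory"), (-80.0, "inhibitory")):
    s0, s1 = run_pair(E_s)
    print(f"{label}: neuron 0 spikes at {[round(x) for x in s0[-3:]]} ms, "
          f"neuron 1 at {[round(x) for x in s1[-3:]]} ms")
```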

From spiking networks to rate-coded networks

Simulations with spiking neurons can reveal synchrony and correlation, and spike timing effects. However, the computational costs are high.

Simulating neurons with firing-rate outputs instead lets us scale to larger networks. However, any phenomenon related to spike timing is lost.

The linear filter model of synapses, with an input spike train $\rho_b(t)$ linearly filtered by $K$, gives a continuous real-valued synaptic conductance $g_b(t)$.

For multiple synapses with individual weights $w_b$ and spike trains $\rho_b(t)$, the total synaptic current of the receiving cell is the sum of the individual currents:

$$I_s(t) = \sum_b w_b \int_{-\infty}^{t} K(t - \tau)\, \rho_b(\tau)\, d\tau$$

By approximating each spike train $\rho_b(t)$ with the instantaneous firing rate $u_b(t)$, we get a new expression of the input current as a function of the firing rates of the input neurons:

$$I_s(t) = \sum_b w_b \int_{-\infty}^{t} K(t - \tau)\, u_b(\tau)\, d\tau$$

Problems:

  • synchrony of spike trains
  • correlations between spike trains

In most cases, these effects do not change the results enough to matter, so we ignore them.

Differentiating the input current equation to arrive at the kinetic expression of the firing-rate-based network model

With an exponential synaptic filter $K(t) = \frac{1}{\tau_s} e^{-t/\tau_s}$, differentiating the input current equation with respect to time gives

$$\tau_s \frac{dI_s}{dt} = -I_s + \sum_b w_b u_b = -I_s + \mathbf{w} \cdot \mathbf{u}$$

This gives the following expression of the network dynamics (input current change, in vector form, and output firing rate change):

$$\tau_s \frac{dI_s}{dt} = -I_s + \mathbf{w} \cdot \mathbf{u}, \qquad \tau_r \frac{dv}{dt} = -v + F(I_s(t))$$
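A sketch of Euler-integrating these two coupled equations; the rectification used for $F$, the weights, and the input are illustrative choices:

```python
import numpy as np

tau_s, tau_r, dt = 10.0, 10.0, 0.1          # ms
w = np.array([1.0, 0.5, -0.5])              # synaptic weights (illustrative)
F = lambda x: np.maximum(x, 0.0)            # rectification as the activation

I_s, v = 0.0, 0.0
for k in range(int(200.0 / dt)):
    t = k * dt
    # three input firing rates, switched off at t = 100 ms
    u = np.array([20.0, 10.0, 5.0]) if t < 100.0 else np.zeros(3)
    I_s += dt * (-I_s + w @ u) / tau_s      # synaptic current dynamics
    v   += dt * (-v + F(I_s)) / tau_r       # output firing-rate dynamics
    if k % 500 == 0:                        # print every 50 ms
        print(f"t={t:6.1f} ms  I_s={I_s:6.2f}  v={v:6.2f}")
```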

General form of the firing rate based network model

The function $F$ is an ad-hoc transformation from current to firing rate (the activation function).

The firing rate based network model, neglecting synapse dynamics

If $\tau_s \ll \tau_r$, the synaptic input converges quickly, so $I_s \approx \mathbf{w} \cdot \mathbf{u}$, and the network is entirely determined by

$$\tau_r \frac{dv}{dt} = -v + F(\mathbf{w} \cdot \mathbf{u})$$

The firing rate based network model, neglecting output dynamics

If $\tau_r \ll \tau_s$, the output dynamics is fast compared to the synaptic current dynamics, so $v \approx F(I_s(t))$, and the network is determined by:

$$\tau_s \frac{dI_s}{dt} = -I_s + \mathbf{w} \cdot \mathbf{u}, \qquad v = F(I_s(t))$$

The firing rate based network model with static input (typical ANN case)

If the input is static, or approximately so for a long period of time, we can look at the steady state: $\frac{dv}{dt} = 0$, giving

$$v_\infty = F(\mathbf{w} \cdot \mathbf{u})$$

That's how unit outputs are computed in ANNs. $F$ is often a sigmoidal threshold function.

Multiple output neurons in a feedforward network

We assume that the synapses are sufficiently fast to neglect their dynamics, and use $\tau_r \frac{dv}{dt} = -v + F(\mathbf{w} \cdot \mathbf{u})$ as the equation for a single output.

For multiple output neurons, with $W$ the matrix of all weight vectors (so that $W_{ab}$ is the synaptic weight from input unit $b$ to output unit $a$) and $\mathbf{v}$ the vector of the combined outputs of all units, the network equation is:

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + F(W\mathbf{u})$$

Q: We have officially moved to a higher level of abstraction! When we talked about biophysics last week, we looked at some detailed models of individual neurons. Now we have abstracted away some of those detailed dynamics as we move towards modelling whole networks of neurons. This is a common investigative strategy in the sciences. Why don’t we keep all the low-level details when we build large scale models?

  • The math may become a lot harder or complicated, making progress difficult.
  • Computational resources limit our ability to fully implement all the low-level details when building a larger system.
  • Ease of use - we do not necessarily need all of the low-level details in order to explore the dynamics of higher-level systems such as whole networks.
  • All of these

Recurrent networks

In a more general formulation where output unit $a'$ can be connected to output unit $a$ with weight $M_{aa'}$,

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + F(W\mathbf{u} + M\mathbf{v})$$

In English: the rate of change of the output is a function of the weighted input from the previous layer ($W\mathbf{u}$) and the weighted intra-layer feedback ($M\mathbf{v}$), minus the decay ($-\mathbf{v}$).

In the special case of feedforward networks, $M$ is the null matrix.

Q: Consider a simple recurrent network with 2 output neurons. Maybe one of them represents the ‘fight’ impulse, and the other one the ‘flight’ impulse. The input neurons represent various inputs from the environment. Given the following recurrent weight matrix M, what can we say about the relationship between the fight and flight responses in this particular animal?

M=[0 −0.5; −0.5 0]

  • They tend to balance each other - if one is likely, the other is going to be likely too, leading to “cognitive dissonance.”
  • They are disconnected responses from each other - they do not really affect each other; this more resembles a feedforward network.
  • They tend to inhibit each other - as one becomes the more likely response, it suppresses the likelihood of the other.
  • None of the above.

Explanation: Suppose that v contains outputs for the fight impulse in the first element and flight in the second element. When we multiply M by v, we see that higher outputs of the fight response will lead to lower outputs of the flight response, and vice versa. Note: this is not a statement about fight and flight responses in nature in general!
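A quick sketch of the quiz's $M$ in a linear recurrent network (the input $\mathbf{h}$ and the time constant are made up); with a stronger "fight" drive, the "flight" output is suppressed all the way to zero:

```python
import numpy as np

M = np.array([[0.0, -0.5],
              [-0.5, 0.0]])       # recurrent weights from the quiz
h = np.array([2.0, 1.0])          # feedforward drive: 'fight' input stronger
tau, dt = 10.0, 0.1

v = np.zeros(2)
for _ in range(int(500 / dt)):    # run to (near) steady state
    v += dt * (-v + h + M @ v) / tau

print("steady state [fight, flight]:", np.round(v, 3))
# Analytic steady state of tau dv/dt = -v + h + Mv: v = (I - M)^-1 h
print("analytic:", np.round(np.linalg.solve(np.eye(2) - M, h), 3))
```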

Linear feedforward network (performing numerical differentiation)

Given the network model

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + W\mathbf{u},$$

the weight matrix $W$ and the static input vector $\mathbf{u}$ shown in lecture:

Q: What is $W\mathbf{u}$?

Q: What is this network doing?

  • Calculating some sort of derivative or difference, like looking for edges in a picture.
  • Looking for sequences of repeats or similarity, like finding spots of homogenous color in a picture.
  • Suppressing inputs - similar to turning the volume down or darkening an image.
  • None of the above.

Indeed, considering the weight matrix as a set of filters, each of its rows calculates the difference between two adjacent input units.

The parallel should be made with the V1 oriented receptive fields (+|−), which perform first-order differentiation: the derivative of a single-valued real function of a real scalar, $\frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$, is approximated in the discrete case by the difference $f(x+1) - f(x)$, i.e. by coefficients $(-1, +1)$. In our case, $W$ performs that exact operation on $\mathbf{u}$.

It should also be noted that the V1 center-surround receptive fields (+|−|+) perform second-order differentiation: $\frac{d^2 f}{dx^2}$ is approximated in the discrete case by $f(x+1) - 2f(x) + f(x-1)$, i.e. by rows of coefficients $(+1, -2, +1)$ in the weight matrix $W$.
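A sketch of both difference filters applied to a step-edge input; these matrices only illustrate the $(-1, +1)$ and $(+1, -2, +1)$ patterns, not the lecture's exact $W$ and $\mathbf{u}$:

```python
import numpy as np

u = np.array([0.0, 0.0, 10.0, 10.0, 10.0, 0.0])   # an 'edge' input

# First-order difference: each row holds (-1, +1) over adjacent inputs
W1 = np.zeros((len(u) - 1, len(u)))
for i in range(len(u) - 1):
    W1[i, i], W1[i, i + 1] = -1.0, 1.0

# Second-order difference: each row holds (+1, -2, +1)
W2 = np.zeros((len(u) - 2, len(u)))
for i in range(len(u) - 2):
    W2[i, i : i + 3] = [1.0, -2.0, 1.0]

print("u      :", u)
print("W1 @ u :", W1 @ u)   # nonzero exactly at the edges
print("W2 @ u :", W2 @ u)   # sign changes flag the edge locations
```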

Recurrent networks

Linear recurrent network

Let $\mathbf{h} = W\mathbf{u}$ be the weighted feedforward input vector, so that

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{h} + M\mathbf{v}$$

How does $M$ affect $\mathbf{v}$?

Using eigenvectors to solve the network equation

If $M$ is symmetric, it has $N$ orthogonal eigenvectors $\mathbf{e}_i$ and eigenvalues $\lambda_i$ which satisfy

$$M \mathbf{e}_i = \lambda_i \mathbf{e}_i$$

As they are orthogonal, $\mathbf{e}_i \cdot \mathbf{e}_j = 0$ for $i \neq j$.

Normalizing the eigenvectors, they are now orthonormal, $\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$, and all vectors can be easily expressed in the eigenbasis as a linear combination of the eigenvectors. For the output firing-rate vector,

$$\mathbf{v}(t) = \sum_{i=1}^{N} c_i(t)\, \mathbf{e}_i$$

We substitute it in $\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{h} + M\mathbf{v}$, and replace $M\mathbf{e}_i$ by $\lambda_i \mathbf{e}_i$:

$$\tau_r \sum_i \frac{dc_i}{dt}\, \mathbf{e}_i = -\sum_i (1 - \lambda_i)\, c_i\, \mathbf{e}_i + \mathbf{h}$$

The sums disappear if one takes the dot product of each side with any arbitrary $\mathbf{e}_j$, because the $\mathbf{e}_i$ are orthonormal:

$$\tau_r \frac{dc_j}{dt} = -(1 - \lambda_j)\, c_j + \mathbf{h} \cdot \mathbf{e}_j$$

Solving the above gives:

$$c_j(t) = \frac{\mathbf{h} \cdot \mathbf{e}_j}{1 - \lambda_j} \left( 1 - e^{-\frac{t (1 - \lambda_j)}{\tau_r}} \right) + c_j(0)\, e^{-\frac{t (1 - \lambda_j)}{\tau_r}}$$

With that, we can get the expression of $\mathbf{v}(t) = \sum_j c_j(t)\, \mathbf{e}_j$.

Using eigenvalues to determine network stability

If any $\lambda_j > 1$, the exponential terms in $c_j(t)$ grow indefinitely with time, so the network is unstable.

If all $\lambda_j < 1$, the network is stable, and it converges to the steady state

$$\mathbf{v}_\infty = \sum_j \frac{\mathbf{h} \cdot \mathbf{e}_j}{1 - \lambda_j}\, \mathbf{e}_j$$
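A numerical check of this eigenvector solution, on an arbitrary symmetric $M$ rescaled so that all eigenvalues stay below 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
M = (A + A.T) / 2.0                              # arbitrary symmetric matrix
M *= 0.9 / np.abs(np.linalg.eigvalsh(M)).max()   # largest |eigenvalue| -> 0.9
h = rng.normal(size=4)                           # arbitrary input h = W u

lam, E = np.linalg.eigh(M)          # columns of E: orthonormal eigenvectors

# v_infinity = sum_j (h . e_j) / (1 - lambda_j) * e_j
v_inf = sum((h @ E[:, j]) / (1.0 - lam[j]) * E[:, j] for j in range(4))

# Cross-check against the direct fixed point of tau dv/dt = -v + h + Mv,
# i.e. (I - M) v = h
print(np.allclose(v_inf, np.linalg.solve(np.eye(4) - M, h)))   # True
```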

Q: We have used the term “steady state” several times now over the past few weeks. What do we mean by “steady state value” here?

  • The value of v, given our weight matrices W and M, such that it changes at a constant (steady) rate over time.
  • The value of v, given our weight matrices W and M, such that it changes as the exponential of a constant rate over time - it follows a perfect exponential curve.
  • The value of v, given our weight matrices W and M, such that v does not change further over time.
  • None of these

Amplification of inputs in a recurrent network

If all $\lambda_i \approx 0$ except $\lambda_1$, which is close to (but less than) 1, then

$$\mathbf{v}_\infty \approx \frac{\mathbf{h} \cdot \mathbf{e}_1}{1 - \lambda_1}\, \mathbf{e}_1$$

And the network is amplifying the projection of the input on eigenvector $\mathbf{e}_1$ by a factor of $\frac{1}{1 - \lambda_1}$.

Example: the angles network.

The 5 output units of a linear network are labeled −180, −90, 0, 90, 180, for the angles (in degrees) that they represent.

The matrix $M$ is defined by the cosine of the relative angle between the units' labels: $M_{aa'} \propto \cos(\theta_a - \theta_{a'})$.

Q: Do you think this matrix M is symmetric?

  • Yes (the cosine is an even function, so $M_{aa'} = M_{a'a}$)

The connectivity matrix is such that nearby units excite each other and remote ones inhibit each other, a pattern often encountered in the brain.

With that network, if all eigenvalues are 0 except $\lambda_1$ (close to 1), we observe the amplification of the input around each neuron's preferred angle.
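A sketch of the cosine connectivity. The scaling of $M$ is chosen here just to push $\lambda_1$ to 0.9 (a 10x amplification factor), which may not match the lecture's value, and with only 5 units a second nonzero (sine-mode) eigenvalue also appears:

```python
import numpy as np

angles = np.deg2rad([-180.0, -90.0, 0.0, 90.0, 180.0])
M = np.cos(angles[:, None] - angles[None, :])
M *= 0.9 / np.linalg.eigvalsh(M).max()    # rescale so lambda_1 = 0.9

print("eigenvalues:", np.round(np.linalg.eigvalsh(M), 3))

# Noisy input roughly tuned to 0 degrees
h = np.cos(angles) + np.array([0.3, -0.2, 0.1, 0.2, -0.3])
v_inf = np.linalg.solve(np.eye(5) - M, h)
print("h    :", np.round(h, 2))
print("v_inf:", np.round(v_inf, 2))   # cosine pattern amplified ~10x
```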

Network memory with an eigenvalue of one (performing numerical integration)

If $\lambda_1 = 1$ and all other $\lambda_i = 0$, then:

$$\tau_r \frac{dc_1}{dt} = \mathbf{h} \cdot \mathbf{e}_1$$

Solving for $c_1$,

$$c_1(t) = c_1(0) + \frac{1}{\tau_r} \int_0^t \mathbf{h}(\tau) \cdot \mathbf{e}_1 \, d\tau$$

in $\mathbf{v}(t) = \sum_i c_i(t)\, \mathbf{e}_i$, assuming $\mathbf{v}(0) = \mathbf{0}$,

$$\mathbf{v}(t) = \left( \frac{1}{\tau_r} \int_0^t \mathbf{h}(\tau) \cdot \mathbf{e}_1 \, d\tau \right) \mathbf{e}_1$$

indicating that the firing rate depends on the integral of the past input, even if the current input is 0.

This type of integrator neuron is found in the medial vestibular nucleus, maintaining a memory of eye position by integrating bursts from on-direction and off-direction movement neurons.
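A sketch of such an integrator, built from a rank-1 $M$ with $\lambda_1$ exactly 1 (all values illustrative):

```python
import numpy as np

e1 = np.ones(3) / np.sqrt(3.0)    # eigenvector with eigenvalue exactly 1
M = np.outer(e1, e1)              # lambda_1 = 1, all other eigenvalues 0
tau, dt = 10.0, 0.1               # ms

v = np.zeros(3)
for k in range(int(300 / dt)):
    t = k * dt
    # brief input pulse along e1 between 50 and 60 ms, silence otherwise
    h = 5.0 * e1 if 50.0 <= t < 60.0 else np.zeros(3)
    v += dt * (-v + h + M @ v) / tau
    if k % 1000 == 0:   # print every 100 ms
        # |v| holds its integrated value long after the input has ended
        print(f"t={t:5.1f} ms  |v|={np.linalg.norm(v):.3f}")
```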

Nonlinear recurrent networks

We now apply a nonlinear function $F$ to the input and recurrent feedback:

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + F(\mathbf{h} + M\mathbf{v})$$

Let's pick the rectification nonlinearity for $F$:

$$F(x) = [x]_+ = \max(0, x)$$

It guarantees that the firing rate remains positive.

Stability even with large $\lambda_1$

If we again take our example of the angles network with the cosines in $M$ and all eigenvalues 0 except $\lambda_1$, but this time with $\lambda_1 > 1$, and we use the rectification $F$ as above, we observe the amplification of the firing rate around the preferred value, and the network remains stable thanks to $F$.

Winner-takes-all

The same type of network can select one peak in the input over another thanks to lateral inhibition.

Gain modulation

An additive increase in the input value is multiplicatively amplified in the output.

Memory

Like in the linear case, network memory (integration of past input) can take place, in combination with the gain modulation, etc…

Non-symmetric recurrent networks

If there are excitatory and inhibitory neurons, connections can't be symmetric.

For excitatory neurons:

$$\tau_E \frac{dv_E}{dt} = -v_E + F(h_E + M_{EE} v_E + M_{EI} v_I)$$

For inhibitory neurons:

$$\tau_I \frac{dv_I}{dt} = -v_I + F(h_I + M_{IE} v_E + M_{II} v_I)$$

Linear stability analysis

The idea is to look at the stability of the network near fixed points (where $\frac{dv_E}{dt} = 0$ and $\frac{dv_I}{dt} = 0$).

We take the derivatives of the kinetic expressions w.r.t. $v_E$ and $v_I$. The resulting Jacobian matrix is a stability matrix:

$$J = \begin{pmatrix} \dfrac{\partial \dot{v}_E}{\partial v_E} & \dfrac{\partial \dot{v}_E}{\partial v_I} \\[6pt] \dfrac{\partial \dot{v}_I}{\partial v_E} & \dfrac{\partial \dot{v}_I}{\partial v_I} \end{pmatrix}$$

The two eigenvalues obtained by solving $\det(J - \lambda I) = 0$ determine the dynamics of the network near the fixed point. The solutions can have imaginary components: the imaginary parts determine the oscillation frequency of the corresponding neurons, and the real parts determine the stability of the fixed point, which is stable if they are negative.

Example: 1 inhibitory and 1 excitatory neuron:

The excitatory neuron, with $\tau_E$ fixed:

$$\tau_E \frac{dv_E}{dt} = -v_E + [M_{EE} v_E + M_{EI} v_I - \gamma_E]_+$$

For the inhibitory neuron, $\tau_I$ is a parameter that we will vary:

$$\tau_I \frac{dv_I}{dt} = -v_I + [M_{IE} v_E - \gamma_I]_+$$

The Jacobian (at the fixed point, where the rectification is inactive):

$$J = \begin{pmatrix} \dfrac{M_{EE} - 1}{\tau_E} & \dfrac{M_{EI}}{\tau_E} \\[6pt] \dfrac{M_{IE}}{\tau_I} & \dfrac{-1}{\tau_I} \end{pmatrix}$$

Solving

$$\det(J - \lambda I) = 0$$

gives the two eigenvalues $\lambda$ as functions of $\tau_I$.

Varying $\tau_I$:

A small $\tau_I$ makes the real part of the eigenvalues negative. As a consequence, the system spirals down to a stable fixed point in the $(v_E, v_I)$ phase plane.

The convergence to the fixed point corresponds to damped oscillations of $v_E$ and $v_I$.

A large $\tau_I$ makes the real part of the eigenvalues positive, resulting in an unstable network. On the phase plane, the trajectory diverges from the fixed point. However, thanks to the rectification $F$, the network settles onto a limit cycle.

The transition from stable to unstable system corresponds to a Hopf bifurcation.
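A closing sketch of the excitatory-inhibitory pair. The coefficients below are illustrative guesses in the spirit of the lecture (not its actual values), chosen so that the trace of $J$ crosses zero near $\tau_I = 40$ ms; the point is the qualitative transition from stable spiral to limit cycle:

```python
import numpy as np

# Illustrative coefficients (guesses, not the lecture's):
M_EE, M_EI, M_IE = 1.25, -1.0, 1.0
gam_E, gam_I = -10.0, 10.0          # background drives
tau_E = 10.0                        # ms
relu = lambda x: max(x, 0.0)        # the rectification F

def simulate(tau_I, T=1000.0, dt=0.1):
    vE, vI = 30.0, 20.0             # start away from the fixed point
    tail = []
    for k in range(int(T / dt)):
        dvE = (-vE + relu(M_EE * vE + M_EI * vI - gam_E)) / tau_E
        dvI = (-vI + relu(M_IE * vE - gam_I)) / tau_I
        vE, vI = vE + dt * dvE, vI + dt * dvI
        if k * dt > T - 200.0:      # keep the last 200 ms of v_E
            tail.append(vE)
    return np.array(tail)

for tau_I in (20.0, 60.0):
    # Jacobian eigenvalues near the fixed point (rectification inactive)
    J = np.array([[(M_EE - 1) / tau_E, M_EI / tau_E],
                  [M_IE / tau_I, -1.0 / tau_I]])
    re = np.real(np.linalg.eigvals(J)).max()
    tail = simulate(tau_I)
    # negative Re(lambda): tail collapses to the fixed point;
    # positive Re(lambda): tail spans a sustained oscillation (limit cycle)
    print(f"tau_I={tau_I:4.0f} ms  max Re(lambda)={re:+.4f}  "
          f"late v_E range: [{tail.min():.1f}, {tail.max():.1f}]")
```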