(Tentative) Schedule
Models of Neural Computation (~2 meetings)
(2/15) Introduction, Cameron and Nancy.
(2/20) Algorithms and Data Structures in the Brain, Saket Navlakha's Special Seminar. Kiva, 4-5pm
(2/22) Spiking neural network models, models with memory, asynchrony, Brabeeba, Lili, and CJ
To read:
 Review the stochastic spiking model of the Musco, Lynch, Parter work. See Section 5.2 of Cameron's thesis, and Section 5.5.1 for an extension of the model incorporating firing history.
 Skim the composition paper handed out by Nancy, just to get the flavor of the results.
 Maass lower bound paper.
To be presented:
 Overview of composition paper by Nancy.
 Overview by Lili of spiking models with history, including her own work and Section 5.5.1 of Cameron's thesis.
 Overview of single neuron modeling, a firing-rate-based model, and an integrate-and-fire model modified from the Hodgkin-Huxley biophysical model, by CJ. References:
 Maass lower bound paper, Brabeeba.
Details on what was presented:
 Covered composition paper.
 Point raised: our notion of neural network "behavior" is very general. Interesting examples, or even general transformation or expressiveness results, would require restrictions. Our result just gives a general foundation.
 Question raised: what functions in the brain are actually compositional?
 Lili discussed two-layer neural networks as commonly considered in the machine learning community, following these notes. Covered basic universality results (i.e., the network can approximate real-valued functions to a given accuracy). See summary, courtesy of Lili.
 An interesting question is whether one can prove universality results for stochastic spiking networks, i.e., for mapping infinite sequences of Boolean vectors to sets of probability distributions on infinite output sequences of Boolean vectors.
 A simple example might be where the inputs are stable (i.e., the infinite input sequence is fixed over time) and, w.h.p., at any time after some convergence time t_c the output is the value of some simple function, like WTA, applied to the inputs.
 Lili discussed a spiking neural network model with history that is used in her kWTA paper.
 Did not get to single neuron models or Maass lower bound.
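As a toy illustration of the stable-input WTA example above, here is a minimal stochastic spiking simulation. All weights and the sigmoid firing probability are invented for illustration; this is not the construction from any of the papers discussed.

```python
import math
import random

def stochastic_wta(inputs, rounds=200, rng=None):
    """Toy stochastic WTA network (illustrative parameters only).
    Each output neuron fires with probability sigmoid(input drive +
    self-excitation - inhibition), where inhibition grows with the
    number of OTHER outputs that fired in the previous round."""
    rng = rng or random.Random(0)
    n = len(inputs)
    fired = [0] * n
    for _ in range(rounds):
        active = sum(fired)
        new = []
        for i in range(n):
            others = active - fired[i]      # inhibition excludes self
            pot = 2.0 * inputs[i] + 3.0 * fired[i] - 5.0 * others
            p = 1.0 / (1.0 + math.exp(-pot))
            new.append(1 if rng.random() < p else 0)
        fired = new
    return fired

# Over many independent trials, the network should usually end with a
# single winner among the neurons receiving input.
wins = sum(sum(stochastic_wta([1, 1, 0, 1], rng=random.Random(s))) == 1
           for s in range(100))
print(wins, "of 100 trials ended with exactly one winner")
```

This matches the flavor of the example: after a (random) convergence time, the network is in a single-winner configuration with high probability at any given round, though stochastic firing means it can briefly leave and re-enter that configuration.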
Selection, Focus, and Attention Problems (~2 meetings)
(3/1) Winner-take-all competition (Lynch, Musco, Parter; Lili's work; and Fiete et al.'s work), Lili and CJ + Eghbal(?)
To read:
To be presented:
Details on what was presented:
 CJ discussed the Wilson and Cowan paper, which gave pioneering analyses of the firing-rate model. Discussed threshold models and firing-rate models, the classification of neurons as excitatory vs. inhibitory, and the interaction between these groups of neurons. Also discussed refractory periods and covered the main mathematical formulas governing activation.
 Question: How do these models compare/relate to discrete (stochastic) spiking neural network models?
 CJ also covered the Hodgkin and Huxley model and the related FitzHugh-Nagumo model. The level of detail in these models makes them suitable for describing the behavior of individual neurons, but less so for describing networks and their behavior.
 CJ suggests this review of the integrate-and-fire neuron model with homogeneous synaptic input, and this follow-up extending it to inhomogeneous synaptic input and network properties.
 Brabeeba presented Lower Bounds for the Computational Power of Networks of Spiking Neurons, Maass.
 Considers emulating Turing machines on a neural network; seems to be mainly of theoretical interest. The networks must encode unbounded amounts of information in a finite network. They do this by encoding detailed structures such as stacks and counters in the amount of time between spikes. This means that the networks are not tolerant to noise, since the encodings are arbitrarily fine.
 However, parts of the network construction seemed interesting: the construction was decomposed in terms of modules such as a Delay module, an Inhibition module, a Synchronizer, a Compare module, and a Multiply module. These pieces are used to build the larger network. Some of the individual pieces may be interesting to study as neural network problems.
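For reference, the integrate-and-fire model that came up in CJ's single-neuron discussion can be illustrated with a minimal leaky integrate-and-fire simulation (forward Euler; all parameters below are illustrative, not drawn from the reviews linked above):

```python
def lif_simulate(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0, steps=1000):
    """Leaky integrate-and-fire neuron with constant input current.
    Forward-Euler integration of tau * dV/dt = -(V - v_rest) + r * I,
    with a spike and reset whenever V crosses threshold.
    Returns the list of spike times (in the same units as dt)."""
    v = v_rest
    spikes = []
    for step in range(steps):
        v += (-(v - v_rest) + r * current) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Stronger constant input drives the membrane to threshold sooner,
# giving a higher firing rate; zero input never reaches threshold.
print(len(lif_simulate(0.0)), len(lif_simulate(2.0)), len(lif_simulate(4.0)))
```

With zero input the membrane stays at rest and never spikes; increasing the drive increases the steady-state voltage and hence the spike rate, which is the basic link between this single-neuron model and firing-rate descriptions.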
(3/8) WTA + Decision making and higher level modeling.
To read:
To be presented (a lot of spill over from last time):
 WTA neuroscience background, including covering What the fly's nose tells the fly's brain, Stevens, and How the Basal Ganglia Make Decisions, Berns and Sejnowski. Presented by Eghbal and CJ.
 Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks, Lynch, Musco, and Parter. Also see Cameron's thesis, Chapter 5.
 How fast is neural winner-take-all when deciding between many options?, Kriener, Chaudhuri, and Fiete. CJ and Eghbal.
 On the Computational Power of Winner-Take-All, Maass. Overview by Brabeeba.
 "Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits", Lili Su (pdf to be emailed out).
 Graybiel lab modeling methods and overview, Sabrina and Alexander.
 Will cover work in two cell papers here and here, looking at excitation-inhibition balance modeling during decision making and learning.
On 3/8 Brabeeba will also be presenting his + Nancy's paper "Integrating Temporal Information to Spatial Information in a Neural Circuit" at the TDS group meeting. 1:00pm-1:30pm, G631.
(3/15) WTA + Decision Making Continued
To read:
To be presented:
 Eghbal will finish reviewing Fiete Lab's work on WTA.
 Lili will cover her work on kWTA: "Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits" (pdf to be emailed out).
 Brabeeba will cover On the Computational Power of Winner-Take-All, Maass.
 Sabrina and Alexander will overview Graybiel lab work, including the two papers above. Some of this may spill over until next time.
(3/22) Decision making, continued.
To read:
To be presented:
 Sabrina and Alexander will continue Graybiel lab work on learning and decision making in mice from last time. See slides.
 CJ will present her work, 'Valence coding in the basolateral amygdala.' References:
Neural Coding, Random Projection, and Linear Algebra (~4 meetings)
(4/5) Finishing up Past Topics + Introduction to dimensionality reduction and random projection.
To be presented:
 Sabrina will finish talking about her work on learning a model for learning in mice.
 Brabeeba will finally present On the Computational Power of Winner-Take-All.
 Then Cameron will introduce random projection in general, and its conjectured use in neural dimensionality reduction.
To be read:
What was presented:
 Sabrina finished covering the Graybiel lab work. Discussed her model, based on basic spiking network model, with parameters (weights, biases, etc.) learned via a genetic algorithm to mimic observations.
 Brabeeba discussed Maass's work on lower bounds for computing WTA, and how a one-layer network with a WTA module can be used, e.g., to simulate any two-layer linear threshold network.
 Cameron very briefly started introduction to random projection. To be continued next time.
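As a concrete preview of the random projection discussion, here is a minimal Johnson-Lindenstrauss-style sketch: project random points from 1000 down to 200 dimensions with a scaled Gaussian matrix and check that pairwise distances are roughly preserved. The dimensions and tolerance are illustrative, not tied to Cameron's presentation.

```python
import math
import random

def random_project(vectors, k, rng):
    """Map d-dimensional vectors to k dimensions using a random Gaussian
    matrix with entries N(0, 1)/sqrt(k) (Johnson-Lindenstrauss style)."""
    d = len(vectors[0])
    proj = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
            for _ in range(k)]
    return [[sum(row[j] * v[j] for j in range(d)) for row in proj]
            for v in vectors]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

rng = random.Random(1)
d, k = 1000, 200
pts = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(5)]
low = random_project(pts, k, rng)

# Ratios of projected to original pairwise distances should stay near 1.
ratios = [dist(low[i], low[j]) / dist(pts[i], pts[j])
          for i in range(5) for j in range(i + 1, 5)]
print(min(ratios), max(ratios))
```

The point of the JL lemma is that the target dimension needed for a given distortion depends only logarithmically on the number of points, not on the original dimension, which is what makes random projection plausible as a neural dimensionality-reduction mechanism.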
(4/12) Random Projection Continued + Optimization in Spiking Networks, Cameron, Chi-Ning
To be presented:
 Cameron will finish introduction of random projection/compression methods in neural systems and algorithm design/data analysis.
 Chi-Ning will discuss his work on implementing optimization in spiking neural network models.
To read:
(4/19) Sign-consistent random projection, Rati and Meena.
Learning (~4 meetings)
(4/26) Overview of models for learning in neural networks, Brabeeba + Quanquan
To read:
What was presented:
 Meena finished the sign-consistent JL proof from last time.
 Brabeeba introduced Hebbian learning in both rate-based and spiking models, Oja's learning rule, and its connection to PCA. See his notes.
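A minimal sketch of Oja's rule and its PCA connection: with update w += eta * y * (x - y * w), where y = w . x, the weight vector converges to (approximately) the unit-norm first principal component of the data. The learning rate and synthetic data below are illustrative; see Brabeeba's notes for the derivation.

```python
import math
import random

def oja_learn(samples, eta=0.01, epochs=20, rng=None):
    """Oja's learning rule: w += eta * y * (x - y * w), with y = w . x.
    The -y*w term keeps the weights bounded, and the fixed point is the
    leading eigenvector of the data covariance, at unit norm."""
    rng = rng or random.Random(0)
    d = len(samples[0])
    w = [rng.gauss(0, 0.1) for _ in range(d)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# Zero-mean 2D data whose dominant variance lies along (1, 1)/sqrt(2).
rng = random.Random(1)
data = []
for _ in range(500):
    a = rng.gauss(0, 2.0)    # large variance along (1, 1)
    b = rng.gauss(0, 0.2)    # small variance along (1, -1)
    data.append([(a + b) / math.sqrt(2), (a - b) / math.sqrt(2)])

w = oja_learn(data)
norm = math.sqrt(sum(wi * wi for wi in w))
print(w, norm)
```

After training, w should be close to +/-(1, 1)/sqrt(2) with norm near 1, i.e., the first principal component of the synthetic data.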
(5/3) Finish learning model overview. Theoretical models for neural learning, Brabeeba and Quanquan
To read:
What was presented:
 Brabeeba finished the discussion of learning models, focusing on Hopfield networks and their use as memories.
 Quanquan led discussion of learning and computation in asymmetric Hopfield networks.
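The Hopfield-as-memory idea can be sketched with the standard symmetric construction: Hebbian outer-product weights with zero diagonal, and asynchronous sign updates that drive a corrupted state back to the stored pattern. This is the textbook version, not the asymmetric variant Quanquan discussed.

```python
import random

def hopfield_train(patterns):
    """Hebbian outer-product weights for +/-1 patterns; zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def hopfield_recall(w, state, sweeps=5, rng=None):
    """Asynchronous updates: set each unit to the sign of its local field,
    visiting units in a random order on each sweep."""
    rng = rng or random.Random(0)
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):
            field = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = hopfield_train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                 # corrupt one bit
print(hopfield_recall(w, noisy) == stored)   # prints True
```

Each update can only lower the network's energy, so with a single stored pattern the corrupted state settles back to the memory; capacity limits and spurious attractors appear once many patterns are stored.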
(5/10) Theoretical models for neural learning; machine learning with spiking networks, Quanquan, Brabeeba, and Lili
To read:
What was presented:
 Quanquan led discussion of the first two papers above.
 Concluded group with a discussion of future work and collaborations.