- We want to develop an algorithmic theory for brain networks: Define simple formal models, design algorithms, and prove lower bounds.
- The networks are supposed to be abstract versions of networks that arise in brains. Thus, biological plausibility is important; in particular, the algorithms should be distributed, making local decisions like those made in real brain networks.
- To make the models usable for theoretical analysis, they will have to abstract away many detailed features of brain networks; however, we want them to retain key features of real brain networks, so that they could conceivably be used to explain what happens in real brain computation.
- Specifically, we are considering a model for brain networks that is based on discrete spiking events. To make things simple and tractable, we will generally consider synchronous rounds. The models will usually (but not always) be stochastic. Thus, these are synchronous, stochastic Spiking Neural Networks (SNNs).
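To make the model concrete, here is a minimal sketch of one synchronous round of such a network. The sigmoid spike probability, the weight convention, and all names are illustrative assumptions for this sketch, not the definitive model:

```python
import math
import random

def snn_round(weights, biases, spikes, rng):
    """One synchronous round of a hypothetical stochastic SNN.

    weights[i][j] is the synaptic weight from neuron j to neuron i;
    spikes is the 0/1 firing vector from the previous round.
    Each neuron independently fires with probability sigmoid(potential),
    one common choice for discrete stochastic spiking models.
    """
    n = len(spikes)
    new_spikes = []
    for i in range(n):
        potential = biases[i] + sum(weights[i][j] * spikes[j] for j in range(n))
        p = 1.0 / (1.0 + math.exp(-potential))  # spike probability in (0, 1)
        new_spikes.append(1 if rng.random() < p else 0)
    return new_spikes

# A strongly excited neuron fires; a strongly inhibited one stays silent.
rng = random.Random(0)
print(snn_round([[0, 0], [0, 0]], [100.0, -100.0], [0, 0], rng))  # → [1, 0]
```

Repeatedly applying such a round function to a spike vector gives a synchronous, stochastic execution of the network.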
- We will consider several cost measures: size of the network (e.g., number of auxiliary neurons, range of values for the weights); time to converge to an answer; length of time that the answer remains stable; and energy usage, as measured by the amount of spiking.
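The time and energy measures can be made operational even in a toy simulation. The sketch below (the harness name and the toy update rule are made up for illustration) runs a synchronous network until its spike pattern stabilizes, reporting convergence time and total spike count as the energy measure:

```python
def run_and_measure(step, spikes, max_rounds):
    """Run a synchronous network until the spike vector stabilizes.

    Returns (rounds_to_converge, total_spikes): total_spikes is the
    energy measure (amount of spiking), rounds_to_converge the time
    measure. Returns (None, total_spikes) if it never stabilizes.
    """
    total_spikes = sum(spikes)
    for t in range(1, max_rounds + 1):
        new_spikes = step(spikes)
        total_spikes += sum(new_spikes)
        if new_spikes == spikes:       # pattern is stable
            return t, total_spikes
        spikes = new_spikes
    return None, total_spikes

# Toy deterministic update: every neuron fires iff at least two fired.
step = lambda s: [1 if sum(s) >= 2 else 0 for _ in s]
print(run_and_measure(step, [1, 1, 0], max_rounds=10))  # → (2, 8)
```

Note that stability of the answer, the remaining measure, would require continuing the run past convergence and checking how long the stable pattern persists.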
- We hope that these models and measures will give us a new type of theory for brain computation. We hope that it will be elegant, and perhaps more expressive than previous theories, e.g., those using rate-based differential-equation models. We hope that algorithms expressed within these models will be amenable to the usual sorts of algorithmic, distributed-computing-theory analysis. We hope that they will be able to reach conclusions similar to those that have appeared in papers using previous models (such as rate-based ones), only now proved using discrete methods. But perhaps we can also get new types of results, because of the extra power of discrete events, synchrony, and stochasticity.
- One approach that we will take from theoretical computer science, which is less common in other areas, is that, once we have fixed a model of neural computation and a neurally motivated computational problem, we will allow ourselves free rein to design algorithms (i.e., networks) in this model that solve the problem as efficiently as possible. Even if these algorithms themselves are not biologically plausible, this is a useful exercise. If they are not plausible, why not? Is there something we should add to the model to capture this? Are there additional cost measures that biological systems may be optimizing that we are not considering? Are there simpler, more biologically plausible mechanisms that achieve similar complexity?

- 1. Models: Review the models we have defined so far, and some possible embellishments and variations. Also consider some alternatives (Maass's models; models with different kinds of history; Poisson timing instead of synchronous rounds; rate-based models).

- 2. Problems of focus and attention, selection (e.g., WTA).
- 3. Problems of neural coding and recognition.
- 4. Problems of learning.
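To make item 2 concrete: a toy winner-take-all dynamic might work as in the sketch below, where each still-active candidate survives a fair coin flip each round and global inhibition silences the non-survivors until one remains. This coin-flip tournament is an illustrative sketch only, not any particular algorithm from the literature:

```python
import random

def toy_wta(n, rng, max_rounds=1000):
    """Toy stochastic winner-take-all over n initially active candidates.

    Each round, every active candidate 'spikes' with probability 1/2;
    if at least one spikes, the non-spiking candidates are inhibited
    and drop out. The active set roughly halves each effective round,
    so a single winner emerges in O(log n) rounds w.h.p.
    """
    active = [True] * n
    for _ in range(max_rounds):
        if sum(active) == 1:
            return active.index(True)   # unique winner selected
        survivors = [a and rng.random() < 0.5 for a in active]
        if any(survivors):              # inhibition kills non-spikers...
            active = survivors
        # ...unless no candidate spiked, in which case retry the round
    return None

winner = toy_wta(8, random.Random(42))
print(winner)  # index of the single surviving candidate, 0..7
```

Analyzing the convergence time, stability, and spike count of mechanisms like this, and proving matching lower bounds, is exactly the kind of question the cost measures above are meant to support.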