Neural Algorithms Reading Group

Time: Fridays 11am-12:30pm, Spring Semester 2019 (first meeting 2/15)
Location: Stata 397
(Take the Gates Tower elevators to the 3rd floor, turn right when you leave the elevator and go through the gray double doors. Follow signs to room 397.)
Organizers: Nancy Lynch (lynch at csail dot mit dot edu), Cameron Musco (cnmusco at mit dot edu), Lili Su (lilisu3 at csail dot mit dot edu)
Accessibility: https://accessibility.mit.edu/

(Tentative) Schedule

Models of Neural Computation (~2 meetings)

(2/15) Introduction, Cameron and Nancy.

(2/20) Algorithms and Data Structures in the Brain, Saket Navlakha's Special Seminar. Kiva, 4-5pm

(2/22) Spiking neural network models, models with memory, asynchrony, Brabeeba, Lili, and CJ

    To read:
    • Review the stochastic spiking model from the Musco, Lynch, Parter work. See Section 5.2 of Cameron's thesis, and Section 5.5.1 for an extension of the model incorporating firing history. (A toy simulation of this kind of model appears after this list.)
    • Skim the composition paper handed out by Nancy, just to get a flavor of the results.
    • Maass lower bound paper.
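
    For concreteness, here is a minimal Python sketch of this style of stochastic spiking dynamics: each neuron's potential is a weighted sum of the previous round's firing states minus a bias, and the neuron fires in the next round with probability given by a sigmoid of the potential (in the spirit of the model in Section 5.2 of Cameron's thesis; the weights, biases, and temperature below are illustrative placeholders, not values from the papers).

        import numpy as np

        def step(fired, W, bias, lam, rng):
            # One synchronous round: potentials are weighted sums of the
            # previous round's spikes minus biases; each neuron then fires
            # independently with sigmoid probability (lam = temperature).
            pot = W @ fired - bias
            p = 1.0 / (1.0 + np.exp(-pot / lam))
            return (rng.random(len(fired)) < p).astype(int)

        rng = np.random.default_rng(0)
        n = 5
        W = rng.normal(0, 1, (n, n))      # illustrative random weights
        fired = rng.integers(0, 2, n)     # random initial firing pattern
        for t in range(10):
            fired = step(fired, W, bias=np.zeros(n), lam=0.5, rng=rng)
            print(t, fired)
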
    Details on what was presented:
    • Covered the composition paper.
      • Point raised: our notion of neural network "behavior" is very general. Interesting examples, or even general transformation or expressiveness results, would need to impose restrictions; our result just gives a general foundation.
      • Question raised: what functions in the brain are actually compositional?
    • Lili discussed two-layer neural networks as commonly considered in the machine learning community, following these notes. Covered basic universality results (i.e., that such networks can approximate real-valued functions to any given accuracy); a toy illustration appears after this list. See the summary, courtesy of Lili.
      • An interesting question is whether one can prove universality results for stochastic spiking networks, i.e., for mapping infinite sequences of Boolean vectors to sets of probability distributions on infinite output sequences of Boolean vectors.
      • A simple example might be one where the inputs are stable (i.e., the infinite input sequence is fixed over time) and, w.h.p., at any time after some convergence time t_c, the output is the value of some simple function, like WTA, applied to the inputs.
    • Lili discussed a spiking neural network model with history that is used in her k-WTA paper.
    • Did not get to single-neuron models or the Maass lower bound.
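
    As a quick illustration of the universality discussion above, the following toy sketch fits a two-layer network (random tanh hidden units plus a trained linear output layer) to a one-dimensional target; the approximation error shrinks as the number of hidden units grows. This is a random-features demonstration chosen for brevity, not the construction from the notes Lili followed.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-3, 3, 200)
        y = np.sin(2 * x)                        # target function to approximate

        for m in (5, 50, 500):                   # number of hidden units
            a = rng.normal(0, 2, m)              # random first-layer weights
            b = rng.uniform(-3, 3, m)            # random first-layer biases
            H = np.tanh(np.outer(x, a) + b)      # hidden activations, 200 x m
            w, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights
            err = np.max(np.abs(H @ w - y))
            print(f"{m:4d} hidden units: max error {err:.4f}")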

Selection, Focus, and Attention Problems (~2 meetings)

(3/1) Winner-take-all competition (Lynch, Musco, Parter, Lili's work, and Fiete et al.'s work), Lili and CJ + Eghbal(?)
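
A toy Python simulation of stochastic WTA dynamics, loosely in the spirit of the circuits above: each output neuron is driven by its input and by its own firing in the previous round, and is suppressed in proportion to the total number of firing outputs (a stand-in for the explicit inhibitory neurons in the actual constructions; all weights below are hand-tuned illustrative choices, not values from the papers).

    import numpy as np

    def wta_round(inputs, out, lam, rng):
        # Potential: input drive + self-excitation - global inhibition
        # proportional to how many outputs fired in the last round.
        pot = inputs + 3.0 * out - 2.0 * out.sum()
        p = 1.0 / (1.0 + np.exp(-pot / lam))
        return (rng.random(len(inputs)) < p).astype(int)

    rng = np.random.default_rng(1)
    inputs = np.array([1.0, 1.0, 1.0, 1.0])  # four equal competing inputs
    out = np.ones(4, dtype=int)              # all outputs start firing
    for t in range(30):
        out = wta_round(inputs, out, lam=0.3, rng=rng)
    print("firing pattern after 30 rounds:", out)

With these weights the dynamics typically converge to a single firing output, the winner being selected by the randomness of the process.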

(3/8) WTA + Decision making and higher level modeling.

  • On 3/8 Brabeeba will also be presenting his + Nancy's paper "Integrating Temporal Information to Spatial Information in a Neural Circuit" at the TDS group meeting. 1:00pm-1:30pm, G-631.

(3/15) WTA + decision making, continued.

(3/22) Decision making, continued.

Neural Coding, Random Projection, and Linear Algebra (~4 meetings)

(4/5) Finishing up past topics + introduction to dimensionality reduction and random projection.
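
As background, a minimal sketch of random projection in the Johnson-Lindenstrauss style: project high-dimensional points through a scaled Gaussian matrix and check that pairwise distances are approximately preserved (dimensions and point counts below are arbitrary illustrative choices).

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n = 1000, 100, 20                    # ambient dim, target dim, #points
    X = rng.normal(0, 1, (n, d))               # n random points in R^d
    M = rng.normal(0, 1, (k, d)) / np.sqrt(k)  # scaled Gaussian projection
    Y = X @ M.T                                # projected points in R^k

    for i, j in [(0, 1), (2, 3), (4, 5)]:      # spot-check a few pairs
        before = np.linalg.norm(X[i] - X[j])
        after = np.linalg.norm(Y[i] - Y[j])
        print(f"pair ({i},{j}): distance ratio after/before = {after/before:.3f}")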

(4/12) Random projection continued + optimization in spiking networks, Cameron and Chi-Ning

(4/19) Sign-consistent random projection, Rati and Meena.
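
For the sign-consistent setting, the biological motivation is that a given presynaptic neuron is either excitatory or inhibitory, so all of its outgoing weights share one sign; with input coordinates as columns of the projection matrix, that means each column is single-signed. Below is a sketch of one natural such construction (the distribution and the column orientation are illustrative assumptions, not necessarily the construction from the paper Rati and Meena will present).

    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 1000, 200
    signs = rng.choice([-1.0, 1.0], size=d)    # one sign per input neuron
    mags = np.abs(rng.normal(0, 1, (k, d)))    # nonnegative magnitudes
    M = (mags * signs) / np.sqrt(k)            # each column carries one sign

    x = rng.normal(0, 1, d)
    y = rng.normal(0, 1, d)
    print("original distance :", np.linalg.norm(x - y))
    print("projected distance:", np.linalg.norm(M @ x - M @ y))

Distance preservation is noisier under this constraint than for unconstrained Gaussian projections, which is part of what the sign-consistent analysis has to contend with.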

Learning (~4 meetings)

(4/26) Overview of models for learning in neural networks, Brabeeba + Quanquan

(5/3) Finish learning model overview. Theoretical models for neural learning, Brabeeba and Quanquan

(5/10) Theoretical models for neural learning, machine learning with spiking networks, Quanquan, Brabeeba, and Lili