Neural coding and dynamics

Our group is interested in better understanding neural codes and dynamics, to learn how the brain computes. Our tools are numerical and theoretical, and our approach is to work closely with collaborators on specific experimental systems.

Coding: In principle, the brain could encode information about a variable in any of myriad ways. The choice of coding scheme sheds light on the computational priorities of the brain in representing that variable. For instance, codes can differ in capacity, ease of readout by downstream areas, or noise tolerance. Understanding a neural code means not only learning what or how much is encoded, but learning the tradeoffs of the coding scheme, to see "why" it was selected.
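As a toy illustration of such tradeoffs (our example, not a model from the group's work): the same set of binary neurons can carry very different amounts of information depending on the coding scheme, at the cost of readout complexity and redundancy.

```python
# Illustrative sketch: capacity tradeoff between two codes over N binary neurons.
N = 8

# A "one-hot" (labeled-line) code: N neurons represent only N distinct values,
# but readout is trivial -- find the single active neuron.
one_hot_capacity = N

# A combinatorial (binary) code: the same N neurons represent 2**N values,
# at the cost of a more complex readout and no built-in redundancy.
combinatorial_capacity = 2 ** N

print(one_hot_capacity, combinatorial_capacity)  # 8 256
```

The exponential gap in capacity is one axis along which a coding scheme's "why" can be probed; noise tolerance and downstream readability are others.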

Error correction: Representations in the brain are necessarily noisy because of the stochastic dynamics of neurons and synapses. Avoiding such problems requires aggressive error reduction and correction, but our understanding of how the brain does this is at best primitive. We are investigating strong error-correcting codes as they may exist in the brain.
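To make the idea concrete, here is a minimal sketch (our hypothetical example, not a proposed brain mechanism) of the simplest error-correcting code: redundant copies of a signal decoded by majority vote, which drives the error rate far below that of any single noisy unit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each noisy "neuron" misreports a binary signal with probability p_flip.
p_flip = 0.1
n_copies = 11      # redundant copies of the signal
n_trials = 10_000

bit = 1
flipped = rng.random((n_trials, n_copies)) < p_flip   # True where a copy errs
reports = np.where(flipped, 1 - bit, bit)

# Majority-vote decoding: the bit survives unless most copies flip at once.
decoded = (reports.sum(axis=1) > n_copies // 2).astype(int)

raw_error = p_flip
corrected_error = np.mean(decoded != bit)
print(raw_error, corrected_error)  # corrected error is orders of magnitude below 0.1
```

A repetition code is wasteful in capacity; stronger codes achieve comparable error suppression with far less redundancy, which is part of what makes the question of what the brain actually uses interesting.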

Dynamics of learning and memory: How robust are neural memory networks to ongoing noise? What kinds of network connectivity support integration and memory? How do such structures form through development and plasticity? We study these questions through simulation and theory. We also analyze neural data with a view toward discovering mechanism.
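A minimal sketch of why robustness is a real question (our toy example, not the group's model): a single recurrent unit with update r[t] = w * r[t-1] + input[t] acts as a perfect integrator and memory only when its feedback weight is tuned exactly to one; any mistuning makes the stored value leak away.

```python
import numpy as np

T = 100
pulse = np.zeros(T)
pulse[5] = 1.0  # a brief input to be remembered

def run(w):
    """Simulate r[t] = w * r[t-1] + pulse[t] for a single recurrent unit."""
    r = np.zeros(T)
    for t in range(1, T):
        r[t] = w * r[t - 1] + pulse[t]
    return r

holds = run(1.0)   # exactly tuned feedback: the pulse is held indefinitely
leaks = run(0.9)   # slightly mistuned feedback: the memory decays to near zero
print(holds[-1], leaks[-1])
```

How networks achieve and maintain such tuning in the face of noise and plasticity is one version of the questions above.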

Funding: Our research is funded by the American public through the National Science Foundation (NSF-EAGER), the Office of Naval Research (ONR-MURI, ONR-YIP), and the University of Texas at Austin; and through generous support from the Sloan Foundation, the Searle Foundation, and the McKnight Foundation. IRF has been funded by the Broad Foundation, and we have received support for student travel and summer school scholarships from the Burroughs Wellcome Fund, the Gatsby Charitable Foundation, Qualcomm Incorporated, Brain Corporation, Cell Press, and Evolved Machines.