Shafeeq Ibraheem spent the summer in the Lab’s Biosciences Area studying computational neuroscience. He used machine learning methods to reduce trial-to-trial variability in neural data. This made it possible to draw meaningful interpretations from single-trial data. He is a Ph.D. student in the Electrical Engineering and Computer Sciences Department at Berkeley. This is his report.
Many experiments in neuroscience use the spiking responses of an animal's neurons to interpret its brain activity. In these experiments, spikes are recorded while the animal is exposed to a stimulus or performs an action, and the timing of these spikes is then used to analyze the animal's brain activity. Experiments are normally repeated over many trials to give statistically robust results.
The precise timing of spikes can heavily influence the accuracy of analysis, so it is important to remove as much variability from the spike times as possible. One source that presents a particular challenge is variability in the spiking pattern between trials. In many experiments, the spike patterns of different trials vary in a way that cannot be explained by random noise. Known as trial-to-trial variability, this variation prevents consistent analysis of spiking neural activity across trials. A common way to resolve this issue is to analyze trial-averaged data instead of data from individual trials. However, this presents its own problems, as correct analysis relies on each trial being temporally aligned to the others. Proper alignment methods vary across domains and experiments and tend to be ad hoc adjustments made by experts. To get around this, we wish to process the data so that meaningful interpretations can be drawn from a single trial. This led me to examine two analysis methods that clean up single-trial neural data in interpretable ways: Robust and Interpretable Time Warping and latent factor analysis via dynamical systems (LFADS).
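To make the trial-averaging issue concrete, here is a minimal, illustrative sketch (not code from the project) that simulates a neuron whose response jitters in time from trial to trial and then computes the trial-averaged firing rate; the variable names and numbers are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 50, 200      # e.g. 200 bins of 10 ms each
bin_width_s = 0.01

# A shared response whose onset jitters from trial to trial: a toy stand-in
# for trial-to-trial variability.
template = np.exp(-0.5 * ((np.arange(n_bins) - 100) / 10.0) ** 2)
rates = np.empty((n_trials, n_bins))
for k in range(n_trials):
    shift = rng.integers(-15, 16)                     # per-trial temporal jitter
    rates[k] = 5.0 + 20.0 * np.roll(template, shift)  # baseline + shifted response
spike_counts = rng.poisson(rates * bin_width_s)       # binned spike counts

# Trial-averaged firing rate. Because the response lands in a different place
# on each trial, averaging smears it out: the peak comes out lower and wider
# than it is on any single trial.
psth = spike_counts.mean(axis=0) / bin_width_s
print(f"peak of trial-averaged rate: {psth.max():.1f} spikes/s (single-trial peak ~25)")
```

The smeared average is exactly the failure mode that motivates either aligning trials before averaging or interpreting single trials directly.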
Time Warping is a statistical framework for finding shared spiking patterns across trials. It does this by shifting and stretching a template time series to match the data on each trial. LFADS is a deep learning method for inferring the dynamics underlying neural data. It assumes the data are generated by a nonlinear dynamical system, then trains an auto-encoder to infer that system, its initial conditions, and its inputs. These methods present different ways to uncover the dynamics of high-dimensional neural data. Studying these dynamics will allow us to investigate the structure of networks in the brain and determine how specific neurons contribute to population dynamics. This will advance our ability to understand how coordinated behavior and perception arise in the brain, give new insight into how neurological disorders might be treated, and enable technologies that take advantage of these dynamics (such as brain-machine interfaces).
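To make the shift-and-stretch idea concrete, here is a minimal, illustrative sketch of the simplest case: estimating a single time shift per trial by alternating between aligning each trial to a template and re-estimating the template from the aligned trials. The function name, array shapes, and parameters are assumptions made for this sketch, and this toy version omits the stretching, regularization, and robustness to spiking noise of the actual framework.

```python
import numpy as np

def fit_shift_warp(counts, max_shift=20, n_iters=5):
    """Align trials by a single per-trial time shift.

    counts: array of shape (n_trials, n_bins) of binned spike counts for one
    neuron (a hypothetical input; shapes and names are assumptions).
    """
    n_trials, _ = counts.shape
    shifts = np.zeros(n_trials, dtype=int)
    template = counts.mean(axis=0)
    for _ in range(n_iters):
        # For each trial, pick the shift that best matches the current template.
        for k in range(n_trials):
            scores = [np.dot(np.roll(counts[k], -s), template)
                      for s in range(-max_shift, max_shift + 1)]
            shifts[k] = np.argmax(scores) - max_shift
        # Re-estimate the template from the aligned trials.
        aligned = np.stack([np.roll(counts[k], -shifts[k]) for k in range(n_trials)])
        template = aligned.mean(axis=0)
    return shifts, aligned, template
```

Running this on jittered spike counts like those in the previous sketch recovers most of the per-trial shifts and sharpens the trial-averaged response. The published framework generalizes the same alternating idea to regularized linear and piecewise-linear warps fit jointly across neurons, while LFADS instead absorbs trial-to-trial variability by inferring each trial's latent dynamical trajectory.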
This experience has allowed me to work with neural data in many different forms and has given me insight into mathematical ways of modeling these different data types. It has also shown me the expressive power artificial neural networks have for modeling complex dynamics in high-dimensional data. I have come to better understand the field of computational neuroscience and to bring together my engineering and mathematics backgrounds to explore the brain. I hope to continue working with Dr. Bouchard, further honing my skills and developing new ways to interpret brain dynamics.