Flexible and Interpretable Multivariate Point Processes for Neural Dynamics
從多元點過程解釋時變神經動態系統 (Interpreting time-varying neural dynamical systems via multivariate point processes)
Student thesis: Doctoral Thesis
Author(s)
Related Research Unit(s)
Detail(s)
Awarding Institution: City University of Hong Kong
Supervisors/Advisors:
Award date: 6 Aug 2018
Link(s)
Permanent Link: https://scholars.cityu.edu.hk/en/theses/theses(603d05e4-39b0-4da0-b89c-be7151ab0471).html
Abstract
Neuroscience is entering an exciting new age. Modern neural recording methods enable us to measure thousands of neurons simultaneously, making it possible to reconsider many longstanding questions in neuroscience, including how functional neural networks are formed and how to derive latent structure from large-scale recordings. Although such recordings offer an unprecedented opportunity to glean insight into the mechanistic underpinnings of intelligence, they also present an extraordinary statistical and computational challenge: how do we make sense of these large-scale recordings?
In this thesis, we develop two kinds of models for analyzing neural dynamics, with the aim of enhancing our understanding of how the brain works during complex behaviors. The first kind is based on Generalized Linear Models (GLMs), which capture neural interactions with biophysically plausible meaning. To keep computation tractable for large-scale datasets, we introduce kernel methods; to analyze the resulting networks, we link properties of the inferred functional connectivity to the complex behaviors of organisms. The second kind is based on dimensionality-reducing latent variable models (LVMs). We develop a suite of tools that instantiate hypotheses about neural computation as probabilistic models, together with a corresponding set of Bayesian inference algorithms that efficiently fit these models to neural spike trains. From the posterior distribution over model parameters and latent variables, we seek to advance our understanding of how the brain works.
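To make the first ingredient concrete, below is a minimal sketch (not the thesis's implementation) of a Poisson GLM with pairwise coupling filters fit to simulated binned spike trains; the population size, filter length, and plain gradient-ascent fitter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate binned spike counts for a small coupled population
# (sizes and filter length are illustrative, not from the thesis).
N, T, L = 5, 2000, 10                   # neurons, time bins, coupling-filter lags
W = 0.1 * rng.normal(size=(N, N, L))    # hypothetical pairwise coupling filters
b = -2.0 * np.ones(N)                   # baseline log-rates

spikes = np.zeros((T, N))
for t in range(T):
    hist = spikes[max(0, t - L):t][::-1]        # recent history, newest bin first
    drive = b.copy()
    for j in range(N):                          # add each neuron's filtered history
        drive += W[:, j, :hist.shape[0]] @ hist[:, j]
    rate = np.exp(np.clip(drive, -10.0, 5.0))   # conditional intensity per bin
    spikes[t] = rng.poisson(rate)

# Design matrix of lagged population activity (column 0 is the intercept).
X = np.zeros((T, 1 + N * L))
X[:, 0] = 1.0
for lag in range(1, L + 1):
    X[lag:, 1 + (lag - 1) * N: 1 + lag * N] = spikes[:-lag]

def fit_neuron(y, X, lr=5e-2, iters=2000):
    """Gradient ascent on the Poisson log-likelihood with a log link."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        rate = np.exp(np.clip(X @ w, -10.0, 5.0))
        w += lr * X.T @ (y - rate) / len(y)     # gradient of log p(y | X, w)
    return w

w_hat = np.stack([fit_neuron(spikes[:, i], X) for i in range(N)])
print("true baselines:", b, "\nestimated:", np.round(w_hat[:, 0], 2))
```

The fitted weights play the role of functional connectivity: each block of `w_hat` describes how one neuron's recent spiking excites or suppresses another's firing rate.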
Four main problems arise in applying these two kinds of models: first, how to infer properties of an unknown whole network from a sampled network; second, how to model count data when evaluating functional connectivity; third, how to encode flexible and interpretable latent structure in a probabilistic model and fit it efficiently to neural spike trains; and finally, how to recover the intrinsic dimensionality of neural populations.
To surmount the first three challenges, we focus on probabilistic models that incorporate latent types and features of neurons into a Bayesian framework. To reconcile these models with the discrete nature of spike trains, we work with multivariate point process models and derive elegant auxiliary variables and efficient inference algorithms. On a variety of real neural recordings, we show how our methods reveal interpretable structure underlying neural spike trains, and how the latent structure of functional networks can be flexibly modeled using graph priors. The corresponding set of Bayesian inference algorithms, including a novel Laplace approximation, variational inference, and Markov chain Monte Carlo (MCMC), is shown to fit these models efficiently and accurately. The final problem is overcome with dimensionality reduction techniques, which remove redundant neural variability: the high-dimensional time series are characterized by underlying, low-dimensional, time-varying latent states. Both linear and nonlinear dynamical models are fitted to neural data, with promising predictive performance and interpretable intrinsic neural dynamics.
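A minimal sketch of the dimensionality-reduction idea follows, assuming a linear-Gaussian latent state with Poisson observations and gradient-based MAP inference of the latent path (the mode a Laplace approximation would expand around); dimensions, dynamics, and step sizes are illustrative, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a low-dimensional latent state driving high-dimensional counts
# (dimensions, dynamics, and noise scales are illustrative assumptions).
D, N, T = 2, 30, 500
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # slowly rotating dynamics
C = rng.normal(size=(N, D)) / np.sqrt(D)          # loading matrix
d = -1.5 * np.ones(N)                             # baseline log-rates
q = 0.1                                           # dynamics noise std

x = np.zeros((T, D))
for t in range(1, T):
    x[t] = A @ x[t - 1] + q * rng.normal(size=D)
y = rng.poisson(np.exp(x @ C.T + d))              # observed Poisson spike counts

def map_latents(y, A, C, d, q, lr=2e-3, iters=3000):
    """Gradient ascent to the MAP latent path, i.e. the mode around which
    a Laplace approximation would be built (parameters assumed known here)."""
    x_hat = np.zeros((len(y), A.shape[0]))
    for _ in range(iters):
        rate = np.exp(x_hat @ C.T + d)
        grad = (y - rate) @ C                     # observation-likelihood term
        resid = x_hat[1:] - x_hat[:-1] @ A.T      # dynamics residuals
        grad[1:] -= resid / q**2                  # prior term, d/dx_t
        grad[:-1] += (resid @ A) / q**2           # prior term, d/dx_{t-1}
        x_hat += lr * grad
    return x_hat

x_hat = map_latents(y, A, C, d, q)
print("corr(true, estimated) dim 0:",
      np.round(np.corrcoef(x[:, 0], x_hat[:, 0])[0, 1], 2))
```

In a full pipeline the parameters A, C, and d would themselves be learned (e.g. by EM, variational inference, or MCMC), with the latent path summarizing the shared, low-dimensional structure in the population.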
To capture the dynamics of both neural interactions and their evolution, our research spans Generalized Linear Models and Latent Variable Models: with these data-driven methods, latent network structures can be explored and intrinsic evolution identified. We provide these two kinds of ideas for flexibly and interpretably modeling neural spike trains, and derive efficient inference and learning algorithms. These tools are designed to analyze thousands of neurons in organisms performing complex behaviors, and to relate properties of latent structures to behavioral tasks. In summary, GLM-LVMs suggest a path toward translating large-scale recording capabilities into new insights about neural computation.