Latent Network Structure Learning From High-Dimensional Multivariate Point Processes

Research output: Journal Publications and Reviews (RGC 21 - Publication in refereed journal, peer-reviewed)

2 Scopus Citations

Author(s)

Detail(s)

Original language: English
Pages (from-to): 95-108
Journal / Publication: Journal of the American Statistical Association
Volume: 119
Issue number: 545
Online published: 7 Sept 2022
Publication status: Published - 2024
Externally published: Yes

Abstract

Learning the latent network structure from large-scale multivariate point process data is an important task in a wide range of scientific and business applications. For instance, we might wish to estimate the neuronal functional connectivity network based on spiking times recorded from a collection of neurons. To characterize the complex processes underlying the observed data, we propose a new and flexible class of nonstationary Hawkes processes that allow both excitatory and inhibitory effects. We estimate the latent network structure using an efficient sparse least squares estimation approach. Using a thinning representation, we establish concentration inequalities for the first- and second-order statistics of the proposed Hawkes process. These theoretical results enable us to establish the non-asymptotic error bound and the selection consistency of the estimated parameters. Furthermore, we describe a least squares loss-based statistic for testing whether the background intensity is constant in time. We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train dataset. Supplementary materials for this article are available online.

© 2022 American Statistical Association
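Illustration

The abstract describes the approach only at a high level. The sketch below is a minimal, hypothetical illustration of the general recipe it alludes to (a discretized multivariate Hawkes process with both excitatory and inhibitory effects, followed by an l1-penalized least squares regression to recover the sparse connectivity); it is not the estimator, kernel choice, or theory developed in the paper. All parameter values, the exponential kernel, the rectifier nonlinearity, and the use of scikit-learn's Lasso are assumptions made for this toy example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical setup: 5 nodes, sparse ground-truth connectivity with
# both excitatory (positive) and inhibitory (negative) entries.
d, T, dt, beta = 5, 20000, 0.01, 1.0   # nodes, time bins, bin width, kernel decay rate
mu = np.full(d, 0.4)                    # constant background intensity (assumed here)
A = np.zeros((d, d))
A[0, 1], A[1, 2], A[3, 0], A[2, 4] = 0.8, 0.6, -0.7, 0.5   # A[i, j]: effect of node j on node i

# Simulate a discretized nonlinear (rectified) Hawkes process.
spikes = np.zeros((T, d))
filt = np.zeros(d)                      # exponentially filtered past activity per node
features = np.zeros((T, d))             # stored regressors for the estimation step
for t in range(1, T):
    filt = filt * np.exp(-beta * dt) + spikes[t - 1]
    features[t] = filt
    lam = np.clip(mu + A @ filt, 0.0, None)       # rectifier keeps the intensity nonnegative
    spikes[t] = rng.random(d) < lam * dt          # Bernoulli approximation of each bin

# Sparse least squares: regress each node's bin counts on the filtered past
# activity of all nodes, with an l1 penalty to recover the sparse network.
A_hat = np.zeros((d, d))
for i in range(d):
    model = Lasso(alpha=0.001, fit_intercept=True)  # intercept absorbs the background rate
    model.fit(features * dt, spikes[:, i])
    A_hat[i] = model.coef_

print(np.round(A_hat, 2))   # the nonzero pattern should roughly match A
```

Running the sketch, the lasso coefficients recover the sign pattern of the assumed connectivity matrix: positive entries for excitatory edges, negative entries for inhibitory ones, and near-zero values elsewhere.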

Research Area(s)

  • Multivariate Hawkes process, Non-asymptotic error bound, Nonlinear Hawkes process, Nonstationary, Selection consistency