Recursive Learning for Sparse Markov Models

A1 Journal article (refereed)


Publication Details

List of Authors: Xiong J, Jääskinen V, Corander J
Publisher: International Society for Bayesian Analysis
Publication year: 2016
Journal: Bayesian Analysis
Journal acronym: BAYESIAN ANAL
Volume number: 11
Issue number: 1
Start page: 247
End page: 263
Number of pages: 17
ISSN: 1936-0975
eISSN: 1931-6690


Abstract

Markov chains of higher order are popular models for a wide variety of applications in natural language and DNA sequence processing. However, since the number of parameters grows exponentially with the order of a Markov chain, several alternative model classes have been proposed that allow for stability and a higher rate of data compression. The notion common to these models is that they cluster the possible sample paths used to predict the next state into invariance classes, with identical conditional distributions assigned to paths within the same class. The models vary in particular with respect to the constraints imposed on legitimate partitions of the sample paths. Here we consider the class of sparse Markov chains, for which the partition is left unconstrained a priori. A recursive computation scheme based on Delaunay triangulation of the parameter space is introduced to enable fast approximation of the posterior mode partition. Comparisons with stochastic optimization, k-means, and nearest-neighbor algorithms show that our approach is considerably faster and, on average, yields a more accurate estimate of the underlying partition. We show additionally that the criterion used in the recursive steps for comparison of triangulation cell contents leads to consistent estimation of the local structure in the sparse Markov model.
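To make the notion of an invariance-class partition concrete, the following sketch scores candidate partitions of order-2 contexts by their Dirichlet-multinomial marginal likelihood, the kind of criterion a posterior-mode search over partitions would compare. It is a simplified illustration only, not the recursive Delaunay-triangulation scheme of the paper; the binary alphabet, the order, the hyperparameter alpha, and the function names (context_counts, class_log_marginal, partition_score) are assumptions made for the example.

```python
import itertools
from collections import Counter
from math import lgamma

# Hypothetical toy setup: binary alphabet, order-2 contexts (sample paths).
ALPHABET = ["0", "1"]
ORDER = 2

def context_counts(sequence, alphabet=ALPHABET, order=ORDER):
    """Count next-symbol occurrences for every length-`order` context."""
    counts = {"".join(c): Counter()
              for c in itertools.product(alphabet, repeat=order)}
    for i in range(order, len(sequence)):
        counts[sequence[i - order:i]][sequence[i]] += 1
    return counts

def class_log_marginal(counts_in_class, alphabet=ALPHABET, alpha=1.0):
    """Dirichlet-multinomial log marginal likelihood for one invariance class.

    All contexts in the class share a single conditional distribution,
    so their counts are pooled before the parameters are integrated out.
    """
    pooled = Counter()
    for c in counts_in_class:
        pooled.update(c)
    n = sum(pooled.values())
    k = len(alphabet)
    score = lgamma(k * alpha) - lgamma(k * alpha + n)
    for s in alphabet:
        score += lgamma(alpha + pooled[s]) - lgamma(alpha)
    return score

def partition_score(partition, counts):
    """Log marginal likelihood of a partition of the contexts into classes."""
    return sum(class_log_marginal([counts[ctx] for ctx in cls])
               for cls in partition)

# Example: compare the full-order partition (ordinary order-2 chain) with a
# sparse partition that merges the two contexts ending in "1" into one class.
seq = "0110100110101101001101011010"
counts = context_counts(seq)
full = [["00"], ["01"], ["10"], ["11"]]
sparse = [["00"], ["10"], ["01", "11"]]
print(partition_score(full, counts), partition_score(sparse, counts))
```

The higher-scoring partition is the one that better balances model fit against the number of distinct conditional distributions; in the paper this kind of comparison is performed recursively over cells of a Delaunay triangulation rather than by enumerating partitions explicitly.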


Keywords

clustering, Delaunay triangulation, recursive learning, sequence analysis, sparse Markov chains
