Of Kalman Filters and Hidden Markov Models

This post provides some background on work we did on part-of-speech tagging for a modest, domain-specific corpus.  The path starts from Hsu et al. 2012, which discusses spectral methods based on the singular value decomposition (SVD) as a better way to learn hidden Markov models (HMMs), and the use of word vectors instead of clustering to improve aspects of NLP such as part-of-speech tagging.
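The appeal of the spectral approach is that it replaces EM's non-convex likelihood climb with plain linear algebra over low-order moments: an SVD of the bigram co-occurrence matrix yields a subspace in which "observable operators" for each word can be read off directly, with no local optima.  Below is a minimal numpy sketch of that recipe on a small synthetic HMM.  The toy model and the use of exact population moments are our illustration assumptions (a real run would use empirical frequencies of observation singles, pairs, and triples); the names P1, P21, P3x1, b1, binf, and Bx loosely follow the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_states = 6, 3

# A random toy HMM (our own illustration, not from the paper):
# T[i, j] = Pr(next state = i | current state = j)
# O[x, j] = Pr(observation = x | state = j), pi = initial distribution
T = rng.dirichlet(np.ones(n_states), size=n_states).T
O = rng.dirichlet(np.ones(n_obs), size=n_states).T
pi = rng.dirichlet(np.ones(n_states))

# Low-order observable moments.  In practice these are empirical
# frequencies over the corpus; here we compute them exactly from the
# true model so the demonstration is deterministic.
D = np.diag(pi)
P1 = O @ pi                                  # P1[i]     = Pr(x1 = i)
P21 = O @ T @ D @ O.T                        # P21[i, j] = Pr(x2 = i, x1 = j)
P3x1 = [O @ T @ np.diag(O[x]) @ T @ D @ O.T  # Pr(x3 = i, x2 = x, x1 = j)
        for x in range(n_obs)]

# Spectral step: U spans the top-m left singular vectors of P21, and the
# "observable operators" follow by linear algebra alone -- no EM.
U = np.linalg.svd(P21)[0][:, :n_states]
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
Bx = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n_obs)]

def spectral_prob(seq):
    """Estimated Pr(x1..xt) = binf . B_{xt} ... B_{x1} b1."""
    b = b1
    for x in seq:
        b = Bx[x] @ b
    return binf @ b

def forward_prob(seq):
    """Reference joint probability via the standard forward recursion."""
    alpha = pi * O[seq[0]]
    for x in seq[1:]:
        alpha = O[x] * (T @ alpha)
    return alpha.sum()

seq = [0, 3, 1, 5]
print(spectral_prob(seq), forward_prob(seq))  # should agree closely
```

With exact moments the two numbers agree up to numerical error; with empirical moments the spectral estimate converges as the sample grows, though at finite sample sizes small sequence probabilities can even come out slightly negative, a known quirk of spectral HMM estimators.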

The use of vector representations of words for machine learning on natural language is not all that new.  A quarter century ago, Brown et al. 1992 introduced hierarchical agglomerative clustering of words based on mutual information, and the use of those clusters within hidden Markov language models.  One notable difference versus today's word vectors is that the paths through the hierarchy of clusters to the words at the leaves correspond to vectors of bits (i.e., Boolean features) rather than real-valued features.
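To make the bit-vector point concrete, here is a tiny sketch.  The merge hierarchy below is hand-made for illustration (Brown et al. learn theirs by greedy merges chosen to minimize the loss in average mutual information); each word's features are simply the left/right choices on the path from the root to its leaf.

```python
# Toy Brown-style bit features.  Internal nodes are (left, right) pairs;
# leaves are words.  This tree is invented for the example, not learned.
tree = ((("the", "a"), ("cat", "dog")), (("runs", "sleeps"), "quickly"))

def bit_paths(node, prefix=""):
    """Map each leaf word to its root-to-leaf path of 0/1 choices."""
    if isinstance(node, str):
        return {node: prefix}
    left, right = node
    return {**bit_paths(left, prefix + "0"), **bit_paths(right, prefix + "1")}

print(bit_paths(tree))
# {'the': '000', 'a': '001', 'cat': '010', 'dog': '011',
#  'runs': '100', 'sleeps': '101', 'quickly': '11'}
```

Note that truncating these bit strings to a fixed prefix length yields progressively coarser clusters, which is why such features are typically used at several prefix lengths at once in downstream taggers.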
