Learning
The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and output probabilities, usually in the sense of the maximum likelihood estimate of the HMM's parameters given the observed sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum of the likelihood can be found efficiently using the Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the expectation–maximization (EM) algorithm.
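As a concrete illustration, the sketch below shows one common form of Baum–Welch re-estimation for a discrete-output HMM in NumPy, with per-step scaling of the forward and backward variables to avoid numerical underflow. It is a minimal sketch, not a reference implementation: the names (baum_welch, A for the transition matrix, B for the output probabilities, pi for the initial distribution) and the toy data in the demo are illustrative choices, not drawn from the article or any particular library.

```python
import numpy as np

def forward(obs, A, B, pi):
    """Scaled forward pass; returns normalized alphas and per-step scaling factors."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N))
    scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    return alpha, scale

def backward(obs, A, B, scale):
    """Scaled backward pass using the forward pass's scaling factors."""
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    return beta

def baum_welch(obs, A, B, pi, n_iter=100):
    """EM re-estimation of (A, B, pi) from a single observation sequence."""
    obs = np.asarray(obs)
    T = len(obs)
    N, M = B.shape
    log_likelihood = []
    for _ in range(n_iter):
        # E-step: posterior state and transition probabilities given current parameters
        alpha, scale = forward(obs, A, B, pi)
        beta = backward(obs, A, B, scale)
        log_likelihood.append(np.log(scale).sum())

        gamma = alpha * beta                      # gamma[t, i] = P(state i at time t | obs)
        gamma /= gamma.sum(axis=1, keepdims=True)

        xi = np.zeros((T - 1, N, N))              # xi[t, i, j] = P(transition i -> j at t | obs)
        for t in range(T - 1):
            xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / scale[t + 1]

        # M-step: re-estimate parameters from expected counts
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        B_new = np.zeros((N, M))
        for k in range(M):
            B_new[:, k] = gamma[obs == k, :].sum(axis=0)
        B = B_new / gamma.sum(axis=0)[:, None]
    return A, B, pi, log_likelihood

if __name__ == "__main__":
    # Toy demo on made-up data: 2 hidden states, 3 output symbols.
    rng = np.random.default_rng(0)
    obs = rng.integers(0, 3, size=200)
    A0 = np.full((2, 2), 0.5)
    B0 = rng.dirichlet(np.ones(3), size=2)
    pi0 = np.array([0.5, 0.5])
    A, B, pi, ll = baum_welch(obs, A0, B0, pi0, n_iter=20)
    print("log-likelihood per iteration:", np.round(ll, 3))
```

Each iteration is guaranteed not to decrease the likelihood of the training data, but the procedure converges only to a local optimum, so in practice it is often run from several random initializations and the best-scoring result is kept.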