Enter your responses in a file report.pdf/docx.
J=1 | W=1 | B=1 | C=1 | R=1 |
T | 80 | 20 | 70 | 50 |
F | 30 | 50 | 30 | 40 |
Part II: HMM naive solution (10 points)
In the remainder of this assignment, you will implement a basic Hidden Markov Model (HMM). We'll use the HMM from our in-class part-of-speech tagging example, whose states are PropNoun, Noun, Verb, Det. The transition probabilities are the same as in the example shown in class. The observation probabilities are given below and are also defined in the provided file hmm_starter.m.
State/Observation | john | mary | cat | saw | ate | a | the |
PropNoun | 0.40 | 0.40 | 0.10 | 0.01 | 0.05 | 0.03 | 0.01 |
Noun | 0.25 | 0.05 | 0.30 | 0.25 | 0.05 | 0.05 | 0.05 |
Verb | 0.04 | 0.05 | 0.04 | 0.45 | 0.40 | 0.01 | 0.01 |
Det | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.45 | 0.50 |
In a function naive_solution.m, write code to compute the probability of observing each of the following sentences, using the naive solution. You can map each word to a number that is its index into our vocabulary (the union of the column headers above, except the first one); a sentence is then just a vector of numbers. See Part III for an example. Use the provided permn.zip to enumerate permutations with repetition, which gives your list of all possible state sequences.
Inputs:
Outputs:
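To make the naive solution concrete, here is a sketch in Python (the assignment itself asks for MATLAB). The observation matrix B is taken from the table above; the transition matrix A and initial distribution pi are placeholder values, since the real ones come from the in-class example in hmm_starter.m. itertools.product plays the role of permn: it enumerates every possible state sequence of length T.

```python
from itertools import product

import numpy as np

# Vocabulary: the column headers of the observation table above.
VOCAB = ["john", "mary", "cat", "saw", "ate", "a", "the"]
STATES = ["PropNoun", "Noun", "Verb", "Det"]

# Observation probabilities B[s, w] = P(word w | state s), from the table.
B = np.array([
    [0.40, 0.40, 0.10, 0.01, 0.05, 0.03, 0.01],  # PropNoun
    [0.25, 0.05, 0.30, 0.25, 0.05, 0.05, 0.05],  # Noun
    [0.04, 0.05, 0.04, 0.45, 0.40, 0.01, 0.01],  # Verb
    [0.01, 0.01, 0.01, 0.01, 0.01, 0.45, 0.50],  # Det
])

# PLACEHOLDER transition and initial probabilities -- the actual values are
# those from the in-class example (see hmm_starter.m); these are made up.
A = np.array([
    [0.10, 0.10, 0.70, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.30, 0.30, 0.10, 0.30],
    [0.05, 0.85, 0.05, 0.05],
])
pi = np.array([0.40, 0.20, 0.10, 0.30])


def naive_solution(sentence):
    """P(sentence) by summing over every possible state sequence.

    `sentence` is a list of 0-based word indices into VOCAB (MATLAB would
    use 1-based indices). product(range(N), repeat=T) enumerates all N^T
    ordered state sequences, as permn does in the MATLAB starter code.
    """
    T, N = len(sentence), len(STATES)
    total = 0.0
    for seq in product(range(N), repeat=T):
        p = pi[seq[0]] * B[seq[0], sentence[0]]
        for t in range(1, T):
            p *= A[seq[t - 1], seq[t]] * B[seq[t], sentence[t]]
        total += p
    return total
```

Note the cost: for N states and a sentence of length T, this sums over N^T sequences, which is exactly why the forward algorithm exists; for the short sentences in this assignment, though, brute force is fine.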
Part III: Testing HMM on part-of-speech tagging (10 points)
Finally, in a script hmm_demo.m, pick five of the sentences below, and compute their probability of occurrence. In a file report.pdf/docx, discuss what you observe about which of them seem more likely than others, and whether what you observe makes sense.
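The word-to-index mapping that hmm_demo.m needs can be sketched with a small hypothetical helper (Python here, with 0-based indices; the MATLAB version would use 1-based indices into the same vocabulary):

```python
# Vocabulary: the column headers of the observation table in Part II.
VOCAB = ["john", "mary", "cat", "saw", "ate", "a", "the"]


def encode(sentence):
    """Map a sentence string to a vector of 0-based vocabulary indices.

    Hypothetical helper: raises ValueError for out-of-vocabulary words.
    """
    return [VOCAB.index(w) for w in sentence.lower().split()]
```

For example, encode("john saw the cat") gives [0, 3, 6, 2], which can then be passed to naive_solution to score the sentence.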
Submission: Please include the following files: