Yes, this is a bit circular, strictly speaking, but the idea is to rediscover these parameters using the Baum-Welch (expectation-maximization) method. We will use the function dhmm_em.m from the HMM toolbox. For this week's write-up, please copy from the screen your best approximation to the parameters we used to generate the data, as learned by the training algorithm dhmm_em. What adjustments seemed to help or hurt your result? That is, did raising the maximum number of iterations help? Did generating more data help? Did insisting on a more stringent threshold for the change in log-likelihood (LL) from one iteration to the next help?
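To make the knobs in those questions concrete, here is a minimal, self-contained NumPy sketch of the same Baum-Welch loop that dhmm_em runs (the names sample_dhmm, baum_welch, max_iter, and thresh are mine, not the toolbox's): max_iter caps the number of EM iterations, thresh is the stopping criterion on the change in LL, and the sequence length T controls how much training data you have. This is an illustrative analogue, not the toolbox code.

```python
import numpy as np

def normalize(a, axis=None):
    """Scale a nonnegative array so it sums to 1 (overall or along axis)."""
    if axis is None:
        return a / a.sum()
    return a / a.sum(axis=axis, keepdims=True)

def sample_dhmm(pi, A, B, T, rng):
    """Sample one observation sequence of length T from a discrete HMM."""
    states = np.empty(T, dtype=int)
    obs = np.empty(T, dtype=int)
    states[0] = rng.choice(len(pi), p=pi)
    obs[0] = rng.choice(B.shape[1], p=B[states[0]])
    for t in range(1, T):
        states[t] = rng.choice(len(pi), p=A[states[t - 1]])
        obs[t] = rng.choice(B.shape[1], p=B[states[t]])
    return obs

def baum_welch(obs, K, M, max_iter=50, thresh=1e-4, seed=0):
    """EM for a discrete HMM (K hidden states, M symbols) on one sequence.

    Returns (pi, A, B, lls), where lls holds the log-likelihood at each
    iteration; EM guarantees this sequence is nondecreasing."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    # Random (row-stochastic) initial guesses, like mk_stochastic in the toolbox.
    pi = normalize(rng.random(K))
    A = normalize(rng.random((K, K)), axis=1)
    B = normalize(rng.random((K, M)), axis=1)
    T = len(obs)
    lls = []
    for _ in range(max_iter):
        # E-step: scaled forward pass; c[t] are the per-step scaling factors,
        # so log P(obs) = sum of log c[t].
        alpha = np.zeros((T, K))
        c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum()
        alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum()
            alpha[t] /= c[t]
        lls.append(np.log(c).sum())
        if len(lls) > 1 and abs(lls[-1] - lls[-2]) < thresh:
            break  # LL changed by less than the threshold: stop early
        # Scaled backward pass.
        beta = np.ones((T, K))
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = normalize(alpha * beta, axis=1)  # per-step state posteriors
        xi = np.zeros((K, K))                    # expected transition counts
        for t in range(T - 1):
            xi += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
        # M-step: re-estimate parameters from the expected counts.
        pi = gamma[0]
        A = normalize(xi, axis=1)
        B = normalize(
            np.array([gamma[obs == m].sum(axis=0) for m in range(M)]).T, axis=1
        )
    return pi, A, B, lls
```

Experimenting with this sketch shows the same trade-offs the questions ask about: a longer sequence (larger T) generally moves the learned A and B closer to the generating values, while a looser thresh or a small max_iter can stop EM before it has converged. Note also that EM only finds a local optimum, so different random initializations (the seed here) can give different answers.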