Author Topic: parameter learning  (Read 8656 times)

Offline ngh

  • Newbie
  • *
  • Posts: 6
parameter learning
« on: November 12, 2010, 18:36:58 »
Hello All
Could somebody please tell me why, whenever I run the EM wizard on the same data set, the output is different? And also, why should I randomize the data set when I run the EM wizard? ::)
Your help would be appreciated,
NGH

Offline Anders L Madsen

  • HUGIN Expert
  • Hero Member
  • *****
  • Posts: 2281
Re: parameter learning
« Reply #1 on: November 16, 2010, 10:05:31 »
Hello,

If you run the EM wizard on the same network structure using the same data, the same parameter setting and the same initial starting point (i.e., the same initial parameterisation of the CPTs), then the result will be the same each time.

From the second part of your question it seems that you are randomising the CPTs before running EM. In this case the results will not necessarily be the same. When you rerun the EM algorithm, you should select the parameterisation producing the highest quality score (i.e., AIC or BIC score).

If you have missing data, then the EM algorithm may have different local maximum quality scores. Rerunning the EM algorithm with different initial parameterisations is one way to try to find an optimal quality score.
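To make the restart idea concrete, here is a minimal sketch in plain Python, not using the HUGIN API. It runs EM on a toy two-coin mixture (made-up data) from several random initial parameterisations and keeps the run with the highest log-likelihood; HUGIN's wizard ranks runs by AIC/BIC instead, but the principle is the same.

```python
import random
import math

def em_two_coins(flips, n_flips, iters=50, seed=0):
    """EM for a uniform mixture of two biased coins.

    flips: number of heads observed in each trial of n_flips tosses.
    Returns (theta_A, theta_B, log_likelihood).
    """
    rng = random.Random(seed)
    # Random initial parameterisation (the analogue of randomising the CPTs).
    tA, tB = rng.random(), rng.random()
    for _ in range(iters):
        hA = tAils = hB = tBils = 0.0
        for h in flips:
            t = n_flips - h
            # E-step: responsibility of coin A for this trial.
            lA = tA**h * (1 - tA)**t
            lB = tB**h * (1 - tB)**t
            wA = lA / (lA + lB)
            hA += wA * h
            tAils += wA * t
            hB += (1 - wA) * h
            tBils += (1 - wA) * t
        # M-step: re-estimate each coin's bias from expected counts.
        tA = hA / (hA + tAils)
        tB = hB / (hB + tBils)
    # Log-likelihood of the data under the fitted uniform mixture.
    ll = sum(math.log(0.5 * tA**h * (1 - tA)**(n_flips - h)
                      + 0.5 * tB**h * (1 - tB)**(n_flips - h))
             for h in flips)
    return tA, tB, ll

# Hypothetical data: heads out of 10 tosses per trial.
data = [9, 8, 7, 2, 1, 3, 8, 2]

# Random restarts: rerun EM from different starting points and keep
# the parameterisation with the best score, since each run may only
# reach a local maximum.
best = max((em_two_coins(data, 10, seed=s) for s in range(10)),
           key=lambda r: r[2])
```

Because every restart draws a different initial parameterisation, individual runs can land on different local maxima; selecting the best-scoring run is exactly the "rerun and keep the highest quality score" advice above.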
HUGIN EXPERT A/S

Offline ngh

  • Newbie
  • *
  • Posts: 6
Re: parameter learning
« Reply #2 on: November 16, 2010, 12:12:09 »
Thank you very much