Author Topic: equal probability distribution (EM learning)  (Read 14817 times)

Offline Marloes

  • Newbie
  • *
  • Posts: 2
equal probability distribution (EM learning)
« on: March 19, 2008, 10:21:17 »
I have a question about parameter learning in Hugin: if no data is available for a particular combination of parent states, the EM algorithm produces an equal (uniform) probability distribution for that combination of states. Randomizing the initial distribution in the prior-distribution-knowledge step does not avoid this uniform distribution. What is the reason for it? Is it a rule of the EM algorithm? Are there other possibilities?

thanks in advance,
Marloes

Offline Frank Jensen

  • HUGIN Expert
  • Hero Member
  • *****
  • Posts: 576
Re: equal probability distribution (EM learning)
« Reply #1 on: March 25, 2008, 15:57:40 »
The EM algorithm in Hugin essentially works as follows:

In each iteration of the algorithm, the cases are processed once: for each case, the joint probability distribution over each node and its parents (given the case) is computed. For a given node, the average of these probability distributions over all cases is formed. At the end of an iteration, this average distribution is converted to a conditional probability table. If some parent state configuration has never been seen, the average distribution contains zeros for all states of the child node given that parent configuration. The Hugin EM implementation arbitrarily chooses a uniform distribution in this case. But perhaps it would be better to keep the original distribution instead?
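The M-step row handling described above can be sketched as follows. This is illustrative code, not Hugin's actual source: each row of `counts` holds the expected counts for the child's states under one parent configuration, and a zero row (an unseen configuration) triggers either the uniform fallback the post describes or, optionally, the original prior row.

```python
def counts_to_cpt(counts, prior_rows=None):
    """Convert expected counts to CPT rows, one per parent configuration.

    A row of all zeros means that parent configuration never occurred in
    the data. Hugin's EM (as described in this thread) then falls back to
    a uniform row; passing prior_rows keeps the original distribution
    instead, which is the alternative the post suggests.
    """
    cpt = []
    for i, row in enumerate(counts):
        total = sum(row)
        if total > 0:
            # Normal case: normalize the expected counts.
            cpt.append([c / total for c in row])
        elif prior_rows is not None:
            # Unseen configuration: keep the original (prior) row.
            cpt.append(list(prior_rows[i]))
        else:
            # Unseen configuration: arbitrary uniform fallback.
            n = len(row)
            cpt.append([1.0 / n] * n)
    return cpt
```

For example, `counts_to_cpt([[3, 1], [0, 0]])` normalizes the first row to `[0.75, 0.25]` and, because the second parent configuration was never seen, fills its row with the uniform `[0.5, 0.5]`.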

Offline Marloes

Re: equal probability distribution (EM learning)
« Reply #2 on: March 31, 2008, 16:54:40 »
Thanks for the reply and the clear explanation. Is it possible to keep the original distribution in this case, as you suggested? Are there settings in Hugin to change this, or are there other possible solutions? Thanks, Marloes

Offline Frank Jensen

Re: equal probability distribution (EM learning)
« Reply #3 on: April 01, 2008, 15:21:43 »
In the current version of Hugin, there is no way to avoid the uniform probability distributions produced by the EM algorithm, except to modify the CPTs directly after the algorithm has run.
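The post-processing workaround mentioned above could look roughly like this. This is a hedged, generic sketch (not a Hugin API): after EM has run, rows that came out uniform are replaced with the corresponding rows of the original CPT. Note the caveat that this cannot distinguish an unseen parent configuration from a row that legitimately converged to a uniform distribution.

```python
def restore_unseen_rows(learned_cpt, original_cpt, tol=1e-9):
    """Replace uniform rows of a learned CPT with the original rows.

    learned_cpt and original_cpt are lists of rows, one row per parent
    configuration. A row within tol of the uniform distribution is
    assumed to come from an unseen configuration and is restored.
    """
    restored = []
    for learned, original in zip(learned_cpt, original_cpt):
        n = len(learned)
        is_uniform = all(abs(p - 1.0 / n) < tol for p in learned)
        restored.append(list(original) if is_uniform else list(learned))
    return restored
```

For example, with a learned CPT `[[0.5, 0.5], [0.8, 0.2]]` and an original `[[0.9, 0.1], [0.3, 0.7]]`, only the first (uniform) row is restored, giving `[[0.9, 0.1], [0.8, 0.2]]`.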

We have updated the source code to keep the original conditional distributions for the parent state configurations with no case data.  The changes are in the native code (not the Java GUI code), so the update cannot be distributed as part of a GUI update.  But the update will be in the next version of Hugin.