Author Topic: Entering Marginal Distributions as Evidence  (Read 17152 times)

Offline bill.raynor

  • Newbie
  • *
  • Posts: 10
    • View Profile
Entering Marginal Distributions as Evidence
« on: May 22, 2012, 23:14:37 »
As I understand it, HUGIN allows us to enter single-case data as evidence. A recent paper (Peng et al., 2012, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems) makes a passing reference to Madsen & Jensen (1999) as implementing IPFP for BN junction trees (to match marginals). Is that capability available in HUGIN? If so, how would I do that? I have tried entering likelihoods, but the second marginal entered as a likelihood will modify the first when it is propagated.

As an example, suppose I have a BN constructed on some historical data, and have observed the marginals for some of the variables in a new population. I would like to adapt the BN to match those marginals, see how that propagates through the network, test possible interventions in that new population, etc.

Thanks

Offline Anders L Madsen

  • HUGIN Expert
  • Hero Member
  • *****
  • Posts: 2295
    • View Profile
Re: Entering Marginal Distributions as Evidence
« Reply #1 on: May 30, 2012, 14:55:16 »
The Madsen & Jensen (1999) paper does not consider IPFP. It introduces Lazy Propagation as an inference algorithm for belief update in Bayesian networks.

HUGIN does not support IPFP.
HUGIN EXPERT A/S

Offline bill.raynor

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Entering Marginal Distributions as Evidence
« Reply #2 on: July 13, 2012, 15:18:01 »
Hello Anders,

Actually, one can approximate IPFP by repeatedly entering likelihood evidence on the nodes of interest. For example, if I want to look at the Asia net where both bronchitis and "lung cancer" are set to 15%/85%, I would enter the likelihood for bronchitis, then for lung cancer given the bronchitis evidence, then bronchitis again, then lung cancer again, and so on, until I get close enough to my target marginals.
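The alternating procedure described above can be sketched outside HUGIN on a toy joint distribution. This is a minimal illustration, not HUGIN code: the starting joint and the function names are hypothetical, and the two scaling steps stand in for the likelihood findings one would enter and propagate in turn.

```python
# IPFP sketch over two binary variables (bronchitis B, lung cancer L).
# The starting joint P(B, L) is made up for illustration; the 15%/85%
# targets are the ones from the post.

def marginal_b(joint):
    """Marginal of B: sum each row over L."""
    return [sum(row) for row in joint]

def marginal_l(joint):
    """Marginal of L: sum each column over B."""
    return [sum(joint[b][l] for b in range(2)) for l in range(2)]

def scale_b(joint, target):
    """One 'likelihood finding' on B: rescale rows to hit the target marginal."""
    cur = marginal_b(joint)
    return [[joint[b][l] * target[b] / cur[b] for l in range(2)] for b in range(2)]

def scale_l(joint, target):
    """One 'likelihood finding' on L: rescale columns to hit the target marginal."""
    cur = marginal_l(joint)
    return [[joint[b][l] * target[l] / cur[l] for l in range(2)] for b in range(2)]

joint = [[0.05, 0.25], [0.30, 0.40]]  # hypothetical P(B, L)
target = [0.15, 0.85]

for _ in range(50):   # alternate the two corrections until both marginals settle
    joint = scale_b(joint, target)
    joint = scale_l(joint, target)

print(marginal_b(joint))  # both marginals converge toward [0.15, 0.85]
print(marginal_l(joint))
```

Each scaling step fixes one marginal exactly but disturbs the other slightly, which is why the corrections must be cycled; with strictly positive entries the alternation converges geometrically.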

This is of interest to me because it lets me take a generic network that describes one population and retarget it toward another, related population, which, as I read it, was the original intent of the Deming & Stephan (1940) paper. That is how I originally learned of the technique, in sampling and demography.

Is there any chance of getting this automated in future versions of HUGIN?

thanks

Bill

Offline Martin

  • HUGIN Expert
  • Hero Member
  • *****
  • Posts: 613
    • View Profile
Re: Entering Marginal Distributions as Evidence
« Reply #3 on: July 18, 2012, 09:57:06 »
Quote
For example, if I want to look at the Asia net where both bronchitis and "lung cancer" are set to 15%/85%, I would enter the likelihood for bronchitis, then for lung cancer given the bronchitis evidence, then bronchitis again, then lung cancer again, and so on, until I get close enough to my target marginals.

We are looking into what could be done to automate these manual steps; hopefully it will make it into the next release of HUGIN.
Hugin Expert A/S

Offline bill.raynor

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Entering Marginal Distributions as Evidence
« Reply #4 on: July 19, 2012, 16:25:40 »
Thanks Martin for this and your other replies.

To summarize my interest:
1. From time to time I get individual-level (panel) data from market surveys/studies which, after some form of centering, can be used to update/generate a Bayes net. (I have to do all my testing by hand to control for "stratification"/"selection bias", via Cochran-Mantel-Haenszel tests or Quade's matched nonparametric correlation.)

2. When I get marginals back on a different study, there is an immediate interest in tuning the BN to match those marginals and get some information about the structure of that study (particularly what caused the primary outcome). This is where the soft evidence and IPFP get used. At the moment I am doing this by hand, which is of course a pain.

3. Some of the variables never show up in individual form, but only as confidence intervals, means and standard deviations, and so on for each stratum. I can convert those to likelihood weights using the usual soft-discretization tricks (e.g. mixture weights for known kernels), which leads to a further need for likelihood evidence.
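The soft-discretization step in point 3 can be sketched as follows. This is an illustrative assumption on my part: a reported mean and standard deviation are treated as a normal kernel, and the mass falling in each discrete node state becomes the likelihood weight. The bin edges and statistics below are hypothetical.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma) via the error function (standard library only)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bin_likelihoods(edges, mu, sigma):
    """Probability mass of N(mu, sigma) in each interval cut by `edges`.

    For k edges this returns k + 1 weights, one per node state, covering
    (-inf, e1], (e1, e2], ..., (ek, inf). They sum to 1 and can be entered
    as a likelihood finding on the discretized node.
    """
    cdf = [normal_cdf(e, mu, sigma) for e in edges]
    return ([cdf[0]]
            + [cdf[i] - cdf[i - 1] for i in range(1, len(cdf))]
            + [1.0 - cdf[-1]])

# Hypothetical reported statistics for one stratum: mean 4.2, sd 1.1,
# over a node with three states (-inf, 3], (3, 5], (5, inf).
weights = bin_likelihoods([3.0, 5.0], mu=4.2, sigma=1.1)
print(weights)
```

A mixture of kernels (one per stratum) would just be a weighted sum of such per-kernel weight vectors.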