Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Anders L Madsen

Pages: 1 2 [3] 4 5 ... 19
FAQ / Re: Model validation
« on: June 14, 2014, 13:59:26  »
The number of free parameters in a discrete CPT is (n-1) * m, where n is the number of states in the child and m is the number of parent configurations.

You can find a lot of information on AIC and BIC by performing a few Google searches.
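As a plain illustration of the formula above (this is not HUGIN API code; the function names and the score conventions are my own), the sketch below computes the free-parameter count of a discrete CPT and the common textbook forms of AIC and BIC. Note that tools differ in sign convention; here lower scores are better.

```python
from math import log

def free_parameters(n_states_child, parent_state_counts):
    """Free parameters of a discrete CPT: (n - 1) * m, where n is the
    number of child states and m is the number of parent configurations
    (the product of the parent state counts)."""
    m = 1
    for s in parent_state_counts:
        m *= s
    return (n_states_child - 1) * m

def aic(log_likelihood, k):
    # Textbook convention: AIC = 2k - 2*logL (lower is better).
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n_cases):
    # Textbook convention: BIC = k*ln(N) - 2*logL (lower is better).
    return k * log(n_cases) - 2 * log_likelihood

# A ternary child with two binary parents: (3 - 1) * (2 * 2) = 8
print(free_parameters(3, [2, 2]))  # 8
```

The free-parameter count is what plugs into the penalty term k of both criteria; only the treatment of the sample size N distinguishes them.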

General Discussion / Re: Decision Trees
« on: May 01, 2014, 11:13:58  »
Dear Marco,

It is correct that HUGIN does not directly support decision trees.

Instead, HUGIN supports influence diagrams and limited memory influence diagrams (LIMIDs). LIMIDs were introduced in version 7.0. With their introduction, the solution algorithm changed from being based on Jensen, Jensen & Dittmer (1994) to being based on Lauritzen & Nilsson (2001). The solution algorithm is referred to as Single Policy Updating (SPU). SPU requires that all informational links are specified explicitly in the model, which changes the interpretation of the structure of an influence diagram compared to previous versions of HUGIN.

Hope this helps

Lauritzen, S. L. and Nilsson, D., (2001), Representing and solving decision problems with limited information. Management Science, 47, 1238 - 1251.

Jensen, F., Jensen, F. V., Dittmer, S. L., (1994), From influence diagrams to junction trees, Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pages 367-373.

OpenNESS / Re: zero intervals and benefit cost analysis
« on: April 14, 2014, 19:36:44  »
Can you help?

We can try.

1) Explanation.

The zero-width-interval feature was introduced to extend the Table Generator functionality. We did not consider including it in the Learning Wizards, so the discretization operator in the Learning Wizard simply ignores zero-width intervals.

2) Workaround.

Here is a workaround (I assume that you are using the Learning Wizard and not only the EM Learning Wizard):

  • do the discretization and structure learning in the Learning Wizard
  • leave the Learning Wizard prior to the EM part (parameter estimation)
  • manually add the "zero width interval" to the appropriate node
  • use the EM Learning Wizard to perform the parameter estimation on the adjusted model

If you are using the EM Learning Wizard, then the steps should be adjusted accordingly.

As you have no data for the zero value, you will probably not learn anything about the relation to the parents for this value. The result will probably be a uniform likelihood.

OpenNESS / Re: OpenNESS forum
« on: April 14, 2014, 18:55:41  »
Hi David,

This is not a restricted access area. Everyone can read and post messages here.

Only members of the OpenNESS group can read the restricted part of the forum.

best regards

FAQ / Re: Credible Interval
« on: February 11, 2014, 15:26:21  »
Maybe you can use the variance, which can be displayed in the monitor window of the node in the Graphical User Interface (GUI)?

Display of the variance in the GUI can be enabled in "Network->Network Properties" under the "Monitors"-tab.

This functionality is not available in the API.

Hope this helps

Probabilistic Graphical Models (PGM2014) / Call for papers
« on: February 06, 2014, 15:36:21  »
At the request of Silja Renooij, Universiteit Utrecht, we post this CfP:

Call for Papers
 PGM 2014: The Seventh European Workshop on Probabilistic Graphical Models
 September 17-19, 2014
 Utrecht, The Netherlands

 Important Dates:

 Deadline for abstract submissions: May 12, 2014
 Deadline for paper submissions: May 16, 2014
 Notification of acceptance: June 23, 2014
 Final versions due: July 18, 2014
 Workshop dates: September 17-19, 2014


 Call for papers:

The aim of the workshop is to bring together people interested in
probabilistic graphical models and to provide a forum for discussion
of the latest research developments in this field. To promote interactions
among the participants, parallel sessions will be avoided and the workshop
will be organised around a single thread of plenary and poster sessions.
We welcome theoretical and applied contributions related to various aspects
of probabilistic graphical models, such as:

 - Principles of Bayesian (belief) networks, chain graphs, decision networks, influence diagrams,
   probabilistic relational models, and other probabilistic graphical models (PGMs).
 - Information processing in PGMs (exact and approximate inference).
 - Learning and data mining in the context of PGMs: machine learning approaches,
   statistical testing and search methods, MCMC simulation.
 - Exploitation for the construction of PGMs of results from related disciplines
   such as statistics, information theory, optimization, and decision making under uncertainty.
 - Software systems based on PGMs.
 - Application of PGMs to real-world problems.

Papers submitted for review should report on original, previously unpublished work.
Each submitted paper will be reviewed by at least three reviewers.

All accepted papers will be included in the workshop proceedings.
At this stage, we are negotiating publication by a renowned publisher.

Submission procedure:

PGM 2014 requires electronic submission of papers according to the
instructions found on the web site of the workshop:

The authors must prepare their papers according to the Springer LNCS
format, and submit the full version of the papers by the deadline stated above.
The page limit is 16 pages, including figures and bibliography.
The web page for author instructions contains further format information
and provides access to style files and templates.

Paper presentation:

Papers will be accepted for plenary or poster presentation.
In the proceedings, no distinction is made between the two.
At least one author of each accepted paper is required to register for the workshop.

After the workshop, authors of selected papers will be invited to submit
an extended version of their paper for a special issue of a journal, which is yet to be decided.
The extended papers will undergo a full reviewing process.


The workshop is hosted by the Department of Information and Computing Sciences,
Utrecht University, The Netherlands.

OpenNESS / Project Description
« on: January 22, 2014, 19:44:25  »
The website is dedicated to hosting information on the OpenNESS project, with the main focus on the use of Bayesian networks.

OpenNESS / HUGIN OpenNESS installation package
« on: January 17, 2014, 10:44:13  »
Please find the installation package here:

Dear Marco,

Are there any new examples of LIMIDs that have been published since your previous post of April 13, 2012?

Not that I can recall. I would be very interested in hearing about this too.

Best regards

SelSus / Project Description
« on: November 28, 2013, 11:24:58  »
The website is dedicated to hosting information on the SelSus project, with the main focus on the use of Bayesian networks.

Java / Re: Entropy and mutual information
« on: October 23, 2013, 18:33:31  »

Please post the part of the code where these two methods are called, including the definition of the nodes, so that we can determine the types of the nodes you are trying to compute the entropy and mutual information for.

Best regards

If you want to evaluate the impact of observing a node on the distribution of a specific node, then you may use value of information. The HUGIN GUI has a Value of Information dialog that computes the pair-wise mutual information between a target node and each node in a set of selected nodes.

Also, the Evidence Sensitivity dialog allows you to determine the minimum and maximum probability that observing a node will produce for a specific target node. This analysis can be performed using a set of selected nodes.

Both types of analysis will help you to determine the impact of individual nodes on the distribution of a target node.
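As background for the quantities such an analysis reports (this is not the HUGIN Java API; the function names are illustrative), entropy and pairwise mutual information over discrete distributions can be sketched in plain Python:

```python
from math import log2
from itertools import product

def entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x), in bits."""
    return -sum(px * log2(px) for px in p if px > 0)

def mutual_information(joint, px, py):
    """I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) ), in bits."""
    mi = 0.0
    for i, j in product(range(len(px)), range(len(py))):
        pxy = joint[i][j]
        if pxy > 0:
            mi += pxy * log2(pxy / (px[i] * py[j]))
    return mi

# Perfectly correlated binary variables: observing Y tells you
# everything about X, so I(X;Y) = H(X) = 1 bit.
joint = [[0.5, 0.0], [0.0, 0.5]]
px = [0.5, 0.5]
py = [0.5, 0.5]
print(entropy(px))                        # 1.0
print(mutual_information(joint, px, py))  # 1.0
```

A mutual information of zero means the target distribution is unaffected by observing that node, which is why ranking candidate observations by I(target; candidate) is a useful value-of-information heuristic.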

The information below is taken from the Front Matter PDF file that can be found on this page:

Compared to the first edition, this second edition of the book contains additional material: Sections 6.3.3 and 6.3.4 elaborate on model construction, Section 8.3 describes search and score-based structure learning, Section 8.4 presents a worked example on structure learning, Section 10.3 describes two-way parameter sensitivity analysis, Section 10.4 describes parameter tuning, and an appendix provides a quick reference to model construction with lists of recommendations and pitfalls for the model builder. Finally, lists of examples, figures, and tables have been added.
