Messages - Anders L Madsen

32
HUGIN Training Course Discussion / Re: HUGIN Commandline problem
« on: April 15, 2016, 13:48:11  »
Sorry for the belated reply.

Please read this: http://forum.hugin.com/index.php?topic=259.msg645#msg645.

33
It is possible to specify a limit on the number of parents for the greedy search-and-score structure learning algorithm.

Alternatively, you can consider using structure-restricted models such as the tree-augmented naive Bayes (TAN) model.
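As a conceptual sketch of such a parent limit outside HUGIN (this uses the open-source pgmpy library, not the HUGIN API, and the file and column names are hypothetical):

Code:
# Sketch: greedy search-and-score structure learning with a parent limit,
# using pgmpy (not the HUGIN API). File and column names are hypothetical.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

data = pd.read_csv("cases.csv")       # one row per case, one column per variable

search = HillClimbSearch(data)
model = search.estimate(
    scoring_method=BicScore(data),    # score to maximize during the greedy search
    max_indegree=3,                   # allow at most 3 parents per node
)
print(sorted(model.edges()))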

35
General Discussion / Re: Analysing AIC, BIC and LL scores
« on: February 24, 2015, 14:34:54  »
Quote
My doubts concern the analysis of the results reached after training the model: when I test the network with a test set (of examples that do not belong to the training set), how should I interpret the AIC, BIC and log-likelihood scores? When are they consistent or good enough? Do they have significance on their own, or only compared with the scores of another model? And when comparing with another model, how do I choose the best one?

The AIC, BIC and log-likelihood (LL) scores are criteria for selecting between a set of candidate models representing a data set. The LL specifies how well a model represents a data set, and it can always be increased by making the model more complex. So, this score should only be used to compare models of the same complexity. Both the BIC and AIC scores are based on the LL with a penalty for complexity; the penalty differs between BIC and AIC.

You should use these scores to select between candidate models as a representation of a data set. The higher the score, the better the model represents the data.
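For reference, these are the standard textbook definitions (not a quote from the HUGIN manual), with k free parameters, N cases and maximized log-likelihood LL:

Code:
% Textbook convention (lower is better):
\mathrm{AIC} = 2k - 2\,\mathrm{LL}, \qquad \mathrm{BIC} = k \ln N - 2\,\mathrm{LL}
% Equivalent penalized log-likelihood form (higher is better),
% matching the "higher the score, the better" reading above:
\mathrm{AIC}' = \mathrm{LL} - k, \qquad \mathrm{BIC}' = \mathrm{LL} - \tfrac{1}{2}\, k \ln N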

Quote
can I deduce something (good or bad) about my network?
No, not as far as I know. You can use it as a relative score to compare models and select the model with the highest score.

Quote
Another parameter of the analysis is the ROC curve: which role does it play in the valuation of the goodness of the model?
The ROC can be used to assess the performance of a classification model, i.e., a model used to assign a class label to a set of instances. The ROC curve (and the area under it, the AUC) is a measure of classification performance showing the true positive rate as a function of the false positive rate.

You can find some introductory material on these concepts on Wikipedia.
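A minimal sketch of computing the ROC and AUC with scikit-learn (this is independent of HUGIN; the label and score arrays are hypothetical):

Code:
# Minimal ROC/AUC sketch using scikit-learn (independent of HUGIN).
# y_true holds the actual class labels; y_score holds the predicted
# probability of the positive class for each test case.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # hypothetical labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5])   # hypothetical P(positive)

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points on the ROC curve
auc = roc_auc_score(y_true, y_score)                # area under the ROC
print(f"AUC = {auc:.3f}")                           # 0.5 = random guessing, 1.0 = perfect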

38
FAQ / Re: Analysis Wizard
« on: September 08, 2014, 09:09:37  »
Hi,

Quote
In the Analysis Wizard, how large an error rate in the Test Data Accuracy pane is acceptable?

This depends on the domain and the application. It may also be relevant to look at the AUC of the ROC, which should be at least 0.5 (0.5 corresponds to random guessing). By searching the internet you will be able to find rules of thumb on how to interpret the performance of a classifier based on the AUC of the ROC.

Quote
In the Case table section, probability values of 0.5 or less appear, and there is a large difference between the actual data and the test data (by multiplying the probabilities of each category for discrete data). What is the cause? What can be done to fix it?

I do not understand this comment. If the performance of the model is not sufficient, then the model should be improved. When building a Bayesian network classifier from data, a number of design choices have to be made, e.g., which variables to include, how to discretize numerical variables, and which edges should be present in the model. It is impossible to say how your model could be improved without a detailed description of the model and the data.
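As an illustration of one of these choices, here is a hypothetical sketch of discretizing a numerical variable with pandas (outside HUGIN's wizards; the column name is made up):

Code:
# Hypothetical sketch: discretizing a numerical column into intervals
# before learning, using pandas (independent of HUGIN's Learning Wizard).
import pandas as pd

data = pd.DataFrame({"age": [23, 35, 47, 52, 61, 70]})

# Three equal-frequency bins; pd.cut gives equal-width bins instead.
data["age_binned"] = pd.qcut(data["age"], q=3, labels=["low", "mid", "high"])
print(data)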

39
AMIDST / Project Description
« on: August 11, 2014, 10:35:15  »
The website http://amidst.hugin.com is dedicated to hosting information on the AMIDST project, with the main focus on the use of Bayesian networks.

41
ActiveX / Re: Is it possible to unselect a state ?
« on: August 02, 2014, 08:21:54  »
Hi,

Use RetractFindings for this.


42
FAQ / Re: Model validation
« on: August 02, 2014, 08:18:51  »
You cannot compute the AIC and BIC scores from this information. See section 12.2 of the HUGIN API Reference Manual on how graphical models are scored.


43
FAQ / Re: Model validation
« on: June 14, 2014, 13:59:26  »
The number of free parameters in a discrete CPT is (n - 1) * m, where n is the number of states of the child and m is the number of parent configurations. For example, a child with 3 states and parents with 2 and 4 states has m = 2 * 4 = 8 parent configurations, giving (3 - 1) * 8 = 16 free parameters.

You can find a lot of information on AIC and BIC by performing a few Google searches.
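Putting the pieces together, here is a small hypothetical sketch (the function is made up for illustration; the higher-is-better LL-minus-penalty convention follows the AIC/BIC discussion above):

Code:
# Hypothetical sketch: free parameters of a discrete CPT and penalized
# log-likelihood scores in a "higher is better" convention.
import math

def cpt_free_parameters(child_states, parent_states):
    """(n - 1) * m, where m is the number of parent configurations."""
    m = math.prod(parent_states) if parent_states else 1
    return (child_states - 1) * m

k = cpt_free_parameters(3, [2, 4])        # 16 free parameters
ll = -1234.5                              # assumed log-likelihood of the data
n_cases = 1000                            # assumed number of cases

aic = ll - k                              # LL with the AIC complexity penalty
bic = ll - 0.5 * k * math.log(n_cases)    # LL with the BIC complexity penalty
print(k, aic, bic)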

44
General Discussion / Re: Decision Trees
« on: May 01, 2014, 11:13:58  »
Dear Marco,

It is correct that HUGIN does not directly support decision trees.

Instead, HUGIN supports influence diagrams and limited-memory influence diagrams (LIMIDs). LIMIDs were introduced as part of version 7.0. With the introduction of LIMIDs, the solution algorithm was changed from being based on Jensen, Jensen & Dittmer (1994) to being based on Lauritzen & Nilsson (2001). The solution algorithm is referred to as Single Policy Updating (SPU). SPU requires that all informational links are specified in the model, which changes the interpretation of the structure of an influence diagram compared to previous versions of HUGIN.

Hope this helps
Anders

Lauritzen, S. L. and Nilsson, D. (2001). Representing and solving decision problems with limited information. Management Science, 47, 1238-1251.

Jensen, F., Jensen, F. V. and Dittmer, S. L. (1994). From influence diagrams to junction trees. Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pages 367-373.

45
OpenNESS / Re: zero intervals and benefit cost analysis
« on: April 14, 2014, 19:36:44  »
Quote
Can you help?

We can try.

1) Explanation.

The zero-width-interval feature was introduced to extend the Table Generator functionality. We did not consider including it as part of the Learning Wizards, so the discretization operator in the Learning Wizard simply ignores zero-width intervals.

2) Workaround.

Here is a workaround (I assume that you are using the Learning Wizard and not only the EM Learning Wizard):

  • do the discretization and structure learning in the Learning Wizard
  • leave the Learning Wizard prior to the EM part (parameter estimation)
  • manually add the zero-width interval to the appropriate node
  • use the EM Learning Wizard to perform the parameter estimation on the adjusted model

If you are using only the EM Learning Wizard, then the steps should be adjusted accordingly.

As you have no data in the zero-width interval, you will probably not learn anything about the relation to the parents for this value; the result will probably be a uniform likelihood.
