Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Anders L Madsen

Pages: [1] 2 3 ... 19
1
Our HUGIN is a tool for probabilistic graphical models (see www.hugin.com).

2
Hello,

Are you aware that our HUGIN is a tool for probabilistic graphical models?

Best regards
anders

3
General Discussion / Re: How to show panoramas on a web site
« on: February 16, 2017, 20:04:39  »
Are you aware that our HUGIN is a tool for probabilistic graphical models?

4
General Discussion / Re: File exchange between Hugin and Netica
« on: February 16, 2017, 20:03:32  »
Hi Marco,

No, unfortunately, I do not know of any conversion tool. How many and how complex are the models you need to convert? I'm assuming you are converting from Netica to HUGIN.

Best regards
anders

5
General Discussion / Re: Instalation with W10
« on: May 19, 2016, 10:38:20  »
Did you actually read this: http://forum.hugin.com/index.php?topic=259.msg645#msg645:

Our HUGIN is a tool for probabilistic graphical models  (see www.hugin.com) and does not support such operations.

Perhaps you are looking for the open-source Hugin panorama photo stitcher (with which we are not affiliated):
http://hugin.sourceforge.net/

There is a mailing list for this and similar programs at:
http://groups.google.com/group/hugin-ptx

6
General Discussion / Re: Hugin whitin matlab
« on: May 09, 2016, 13:54:41  »
Hello,

Were you able to run the example from the post http://forum.hugin.com/index.php?topic=233.0? That example shows how to interact with the HUGIN Decision Engine using a few functions of the HUGIN API; you should be able to call any other HUGIN API function in the same way.

There are a number of code examples included in the installation of HUGIN. Take a look in the Test folder.

Hope this helps.

7
General Discussion / Re: Instalation with W10
« on: May 09, 2016, 13:48:04  »
Sorry for the belated reply.

Please read this: http://forum.hugin.com/index.php?topic=259.msg645#msg645.

8
Sorry for the belated reply.

Please read this: http://forum.hugin.com/index.php?topic=259.msg645#msg645.

9
HUGIN Training Course Discussion / Re: HUGIN Commandline problemm
« on: April 15, 2016, 13:48:11  »
Sorry for the belated reply.

Please read this: http://forum.hugin.com/index.php?topic=259.msg645#msg645.

10
It is possible to specify a limit on the number of parents for the greedy search-and-score structure learning algorithm.

Alternatively, you can consider using structure-restricted models such as the tree-augmented naive Bayes (TAN) model.

12
General Discussion / Re: Analysing AIC, BIC and LL scores
« on: February 24, 2015, 14:34:54  »
Quote
My doubts concern the analysis of the results reached after the training of the model: when I test the network with a test set (of examples that don't belong to the training set), how should I interpret the AIC, BIC and log-likelihood scores? When are they consistent or good enough? Have they significance on their own or only compared with the scores of another model? And in the comparison with another model, how to choose the best one?

The AIC, BIC, and log-likelihood (LL) scores are criteria for selecting among a set of candidate models for a data set. The LL measures how well a model represents the data, but it can always be increased by making the model more complex, so the LL alone should only be used to compare models of the same complexity. Both the BIC and AIC scores are based on the LL with a penalty for model complexity; the penalty term differs between BIC and AIC.

You should use these scores to select between candidate models as a representation of a data set. The higher the score, the better the model represents the data.
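To make the penalty difference concrete, here is a small sketch. The formulas use the common higher-is-better convention (AIC = LL − k, BIC = LL − (k/2)·ln n, with k free parameters and n cases), which matches "the higher the score, the better" above; the two candidate models and their numbers are made up for illustration and are not HUGIN output:

```python
import math

def aic(log_likelihood, k):
    """AIC in the higher-is-better convention: LL minus the free-parameter count."""
    return log_likelihood - k

def bic(log_likelihood, k, n):
    """BIC in the higher-is-better convention: LL minus (k/2) * ln(n)."""
    return log_likelihood - 0.5 * k * math.log(n)

# Two hypothetical candidate models fitted to the same 1000-case data set:
# a simpler model (10 free parameters) and a more complex one (40 parameters).
ll_simple, k_simple = -5200.0, 10
ll_complex, k_complex = -5150.0, 40
n = 1000

print(aic(ll_simple, k_simple), bic(ll_simple, k_simple, n))
print(aic(ll_complex, k_complex), bic(ll_complex, k_complex, n))
# The complex model's extra 50.0 of LL outweighs the AIC penalty but not
# the (stronger, sample-size-dependent) BIC penalty, so the two criteria
# can disagree about which model to select.
```
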

Quote
can I deduce something (good or bad) about my network?

No, not as far as I know. You can use it as a relative score to compare models and select the model with the highest score.

Quote
Another parameter of the analysis is the ROC curve: which role does it play in the valuation of the goodness of the model?
The ROC can be used to assess the performance of a classification model, i.e., a model used to assign a class label to a set of instances. The ROC curve (and the area under it) measures classification performance by plotting the true-positive rate as a function of the false-positive rate.

You can find some introductory material on these concepts on Wikipedia.
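To make the TPR/FPR relationship concrete, here is a minimal pure-Python sketch that sweeps the classification threshold and integrates the curve. The labels and scores are made up for illustration (HUGIN's Analysis Wizard computes this for you), and ties in the scores are handled naively:

```python
def roc_points(labels, scores):
    """Sweep the decision threshold and return (FPR, TPR) points.

    labels: true class labels (1 = positive, 0 = negative)
    scores: the model's P(positive) for each instance
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort instances by decreasing score; lowering the threshold
    # classifies them as positive one at a time.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoid rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(auc(roc_points(labels, scores)))  # one mis-ranked pair, so AUC < 1
```
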

15
FAQ / Re: Analysis Wizard
« on: September 08, 2014, 09:09:37  »
Hi,

Quote
In the Analysis Wizard, how large an error rate in the Test Data Accuracy pane is acceptable?

This depends on the domain and the application. It may also be relevant to take a look at the AUC of the ROC (which should be above 0.5, the value corresponding to random guessing). By searching the internet you will be able to find rules of thumb on how to interpret/classify the performance of a classifier based on the AUC of the ROC.

Quote
In section Case table, The probability values 0.5 or less, Large difference between the actual data and test data are available (By multiplying the probabilities of each category for discrete data.), the cause is? What to do to fix it?

I do not understand this comment. If the performance of the model is not sufficient, then the model should be improved. When building a Bayesian network classifier from data, a number of design choices have to be made, e.g., which variables to include, how to discretize numerical variables, and which edges should be present in the model. It is impossible to say how your model could be improved without a detailed description of the model and data.
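As an illustration of one such design choice, here is a minimal equal-width binning sketch for discretizing a numerical variable. The bin count and data are arbitrary examples, and this is just one of several possible discretization strategies (equal-frequency binning is another common choice):

```python
def equal_width_bins(values, n_bins):
    """Discretize numeric values into n_bins equal-width intervals.

    Returns a bin index in [0, n_bins - 1] for each value.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    indices = []
    for v in values:
        # Clamp so the maximum value lands in the last bin
        # instead of one past it.
        idx = min(int((v - lo) / width), n_bins - 1)
        indices.append(idx)
    return indices

# Hypothetical ages discretized into three states (young/middle/old).
ages = [23, 35, 52, 41, 67, 29, 58]
print(equal_width_bins(ages, 3))  # → [0, 0, 1, 1, 2, 0, 2]
```
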
