Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Martin

Pages: [1] 2 3 ... 8
1
HUGIN GUI Discussion / Re: How to classify data
« on: February 06, 2015, 12:00:19  »
Regarding
Quote
1) Is it possible to get something like a multivariate z score (Mahalanobis distance) instead of P(e)?
Sorry, the answer is no.

2
HUGIN GUI Discussion / Re: How to classify data
« on: February 02, 2015, 11:28:49  »
Dear Ivana Cace

You need to perform a number of steps to pre-compute probabilities before doing classification.

1) load the network (and switch to run-mode).
2) load data
3) select your network as the 'run mode model'
4) add column(s) for collecting beliefs
5) process the entire data file in batch - this yields numbers in the columns configured in step 4.
6) start the classification

For steps 1-4, follow the description in the 'Data Frame' manual page
http://download.hugin.com/webdocs/manuals/Htmlhelp/descr_dataframe.html

and lastly follow the steps described in the 'Data classifier performance' manual page
http://download.hugin.com/webdocs/manuals/Htmlhelp/descr_data_classifier_performance.html


I hope this helps
Martin

3
HUGIN GUI Discussion / Re: Default heap size
« on: September 29, 2014, 09:23:47  »
The default value used by the launcher is -Xmx512m.

The only way to set such options manually is to start HUGIN from the command line, as described in the other post.

Kind regards
Martin

4
As you found out, the error means that the value in the case cannot be matched to a state on the node.

Yes, we now use an empty value to indicate missing data - it seems some of the tutorials need to be updated, sorry.

In addition to manually editing the file, you can also replace N/A with the empty value by right-clicking a column header in the data frame and selecting 'replace values'.
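If you would rather script the cleanup, it is just a field-by-field substitution. A minimal sketch (plain Java, assuming a comma-separated file; the token and separator may differ in your data):

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ReplaceNA {
    // Replace the token N/A with the empty value used for missing
    // data, field by field, in one comma-separated line.
    static String replaceNA(String line) {
        return Stream.of(line.split(",", -1))
                     .map(f -> f.trim().equals("N/A") ? "" : f)
                     .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(replaceNA("yes,N/A,no"));   // yes,,no
    }
}
```

Apply the function to every line of the file (keeping the header line as-is) and write the result back out.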

5
General Discussion / Re: Query on EM learning in HUGIN
« on: November 14, 2013, 16:03:15  »
Quote
"The state range of the node is insufficient for the chosen standard distribution"
This usually means that some values in your data cannot be mapped to node states.
You must correct your data before you can continue.
Look for red cells in the EM learning data window. A red cell means that the value cannot be mapped to a state.
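For an interval node, the check behind those red cells amounts to testing whether each numeric value falls inside the node's state range. A rough illustration (plain Java, not the HUGIN API; the bounds and values are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeCheck {
    // Flag data values outside an interval node's state range
    // [low, high] -- these are the values that cannot be mapped
    // to a state and would show up as red cells.
    static List<Double> outOfRange(double low, double high, List<Double> column) {
        List<Double> bad = new ArrayList<>();
        for (double v : column) {
            if (v < low || v > high) {
                bad.add(v);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        // State range [0, 10]; 12.5 cannot be mapped to any state.
        System.out.println(outOfRange(0.0, 10.0, List.of(1.0, 12.5, 9.9)));
    }
}
```

Either extend the node's state range to cover such values or correct the data itself.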

6
Java / Re: batch propagation
« on: September 24, 2013, 09:25:37  »
Yes that would be exactly the way to do it!

If you know the names of the set of utility nodes beforehand, you could structure your code like so:

Code: [Select]
//reference the utility nodes un1 and un2
UtilityNode un1 = (UtilityNode)domain.getNodeByName("un1");
UtilityNode un2 = (UtilityNode)domain.getNodeByName("un2");
...
...
//iterate all cases
for (int i = 0; i < numCases; ++i) {
 //enter case #i, propagate, SPU etc.
 ...
 //report utilities for un1 and un2
 System.out.println("utility 1: " + un1.getExpectedUtility());
 System.out.println("utility 2: " + un2.getExpectedUtility());
 ...
}


By the way, since you are working with a LIMID: if your network contains uninstantiated decisions after propagating a case, you might want to perform SPU (single policy update) before reading out beliefs and utilities.

SPU takes the entered evidence into account to compute new policies for the uninstantiated decisions, which may improve the overall expected utility of the decision problem (and influence the beliefs of nodes that depend on uninstantiated decisions).

See chapter 9.3 "Inference in LIMIDs: Computing optimal policies" in the API reference manual for details about SPU.
http://download.hugin.com/webdocs/manuals/api-manual.pdf

In the Java API the SPU function is Domain.updatePolicies()

7
Java / Re: batch propagation
« on: September 09, 2013, 11:37:50  »
Dear FranzisA

This should give you a good start at solving the task.

To propagate a number of cases read from a data file in batch, one could do the following:
1) parse the data file into the Domain; this inserts a number of cases into the Domain object, which we can refer to by case index.
Note: in order to parse successfully, the data file _must_ be a valid HUGIN data file (see the API reference manual http://download.hugin.com/webdocs/manuals/api-manual.pdf ch. 11.3 Data files).
2) iterate: enter a case, propagate, then perform other actions.

In code such a function could look like this:

Code: [Select]
void doBatch() throws ExceptionHugin {
 //parse data file into domain
 domain.parseCases("path-to-my-data.dat", new DefaultClassParseListener());

 //get number of cases
 int numCases = domain.getNumberOfCases();

 //now iterate, propagate each case and perform some actions
 for (int i = 0; i < numCases; ++i) {
  try {
   domain.enterCase(i);
   domain.propagate(Domain.H_EQUILIBRIUM_SUM, Domain.H_EVIDENCE_MODE_NORMAL);
   //perform some actions, e.g. record beliefs utilities etc
   //...
  } catch (ExceptionHugin eh) {
   System.out.println("Exception processing case #" + i + ": " + eh.getMessage());
  }
 }
}

8
General Discussion / Re: Markov Blanket
« on: July 05, 2013, 11:37:48  »
One can display the Markov blanket for a specific node.

1. Select single target node
2. Navigate the HUGIN menu: Network -> Analysis -> d-separation -> Show Markov Blanket
3. Nodes in the Markov blanket are painted green.
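For reference, the Markov blanket of a node is its parents, its children, and its children's other parents. A small stand-alone sketch of that set computation (plain Java over a toy parent map, not the HUGIN API):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MarkovBlanketDemo {
    // Markov blanket = parents(target) + children(target) + the
    // parents of each child (minus the target itself).
    static Set<String> markovBlanket(Map<String, List<String>> parents, String target) {
        Set<String> blanket = new HashSet<>(parents.getOrDefault(target, List.of()));
        for (Map.Entry<String, List<String>> e : parents.entrySet()) {
            if (e.getValue().contains(target)) {
                blanket.add(e.getKey());          // a child of target
                blanket.addAll(e.getValue());     // the child's co-parents
            }
        }
        blanket.remove(target);
        return blanket;
    }

    public static void main(String[] args) {
        // Toy DAG: A -> C <- B, C -> D (map: node -> its parents)
        Map<String, List<String>> parents =
                Map.of("C", List.of("A", "B"), "D", List.of("C"));
        System.out.println(markovBlanket(parents, "C"));  // A, B and D
    }
}
```

These are exactly the nodes the GUI paints green for the selected target.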

9
FAQ / Aggregate operator example
« on: January 15, 2013, 16:09:28  »
The aggregate operator takes as parameters a numbered chance node (frequency) and an interval chance node (severity), and the aggregate node itself must be an interval discrete function node.

It is a requirement that the state values of the numbered node form an increasing sequence (0, 1, 2, 3, 4, 5, ...) and that the interval state range of the discrete function node is sufficient to cover zero as well as the minimum and maximum products of the frequency and severity state values.
(If you get 'insufficient state range' messages when compiling a network using a discrete function node and the aggregate operator, it is most likely the intervals of the discrete function node that need to be revisited to make sure this last requirement is met.)
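To make the requirement concrete: if the frequency states run 0..maxFreq and the severity intervals span [sevLow, sevHigh] with sevLow >= 0, then the aggregate node's intervals must at least cover [0, maxFreq * sevHigh]. A quick sanity check (plain Java, illustrative only; names are hypothetical):

```java
public class AggregateRange {
    // Required interval range for the aggregate node: it must cover
    // zero and the maximum product of frequency and severity values.
    // Assumes non-negative severity values.
    static double[] requiredRange(int maxFreq, double sevHigh) {
        return new double[] { 0.0, maxFreq * sevHigh };
    }

    public static void main(String[] args) {
        // Frequency states 0..5, severity up to 100 => range [0, 500]
        double[] r = requiredRange(5, 100.0);
        System.out.println(r[0] + " .. " + r[1]);
    }
}
```

If the discrete function node's last interval ends below that upper bound, you will see the 'insufficient state range' message when compiling.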
 
Also there is a small description in the C API manual on page 74 (look for h_operator_aggregate).

Here are two example network files:
aggregate1.oobn
http://download.hugin.com/webdocs/forum_example/aggregate/aggregate1.oobn

aggregate2.oobn
http://download.hugin.com/webdocs/forum_example/aggregate/aggregate2.oobn

In aggregate1 the interval range is finite and in aggregate2 it goes to infinity.

In case you are familiar with the old convolution dialog: the aggregate operator can produce the same results.
The aggregate2 example meets the requirements of the old convolution dialog (interval range from 0 to infinity). One can then use convolution (select frequency+severity, Network -> Analysis -> Convolution) and compute the total impact distribution, which should produce the same result as the discrete function node distribution produced by the aggregate operator.
But where the convolution dialog only lets us inspect the numbers, the aggregate operator allows us to use them for further computations in the network through the child nodes of the discrete function node.
 
Finally, as with normal function nodes, belief updating can only go forward through discrete function nodes. This means that evidence entered on the descendants of a discrete function node has no impact on the beliefs of its parents.

10
If you are using the .NET 4.0 version of the HUGIN API, then you may need to install the VC++ 2010 runtime.

The runtime installation packages can be downloaded from MSDN:

32-bit:
Microsoft Visual C++ 2010 Redistributable Package (x86)
http://www.microsoft.com/en-us/download/details.aspx?id=5555

64-bit:
Microsoft Visual C++ 2010 Redistributable Package (x64)
http://www.microsoft.com/en-us/download/details.aspx?id=14632

11
HUGIN GUI Discussion / Re: Most Likely Configuration
« on: November 27, 2012, 15:41:48  »
Max propagation is not supported for networks with decisions and/or utilities.
The Joint Configurations wizard does not work with decision nodes either (although utility nodes are simply ignored).

If you convert decision nodes to chance nodes, then you can use the joint configurations wizard.

12
HUGIN GUI Discussion / Re: Most Likely Configuration
« on: November 20, 2012, 16:18:47  »
You can check out the max-propagate normal button (look for 'Max Propagation' in the manual), and Network -> Analysis -> Joint Configurations wizard (look for 'Joint Probability Distribution' in the manual).

13
HUGIN Training Course Discussion / Re: Importing and Exporting CPTs
« on: October 01, 2012, 15:44:20  »
Generating tables from expressions should not take that long.

I suspect that the >20 min wait you experience is spent on triangulating the network when switching to run-mode.

You could try experimenting with the triangulation options (click Network -> Network Properties and switch to the Compilation tab).

Specifically, you could try saving and re-using the triangulation:
Step 1: Switch to run-mode (and do the >20 min wait).
Step 2: Save the triangulation - click Network -> Save Elimination Order.
Step 3: Finally, in Network Properties on the Compilation tab, configure the triangulation method to be the 'load triangulation' option - and save the network.

Next time you load the network in HUGIN and switch to run-mode, load the triangulation when prompted and see if the wait is shorter.

14
HUGIN Training Course Discussion / Re: Importing and Exporting CPTs
« on: September 27, 2012, 15:20:56  »
Hello.

I am not sure exactly what you mean.

If you are using expressions, then HUGIN automatically generates the tables when switching to run mode - there are no extra manual steps needed to do inference.

To export a CPT to a text file: in edit mode, press and hold CTRL and click the node to open the CPT.
In the table window, select Functions -> Export Table to export the table to a text file.
If the table has an expression, only the expression is written to the text file.

But why export and import tables each time you start HUGIN?
It is better to just save the net file (click File->Save As) and load it next time you start HUGIN.

15
Quote
For example, if I want to look at the Asia net where both bronchitis and "lung cancer" are set to 15%/85%, I would need to enter the likelihood for bronchitis then lung given bronchitis then bronchitis then lung, etc. till I get close enough to my target marginals.

We are looking into what could be done to automate these manual steps; hopefully it will make it into the next release of HUGIN.
