Recent Posts

Pages: 1 ... 8 9 [10]
Java / Generate multiple simulations in the API
« Last post by DS on September 15, 2015, 14:43:21  »
I have been trying to write a program that generates multiple simulations given some evidence. That is, I want to sample a Bayesian network a number of times given some observations. I know this is possible in the GUI (with the "generate cases" button), and generally I find the API quite straightforward to use, but this time I have got stuck. I may have to write the entire sampling method myself, which I can probably do, but it feels like it should be possible with the domain object's "saveCases(...)" method straight away. However, I cannot get it to work: I only get "NA" values in my output (as if the values were missing), so I guess I somehow need to set up the "cases" beforehand, but I can't figure out how. Does anyone have an idea of what I am doing wrong or what I am missing?
I have attached my code below; I think it is quite straightforward to understand from the comments.
Code:
import java.awt.geom.Point2D;

import COM.hugin.HAPI.ContinuousChanceNode;
import COM.hugin.HAPI.Domain;
import COM.hugin.HAPI.ExceptionHugin;

public class exampleCase {

    public static void main(String[] args) {
        try {
            Domain d = new Domain();
            d.setNodeSize(new Point2D.Double(50, 30));

            // set up the basic structure with a collider over node3
            ContinuousChanceNode node1 = new ContinuousChanceNode(d);
            node1.setLabel("node1");
            node1.setName("node1");
            node1.setPosition(new Point2D.Double(50, 50));
            ContinuousChanceNode node2 = new ContinuousChanceNode(d);
            node2.setLabel("node2");
            node2.setName("node2");
            node2.setPosition(new Point2D.Double(150, 50));
            ContinuousChanceNode node3 = new ContinuousChanceNode(d);
            node3.setLabel("node3");
            node3.setName("node3");
            node3.setPosition(new Point2D.Double(100, 100));
            // node3 needs node1 and node2 as parents before setBeta(...) below
            node3.addParent(node1);
            node3.addParent(node2);
            d.compile();

            // set the parameters
            node1.setAlpha(0.1, 0);       // alpha = intercept
            node1.setGamma(10, 0);        // variance
            node2.setAlpha(0.2, 0);       // alpha = intercept
            node2.setGamma(10, 0);        // variance
            node3.setBeta(0.3, node1, 0); // beta = weights
            node3.setBeta(0.4, node2, 0);
            node3.setGamma(0.5, 0);       // variance
            d.saveAsNet("");

            // So what I want to do is to sample this BN 10 times with the evidence set below.
            // Here I can also check that the network is updated correctly (which it is).
            System.out.println("Name: " + node1.getName() + " val: " + node1.getMean());
            System.out.println("Name: " + node2.getName() + " val: " + node2.getMean());
            System.out.println("Name: " + node3.getName() + " val: " + node3.getMean());

            // However, for the simulation (sampling) part I am unsure how to do it.
            // If I want to save the evidence of a single case I know I can simply write:
            d.saveCase("Single_sample.dat"); // saves the set evidence
            // but what if I want to save multiple (10) simulations given the evidence?
            // Intuitively I would think that this code would do the trick:
            d.saveCases("Multiple_samples.dat", d.getNodes(), null, false, ",", "NA");
            // but unfortunately only NA values appear in the output file,
            // as if all values were missing. So what more do I need?
            // And what do the following methods do? Are they related to the simulation part?
            d.newCase();          // is this if I want different evidence in the different cases?
            d.enterCase(0);       // does this select case 0 for altering the evidence in that case?
            d.setCaseCount(0, 3); // does this repeat case 0 three times?
            d.adapt();            // is this related to the simulations?

        } catch (ExceptionHugin e) {
            e.printStackTrace();
        }
    }
}
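Should the built-in case machinery turn out not to generate samples, the hand-written sampler mentioned in the question is straightforward for this three-node network. Below is a minimal ancestral-sampling sketch in plain Java, with no HAPI dependency; the class name, the use of java.util.Random, and the sampling approach are my own, not part of the HUGIN API:

```java
import java.util.Random;

// Ancestral sampling of the collider network above:
// node1 ~ N(0.1, 10), node2 ~ N(0.2, 10),
// node3 = 0.3*node1 + 0.4*node2 + N(0, 0.5)   (gamma = variance)
public class AncestralSampler {
    public static double[][] sample(int n, long seed) {
        Random rng = new Random(seed);
        double[][] cases = new double[n][3];
        for (int i = 0; i < n; i++) {
            // sample the parents first, then the child given its parents
            double x1 = 0.1 + Math.sqrt(10.0) * rng.nextGaussian();
            double x2 = 0.2 + Math.sqrt(10.0) * rng.nextGaussian();
            double x3 = 0.3 * x1 + 0.4 * x2 + Math.sqrt(0.5) * rng.nextGaussian();
            cases[i][0] = x1;
            cases[i][1] = x2;
            cases[i][2] = x3;
        }
        return cases;
    }

    public static void main(String[] args) {
        for (double[] c : sample(10, 42L))
            System.out.println(c[0] + "," + c[1] + "," + c[2]);
    }
}
```

Note that sampling *given evidence* additionally requires conditioning (e.g. drawing from the posterior Gaussian rather than the prior), which is exactly what the compiled domain computes for you; whether HAPI exposes the C API's simulation functions for this is something a HUGIN developer would have to confirm.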
I have an equirectangular video from an architectural visualization, which I would like to use in a game engine (Unity 3D). The final goal is to embed a series of architectural visualizations that one can navigate using a VR headset. My first approach was to use the video as a texture for a sphere, but the glitches at the north and south poles are too noticeable. My second try will be to decode the video into a series of images with ffmpeg, convert from equirectangular to cubemap, and encode again. I think the conversion can be achieved with the Hugin command-line tools, but I haven't been able to find the appropriate commands for doing so...

I would be very grateful if someone could shed some light on this...

Thank you very much...
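For what it's worth, the per-pixel mapping behind the equirectangular-to-cubemap conversion is compact enough to sketch. Here is a hedged Java version of the geometry for one face (the face layout, orientation conventions, and all names are my own; tools such as ffmpeg or Hugin's stitcher implement the same idea with proper interpolation):

```java
// Map a pixel (px, py) on the front cube face (size f x f, looking along +Z,
// +Y up) to source coordinates in a W x H equirectangular image.
public class EquirectMap {
    public static double[] frontFaceToEquirect(int px, int py, int f, int w, int h) {
        // face pixel -> [-1, 1] coordinates on the cube face
        double u = 2.0 * (px + 0.5) / f - 1.0;
        double v = 2.0 * (py + 0.5) / f - 1.0;
        // direction vector for the front face; other faces permute/negate axes
        double dx = u, dy = -v, dz = 1.0;
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        double lon = Math.atan2(dx, dz);      // [-pi, pi]
        double lat = Math.asin(dy / len);     // [-pi/2, pi/2]
        // longitude/latitude -> equirectangular pixel coordinates
        double sx = (lon / (2 * Math.PI) + 0.5) * w;
        double sy = (0.5 - lat / Math.PI) * h;
        return new double[] { sx, sy };
    }
}
```

Iterating this over every face pixel (with bilinear sampling of the source) produces one cube face; the five other faces differ only in how (u, v) is turned into the direction vector.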

General Discussion / HUGIN Web Service API - Unique Instance ID
« Last post by danielle8776 on September 02, 2015, 00:01:22  »

Currently, I am looking at the HUGIN web service API website. Under resources, it states that the unique instance ID can be extracted from the URL. However, I am having trouble finding where the unique instance ID is.

Can someone lead me in the right direction?

Thank you,
Web Service / Hugin Web Service API - Unique Instance ID
« Last post by danielle8776 on September 01, 2015, 23:47:41  »

Currently, I am looking at the HUGIN web service API website. Under resources, it states that the unique instance ID can be extracted from the URL. However, I am having trouble finding where the unique instance ID is.

Can someone lead me in the right direction?

Thank you,
General Discussion / Challenging dynamic range problem
« Last post by poorbokeh on June 16, 2015, 17:53:46  »
Hi Folks,

I'm working on a project that's giving me problems I can't figure out. I've photographed and stitched a panorama of a living room with pretty high dynamic range, as the image includes darker corners of the room as well as brightly lit areas. The first go-round worked out very well, retaining detail in both the shadows and the brightly lit walls.

However, I took the same batch of images and swapped one of them for a picture that included the homeowners. I stitched in that one image, using all the previously made control points with the other images. Now the image with the brightly lit wall/ceiling is blown out. I didn't change any setting that I know of, but I can't improve that blown-out image. I tried changing the EV, and while that brings back more detail in the highlights, the darker portions then become too dark.

If anyone has some good suggestions I'd love to hear them.

On a related note, there are so many features in Hugin that I don't understand (image adjustments: exposure, alignment, etc.). I've tried to keep my workflow as simple as possible so that I don't get lost in the quicksand of options, but if there's a consolidated manual to help me understand all the features (I currently search through tutorials when needed), I'd really appreciate being pointed to it.

Thanks for any help you can offer!

Network Repository / Accidents OOBN Example (Koller&Pfeffer97)
« Last post by Nicolaj on May 21, 2015, 10:48:48  »
The file Accident_cc.oobn is an example based on:

Daphne Koller and Avi Pfeffer. Object-Oriented Bayesian Networks. In Proceedings of the Thirteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-97), pages 302-313, Providence, Rhode Island, August 1-3, 1997.

You need to be logged in to download the file.
Network Repository / Hugin's OOBN Tutorial
« Last post by Nicolaj on May 21, 2015, 10:38:00  »
The Disease example is part of the OOBN Tutorial at Hugin's website.

Link to the OOBN Tutorial:

The files are attached below (remember to log-in).
disease.oobn must be edited according to the tutorial before diseases.oobn can be opened.
Specifically, the boolean variable's "true" state would be represented by a log-normal distribution, e.g., specified by its mean and error factor (= the square root of the quotient of the 95th and 5th percentiles), or other "input" data.
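As a numeric aside, the error-factor relation mentioned above can be checked directly. A small Java sketch, assuming a log-normal distribution and the standard-normal 95th-percentile z-score of about 1.6449 (class and method names are mine):

```java
// For a log-normal variable, p95/p5 = exp(2 * 1.6449 * sigma),
// so the error factor EF = sqrt(p95/p5) = exp(1.6449 * sigma).
public class ErrorFactor {
    public static final double Z95 = 1.6449; // 95th-percentile z-score of N(0,1)

    // error factor from the 5th and 95th percentiles
    public static double errorFactor(double p5, double p95) {
        return Math.sqrt(p95 / p5);
    }

    // sigma of the underlying normal, recovered from the error factor
    public static double sigmaFromErrorFactor(double ef) {
        return Math.log(ef) / Z95;
    }
}
```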
General Discussion / Re: Analysing AIC, BIC and LL scores
« Last post by Anders L Madsen on February 24, 2015, 14:34:54  »
My doubts concern the analysis of the results reached after the training of the model: when I test the network with a test set (of examples that don't belong to the training set), how should I interpret the AIC, BIC and log-likelihood scores? When are they consistent or good enough? Have they significance on their own or only compared with the scores of another model? And in the comparison with another model, how to choose the best one?

The AIC, BIC and log-likelihood (LL) scores are criteria for selecting between a set of candidate models representing a data set. The LL specifies how well a model represents a data set, and it can always be increased by making the model more complex, so this score alone should only be used to compare models of the same complexity. Both the BIC and AIC scores are based on the LL with a penalty for complexity; the penalty differs between BIC and AIC.

You should use these scores to select between candidate models as a representation of a data set. The higher the score, the better the model represents the data.
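To make the comparison concrete, here is a small Java sketch of penalized scores in the "higher is better" form suggested above. The exact constants HUGIN uses are an assumption on my part; the classical definitions AIC = 2k − 2·LL and BIC = k·ln(n) − 2·LL are minimized instead, but both conventions rank models identically:

```java
// Penalized model-selection scores, "higher is better" form:
//   aic = LL - k            (k = number of free parameters)
//   bic = LL - (k/2)*ln(n)  (n = number of cases in the data set)
public class ModelScores {
    public static double aic(double logLikelihood, int k) {
        return logLikelihood - k;
    }

    public static double bic(double logLikelihood, int k, int n) {
        return logLikelihood - 0.5 * k * Math.log(n);
    }

    public static void main(String[] args) {
        // Model A: simpler, slightly worse fit; Model B: complex, better fit.
        int n = 1000;
        double aicA = aic(-5200.0, 10), aicB = aic(-5160.0, 40);
        double bicA = bic(-5200.0, 10, n), bicB = bic(-5160.0, 40, n);
        System.out.println("AIC prefers " + (aicA > aicB ? "A" : "B")); // B
        System.out.println("BIC prefers " + (bicA > bicB ? "A" : "B")); // A
    }
}
```

With these illustrative numbers the two criteria disagree, which shows the effect of BIC's stronger penalty on large data sets.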

can I deduce something (good or bad) about my network?
No, not as far as I know. You can use it as a relative score to compare models and select the model with the highest score.

Another parameter of the analysis is the ROC curve: which role does it play in the valuation of the goodness of the model?
The ROC can be used for assessing the performance of a classification model, i.e., your model should be used to assign a class label to a set of instances. The ROC (and the area under the ROC) is a measure of classification performance showing the True Positive rate as a function of the False Positive rate.
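As an illustration of the area under the ROC, it can be computed directly from scored instances via the pairwise (Mann-Whitney) formulation: the AUC equals the probability that a random positive instance is scored above a random negative one. A self-contained Java sketch, not tied to any HUGIN API:

```java
// AUC as the fraction of (positive, negative) pairs where the positive
// instance gets the higher score; ties count 1/2.
// O(P*N) pairwise version, written for clarity rather than speed.
public class RocAuc {
    public static double auc(double[] scores, boolean[] positive) {
        double wins = 0;
        long pairs = 0;
        for (int i = 0; i < scores.length; i++) {
            if (!positive[i]) continue;
            for (int j = 0; j < scores.length; j++) {
                if (positive[j]) continue;
                pairs++;
                if (scores[i] > scores[j]) wins += 1.0;
                else if (scores[i] == scores[j]) wins += 0.5;
            }
        }
        return wins / pairs;
    }
}
```

An AUC of 1.0 means perfect ranking of positives above negatives; 0.5 is chance level.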

You can find some introductory material on these concepts on Wikipedia.