Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Gary

1
General Discussion / Free Will
« on: April 27, 2010, 18:47:54  »


Hi - is there a good definition of the 'free will' condition?

I can trap uninstantiated decision nodes in a LIMID within my code, but is this the only condition for a violation of free will? Is there a list of routines that can fail on a violation of 'free will'? For example, I get failures from routines such as HAPI.DconnectedNodes.

Thanks

gary

2
General Discussion / Markov Blanket
« on: March 31, 2008, 11:23:43  »

Is there a way to display or list the Markov blanket for a particular node in a large network (i.e. its parents, children, and the children's other parents)? For a large network with many dependencies this is very difficult to obtain from the diagram (especially if the diagram has been learnt from data and so is not laid out in a systematic fashion) or from the net file (where the information is in the potential clauses but spread throughout a large file).
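In the meantime, here is a minimal Hugin-independent sketch: given a mapping from each node to its parents (which can be collected from the potential clauses of a .net file), the blanket follows directly from its definition. The parsing step is omitted and the example network is hypothetical.

    def markov_blanket(parents, target):
        """Parents, children, and the children's other parents of `target`.

        `parents` maps each node name to the list of its parent node names,
        e.g. as collected from the potential clauses of a .net file.
        """
        blanket = set(parents.get(target, []))                # parents
        for node, pa in parents.items():
            if target in pa:                                  # node is a child
                blanket.add(node)
                blanket.update(p for p in pa if p != target)  # co-parents
        blanket.discard(target)
        return sorted(blanket)

    # Hypothetical example: A -> C <- B, C -> D <- E
    parents = {"A": [], "B": [], "C": ["A", "B"], "D": ["C", "E"], "E": []}
    print(markov_blanket(parents, "C"))   # ['A', 'B', 'D', 'E']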

Thanks

Gary

3
General Discussion / D connected nodes
« on: July 23, 2007, 16:37:22  »

Is it possible to be more precise about which nodes are included in the collections of D-connected (and D-separated) nodes?

Although a utility node cannot be part of the soft/hard evidence node lists, it appears that utility nodes are included in the lists of connected or separated nodes produced by the API calls. Are these nodes treated as chance nodes without evidence? Can a utility node be considered as a source node? What is the corresponding situation for decision nodes? Does their status depend on whether the arrow enters or leaves (i.e. on time order)?

Is the source node always included in the D-connected list?

I have not been able to get the d-separation analysis tool in the GUI to work for influence diagrams, so I cannot check these conditions visually.

Thanks

Gary

4
Network Repository / Re: Water
« on: July 10, 2007, 17:17:20  »
Martin,
I have noticed that the water.net file (and some of the others you uploaded) have been stripped of their Hugin headers (e.g. HR_Propagate_AutoSum = "1", etc.).

Has this been done with a setting in Hugin Runtime, or have the headers been stripped manually (or with another tool)?

If so, why? Do the headers add anything to the network? (Presumably all the attributes are just given default values?)

Additionally, I have noticed that water.net has some extra text 'commented' into the file in a kind of free format. This would be a very useful facility in Hugin, but how do I preserve free-format text beyond a call to bbn.SaveAsNet(strFullPath) etc.? I understood that user-defined attributes were the only way to preserve additional information in the net description?

Thanks

Gary

5
HUGIN Training Course Discussion / Re: distribution parameters
« on: June 13, 2007, 10:43:32  »
Joost,

I have lots of code for performing tasks with Hugin beliefs (including a direct route to MS Excel using typical office automation techniques). I am sure we can share this.

However, your observation about the sensitivity of posterior marginal parameters, like the mean or the sd, with respect to the discretization is interesting. Optimization of the discretization is tricky, not least because the appropriate measure for optimization is not always clear - sensitivity of the posterior moments to the discretization is one such measure (the devil is in the tail!).

Strictly within the 'discretized' networks the individual state probabilities could be considered as parameters, and it is then easier to measure the sensitivity of one posterior state probability (parameter) with respect to the probability of an input state. This scheme is a lot easier to implement but not necessarily as easy to interpret for real-world problems.
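For what it is worth, a minimal sketch of the first measure (the numbers are placeholders): approximate each interval state by its midpoint, compute the posterior mean and sd, and watch how they move as the discretization is refined.

    import math

    edges = [0.0, 1.0, 2.0, 3.0, 4.0]   # interval boundaries (hypothetical)
    probs = [0.1, 0.4, 0.3, 0.2]        # posterior marginal, one per interval

    mids = [(lo + hi) / 2 for lo, hi in zip(edges, edges[1:])]
    mean = sum(p * m for p, m in zip(probs, mids))
    sd = math.sqrt(sum(p * (m - mean) ** 2 for p, m in zip(probs, mids)))
    print(mean, sd)   # rerun with a finer `edges` grid to probe sensitivity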

Gary

6
FAQ / Re: How may likelihood evidence be used?
« on: May 21, 2007, 13:46:17  »


Joost - likelihood can be quite tricky to interpret, but I think, if you are careful, what you say is correct. In practice I don't use likelihood evidence and prefer to transform it into a finding with an appropriate likelihood table. Some while ago I wrote some notes on this process and I have included them below - hope it helps (it may have lost some detail in the transfer from MS Word).

Gary


Likelihood evidence?

There are several ways to enter new information into an established belief system. Most commonly, for variables that are observed to be in specific discrete states, the information is entered as a finding and often called evidence. The marginal probability of a state that corresponds to a finding is fixed at unity – consequently all the complementary states (other states for the same variable) have probabilities fixed at zero.

Likelihood is an alternative form of new information which is sometimes difficult to interpret – in practice it is simply shorthand for real evidence. Consequently it is always possible to operate belief systems without entering likelihoods – but this may involve a slightly different network structure.

We may illustrate the relationship between likelihood information and evidence with a simple three node network.  Consider two variables A and B which have a relationship that is represented by a conditional probability table p(B|A) i.e. B depends on A. The form of the relationship is unimportant but might be constructed from a ‘model’ and some table generator tools. The prior probabilities for A, p(A), are equally unimportant.

In many real situations a direct observation of B is used to establish belief concerning the state of the parent variable A – this is classical inference. In the probabilistic scheme this involves Bayes’ theorem, the conditional probability table p(B|A) and the prior information p(A). In a belief system we would enter new information as a finding for the observed state of B and the ‘updated’ probability for variable A (i.e. following propagation) would express subsequent beliefs (posterior information).

However observations are not always this simple. In many cases an observation does not give definitive information about the variable under investigation. In most cases there are uncertainties surrounding the experimental technique, and there is a small probability that the real state of the variable in question is different from the state indicated by any particular observation. If we know the small probabilities that describe ‘failures’ in the experimental method, we can express the relationship between the state of the observable variable, B, and the state indicated by an experimental observation. This relationship can be quantified as a conditional probability table, p(B’|B), where B’ is a variable that represents the experimental observation. B’ has states identical to those of B, and the conditional probability p(B’|B) is a square table. The diagonal elements of p(B’|B) will be close to unity if the experimental technique is a reliable one – i.e. if by observing that B is in a particular state we are confident that it is actually in that state.
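A concrete sketch of such a table (the error rate is a placeholder): the diagonal carries 1 - error and the remaining mass is split evenly over the other states, so each column, i.e. each fixed state of B, sums to one.

    def likelihood_table(n, error=0.05):
        """Square cpt p(B'|B) for an n-state variable observed by a
        reliable but imperfect technique. Entry [i][j] is p(B'=i | B=j)."""
        off = error / (n - 1)
        return [[1 - error if i == j else off for j in range(n)]
                for i in range(n)]

    for row in likelihood_table(4):
        print(row)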

A belief system that explicitly includes the experimental observation of B has an additional node B’ and a cpt p(B’|B) that is usually called a ‘likelihood’ function. In this three node network new information concerning B has the form of a finding for B’. When evidence is entered for B’ the marginal probabilities for B and A are updated according to the usual laws for propagating evidence.

The normal rule for propagating evidence entered at B’ uses Bayes’ theorem

p(B|B’) = p(B’|B) p(B) / p(B’)

Although the table for p(B|B’) has not been specified explicitly within the belief system the right hand side of this expression gives a prescription for computing updates in terms of components that are explicitly included in the structure.  When there is evidence concerning B’, i.e. B’ is in a definite state B’ = b, the computation of p(B|B’=b) is particularly simple. On the right hand side the elements from one row of p(B’ = b |B) multiply the elements of the marginal probability, p(B), to give the corresponding elements of the posterior marginal probability (the denominator p(B’ = b) is a constant only used for normalization – it is an element of the marginal at B’ prior to adding new information). The new evidence can then be propagated onwards through the network to update the probabilities for A etc.

This update process

p(Bi | B’ = b) ~ p(B’ = b | Bi) x p(Bi)
is particularly common, and it is included in shorthand in several belief modelling tools (Bi is a state of variable B). In shorthand, rather than entering the finding B’ = b for a third node B’ that represents the actual result of an observation, the corresponding elements p(B’ = b | Bi) (i.e. the conditional probabilities for observing a state b while the variable in question is in each of its possible states) are entered directly at variable B. This is called likelihood information, as the entries correspond to one row of the likelihood table for the B’ variable. The belief system is updated (including the node where the likelihood is entered) as if the corresponding auxiliary node and cpt existed (this is not the only belief system operation where additional nodes are useful).

Thus updating the belief system with uncertain measurements involves a choice: either enter likelihood information directly, or add a new network node and a likelihood table so that the information can be entered as a finding.

Additionally it is possible to extend this interpretation of likelihood: the likelihood table p(B’|B) can be considered in terms of the frequencies of observations in a large ensemble of independent events. In this case the analysis presented above is unchanged but the interpretation is slightly different.
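In code, the shorthand update is one line of arithmetic per state. A minimal sketch in plain Python (no Hugin calls; the names are illustrative only):

    def update_with_finding(likelihood_row, prior):
        """Posterior p(B | B' = b) from one row of p(B'|B) and the marginal p(B).

        likelihood_row[i] = p(B' = b | B = i); prior[i] = p(B = i).
        The normalising constant is p(B' = b), the marginal at B' before
        the new information is added.
        """
        joint = [l * p for l, p in zip(likelihood_row, prior)]
        z = sum(joint)                 # p(B' = b)
        return [j / z for j in joint]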
In a simple example we can consider a variable A with a uniform prior and four interval states [0-1], [1-2], [2-3] and [3-4]. A dependent variable B = A2 has four interval states [0-2], [2-4], [4-6] and [6-8]. We describe a likelihood function for measurement of B by a square table p(B’|B), where B’ is the measurement variable with states identical to B.

[The likelihood table and the figure showing the two networks are not reproduced here.]

The figure showed the three node network alongside a corresponding two node network (with nodes A2, B2 that are identical to A, B but without a node corresponding to B’). The networks depict a situation in which a finding B’ = [2-4] has been entered into the three node network and the likelihood B2 = [0.1, 0.7, 0.1, 0.1] has been entered into the two node network. Posterior beliefs concerning B, A and B2, A2 are identical.
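To check the equivalence numerically, the update sketched above can be applied to the likelihood row quoted in the example (the marginal p(B) here is hypothetical, since the original table and figure are not reproduced):

    p_B = [0.4, 0.3, 0.2, 0.1]          # hypothetical marginal for B
    row = [0.1, 0.7, 0.1, 0.1]          # p(B' = [2-4] | B), the quoted row

    joint = [l * p for l, p in zip(row, p_B)]
    print([j / sum(joint) for j in joint])   # [0.1429, 0.75, 0.0714, 0.0357]

    # Entering the finding B' = [2-4] in the three node network performs
    # exactly this computation, so the posteriors for B (and hence A) agree
    # with those from entering the likelihood directly at B.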

7
HUGIN Training Course Discussion / Re: interval nodes
« on: May 16, 2007, 17:52:38  »
Joost - this effect arises from the way the table generator builds a table. It uses a set of points distributed over the (input) intervals and performs the computation (described by your expression) at each point. It then weights the elements of the cpt by the fraction of the points that fall in each output interval.

By default there are 25 evenly distributed points, so the output probabilities have to be in units of 1/25 ... you have probabilities 12/25 and 13/25...

If, in edit mode, you select B, then go to expressions ... and select 'samples per interval' ... you will see 25 in the text box ... change it to 26 (or any even number) and recompile ... bingo!

Actually this situation is not ideal ... but it is a property of any numerical integration scheme. Arithmetic addition is particularly susceptible to this problem. You can only use numerical nodes if you know the complete set of output states (very rare for computation with discretized continuous variables) or if you are willing to accept some truncation, e.g. using the floor() function.
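To see the 1/25 granularity concretely, here is a minimal re-implementation of the sampling idea in plain Python (not the actual table generator; the exact placement of the sample points is an assumption):

    def cpt_column(f, a_lo, a_hi, b_edges, samples=25):
        """One cpt column p(B | A in [a_lo, a_hi]) by point sampling.

        Evaluates f at `samples` evenly spread points in the input interval
        and weights each output interval by the fraction of points landing
        in it, so with 25 points every probability is a multiple of 1/25.
        """
        counts = [0] * (len(b_edges) - 1)
        for i in range(samples):
            a = a_lo + (i + 0.5) * (a_hi - a_lo) / samples
            for j in range(len(counts)):
                if b_edges[j] <= f(a) < b_edges[j + 1]:
                    counts[j] += 1
                    break
        return [c / samples for c in counts]

    # e.g. B = 2*A with A in [0-1] and B states [0-1], [1-2]:
    print(cpt_column(lambda a: 2 * a, 0.0, 1.0, [0.0, 1.0, 2.0]))
    # -> [0.48, 0.52], i.e. the 12/25 and 13/25 granularity described above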

If you are willing to compute cpts outside of the table generator there are some simple routines for these kinds of structures.

Gary
