A Bayesian model for event-based trust
Elements of a foundation for computational trust
Vladimiro Sassone
ECS, University of Southampton
joint work with K. Krukow and M. Nielsen
Computational trust
Trust is an ineffable notion that permeates very many things. What notion of trust will we use in this talk?
A computer idealisation of “trust” to support decision-making in open networks: not the human emotion, nor the philosophical/sociological concept. It is gathering prominence in open applications involving safety guarantees in a wide sense:
credential-based trust: e.g., public-key infrastructures, authentication and resource access control, network security.
reputation-based trust: e.g., social networks, P2P, trust metrics, probabilistic approaches.
trust models: e.g., security policies, languages, game theory.
Trust and reputation systems
Reputation
behavioural: the perception that an agent creates through past actions about its intentions and norms of behaviour.
social: calculated on the basis of observations made by others.
An agent’s reputation may affect the trust that others have toward it.
Trust
subjective: a level of the subjective expectation an agent has about another’s future behaviour, based on the history of their encounters and of hearsay.
Trust and security
E.g.: Reputation-based access control
p’s ‘trust’ in q’s actions at time t is determined by p’s observations of q’s behaviour up until time t, according to a given policy ψ.
Example
You download what claims to be a cool new browser from some unknown site. Your trust policy may be:
Outline
1 Some computational trust systems
2 Towards model comparison
3 Modelling behavioural information
Event structures as a trust model
4 Probabilistic event structures
Simple Probabilistic Systems
The model λθ:
Each principal p behaves in each interaction according to a fixed and independent probability θp of ‘success’ (and therefore 1−θp of ‘failure’).
The framework:
Interface (Trust computation algorithm, A):
Input: a sequence h = x1 x2 · · · xn for n ≥ 0 and xi ∈ {s, f}.
Output: a probability distribution π : {s, f} → [0, 1].
Goal:
Output π approximates (θp, 1−θp) as well as possible, under the hypothesis that input h is the outcome of interactions with p.
Maximum likelihood
(Despotovic and Aberer) Trust computation A0

A0(s|h) = Ns(h) / |h|        A0(f|h) = Nf(h) / |h|

where Nx(h) = “number of x’s in h”.
Bayesian analysis inspired by the λβ model: f(θ | α, β) ∝ θ^(α−1) (1−θ)^(β−1)
Properties:
Well-defined semantics: A0(s|h) is interpreted as a probability of success in the next interaction.
Solidly based on probability theory and Bayesian analysis.
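A0 is plain empirical frequency. A minimal sketch in Python (the function name is mine, not from the talk); note that A0 is undefined on the empty history, since it divides by |h|:

```python
def a0(h):
    """Maximum-likelihood trust computation A0 over a history h in {'s','f'}*."""
    if not h:
        raise ValueError("A0 is undefined on the empty history (division by |h| = 0)")
    n_s = h.count('s')          # N_s(h): number of successes in h
    n = len(h)                  # |h|
    return {'s': n_s / n, 'f': (n - n_s) / n}

print(a0("ssfs"))               # {'s': 0.75, 'f': 0.25}
```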
Beta models
(Mui et al) Even more tightly inspired by Bayesian analysis and by λβ.
Trust computation A1

A1(s|h) = (Ns(h) + 1) / (|h| + 2)        A1(f|h) = (Nf(h) + 1) / (|h| + 2)

where Nx(h) = “number of x’s in h”.
Properties:
Well-defined semantics: A1(s|h) is interpreted as a probability of success in the next interaction.
Solidly based on probability theory and Bayesian analysis.
Formal result: Chernoff bound Prob[error ≥ ε] ≤ 2e^(−2mε²)
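A1 is A0 with Laplace-smoothed counts, i.e. the posterior mean under a uniform Beta(1, 1) prior. A minimal sketch (function name mine); unlike A0 it is defined on the empty history, where it returns the uniform distribution:

```python
def a1(h):
    """Beta-model trust computation A1: Laplace-smoothed success frequency."""
    n_s = h.count('s')          # N_s(h)
    n = len(h)                  # |h|
    return {'s': (n_s + 1) / (n + 2), 'f': (n - n_s + 1) / (n + 2)}

print(a1(""))                   # {'s': 0.5, 'f': 0.5} -- prior only, no evidence
print(a1("ssfs"))               # s -> 4/6, f -> 2/6
```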
Our elements of foundation
Recall the framework
Interface (Trust computation algorithm, A):
Input: a sequence h = x1 x2 · · · xn for n ≥ 0 and xi ∈ {s, f}.
Output: a probability distribution π : {s, f} → [0, 1].
Goal:
Output π approximates (θp, 1−θp) as well as possible, under the hypothesis that input h is the outcome of interactions with p.
We would like to consolidate in two directions:
1 model comparison
2 a complex-outcomes Bayesian model
Cross entropy
An information-theoretic “distance” on distributions
Cross entropy of distributions p, q : {o1, . . . , om} → [0, 1]:

D(p || q) = Σ_{i=1}^{m} p(oi) · log( p(oi) / q(oi) )

It holds that 0 ≤ D(p || q) ≤ ∞, and D(p || q) = 0 iff p = q.
Established measure in statistics for comparing distributions.
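This D(p || q) is the Kullback–Leibler divergence. A short sketch with the standard conventions (a term with p(o) = 0 contributes 0; p(o) > 0 with q(o) = 0 gives ∞):

```python
import math

def d(p, q):
    """Cross entropy / KL divergence D(p || q) of finite distributions as dicts."""
    total = 0.0
    for o, po in p.items():
        if po == 0:
            continue                    # 0 * log(0/q) = 0 by convention
        if q.get(o, 0) == 0:
            return math.inf             # p puts mass where q puts none
        total += po * math.log(po / q[o])
    return total

p = {'s': 0.9, 'f': 0.1}
assert d(p, p) == 0.0                   # D(p || q) = 0 iff p = q
assert d(p, {'s': 0.5, 'f': 0.5}) > 0   # strictly positive when p != q
assert d(p, {'s': 1.0, 'f': 0.0}) == math.inf
```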
Expected cross entropy
A measure on probabilistic trust algorithms
Goal of a probabilistic trust algorithm A: given a history X, approximate a distribution on the outcomes O = {o1, . . . , om}. Different histories X result in different output distributions A(· | X).
Expected cross entropy from λ to A:

EDn(λ || A) = Σ_{X ∈ O^n} Prob[X | λ] · D( λ || A(· | X) )
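This can be computed by brute force for small n. A sketch for the simple fixed-parameter model λθ (the summand — the probability of X under the model times D(truth || A(· | X)) — is my reading of the formula above; function names are mine):

```python
import math
from itertools import product

def d(p, q):
    """KL divergence of finite distributions given as dicts."""
    return sum(pi * math.log(pi / q[o]) for o, pi in p.items() if pi > 0)

def a0(h):
    n_s = h.count('s')
    return {'s': n_s / len(h), 'f': (len(h) - n_s) / len(h)}

def a1(h):
    n_s = h.count('s')
    return {'s': (n_s + 1) / (len(h) + 2), 'f': (len(h) - n_s + 1) / (len(h) + 2)}

def ed_n(theta, algo, n):
    """ED_n(lambda_theta || A): average D over all length-n histories X,
    weighted by their probability under lambda_theta."""
    truth = {'s': theta, 'f': 1 - theta}
    total = 0.0
    for xs in product('sf', repeat=n):
        h = ''.join(xs)
        prob_h = theta ** h.count('s') * (1 - theta) ** (n - h.count('s'))
        if prob_h == 0.0:
            continue                    # impossible history contributes nothing
        total += prob_h * d(truth, algo(h))
    return total

# At theta = 1 the MLE algorithm A0 is exact while A1 is not:
assert ed_n(1.0, a0, 3) == 0.0
assert ed_n(1.0, a1, 3) > 0.0
```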
An application of cross entropy (1/2)
Consider the beta model λβ and the algorithms A0 of maximum likelihood (Despotovic et al.) and A1 beta (Mui et al.).
Theorem
If θ = 0 or θ = 1 then A0 computes the exact distribution, whereas A1 does not. That is, for all n > 0 we have:
EDn(λβ || A0) = 0 < EDn(λβ || A1)
An application of cross entropy (2/2)
A parametric algorithm Aε:

Aε(s|h) = (Ns(h) + ε) / (|h| + 2ε),        Aε(f|h) = (Nf(h) + ε) / (|h| + 2ε)

Theorem
For any θ ∈ [0, 1], θ ≠ 1/2, there exists ε̄ ∈ [0, ∞) that minimises EDn(λβ || Aε), simultaneously for all n.
Furthermore, EDn(λβ || Aε) is a decreasing function of ε on the interval (0, ε̄), and increasing on (ε̄, ∞).
A trust model based on event structures
Move from O = {s, f} to complex outcomes.
Interactions and protocols
At an abstract level, entities in a distributed system interact according to protocols;
Information about an external entity is just information about (the outcome of) a number of (past) protocol runs with that entity.
Events as a model of information
A protocol can be specified as a concurrent process, at different levels of abstraction.
A model for behavioural information
ES = (E, ≤, #), with E a set of events, ≤ and # relations on E.
Information about a session is a finite set of events x ⊆ E, called a configuration (which is ‘conflict-free’ and ‘causally closed’).
Information about several interactions is a sequence of outcomes h = x1 x2 · · · xn ∈ C_ES*, called a history.
eBay (simplified) example:
[Diagram: an event structure in which ‘pay’ causally precedes the other events, ‘confirm’ is in conflict with ‘time-out’, and ‘ignore’ is in conflict with ‘positive’ (feedback).]
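The configuration conditions can be checked mechanically. A toy encoding of the simplified eBay event structure (the encoding and dependency choices are my reading of the diagram, not from the talk):

```python
from itertools import combinations

EVENTS = {'pay', 'confirm', 'time-out', 'ignore', 'positive'}
CAUSES = {'confirm': {'pay'}, 'time-out': {'pay'},
          'ignore': {'pay'}, 'positive': {'pay'}}       # event -> its causes
CONFLICT = {frozenset({'confirm', 'time-out'}),
            frozenset({'ignore', 'positive'})}

def is_configuration(x):
    """A configuration is a causally closed, conflict-free set of events."""
    x = set(x)
    closed = all(CAUSES.get(e, set()) <= x for e in x)
    conflict_free = all(frozenset(pair) not in CONFLICT
                        for pair in combinations(x, 2))
    return closed and conflict_free

assert is_configuration(set())                           # no information yet
assert is_configuration({'pay', 'confirm'})
assert not is_configuration({'confirm'})                 # not causally closed
assert not is_configuration({'pay', 'confirm', 'time-out'})  # conflict
```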
Running example: interactions over an e-purse
[Diagram: an event structure in which ‘reject’ is in conflict with ‘grant’; ‘grant’ causally precedes ‘authentic’, ‘forged’, ‘correct’ and ‘incorrect’, with ‘authentic’ in conflict with ‘forged’ and ‘correct’ in conflict with ‘incorrect’.]
Modelling outcomes and behaviour
Outcomes are (maximal) configurations.
The e-purse example:
[Diagram: the lattice of configurations, from ∅ up through {r} and {g}, then {g,a}, {g,f}, {g,c}, {g,i}, to the maximal configurations {g,a,c}, {g,f,c}, {g,a,i}, {g,f,i}.]
Confusion-free event structures
(Varacca et al) Immediate conflict #µ: e # e′ and there is a configuration x that enables both.
Confusion free: #µ is transitive, and e #µ e′ implies [e) = [e′).
Cell: a maximal c ⊆ E such that e, e′ ∈ c implies e #µ e′.
So, there are three cells in the e-purse event structure: {reject, grant}, {authentic, forged} and {correct, incorrect}.
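Since #µ is transitive in a confusion-free event structure, cells are just the classes of the reflexive closure of immediate conflict, so they are easy to compute. A sketch for the e-purse example (code mine):

```python
def cells(events, immediate_conflict):
    """Group events into cells: classes of (the reflexive closure of) #mu,
    which is transitive in a confusion-free event structure."""
    groups = []
    for e in sorted(events):
        for g in groups:
            if any((e, f) in immediate_conflict or (f, e) in immediate_conflict
                   for f in g):
                g.add(e)
                break
        else:
            groups.append({e})          # event starts a new cell
    return groups

EPURSE = {'reject', 'grant', 'authentic', 'forged', 'correct', 'incorrect'}
MU = {('reject', 'grant'), ('authentic', 'forged'), ('correct', 'incorrect')}

result = cells(EPURSE, MU)
assert len(result) == 3                 # three cells, as on the slide
assert any(g == {'reject', 'grant'} for g in result)
```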
Cell valuation
A cell valuation assigns a distribution to each cell. The e-purse example:
authentic 4/5, forged 1/5; correct 3/5, incorrect 2/5; reject 1/4, grant 3/4.
[Diagram: the configuration lattice annotated with the induced values: {r} 1/4, {g} 3/4; {g,a} 3/5, {g,f} 3/20, {g,c} 9/20, {g,i} 3/10; {g,a,c} 9/25, {g,f,c} 9/100, {g,a,i} 6/25, {g,f,i} 6/100.]
Properties of cell valuations
Define p(x) = Π_{e∈x} p(e). Then
p(∅) = 1;
p(x) ≥ p(x′) if x ⊆ x′;
p is a probability distribution on maximal configurations.
[Diagram: the configuration lattice annotated with p, from p(∅) = 1 and p({r}) = 1/4, p({g}) = 3/4, up to p({g,a,c}) = 9/25, p({g,f,c}) = 9/100, p({g,a,i}) = 6/25, p({g,f,i}) = 6/100.]
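These properties are easy to verify numerically for the e-purse valuation above (code mine; the numbers are the slide’s):

```python
P = {'reject': 1/4, 'grant': 3/4,
     'authentic': 4/5, 'forged': 1/5,
     'correct': 3/5, 'incorrect': 2/5}      # cell valuation from the slide

def p(x):
    """p(x) = product of p(e) over the events e in configuration x."""
    out = 1.0
    for e in x:
        out *= P[e]
    return out

assert p(set()) == 1.0                       # p(empty) = 1
assert abs(p({'grant', 'authentic', 'correct'}) - 9/25) < 1e-12
# p is a distribution on the maximal configurations:
maximal = [{'reject'},
           {'grant', 'authentic', 'correct'}, {'grant', 'forged', 'correct'},
           {'grant', 'authentic', 'incorrect'}, {'grant', 'forged', 'incorrect'}]
assert abs(sum(p(x) for x in maximal) - 1.0) < 1e-12
```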
Estimating cell valuations
How to assign valuations to cells? They are the model’s unknowns.
Theorem (Bayes)
Prob[Θ | X, λ] ∝ Prob[X | Θ, λ] · Prob[Θ | λ]
A second-order notion: we are not interested in X or its probability, but in the expected value of Θ! So, we will:
start with a prior hypothesis Θ; this will be a cell valuation;
record the events X as they happen during the interactions;
compute the posterior; this is a new model fitting better with the evidence and allowing us better predictions (in a precise sense).
Cells vs eventless outcomes
Let c1, . . . , cM be the set of cells of E, with ci = {e_i1, . . . , e_iKi}.
A cell valuation assigns a distribution Θci to each ci, the same way as an eventless model assigns a distribution θ to {s, f}.
The occurrence of an x from {s, f} is a random process with two outcomes, a binomial (Bernoulli) trial on θ.
The occurrence of an event from cell ci is a random process with Ki outcomes, that is, a multinomial trial on Θci.
To exploit this analogy we only need to lift the λβ model to a model of multinomial trials.
A bit of magic: the Dirichlet probability distribution
The Dirichlet family: D(Θ | α) ∝ Θ1^(α1−1) · · · ΘK^(αK−1)
Theorem
The Dirichlet family is a conjugate prior for multinomial trials. That is, if
Prob[Θ | λ] is D(Θ | α1, . . . , αK), and
Prob[X | Θ, λ] follows the law of multinomial trials Θ1^(n1) · · · ΘK^(nK),
then Prob[Θ | X, λ] is D(Θ | α1 + n1, . . . , αK + nK), according to Bayes.
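Conjugacy makes the posterior update pure arithmetic: add the observed counts to the prior parameters. A minimal sketch (code mine), using the grant/reject cell of the e-purse example with a uniform prior:

```python
def dirichlet_update(alpha, counts):
    """Posterior parameters: Dirichlet(alpha) prior + multinomial counts."""
    return [a + n for a, n in zip(alpha, counts)]

def dirichlet_mean(alpha):
    """Mean of Dirichlet(alpha): the predictive distribution for the cell."""
    s = sum(alpha)
    return [a / s for a in alpha]

# Grant/reject cell, uniform prior alpha = (1, 1), after 8 grants and 2 rejects:
post = dirichlet_update([1, 1], [8, 2])
assert post == [9, 3]
assert dirichlet_mean(post) == [3/4, 1/4]    # matches the slide's valuation
```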
The Bayesian process
Start with a uniform distribution for each cell.
[Diagram: the e-purse event structure with every event valued 1/2.]
Theorem

E[Θ_{e_ij} | X, λ] = ( α_{e_ij} + X(e_ij) ) / Σ_{k=1}^{Ki} ( α_{e_ik} + X(e_ik) )

Corollary
The Bayesian process
Suppose that X = {r ↦ 2, g ↦ 8, a ↦ 7, f ↦ 1, c ↦ 3, i ↦ 5}. Then
[Diagram: the updated valuation — authentic 4/5, forged 1/5, correct 3/5, incorrect 2/5, reject 1/4, grant 3/4.]
Theorem

E[Θ_{e_ij} | X, λ] = ( α_{e_ij} + X(e_ij) ) / Σ_{k=1}^{Ki} ( α_{e_ik} + X(e_ik) )

Corollary
Interpretation of results
We lifted the trust computation algorithms based on λβ to our event-based models by replacing
binomial (Bernoulli) trials ↦ multinomial trials;
the Beta distribution ↦ the Dirichlet distribution.
Future directions (1/2)
Hidden Markov Models
Probability parameters can change as the internal state changes, probabilistically. An HMM is λ = (A, B, π), where
A is a Markov chain, describing state transitions;
B is a family of distributions Bs : O → [0, 1];
π is the initial state distribution.
[Diagram: a two-state HMM over O = {a, b}; state 1 moves to state 2 with probability .01, state 2 moves back with probability .25; π1 = 1, π2 = 0; B1(a) = .95, B1(b) = .05; B2(a) = .05, B2(b) = .95.]
Future directions (2/2)
Hidden Markov Models
[Diagram: the same two-state HMM as on the previous slide.]
Bayesian analysis:
What models best explain (and thus predict) observations? How to approximate an HMM from a sequence of observations?
History h = a^10 b^2. A counting algorithm would then assign high probability to a, even though the system has probably moved to state 2, where b is the most likely outcome.
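The point can be made concrete with the forward algorithm (code mine; the HMM parameters are the slide’s). After h = a^10 b^2, filtering concludes the chain has probably jumped to state 2, so the predictive probability of a drops well below the counting estimate 10/12:

```python
A = {1: {1: 0.99, 2: 0.01}, 2: {1: 0.25, 2: 0.75}}     # state transitions
B = {1: {'a': 0.95, 'b': 0.05}, 2: {'a': 0.05, 'b': 0.95}}
PI = {1: 1.0, 2: 0.0}

def predict_next(h):
    """P(next symbol | h) via the forward algorithm (filtered prediction)."""
    f = {s: PI[s] * B[s][h[0]] for s in A}              # unnormalised f_1
    for x in h[1:]:
        f = {t: sum(f[s] * A[s][t] for s in A) * B[t][x] for t in A}
    z = sum(f.values())
    f = {s: f[s] / z for s in f}                        # filtered state dist.
    step = {t: sum(f[s] * A[s][t] for s in A) for t in A}
    return {x: sum(step[t] * B[t][x] for t in A) for x in ('a', 'b')}

h = 'a' * 10 + 'b' * 2
counting = h.count('a') / len(h)                        # 10/12, about 0.83
assert predict_next(h)['a'] < 0.5 < counting            # HMM disagrees sharply
```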
Summary
A framework for “trust and reputation systems”:
applications to security and history-based access control;
a Bayesian approach to observations and approximations, with formal results based on probability theory;
towards model comparison and a complex-outcomes Bayesian model.
Future work