
A Bayesian model for event-based trust

Elements of a foundation for computational trust

Vladimiro Sassone

ECS, University of Southampton

Joint work with K. Krukow and M. Nielsen

Computational trust

Trust is an ineffable notion that permeates very many things. What trust are we going to have in this talk?

A computer idealisation of “trust” to support decision-making in open networks: no human emotion, no philosophical or sociological concept. It is gathering prominence in open applications involving safety guarantees in a wide sense:

credential-based trust: e.g., public-key infrastructures, authentication and resource access control, network security.

reputation-based trust: e.g., social networks, P2P, trust metrics, probabilistic approaches.

trust models: e.g., security policies, languages, game theory.

Trust and reputation systems

Reputation

behavioural: the perception that an agent creates through its past actions about its intentions and norms of behaviour.

social: calculated on the basis of observations made by others.

An agent's reputation may affect the trust that others have toward it.

Trust

subjective: the level of subjective expectation an agent has about another's future behaviour, based on the history of their encounters and of hearsay.

Trust and security

E.g.: reputation-based access control

p's 'trust' in q's actions at time t is determined by p's observations of q's behaviour up until time t, according to a given policy ψ.

Example

You download what claims to be a new cool browser from some unknown site. Your trust policy may be:


Outline

1 Some computational trust systems
2 Towards model comparison
3 Modelling behavioural information
    Event structures as a trust model
4 Probabilistic event structures

Simple Probabilistic Systems

The model λ_θ:

Each principal p behaves in each interaction according to a fixed and independent probability θ_p of 'success' (and therefore 1 − θ_p of 'failure').

The framework:

Interface (trust computation algorithm, A):
Input: a sequence h = x_1 x_2 ··· x_n, for n ≥ 0 and x_i ∈ {s, f}.
Output: a probability distribution π : {s, f} → [0, 1].

Goal: the output π approximates (θ_p, 1 − θ_p) as well as possible, under the hypothesis that the input h is the outcome of interactions with p.

Maximum likelihood

(Despotovic and Aberer)

Trust computation A_0:

A_0(s | h) = N_s(h) / |h|        A_0(f | h) = N_f(h) / |h|

where N_x(h) = "number of x's in h".

Bayesian analysis inspired by the λ_β model: f(θ | α, β) ∝ θ^(α−1) (1 − θ)^(β−1).

Properties:

Well-defined semantics: A_0(s | h) is interpreted as the probability of success in the next interaction.

Solidly based on probability theory and Bayesian analysis.
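
As a concrete reference, here is a minimal Python sketch of A_0, assuming a history is given as a string over {'s', 'f'}; the function name a0 and the dictionary output are choices of the sketch, not notation from the talk.

    def a0(history):
        """Maximum-likelihood estimate A_0: relative frequencies of 's' and 'f'."""
        n = len(history)
        if n == 0:
            return None  # A_0 is undefined on the empty history
        ns = history.count("s")
        return {"s": ns / n, "f": (n - ns) / n}

    print(a0("ssfss"))  # {'s': 0.8, 'f': 0.2}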

Beta models

(Mui et al)

Even more closely inspired by Bayesian analysis and by the λ_β model.

Trust computation A_1:

A_1(s | h) = (N_s(h) + 1) / (|h| + 2)        A_1(f | h) = (N_f(h) + 1) / (|h| + 2)

where N_x(h) = "number of x's in h".

Properties:

Well-defined semantics: A_1(s | h) is interpreted as the probability of success in the next interaction.

Solidly based on probability theory and Bayesian analysis.

Formal result: Chernoff bound Prob[error ≥ ε] ≤ 2 e^(−2 m ε²).
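
The corresponding Python sketch for A_1; the '+1'/'+2' corrections are exactly what makes the estimate defined even on the empty history, unlike A_0 above.

    def a1(history):
        """Beta-model estimate A_1: Laplace-smoothed frequencies (uniform beta prior)."""
        n = len(history)
        ns = history.count("s")
        return {"s": (ns + 1) / (n + 2), "f": (n - ns + 1) / (n + 2)}

    print(a1(""))       # {'s': 0.5, 'f': 0.5}, defined on the empty history
    print(a1("ssfss"))  # {'s': 0.714..., 'f': 0.285...}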

Our elements of foundation

Recall the framework.

Interface (trust computation algorithm, A):
Input: a sequence h = x_1 x_2 ··· x_n, for n ≥ 0 and x_i ∈ {s, f}.
Output: a probability distribution π : {s, f} → [0, 1].

Goal: the output π approximates (θ_p, 1 − θ_p) as well as possible, under the hypothesis that the input h is the outcome of interactions with p.

We would like to consolidate in two directions:

1 model comparison
2 a complex-outcomes Bayesian model

Cross entropy

An information-theoretic "distance" on distributions.

Cross entropy of distributions p, q : {o_1, ..., o_m} → [0, 1]:

D(p || q) = Σ_{i=1}^{m} p(o_i) log( p(o_i) / q(o_i) )

It holds that 0 ≤ D(p || q) ≤ ∞, and D(p || q) = 0 iff p = q.

An established measure in statistics for comparing distributions.
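
D(p || q) transcribes directly into Python, with distributions represented as dictionaries over a common outcome set; the conventions 0·log(0/q) = 0 and p·log(p/0) = ∞ used below are the usual ones.

    import math

    def d(p, q):
        """D(p || q) = sum_i p(o_i) * log(p(o_i) / q(o_i))."""
        total = 0.0
        for o, po in p.items():
            if po == 0:
                continue            # 0 * log(0/q) = 0 by convention
            if q[o] == 0:
                return math.inf     # p puts mass where q puts none
            total += po * math.log(po / q[o])
        return total

    p = {"s": 0.8, "f": 0.2}
    print(d(p, p))                      # 0.0
    print(d(p, {"s": 0.5, "f": 0.5}))   # > 0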

Expected cross entropy

A measure on probabilistic trust algorithms.

The goal of a probabilistic trust algorithm A: given a history X, approximate a distribution on the outcomes O = {o_1, ..., o_m}. Different histories X result in different output distributions A(· | X).

Expected cross entropy from λ to A: the expectation, over histories X ∈ O^n generated by λ, of the cross entropy from the distribution prescribed by λ to the algorithm's output:

ED_n(λ || A) = Σ_{X ∈ O^n} Prob[X | λ] · D( λ || A(· | X) )
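
A brute-force Python sketch of this quantity for the simple model λ_θ over O = {s, f}, enumerating all histories of length n; the name ed_n and the inlined beta algorithm are illustrative, and the enumeration is only feasible for small n.

    import math
    from itertools import product

    def ed_n(theta, algo, n):
        """Expected cross entropy ED_n(lambda_theta || algo): average, over
        histories X in {s,f}^n weighted by their probability under theta,
        of D(true distribution || algo(. | X))."""
        true = {"s": theta, "f": 1 - theta}
        total = 0.0
        for xs in product("sf", repeat=n):
            h = "".join(xs)
            prob_h = theta ** h.count("s") * (1 - theta) ** h.count("f")
            if prob_h == 0:
                continue
            out = algo(h)
            dkl = 0.0
            for o in "sf":
                if true[o] == 0:
                    continue
                if out[o] == 0:
                    dkl = math.inf
                    break
                dkl += true[o] * math.log(true[o] / out[o])
            total += prob_h * dkl
        return total

    a1 = lambda h: {"s": (h.count("s") + 1) / (len(h) + 2),
                    "f": (h.count("f") + 1) / (len(h) + 2)}
    print(ed_n(0.9, a1, 5))   # expected divergence of the beta algorithm at n = 5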

An application of cross entropy (1/2)

Consider the beta model λ_β and the algorithms A_0, maximum likelihood (Despotovic et al.), and A_1, beta (Mui et al.).

Theorem
If θ = 0 or θ = 1 then A_0 computes the exact distribution, whereas A_1 does not. That is, for all n > 0 we have:

ED_n(λ_β || A_0) = 0 < ED_n(λ_β || A_1)

An application of cross entropy (2/2)

A parametric algorithm A_ε:

A_ε(s | h) = (N_s(h) + ε) / (|h| + 2ε)        A_ε(f | h) = (N_f(h) + ε) / (|h| + 2ε)

Theorem
For any θ ∈ [0, 1], θ ≠ 1/2, there exists ε̄ ∈ [0, ∞) that minimises ED_n(λ_β || A_ε), simultaneously for all n.

Furthermore, ED_n(λ_β || A_ε) is a decreasing function of ε on the interval (0, ε̄), and increasing on (ε̄, ∞).
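
A sketch of the parametric family A_ε: ε = 0 recovers the maximum-likelihood estimate A_0 and ε = 1 the beta estimate A_1, and the denominator |h| + 2ε keeps the two values summing to 1.

    def a_eps(history, eps):
        """Parametric estimate: A_eps(s|h) = (N_s(h) + eps) / (|h| + 2*eps)."""
        n = len(history)
        ns = history.count("s")
        return {"s": (ns + eps) / (n + 2 * eps),
                "f": (n - ns + eps) / (n + 2 * eps)}

    h = "ssfss"
    print(a_eps(h, 0))    # maximum likelihood A_0
    print(a_eps(h, 1))    # beta model A_1
    print(a_eps(h, 0.5))  # something in between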

A trust model based on event structures

Move from O = {s, f} to complex outcomes.

Interactions and protocols

At an abstract level, entities in a distributed system interact according to protocols.

Information about an external entity is just information about (the outcome of) a number of (past) protocol runs with that entity.

Events as a model of information

A protocol can be specified as a concurrent process, at different levels of abstraction.

A model for behavioural information

ES = (E, ≤, #), with E a set of events, and ≤ and # relations on E.

Information about a session is a finite set of events x ⊆ E, called a configuration (which is 'conflict-free' and 'causally closed').

Information about several interactions is a sequence of outcomes h = x_1 x_2 ··· x_n ∈ C_ES*, called a history.

eBay (simplified) example: an event structure over the events pay, confirm, time-out, ignore and positive (diagram not reproduced).

Running example: interactions over an e-purse

An event structure with events reject, grant, authentic, forged, correct and incorrect: reject and grant are in conflict, and grant enables the two conflicting pairs authentic/forged and correct/incorrect (diagram not reproduced).
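
A possible encoding of this event structure as plain Python data, with a check of the two conditions that make a set of events a configuration; the causal and conflict relations below are read off the configuration lattice shown on the next slides, and the helper name is illustrative.

    # e-purse event structure: events, causal predecessors and conflicts
    events = {"r", "g", "a", "f", "c", "i"}   # reject, grant, authentic, forged, correct, incorrect
    causes = {"a": {"g"}, "f": {"g"}, "c": {"g"}, "i": {"g"},   # e.g. authentic requires grant
              "r": set(), "g": set()}
    conflict = {frozenset({"r", "g"}), frozenset({"a", "f"}), frozenset({"c", "i"})}

    def is_configuration(x):
        """x is a configuration iff it is conflict-free and causally closed."""
        conflict_free = all(frozenset({d, e}) not in conflict for d in x for e in x)
        causally_closed = all(causes[e] <= x for e in x)
        return conflict_free and causally_closed

    print(is_configuration({"g", "a", "c"}))   # True  (an outcome)
    print(is_configuration({"r", "g"}))        # False (reject conflicts with grant)
    print(is_configuration({"a", "c"}))        # False (not causally closed: grant missing)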

Modelling outcomes and behaviour

Outcomes are (maximal) configurations.

The e-purse example has the configurations ∅; {r}, {g}; {g,a}, {g,f}, {g,c}, {g,i}; {g,a,c}, {g,f,c}, {g,a,i}, {g,f,i}, ordered by inclusion. The maximal ones, i.e. the outcomes, are {r}, {g,a,c}, {g,f,c}, {g,a,i} and {g,f,i} (Hasse diagram not reproduced).

Confusion-free event structures

(Varacca et al)

Immediate conflict #μ: e # e' and there is a configuration x that enables both.

Confusion free: #μ is transitive, and e #μ e' implies [e) = [e').

Cell: a maximal c ⊆ E such that any two distinct e, e' ∈ c satisfy e #μ e'.

So there are three cells in the e-purse event structure: {reject, grant}, {authentic, forged} and {correct, incorrect}.
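
A sketch that recovers the cells from the immediate-conflict relation of the e-purse structure: since #μ is transitive in a confusion-free structure, a simple greedy grouping suffices; the encoding reuses the event names of the earlier sketch.

    # immediate conflicts of the e-purse structure
    events = ["r", "g", "a", "f", "c", "i"]
    immediate_conflict = {frozenset({"r", "g"}), frozenset({"a", "f"}), frozenset({"c", "i"})}

    def cells():
        """In a confusion-free structure, #mu plus identity is an equivalence
        on events; its classes are the cells."""
        result = []
        for e in events:
            for cell in result:
                if any(frozenset({e, d}) in immediate_conflict for d in cell):
                    cell.add(e)
                    break
            else:
                result.append({e})
        return result

    print(cells())   # three cells: {r, g}, {a, f}, {c, i}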

Cell valuation

A cell valuation for the e-purse example:

p(authentic) = 4/5    p(forged) = 1/5
p(correct) = 3/5      p(incorrect) = 2/5
p(reject) = 1/4       p(grant) = 3/4

The induced values on configurations:

p({g}) = 3/4          p({r}) = 1/4
p({g,a}) = 3/5        p({g,f}) = 3/20       p({g,c}) = 9/20       p({g,i}) = 3/10
p({g,a,c}) = 9/25     p({g,f,c}) = 9/100    p({g,a,i}) = 6/25     p({g,f,i}) = 6/100

Properties of cell valuations

Define p(x) = Π_{e ∈ x} p(e). Then:

p(∅) = 1;

p(x) ≥ p(x') if x ⊆ x';

p is a probability distribution on the maximal configurations.
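
A quick check of these properties on the e-purse example, using the cell valuation from the previous slide; configurations are sets of event names and p is computed as the product of its events' valuations.

    from math import prod, isclose

    # cell valuation from the e-purse example
    p_event = {"g": 3/4, "r": 1/4, "a": 4/5, "f": 1/5, "c": 3/5, "i": 2/5}

    def p(config):
        """p(x) = product of p(e) for e in x; p(empty set) = 1."""
        return prod(p_event[e] for e in config)

    maximal = [{"r"}, {"g", "a", "c"}, {"g", "f", "c"}, {"g", "a", "i"}, {"g", "f", "i"}]

    print(p(set()))                                   # 1
    print(p({"g", "a"}), p({"g", "a", "c"}))          # 0.6 and 0.36: p decreases along inclusion
    print(isclose(sum(p(x) for x in maximal), 1.0))   # True: a distribution on outcomes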

Estimating cell valuations

How to assign valuations to cells? They are the model's unknowns.

Theorem (Bayes)
Prob[Θ | X, λ] ∝ Prob[X | Θ, λ] · Prob[Θ | λ]

A second-order notion: we are not interested in X or its probability, but in the expected value of Θ! So, we will:

start with a prior hypothesis Θ; this will be a cell valuation;

record the events X as they happen during the interactions;

compute the posterior; this is a new model that fits the evidence better and allows us better predictions (in a precise sense).

Cells vs eventless outcomes

Let c_1, ..., c_M be the cells of E, with c_i = {e_i1, ..., e_iKi}.

A cell valuation assigns a distribution Θ_ci to each c_i, in the same way as an eventless model assigns a distribution θ to {s, f}.

The occurrence of an x from {s, f} is a random process with two outcomes: a binomial (Bernoulli) trial on θ.

The occurrence of an event from cell c_i is a random process with K_i outcomes: a multinomial trial on Θ_ci.

To exploit this analogy we only need to lift the λ_β model to a model based on the Dirichlet distribution.

Cells vs eventless outcomes

Letc1, . . . ,cM be the set of cells ofE, withci ={ei1, . . . ,eiKi}.

A cell valuation assigns a distributionΘci to eachci, the same way as an eventless model assigns a distributionθto{s,f}.

The occurrence of anx from{s,f}is a random process with two outcomes, abinomial (Bernoulli) trialonθ.

The occurrence of an event from cellci is a random process with Ki outcomes. That is, amultinomial trialonΘci.

To exploit this analogy we only need to lift theλβ model to a model

(60)

Cells vs eventless outcomes

Letc1, . . . ,cM be the set of cells ofE, withci ={ei1, . . . ,eiKi}.

A cell valuation assigns a distributionΘci to eachci, the same way as an eventless model assigns a distributionθto{s,f}.

The occurrence of anx from{s,f}is a random process with two outcomes, abinomial (Bernoulli) trialonθ.

The occurrence of an event from cellci is a random process with Ki outcomes. That is, amultinomial trialonΘci.

To exploit this analogy we only need to lift theλβ model to a model

(61)

Cells vs eventless outcomes

Letc1, . . . ,cM be the set of cells ofE, withci ={ei1, . . . ,eiKi}.

A cell valuation assigns a distributionΘci to eachci, the same way as an eventless model assigns a distributionθto{s,f}.

The occurrence of anx from{s,f}is a random process with two outcomes, abinomial (Bernoulli) trialonθ.

The occurrence of an event from cellci is a random process with Ki outcomes. That is, amultinomial trialonΘci.

To exploit this analogy we only need to lift theλβ model to a model

A bit of magic: the Dirichlet probability distribution

The Dirichlet family: D(Θ | α) ∝ Θ_1^(α_1 − 1) ··· Θ_K^(α_K − 1)

Theorem
The Dirichlet family is a conjugate prior for multinomial trials. That is, if

Prob[Θ | λ] is D(Θ | α_1, ..., α_K), and

Prob[X | Θ, λ] follows the law of multinomial trials Θ_1^(n_1) ··· Θ_K^(n_K),

then Prob[Θ | X, λ] is D(Θ | α_1 + n_1, ..., α_K + n_K), according to Bayes.
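
A Python sketch of the conjugate update for a single cell, assuming the observation counts n_1, ..., n_K of its events are available: the posterior parameters are α_k + n_k and the posterior mean is in closed form.

    def dirichlet_update(alpha, counts):
        """Posterior Dirichlet parameters after multinomial observations."""
        return [a + n for a, n in zip(alpha, counts)]

    def dirichlet_mean(alpha):
        """Expected value of each component under Dirichlet(alpha)."""
        s = sum(alpha)
        return [a / s for a in alpha]

    prior = [1, 1]                                # uniform prior over a two-event cell
    posterior = dirichlet_update(prior, [7, 1])   # e.g. 7 'authentic', 1 'forged'
    print(dirichlet_mean(posterior))              # [0.8, 0.2]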

The Bayesian process

Start with a uniform distribution for each cell:

p(authentic) = 1/2    p(forged) = 1/2
p(correct) = 1/2      p(incorrect) = 1/2
p(reject) = 1/2       p(grant) = 1/2

Theorem
E[Θ_eij | X, λ] = (α_eij + X(e_ij)) / Σ_{k=1}^{Ki} (α_eik + X(e_ik))

Suppose that X = {r ↦ 2, g ↦ 8, a ↦ 7, f ↦ 1, c ↦ 5, i ↦ 3}. Then the expected cell valuation becomes:

p(authentic) = 4/5    p(forged) = 1/5
p(correct) = 3/5      p(incorrect) = 2/5
p(reject) = 1/4       p(grant) = 3/4

Corollary
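
Applying the theorem to the e-purse counts, assuming the uniform prior corresponds to α = 1 for every event; the printed values reproduce the expected cell valuation shown above.

    cells = [["reject", "grant"], ["authentic", "forged"], ["correct", "incorrect"]]
    alpha = {e: 1 for cell in cells for e in cell}            # uniform prior
    X = {"reject": 2, "grant": 8, "authentic": 7, "forged": 1,
         "correct": 5, "incorrect": 3}                         # observed event counts

    for cell in cells:
        total = sum(alpha[e] + X[e] for e in cell)
        for e in cell:
            print(e, (alpha[e] + X[e]) / total)
    # reject 0.25, grant 0.75, authentic 0.8, forged 0.2, correct 0.6, incorrect 0.4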

Interpretation of results

We lifted the trust computation algorithms based on λ_β to our event-based models by replacing:

binomial (Bernoulli) trials ↦ multinomial trials;

the beta distribution ↦ the Dirichlet distribution.

Future directions (1/2)

Hidden Markov Models

Probability parameters can change as the internal state changes, probabilistically. An HMM is λ = (A, B, π), where

A is a Markov chain, describing state transitions;
B is a family of distributions B_s : O → [0, 1];
π is the initial state distribution.

Example: two states, 1 and 2, with transition probability .01 from 1 to 2 and .25 from 2 to 1 (diagram not reproduced), and

O = {a, b}
π_1 = 1, π_2 = 0
B_1(a) = .95, B_1(b) = .05
B_2(a) = .05, B_2(b) = .95
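
A sketch of this two-state HMM as plain Python data, together with a sampler; the self-loop probabilities 0.99 and 0.75 are inferred from the two transition arrows, and the emit-then-transition convention is a choice of the sketch.

    import random

    states = [1, 2]
    A = {1: {1: 0.99, 2: 0.01},      # state transition probabilities
         2: {1: 0.25, 2: 0.75}}
    B = {1: {"a": 0.95, "b": 0.05},  # per-state observation distributions
         2: {"a": 0.05, "b": 0.95}}
    pi = {1: 1.0, 2: 0.0}            # initial state distribution

    def sample(n):
        """Generate n observations from the HMM (A, B, pi)."""
        s = random.choices(states, weights=[pi[q] for q in states])[0]
        obs = []
        for _ in range(n):
            obs.append(random.choices(["a", "b"], weights=[B[s]["a"], B[s]["b"]])[0])
            s = random.choices(states, weights=[A[s][q] for q in states])[0]
        return "".join(obs)

    print(sample(20))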

Future directions (2/2)

Hidden Markov Models, continued. For the HMM above:

Bayesian analysis:

What models best explain (and thus predict) observations? How can an HMM be approximated from a sequence of observations?

Consider the history h = a^10 b^2. A counting algorithm would then assign high probability to a, even though the recent run of b's suggests that the system has moved to state 2, where b is by far the more likely observation.
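
A forward-filtering sketch for the same HMM, computing the distribution over the current state given the observations; on h = a^10 b^2 well over half of the posterior mass ends up on state 2, which is exactly what a plain counting algorithm misses.

    A = {1: {1: 0.99, 2: 0.01}, 2: {1: 0.25, 2: 0.75}}   # as above (self-loops inferred)
    B = {1: {"a": 0.95, "b": 0.05}, 2: {"a": 0.05, "b": 0.95}}
    pi = {1: 1.0, 2: 0.0}

    def filter_state(obs):
        """Forward algorithm: distribution over the current state given obs."""
        f = {s: pi[s] * B[s][obs[0]] for s in A}
        for o in obs[1:]:
            f = {s: B[s][o] * sum(f[r] * A[r][s] for r in A) for s in A}
        z = sum(f.values())
        return {s: v / z for s, v in f.items()}

    h = "a" * 10 + "b" * 2
    print(filter_state(h))   # most of the mass is now on state 2
    # a counting algorithm would still predict 'a' with probability about 10/12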

Summary

A framework for “trust and reputation systems”:

applications to security and history-based access control;

a Bayesian approach to observations and approximations, with formal results based on probability theory;

steps towards model comparison and a complex-outcomes Bayesian model.

Future work: hidden Markov models, for principals whose behaviour may change over time.
