Abstraction Refinement Guided by a Learnt Probabilistic Model

Horn clauses taken altogether form a hypergraph. This hypergraph changes when we try the analysis again with a different parameter setting. Given a hypergraph obtained under one parameter setting, we build a probabilistic model that predicts how the hypergraph would change if a new, more precise parameter setting were used. In particular, the probabilistic model estimates how likely it is that the new parameter setting will end the refinement process, which happens when the new hypergraph includes evidence that the analysis will never prove a query. Technically, our probabilistic model is a variant of the Erdős–Rényi random graph model [11]: given a template hypergraph G, each of its subhypergraphs H is assigned a probability, which depends on the values of the hyperparameters. Intuitively, this probability quantifies the chance that H correctly describes the changes in G when the analysis is run with the new, more precise parameter setting. The hyperparameters quantify how much approximation occurs in each of the quantified Horn clauses of the analysis. We provide an efficient method for learning hyperparameters from prior analysis runs. Our method uses certain analytic bounds in order to avoid the combinatorial explosion of a naive learning method based on maximum likelihood; the explosion is caused by H being a latent variable, which can be observed only indirectly.
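
One concrete way to realise such a model is to let each hyperedge of the template survive into the subhypergraph independently, with a survival probability set by the hyperparameter of the Horn clause that produced it. The Python sketch below makes that assumption explicit; the independence structure, names, and numbers are illustrative, not taken from the paper.

```python
def subhypergraph_probability(G_edges, H_edges, clause_of, theta):
    """P(H | G) under an Erdos-Renyi-style model: each hyperedge of the
    template G survives into H independently, with a survival probability
    given by the hyperparameter of the Horn clause that produced it."""
    p = 1.0
    for e in G_edges:
        survival = theta[clause_of[e]]
        p *= survival if e in H_edges else (1.0 - survival)
    return p

# Illustrative use: two clauses c1, c2 with learnt hyperparameters.
G = {"e1", "e2", "e3"}
H = {"e1", "e3"}                       # hypothesised refined hypergraph
clause_of = {"e1": "c1", "e2": "c2", "e3": "c1"}
theta = {"c1": 0.9, "c2": 0.2}         # per-clause survival probabilities
print(subhypergraph_probability(G, H, clause_of, theta))  # 0.9*0.8*0.9 = 0.648
```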

Local abstraction refinement for probabilistic timed programs

PRISM and its modelling language already have native support for PTAs, and these can incorporate data variables using PRISM's built-in datatypes (bounded integers and Booleans), so we use this as a basis for modelling PTPs. We evaluated our implementation on 11 case studies. Firstly, we used 6 existing PTA benchmarks [26]: csma full, csma abst (two models of the CSMA/CD protocol); nrp malicious, nrp honest (two variants of Markowitch & Roggeman's non-repudiation protocol); and firewire impl, firewire abst (two models of the FireWire root contention protocol). Secondly, we used real-time versions of two more complex models: the Zeroconf model of [13] (zeroconf full), and the sliding window model of [11] (sliding window real-time), which uses infinite data types (an unbounded integer storing the round counter). Finally, we tested our implementation on three discrete-time (MDP) models with infinite data types: the (original, discrete-time) sliding window protocol (sliding window discr.) and bounded retransmission protocol (brp) models from [11]; and the discrete-time model of Zeroconf (zeroconf discr.) from [13], modified so that the counter of "probe" messages sent is an unbounded integer. This change does not affect the value of the property we check. For brp, we fix the parameter TIMEOUT = 16. In order to support infinite data types, we slightly extended the PRISM modelling language in the style of PASS [9]. All models and properties, and our prototype implementation, are available at [27].

Softstar: Heuristic-guided probabilistic inference

Recent machine learning methods for sequential behavior prediction estimate the motives of behavior rather than the behavior itself. This higher-level abstraction improves generalization in different prediction settings, but computing predictions often becomes intractable in large decision spaces. We propose the Softstar algorithm, a softened heuristic-guided search technique for the maximum entropy inverse optimal control model of sequential behavior. This approach supports probabilistic search with bounded approximation error at a significantly reduced computational cost when compared to sampling-based methods. We present the algorithm, analyze approximation guarantees, and compare performance with simulation-based inference on two distinct complex decision tasks.
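
At the heart of maximum entropy inverse optimal control is a softened Bellman backup: the hard minimum over successor costs is replaced by a log-sum-exp "softmin", so every path contributes probability mass. The sketch below shows that backup on an acyclic graph; Softstar additionally prunes this computation with an admissible heuristic, which is omitted here, and all names and numbers are our own illustration.

```python
from math import exp, log

def softmin(values):
    """Smoothed minimum: -log(sum(exp(-v))). Unlike min(), every
    alternative contributes, which is what makes the search probabilistic."""
    m = min(values)
    return m - log(sum(exp(m - v) for v in values))

def soft_values(succ, cost, goal):
    """Softened state values on an acyclic graph, computed by recursion
    from the goal. succ(s) lists successors; cost(s, t) is the edge cost."""
    memo = {goal: 0.0}
    def V(s):
        if s not in memo:
            memo[s] = softmin([cost(s, t) + V(t) for t in succ(s)])
        return memo[s]
    return V

# Tiny example: two routes from "a" to "goal" with costs 2.0 and 2.5.
edges = {"a": ["b", "goal"], "b": ["goal"]}
costs = {("a", "b"): 1.0, ("b", "goal"): 1.0, ("a", "goal"): 2.5}
V = soft_values(lambda s: edges[s], lambda s, t: costs[(s, t)], "goal")
print(V("a"))  # ~1.53: below min(2.0, 2.5) because both routes contribute
```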

Equational Abstraction Refinement for Certified Tree Regular Model Checking

We have presented a new CounterExample Guided Abstraction Refinement procedure for TRMC based on equational abstraction. Our approach has been implemented in TimbukCEGAR, the first TRMC toolset that is certified correct. It also yields, in part, a Java program analyzer that goes from code to verification without relying on (1) potentially heavy assumptions on data and architectures, or (2) abstraction techniques when translating the code to a TRS. We are convinced that our work opens new doors for the application of RMC approaches to rigorous system design. One of the remaining challenges is certainly to consider non-left-linear TRSs. Completion can be extended to deal with such TRSs [25]. This is necessary to verify cryptographic protocols with completion [26, 6]. The theoretical challenge is to extend CEGAR completion to non-left-linear TRSs. The technical challenge is to extend the Coq checker to handle non-left-linear TRSs and tree automata with epsilon transitions. Tackling those two goals would allow us to propose the first certified automatic verification tool for security protocols, a major advance in the formal verification area.

Policy Explanation and Model Refinement in Decision-Theoretic Planning

Reinforcement learning [89] is used when the transition and reward functions are not known a priori. It learns the dynamics of the system (the transition and reward functions) by balancing the exploration-exploitation trade-off. The exploration step is concerned with acquiring new samples to learn more about the effects of different actions and the preferences of the user; this can lead to the execution of sub-optimal actions. The exploitation step utilizes knowledge from previous exploration to select the action that is believed to provide the maximum expected reward. In most domains, it is not acceptable to allow the system to explore and execute sub-optimal actions due to an incomplete model. If the system is providing advice to a human, exploration may not be possible at all. If humans feel a recommendation is sub-optimal, they may never execute that action, and the system will never be able to learn whether it is sub-optimal. This can happen even if an action recommended by the system is actually optimal for a state, as the human may not necessarily agree with this choice. Also, if the observations depend on a human executing an action, it may not be possible to collect enough data to learn the model in any case. Finally, if a user is presented with a recommendation and she ignores it in favor of an alternate action, her assumption may be that the system will learn not to make the mistake again, which is not guaranteed under the reinforcement learning paradigm. A system that repeats the same mistakes will lead to user frustration.
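
The trade-off discussed above is easiest to see in the standard epsilon-greedy tabular Q-learning scheme, sketched below as a generic illustration (it is not this chapter's method): exploration deliberately executes actions that may be sub-optimal, which is exactly what becomes problematic when those actions are recommendations to a person.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """With probability epsilon, explore (possibly a sub-optimal action);
    otherwise exploit the action currently believed best."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, reward, s_next, actions, alpha=0.5, gamma=0.9):
    """Tabular Q-learning update from one observed transition."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
```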

Agricultural policy analysis in Finland with the AGMEMOD model: Lessons to be learnt?

One important issue affecting the AGMEMOD model results is therefore the set of assumptions relating to the supply-inducing impact of decoupled direct payments. Decoupling represents a relatively new policy shift for EU agriculture, and there is considerable uncertainty regarding the extent to which these payments are treated by farmers as being 'truly' decoupled. The decoupled payments still require that farmers carry out some activity on the land, and imposing conditions on maintaining land in agricultural use generates costs that make the "set aside" option less attractive than other alternative activities. It is also known that the risk-related effects of direct payments can be quite large, often of a magnitude similar to standard relative price effects. Decoupled payments influence farmers' behaviour by increasing overall wealth, decreasing risk aversion or making credit more accessible (Hennessy 1998, Adams et al. 2001).

Symbolic Model Checking of Probabilistic Knowledge

To date, there has been no implemented model checking system that deals with the logic of knowledge, time and subjective probability, i.e., probability relative to agent knowledge. The model checking system that we develop for this combination in this paper is based on the logic of knowledge and probability [11], which adds to the logic of knowledge the ability to express facts about probability (implicitly) conditioned on what an agent knows, using expressions such as "Agent i's probability of formula φ is greater than 1/2". We follow one of the semantics of Halpern and Tuttle [15], in which an agent's probabilistic knowledge is determined using the assumption of synchronous perfect recall. The models that we handle are in the form of discrete-time Markov chains. Our approach to model checking this logic is a generalisation of an approach, due to van der Meyden and Su [28], using symbolic model checking techniques for model checking a fragment of the logic of knowledge and time with perfect recall.
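
The semantics can be made concrete with a brute-force calculation on a toy Markov chain: enumerate finite paths, group them by the agent's observation history (synchronous perfect recall), and condition within each group. This is only an explanatory sketch, exponential in the horizon; the paper's contribution is precisely to avoid such enumeration with symbolic techniques, and all names here are ours.

```python
from itertools import product
from collections import defaultdict

# A toy 2-state DTMC; the agent observes only a coarse label of each state.
states = ["s0", "s1"]
P = {("s0", "s0"): 0.5, ("s0", "s1"): 0.5, ("s1", "s1"): 1.0, ("s1", "s0"): 0.0}
init = {"s0": 1.0}
obs = {"s0": "a", "s1": "a"}       # both states look alike to the agent
phi = lambda s: s == "s1"          # the fact whose probability we ask about

def subjective_probability(horizon):
    """Pr_i(phi) after `horizon` steps: condition the path distribution on
    the agent's full observation history (synchronous perfect recall)."""
    by_history = defaultdict(lambda: [0.0, 0.0])  # history -> [mass, mass & phi]
    for path in product(states, repeat=horizon + 1):
        p = init.get(path[0], 0.0)
        for s, t in zip(path, path[1:]):
            p *= P[(s, t)]
        if p == 0.0:
            continue
        h = tuple(obs[s] for s in path)
        by_history[h][0] += p
        by_history[h][1] += p if phi(path[-1]) else 0.0
    return {h: m_phi / m for h, (m, m_phi) in by_history.items()}

print(subjective_probability(2))   # {('a','a','a'): 0.75} on this toy chain
```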

Fast Directed Model Checking via Russian Doll Abstraction

We have already listed the previous methods for generating heuristic functions for directed model checking [6, 9, 11, 14, 15, 16]. By far the closest relative to our work is the work by Qian and Nymeyer [16], which uses an intuitively similar strategy for generating pattern database heuristics. As we have shown, our improved strategy yields much better heuristic functions, at least in our suite of benchmarks. It remains to be seen whether that is also the case for other problems. It should also be noted that Qian and Nymeyer [16] use their heuristic function in a rather unusual BDD-based iterative deepening A* procedure, and compare that to a BDD-based breadth-first search. As the authors state themselves, it is not clear in this configuration how much of their empirically observed improvement is due to the heuristic guidance, and how much of it is due to all the other differences between the two search procedures. In our work, we use standard heuristic search algorithms. We finally note that Qian and Nymeyer [16] state as the foremost topic for future work to find better techniques for choosing the abstraction; this is exactly what we have done in this paper.

A probabilistic model of Ancient Egyptian writing

These factors motivate the search for an accurate and robust model that can be trained on data, and that becomes more accurate as more data becomes available. Ideally, the model should be amenable to unsupervised training. Whereas linguistic models should generally avoid unwarranted preconceptions, we see it as inevitable that our model has some knowledge about the writing system already built in, for two reasons. First, little training material is currently available, and second, the number of signs is quite large, so that the little training material is spread out over many parameters. The a priori knowledge in our model consists of a sign list that enumerates possible functions of signs and a formalization of how these functions produce words. This knowledge sufficiently reduces the search space, so that probabilistic parameters can be relatively easily estimated.
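
To make the sign-list idea concrete, here is a deliberately simplified generative sketch of our own (not the paper's actual formalization): each sign token takes one of the functions the sign list allows for it, and the probability of a reading is the product of those per-sign choices. The sign codes and probabilities below are made up for illustration.

```python
# Toy sign list: for each sign, its possible functions and their probabilities.
sign_list = {
    "G17": {"phonogram:m": 0.8, "logogram:owl": 0.2},
    "X1":  {"phonogram:t": 0.9, "determinative": 0.1},
}

def reading_probability(signs, functions):
    """Probability of one reading: one function chosen per sign token."""
    p = 1.0
    for sign, fn in zip(signs, functions):
        p *= sign_list[sign].get(fn, 0.0)   # 0 if the sign list disallows fn
    return p

# The most probable reading of this two-sign sequence is "m" + "t".
print(reading_probability(["G17", "X1"], ["phonogram:m", "phonogram:t"]))  # 0.72
```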

Fast Directed Model Checking via Russian Doll Abstraction

With half of the related work discussed above, namely [5–7], we share the fact that we are working with UPPAAL, and we also share the set of benchmarks with these works. The benchmarks are meaningful in that they stem from two industrial case studies [12, 13]. Table 1 gives a preview of our results with our "Russian Doll" approach; we re-implemented the two heuristic functions defined in [1]; for each of [5–7], we could run the original implementation; finally, we implemented the abstraction strategy of [8], for comparison with our more sophisticated abstraction strategy (we created the pattern database with UPPAAL for our strategy). Every entry in Table 1 gives the total runtime (seconds), as well as the length of the found error path. The result shown is the best one that could be achieved, on that instance, with the respective technique: from the data points with the shortest error path length, we selected the one with the smallest runtime (detailed empirical results are given in Section 5). A dash means the technique runs out of memory on a 4 GByte machine. Quite evidently, our approach drastically outperforms all the other approaches. This signifies a real boost in the performance of directed model checking, at least on these benchmarks.

Abstraction In Model Checking Real-Time Temporal Logic of Knowledge

Abstract — Model checking in the real-time temporal logic of knowledge TACTLK confronts the same challenge as traditional model checking, namely the state-space explosion problem. In order to alleviate this problem, we present our abstraction techniques. For the real-time part of TACTLK, that is, TACTL, we adopt abstract discrete clock valuations, so that the infinite state space of a real-time interpreted system can be converted into a finite one. For the epistemic operator K in TACTLK, we define epistemic equivalence between abstract states with respect to an agent; the corresponding equivalence relations can then be derived and used to merge abstract states, further simplifying the state space of the real-time interpreted system. Finally, we adopt a variant of the standard railroad crossing system to illustrate the effectiveness of our abstraction techniques.
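
To illustrate what discretising clock valuations can look like, here is a small sketch in the style of the classic region construction; the paper's own abstraction may differ in its details, and the function and bound below are ours.

```python
def abstract_clock(x, c_max):
    """Abstract a real-valued clock reading into finitely many classes:
    its integer part, whether the fractional part is zero, and a single
    class for everything above the relevant bound c_max."""
    if x > c_max:
        return ("above", c_max)
    return (int(x), "int" if float(x).is_integer() else "frac")

print(abstract_clock(2.0, 5))   # (2, 'int')
print(abstract_clock(2.3, 5))   # (2, 'frac')
print(abstract_clock(7.1, 5))   # ('above', 5): all large values collapse
```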

Abstraction for Model Checking Modular Interpreted Systems over ATL

Note that we will assume the existence of handcrafted equivalence relations, e.g. generated from manual annotations of program code, since any automatic abstraction generation or refinement (as in [14] for two-player games) can only work in typical cases but not in the worst case. That is also the reason why we do not work out how our algorithms can be implemented fully symbolically: in the worst case this will be as bad as a non-symbolic algorithm. We are not trying to neglect the usefulness of either of those techniques, but our focus lies on something else: a provable upper bound on the runtime which is exponential in the sum of the sizes of the abstract systems but linear in the size of a succinct representation of the concrete system (see Theorem 6.1). The exponential part of this is not as bad as it sounds because, as argued above, our technique allows for more than one abstraction; therefore each abstraction can be quite small while much of the relevant information of the whole concrete system is still preserved for the overall model checking process.
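
Written out as a formula, the claimed bound has this shape (the symbols are ours: R for the succinct representation of the concrete system and A_1, ..., A_k for the abstract systems):

```latex
\[
  \text{runtime} \;\in\; O\!\left( |R| \cdot 2^{\sum_{i=1}^{k} |A_i|} \right)
\]
```

Several small abstractions keep the exponent, the sum of their sizes, manageable, which is the point made above.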

Probabilistic model checking: a comparison of tools

The sparse engine outperforms the hybrid engine, in this case by at most a factor of 340. For the unbounded until property in Figure 4.2(b) there appears to be no performance difference between the two engines; we will address this phenomenon later on (in Section 4.4.2, which discusses the causes of performance differences). When examining Figure 4.2(a) more closely, another difference between the tools emerges. When the time bound t increases from 5 to 40, we can see that it takes MRMC longer to verify the formula, which is to be expected, since there is a direct relation between t and the number of calculations performed by MRMC (see Section 2.5.3). The difference between t = 5 and t = 40 is hardly noticeable in the performance graph of both PRISM engines. This can be explained by the way PRISM operates. Prior to performing the computations that depend on t, PRISM carries out some pre-computations on the matrix of the model. These pre-computations are independent of t and take much longer than the second (t-dependent) part. Because the second part of the process is so quick compared to the first part, we hardly notice the overall difference between model checking various time bounds. One might argue that the difference between the chosen time bounds (t = 5 and t = 40) is too marginal to show a significant difference in model checking time. We therefore performed a series of measurements with an increased time bound of t = 400 and found, as before, no significant difference in model checking time. Detailed timing values for the SLE case study can be found in Appendix C.1.
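
The relation between t and the amount of work is easy to see in the standard algorithm for time-bounded reachability on a DTMC, sketched below on a toy model (this is the textbook construction, not the actual code of either tool): one matrix-vector product per time step, so the cost grows linearly in t.

```python
import numpy as np

# Pr(reach goal within t steps) on a 3-state DTMC with goal made absorbing.
P = np.array([[0.5, 0.4, 0.1],
              [0.0, 1.0, 0.0],    # state 1: goal, absorbing
              [0.0, 0.0, 1.0]])   # state 2: fail, absorbing
prob = np.array([0.0, 1.0, 0.0])  # indicator vector of the goal state

t = 40
for _ in range(t):                # cost grows linearly in the bound t
    prob = P @ prob
print(prob[0])                    # Pr of reaching goal from state 0 within t
```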

On the approximation of the SABR model: a probabilistic approach

We will study one of the most frequently used stochastic volatility models in practice: the SABR model, originally proposed in Hagan et al. (2002). It is widely used to model the forward price of a stock or the forward LIBOR/swap rates in the fixed income market. The model is essentially a stochastic volatility extension of the constant elasticity of variance (CEV) model (studied in Schroder (1989) and Cox (1996)) with a lognormal specification of the volatility process. In Hagan et al. (2002), the authors use singular perturbation techniques to obtain explicit, closed-form algebraic formulae for the implied volatility, enabling very efficient implementation of the model on a daily basis. The quality of this so-called SABR formula is quite satisfactory for short maturities and strikes not too far from the current underlying. It becomes much poorer for pricing long-dated options or strikes in the wings. In addition, the formula itself has an internal flaw: implied volatilities for long maturities computed by this formula usually imply a negative density of the underlying at very low strikes.
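
For reference, the SABR dynamics in their standard form (this is the textbook specification; the notation is ours, not copied from the text above):

```latex
\[
  dF_t = \sigma_t F_t^{\beta}\, dW_t, \qquad
  d\sigma_t = \nu\, \sigma_t\, dZ_t, \qquad
  dW_t\, dZ_t = \rho\, dt,
\]
```

with elasticity β in [0, 1], vol-of-vol ν ≥ 0, and correlation ρ. The lognormal volatility process mentioned above is the σ_t equation, and β < 1 gives the CEV-style backbone of the forward.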

A probabilistic model for information and sensor validation

• To what extent do the assumptions of the model hold in practice?
• How does the model scale up as the number of variables increases?

The model makes two non-trivial assumptions to enable the development of the theory. One assumption is that if there is an error in a variable, it will be detected; the second is that if there is a real fault, it will lead to the detection of apparent faults in related variables. The first assumption is made by most models of validation that depend on the process of estimating an expected value and the extent to which a reading departs from it. The diverse range and scale of successful applications of Bayesian networks in scheduling, medical diagnosis, vehicle control and weather forecasting suggests they are as good as any other approach for developing estimation models (see Chapter 12 of Neapolitan [25]). Their use does, however, offer several advantages: they were developed specifically to represent the probability distribution of the data, they enable a sound estimate of the expected probability distribution of a variable, and there is a substantial body of knowledge and experience of utilising learning algorithms for producing Bayesian networks. Like other models, the assumption does break down if a reading does not depart sufficiently from an expected value, for example when there is a very mild failure.
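
The estimate-and-compare idea behind the first assumption can be sketched in a few lines (our illustration; in the model above, the expected distribution would come from a Bayesian network conditioned on related sensors):

```python
def validate(reading, predicted_mean, predicted_std, threshold=3.0):
    """Flag a reading whose deviation from the model's estimate is large.
    Here the estimate is simply passed in; the paper's setting derives it
    from a Bayesian network over related variables."""
    z = abs(reading - predicted_mean) / predicted_std
    return "apparent fault" if z > threshold else "ok"

print(validate(reading=97.0, predicted_mean=80.0, predicted_std=4.0))  # fault
print(validate(reading=81.5, predicted_mean=80.0, predicted_std=4.0))  # ok
# A very mild failure stays within the threshold and goes undetected,
# which is the breakdown case noted above.
```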

A Probabilistic Earley Parser as a Psycholinguistic Model

In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word w_i given its prefix w_0...i-1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995), which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpus-frequency data, the operation of Stolcke's probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.
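
In equation form, this load measure is the standard surprisal, computable from the prefix probabilities that a probabilistic Earley parser maintains incrementally (the symbol α for the prefix probability is ours):

```latex
\[
  \mathrm{surprisal}(w_i)
    = -\log_2 P(w_i \mid w_{0 \dots i-1})
    = \log_2 \alpha(w_{0 \dots i-1}) - \log_2 \alpha(w_{0 \dots i}),
\]
```

i.e. the log-ratio of successive prefix probabilities: a large drop in prefix probability at w_i means many structural options were just disconfirmed, hence high predicted load.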

Exploiting a Probabilistic Hierarchical Model for Generation

could easily be augmented with a preprocessor that maps a semantic representation to our syntactic input; this is not the focus of our research. However, there are two more important differences. First, the hand-crafted grammar in Nitrogen maps directly from semantics to a linear representation, skipping the arborescent representation usually favored for the representation of syntax. There is no stochastic tree model, since there are no trees. In Fergus, initial


Bisimulation minimisation and probabilistic model checking

Line 1 (see Algorithm 1 on page 27) of LUMP initialises L. This set is implemented as a linked list. Every item in this list has a pointer to a block. Line 5 counts the number of blocks in the final partition. In the implementation, every block is assigned a unique number which corresponds to its row index in the lumped transition matrix. Line 9 chooses an arbitrary state from a block; our implementation simply takes the first state. Since some model checking algorithms of MRMC require the matrix values to be ordered by column index, each row (i.e. the arrays cols and vals) of the lumped transition matrix is sorted after it has been filled completely. This is done using a slightly adapted version of quicksort [1].
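
The row-sorting step is easy to get subtly wrong, since cols and vals must be permuted together. A minimal sketch of the idea (illustrative Python, not MRMC's C implementation, which uses an adapted quicksort):

```python
def sort_row(cols, vals):
    """Sort one sparse matrix row by column index, keeping the value
    array aligned with the column array."""
    order = sorted(range(len(cols)), key=lambda k: cols[k])
    return [cols[k] for k in order], [vals[k] for k in order]

cols, vals = [3, 0, 2], [0.2, 0.5, 0.3]
print(sort_row(cols, vals))   # ([0, 2, 3], [0.5, 0.3, 0.2])
```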

Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement

For human beings, the visual attention system is mainly made up of both bottom-up and top-down attention mechanisms, which enable us to allocate attention to the most salient stimulus, location, or feature, i.e. the one that evokes stronger neural activation than others in natural scenes [5-7]. Bottom-up attention helps us gather information from separate feature maps, e.g. color or spatial measurements, which is then incorporated into a global contrast map representing the most salient objects/regions that pop out from their surroundings [11]. Top-down attention modulates the bottom-up attentional signals and helps us voluntarily focus on specific targets/objects, e.g. faces and cars [15]. However, due to the high level of subjectivity and the lack of a formal mathematical representation, it is still very challenging for computers to imitate the characteristics of our visual attention mechanisms. In [11], it is found that the two attentional functions have distinct neural mechanisms but constantly influence each other. To this end, we aim to build a cognitive framework in which a separate model for each attentional mechanism is integrated to determine visual attention for salient object detection.
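
A global contrast map of the kind mentioned above can be sketched very simply: score each pixel by how far its color lies from the image's mean color, so regions that differ from their surroundings pop out. This toy version is our own illustration, far simpler than the paper's Gestalt-laws guided optimization:

```python
import numpy as np

def global_contrast_saliency(image):
    """image: H x W x 3 float array; returns an H x W saliency map where
    each pixel's score is its color distance from the global mean color."""
    mean_color = image.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(image - mean_color, axis=2)
    return sal / (sal.max() + 1e-12)        # normalise to [0, 1]

img = np.zeros((4, 4, 3)); img[1:3, 1:3] = 1.0   # bright patch on dark ground
print(global_contrast_saliency(img).round(2))    # patch scores highest
```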
