Lotfi A. Zadeh, in 1984 [Zadeh, 1984], reviewed the book on the basic theory of belief functions. He presented examples in which the Dempster-Shafer combination rule contradicts human intuition and reasoning. Many similar examples can be found in the literature, where the information from multiple sources is highly conflicting and the DS combination rule fails to handle it. This opened a new field of research: proposing combination rules for information fusion when the information from multiple sources is highly conflicting, for whatever reason. Researchers in this area have proposed many algorithms for the management of conflicting information. In particular, the problem deals with the redistribution of the conflicting mass among the possible targets. Simple algorithms for redistributing the conflicting mass are proposed in [Yager, 1986, Smets, 2007], where Yager's approach assigns the conflicting mass to the ignorance. Other algorithms include probabilistic approaches [Smets, 1993, Choi et al., 2009, Denoeux, 2006, Joshi et al., 1995], an idempotent combination rule [Destercke and Dubois, 2011], and algebraic operators [Ali et al., 2012, Lee and Zhu, 1992, Smets, 2007]. Mixing or averaging is proposed in [Sentz and Ferson, 2002, Murphy, 2000], and weighted averages in [Han et al., 2008, Han et al., 2011]; in these methods, the weights of the evidences are defined by their pignistic probabilities. Distance-based combinations of continuous operators are developed in [Liang-zhou et al., 2005, Attiaoui et al., 2012], where a distance-based similarity measure defines the averaging operator for the combination of evidences. The measures in [Zhang et al., 2012, Jousselme and Maupin, 2012] use the Jousselme metric to define averaging rules.
Some new metrics are defined in [Djiknavorian et al., 2012]; these metrics can also be used to define weights for information fusion.
Murphy’s method is a modification of the model without changing Dempster's rule. Murphy proposes a combination rule based on the mean of the evidence: first, the basic probability assignments of the n pieces of evidence are averaged; then the average is combined with itself n−1 times using Dempster's rule. Compared with other methods, Murphy’s method can deal with conflicting evidence combination and converges faster. However, Murphy’s method is clearly inadequate for multi-source information, since it takes a simple average without considering the relationships between the pieces of evidence.
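This averaging-then-combining procedure can be sketched as follows (a minimal illustration, not the original implementation; mass functions are represented here as dictionaries from frozenset hypotheses to masses, and all names are illustrative):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule (normalized)."""
    fused = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    k = 1.0 - conflict
    return {h: v / k for h, v in fused.items()}

def murphy_combine(masses):
    """Murphy's rule: average the n BPAs, then Dempster-combine
    the average with itself n - 1 times."""
    n = len(masses)
    avg = {}
    for m in masses:
        for h, v in m.items():
            avg[h] = avg.get(h, 0.0) + v / n
    result = avg
    for _ in range(n - 1):
        result = dempster_combine(result, avg)
    return result
```

On Zadeh-style conflicting evidence, the averaging step prevents the low-support common hypothesis from absorbing all the mass, which is the behavior the paragraph above describes.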
The Dempster-Shafer mathematical theory of evidence is both a theory of evidence and a theory of probable reasoning. The degree of belief models the evidence, while Dempster’s rule of combination (DRC) is the procedure for aggregating and summarizing a corpus of evidences. However, previous research efforts identify several limitations of Dempster’s rule of combination: 1. Associative. For DRC, the order of the information in the aggregated evidences does not impact the result, yet a non-associative combination rule is necessary in many cases. 2. Non-weighted. DRC implies that we trust all evidences equally. In reality, however, our trust in different evidences may differ; in other words, we should consider various factors for each evidence. Yager, and Yamada and Kudo, proposed rules to combine several evidences presented sequentially, addressing the first limitation. Wu et al. suggested a weighted combination rule to handle the second limitation; however, the weights for different evidences in their rule are ineffective and insufficient to differentiate and prioritize evidences in terms of security and criticality. Our extended Dempster-Shafer theory with importance factors can overcome both of the aforementioned limitations.
This rule was proposed by Dempster in the mathematical theory of evidence and is used to combine evidences obtained from two or more independent sources. The rule captures the agreement between multiple sources and discards all conflicting evidence through a normalization factor. Dempster's rule of combination is a method for fusing belief constraints, and it produces correct results when fusing belief constraints from different sources, for which it is the appropriate fusion operator. The rule also quantifies the level of conflict between the sources of information; the combination behaves well only for sources with high belief and low conflict. The steps to evaluate Dempster’s rule are as follows:
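The enumerated steps do not survive in this excerpt; a minimal sketch of the rule, with illustrative names and mass functions represented as dictionaries over frozenset hypotheses, is:

```python
from itertools import product

def dempster_rule(m1, m2):
    """Dempster's rule: (m1 (+) m2)(A) = sum_{B∩C=A} m1(B)m2(C) / (1 - K),
    where K = sum_{B∩C=∅} m1(B)m2(C) is the degree of conflict."""
    combined = {}
    conflict = 0.0
    # Step 1: multiply every pair of masses and intersect their hypotheses.
    for (b, x), (c, y) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            # Step 2: accumulate mass assigned to the empty set (conflict K).
            conflict += x * y
    # Step 3: normalize by 1 - K so the fused masses sum to one.
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are completely contradictory")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}
```

The normalization in step 3 is exactly the mechanism by which the rule "ignores" conflicting evidence, and the accumulated K is the level of conflict mentioned above.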
An information retrieval model using Dempster-Shafer theory, also known as evidence theory, is used in this paper. In this model, each query-document pair is taken as a piece of evidence for the relevance between a document and a query. The evidence is combined using Dempster’s rule of combination, the belief committed to relevance is obtained, and documents are then ranked accordingly. To validate the feasibility of this approach, evidences from a sample document collection, the OHSUMED dataset of the TREC-9 filtering track, are considered and the results are compared with the traditional VSM model in terms of precision and recall measures. It is found that the Dempster-Shafer model performs better than VSM for information retrieval.
In this paper, the Dempster-Shafer method is employed as the theoretical basis for creating data classification systems. Testing is carried out using three popular (multiple attribute) benchmark datasets that have two, three and four classes. In each case, a subset of the available data is used for training to establish thresholds, limits or likelihoods of class membership for each attribute, and hence create mass functions that establish probability of class membership for each attribute of the test data. Classification of each data item is achieved by combination of these probabilities via Dempster’s Rule of Combination. Results for the first two datasets show extremely high classification accuracy that is competitive with other popular methods. The third dataset is non-numerical and difficult to classify, but good results can be achieved provided the system and mass functions are designed carefully and the right attributes are chosen for combination. In all cases the Dempster-Shafer method provides comparable performance to other more popular algorithms, but the overhead of generating accurate mass functions increases the complexity with the addition of new attributes. Overall, the results suggest that the D-S approach provides a suitable framework for the design of classification systems and that automating the mass function design and calculation would increase the viability of the algorithm for complex classification problems.
Consistent with the qualitative analysis of the model surface (Fig. 6), the t test comparing mean values for positive locations (n = 30) with 50 randomly sampled points indicated a significant difference (p < 0.001) between these two samples (Fig. 9). Further, the negative binomial regression failed to indicate a significant relationship (p > 0.05) between mosquito counts and the D-S surface values. However, when the same analysis was performed after removal of an outlier where a high mosquito count was obtained (n = 323 catches), the relationship between abundance and D-S model values was significant (Wald Chi Square = 18.86, p = 0.00, df = 14). This suggests that the D-S model results produced a relatively realistic depiction of both presence and abundance of An. gambiae at low to moderate density, but appeared insufficient to estimate areas of very high density populations. This assumption is consistent with the work of other researchers [56, 57] who have found positive relationships between predicted probability of presence and abundance within a range of taxa when using species distribution models such as Maxent.
The subtask 1 of the BB BioNLP-ST ranks competitors using the SER measure, which must be as close as possible to 0. We are quite close to the winner, with a SER of 48.7% against 46%. Our F-measure (60.8%) is even better than the winner’s F-measure (57%). Without our mistake, we would have been placed equal first with a far better F-measure (62.9%). We can also notice that the WHISK rule set contribution is negative, while this was not the case on the development data.
In our work, we have slowly moved towards a particular vision of what kinds of tools are needed in order to attain this aim. They should allow any number of independent knowledge sources to work together, each contributing a piece of the whole solution. This in itself is not a new idea; many NLP programs use more than one knowledge source to do what they were designed to do. More often than not, however, their combination is ‘hard-wired’ into the program. What we are experimenting with is a setup where new sources of knowledge can be ‘plugged in’ to the system without extensive rewiring of the whole application. For the time being, this ‘blackboard model’ view of tagging and alignment is more of a conceptual tool than a real one; it is still only a way of looking at the problem of tagging and aligning a multilingual parallel corpus. Even so, this way of looking at things has yielded some interesting fresh ideas and insights.
characteristics are grouped. As we discussed earlier, the real performance of the global production planning rule is higher than our model predicts because of the manual rescheduling. Furthermore, a global planning rule provides clarity on the shop floor by including not only the production orders waiting but also production orders that arrive in the future. One practical implication is that it is easier to respond to high workloads in the near future by planning more personnel to speed up the process. Another example is that Jongbloed is capable of planning preventive maintenance at exactly the right time, when the machine has a relatively low workload. Capturing ‘vague’ variables such as clarity in a performance indicator is almost impossible, but we should definitely take these into account when picking a new production planning method.
The direct application of a GA with the MC adaptation rule to pattern association has been explored in this paper. The aim is to introduce an alternative approach to solve the pattern association problem. The results from the experiments conducted on the algorithm are quite encouraging. Nevertheless, more work needs to be performed, especially on tests with noisy input patterns. This concept can be extended to pattern recognition for alphabets of different languages, shapes and numerals. Real datasets of handwritten characters may also be tested using the presented approach, and the comparison with previous approaches may be analyzed.
We show here that during a simple social interaction, such as producing a rule-dependent motor response to the movements of another individual, 2 distinct processes occur in the motor system of the participant, in the critical period between the onset of the cue movement and completion of the task. The earliest phenomenon that can be read in the motor output of the participant appears around 150 ms from the onset of the cue movement (Fig. 12). It is a visuomotor mapping that specifically imitates the observed motor act and does not depend on the arbitrary rule to be implemented. Two basic properties of this early response appear to be particularly relevant to identify it as a pure mirror response. First, the response is specific for the observed effector’s identity, because it mirrors equally the cue movements of the little finger and the index finger. Second, the early mirror effect is independent of the spatial relations between the participant’s effector and the observed one, because (Fig. 10) visual stimuli were orthogonal to the participant’s hand and randomly oriented leftward or rightward. Furthermore, this early phenomenon can be defined as automatic, if the pivotal property that distinguishes automatic from controlled processes is that an automatic process is triggered without the actor’s intending to do so and cannot be stopped even when the actor intends to stop it and it is in that actor’s best interests to do so (Kornblum et al. 1990).
The output gap which enters the Taylor rule (3) is an unobservable variable and can only be inferred indirectly from observed output dynamics. Various empirical decompositions of actual output into a long-run trend component (potential output) and a short-run cyclical component (output gap) have been proposed in the literature. These include the Hodrick-Prescott filter, linear or quadratic detrending, as well as decompositions suggested by Watson (1986) and Clark (1989).
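As an illustration of such a trend/cycle split, here is a minimal sketch of the Hodrick-Prescott filter using a direct dense-matrix solve (the function name is illustrative; lam = 1600 is the conventional smoothing value for quarterly data):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split a series into a smooth trend
    (potential output) and a cycle (output gap)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # K is the (n-2) x n second-difference operator penalized by lam.
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    # First-order condition of the HP objective: (I + lam * K'K) trend = y.
    trend = np.linalg.solve(np.eye(n) + lam * K.T @ K, y)
    cycle = y - trend  # the output-gap proxy
    return trend, cycle
```

A sanity check on the design: a purely linear series has zero second differences, so the filter returns it unchanged as trend, with a zero cycle.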
However, the model is able to demonstrate local superiority of an ensemble with transfer between strangers at high and very low substrate concentrations in combination with a higher success factor (figures 8, 9A and 9B) than in a comparable entangled party. The success factor is able to compensate locally for absent entanglement. Surprisingly, there is room for charity in this completely selfish and rational model. The invested substrate is lost to the source, but the ensemble of strangers does locally better at a higher success factor than the biologically entangled ensemble at a lower success factor. In man, a success factor is an argument per se. Therefore, in non-entangled human ensembles the size of success factors will be a central aspect of discussions and manipulations. A second ingredient will be the fabrication of apocalyptic stories with a scenario of hardship and menace, as evolution has taught our species that giving to non-entangled parties is only useful then. The ensemble will end when, in repeated cycles, the source is constantly giving without the ability to regenerate in an open system.
The adviser must acknowledge in writing that he is a fiduciary and must agree to adhere to the best interest standard of care. (As a practical matter, the best interest standard of care is a combination of ERISA’s prudent man rule and ERISA’s duty of loyalty. In other words, those concepts are being extended from ERISA to IRAs.)
C5: PICO element assessment and selection identifies the most potential sentence for each PICO element. At the classification phase (C4), different sentences can be classified under the same PICO element, e.g. element P. We need to assess the pertinence of each sentence that competes for the same PICO element. In the literature, some authors have used only the positional aspect as the main criterion [5, 8, 24]; others have used a baseline [9, 25], cross-validation [14, 17] or voting among many MLM classifiers. In our case, we suggest some rules to assess the pertinence of a sentence against the PICO elements. These rules are based on the positional features, the semantic features and the coexistence of different PICO elements in the same phrase. For example, we define the following rule to assess the most potential sentence for the P element:
snippet and are linked to the dialogue act(s) with the same ids. The relation between monologue snippets and dialogue act segments is one-to-many. In other words, one snippet (e.g. snippets with id=0, id=2) can be expressed by multiple dialogue act segments. Rules are extracted as follows: For each (automatically extracted) sub-structure of the RST structures on the monologue side, a rule is created (see Table 4). Two constraints restrict extraction of sub-structures: 1) spans of the structure’s terminal nodes must be consecutive and 2) none of the ids of the terminal nodes are shared with a node outside the sub-structure.
In the space–time analysis, we try to detect clusters that happen in space and time concomitantly. One possible methodology is the space–time Scan statistic. The main difference between the circular Scan statistic and the space–time Scan is the time period and the cylindrical scanning format. The sweep is made by means of cylinders that present a circular base, equivalent to the geographic dimension, and a height corresponding to the interval of time. This base is centered on one of the centroids of the geo-objects contained in the geographic region of study, with the radius varying in size continuously. It is indicated that the time interval is limited to half of the total period, and the geographical dimension is likewise bounded. The scan statistic (3) takes the form S = max over a ∈ A of the likelihood ratio, where A is the set of candidate cylinders.
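The maximization over candidate cylinders can be sketched as follows (a minimal illustration in the style of Kulldorff's Poisson scan statistic; the function names and the likelihood-ratio form are assumptions, not taken from this text):

```python
import math

def poisson_llr(c, e, total):
    """Log-likelihood ratio for one cylinder: c observed cases and
    e expected cases, out of `total` cases in the study region."""
    if c <= e:
        return 0.0  # keep only cylinders with elevated risk
    return (c * math.log(c / e)
            + (total - c) * math.log((total - c) / (total - e)))

def scan_statistic(cylinders, total):
    """S = max over candidate cylinders a in A of the likelihood ratio.
    `cylinders` is a list of (observed, expected) pairs, one per cylinder."""
    return max(poisson_llr(c, e, total) for c, e in cylinders)
```

Each (observed, expected) pair summarizes one cylinder, i.e. one combination of circular base and time interval produced by the sweep described above.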
Abstract—We present a new decision making model by using the Dempster-Shafer belief structure that uses probabilities, weighted averages and the ordered weighted averaging (OWA) operator. Thus, we are able to represent the decision making problem considering objective and subjective information and the attitudinal character of the decision maker. For doing so, we use the ordered weighted averaging – weighted average (OWAWA) operator. It is an aggregation operator that unifies the weighted average and the OWA in the same formulation. As a result, we form the belief structure – OWAWA (BS-OWAWA) aggregation. We study some of its main properties and particular cases. We also present an application of the new approach in a decision making problem concerning political management.
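The unification of the weighted average and the OWA operator can be sketched as a convex combination of the two (a minimal illustration; the parameter name beta and the descending ordering are assumptions, since the original presents a single unified formulation):

```python
def owawa(values, owa_weights, wa_weights, beta):
    """OWAWA: beta * OWA + (1 - beta) * weighted average.
    beta = 1 recovers the pure OWA, beta = 0 the pure weighted average."""
    assert abs(sum(owa_weights) - 1.0) < 1e-9
    assert abs(sum(wa_weights) - 1.0) < 1e-9
    assert 0.0 <= beta <= 1.0
    # OWA weights apply to the values re-ordered from largest to smallest.
    ordered = sorted(values, reverse=True)
    owa = sum(w * b for w, b in zip(owa_weights, ordered))
    # WA weights apply to the values in their original positions.
    wa = sum(w * a for w, a in zip(wa_weights, values))
    return beta * owa + (1.0 - beta) * wa
```

Varying beta models the attitudinal character of the decision maker, while the two weight vectors carry the objective (WA) and subjective (OWA) information mentioned in the abstract.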
Fuzzy Set theory develops models of uncertainty based on the degree to which the combined evidence indicates membership of the set under consideration (e.g. membership of a land cover class). The support for different hypotheses can be evaluated using a suite of methods in fuzzy theory, from (simple) weighted linear or convex combination of evidence to (more complex) ordered weighted averaging. Fisher noted that the minimum interval is the standard approach for combining information in fuzzy sets but is counter-intuitive when it is used to compare different land cover classes – it only makes sense in the context of fuzzy land cover when comparing fuzzy sets of the same class. For these reasons a number of alternative operators have been suggested. In this case the fuzzy memberships were defined using the weights, w i,j , defined in Equation 1, which were transformed into fuzzy memberships.