Abstract—Recently, there has been much research into the effective representation and analysis of uncertainty in human responses, with applications in cyber-security, forest and wildlife management, and product development, to name a few. Most of this research has focused on representing response uncertainty as intervals, e.g., “I give the movie between 2 and 4 stars.” In this paper, we build upon the model-based interval agreement approach (IAA) for combining interval data into fuzzy sets and propose the efficient IAA (eIAA) algorithm, which enables efficient representation of, and operation on, the fuzzy sets produced by IAA (and other interval-based approaches, for that matter). We develop methods for efficiently modeling, representing, and aggregating both crisp and uncertain interval data (where the interval endpoints are themselves intervals). These intervals are assumed to be collected from individual or multiple survey respondents over single or repeated surveys; although, without loss of generality, the approaches put forth in this paper could be used for any interval-based data where representation and analysis are desired. The proposed method is designed to minimize loss of information when transferring the interval-based data into fuzzy set models and then when projecting onto a compressed set of basis functions. We provide full details of eIAA and demonstrate it on real-world and synthetic data.
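The core aggregation idea behind IAA-style models can be illustrated in a few lines (a minimal sketch with invented survey data; the function name is ours, not the eIAA API): the membership of a point is the fraction of collected intervals that cover it.

```python
# Sketch of interval-agreement aggregation: membership at x is the fraction
# of collected [lo, hi] intervals containing x. Data below is illustrative.

def agreement_membership(intervals, x):
    """Fraction of [lo, hi] intervals that contain x (0.0 if none given)."""
    if not intervals:
        return 0.0
    return sum(1 for lo, hi in intervals if lo <= x <= hi) / len(intervals)

# Hypothetical survey responses such as "between 2 and 4 stars".
responses = [(2.0, 4.0), (3.0, 5.0), (2.5, 3.5)]

membership_at_3 = agreement_membership(responses, 3.0)    # all three intervals cover 3.0
membership_at_4p5 = agreement_membership(responses, 4.5)  # only (3.0, 5.0) covers 4.5
```

Evaluating this membership over a grid of points yields the fuzzy set that the eIAA-style compression would then project onto a basis.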
Among all methods discussed in Section 2.2, our method is closest to those based on shock graphs, but with a larger scope of applicability. To the best of our knowledge, no previous attempt has been made to specifically address topologically diverse shapes. Specifically, our method is applicable to shapes of nonzero genus, while those in [78, 65], for example, fall short, as they view a shock graph as a tree, which is generally invalid in such cases. We must note that level set methods [64, 48, 83, 18] have also been proposed for shape modeling; they employ distance fields for topology preservation. While these low-level modeling methods have remarkable segmentation capabilities, they have not been applied to shape classification. Our proposed method uses a shape (such as the segmented output of a level set technique) as an input for modeling, ultimately achieving a compact representation suitable for subsequent storage and classification.
The proposed solution considers the use of block structures for system modeling and presents a formal way of representing them. The approach views a system, from a high level of perception, as a set of subsystems or components. The following issues are addressed: i) proper labeling of components, ii) further labeling of subgroups, iii) relative positioning, iv) association and relationships with each other, v) association and relationship labeling, and vi) different types of correspondences. In the graphical notation, nodes represent the components, and edges connect the components via communication channels.
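As an illustration of such a notation, components and labeled communication channels can be held in a small graph structure (the component names, labels, and helper function below are invented for illustration, not part of the proposed formalism):

```python
# Toy labeled graph: nodes are components, edges are communication channels.
# All names and labels are hypothetical examples.
components = {
    "C1": {"group": "sensors"},
    "C2": {"group": "controllers"},
    "C3": {"group": "actuators"},
}
channels = [
    ("C1", "C2", {"label": "telemetry", "type": "async"}),
    ("C2", "C3", {"label": "commands", "type": "sync"}),
]

def neighbors(component, chans):
    """Components reachable from `component` over any channel (either direction)."""
    return sorted({b for a, b, _ in chans if a == component} |
                  {a for a, b, _ in chans if b == component})
```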
The most common ASR feature extraction method applies auditory-inspired Mel-filter banks to the squared magnitude of the short-time Fourier transform of the windowed speech signal. Filter parameters are inspired by knowledge of human speech perception. Filter bank outputs are then used to train an acoustic model (AM). These perceptually motivated filter-bank features are not always the best choice in statistical modeling frameworks such as ASR acoustic modeling, where the end goal is phone classification. This motivates data-driven schemes for joint feature learning from audio signals and acoustic modeling. Audio signals can be represented in both the time and frequency domains. Applying the Discrete Fourier Transform (DFT) to time-domain waveforms can normalize the more variant waveform signals (due to phase shifts and temporal distortions) and generate an equivalent feature representation (complex DFT features), making it easier to train acoustic models such as neural-network-based models.
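The Mel-filter-bank step described above can be sketched as follows (a stdlib-only illustration with our own helper names; the parameter choices are arbitrary, and production systems would use an optimized DSP library):

```python
import math

# Sketch of Mel-filter-bank construction: Hz <-> Mel conversion and triangular
# filters whose centers are equally spaced on the Mel scale.

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    """Triangular filters over a one-sided power spectrum of n_fft//2 + 1 bins."""
    lo, hi = hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0)
    mel_points = [lo + i * (hi - lo) / (n_filters + 1) for i in range(n_filters + 2)]
    bins = [int((n_fft + 1) * mel_to_hz(m) / sample_rate) for m in mel_points]
    banks = []
    for j in range(1, n_filters + 1):
        fb = [0.0] * (n_fft // 2 + 1)
        for k in range(bins[j - 1], bins[j]):          # rising slope
            fb[k] = (k - bins[j - 1]) / max(1, bins[j] - bins[j - 1])
        for k in range(bins[j], bins[j + 1]):          # falling slope
            fb[k] = (bins[j + 1] - k) / max(1, bins[j + 1] - bins[j])
        banks.append(fb)
    return banks

banks = mel_filterbank(n_filters=10, n_fft=512, sample_rate=16000)
```

Each filter would then be dotted with the squared-magnitude spectrum of a windowed frame to give one filter-bank energy.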
MRO knowledge representation is a description of MRO knowledge and the first step of MRO knowledge management. At present, the main methods of knowledge representation can be classified into three categories: the first is unstructured knowledge representation; the second is structured knowledge representation; the third is knowledge representation based on the semantic Web. The emergence and development of semantic Web technology will greatly promote the innovation and development of resource knowledge management technology. At present, the most popular definition of ontology is the one given by T. R. Gruber: “Ontology is an explicit specification of the conceptual model.” Gruber also put forward five rules of ontology modeling: clarity and objectivity, consistency, completeness, maximum monotonic scalability, and minimal commitment. In addition, universities and research institutions have developed a number of ontology construction tools, including OntoEdit, KAON, WebODE, and Protégé; among them, the Protégé tool developed by Stanford University is the most powerful and widely used. Ontology is a conceptual model for the formal description of resources at the semantic and knowledge level; as the core of the semantic Web, it has become an important means of XBOM semantic modeling for MRO services.
Recently, several large pre-trained language models, built on the Transformer architecture, like BERT (Devlin et al., 2018), or on bidirectional LSTMs with attention, such as ELMo (Peters et al., 2018), have achieved state-of-the-art results across a variety of NLP tasks. We opted not to experiment with any of these pre-trained language models for our task. The LSTM architecture of our LMs is far simpler, which facilitates testing the contribution of explicit feature representation to correlation in the acceptability prediction task, and to perplexity in the language modeling task.
Owing to the algorithm developed for prefixing the names of inherited components, which also handles virtual inheritance modes, a designer is unconstrained in naming components, without any concerns about ambiguity related to the uniqueness of names. At the same time, multiple inheritance is permissible, since it causes no ambiguity or limitations as to the names of components or inheritance structures. The only limitation structurally imposed upon inheritance is the absence of cycles, which is detected and controlled by an association-oriented database using internal algorithms ensuring coherence detection as well as correctness of structures and data. The inheritance mechanism functioning in the AODB Metamodel is of major importance for modeling productivity, and its complete integration with other mechanisms exerts a very significant influence on the model’s power of expression. The proposed solution could benefit the modeling of ontologies and other knowledge representation systems, as shown in the case studies of OCM SKB and ESNM SKB.
Abstract—When appearance variation, partial occlusion, or illumination changes occur in object images, most existing tracking approaches fail to track the target effectively. To deal with this problem, this paper proposes a robust visual tracking method based on appearance modeling and sparse representation. The proposed method exploits two-dimensional principal component analysis (2DPCA) together with sparse representation theory to construct the appearance model. Tracking is then achieved within a Bayesian inference framework, in which a particle filter is applied to evaluate the target state sequentially over time. In addition, to make the observation model more robust, an incremental learning algorithm is used to update the template set. Both qualitative and quantitative evaluations on four publicly available benchmark video sequences demonstrate that the proposed visual tracking algorithm performs better than several state-of-the-art algorithms.
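The particle-filter stage of such a tracker can be sketched generically (a 1-D toy with a Gaussian observation likelihood; this is not the paper's 2DPCA-based observation model, and all parameters are illustrative):

```python
import math
import random

# One propagate-weight-resample step of a generic particle filter.
# State is a single scalar here; a tracker would use a full pose/appearance state.

def particle_filter_step(particles, observation, motion_std=0.5, obs_std=1.0, seed=0):
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    # 1) Propagate each particle with random-walk dynamics.
    moved = [p + rng.gauss(0.0, motion_std) for p in particles]
    # 2) Weight by a Gaussian observation likelihood around the measurement.
    weights = [math.exp(-0.5 * ((p - observation) / obs_std) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3) Resample particles in proportion to their weights.
    resampled = rng.choices(moved, weights=weights, k=len(moved))
    estimate = sum(resampled) / len(resampled)
    return resampled, estimate

particles = [i * 0.1 for i in range(100)]  # particles spread over [0, 9.9]
particles, estimate = particle_filter_step(particles, observation=5.0)
```

After one step the particle cloud concentrates near the observation, and the estimate is the resampled mean.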
A markup language is a tool for adding information to documents. Semantic markup is expected to have universal expressive power as well as syntactic and semantic interoperability. XML (eXtensible Markup Language) does not provide semantic interoperability. Using XML as a base, a number of new markup languages have been developed to meet semantic interoperability requirements; these include RDF (Resource Description Framework), RDFS, SHOE, DAML+OIL, and, recently, OWL (Daconta Michael, 2003). Earlier work on domain knowledge representation using software engineering modeling techniques concentrated on mapping software engineering models to RDF, RDFS, and other markup languages (Cranefield, 2001). At present, researchers are trying to map domain models to the OWL representation (Djuric, Gasevic & Devedzic, 2004).
In this study, we represented the meaning of both adjectives and nouns in terms of their co-occurrences with 5 sensory verbs. While this type of representation might be justified for concrete nouns (hypothesizing that their neural representations are largely grounded in sensory-motor features), it might be that a different representation is needed for adjectives. Further research is needed to investigate alternative representations for both nouns and adjectives. Moreover, the composition models that we presented here are overly simplistic in a number of ways. We look forward to future research to extend the intermediate representation and to experiment with different modeling methodologies. An alternative approach is to model the semantic representation as a hidden variable using a generative probabilistic model that describes how neural activity is generated from some latent semantic
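The simple composition models alluded to above can be illustrated as follows (the 5-dimensional sensory-verb count vectors below are invented, not data from the study): additive composition sums the two word vectors, multiplicative composition takes their elementwise product.

```python
# Hypothetical co-occurrence counts with 5 sensory verbs
# (see, hear, taste, smell, touch); values are made up for illustration.
vec = {
    "red":   [10, 0, 1, 0, 2],
    "apple": [6, 0, 8, 5, 4],
}

def additive(u, v):
    """Additive composition: elementwise sum of the two vectors."""
    return [a + b for a, b in zip(u, v)]

def multiplicative(u, v):
    """Multiplicative composition: elementwise product of the two vectors."""
    return [a * b for a, b in zip(u, v)]

red_apple_add = additive(vec["red"], vec["apple"])
red_apple_mul = multiplicative(vec["red"], vec["apple"])
```

Note that multiplicative composition zeroes out any dimension where either word has a zero count, which is one of the ways such models are overly simplistic.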
Current research on business process redesign has focused on the use of process representations that are well aligned with a production-oriented process view. According to Kock and Murphey (2001), not enough attention is given to analyzing processes based on information flow. More and more companies depend on information and knowledge for their core processes. Therefore, knowing which process modeling representation fits knowledge-intensive organizations is essential for them to achieve perceived business process redesign success. This research focuses on the perceived success of business process redesign projects based on representational approaches.
In [Marketos et al. 2008], data originating from GPS sensors were loaded into a trajectory warehouse using a set of ETL (Extract, Transform and Load) procedures. In addition, mechanisms for performing analytical queries with spatio-temporal aggregations over trajectory data are detailed in [Leonardi et al. 2009]. Moreover, based on the trajectory conceptual model proposed in [Spaccapietra et al. 2008], a conceptual model schema for a TDW aimed at supporting the monitoring of athletes’ biochemical data is discussed in [Porto et al. 2010]. However, these biochemical data are not organized into dimensional data structures, through the use of fact and dimension tables and the representation of different levels of aggregation. In fact, traditional data warehouse techniques and current analytical tools do not satisfy the requirements of TDW applications, because the representation and aggregation of trajectories imply the need for modeling and aggregating varied and interrelated data types, such as geometries, context information, and spatiotemporal objects. Consequently, there is still no consensus about how trajectories should be modeled multidimensionally, organized in different levels of aggregation, and used for answering aggregation queries in OLAP systems.
As mentioned before, the MI activities are designed to follow a model building cycle, which helps guide students in the construction of each model. Brewe outlined this process as: 1) introduction and representation of a phenomenon, 2) coordination of representations, 3) application of the model to a variety of contexts, 4) abstraction and generalization of the model, and 5) refinement of the model. The first two steps of this process are explicitly focused on helping students figure out how to represent a particular aspect of a phenomenon and how to coordinate the new representations with those that they already know. Within the curriculum materials, this cycle repeats roughly every two weeks and is used as a means to expand and incorporate new concepts into the model that students are constructing. At the end of the class period, students are also given a homework assignment that asks them to “make a model” of a particular phenomenon relevant to what was covered in class. As this process is rather involved and requires creating multiple representations, these assignments focus on one “make a model” question rather than assigning many short problems that ask students to solve for a particular quantity. Thus, students receive instruction on building, using, and coordinating representations in physics, which is returned to every time the cycle repeats in class. Furthermore, students have multiple opportunities to practice creating models (and thus using and coordinating representations) both through the in-class activities and the assigned homework.
Table 3 shows the link prediction performance of the baseline, existing, and proposed mh path learning models. The RNNs – trained to generate relation paths according to the method proposed in – are used as baseline models and are referred to as PathG-RNNs. We also compare with PTransE, which is a variant of TransE for integrating relation paths into knowledge representation learning. The results reveal that: (1) mh-RGAN performs significantly better than the baseline and existing models; (2) relation paths provide a very useful supplement for representation learning of KBs, and they are effectively modeled by mh-RGAN.
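For context, the translation-based scoring idea that PTransE inherits from TransE can be stated in a few lines (the tiny embeddings below are hand-made for illustration; real models learn them by gradient descent):

```python
# TransE-style scoring: a triple (head, relation, tail) is plausible when
# head + relation ≈ tail in embedding space. Score is the negative L1 distance,
# so higher means more plausible. Embeddings here are toy values.

def transe_score(h, r, t):
    """Negative L1 distance ||h + r - t||_1."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

head = [0.2, 0.1]
rel = [0.3, 0.4]
tail_good = [0.5, 0.5]   # exactly head + rel, so the triple scores ~0
tail_bad = [0.9, -0.2]

score_good = transe_score(head, rel, tail_good)
score_bad = transe_score(head, rel, tail_bad)
```

PTransE extends this by scoring multi-hop relation paths (compositions of relation vectors) rather than single relations only.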
Data fusion can be achieved through multimodal representation learning. The general idea is to learn a shared knowledge representation and construct the semantic correspondence between modalities through a shared representation. An intuitive approach is to apply matrix factorization to the case of multiple modalities, where a single objective function can be developed to optimize over each data modality. A link between data modalities is constructed through a newly learned latent space that represents latent semantics. This achieves feature-level data fusion, which provides a more systematic framework than decision-stage data integration, since the latter involves tuning the weights of decisions from different modalities. Moreover, various regularization terms (see Section 2.3.2) can be added directly to the objective function to achieve the respective learning goals. Existing online update rules and visualization techniques for NMF can also be easily used for developing interactive frameworks [104, 62] (see Section 2.3.2 and Chapter 5). We have developed a data fusion algorithm based on matrix factorization to integrate multimodal data at an early stage (feature-level fusion) for our data (see Section 4.4). Besides the NMF-based learning framework, other representation learning approaches could also be extended for data fusion, and these can also incorporate sparsity constraints [162, 163].
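The shared-latent-space idea can be sketched with a toy joint factorization (sizes, data, and helper names are ours, not the implementation described in Section 4.4): two modality matrices X1 and X2 are factorized as X1 ≈ W·H1 and X2 ≈ W·H2 with a common factor W, so the shared latent space links the modalities at the feature level, using Lee–Seung-style multiplicative updates.

```python
# Toy joint NMF with a shared factor W. Pure-stdlib matrix helpers;
# the data is rank-1 by construction so the factorization can fit it exactly.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale_update(M, num, den, eps=1e-9):
    """Multiplicative update M <- M * num / den, elementwise."""
    return [[M[i][j] * num[i][j] / (den[i][j] + eps)
             for j in range(len(M[0]))] for i in range(len(M))]

def err(X, W, H):
    R = matmul(W, H)
    return sum((x - r) ** 2 for xr, rr in zip(X, R) for x, r in zip(xr, rr))

# Two modalities sharing the same rank-1 latent direction [1, 2, 3].
X1 = [[1, 2], [2, 4], [3, 6]]
X2 = [[2, 1], [4, 2], [6, 3]]
W = [[1.0], [1.0], [1.0]]          # shared factor
H1, H2 = [[1.0, 1.0]], [[1.0, 1.0]]

initial = err(X1, W, H1) + err(X2, W, H2)
for _ in range(20):
    Wt = transpose(W)
    H1 = scale_update(H1, matmul(Wt, X1), matmul(matmul(Wt, W), H1))
    H2 = scale_update(H2, matmul(Wt, X2), matmul(matmul(Wt, W), H2))
    # W is updated against BOTH modalities, which is what couples them.
    num = madd(matmul(X1, transpose(H1)), matmul(X2, transpose(H2)))
    den = madd(matmul(matmul(W, H1), transpose(H1)),
               matmul(matmul(W, H2), transpose(H2)))
    W = scale_update(W, num, den)
final = err(X1, W, H1) + err(X2, W, H2)
```

The rows of W then act as the fused, feature-level representation shared across modalities.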
The National Aeronautics and Space Administration [NASA-STD-8719.14. (2012)] and European Space Agency [ESA (2008)] guidelines require that any RSO re-entering the atmosphere (1) fully demise in the atmosphere, or (2) impact with an energy of no more than 15 Joules and an associated casualty risk of less than 1 in 10000. Complying with these guidelines requires an end-of-life analysis for all future planned missions. The impact location of an object re-entering the atmosphere is affected by uncertainties in initial conditions, atmospheric characteristics, and object properties, as well as by break-up/fragmentation events. Therefore, it is important to have an accurate estimate of not just the deterministic impact location but also the statistical distribution due to the uncertainties involved. Existing re-entry modeling tools used by space agencies are either deterministic, proprietary, non-open-source, and/or not freely available to the research community [Rochelle et al. (1999), Koppenwallner et al. (2005), Martin et al. (2005)].
Abstract. Human water use has significantly increased during the recent past. Water withdrawals from surface and groundwater sources have altered terrestrial discharge and storage, with large variability in time and space. These withdrawals are driven by sectoral demands for water, but are commonly subject to supply constraints, which determine water allocation. Water supply and allocation, therefore, should be considered together with water demand and appropriately included in Earth system models to address various large-scale effects with or without considering possible climate interactions. In a companion paper, we review the modeling of demand in large-scale models. Here, we review the algorithms developed to represent the elements of water supply and allocation in land surface and global hydrologic models. We note that some potentially important online implications, such as the effects of large reservoirs on land–atmospheric feedbacks, have not yet been fully investigated. Regarding offline implications, we find that there are important elements, such as groundwater availability and withdrawals, and the representation of large reservoirs, which should be improved. We identify major sources of uncertainty in current simulations due to limitations in data support, water allocation algorithms, host large-scale models as well as propagation of various biases across the integrated modeling system. Considering these findings with those highlighted in our companion paper, we note that advancements in computation and coupling techniques as well as improvements in natural and anthropogenic process representation and parameterization in host large-scale models, in conjunction with remote sensing and data assimilation can facilitate inclusion of water resource management at larger scales. Nonetheless, various modeling options should
The above comparisons and analyses reveal that incorporating the inundation scheme into hydrologic modeling has prominent impacts on the simulated surface hydrology in the Amazon Basin and significantly improves both the streamflow and the river-stage hydrographs, especially at reaches whose upstream area involves large floodplains. This result suggests that floodplain inundation is an important component of the surface water dynamics in the Amazon Basin and should be represented in hydrologic modeling for this basin. Some previous studies also examined and reported the impacts of representing the floodplain inundation on the modeled surface hydrology in the Amazon Basin (Getirana et al., 2012; Paiva et al., 2013a; Yamazaki et al., 2011). Yamazaki et al. (2011) showed the impacts of floodplain inundation on the streamflow, water depths and flow velocities at the Óbidos gauge (in their Fig. 5) and the main stem water surface profile (in their Fig. 7). Getirana et al. (2012) demonstrated the effects of floodplain inundation on streamflow of a few main stem gauges (in their Fig. 16). When investigating the impacts of floodplain inundation on surface hydrology, these two studies used the kinematic wave river routing method that could not represent the important backwater effects in the Amazon Basin, while we used the diffusion wave river routing method that captured backwater effects. Backwater effects were also represented in the dynamic wave river routing method used by Paiva et al. (2013a) when they studied the impacts of floodplain inundation on streamflow of a few major tributary or main stem gauges including Óbidos and Manacapuru (in their Table 2 and Fig. 14). Besides streamflow, in this study, we also examined and revealed the prominent impacts of floodplain inundation on the river stages near 11 major gauges or along the main stem.
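The distinction between the two routing schemes can be made precise at the level of the momentum balance of the Saint-Venant equations (a standard textbook simplification; the symbols follow common usage rather than any particular cited paper's notation). The kinematic wave approximation assumes the friction slope equals the bed slope,

\[ S_f = S_0 \quad \text{(kinematic wave)}, \]

while the diffusion wave approximation retains the water-surface-gradient term,

\[ S_f = S_0 - \frac{\partial h}{\partial x} \quad \text{(diffusion wave)}, \]

where \(S_f\) is the friction slope, \(S_0\) the bed slope, \(h\) the flow depth, and \(x\) the along-channel coordinate. Retaining \(\partial h / \partial x\) is what allows downstream water levels to influence upstream flow, i.e., the backwater effects discussed above.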
In the unsupervised regime, we leverage the LSTM sequence-to-sequence learning framework of (Sutskever et al. 2014), which uses one LSTM to encode the input sequence into a vector and another LSTM to generate the output sequence from that vector. In some variations of our models, we use the same sequence as both input and output, making this a sequence-to-sequence LSTM autoencoder (Li et al. 2015a). We consider several types of substructures including random walks, shortest paths, and breadth-first search, with various levels of node neighborhood granularity to prepare the input to our models. An unsupervised graph representation approach can be used not only in processing labeled data, such as in graph classification in bioinformatics, but also in many practical applications, such as anomaly detection in social networks or streaming data, as well as in exploratory analysis and scientific hypothesis generation. An unsupervised method for learning graph representations provides a fundamental capability to analyze graphs based on their intrinsic properties. Figure 1 shows the result of using our unsupervised approach to embed the graphs in the Proteins dataset (Borgwardt et al. 2005) into a 100-dimensional space, visualized with t-SNE (Maaten and Hinton 2008). Each point is one of the 1113 graphs in the dataset. The two class labels were not used when learning the graph representations, but we still generally find that graphs of the same class are clustered in the learned space.
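The random-walk substructures used as model input can be prepared as plain node-token sequences (a minimal sketch on a toy graph; the graph, walk length, and walk count are invented, and a real pipeline would also map node tokens to vocabulary indices):

```python
import random

# Sample fixed-length random walks from an adjacency-set graph; each walk is a
# token sequence that could feed a seq2seq LSTM autoencoder.

def random_walks(adj, walk_len, n_walks, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    nodes = sorted(adj)
    walks = []
    for _ in range(n_walks):
        node = rng.choice(nodes)
        walk = [node]
        for _ in range(walk_len - 1):
            node = rng.choice(sorted(adj[node]))  # step to a uniform random neighbor
            walk.append(node)
        walks.append(walk)
    return walks

# Toy triangle graph.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
walks = random_walks(adj, walk_len=5, n_walks=4)
```

In the autoencoder variant, each walk serves as both the input and the reconstruction target.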
In the case of body pose estimation, a significant drop in accuracy has been observed when a detector is trained on one dataset and evaluated on a different one. The reason is that in some cases there are not enough samples in the training set. As the detector gives more priority to the positive samples of the training set, the chance of detecting uncommon (w.r.t. the positive samples) body poses decreases. A possible solution is to obtain the body pose ground truth for the new dataset and re-train the classifier. However, this procedure is very expensive and introduces a considerable delay every time a new dataset has to be analyzed. Instead of training another classifier on the new dataset, we propose to use the already trained classifier, but provide some additional information from the new dataset. Specifically, we used the classifier trained on the Buffy dataset , using the approach from . Then we boost the classifier by exploiting the information of low-level cues from the ADL dataset. These low-level cues (i.e. optical flow and foreground pixels) can be easily extracted from a stationary webcam, as in our case. To evaluate our contribution to pose estimation in a new dataset, we annotated the upper body poses for 371 frames obtained from different clips of the ADL dataset. Then, we create our descriptors by combining our enhanced body-pose estimator with the local motion (i.e., optical flow) that is already extracted for enhancing the pose estimator. Finally, we apply a Fisher Kernel representation to our descriptors to model the temporal variation in video, and apply a popular non-linear SVM classifier (SVM with RBF kernel) to our descriptors to classify the activities. The details of our approach are provided in the following sections.
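The RBF kernel underlying the non-linear SVM can be stated in a few lines (the descriptor values and the gamma setting below are invented for illustration; in practice gamma is tuned by cross-validation):

```python
import math

# RBF (Gaussian) kernel between two descriptor vectors:
# k(x, y) = exp(-gamma * ||x - y||^2). Identical descriptors score 1.0,
# and similarity decays toward 0 with squared Euclidean distance.

def rbf_kernel(x, y, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Hypothetical activity descriptors (e.g., pose + optical-flow features).
desc_a = [0.1, 0.4, 0.2]
desc_b = [0.1, 0.4, 0.2]   # identical to desc_a
desc_c = [0.9, 0.0, 0.7]   # distant from desc_a

same = rbf_kernel(desc_a, desc_b)
far = rbf_kernel(desc_a, desc_c)
```

An SVM with this kernel classifies by a weighted sum of such similarities to its support vectors.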