In order to solve the detection problem (3) with unknown covariance matrices, a critical requirement for detectors based on the GLRT is that the sample covariance matrices be non-singular. To this end, we must ensure that the number of available observations, M, is not smaller than LN (i.e., M ≥ LN). In quick spectrum sensing, however, obtaining more than LN samples is difficult in practice. Hence, the motivation of the remaining discussion is to bring robustness against this small sample support. Note that in (9) we assume no prior knowledge about the spatio-temporal structure of the covariance matrix except that it is positive definite. One way to achieve robustness against small sample support is to exploit a priori known patterns/structures in the large spatio-temporal covariance matrix.
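The role of the M ≥ LN condition can be illustrated with a short numerical sketch (the dimensions and the use of NumPy are illustrative assumptions, not from the text): the sample covariance of LN-dimensional snapshots cannot be full rank, and hence is singular, when fewer than LN snapshots are available.

```python
import numpy as np

# Illustrative sketch (hypothetical sizes): the sample covariance of
# LN-dimensional observations is singular whenever the number of
# snapshots M is smaller than the dimension LN.
rng = np.random.default_rng(0)
L, N = 4, 8          # hypothetical space/time dimensions
dim = L * N          # stacked spatio-temporal dimension LN = 32

def sample_covariance(M):
    """Sample covariance of M zero-mean Gaussian snapshots of size LN."""
    X = rng.standard_normal((dim, M))
    return (X @ X.T) / M

R_small = sample_covariance(M=16)   # M < LN -> rank deficient
R_large = sample_covariance(M=64)   # M >= LN -> full rank a.s.

print(np.linalg.matrix_rank(R_small))  # at most M = 16
print(np.linalg.matrix_rank(R_large))  # 32 (full rank)
```

The rank of the first matrix is capped by the number of snapshots, which is why GLRT detectors that must invert it fail for M < LN.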
Our approach scales well with respect to the displayed timespan. With the multi-level data abstraction provided by the different views, even data from over a year can be investigated. However, our current implementation requires a pre-processing stage that calculates correlations and projections for a fixed timespan. For the application to streaming data, we will need to adapt our models to update dynamically as new relevant data becomes available. One example of such an adaptation could be extending the correlation matrix to support dynamic updates, or changing it to an approximate model for relational data. We further need to extend the reasoning process once an expert has found a possible relation between event types. As our domain experts suggested during the feedback session, one possibility to improve this process is to incorporate more details about the events, such as the products being processed at the time of the error report. This could lead to new insights, such as error type relations that vary depending on products or other factors.
The proposed methodology is applied to pipe sample #5. The pipe was excavated from the west of Melbourne, Australia, from a complex of grey-brown sandy soils and leached sands with medium acidity. The methodology is applied to estimate the residual strength failure probability of the pipe over time. The results are also compared between the condition where correlation exists over a large distance, representing a uniform reduction of cross-section, and the condition where the time-dependent correlation model is applied. This examines the sensitivity of the failure probability of the corroding pipe, as represented by the proposed methodology, to the correlation length model. The correlation coefficient between two points in time, ρ, is assumed to be 0.5. It has been shown elsewhere (Mahmoodian and Li, 2016b) that the sensitivity of the failure probability to this parameter is negligible. The value of l, the coefficient of variation of the zero-mean normal random variable, is also estimated by simulating the resistance limit-state function R(t) through MCS and examining the coefficient of variation of lognormal distributions fitted at different points in time, as suggested in (Li et al., 2016). In this research, the copula type has been assumed to be Gaussian, since it has no tail dependence and requires only the correlation matrix as input. Of course, each copula has its own dependence characteristics and must be chosen with care in practical use. This is, however, outside the scope of this research.
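The Gaussian copula's key practical property, that only the correlation matrix is needed, can be sketched as follows (marginal parameters, sample sizes and the SciPy usage are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy import stats

# Illustrative sketch: drawing correlated lognormal resistances at two
# points in time via a Gaussian copula, which needs only the correlation
# matrix between the time points. All parameter values are hypothetical.
rng = np.random.default_rng(1)
rho = 0.5                                 # correlation between time points
corr = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: correlated standard normals (the Gaussian copula core).
z = rng.multivariate_normal(mean=[0, 0], cov=corr, size=100_000)
u = stats.norm.cdf(z)                     # uniform marginals on [0, 1]

# Step 2: map uniforms through the lognormal marginal inverse CDFs.
R = stats.lognorm.ppf(u, s=0.3, scale=100.0)  # hypothetical marginal

print(R.shape)  # (100000, 2)
```

The resulting pairs have the chosen lognormal marginals while their dependence is controlled entirely by ρ, which is why the copula choice can be decoupled from the marginal fitting described above.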
those attacks. One is that the data is modified during the transmission process, which can mostly be detected by encryption technology. The other is that the data is tampered with before aggregation, and this kind of attack cannot be detected or blocked effectively by encoding. Therefore, data aggregation algorithms have been proposed to solve this problem by validating the perceived data before it enters the aggregate function. However, once an attack is detected, traditional methods directly discard the data collected by the monitoring nodes. This processing mode can waste a great deal of resources and reduce the energy utilization of the network. To solve this problem, a variety of simple restoration and aggregation methods have been proposed that use the samples that were not attacked, such as the truncation method and the shear mechanism, which can improve the energy utilization of the network. But those methods have some limitations. In this paper, we focus on the spatio-temporal correlation of the perceptual data in cluster-based WSNs. In particular, we use the centroid distance and similarity to measure the attack degree of each cluster node's perceived data and
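The centroid-distance idea can be sketched briefly (this is not the paper's exact algorithm; the data, cluster size and normalization are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch: scoring each sensor reading in a cluster by its
# distance to the cluster centroid, so readings far from the centroid
# are flagged as likely tampered.
rng = np.random.default_rng(2)

# Hypothetical cluster of 8 nodes, each reporting 3 sensed values.
readings = rng.normal(loc=25.0, scale=0.5, size=(8, 3))
readings[3] += 10.0                      # one tampered node

centroid = readings.mean(axis=0)
dist = np.linalg.norm(readings - centroid, axis=1)

# A simple attack-degree score: distance normalized by the median distance.
attack_degree = dist / np.median(dist)
suspect = int(np.argmax(attack_degree))
print(suspect)  # 3
```

A validated reading with a low score would pass into the aggregate function, while high-score readings would be repaired or down-weighted instead of discarding the whole cluster's data.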
bution in a wide range of scales. At large scales, this feature manifests itself in the concentration of seismicity close to the nodal plane of the future rupture. The laboratory experiments of Mogi and Scholz (Mogi, 1968; Scholz, 1968) should be mentioned in this connection. This pattern has also been observed in the laboratory experiments of Sobolev and Ponomarev (1999) and even in field studies of seismicity behavior before some strong earthquakes of the Kamchatka region (Zavialov and Nikitin, 1999). At small scales, this phenomenon manifests itself in the clustering of seismicity before the mainshock (Zavialov and Nikitin, 1999) and in the formation of spatio-temporal clusters of acoustic emission in laboratory experiments (Sobolev and Ponomarev, 1999). A suitable method for describing the inhomogeneity of the spatial distribution of seismicity is based on the fractal approach. The results of the corresponding investigations show that the spatial distribution of seismicity exhibits statistically self-similar properties over a wide range of scales. Therefore, it can be treated as fractal or multifractal. Several papers address this topic (Sadovsky et al., 1984; Okubo and Aki, 1987; Geilikman et al., 1990; Hirata and Imoto, 1991; Turcotte, 1997; Wang and Lee, 1996; Lapenna et al., 2000). However, only a few papers study the dynamics of the fractal properties of seismicity before strong earthquakes. A gradual decrease of the correlation exponent of the spatial distribution of acoustic shocks was observed by Hirata et al. (1987) during the destruction of a granite sample. The behavior of the fractal dimension (calculated by the box-counting method) of the continuum of earthquake epicenters was studied by Uritsky and Troyan (1998). The authors used the materials of the world-wide seismicity data center. The fractal dimension was calculated for the two-year periods before and after 23 strong earthquakes.
In 16 cases the pre-earthquake period was characterized by a lower fractal dimension.
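The box-counting estimate mentioned above can be sketched on synthetic data (the point set, box sizes and unit study region are illustrative assumptions): the dimension is the slope of log N(ε) versus log(1/ε), where N(ε) is the number of occupied boxes of side ε.

```python
import numpy as np

# Illustrative sketch: box-counting fractal dimension of a synthetic
# "epicenter" set in the unit square.
rng = np.random.default_rng(3)
points = rng.random((20_000, 2))          # synthetic epicenters

def box_count_dimension(pts, sizes):
    """Fit the box-counting dimension from counts of occupied boxes."""
    counts = []
    for eps in sizes:
        boxes = np.unique(np.floor(pts / eps).astype(int), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

sizes = np.array([1/4, 1/8, 1/16, 1/32])
d = box_count_dimension(points, sizes)
print(round(d, 2))  # close to 2.0 for a uniformly filled plane
```

A uniformly space-filling set yields a dimension near 2, whereas clustered pre-mainshock seismicity would yield a lower value, which is the diagnostic behavior reported above.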
The basic principle of weighting function-based methods is to calculate the reflectance of the center fusion pixel through a weighting function that takes full account of the spectral, temporal and spatial information in similar pixels. Such methods have been used widely. Gao et al. proposed the spatial and temporal adaptive reflectance fusion model (STARFM), which comprehensively considers the spectral difference between MODIS and Landsat ETM+ data, the temporal difference between MODIS data at the same pixel location, and the distance between the center pixel and similar pixels. Thus, different weights are applied to different pixels to predict the reflectance of the center pixel. Hilker et al. proposed the spatial temporal adaptive algorithm for mapping reflectance change (STAARCH) to solve the problem of rapid land cover change that is not resolved by STARFM. Tasseled cap transform results were introduced to calculate the change sequence, which can increase the prediction accuracy effectively. To deal with low accuracy in heterogeneous regions, Zhu et al. proposed the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM). The hypothesis made was that there is a linear relationship between the changes in the MODIS and Landsat reflectances during a given period. A conversion coefficient was introduced to express this relationship quantitatively, which ensures more accurate prediction of the reflectance of small and linear targets. Wang and Atkinson proposed the Fit-FC model, which realizes spatio-temporal fusion in three steps: regression model fitting (RM fitting), spatial filtering (SF) and residual compensation (RC). The accuracy of the algorithm was found to be greater than that of all the comparator methods, and the model can be implemented with only one pair of coarse-fine images. Weighting function-based methods can also be applied to predict land surface temperature with both fine spatial and temporal resolution.
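The common weighting principle can be sketched in a few lines (this is not STARFM itself; the weight formula, pixel counts and value ranges are illustrative assumptions): pixels with larger spectral difference, larger temporal difference, or larger distance to the center pixel contribute less.

```python
import numpy as np

# Illustrative sketch of the weighting-function idea: similar pixels
# contribute to the center prediction with weights that combine
# spectral difference, temporal difference and distance.
rng = np.random.default_rng(4)

n = 5                                       # hypothetical similar pixels
spectral_diff = rng.uniform(0.01, 0.1, n)   # |fine - coarse| per pixel
temporal_diff = rng.uniform(0.01, 0.1, n)   # |coarse(t2) - coarse(t1)|
distance = rng.uniform(1.0, 10.0, n)        # distance to center pixel
values = rng.uniform(0.2, 0.4, n)           # candidate reflectances

# Larger combined difference/distance -> smaller weight.
combined = spectral_diff * temporal_diff * (1.0 + distance / 10.0)
weights = (1.0 / combined) / np.sum(1.0 / combined)

center_reflectance = float(np.sum(weights * values))
print(0.2 <= center_reflectance <= 0.4)  # prediction stays in data range
```

The methods surveyed above differ mainly in how this combined weight is defined and in what correction terms (change sequences, conversion coefficients, residual compensation) are layered on top.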
 has introduced spatio-temporal sequential pattern mining for point-based event data sets. , ,  are frequency-based approaches, whereas STS-Miner, proposed in , is a density-based approach to identify statistically significant spatio-temporal sequential patterns. In this approach, a spatio-temporal sequential pattern A→B says that events of type B are densely clustered around one or more events of type A. STS-Miner allows mining patterns based on one of the temporal predicates: follow. The algorithm mines patterns based on pre-defined thresholds that restrain the spatio-temporal neighborhood of an event. The concept of density ratio provides the significance value of sequential patterns. A significance measure called the sequence index ensures that long sequences are associated with minimal strength. The sequence index of a pattern signifies the cause-effect relationship between different event types. The STS-Miner approach defines the spatio-temporal neighborhood on events. It handles one event type at a time and does not consider the interaction between event types while generating the patterns. Another algorithm, Slicing STS-Miner, works with databases that do not fit entirely in memory.
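The density-ratio idea behind A→B can be sketched as follows (this is not the exact STS-Miner formulation; the event sets, neighborhood radius and area approximation are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch: a density ratio for a pattern A -> B, comparing
# the density of B events inside the spatio-temporal neighborhood of A
# events with the overall density of B events in the study region.
rng = np.random.default_rng(5)

a_events = rng.random((20, 2))                  # hypothetical A locations
b_events = rng.random((500, 2))                 # hypothetical B locations
radius = 0.1                                    # neighborhood threshold
area_total = 1.0                                # unit study region

# B events falling within the radius of at least one A event.
d = np.linalg.norm(b_events[:, None, :] - a_events[None, :, :], axis=2)
in_neigh = np.any(d <= radius, axis=1)

# Approximate neighborhood area (overlaps ignored, for illustration).
area_neigh = min(area_total, len(a_events) * np.pi * radius**2)

density_ratio = (in_neigh.sum() / area_neigh) / (len(b_events) / area_total)
print(density_ratio > 0.0)
```

A ratio well above 1 indicates that B events are densely clustered around A events, which is the condition the significance measure is built on.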
I deal with perception and action (e.g. movements) using results from Synergetics, a comprehensive mathematical theory of the self-organized formation ("emergence") of spatial, temporal, or functional structures in complex systems. I illustrate basic concepts such as order parameters (ops), enslavement, complexity reduction, and circular causality, first by examples of well-known collective, spontaneous modes of human behavior such as rhythmic clapping of hands, and then by face recognition. The role played by ops depends on context. In the case of face (or pattern) recognition, an op represents the concept of an individual face (action of mind) and it enslaves the action (firing rates) of neurons (body). This insight allows me to interpret syndromes as order parameters playing their mind/body double role. I present criteria for the identification of ops and discuss their general properties, including error correction and "remedy" of deficiencies. Contact is made with a recent paper by Sabine Koch on embodied aesthetics. My approach includes the saturation of attention at various time scales (ambiguous figures/fashion). Adopting a psychological perspective, I discuss some ingredients of beauty, such as proportionality and symmetry, but also the importance of irregularities.
The cells in the matrix display may include two types of images: feature images and index images. Feature images represent the objects to which the SOM tool has been applied, i.e. spatial situations in a space-in-time SOM and local temporal variations in a time-in-space SOM. Spatial situations are represented by maps (Fig. 1) and local temporal variations by diagrams called 'temporal mosaics' (Fig. 2). A map image portrays the attribute values attained in all places in one time unit. A temporal mosaic portrays the attribute values attained in one place in all time units. Each pixel corresponds to one time unit; the pixels are arranged in rows of user-specified length. Values of space- and time-dependent numeric attributes are represented by colour coding. In all illustrations given in this paper, one of the scales from ColorBrewer (Harrower and Brewer 2003) is used. The colour scale and the division of the attribute value range into intervals can be explicitly specified by the user. If this is not done, the system automatically divides the value range into a default number of equal intervals and chooses a diverging colour scale with the midpoint corresponding to the central interval. Diverging colour scales are used to increase the visual salience of the feature images. The feature images are not meant to support accurate decoding of the attribute values (which can be done better using other visualization techniques available in the system) but rather a comparative overall assessment.
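The default binning behaviour described above can be sketched numerically (the attribute values and interval count are illustrative assumptions): the value range is split into equal intervals, and the midpoint of a diverging colour scale is assigned to the central interval.

```python
import numpy as np

# Illustrative sketch: dividing an attribute value range into a default
# number of equal intervals, with the diverging-scale midpoint on the
# central interval.
values = np.array([2.0, 3.5, 5.0, 7.5, 9.0])   # hypothetical attribute
n_intervals = 5                                 # hypothetical default

edges = np.linspace(values.min(), values.max(), n_intervals + 1)
bins = np.clip(np.digitize(values, edges) - 1, 0, n_intervals - 1)

# Index into a diverging colour scale: the central interval gets the
# scale midpoint.
midpoint = n_intervals // 2
print(edges)     # equal-width interval boundaries
print(bins)      # interval index per value
print(midpoint)  # 2
```

Each interval index would then be mapped to one colour of the chosen diverging scale when rendering a map image or temporal mosaic.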
complexity, while keeping nearly the same RD performance as that of the HM encoder. The experimental results indicate that the algorithm performs well for all types of video sequences. This result verifies that strong mode correlation exists between coding depth levels and between neighbouring frames. Once this correlation is fully exploited, the unlikely coding modes can be skipped. The comparative results of the proposed algorithm and the fast mode decision algorithms proposed by Jamili , Xiang  and Chen  are also included in Table 4. It can be seen that our algorithm saves more than 3, 14 and 14% of the average encoding time compared to Jamili's, Xiang's and Chen's methods, respectively, while achieving very similar coding efficiency. Our proposed algorithm reduces encoding time more than the existing methods for all types of video sequences. Therefore, the proposed fast coding mode decision algorithm provides the best performance in terms of encoding time reduction.
Previous research in this area, notably , introduces the notion of extracting Regions-of-Interest (RoIs) from trajectory data and then performing Sequential Pattern Mining (SPM) to find popular movement paths between these RoIs. However, no previous work focuses specifically on tourism information, and no approach can fluidly discover and visualise results in three dimensions. For example, in  only the spatial dimension is considered during RoI extraction and visualisation. Hence, current approaches cannot extract valuable spatio-temporal patterns such as which months tourist locations are popular to visit. Therefore, in this research we introduce our truly spatio-temporal trajectory RoI and SPM framework (see Section 3). To validate our framework, we attempt to find new patterns using the same Flickr data that was used in  (see Section 4). From our tests we summarise our findings, draw our conclusions about the validity and effectiveness of our approach, and finally introduce some future directions (see Section 5).
Objectives: Two methods have been described to assess fetal cardiac output (CO). It has usually been calculated by using 2D ultrasound to measure the diameter of the outflow valves and Doppler ultrasound to measure flow velocity through the valves. Recently, CO has been assessed using 3D spatio-temporal image correlation (STIC) to measure stroke volume. We aimed to compare the reproducibility of these techniques. Methods: In 27 women with singleton pregnancies, examinations were performed in three gestational age groups: 13-15, 19-21 and >30 weeks of gestation. Each mother was scanned once. Using 2D pulsed wave Doppler, the duration of flow and the average flow velocity in systole were measured through the aortic and pulmonary valves. We averaged values from three consecutive Doppler complexes. The outlet valve diameters were measured and the cardiac output was calculated for each valve. The measurements were repeated to assess reproducibility. In the same women, we acquired STIC volumes of the fetal heart. The volume measurements were made using the 3D Slice method by one observer. Using 2 mm slices, the circumference of the ventricles was traced at the end of systole and diastole to calculate ventricular volume before and after contractions, and hence stroke volume and cardiac output. The measurements were repeated to assess reproducibility. Results: The root mean square difference of log(CO) of repeat measurements ranged between 0.12 and 0.21 using Doppler, compared to 0.7 to 1.47 using STIC. The differences in reproducibility reached statistical significance for both sides of the heart at all but one gestation. Conclusions: We found that Doppler assessment of fetal cardiac output was more reproducible than measurement using STIC.
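The Doppler-based calculation combines the measured quantities roughly as follows (all numbers are hypothetical and this is the standard valve-area-times-flow arithmetic, not necessarily the authors' exact computation):

```python
import math

# Illustrative sketch: cardiac output from a valve diameter and the
# systolic flow measurements described above. Values are hypothetical.
diameter_cm = 0.4                 # hypothetical outflow valve diameter
mean_velocity_cm_s = 30.0         # hypothetical average systolic velocity
flow_duration_s = 0.18            # hypothetical duration of systolic flow
heart_rate_bpm = 140.0            # hypothetical fetal heart rate

area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
stroke_volume_ml = area_cm2 * mean_velocity_cm_s * flow_duration_s
cardiac_output_ml_min = stroke_volume_ml * heart_rate_bpm

print(round(stroke_volume_ml, 3))       # ml per beat
print(round(cardiac_output_ml_min, 1))  # ml per minute
```

Because the diameter enters squared, small measurement errors in it are amplified, which is one reason reproducibility comparisons of this kind are informative.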
Summary The availability of an accurate travel time database is of crucial importance to intelligent transportation systems. Dynamic Travel Time Maps (DTTM) are the appropriate data management means to derive dynamic weights - in terms of time and space - based on collections of large amounts of vehicle tracking data. The DTTM is essentially a spatio-temporal data warehouse that allows for an arbitrary aggregation of travel times based on spatial and temporal criteria, so as to efficiently compute characteristic travel times and thus dynamic weights for a road network. To best utilize the DTTM, appropriate hypotheses with respect to the spatial and temporal causality of travel times have to be developed, resulting in accurate characteristic travel times for the road network. The Neighborhood and the Tiling Method, as candidate travel time computation methods, can be seen as the basic means to implement such hypotheses.
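The aggregation idea can be sketched minimally (the records, link identifiers and the hour-of-day criterion are illustrative assumptions, not the DTTM schema):

```python
from collections import defaultdict
from statistics import mean

# Illustrative sketch: deriving characteristic travel times for road
# links by aggregating tracked travel times over a temporal criterion,
# here the hour of day. Data is hypothetical.
records = [
    # (link_id, hour_of_day, travel_time_s)
    ("link_1", 8, 95.0), ("link_1", 8, 110.0), ("link_1", 14, 60.0),
    ("link_2", 8, 40.0), ("link_2", 8, 44.0), ("link_2", 14, 35.0),
]

by_key = defaultdict(list)
for link, hour, tt in records:
    by_key[(link, hour)].append(tt)

# Characteristic travel time = mean per (link, hour): a dynamic weight.
weights = {key: mean(tts) for key, tts in by_key.items()}
print(weights[("link_1", 8)])   # 102.5
print(weights[("link_2", 14)])  # 35.0
```

A spatial criterion (e.g. grouping nearby links, as in the Neighborhood or Tiling Method) would simply replace or extend the grouping key.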
In this section, we consider the same model reduction problem as in Section 2, and we propose a computationally optimal randomized proper orthogonal decomposition (RPOD∗) algorithm for constructing ROMs of large-scale systems. In Section 2, we introduced the RPOD algorithm, which randomly chooses a subset of the input/output trajectories. A sub-Hankel matrix is constructed from these sampled trajectories, which is then used to form a ROM in the usual BPOD fashion. The Markov parameters of the ROM constructed using the sub-Hankel matrix were shown to be close to the Markov parameters of the full-order system with high probability. We proved that a lower bound exists on the number of input/output trajectories that need to be sampled. The major problem with this algorithm is that different choices of sampling algorithm lead to different lower bounds, and the choice of a good sampling algorithm other than the uniform distribution is unclear.
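The sample-then-reduce structure of RPOD can be sketched as follows (this is not the paper's RPOD∗ algorithm; the matrix below is a hypothetical low-rank stand-in for a Hankel matrix of Markov parameters, and all sizes are illustrative):

```python
import numpy as np

# Illustrative sketch: randomly sample subsets of the input/output
# trajectories, form the corresponding sub-Hankel matrix, and reduce it
# by truncated SVD in the usual BPOD fashion.
rng = np.random.default_rng(6)

n_out, n_in, true_rank = 40, 40, 5
H = rng.standard_normal((n_out, true_rank)) @ rng.standard_normal((true_rank, n_in))

# Randomly chosen subsets of the output/input trajectories.
rows = rng.choice(n_out, size=20, replace=False)
cols = rng.choice(n_in, size=20, replace=False)
H_sub = H[np.ix_(rows, cols)]

# BPOD-style reduction: keep the r dominant singular directions.
U, s, Vt = np.linalg.svd(H_sub, full_matrices=False)
r = 5
H_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(H_sub - H_r) / np.linalg.norm(H_sub)
print(rel_err < 1e-10)  # sub-Hankel matrix has rank <= 5
```

The open question noted above is which distribution to use when drawing `rows` and `cols`: uniform sampling works, but its lower bound on the number of sampled trajectories is not obviously the best achievable.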
d_{kk′} = ρ_k − ρ_{k′}   (6)

It is customary to reproduce the matrices of all-to-all distances in the form of a dendrogram by placing closely related markers in the same block. To preserve the structure (6) as much as possible, we use the standard neighbor-joining tree-generating algorithm . We search the matrix (6) for the closest markers, and then connect them into a block. Once the markers are connected, they are removed from the distance matrix and replaced by the block connecting them. The neighbor-joining algorithm continues until all N markers are connected in a tree, and each branch acquires a length, with length interpreted as the estimated number of substitutions required to resolve the block. The functional contingency between blocks of markers on the largest scale of the movement is disclosed by their geometric proximity in the resulting dendrogram. Despite all participants sharing roughly the same anatomy and performing the same movements, the structures of the calculated dendrograms can differ substantially in terms of individual movement features and level of movement.
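The joining process described above can be sketched in simplified form (this is a greedy agglomerative sketch, not the full neighbor-joining algorithm with its rate-corrected distances; the distance matrix and average-linkage update are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch: repeatedly find the closest pair of markers or
# blocks, connect them into a block, and replace them by that block in
# the distance matrix, until a single tree remains.
def join_tree(dist, labels):
    """Return a nested-tuple tree built by greedily joining closest pairs."""
    dist = dist.astype(float).copy()
    nodes = list(labels)
    while len(nodes) > 1:
        # Closest pair in the current distance matrix (ignore diagonal).
        masked = dist + np.diag(np.full(len(nodes), np.inf))
        i, j = np.unravel_index(np.argmin(masked), masked.shape)
        i, j = min(i, j), max(i, j)
        block = (nodes[i], nodes[j])
        # Distance from the new block to the rest: average linkage here.
        merged = (dist[i] + dist[j]) / 2.0
        dist = np.delete(np.delete(dist, [i, j], 0), [i, j], 1)
        merged = np.delete(merged, [i, j])
        dist = np.vstack([np.hstack([dist, merged[:, None]]),
                          np.hstack([merged, [0.0]])])
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [block]
    return nodes[0]

D = np.array([[0, 1, 6, 6],
              [1, 0, 6, 6],
              [6, 6, 0, 2],
              [6, 6, 2, 0]], dtype=float)
tree = join_tree(D, ["m1", "m2", "m3", "m4"])
print(tree)  # (('m1', 'm2'), ('m3', 'm4'))
```

Closely related markers (small pairwise distance) end up in the same block, which is exactly the proximity structure the dendrogram is meant to expose.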
Emerging database management systems that can handle spatial data types have changed both GISystems and GIScience. Systemwise, this technology enables a transition from current GIS technology to a new generation of spatial information appliances, tailored to specific user needs [Egenhofer 1999]. For the GIScience community, it enables many theoretical proposals to face the crucial test of practice. One of the important challenges for the GIScience community is finding ways to use spatially enabled DBMS to build innovative applications that deal with spatio-temporal data [Erwig, Güting et al. 1999] [Hornsby and Egenhofer 2000]. Modeling spatio-temporal applications is a complex task that involves representing objects with spatial extensions and attribute values that change over time [Frank 2003]. To deal with spatio-temporal data, one alternative is to build a specialized DBMS created for efficient support of spatio-temporal data types, as in the projects CONCERT [Relly, Schek et al. 1997] and SECONDO [Dieker and Güting 2000]. When it is not possible to use a specialized DBMS, one has to build a layered architecture on top of an existing object-relational DBMS. This is the focus of this paper, where we consider how to support applications of spatio-temporal data using object-relational database management systems (OR-DBMS). In this case, one basic question arises: how to design a flexible query processor for spatio-temporal data using an object-relational DBMS?
One line of research that is emerging in the investigation of critical Reynolds numbers is the consideration of alternating turbulent-laminar bands depicted in figure 1.3 which form within the flow. It has been established that in plane Couette flow (Prigent et al., 2002, 2003, Barkley & Tuckerman, 2005, 2007), counter-rotating Taylor-Couette flow (Prigent et al., 2002, 2003), and plane Poiseuille flow (Tsukahara et al., 2005), near transition the system can exhibit a remarkable phenomenon in which turbulent and laminar flow form persistent alternating patterns on scales very long relative to both wall separation and the spacing between turbulent streaks. While the origin of these patterns remains a mystery, they are intimately connected with the lower limit of turbulence in shear flows. Key to the investigation of these states is the consideration of the spatio-temporal aspects of the flow. Investigating these states in pipe flow is a more challenging endeavour, mostly due to the strong advection of fluid down the length of the pipe.