In this paper, we have improved two noneigenvector high-resolution methods, namely the propagator method and the Ermolaev and Gershman (EG) algorithm, and we have proposed a noneigenvector version of the MUSIC algorithm that replaces singular value decomposition with a fixed-point algorithm. The improvement of the propagator and Ermolaev–Gershman algorithms is based on an LU or QR factorization of the spectral matrix, which leads to efficient localization of narrow-band sources even at low SNR. Indeed, the upper triangular matrices contain the information needed for source localization. We have modified the existing eigenvalue-based methods for estimating the number of sources by using the diagonal elements of the upper triangular matrix instead. The existing propagator method is a least-squares solution and remains very sensitive to noise; by contrast, the modified propagator method is computed accurately from the upper triangular matrix even in the presence of noise. A major problem of the Ermolaev and Gershman algorithm is the estimation of its threshold. We propose new analytical thresholds, estimated from the norm of the upper-right block of the triangular matrix. The resulting localization algorithm without eigendecomposition is an approximation method, but the numerical results show its high accuracy even when the SNR is low. By adapting the fixed-point algorithm to the estimation of the leading eigenvectors, we obtained a noneigenvector source localization method; the MUSIC algorithm has been shown to be up to 2.5 times faster with this improvement. We compared the performance of the three proposed noneigenvector methods (propagator, EG, and MUSIC with the fixed-point algorithm) by providing the standard deviation evolution for several SNR values. MUSIC and the propagator method yield close standard deviation results.
MUSIC with the fixed-point algorithm has the smallest computational load and exhibits the best performance.
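To illustrate the fixed-point idea, the leading eigenvectors of a spectral matrix can be estimated without a full eigendecomposition by a power-type fixed-point iteration with deflation. The sketch below (plain NumPy) shows the principle only; it is not the paper's exact algorithm, and the iteration count is a hypothetical example value.

```python
import numpy as np

def fixed_point_eigenvectors(R, k, iters=200):
    """Estimate the k leading eigenvectors of a Hermitian spectral
    matrix R by fixed-point (power) iteration with deflation.
    Illustrative sketch, not the authors' exact scheme."""
    n = R.shape[0]
    V = np.zeros((n, k), dtype=R.dtype)
    Rd = R.copy()
    for j in range(k):
        v = np.ones(n, dtype=R.dtype) / np.sqrt(n)  # initial guess
        for _ in range(iters):
            v = Rd @ v
            v /= np.linalg.norm(v)                  # fixed-point update
        V[:, j] = v
        lam = np.real(v.conj() @ Rd @ v)
        Rd = Rd - lam * np.outer(v, v.conj())       # deflate
    return V
```

In a MUSIC-style method, the estimated signal subspace spanned by these vectors would replace the one obtained from an SVD.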
For the implementation of this technique, an RFID tag must be attached to every object in the scene. Once an attached tag is detected, the geometric model for that particular object is retrieved from the local database and its pose is estimated. In general object recognition methods employing a 3D vision-based algorithm, 3D models are extracted from the objects in the current image and then compared with the object models from the entire database. Such matching yields a reduced set of objects with a higher probability of being present in the current scene; the objects with higher matching scores are more likely to be present. Normally, the iterative closest point (ICP) algorithm is used to carry out the matching process, which makes the computation expensive. With the additional information from RFID tags, the search tree becomes smaller and false matches with similar objects are also reduced. This approach is suitable for medium and large databases. However, extra labour is needed to attach a tag to each object, so the approach is somewhat tedious and time-consuming during initial deployment.
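The ICP matching mentioned above can be sketched as follows: each iteration matches every source point to its nearest destination point and then solves for the best rigid transform via SVD (the Kabsch method). This is a minimal point-to-point ICP in NumPy, not the exact variant used in the cited work.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP step: nearest-neighbour matching,
    then a best-fit rigid transform (Kabsch/SVD)."""
    # nearest-neighbour correspondences
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # best-fit rotation and translation via SVD
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    Rm = Vt.T @ U.T
    if np.linalg.det(Rm) < 0:   # avoid reflections
        Vt[-1] *= -1
        Rm = Vt.T @ U.T
    t = mu_d - Rm @ mu_s
    return src @ Rm.T + t

def icp(src, dst, iters=20):
    """Iterate matching + alignment until the clouds converge."""
    for _ in range(iters):
        src = icp_step(src, dst)
    return src
```

The cost of the naive all-pairs distance matrix is what makes this expensive on large model databases; tag information shrinks the candidate set before ICP is even run.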
Abstract — Over the past years, many algorithms have been proposed to address the challenges faced by automated systems in detecting and localizing the text present in natural images. Applications such as keyword-based image search, text-based image indexing, tourist guides and image text translation systems depend on such automated systems. The purpose of this paper is to compare three basic methods for text extraction in natural images: edge-based, connected-component-based and texture-based. The algorithms are implemented and applied to an image set with varying text sizes, font styles and text languages. Performance is evaluated using the precision rate and recall rate of each method on the same image set.
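The precision and recall rates used for the comparison can be computed as below. This simplified sketch counts a detected region as correct only if it exactly matches a ground-truth region; real text-detection evaluations usually use an area-overlap criterion instead.

```python
def precision_recall(detected, ground_truth):
    """Precision = correct detections / all detections;
    recall = correct detections / all ground-truth regions.
    Exact-match sketch of the evaluation described above."""
    detected, ground_truth = set(detected), set(ground_truth)
    true_pos = len(detected & ground_truth)
    precision = true_pos / len(detected) if detected else 0.0
    recall = true_pos / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```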
Today there is a need for intelligent traffic management systems to deal with the continuously increasing traffic on the roads. Information about the current situation can be extracted automatically by image processing algorithms. Along with vehicle detection and tracking, identification through license plate recognition is important for a variety of applications, such as automatic congestion charge systems, access control, tracing of stolen cars, and identification of dangerous drivers. An automatic number plate recognition (ANPR) system is applied to such applications; it is used for tracking, identifying and monitoring moving vehicles. Embedded systems are cheaper than general-purpose computers and, owing to their physical robustness, suitable for deployment in harsh environments. An ANPR system involves several steps: image capture, image processing and plate recognition. The image processing phase includes two tasks, plate localization and character recognition. Plate localization normally requires two major tasks: separating the number plate area from the non-plate area, and plate adjustment. The plate recognition stage requires a pre-processing step, plate segmentation, in which the symbols or characters are separated from the number plate so that only useful information is passed to recognition, where the image is converted into symbols or characters. The detection stage of the license plate is the most critical step in an automatic vehicle identification system. Much research has been carried out to overcome the problems faced in this area, but no single method detects license plates across different places or countries, because of differences in plate style and design.
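A common first step in plate localization is to find the horizontal band with the highest density of vertical edges, since the characters on a plate produce many strong vertical transitions. The sketch below (NumPy, with a hypothetical band height) illustrates that idea only; real ANPR pipelines add morphology and geometric checks.

```python
import numpy as np

def plate_candidate_rows(gray, band=10):
    """Return the row range most likely to contain a number plate,
    scored by vertical-edge density. Illustrative sketch."""
    # vertical edges: horizontal intensity differences
    edges = np.abs(np.diff(gray.astype(float), axis=1))
    row_energy = edges.sum(axis=1)
    # sliding-window sum over `band` consecutive rows
    window = np.convolve(row_energy, np.ones(band), mode="valid")
    top = int(window.argmax())
    return top, top + band
```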
Figure 4. The box plot of RMSE for variable X in the case when the system is only partially observed. Results are shown for different localization strategies. For the definitions of localization strategies S1, S2, S3 and S4, see the text. The title of each panel indicates the localization radius (length of support). The lower and upper bounds of the box give the 25th and 75th percentiles, respectively. The thick line across the interior of the box gives the median. The whiskers depend on the interquartile range (IQR), which is equal to the vertical length of the box: they extend to the most extreme values that are no more than 1.5 IQR from the box. Any values that fall outside the whisker end points are considered outliers and are displayed as circles. The numbers below S4 indicate the value of β. There is no box plot for β = 1 for S4 with the Askey function, since the Askey function is not defined for β = 1 (it requires |β| ≤ 0.79; see Sect. 3.3).
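The box-plot statistics described in the caption can be reproduced as follows; this pure-Python sketch uses the standard 1.5 IQR whisker rule stated above (the percentile interpolation scheme is one common choice among several).

```python
def box_stats(values):
    """Quartiles, whiskers and outliers as in the box plots above:
    whiskers extend to the most extreme values within 1.5 IQR of the
    box; anything beyond is an outlier."""
    xs = sorted(values)
    def percentile(p):   # linear interpolation between order statistics
        k = (len(xs) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)
    q1, med, q3 = percentile(0.25), percentile(0.5), percentile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inliers = [x for x in xs if lo_fence <= x <= hi_fence]
    outliers = [x for x in xs if x < lo_fence or x > hi_fence]
    return {"q1": q1, "median": med, "q3": q3,
            "whisker_low": inliers[0], "whisker_high": inliers[-1],
            "outliers": outliers}
```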
The localization of mobile sensors is a key issue in WSNs. Specifically, an accurate location can maximize the benefits of WSNs. High localization accuracy can be achieved through an efficient and lightweight scheme that is adaptable to sensor characteristics, and constructing such a scheme on the basis of the SMC method can improve localization accuracy in dynamic systems, such as mobile sensors. In this study, we introduced a thematic taxonomy to classify the current SMC localization schemes. Moreover, we presented a comprehensive survey of state-of-the-art SMC schemes and classified them according to their localization requirements. The critical aspects of existing SMC localization schemes were analyzed to identify the advantages and disadvantages of each scheme. Furthermore, the similarities and differences of the schemes were investigated on the basis of important parameters, such as localization accuracy, computational cost, communication cost, and number of samples. We discussed the challenges and open research issues related to these parameters. In future work, the localization accuracy of range-free schemes could be improved by combining RSSI technology with SMC schemes; RSSI can reduce computational and communication costs by exploiting the signal strength indicator.
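RSSI readings are usually turned into range estimates with the log-distance path-loss model, which is how an RSSI component could supplement an SMC scheme. The sketch below assumes hypothetical example values for the 1 m reference power and the path-loss exponent; both are environment-dependent in practice.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.7):
    """Estimate distance (m) from an RSSI reading using the
    log-distance path-loss model:
        RSSI = tx_power - 10 * n * log10(d)
    tx_power_dbm is the RSSI expected at 1 m (hypothetical value)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```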
The distributed source model assumes that the dipoles are distributed in the cerebral volume, often according to a 3D grid. The dipole positions are fixed, and their amplitudes must be estimated. The head model is another assumption used to compute the inverse solution for the source location; head models range from spherical to more realistic models based on boundary and finite elements. The spherical head model consists of concentric layers with different electrical conductivities, representing the skull, scalp, etc. More realistic head models can be created using finite elements or boundary elements, and these can be adjusted to approximate a real head extremely closely. Realistic head shapes, rather than the spherical head model, have been shown to improve dipole and other forms of EEG source modeling by up to 3 cm in localization accuracy.
Ying Zhang analysed the mobility patterns of water near the seashore and proposed a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO). In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is computed using the spatial correlation of underwater object mobility, and its location can then be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; however, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes remains simple, and the method clearly decreases the energy consumption and time cost of localizing these mobile nodes. Simulation results indicate that this method achieves higher localization accuracy and a better localization coverage rate than other widely used localization methods in this field.
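The range-based PSO step can be illustrated with a minimal particle swarm that estimates a 2D node position from beacon positions and measured ranges. This is a generic PSO sketch (standard inertia/cognitive/social weights, hypothetical values), not the MP-PSO paper's exact formulation.

```python
import random

def pso_localize(anchors, dists, iters=150, swarm=30, seed=1):
    """Minimal PSO minimizing the sum of squared range residuals
    between a candidate 2D position and the beacon measurements."""
    rng = random.Random(seed)
    def cost(p):
        return sum((((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5 - d) ** 2
                   for a, d in zip(anchors, dists))
    pos = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(swarm):
            for k in range(2):  # inertia + cognitive + social terms
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (pbest[i][k] - pos[i][k])
                             + 1.5 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

The comparatively heavy cost function evaluated per particle per iteration is the source of the computational complexity noted above.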
Vineet Kumar et al. proposed iris localization for unconstrained infrared iris images using Daugman's integro-differential operator and morphological operations. Iris images captured with reflections, low-contrast illumination and occlusions due to eyelids and eyelashes are pre-processed before the iris is localized with morphological operations; this method gives good accuracy compared with other iris localization methods. Volnei Klehm et al. described a biometric recognition system based on the human iris which uses signal processing algorithms with minimum-energy correlation filters and principal component analysis; the minimum-energy correlation filter is more robust to variations in the iris images, and the method yields a better recognition rate. Arezou Banitalebi et al. proposed an adaptive fuzzy filtering method for noise-free iris recognition, reducing the noise in the iris region caused by eyelids, eyelashes and light illumination; the noisy pixels are eliminated using a filling filter with an adaptive window size and fuzzification of the noise pixels in the iris pattern. Kiran et al. described an iris recognition method that overcomes the low texture detail of iris patterns in eye images captured under the visible spectrum by using a white light-emitting diode, which extracts high-quality iris patterns with more texture detail to recognize the correct iris test pattern under any illumination variation. Nalla et al. proposed an iris classification method using the iris fiber structure radiating from the pupil.
The iris structure is divided into different classes based on the density of the iris fibers. The features of these iris fiber structures are derived from a log-Gabor filter. A sparse representation, which gives a weighted sum of each class of iris structure, is subjected to an online dictionary algorithm. The final classification of the iris is achieved using an adjudication process in which the iris data are divided into groups based on a dissimilarity measurement.
When two nodes transmit at slightly different frequencies, their interference produces a low-frequency beat signal that can be measured even by resource-limited sensor nodes. The received signals are processed centrally by infrastructure nodes, allowing a user's status information to be accessed by authorized parties. Other IR localization methods can be found in the research papers listed in . Because body area nodes carry onboard radio hardware, radio frequency (RF) signals are a popular means of localization: properties such as signal strength, phase, or frequency are analyzed to estimate a node's state. One advantage is that RF localization has been shown to achieve centimeter-level accuracy even in sparse networks. On the other hand, since typical sensor nodes transmit between 400 MHz and 2.6 GHz, their limited hardware resources cannot sample the raw signal directly for phase or frequency measurements. Instead, radio interferometric methods produce low-frequency beat signals, as shown in Figure 4, whose beat frequency and signal phase can be measured; the received signal strength indicator can be read directly from the radio chip. Lighthouse and Spotlight localization technologies use a light beacon to determine node positions. Although both methods claim high accuracy, they require line of sight, a powerful light source to illuminate the area, and hardware adapted to the light source in order to perform well.
Localization methods can be classified according to different criteria, such as mode of operation, measurement and calculation, required accuracy, hardware complexity, network architecture, and node deployment model. According to measurement and calculation, localization methods are divided into range-based and range-free methods. In range-based localization, the position of an unlocalized node is estimated using the distance, time difference, angle, or signal strength between the unknown node and reference nodes; examples of range-based methods are TDOA, TOA, AOA, and RSSI. In range-free methods, the position is calculated from connectivity to reference nodes, in terms of hop count only, with no need for distances to reference nodes. Well-known range-free methods include Centroid, Weighted Centroid, DV-hop, Improved DV-hop, APIT, Bounding Box, and Amorphous.
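The weighted-centroid method mentioned above is among the simplest range-free schemes: the unknown node's position is the weighted average of the anchor positions it can hear, with stronger signals weighted more heavily. The sketch below uses one common RSSI-based weighting; other weight functions appear in the literature.

```python
def weighted_centroid(anchors, rssi):
    """Weighted-centroid localization from anchor positions and the
    RSSI (dBm) heard from each anchor. Stronger (less negative)
    RSSI -> larger weight; the exponential weighting is one
    common choice among several."""
    weights = [10 ** (r / 20.0) for r in rssi]
    total = sum(weights)
    x = sum(w * a[0] for w, a in zip(weights, anchors)) / total
    y = sum(w * a[1] for w, a in zip(weights, anchors)) / total
    return x, y
```

With equal RSSI values this reduces to the plain Centroid method.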
Localization of technology within an industry helps a company ensure technology absorption, which enables future development. In this paper, we have identified the localization factors within technology transfer processes and used them to select the proper technology transfer method. Since the different methods depend on specific factors, we have aimed to provide an outline by comparing the localization factors with the impact factors for selecting the technology transfer method. This could help companies choose the appropriate mechanism based on their short-term and long-term goals, in addition to the primary aim of technology localization. Since only a limited number of models were studied in this paper, it would be valuable in future studies to examine various models of technology transfer and compare them with these localization factors in a detailed analysis.
The focus of this work is to investigate delamination damage in laminated composite beams, in order to establish a vibration-based structural health monitoring (VSHM) method for laminated structures. The analysis concentrates on the vibration characteristics of the samples, in particular the first several natural frequencies of a composite laminate beam with delamination damage. The core of this work is an experimental investigation of the vibration response of a composite laminate beam and of the changes caused by delaminations of different sizes and at different locations along the beam. The study is divided into three parts: delamination detection, delamination localization, and delamination estimation. The aim is to determine how the first six harmonic frequencies change due to the delamination, and the results show that they can be successfully used to investigate the presence, location and dimensions of a delamination in a composite beam. A pattern recognition analysis is used to locate the damage, while detection and evaluation rely on the changes in the harmonic frequencies. A finite element analysis is performed, and the variations of the natural frequencies due to delamination are in good agreement with the experimental results.
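A frequency-shift damage indicator of the kind described above can be sketched as follows: compare the measured harmonic frequencies against a healthy baseline and flag the beam when any relative shift exceeds a threshold. The 1% threshold is a hypothetical example value, not one taken from the study.

```python
def frequency_shift_indicator(baseline_hz, measured_hz, threshold=0.01):
    """Flag possible delamination when any harmonic frequency shifts
    by more than `threshold` (relative) from its healthy baseline.
    Returns (flag, per-mode relative shifts)."""
    shifts = [(b - m) / b for b, m in zip(baseline_hz, measured_hz)]
    return any(abs(s) > threshold for s in shifts), shifts
```

The pattern of shifts across modes, rather than any single shift, is what carries the localization information exploited by the pattern recognition analysis.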
As can be seen, our method is robust against rotation, whereas rotation degrades the performance of the other methods considerably, owing to their reliance on circular edge detection. In general, circular edge detection determines the location of the circle with the maximum difference in pixel gray levels between two adjacent circular curves. In practice, these differences are calculated using two arcs instead of a whole circle. The performance of the iris localization depends on the location and angle of these arcs relative to the iris axis; as a consequence, rotating the image degrades the results of circular edge detection, mainly because the wrong arcs are used in the process and because of the presence of eyelids and eyelashes. In contrast to these conventional methods, iris localization in the proposed method is based on a geodesic active contour model, which computes the iris boundaries independently of any geometric shape, including circles and arcs; it is therefore robust to the image rotation problem.
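The circular edge detection criterion described above can be sketched as follows: score each candidate radius by the difference of mean gray levels on two adjacent circular curves, and keep the radius with the largest difference. This pure-Python sketch samples full circles rather than the two arcs used in practice, and is illustrative only.

```python
import math

def circular_edge_radius(img, cx, cy, radii, n_samples=64):
    """Find the radius maximizing the difference of mean gray levels
    between two adjacent circular curves around (cx, cy).
    `img` is a 2D list of gray levels."""
    h, w = len(img), len(img[0])
    def ring_mean(r):
        total = 0.0
        for i in range(n_samples):
            a = 2 * math.pi * i / n_samples
            x = min(max(int(round(cx + r * math.cos(a))), 0), w - 1)
            y = min(max(int(round(cy + r * math.sin(a))), 0), h - 1)
            total += img[y][x]
        return total / n_samples
    return max(radii, key=lambda r: abs(ring_mean(r + 1) - ring_mean(r)))
```

When only two arcs are sampled instead of full rings, the score depends on where those arcs fall, which is exactly why image rotation hurts arc-based detectors.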
ABSTRACT: Facial landmark localization is one of the basic approaches to identifying face alignment and achieving facial expression recognition, even in the presence of occlusions. Acquisition conditions such as background complexity, degree of occlusion, illumination variations and facial expressions affect landmark localization performance. Using eigenfaces for shape modeling degrades system performance, as eigenfaces are not robust to variations in shape, pose and expression. In the proposed work, an explicit model-based method is used for landmarking. Landmark detection and localization are carried out using a Point Distribution Model (PDM) and an Active Shape Model (ASM). The proposed method is implemented in MATLAB 2015a using existing standard datasets and our own dataset. The experimental results indicate that the proposed work detects and localizes facial landmarks more effectively on images exhibiting different degrees of occlusion, expression and pose variation, which in turn makes the system more robust.
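The Point Distribution Model at the heart of ASM fitting represents any plausible face shape as a mean shape plus a weighted sum of principal modes of variation. The sketch below (NumPy, assuming the training shapes are already aligned and flattened to landmark vectors) illustrates that construction only; the paper's MATLAB implementation may differ in detail.

```python
import numpy as np

def build_pdm(shapes, n_modes=2):
    """Build a Point Distribution Model: mean shape plus principal
    modes of variation from aligned training shapes, each flattened
    to a 1D landmark vector (x1, y1, x2, y2, ...)."""
    X = np.asarray(shapes, dtype=float)           # (n_samples, 2*n_points)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                          # principal variation modes
    var = (s[:n_modes] ** 2) / max(len(X) - 1, 1) # variance per mode
    return mean, modes, var

def pdm_reconstruct(mean, modes, b):
    """Generate a shape from model parameters b: x = mean + b @ modes."""
    return mean + np.asarray(b) @ modes
```

During ASM search, the parameters `b` are clamped (typically to a few standard deviations per mode) so the fitted landmarks stay within plausible face shapes, which is what makes the explicit model robust where raw eigenfaces are not.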
We used specific antisera and immunohistochemical methods to investigate the subcellular localization and expression of Bcr, Abl, and Bcr-Abl proteins in leukemic cell lines and in fresh human leukemic and normal samples at various stages of myeloid differentiation. Earlier studies of the subcellular localization of transfected murine type IV c-Abl protein in fibroblasts have shown that this molecule resides largely in the nucleus, whereas
cortex area; obtaining synchronicity information is very important in studies related to epilepsy. There are several ways to check the synchronization between signals: for example, phase synchronization methods that can adapt to the non-linear behavior of neurons, and methods that consider chaos as the basis of phase synchronization. In this study, phase synchronization is applied to improve the localization accuracy. Note that, by assuming the EEG signals are stationary, we can use the FFT method for phase synchronization; for non-stationary signals, the wavelet method must be used instead.
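A common Fourier-based measure of phase synchronization is the phase-locking value (PLV): extract the phase of each frequency bin, then measure how consistent the phase differences between the two signals are (1 = perfectly locked). The pure-Python sketch below assumes stationary signals, as in the FFT approach above; it is an illustration of the measure, not the study's exact pipeline.

```python
import cmath
import math

def phase_locking_value(sig_a, sig_b):
    """Phase synchronization between two equal-length signals via the
    DFT: mean resultant length of the per-bin phase differences."""
    n = len(sig_a)
    def dft_phases(x):
        return [cmath.phase(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                                for t in range(n)))
                for k in range(1, n // 2)]
    pa, pb = dft_phases(sig_a), dft_phases(sig_b)
    # mean resultant length of the phase differences
    return abs(sum(cmath.exp(1j * (a - b)) for a, b in zip(pa, pb))) / len(pa)
```

For non-stationary EEG, the same phase-consistency measure would be computed on wavelet coefficients instead of DFT bins.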
G3BP and nsP3 were also shown to interact via two conserved repeats in the C-terminal variable domain of nsP3. When we replaced the phenylalanine and glycine from either one of the nsP3 C-terminal TFGD repeats with alanines, there was no apparent change in the co-localization of nsP3 and Rin. However, the interaction between nsP3 and Rin was completely lost when both TFGD repeats were mutated (Fig. 3c). This suggests a direct interaction between these amino acid repeats and Rin, and shows that the two repeats are redundant for the interaction with Rin. Ae. albopictus Rin was isolated from U4.4 cells. Sequence analysis revealed that the N-terminal NTF2-like domain has high homology with other NTF2-like domains, including that of human G3BP (Fig. 4a). The three-dimensional crystal structures of the NTF2-like domains from Drosophila Rin and human G3BP have recently been resolved, and contain a binding pocket for FxFG-containing peptides [44, 45]. The NTF2-like domain from Ae. albopictus Rin was modelled onto that of Drosophila, showing high resemblance (Fig. 4b). As expected from this model, point mutations in the binding pocket of the Rin NTF2-like domain (position F34) greatly reduced the interaction between nsP3 and Rin (Fig. 4c). Although Rin still partly localized to nsP3 granules, this result does provide evidence of an interaction between CHIKV nsP3 and the NTF2-like FxFG binding pocket of Rin. A recent study has confirmed this interaction between homologous sites in SFV nsP3 and mammalian G3BP. Additional interactions were predicted between FxFG peptides and residues in the NTF2-like binding pocket of G3BP, which could explain the strongly reduced, but not completely abolished, interaction of mutated Rin with nsP3.
HE4 was demonstrated to suppress the activity of serine proteases that degrade type I collagen in patients with renal fibrosis. Thus HE4 was considered a potential biomarker and therapeutic target for the treatment of renal fibrosis. However, the implication of HE4 level changes in serum and kidney samples in different kidney diseases remains largely unknown. In this study, the expression and localization of HE4 in kidney biopsies were detected, and then HE4 levels in serum samples from patients with CKDs were analyzed. We found that, compared with controls, patients with CKD displayed a higher serum level of HE4. The correlations between anti-HE4 antibodies and tubulointerstitial fibrosis or eGFR suggested that HE4 is a useful biomarker for the progression of CKD.
Two methods are generally employed on color images to solve this problem: texture analysis or color discrimination. In most cases the first method is very efficient, but its accuracy depends on soil roughness (due to clods, tires, and implement prints) and it requires time-costly algorithms. The second method is based on the color differences between soil and vegetation. In our case, the R, G, B color base did not appear to give accurate colorimetric information when images are acquired under natural light. Indeed, color levels depend on lightness, which has to be separated from the chromatic values.
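One standard way to separate lightness from chromatic values is to normalize RGB to chromaticity coordinates, on which a vegetation index such as excess green can then be computed. This is a common choice in the literature, sketched below; the paper's exact color transform may differ.

```python
def rgb_to_chromaticity(r, g, b):
    """Normalize RGB to chromaticity coordinates (lightness scaled
    out), so the result depends on color proportions only."""
    total = r + g + b
    if total == 0:
        return 0.0, 0.0, 0.0   # black pixel: no chromatic content
    return r / total, g / total, b / total

def excess_green(r, g, b):
    """Excess-green index 2g - r - b on chromaticity coordinates,
    often thresholded to separate vegetation from soil."""
    rn, gn, bn = rgb_to_chromaticity(r, g, b)
    return 2 * gn - rn - bn
```

Because the chromaticity coordinates sum to one, uniformly brightening or darkening a pixel leaves the index unchanged, which is exactly the lightness invariance sought under natural illumination.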