To diagnose the machine condition from thermal images, the next stage, the so-called feature representation process, is implemented to extract useful information from the image. According to Umbaugh, image features include histogram, spectral, texture, and color features. Among these, histogram features, which are purely statistical, provide a compact representation of image characteristics without requiring prior knowledge about the image. Moreover, they describe the distribution of gray-level intensities in the image. Hence, they are suitable for fully automatic characterization of images and are used in this study. The histogram features consist of the mean, standard deviation, skewness, energy, entropy, and kurtosis. In practice, raw features often suffer from high dimensionality, which not only complicates data storage but also reduces fault-diagnosis accuracy. Therefore, dimensionality reduction, or feature extraction, is an essential preprocessing step for classification tasks. In machine fault diagnosis, numerous feature-extraction approaches have been proposed, such as independent component analysis, principal component analysis, and genetic algorithms. In this study, generalized discriminant analysis (GDA)-based feature extraction is investigated with the aim of improving classification performance.
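The six histogram features listed above can be computed directly from a grayscale image. A minimal sketch follows; the exact formulas (e.g., moment-based skewness/kurtosis, 256-bin histogram for energy and entropy) are assumptions, not necessarily those used in the original study.

```python
import numpy as np

def histogram_features(image):
    """First-order histogram features of a grayscale image.

    Returns the six features named in the text: mean, standard deviation,
    skewness, kurtosis, energy, and entropy. Formulas are assumed.
    """
    x = np.asarray(image, dtype=np.float64).ravel()
    mean = x.mean()
    std = x.std()
    # Standardized central moments for skewness and kurtosis.
    skew = np.mean((x - mean) ** 3) / (std ** 3 + 1e-12)
    kurt = np.mean((x - mean) ** 4) / (std ** 4 + 1e-12)
    # Normalized histogram over the 8-bit gray-level range.
    hist, _ = np.histogram(x, bins=256, range=(0, 256))
    p = hist / hist.sum()
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}
```

For a perfectly uniform image the histogram collapses to one bin, so energy is 1 and entropy is 0, which matches the intuition that these features measure gray-level spread.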
Diagnosis of radiological images obtained from computerized tomography (CT) or magnetic resonance (MR) imaging underwent comparable changes: the viewing of radiological films has been replaced by the viewing of completely digitized radiological images. These changes have been remarkably promoted by the cost savings over conventional film and its development, and by environmental considerations in reducing the pollution induced by film-development by-products [15-18]. The technology has matured, and spatially fully distributed diagnostics are now offered that separate the diagnostic work with the patient (CT, MR imaging, etc.) at the hardware location from the viewing of the images by radiologists and, in addition, from the typing of the radiologists' dictations by secretaries at a different, third location [19,17,20,18]. Adequate standards for image acquisition, interaction with the hospital information system, and data documentation have been implemented alongside this development. The Picture Archiving and Communication System (PACS) and the DICOM standard should be mentioned here. To our knowledge, such regulations and internationally accepted standards are largely missing in diagnostic pathology. There exist recommendations and regulations of laboratory practice, included in the Code of Federal Regulations (CFR) of the United States of America as well as in Good Laboratory Practice (GLP), and veterinary pathologists have issued a toxicologic pathology position paper on pathology image data (regulatory forum). Whether these recommendations can be transferred to human diagnostic pathology remains an open question. The same consideration holds true when taking a closer look at the working conditions of radiology and diagnostic pathology: digital images acquired from histological
knowledge. It is provided with knowledge and experience by one or more experts in a given field. Such an expert system simulates the decision-making process of a human expert through deductive reasoning and judgment. An investigation of several large hospitals found no expert system in clinical application. The knowledge base of a general medical diagnosis expert system is built on vital-sign parameters, so the accuracy of medical diagnosis is very difficult to improve. When facial expression is embedded in the construction of the knowledge base, the whole system comes closer to the process of clinical diagnosis. The results show that this method achieves a higher diagnosis accuracy rate. The reasoning process of the system mainly includes two parts: first, facial features are obtained through video and digital image processing technology and an initial inference is made; second, the inference from the previous step is combined with the physiological parameters from the database to produce the final diagnosis.
The second part plays an important role: it deals with the symptoms provided by the patient and is further divided into four categories: physical, psychological, cognitive, and motor function. The patient enters the symptoms in each division of the interface corresponding to the problems he or she is experiencing in daily life. Fig. 2 and Fig. 3 depict the interface for matching the symptoms to a particular disease. Based on the above-mentioned formula, different readings are obtained, and the reading with the minimum value yields the most closely matched disease from the KBS. On the basis of the signs and symptoms selected, a patient data vector (PDV) is generated as shown in Fig. 2. The generated PDV is (0.8359375, 0.15625, 0.0625, 0.125), which is matched with the first row since its calculated difference is the smallest compared to the others. So the diagnosed disease is sleep apnea, as shown in Fig. 3, which lies in stage 1 (light sleep) and stage 2 (no eye movement) as depicted in Fig. 4.
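The matching step above amounts to a nearest-row lookup: the PDV is compared against each disease row of the knowledge base and the row with the smallest difference wins. A minimal sketch follows; the knowledge-base rows are hypothetical and the Euclidean distance is an assumed proximity measure, since the paper's exact difference formula is not reproduced here.

```python
import math

def match_disease(pdv, kbs):
    """Return the disease whose KBS reference row is closest to the PDV.

    `kbs` maps disease name -> reference vector. Euclidean distance is
    an assumed proximity measure; the source's formula may differ.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(kbs, key=lambda disease: dist(pdv, kbs[disease]))

# Hypothetical knowledge base; only the PDV values come from the text.
kbs = {
    "sleep apnea": (0.84, 0.16, 0.06, 0.12),
    "insomnia":    (0.30, 0.60, 0.40, 0.10),
}
pdv = (0.8359375, 0.15625, 0.0625, 0.125)
print(match_disease(pdv, kbs))  # the row with the smallest difference wins
```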
Knowledge-based systems are rich with hidden information that can be used for intelligent decision making. Classification and prediction are two forms of data analysis that can be used to extract models describing important data classes or to predict future data trends. Such analysis can help provide a better understanding of the data at large. Whereas classification predicts categorical (discrete, unordered) labels, prediction models continuous-valued functions (Han and Kamber, 2005). Bayesian classifiers are statistical classifiers. They can predict class membership probabilities, such as the probability that a given tuple belongs to a particular class. Bayesian classification is based on Bayes' theorem, described below. Studies comparing classification algorithms have found a simple Bayesian classifier, known as the naïve Bayesian classifier, to be comparable in performance with decision tree and selected neural network classifiers. Bayesian classifiers have also exhibited high accuracy and speed when applied to large databases.
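The naïve Bayesian classifier mentioned above can be sketched in a few lines: it scores each class C by P(C) multiplied by the product of per-feature likelihoods P(x_i|C), assuming the features are conditionally independent. The class, feature names, and data below are made up for illustration; Laplace smoothing is an assumed detail.

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal categorical naïve Bayesian classifier (textbook sketch)."""

    def fit(self, rows, labels):
        self.priors = Counter(labels)          # class frequencies
        self.n = len(labels)
        # counts[class][feature_index][value] = number of occurrences
        self.counts = defaultdict(lambda: defaultdict(Counter))
        for row, c in zip(rows, labels):
            for i, v in enumerate(row):
                self.counts[c][i][v] += 1
        return self

    def predict(self, row):
        def score(c):
            p = self.priors[c] / self.n        # prior P(C)
            for i, v in enumerate(row):
                seen = self.counts[c][i]
                # Likelihood P(x_i|C) with Laplace smoothing
                p *= (seen[v] + 1) / (self.priors[c] + len(seen))
            return p
        return max(self.priors, key=score)

# Hypothetical training tuples: (age group, income) -> buys?
rows = [("young", "low"), ("young", "high"), ("old", "high"), ("old", "low")]
labels = ["no", "yes", "yes", "no"]
clf = NaiveBayes().fit(rows, labels)
print(clf.predict(("young", "high")))
```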
We use two databases of lesion images whose malign or benign nature is perfectly known after histological analysis. The first one was collected at CHU Rouen, France, in collaboration with the research laboratory PSI-INSA Rouen, and was supported by the French National League against Cancer. This database was digitized in true colors by a Nikon LS-1000S 35 mm slide scanner. It was used in previous works [19,25] and [26,27]. We divide this database into a training set (B0) and a test set (B1) used in the first assessment of classification.
This paper implements the integration of information technology with orchard pest and disease management, and builds a modern expert-system intelligent service platform that can disseminate knowledge on orchard pests and diseases quickly, extensively, and effectively. Using this system can reduce disaster losses and pesticide use, and the level of diagnosis and prevention of pests and diseases can be improved. Furthermore, good economic, ecological, and social benefits will be obtained.
Gaussian distributions and its restriction to orthogonal linear combinations, it remains popular due to its simplicity. The idea of applying PCA to image patches is not novel. Our contribution lies in rigorously demonstrating that PCA is well suited to representing keypoint patches (once they have been transformed into a canonical scale, position, and orientation), and that this representation significantly improves SIFT's matching performance. Experiments showed that PCA-SIFT was both significantly more accurate and much faster than the standard SIFT local descriptor. These results are somewhat surprising, since the latter was carefully designed while PCA-SIFT is a somewhat obvious idea. We now take a closer look at the algorithm.
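As a rough illustration of the idea (not the authors' exact pipeline), PCA on flattened keypoint patches reduces each patch vector to a short descriptor by projecting onto the top principal axes. PCA-SIFT learns its projection basis offline from gradient patches; here, as a simplification, the basis is learned from the input itself, and the patch size and component count are assumptions.

```python
import numpy as np

def pca_descriptors(patches, n_components=20):
    """Project flattened keypoint patches onto their top principal components.

    `patches` has shape (n_patches, patch_dim); the result has shape
    (n_patches, n_components). A minimal sketch of the PCA step only.
    """
    X = np.asarray(patches, dtype=np.float64)
    X = X - X.mean(axis=0)               # center the patch vectors
    # SVD yields the principal axes without forming the covariance matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T       # low-dimensional descriptors

rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 39 * 39))  # 39x39 patches, as in PCA-SIFT
desc = pca_descriptors(patches, n_components=20)
print(desc.shape)  # (100, 20)
```

Matching then reduces to comparing these short vectors, which is where the speed advantage over the 128-dimensional SIFT descriptor comes from.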
The lecturer Ala Grad, adviser in excellence and training and member of the Board of Trustees of the Emirates College of Technology, defined institutional learning as "the process of amending institutional behavior by employing different processes, activities, and lessons learned from both inside and outside the organization, in order to improve performance in a systematic way and transition to a learning institution." Grad added that "a learning organization is an enterprise whose members learn and benefit on both a personal and an institutional level; an institution whose vision emerges jointly from different levels of management, that maintains an institutional memory, and where every individual feels responsible for growth and excellence"; an institution "which improves individual and behavioral skills and grows the creative abilities of its employees, applies a spirit of collegiality, mutual respect, and trust in words and deeds, is focused on work teams and organized teamwork, and where there is no culture of blame and free trial and experimentation are allowed." Learning generally has several levels: individual, collective, institutional, and governmental. Individual learning is achieved through guidance, reflection, targeted training, cross-coaching, job rotation, conferences, workshops, and informal learning, while collective learning is achieved through quality workshops, team problem solving, revisions, and task sharing. Institutional learning can be achieved through benchmarking, analysis, organizational excellence awards, auditing, self-evaluation, secret shoppers, suggestions, and complaints systems (Ras Al Khaimah Courts Spread the Organizational Learning Culture, 2014).
Computer-assisted decision making is currently confined to specific areas of patient management such as routine follow-up and drug monitoring. It has not played a major part in clinical diagnosis despite the availability of some excellent programs. The CTX expert system can be adapted to many areas of medicine for developing a consultation and an educational program for nonspecialists in that field. It is ideal for performing triage functions for primary care physicians and emergency room physicians. The multimedia capacity of the program allows for interactive learning and immediate feedback.
Nowadays, the image-based fingerprint matching and recognition approach has attracted significant attention from researchers, and a substantial number of research papers have shown in their related-work sections [12-16] that the image-based approach is more reliable than the minutiae-based approach. In this section, a literature review of current image-based fingerprint matching and recognition is presented, focusing on local descriptor techniques. In 2007, Wang et al. used a Support Vector Machine (SVM) classifier to calculate singularity information and the coefficients of a given orientation model, where the singular points and orientation patterns are used for fingerprint matching. In 2009, Kant and Nath extracted singular delta points from fingerprints, so that only a single print per person is used for comparison. In 2010, Sanjekar and Dhabe introduced a modified approach using the Haar wavelet transform to decompose the given fingerprint samples up to three levels, extracting wavelet statistical features from the decomposed images and then using a distance vector to find the proximity among the given dataset. In 2014, Kumar et al. extracted local descriptors from a region of interest (ROI) after preprocessing the given samples, then used proximity measures (Euclidean distance, histogram intersection, chi-square distance, and a support vector machine) to infer the matching score. In 2015, Zhong and Peng used the SIFT algorithm and a Locality-Sensitive Hashing (LSH) approach for fingerprint matching and retrieval, where the extracted fixed-length features are used for database indexing based on multi-template image feature fusion. In 2015, Saini et al. used the SURF algorithm for the fingerprint recognition process by calculating the percentage of distance
analogical learning,6 and problem-based learning (PBL),7 to name a few. The essence of most of these methods is 'pattern recognition', with the expectation that learners should be able to diagnose any variation on the basic themes. The AMC and others recognise that this in itself is insufficient.5,7,8
In this work, the issues of image transmission over OFDMA in wireless communication systems are discussed. Even though OFDM has various advantages, it still falls short on some criteria, such as increased bit error rate, high PAPR, and reduced efficiency. To avoid these drawbacks, an SC-FDMA technique is implemented in this work for image transmission over wireless communication systems. In the proposed work, SC-FDMA is implemented to check and reduce the error using wavelet transform analysis. Performance metrics such as SNR, PSNR, BER, and MSE are evaluated using MATLAB. The results indicate that the proposed technique offers better performance with reduced error. The BER for different modulation techniques is also evaluated, and it is concluded that the proposed SC-FDMA technique with QPSK modulation provides reduced BER. In future work, the security of the SC-FDMA technique will be improved by using a suitable encryption algorithm.
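The MSE and PSNR metrics used in the evaluation above are standard; since the paper's experiments were done in MATLAB, the following Python sketch is only a reference for how the two quantities relate for 8-bit images.

```python
import numpy as np

def mse(original, received):
    """Mean squared error between two images of the same shape."""
    a = np.asarray(original, dtype=np.float64)
    b = np.asarray(received, dtype=np.float64)
    return np.mean((a - b) ** 2)

def psnr(original, received, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    err = mse(original, received)
    if err == 0:
        return float("inf")      # identical images
    return 10.0 * np.log10(peak ** 2 / err)

a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)        # every pixel off by 16 -> MSE = 256
print(mse(a, b), psnr(a, b))     # 256.0, ~24.05 dB
```

A lower MSE between transmitted and received images directly raises the PSNR, which is why the two metrics move together in the results.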
The advancement in TIT has paved the way for the emergence of profitable techniques that have been extensively used for fault diagnosis of rotating electrical machines. In the past decades, enormous efforts have been dedicated to thermal-image-based fault diagnosis of bearings in rotating electrical machinery. Younus and Yang proposed a novel thermal-imaging-based automatic fault diagnosis methodology to classify the different conditions of rotating machinery using a 2D discrete wavelet transform (DWT). The Mahalanobis distance and the Relief algorithm were applied to select the pertinent features in order to achieve a higher success rate. The selected features were then used as input vectors for classifiers, viz. SVM and linear discriminant analysis, to categorize the different faults. Waqar and Demetgul demonstrated a multilayer-perceptron artificial-neural-network-based fault diagnosis approach to determine the condition of a worm gear. It was observed that the proposed strategy can be used to estimate the oil level and speed of the gearbox and also to detect the heating patterns for those operating conditions. Garcia-Ramirez et al. proposed a thermography-image-segmentation-based approach for fault diagnosis of rotating machines. This methodology can identify defects in bearings, broken rotor bars, misalignment, mechanical unbalance, and also voltage unbalance at the incipient stage of a fault in an induction motor. Nunez et al. developed a low-cost TIT tool to identify bearing failures in induction motors under different operating environments. The suggested approach is based on a thermal differential method for early detection of failure under changing environmental conditions. It was inferred that an absolute thermogram is not enough to determine whether a bearing is defective; it is necessary to consider the ambient temperature, and this differential value is then enough to detect the failure.
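The Mahalanobis distance mentioned above measures how far a feature vector lies from a reference (e.g., healthy-condition) distribution while accounting for feature correlations. A minimal numpy sketch follows, with hypothetical feature values; it is not the cited authors' implementation, and the small ridge term is an assumed numerical safeguard.

```python
import numpy as np

def mahalanobis(x, samples):
    """Mahalanobis distance from vector `x` to the distribution of `samples`.

    `samples` has shape (n, d). Uses the sample mean and covariance; a
    tiny ridge keeps the covariance invertible when samples are few.
    """
    S = np.asarray(samples, dtype=np.float64)
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + 1e-9 * np.eye(S.shape[1])
    diff = np.asarray(x, dtype=np.float64) - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Hypothetical healthy-condition feature vectors from thermal images.
healthy = np.array([[1.0, 2.0], [1.2, 2.1], [0.9, 1.9], [1.1, 2.0]])
print(mahalanobis([1.05, 2.0], healthy))   # small: close to the class
print(mahalanobis([5.0, 9.0], healthy))    # large: likely a fault
```

Thresholding this distance gives a simple way to flag feature vectors that deviate from the healthy distribution before classification.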
Visual cryptography is used to encrypt information such as handwritten text, images, etc. The original information to be encrypted is called the secret. No mathematical computations are required to decrypt the secret. The generated ciphers are referred to as shares: a share is a part of the secret in scrambled form. The fundamental idea behind visual cryptography is to distribute the secret among all participants. To share the secret, it is divided into a number of shares, which are distributed among the participants. To retrieve the original secret, each participant provides his or her own share. Without complete knowledge of all shares, the secret cannot be decrypted.
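A minimal sketch of the idea for a (2, 2) scheme on a binary secret: one share is uniformly random, the other is the secret combined with it, so either share alone reveals nothing. Note this uses XOR reconstruction, a common digital variant; classical visual cryptography encodes each pixel as subpixel patterns that are decoded by physically stacking transparencies.

```python
import secrets

def make_shares(secret_bits):
    """Split a binary secret into two shares; either share alone is random."""
    share1 = [secrets.randbelow(2) for _ in secret_bits]
    share2 = [s ^ r for s, r in zip(secret_bits, share1)]
    return share1, share2

def reconstruct(share1, share2):
    """Recover the secret by combining both shares bitwise."""
    return [a ^ b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]      # a tiny binary "image"
s1, s2 = make_shares(secret)
assert reconstruct(s1, s2) == secret   # both shares together recover it
```

Because share1 is uniformly random and share2 is the secret masked by it, each share individually carries no information about the secret; only the combination decodes it.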
Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer-vision techniques to the image retrieval problem, that is, mining images from large digital image databases. Concept-based approaches and content-based image retrieval are quite distinct from each other.
ABSTRACT: Nowadays, shape recognition has gained importance because of its wide range of applications. Researchers keep producing innovative ideas and implementations, yet it remains an open problem. In the proposed method, a 2D shape characterization technique is explained using a Content-Based Image Retrieval (CBIR) system. CBIR is an effective method for identifying similarly shaped images based on three factors. The first is morphological filtering, which uses operations such as erosion, dilation, opening, and closing to find similar images. In the second method, the contours of the images play a major role in identifying the match. The third method combines the above two to find similar images in a large database. A Support Vector Machine (SVM) classifier is used to measure the accuracy of the matched objects. Experimental results show a good accuracy rate.
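For reference (not the paper's implementation), the binary morphological operations named above can be written directly in numpy with a 3×3 square structuring element, with opening defined as erosion followed by dilation:

```python
import numpy as np

def dilate(img):
    """Binary dilation: a pixel is set if any neighbor in its 3x3 block is set."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + img.shape[0],
                     1 + dx : 1 + dx + img.shape[1]]
    return out

def erode(img):
    """Binary erosion: a pixel survives only if its whole 3x3 block is set."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + img.shape[0],
                     1 + dx : 1 + dx + img.shape[1]]
    return out

def opening(img):
    """Opening removes small specks while preserving larger shapes."""
    return dilate(erode(img))

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1       # a 3x3 square
img[0, 0] = 1           # a single-pixel speck
opened = opening(img)
print(opened[0, 0], opened[3, 3])   # speck removed, square center kept
```

Closing is the dual (dilation followed by erosion) and fills small holes instead of removing specks.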
The watershed algorithm usually suffers from over-segmentation, and researchers have proposed improvements for this problem; marker control is a commonly used one. One study proposed an image segmentation algorithm based on fast two-step marker control. Tongue segmentation based on color-model conversion has also gradually matured. Another study proposed first finding an initial object region by transforming and thresholding the morphological components of the image in the HSI color space and applying morphological operations, then clustering the RGB components of the initial object region to find the root of the tongue. Finally, the gap region between the tongue and the upper lip is used to remove false tongue regions such as the upper lip, and the tongue is extracted from the initial target region. The tongue segmentation algorithm is summarized in Table 1.
The last step, feature extraction, is usually applied to complete an automated strategy: common semantic features relate to shape and size (such as length, thickness, and angles) and vascularity (number of 3-D structures, branching points, tortuosity), but with today's high-throughput computing, a much larger number of quantitative image features can be extracted from a segmented ROI/VOI. Lesion heterogeneity and coarseness can be described using agnostic texture descriptors (Table 1.1): these features are mathematically extracted from the image and are generally not part of the radiologists' lexicon. Agnostic features can be divided into first-, second-, or higher-order statistical outputs. First-order statistics describe the distribution of values of individual pixels/voxels without concern for spatial relationships. These are generally histogram-based methods and capture a ROI in single values for the mean, standard deviation, variance, skewness (asymmetry), kurtosis (flatness), and the uniformity/randomness of the histogram of values (entropy). Second-order statistical descriptors, first introduced by Haralick et al. in 1973, describe the statistical interrelationships between voxels with similar (or dissimilar) contrast values. Higher-order statistical methods can extract repetitive or nonrepetitive patterns, such as the run-length patterns proposed by Galloway in 1975 or the circular Local Binary Pattern proposed by Ojala from 1996.
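To illustrate the second-order statistics above, a gray-level co-occurrence matrix (GLCM) for a single pixel offset can be built directly, and Haralick-style measures such as contrast derived from it. This is a minimal sketch (one offset, contrast only), not a full Haralick feature set.

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one (row, col) pixel offset.

    Counts how often gray level i occurs next to gray level j at the
    given offset, then normalizes to joint probabilities p(i, j).
    """
    img = np.asarray(image)
    dy, dx = offset
    a = img[max(0, -dy):img.shape[0] - max(0, dy),
            max(0, -dx):img.shape[1] - max(0, dx)]
    b = img[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # accumulate pair counts
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

flat = np.zeros((4, 4), dtype=int)      # homogeneous region: contrast 0
stripes = np.tile([0, 1], (4, 2))       # alternating columns: high contrast
print(contrast(glcm(flat, 2)), contrast(glcm(stripes, 2)))
```

The homogeneous region yields zero contrast while the striped one does not, which is exactly the kind of spatial texture information first-order histogram statistics cannot capture.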
Abstract— In today's world, technology is advancing day by day, and one of the most active research areas in digital image processing is image retrieval. Images can be retrieved on the basis of content, where the content may be text, a sketch, color, or shape describing the image. Here we present image retrieval methods that use a sketch as the content, so the system is referred to as a Sketch-Based Image Retrieval (SBIR) system. This paper implements the EHD, HOG, and integrated EHD+HOG algorithms and compares the three algorithms based on their measured accuracy. SBIR is advantageous over purely text-based image search. Retrieval using sketches can be essential and effective in daily life, for example in medical diagnosis, digital libraries, search engines, crime prevention, geographical information, photo-sharing sites, and remote sensing systems.
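The HOG descriptor compared above is built from per-cell histograms of gradient orientations. A minimal single-cell sketch follows (real HOG adds a grid of cells, block normalization, and overlapping detection windows, all omitted here); the bin count and unsigned-orientation choice are common defaults, assumed for illustration.

```python
import numpy as np

def cell_orientation_histogram(cell, bins=9):
    """Histogram of gradient orientations for one image cell (HOG building block).

    Gradients come from finite differences; orientations are unsigned
    (0-180 degrees) and each pixel votes with its gradient magnitude.
    """
    c = np.asarray(cell, dtype=np.float64)
    gy, gx = np.gradient(c)                          # per-pixel gradients
    mag = np.hypot(gx, gy)                           # vote weights
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)               # normalized votes

# A vertical edge: all gradients point horizontally (orientation near 0).
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
h = cell_orientation_histogram(cell)
print(h.argmax())  # the bin containing 0 degrees dominates
```

Concatenating such histograms over a grid of cells, with block normalization, yields the full HOG feature vector that the sketch and database images are compared on.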