This study evaluated image features for estimating the quality of green coffee beans in bulk. The features are extracted from images of green beans obtained with a camera installed inside a closed chamber equipped with varied light sources. Image data were acquired by spreading green coffee beans on a sample board, leveling the surface, and placing the board inside the chamber. The experiment used three sample sets: two sets of Robusta coffee (eight categories) and one set of Arabica coffee (seven categories). Each sample was captured thirty times under various illuminations, with randomization prior to each measurement. The image data of each set were randomly divided into training and testing sets. First- and second-order statistical features were then extracted and concatenated into a feature set. The feature set extracted from the training data was used to train a classifier, which was then applied to identify the testing data. The recognition accuracy of the classifier was used to determine an appropriate combination of features for a quality-estimation system for bulk coffee grains based on statistical features. The results indicate that illumination influences classification accuracy, with the optimum rate obtained in the range of 100-200 lux. The highest accuracy is obtained at 100 lux, where either second-order statistical features or a combination of selected first- and second-order features reaches an average recognition level of 80%. These features can therefore be recommended as meaningful features for estimating the quality of bulk coffee grains, as required in the secondary coffee-processing industry.
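As a concrete illustration (not the paper's exact feature set), first-order statistical features of the kind concatenated above can be computed from the intensity histogram alone; the particular feature names and bin settings below are common choices, not details taken from the study:

```python
import numpy as np

def first_order_features(gray):
    """First-order statistical features of a grayscale image patch.

    Computed from the intensity distribution alone, so they ignore
    spatial relationships between pixels (unlike second-order features).
    """
    g = np.asarray(gray, dtype=np.float64).ravel()
    mean = g.mean()
    var = g.var()
    std = np.sqrt(var)
    # Guard against a perfectly flat patch (zero standard deviation).
    skew = np.mean((g - mean) ** 3) / std ** 3 if std > 0 else 0.0
    kurt = np.mean((g - mean) ** 4) / std ** 4 if std > 0 else 0.0
    hist, _ = np.histogram(g, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log2(p))
    energy = np.sum(hist ** 2)
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy, "energy": energy}

# A uniform intensity ramp: every gray level 0..255 equally represented.
patch = np.tile(np.arange(256, dtype=np.uint8), (4, 1))
feats = first_order_features(patch)
```

For the uniform ramp, the mean is 127.5 and the entropy is the maximum 8 bits, which is a quick sanity check on the implementation.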
tion systems of nuclear plants. The results of this study showed that thermography could assist in detecting the abnormal operation of various components at an early stage of impending failure. Leemans et al. evaluated the possibility of using IRT to monitor online the element temperatures of an industrial blower, which included a 500 kW electric motor, a drive-motor bearing, and two bearings supporting the blower, in order to detect wear or other defects. In fault diagnosis, thermograms were used by Younus et al. In their study, thermograms of rotating-machinery conditions were decomposed by the two-dimensional discrete wavelet transform. For each level obtained from the decomposition, first-order statistical features were extracted, and the Mahalanobis distance and the Relief algorithm were applied to choose salient features. Subsequently, a support vector machine (SVM) and linear discriminant analysis were applied as classifiers at each level. Other studies on the use of IRT for fault diagnosis can be found in the references.
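The decomposition step described above can be sketched with a one-level 2-D Haar transform written directly in NumPy; the averaging normalization and the toy input are illustrative choices, not details from Younus et al.:

```python
import numpy as np

def haar2d_level1(img):
    """One level of a 2-D Haar wavelet decomposition.

    Returns the approximation (LL) and three detail subbands (LH, HL, HH);
    first-order statistics can then be extracted per subband, analogous
    to the thermogram pipeline described in the text. Assumes even
    dimensions.
    """
    a = np.asarray(img, dtype=np.float64)
    # Average / difference along columns, then along rows.
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)  # smooth toy "thermogram"
ll, lh, hl, hh = haar2d_level1(img)
# Per-subband first-order statistics, as in the described pipeline.
stats = [(b.mean(), b.std()) for b in (ll, lh, hl, hh)]
```

With this normalization the LL band preserves the image mean, and a perfectly smooth gradient image produces a zero HH band, which is easy to verify by hand.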
A median filter was used for preprocessing the lung CT images, with Otsu's thresholding for segmentation of the lungs, followed by extraction of geometrical features that were used to train feed-forward artificial neural networks. Md. Badrul Alam Miah applied median filtering to preprocess the lung CT images and segmented out the left and right lungs separately using edge maps; they obtained 33 different features and then applied them to a feed-forward neural network. S. A. Patil made use of a median filter to preprocess the X-ray images, utilized morphological operations and a region-growing technique for segmentation, and then applied the extracted geometrical features and first-order statistical texture features to an ANN for classifying lung cancer. Amjed S. AlFahoum designed an automated intelligent system for nodule detection and classification of lung cancer in CT images. Muhammed Anshad and S. S. Kumar give a comparative survey of the methods used for automated cancer-detection systems; comparisons are made based on the advantages, disadvantages, and accuracy of the methods. K. Punithavathy used PET/CT images and made use of morphological operations for lung segmentation; second-order statistical features were obtained using GLCMs and given as input to an FCM classifier, and this detection system achieved an overall accuracy of 85.69%. Nooshin Hadavi segmented lung CT images using a region-growing-based thresholding algorithm and used the size of the lung nodule as a feature; these features served as input to cellular learning automata for training. Mohsen Keshani used an active contour for lung segmentation and detected ROIs by stochastic 2D features; to eliminate the segmented bronchus and bronchioles, 3D anatomical features were further used to detect the nodules, followed by active-contour modeling.
Gawade Prathamesh Pratap segmented PET/CT images by p-tile thresholding and used M-type morphology to detect the cancer images with the help of MATLAB. Hence, in this paper, we propose a methodology using statistical feature extraction and artificial-neural-network classification for automated detection of lung cancer.
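The Otsu thresholding step used by several of the surveyed pipelines can be sketched in pure NumPy; the toy two-level image and the 256-bin histogram are illustrative assumptions, not the papers' CT data:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold: maximize the between-class variance of the
    intensity histogram. Pixels with value > t are taken as foreground."""
    hist, _ = np.histogram(np.asarray(gray).ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # undefined ends -> 0
    return int(np.argmax(sigma_b))

# Bimodal toy image: dark background at 20, bright region at 200.
img = np.full((64, 64), 20, dtype=np.uint8)
img[16:48, 16:48] = 200
t = otsu_threshold(img)
mask = img > t          # segmentation mask (bright region)
```

On this cleanly bimodal image any threshold between the two modes separates the classes, and the resulting mask recovers exactly the bright 32x32 block.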
Abstract: The remarkable performance achieved by machine learning for glioma classification has gained immense attention in the medical domain. Accurate knowledge of the glioma grade enables better treatment planning and diagnosis. In this research work, a hybrid approach is proposed that integrates glioma segmentation and binary classification of high-grade and low-grade glioma (HGG and LGG). The proposed framework consists of several steps: targeted tumor segmentation, feature extraction, feature selection, and classification using machine-learning techniques (support vector machine (SVM) and k-nearest neighbor (kNN)). An accurate segmentation of the targeted tumor region is obtained by applying a fuzzy clustering technique, and first-order and second-order statistical features are extracted to form the complete imaging feature set. The most prominent features are selected using the t-test and provided to the SVM and kNN classifiers. The proposed hybrid framework was applied to a population of 300 MR brain-tumor images, diagnosed as 200 HGG tumors and 100 LGG tumors. Classification accuracy and performance metrics for the binary SVM and kNN classifiers are assessed by 10-fold cross-validation. Accuracies of 94.9% and 91% are obtained for the SVM and kNN classifiers, respectively.
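The t-test feature-selection step can be illustrated with a per-feature Welch t-statistic; the synthetic data and the single informative feature below are assumptions made for the demo, not the paper's MR features:

```python
import numpy as np

def t_scores(X, y):
    """Absolute two-sample (Welch) t-statistic for each feature,
    given a binary label vector y in {0, 1}. Larger = more discriminative."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                 b.var(axis=0, ddof=1) / len(b))
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / se

rng = np.random.default_rng(0)
# 40 samples, 5 features; only feature 0 actually separates the classes.
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 5))
X[y == 1, 0] += 3.0          # class shift on the informative feature
scores = t_scores(X, y)
best = int(np.argmax(scores))  # the "most prominent" feature
```

Ranking features by this score and keeping the top few is the usual filter-style selection before handing the reduced set to SVM/kNN.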
This project gives a new technique for filtering narrow-tailed and medium narrow-tailed noise with a fuzzy filter. Two important features are presented: first, the filter estimates a "fuzzy derivative" in order to be less sensitive to local variations due to image structures such as edges; second, the membership functions are adapted according to the noise level to perform "fuzzy smoothing." For each processed pixel, the first stage computes a fuzzy derivative. In the second stage, a set of 16 fuzzy rules is fired to determine a correction term; these rules take the fuzzy derivative as input. Fuzzy sets are employed to represent the relevant properties; while the shapes of the membership functions are fixed, their parameters are adapted after each iteration. The adaptation scheme is extensive and can be combined with a statistical model for the noise. The results of this method can be compared with those obtained by other filters [1-2].
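A loose sketch of the fuzzy-smoothing idea is below: neighbor differences (crude "fuzzy derivatives") are weighted by their membership in a "small difference" set, so corrections across strong edges contribute nothing. The triangular membership, the parameter k, and the 4-neighborhood are simplifications of the paper's 16-rule scheme, not its actual filter:

```python
import numpy as np

def fuzzy_membership(d, k=20.0):
    """Degree (0..1) to which a difference d counts as 'small'.

    k plays the role of the noise-level adaptation: larger k tolerates
    larger local differences before treating them as structure.
    """
    return np.maximum(0.0, 1.0 - np.abs(d) / k)

def fuzzy_smooth(img, k=20.0):
    """One iteration of edge-preserving fuzzy smoothing (sketch only)."""
    a = np.asarray(img, dtype=np.float64)
    correction = np.zeros_like(a)
    weight = np.zeros_like(a)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        d = np.roll(np.roll(a, dy, 0), dx, 1) - a  # neighbor difference
        w = fuzzy_membership(d, k)                 # edge -> w ~ 0
        correction += w * d
        weight += w
    return a + correction / np.maximum(weight, 1e-12)

# An isolated noise spike is pulled back to its neighborhood...
flat = np.full((8, 8), 100.0)
flat[4, 4] = 110.0
smoothed_spike = fuzzy_smooth(flat)[4, 4]

# ...while a strong step edge passes through untouched.
step = np.zeros((8, 8))
step[:, 4:] = 200.0
smoothed_step = fuzzy_smooth(step)
```

Here the 10-gray-level spike is corrected while the 200-level edge receives zero membership and is preserved exactly, which is the behavior the fuzzy-derivative construction is designed to produce.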
The morphological operation starts with dilation and erosion on the distribution image to obtain connected regions. Morphological operations are described by the shape and size of the structuring element used. Considering both the anatomical structure of the abdomen and the CT image resolution, a square structuring element with a diameter of 6 pixels is chosen. The second stage of the morphological operation further purifies the outcome of the first stage. This includes retaining the largest object; recovering, via a hole-filling operation, pixels that belong to the liver yet are misclassified as non-liver; and deleting spurs and smoothing the contour along edges with erosion and dilation operations.
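The dilate/erode stage can be sketched with shift-and-OR morphology in NumPy (np.roll wraps at the borders, which is harmless for interior objects); the 3x3 element and the toy blobs are illustrative, not the paper's 6-pixel element on CT data:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element:
    OR of the mask shifted over every offset in the element."""
    m = np.asarray(mask, dtype=bool)
    out = np.zeros_like(m)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return out

def erode(mask, k=3):
    """Erosion via duality: erode(m) = complement of dilate(complement)."""
    return ~dilate(~np.asarray(mask, dtype=bool), k)

def close_(mask, k=3):
    """Closing (dilate then erode) bridges gaps smaller than the element."""
    return erode(dilate(mask, k), k)

m = np.zeros((9, 9), dtype=bool)
m[3:6, 2:4] = True
m[3:6, 5:7] = True           # two blobs separated by a 1-pixel gap
closed = close_(m, 3)         # gap bridged into one connected region
```

Closing with an element larger than the gap joins the two blobs into one connected region while leaving the rest of the image unchanged, which is the "connected regions" effect the first stage relies on.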
3.3.3 Video Key Frame Querying Interface. For image query systems, the interaction between the user and the system is crucial, since the entry of information is flexible and the query may be refined by involving the user in the search procedure. Typically, retrieval-system environments have an area to specify the query and an area to display the results. The Video Oracle interface tries to be simple and intuitive. The interface for search and navigation is shown in Fig. 4. The query is specified by selecting an image file from the file directory (see Fig. 4(2)). There is a button for "search by statistics" (Fig. 4(4)), which performs the search based on statistical feature vectors, represented by a 9-dimensional vector as described in Subsection 3.3.7. The button "wavelet query" performs the query using the image signature vector. Then, two strategies may be chosen for the classification task, "by example" or "by sketch" [Jacobs et al. 1995]. The query image is shown in Fig. 4(1), and the result, containing the most similar target frames, is presented as a collection of thumbnails (Fig. 4(5)). The key frames are shown in order of similarity, from left to right across the first and second lines of results, down to the 10th most similar frame. Results from the 11th rank onward are shown on further "pages", each with 10 frames, presented in decreasing order of similarity.
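A sketch of "search by statistics" under one common assumption about the 9-D vector: per-channel mean, standard deviation, and skewness (color moments). The paper's actual definition is in its Subsection 3.3.7, so this is a stand-in, not the system's feature:

```python
import numpy as np

def color_moments(img):
    """9-D feature: mean, std, and (cube-root) skewness of each RGB channel.

    Assumed stand-in for the paper's 9-dimensional statistical vector.
    """
    x = np.asarray(img, dtype=np.float64).reshape(-1, 3)
    mean = x.mean(axis=0)
    std = x.std(axis=0)
    centered = x - mean
    skew = np.cbrt(np.mean(centered ** 3, axis=0))  # same units as intensity
    return np.concatenate([mean, std, skew])

def rank_by_similarity(query, frames):
    """Frame indices ordered by Euclidean distance in feature space."""
    q = color_moments(query)
    d = [np.linalg.norm(color_moments(f) - q) for f in frames]
    return np.argsort(d)

rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, size=(8, 8, 3)) for _ in range(5)]
query = frames[2].copy()
order = rank_by_similarity(query, frames)  # the query's own frame ranks first
```

The thumbnails in Fig. 4(5) would then simply be the frames in `order`, ten per results "page".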
all vectors and matrix elements. The global histogram and dual histogram are first-order statistics, whereas variance, blockiness, and co-occurrence are second-order statistics. Among all of these, the dual histogram performs best. The histogram and co-occurrence are the marginal statistics. Cropping and recompressing the stego image produces a calibrated image whose macroscopic features are similar to those of the original cover image. Detection becomes sensitive to a wider range of steganography with the incorporation of this calibration process. The search for hidden data is in the non-zero coefficients, so the medium- and higher-frequency DCT coefficients are statistically unimportant; therefore, the statistical properties of the DCT coefficients of the calibrated image are approximately the same as those of its cover image. For steganalysis of JPEG images, features are derived directly in the embedding domain from the DCT coefficients. The JPEG hiding techniques F5, MB, and OutGuess are detected and classified. This approach outperforms (Lyu, S., 2003) in all cases, and MB is more detectable. The detection rate for OutGuess is increased by 2.6% for 0.05 bpnz and 4.4% for 0.2 bpnz. (Wahab, A.W., 2009) proposed feature extraction using conditional probability matrices in three directions (vertical, horizontal, and diagonal) from each block of the image's DCT coefficients, with dimension 54, on 5235 images. The classification is done using an SVM. The JPEG hiding technique F5 is detected and classified; this approach outperforms (Shi, Y.Q., Chen, C., 2006), increasing detection accuracy by 2.6% for 618 bytes of hidden data in a 640x480 image.
In the present study, two sets of features have been analyzed: Gabor-wavelet features and statistical features. The first group is the Gabor-wavelet method, which can yield optimized multi-resolution information in both the time and frequency domains. The second group is statistical, based on the grey-scale histogram distribution among pixels; it includes the First Order Statistics (FOS), Gray Level Co-occurrence Matrix (GLCM), Grey Level Run Length Matrix (GLRLM), and Statistical Features Matrix (SFM) methods. These features are based on the relationships among image pixels. Although Gabor-wavelet features and statistical features such as FOS and SFM have been widely employed, separately or in combination with each other, in many different studies, their individual benefits and applicability have not been compared. This motivated the present research to investigate the efficacy and capability of these features in lesion classification.
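A minimal GLCM implementation with a few of the standard Haralick-style second-order features; the displacement, gray-level count, and toy image are illustrative choices, not the study's settings:

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Gray Level Co-occurrence Matrix for one displacement (dx, dy),
    symmetrized and normalized to a joint probability table."""
    g = np.asarray(gray)
    h, w = g.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[g[y, x], g[y + dy, x + dx]] += 1
    m = m + m.T                 # count each pair in both directions
    return m / m.sum()

def glcm_features(p):
    """A few classic second-order texture features from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {"contrast": np.sum(p * (i - j) ** 2),
            "energy": np.sum(p ** 2),
            "homogeneity": np.sum(p / (1.0 + np.abs(i - j)))}

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = glcm_features(glcm(img, dx=1, dy=0, levels=4))
```

Unlike the FOS features, these depend on the joint distribution of gray levels at a fixed spatial offset, which is exactly what distinguishes second-order from first-order statistics in the text above.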
Background: Color image segmentation has so far been applied in many areas; hence, many different techniques have recently been developed and proposed. In medical imaging, segmentation may help the doctor follow up a patient's disease from processed breast-cancer images. The main objective of this work is to rebuild and enhance each cell in the three component images provided by an input image. Indeed, from an initial segmentation obtained using statistical features and histogram-threshold techniques, the resulting segmentation can accurately represent incomplete and touching cells and enhance them. This provides real help to doctors, since the cells become clear and easy to count. Methods: A novel method for color-edge extraction based on statistical features and automatic thresholding is presented. The traditional edge detector, based on the first- and second-order neighborhood describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Hence, color edges in an image are obtained by combining the statistical-feature and automatic-threshold techniques. Finally, on the obtained color edges with specific primitive colors, a combination rule is used to integrate the edge results over the three color components.
pointed out that the features extracted using the Daubechies-4 wavelet were too large and might not be suitable for classification. That work used the level-3 Haar wavelet for feature extraction and further reduced the features using Principal Component Analysis (PCA) before classification. Although PCA reduces the dimension of the feature vector, it has the following disadvantages: 1) interpretation of results obtained from the transformed feature vector becomes a non-trivial task, which limits their usability; 2) the scatter matrix maximized by the PCA transformation maximizes not only the between-class scatter that is useful for classification, but also the within-class scatter, which is undesirable for classification; 3) the PCA transformation requires substantial computation time for high-dimensional datasets.
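Disadvantage 2 can be demonstrated directly: when within-class variance dominates along a non-discriminative axis, the first principal component follows that axis rather than the one separating the classes. The synthetic two-class data below are an assumption made for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two classes separated along axis 0, but with large shared
# (within-class) variance along axis 1.
a = rng.normal([1.0, 0.0], [0.2, 5.0], size=(200, 2))
b = rng.normal([-1.0, 0.0], [0.2, 5.0], size=(200, 2))
X = np.vstack([a, b])

# PCA: leading eigenvector of the total covariance matrix,
# which mixes between-class and within-class scatter.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)
pc1 = vecs[:, np.argmax(vals)]          # first principal component

# PC1 aligns with the noisy axis 1, not the discriminative axis 0.
alignment_with_noise_axis = abs(pc1[1])
```

A supervised criterion such as Fisher's LDA, which maximizes between-class over within-class scatter, would instead pick axis 0 here; that contrast is exactly the point the passage makes against PCA for classification.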
We investigate the scintillation level and the power spectrum for two frequencies of the incident electromagnetic wave: 3 MHz (k₀ = 6.28 × 10⁻² m⁻¹) and 40 MHz (k₀ = 0.84 m⁻¹). An RH-560 rocket flight was conducted from the Sriharikota rocket range (SHAR) to study electron-density and electric-field irregularities during spread F. Large-scale structures, with vertical scale sizes up to a few tens of km, are seen at altitudes from 150 km to 257 km. Small-scale structures, with scale sizes of the order of a few hundred meters, are also seen superimposed on the large-scale structure throughout the region. A patch of very intense irregularities can be seen in the 210-257 km region, containing irregularities with scale sizes ranging from a few hundred meters to a few tens of kilometers. Spaced-receiver measurements made at Kingston, Jamaica, show that the irregularities causing the scintillation of signals from Earth satellites are closely aligned along the magnetic field lines in the F-region, and the dip angle of the irregularities in the ionosphere is within 16.0°.
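The quoted wavenumbers are consistent with the free-space relation (a quick consistency check, taking c = 3 × 10⁸ m/s):

```latex
k_0 = \frac{2\pi f}{c}, \qquad
k_0(3\,\mathrm{MHz}) = \frac{2\pi \times 3\times 10^{6}}{3\times 10^{8}}
  \approx 6.28\times 10^{-2}\,\mathrm{m}^{-1}, \qquad
k_0(40\,\mathrm{MHz}) = \frac{2\pi \times 4\times 10^{7}}{3\times 10^{8}}
  \approx 0.84\,\mathrm{m}^{-1}.
```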
Finding exact solutions of nonlinear initial value problems (IVPs) is a goal for mathematicians, engineers, and scientists, and it plays an important role in real-world applications. In recent years, first- and second-order nonlinear IVPs have been considered by many authors. For instance, [1-2] used the Adomian decomposition method (ADM) to solve nonlinear differential equations such as the Duffing-Van der Pol equation, [3-5] solved nonlinear IVPs by the Laplace Adomian decomposition method (LADM), [6-7] obtained approximate solutions by the differential transform method (DTM), and the variational iteration method (VIM) was used by many authors [8-9]. Although ADM, LADM, and DTM are effective and well-known methods for solving nonlinear equations, they have limitations. For example, ADM, LADM, and DTM require infinite series to obtain solutions, and it is sometimes difficult to recover a closed-form solution from an infinite series. Moreover, some analytical methods with inverse transformations are needed to complete those schemes.
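As a minimal illustration of the ADM series idea for a first-order nonlinear IVP (this worked example is ours, not from the cited works), consider u' = -u², u(0) = 1, whose exact solution is u = 1/(1+t). The nonlinearity is expanded in Adomian polynomials:

```latex
u=\sum_{n=0}^{\infty}u_n,\qquad u^{2}=\sum_{n=0}^{\infty}A_n,\qquad
A_0=u_0^{2},\quad A_1=2u_0u_1,\quad A_2=2u_0u_2+u_1^{2},\ \dots
```

With u₀ = u(0) = 1 and the recursion uₙ₊₁ = -∫₀ᵗ Aₙ ds, one obtains u₁ = -t, u₂ = t², u₃ = -t³, so the partial sums reproduce the geometric series of 1/(1+t). This also illustrates the limitation noted above: the method delivers an infinite series, and the closed form 1/(1+t) must still be recognized from it.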
In this paper, we consider two interpolations of Birkhoff type with integer-order derivatives. Birkhoff interpolation is related to collocation methods for the corresponding initial or boundary value problems of differential equations. The solvability of the interpolation problems is proved. For Gauss-type interpolation points, the error of the interpolation approximation is deduced. We also give efficient algorithms to implement the concerned interpolations.
summation is less extensive for CM than for LM, (b) the overall transducer exponent (p) for the local CM mechanism (before area summation) is much greater than for LM (i.e., p > 2; cf. figure B3 in Meese & Summers, 2012), or (c) CM sensitivity is attenuated more heavily with eccentricity than is LM sensitivity. We shall present arguments against the first two possibilities later. The third possibility is more promising. For example, Hess, Baker, May, and Wang (2008) demonstrated that for low modulation frequencies (about 1 c/deg and below), the decline in CM sensitivity with eccentricity follows that predicted by the spatial frequency of the carrier. As the deleterious eccentricity effects for LM signals are scale invariant (e.g., Baldwin et al., 2012), and as the center spatial frequency of our band-limited noise carrier was 8 c/deg, 6.4 times higher than our 1.25 c/deg signals, we might expect signal sensitivity to decline with eccentricity much more rapidly for CM than for LM, consistent with our results (Figure 3). Notwithstanding our remark above regarding the choice of noise carrier, the considerations here reveal a profound difficulty in deriving a true measure of spatial summation of second-order signals using conventional methods.
Psychophysical procedure. We implemented a modified version of a standard temporal-bisection task designed to derive objective measures of first- and second-order temporal judgements in our participants. Each trial involved a (first-order) time-estimation component, in which participants were required to bisect two intervals of the same duration, and a (second-order) metacognitive-appraisal component, in which participants were required to identify which of the two preceding bisection estimates was closest to the predefined target duration (i.e., the veridical bisection point). This hybrid paradigm can, we argue, be thought of as the combination of a temporal-bisection and interval-reproduction task forming the first-order judgement, followed by a second-order forced-choice confidence judgement. Since we asked the participants to bisect the interval (through their button-press response), we describe the data, and the over- or under-estimation of the veridical mid-point of each interval, in terms of bisection rather than production. So, for example, if a subject identifies the mid-point of the interval as being prior to the actual mid-point, we consider this an underestimation of time, in the sense that the whole interval (bisection point × 2) would be experienced as shorter than it actually is, though this could also be considered an overestimation of the passage of time in a (re)production scenario. A schematic representation of the generic trial structure is depicted in Fig 1.
model-based texture analysis, which can be classified into mono-fractal and multi-fractal approaches. Surface complexity is fundamental to several properties and physical phenomena of a pattern. The diversified complexity of a fractal may be described with the concept of fractal dimension, which can readily describe the incompleteness or fragmentation of an entirety. Moreover, recent studies show that the surface complexity of an image may be described not only by its fractal dimension but also by its multifractal spectrum, whose mathematical description of a surface can accurately reflect its features and is compatible with various theoretical models of surface structure. Thus we can apply fractal analysis to determine the surface complexity measured on the gray scale, and by using the multifractal spectrum one can obtain more detailed information than is possible with the fractal dimension alone. Recently this kind of technique has been widely used in retinal-vessel analysis.
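A box-counting estimate of fractal dimension, the simplest of the measures mentioned above, can be sketched in a few lines; the box sizes and test masks are illustrative:

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary mask.

    Counts occupied boxes N(s) at several box sizes s and fits
    log N(s) ~ -D log s by least squares.
    """
    m = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = m.shape[0] // s, m.shape[1] // s
        # Partition into s x s boxes and mark each box that contains
        # at least one "on" pixel.
        boxes = m[:h * s, :w * s].reshape(h, s, w, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square has dimension ~2; a line has ~1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
d_square = box_count_dimension(square)
d_line = box_count_dimension(line)
```

The multifractal spectrum mentioned in the text generalizes this by weighting boxes by their occupancy rather than just counting them, yielding a curve of dimensions instead of a single number.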
of the point at infinity. Thus, the curve H is the Julia set for () in this case. This result shows that the discrete version of the 16th Hilbert problem does not hold, where the discrete problem asks whether there exists a quadratic system of difference equations in the plane with an infinite number of periodic solutions. It is well known that in the case of quadratic systems of differential equations the number of periodic solutions is finite; see [, ]. In this paper we give the explicit formula for the Julia set for a whole class of difference equations with cubic terms. The Julia set consists of an infinite number of period-two solutions and thus provides a whole class of examples of second-order difference equations with cubic terms having an infinite number of period-two solutions; see Theorem .
One factor that will affect the second-order information available in the visual system is the effect of local compressive nonlinearities that occur early in processing (He & Macleod, 1998; MacLeod, Williams, & Makous, 1992). These nonlinearities can create first-order components at the orientation and frequency of second-order structure in the original image (Scott-Samuel & Georgeson, 1999; Smith & Ledgeway, 1997). In our analyses, we used the luminance channel of images represented as CIE LAB values. This representation includes a compressive nonlinear transformation of raw luminance values. These compressive nonlinearities could therefore create additional first-order components, at the frequency and orientation of existing second-order components, in the same way as might occur in the visual system. It is therefore important to consider the effects of these nonlinearities, not just in terms of the effects that a particular image format has on the analysis, but also to understand the effect that early nonlinearities in the visual system may have on the relationship between first- and second-order information in images. To determine the importance of this representation, the analysis was repeated using images in which the luminance was linearised.
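The distortion described here can be demonstrated in one dimension: a contrast-modulated carrier has no spectral energy at its modulation frequency until a point nonlinearity is applied. Squaring is used below as a stand-in for the visual system's (compressive) nonlinearity; the frequencies and modulation depth are arbitrary choices for the demo:

```python
import numpy as np

n = 256
t = np.arange(n)
carrier = np.sin(2 * np.pi * 32 * t / n)               # 32-cycle "texture"
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t / n)   # 4-cycle modulation
cm = envelope * carrier                                # second-order signal

def amp(x, k):
    """Amplitude of the k-cycle Fourier component of x."""
    return np.abs(np.fft.rfft(x))[k] / (n / 2)

# The linear signal has no energy at the 4-cycle modulation frequency...
a_lin = amp(cm, 4)
# ...but a point nonlinearity creates a first-order component there.
a_sq = amp(cm ** 2, 4)
```

Expanding cm analytically shows only sidebands at 32 ± 4 cycles before the nonlinearity, while squaring produces a genuine 4-cycle component of amplitude 0.5: exactly the "first-order component at the frequency of the second-order structure" the passage describes.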
The q-difference equations, initiated at the beginning of the twentieth century [–], form a very interesting field within difference equations. In the last few decades, the subject has evolved into a multidisciplinary one and plays an important role in several fields of physics, such as cosmic strings and black holes, conformal quantum mechanics, and nuclear and high-energy physics. However, the theory of boundary value problems (BVPs) for nonlinear q-difference equations is still in its initial stages, and many aspects of this theory need to be explored. To the best of our knowledge, only a few works have been done on BVPs for nonlinear q-difference equations; see [–] and the references therein. In particular, the study of BVPs for nonlinear q-difference equations with first-order q-difference is yet to be initiated. The main aim of this paper is to develop some existence and uniqueness results for BVP (.). Our results are based on a variety of fixed point theorems, such as the Banach