Abstract: Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, and some are supported very well by the communities in the corresponding fields. Commercial software packages have the major drawbacks of being expensive and having undisclosed source code, which hampers extending their functionality if no plugin interface or similar option is available. However, neither variant can cover all possible use cases, and custom developments are sometimes unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and therefore runs out of the box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved through operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims at complementing functionality rather than competing with existing open source software. Machine learning can also be integrated into more complex algorithms via the WEKA software package, enabling the development of transparent and robust methods for image and signal analysis.
used to compress that data. MDL therefore aims to find, within a set of models, the model structure that gives the lowest total codelength for both the data and the model. Over the years, several methodologies have been developed following the ideology expressed by the MDL principle. Two-part coding is the earliest implementation; it provides the simplest and most intuitive embodiment of the MDL principle and is the only implementable approach in some specific applications. The second main MDL approach is the normalized maximum likelihood (NML) model [21, 23], which departs from separate two-part coding by using a single normalized distribution for coding; this is a very elegant approach, but rather complex to implement. A more recent MDL method is based on sequentially normalized maximum likelihood (SNML) models [24, 25], which are designed especially for time series data and were introduced to overcome some of the problems encountered with NML models, in particular the implementation complexity. In the field of image segmentation, MDL was first introduced by Leclerc. Kanungo proposed a two-part coding- and region-merging-based image segmentation algorithm for multilayer images such as color images. A similar approach was taken by Luo, who developed it further by adding smoothing to obtain segmentations at multiple scales, leaving the selection of the correct scale as a task for the user of the algorithm.
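The two-part idea can be made concrete: the total codelength is the bits needed to describe the model parameters plus the bits needed to describe the data given those parameters, and the model with the lowest total wins. A minimal sketch (the function names and the textbook per-parameter cost of 0.5·log2(m) bits are illustrative choices, not taken from the cited works):

```python
import math

def data_bits(seq):
    """Ideal codelength (bits) of a 0/1 sequence under its ML Bernoulli fit."""
    n, k = len(seq), sum(seq)
    if k == 0 or k == n:
        return 0.0
    p = k / n
    return -(k * math.log2(p) + (n - k) * math.log2(1 - p))

def two_part_bits(seq, n_segments):
    """Two-part MDL codelength: parameter bits + data bits per segment.
    Each segment's Bernoulli parameter costs ~0.5*log2(m) bits (m = length)."""
    n = len(seq)
    size = n // n_segments
    total = 0.0
    for i in range(n_segments):
        chunk = seq[i * size:] if i == n_segments - 1 else seq[i * size:(i + 1) * size]
        total += 0.5 * math.log2(len(chunk)) + data_bits(chunk)
    return total

# A sequence whose statistics change halfway: the two-segment model
# pays for an extra parameter but saves far more on the data part.
seq = [0] * 40 + [1] * 40
print(two_part_bits(seq, 1))  # one global model: ~83.2 bits
print(two_part_bits(seq, 2))  # two segments: ~5.3 bits
```

The same trade-off drives MDL segmentation: merging two regions saves model bits but may cost data bits, and the merge is accepted only if the total decreases.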
EMD has been widely applied in signal analysis; one prominent example is its application in geophysics, especially to seismic signals. As Huang et al. note in [18], the Hilbert spectral representation of an earthquake can reveal the physical nature (both linear and nonlinear) of the phenomenon. They showed that Fourier-based methods underrepresent the low-frequency energy of highly non-stationary signals. This is a problem when analyzing earthquakes, as seismic waves are decidedly non-stationary and their low-frequency components carry more physical information. By applying the Hilbert-Huang transform, geophysicists can investigate seismic waves at different scales. Successes have been reported in applying EMD to reflected seismic waves [30] and to seismic wave propagation [33].
Here, we give some examples of using ANNs in image segmentation. Dawant et al. (1991) presented a backpropagation (BP) neural network approach to the automatic characterization of brain tissues from multi-modal MR images. The ability of a three-layer BP neural network to perform segmentation based on a set of MR images (T1-weighted, T2-weighted and proton density weighted) acquired from a patient was studied. The results were compared to those obtained using a maximum likelihood classifier. They showed there was no significant difference between the results of the two methods, though the BP neural network gave cleaner segmentation images. Using the same analysis strategy, Reddick et al. (1997) first trained a self-organizing map (SOM) on multi-modal MR brain images to efficiently extract and convert the 3-D inputs (from T1-, T2- and PD-weighted images) into a feature space, and then used a BP neural network to separate them into classes of white matter, gray matter, and cerebrospinal fluid (CSF). Their work demonstrated high intraclass correlation between the automated segmentation and classification of tissues and standard radiologist identification, as well as high intrasubject reproducibility.
While investigating the sources of noise is beyond the scope of this paper, the major sources of temporal random noise are known to include photon shot noise and readout noise. In general, photon arrival obeys a Poisson distribution. However, when the photon arrival rate is high, the Poisson distribution can be approximated by a Gaussian distribution. We independently verified this by experiment. The luminance value distribution of a noisy image patch is shown in figure 2. The image patch was taken from a Samsung 2M CMOS image sensor test module. As can be seen, it is reasonable to say that the histogram is close to a Gaussian distribution.
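The Poisson-to-Gaussian approximation is easy to check numerically: for a large mean λ, Poisson(λ) samples behave like Normal(λ, λ). A small sketch (the rate λ = 1000 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon counts at a high arrival rate: Poisson with a large mean.
lam = 1000.0
counts = rng.poisson(lam, size=100_000)

# For large lam, Poisson(lam) is close to Normal(lam, lam):
# sample mean and variance should both be near lam ...
print(counts.mean(), counts.var())

# ... and about 68% of samples should fall within one standard
# deviation, as expected for a Gaussian.
sigma = np.sqrt(lam)
frac = np.mean(np.abs(counts - lam) <= sigma)
print(frac)
```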
DELTA: Delta waves lie within the range of 0.5 to 4 Hz, with variable amplitude. Delta waves are primarily associated with deep sleep and, in the waking state, were thought to indicate physical defects in the brain. It is very easy to confuse artifact signals caused by the large muscles of the neck and jaw with genuine delta responses: the muscles are near the surface of the skin and produce large signals, whereas the signal of interest originates deep in the brain and is severely attenuated in passing through the skull.
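Isolating the 0.5–4 Hz delta band from a raw trace is typically done with a band-pass filter. A minimal sketch using SciPy (the sampling rate, filter order, and synthetic test signal are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256.0                     # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Synthetic trace: a 2 Hz "delta" component plus a 20 Hz "beta" component.
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# 4th-order Butterworth band-pass for the delta band (0.5-4 Hz),
# applied forwards and backwards for zero phase distortion.
sos = butter(4, [0.5, 4.0], btype="bandpass", fs=fs, output="sos")
delta = sosfiltfilt(sos, x)
# `delta` now retains the 2 Hz component; the 20 Hz one is strongly attenuated.
```

Note that filtering alone cannot distinguish genuine delta activity from muscle artifacts that happen to contain low-frequency energy; artifact rejection remains a separate step.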
Figure 9: The left column shows time series of the estimated state functions of the hemodynamic response to touch perception tasks. From top to bottom: BOLD signal y (the measured signal, red plus signs, and the filtering process, blue line), ḟ, f, v, and q. Note that our inversion scheme allows for random fluctuations in the parameters, which means we obtain a distribution rather than a point estimate for these quantities. These distributions are shown in the right column; from top to bottom: , τ_s, τ_f, τ_0, E_0. Each stimulus event, which was simulated
- In the case of extracting image features and then applying a machine learning classifier, the highest accuracy is reached with the KNN classifier, independently of the image descriptor used. The worst classifier is Naive Bayes: in most cases its accuracy is below 50%, so it does not fulfill requirement R03 and is discarded. The remaining classifiers satisfy this requirement and are therefore considered. The best image descriptors are, in decreasing order, GIST, Daisy, Histogram and Image to Graph. However, applying PCA to Daisy with 400 values yields the highest accuracy (83.04%), as opposed to GIST (81.91% without PCA and 81.83% with PCA). This means the GIST characteristics are good enough to perform an accurate classification, but they are so similar to one another that we cannot select the best ones. The conclusion drawn from this process is that the characteristics extracted with Daisy are so distinguishing that, if we select the best ones with PCA and then apply KNN, the computer is able to separate the families and assign a class to an unknown sample based on its neighbors’ features, which are analogous to each other.
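The descriptor → PCA → KNN pipeline described above can be sketched with scikit-learn. This is an illustrative stand-in, not the authors' exact setup: synthetic 400-dimensional vectors replace the real Daisy descriptors, and the component count and neighbor count are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for per-image descriptor vectors (e.g. 400-dim Daisy features).
X, y = make_classification(n_samples=600, n_features=400, n_informative=30,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA keeps the most informative directions, then KNN classifies
# an unknown sample from its neighbors' features.
clf = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```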
The Image Processing Toolbox is a powerful, flexible, and easy-to-use tool for image processing and analysis. Built on MATLAB's substantial computational power, it provides extensions for filter design, image reconstruction and analysis, as well as for manipulating image color, geometry, and structure, including 2-D transforms. Image processing technology underpins state-of-the-art methods in medical and industrial diagnostics, data analysis, and automation. Image analysis is indispensable in astronomy and geophysics, but also in ecology and other fields such as communications, the military, and consumer electronics. Thanks to its computational power, openness, and the structure of its application libraries, MATLAB together with the Image Processing Toolbox is an optimal tool in a field as multidisciplinary as digital image processing. [5]
In this paper, instead of using older methods for controlling traffic signals and estimating traffic density, a new method is proposed in which traffic is controlled by estimating the density of vehicles present at the junction and transmitting this information to the vehicles. The proposed method helps reduce the problems related to heavy traffic congestion at signal junctions, which is continuously increasing by the day. The present world is too busy to waste valuable time in the middle of crowded junctions. The proposed system is therefore introduced as a time-saving, density-based traffic signal control unit. This paper covers the methodology of implementing a traffic signal control system based on the density of vehicles on the road and of transferring valuable information to the vehicles. The main objective is to estimate the density of vehicles and to transmit information using digital image processing, embedded systems and wireless communication. The paper thus gives a method to reduce congestion on the road, save fuel and valuable time, and choose a less congested, safe and easy route.
Take, for example, CT and MRI images of the brain: each has high resolution but a different modality. CT provides better analysis of hard tissue, while MRI is more useful for soft tissue. A low-resolution positron emission tomography (PET) image contains functional information, while a single photon emission computed tomography (SPECT) image provides information about visceral metabolism and blood circulation. A fusion process is applied to these images to obtain a new image containing all the texture of the originals, as will be demonstrated and discussed in the subsequent sections.
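At its simplest, pixel-level fusion combines co-registered images with a pointwise rule such as maximum or average. A toy sketch (the arrays, function names, and values are illustrative, not the fusion scheme developed later in the paper):

```python
import numpy as np

def fuse_max(a, b):
    """Naive pixel-wise maximum fusion of two co-registered images.
    Keeps, at every pixel, the modality with the stronger response."""
    assert a.shape == b.shape
    return np.maximum(a, b)

def fuse_avg(a, b):
    """Pixel-wise average fusion: smoother, but can wash out detail."""
    return (a.astype(np.float64) + b.astype(np.float64)) / 2.0

# Toy "CT" and "MRI": each modality shows structure the other misses.
ct = np.zeros((4, 4)); ct[:, :2] = 1.0    # hard tissue on the left
mri = np.zeros((4, 4)); mri[2:, :] = 0.8  # soft tissue at the bottom
fused = fuse_max(ct, mri)
print(fused)
```

Practical fusion methods work in a transform domain (wavelets, pyramids) rather than directly on pixels, but the per-coefficient selection rule follows the same idea.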
This article deals with a novel method to estimate the noise introduced by optical imaging systems, such as CCD cameras. The power of the signal-dependent photon noise is decoupled from the power of the signal-independent electronic noise. The method relies on the multivariate regression of sample mean and variance. Statistically similar image pixels, not necessarily connected, generate scatter points that are clustered along a straight line, whose slope and intercept measure the signal-dependent and signal-independent components of the noise power, respectively. Experimental results on a simulated noisy image and on real data from a commercial CCD camera highlight the accuracy of the proposed method and its applicability to separating R-G-B components that have been corrected for the nonlinear effects of the camera response function, but not yet interpolated to the full size of the mosaiced R-G-B image.
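The core idea, variance as a linear function of mean, can be sketched in a few lines: simulate patches whose noise variance follows a·signal + b, then recover a and b by regressing sample variance on sample mean. The parameter values and patch sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated sensor model: variance = a*signal + b
# (a: signal-dependent photon term, b: signal-independent electronic term).
a_true, b_true = 0.5, 4.0
levels = np.linspace(10, 200, 30)          # distinct "flat patch" intensities
means, variances = [], []
for mu in levels:
    noise_sd = np.sqrt(a_true * mu + b_true)
    patch = mu + rng.normal(0.0, noise_sd, size=10_000)
    means.append(patch.mean())
    variances.append(patch.var())

# Linear regression of sample variance against sample mean:
# slope ~ signal-dependent noise power, intercept ~ signal-independent power.
slope, intercept = np.polyfit(means, variances, 1)
print(slope, intercept)   # close to 0.5 and 4.0
```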
This study estimates rainfall information more effectively from image signals through the information system of a weather radar. On this basis, we suggest a way to estimate quantitative precipitation using the overlapping observation areas of radars. We used the overlapping observation ranges of the ground hyetometer network and the radar network, which are dense in our country. We chose the southern coast, where precipitation entering from the sea is quite frequent, and used the Sungsan radar installed on Jeju Island and the Gudoksan radar installed in the southern coastal area. The precipitation data were taken from the 2010 rainy season. As a result, we found a reflectivity bias between the two radars located in different areas and developed a new quantitative precipitation estimation method using this bias. Over the whole rainfall field, radar rainfall estimated with this method was more accurate than the results of the conventional method.
Abstract- EEG is a brain signal processing technique that allows gaining an understanding of the complex inner mechanisms of the brain, and abnormal brain waves have been shown to be associated with particular brain disorders. The analysis of brain waves plays an important role in the diagnosis of different brain disorders. MATLAB provides an interactive graphical user interface (GUI) allowing users to flexibly and interactively process their high-density EEG datasets and other brain signal data with different techniques such as independent component analysis (ICA) and/or time/frequency analysis (TFA), as well as standard averaging methods. In this project we analyze the entropy and power of the brain signal by EEG signal processing; the work is implemented using MATLAB.
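The two quantities the project computes, band power and entropy, have simple standard estimators. A sketch in Python (the abstract's implementation is in MATLAB; the synthetic signal, sampling rate, band limits, and bin count here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 256.0
t = np.arange(0, 8, 1 / fs)

# Synthetic single-channel "EEG": a 10 Hz alpha rhythm plus broadband noise.
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

# Band power via Welch's PSD estimate, integrated over the alpha band (8-13 Hz).
f, psd = welch(x, fs=fs, nperseg=512)
band = (f >= 8) & (f <= 13)
alpha_power = np.sum(psd[band]) * (f[1] - f[0])   # ~0.5 for a unit sine

# Shannon entropy of the amplitude histogram (one simple signal-entropy measure).
hist, _ = np.histogram(x, bins=64)
p = hist / hist.sum()
p = p[p > 0]
entropy = -np.sum(p * np.log2(p))
print(alpha_power, entropy)
```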
“A lot of ongoing R&D projects require the analysis of images and signals by means of state-of-the-art algorithms and methods. This results in an ever increasing demand for highly skilled engineers. This new joint master programme provides our students with a higher degree of specialization, allowing them to take advantage of many career options and interesting job offers. Our alumni will be able to design and implement the next generation of image-guided intelligent systems.”
Abstract: With advanced network technology, the security of data transmission is a major problem in society. Using a secret-key cryptographic method together with watermarking provides security for data transmission. Cryptography is a tool that can be used to keep information confidential and to ensure its integrity and authenticity; it comprises encryption and decryption, where encryption is used to transmit data securely over an open network. Each type of data has its own features; therefore, different techniques should be used to protect confidential data from unauthorized access. The proposed technique is simple to implement, has a high encryption rate, and embeds the data into an image. The image is encrypted using a secret-key method and then watermarked into a video signal. The encrypted image is transmitted through the video signal, and its security is analyzed using several parameters. A comparison between different file formats of the video signal is also presented.
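The encrypt-then-embed pipeline can be illustrated with a toy scheme: XOR the image with a key stream derived from a shared seed, then hide the resulting bits in the least significant bits of a carrier frame. This is a generic illustration, not the paper's actual cipher or watermarking algorithm, and all names and sizes below are made up for the example:

```python
import numpy as np

def xor_encrypt(img, key_seed):
    """XOR an 8-bit image with a key stream derived from a secret seed.
    Applying it twice with the same seed restores the original.
    (Illustrative secret-key step only; not the paper's exact scheme.)"""
    rng = np.random.default_rng(key_seed)
    key = rng.integers(0, 256, size=img.shape, dtype=np.uint8)
    return img ^ key

def lsb_embed(frame, payload_bits):
    """Hide payload bits in the least significant bits of a carrier frame."""
    flat = frame.flatten()
    flat[: payload_bits.size] = (flat[: payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(frame.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy secret image
enc = xor_encrypt(img, key_seed=42)                 # encrypt with secret key
bits = np.unpackbits(enc.flatten())                 # encrypted image as a bit stream
frame = np.full((16, 16), 200, dtype=np.uint8)      # toy video frame (carrier)
stego = lsb_embed(frame, bits)                      # watermark into the frame

# Receiver side: read the LSBs back and decrypt with the shared key.
recovered_bits = stego.flatten()[: bits.size] & 1
dec = xor_encrypt(np.packbits(recovered_bits).reshape(4, 4), key_seed=42)
```

Because the embedding only touches the least significant bit, each carrier pixel changes by at most 1, which is what keeps the watermark imperceptible.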
y = ΦΨs = ΦX (1)

where Φ is an N×P-dimensional measurement matrix with N ≫ P, the image is X = Ψs, and s denotes a P-dimensional vector of DLWT coefficients. Assume that only p transform coefficients of X are significant and that the other P − p coefficients are negligibly small. In general, a real image X contains both feature and noise elements, so we can decompose it as X = X_f + X_n. Here, X_f denotes the original X with the P − p smallest coefficients set to zero, and X_n denotes X with the largest p coefficients set to zero. Therefore, the measurement value in Eq. (1) can be regarded as
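The measurement model and the feature/noise decomposition can be checked numerically. The sketch below follows the excerpt's convention (Φ is N×P with N ≫ P); the random orthonormal basis is an illustrative stand-in for the DLWT, and the symbol names X_f/X_n mirror the decomposition above:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, p = 64, 256, 5     # signal dimension P, measurements N (N >> P), sparsity p

# Orthonormal sparsifying basis Psi (random orthonormal stand-in for the DLWT).
Psi = np.linalg.qr(rng.standard_normal((P, P)))[0]

# A p-sparse coefficient vector s and the image X = Psi s.
s = np.zeros(P)
s[rng.choice(P, size=p, replace=False)] = rng.standard_normal(p)
X = Psi @ s

# Measurement matrix Phi (N x P) and measurement y = Phi Psi s = Phi X.
Phi = rng.standard_normal((N, P)) / np.sqrt(N)
y = Phi @ X

# Decompose X into a feature part X_f (largest p coefficients kept)
# and a residual X_n (only the P - p smallest coefficients kept).
idx = np.argsort(np.abs(s))[::-1]
s_f = np.zeros(P)
s_f[idx[:p]] = s[idx[:p]]
X_f = Psi @ s_f
X_n = X - X_f
# Then y = Phi X_f + Phi X_n; here s is exactly p-sparse, so X_n ~ 0.
```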
Typically, structures with fat have high signal intensity (ie, are white) on T1-weighted images and exhibit a relatively low signal intensity (ie, are dark) on T2-weighted images. Tissue with a relatively large fraction of water (inflammatory tissue, BME, synovial fluid) will mirror this and typically display a low signal intensity on T1-weighted images and a high signal intensity on T2-weighted images. There are, however, many exceptions to this rule of thumb. One of these is related to protons: if protons are fixed (cortical bone), or are fixed in molecules that are too big to resonate (cholesterol in xanthomas), the signal intensity will be extremely low (black) on all pulse sequences. Flow within vessels is a second parameter to take into consideration; depending on velocity, turbulence, direction of flow and details of the pulse sequence, it will be bright or dark independent of T1 and T2 weighting. A third parameter is the type of pulse sequence. Currently, fast spin echo (FSE), also called turbo spin echo, is the main type allowing high-resolution, time-efficient scanning. Because fat has a high signal intensity similar to fluid on fluid-sensitive FSE (figure 1), fat suppression (FS) techniques are normally used. Gradient echo (GE) sequences are faster, but have the disadvantages of image distortion due to sensitivity to field inhomogeneity and of limited image contrast. To express this sensitivity to field inhomogeneity and the ensuing impact on the images, fluid-sensitive GE sequences are said to express T2* contrast, rather than T2 contrast.