Purpose: To evaluate the accuracy of deformable image registration (DIR) between the planning kVCT (pCT) and the daily MVCT combined with the histogram matching (HM) algorithm, and to evaluate deformable dose accumulation using the suggested method for adaptive radiotherapy with Helical Tomotherapy (HT). Methods: For five prostate cancer patients (76 Gy/38 Fr) treated with HT in our institution, seven weekly MVCT series per patient (35 series in total) were investigated. First, to minimize the effect of differing HU values between the pCT and MVCT, the image-processing method adjusts HU values between the pCT and MVCT images using cumulative image histograms of HU values, generating an HM-MVCT. Then, DIR of the pCT to the HM-MVCT was performed, generating a deformed pCT. Finally, deformable dose accumulation was performed toward the pCT image. Results: The accuracy of DIR was significantly improved by the HM algorithm compared with the non-HM method for several structures (p < 0.05). The mean Dice similarity coefficient of the non-HM method was 0.75 ± 0.05, 0.83 ± 0.06, and 0.90 ± 0.04 for the CTV, rectum, and bladder, respectively, while that of the HM method was 0.81 ± 0.06, 0.81 ± 0.04, and 0.92 ± 0.06, respectively. For the deformable dose accumulation, some differences were observed between the two methods, particularly for small calculated regions such as rectum V60 and V70. Conclusion: Adopting the HM method can improve the accuracy of DIR. Furthermore, dose calculation on the deformed pCT obtained with the HM method can be an effective tool for adaptive radiotherapy.
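The HU-adjustment step described above, matching the cumulative histograms of the two modalities, can be sketched generically. This is standard cumulative-histogram matching; the function name and the quantile-based formulation are illustrative, not the authors' exact HM implementation:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap `source` intensities so their cumulative histogram matches that
    of `reference` (generic cumulative-histogram matching sketch)."""
    src = np.asarray(source, dtype=np.float64)
    ref = np.asarray(reference, dtype=np.float64).ravel()

    s_values, s_idx, s_counts = np.unique(
        src.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(ref, return_counts=True)

    # Empirical CDF (quantile) of each distinct intensity in both images.
    s_quantiles = np.cumsum(s_counts) / src.size
    r_quantiles = np.cumsum(r_counts) / ref.size

    # For each source quantile, look up the reference intensity at the
    # same quantile position.
    matched = np.interp(s_quantiles, r_quantiles, r_values)
    return matched[s_idx].reshape(src.shape)
```

Applied to an MVCT with a pCT as reference, this would shift the MVCT intensity distribution onto the pCT's HU scale before DIR is run.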
To verify the effectiveness of the algorithm, online gesture recognition based on inner-distance contour-point distribution features and histogram matching was implemented on the MATLAB R2012b platform, using the same gesture templates as in the offline test. Each type of gesture was tested 50 times online, and the specific results are shown in Table IV. The method remains robust to deformations caused by joints and part structures, with an average recognition rate of 85.5% over eight kinds of predefined gestures. Meanwhile, the processing time per image is about 0.35 s, which meets real-time requirements. Fig. 5 shows some frames of online recognition.
In this method, similar to the cited approach, we use a number of retinal images to create a template for the optic disc. However, instead of creating an image as the template, we construct three histograms, each corresponding to one color component. In the first step, to reduce the effect of noise, we apply an average filter of size 6 × 6 pixels to the retinal images. Then, we use a window with the typical size of the optic disc (80 × 80 pixels) to extract the optic disc from each retinal image. Next, we separate the color components (red, green, and blue) of each optic disc and obtain the histogram of each component. Finally, the mean histogram of each color component over all retinal image samples is calculated as the template. A histogram is a graph showing the number of pixels at each intensity value found in an image. As described above, we use the histograms of the three channels (red, green, and blue) as the template for optic disc localization. Then, to reduce the effect of pathological regions and exudates, which are high-brightness regions like the optic disc, we use the histogram of only those pixels whose intensity value is lower than 200. This suppresses the high-intensity regions common to the optic disc, pathological regions, and exudates, and increases the role of the vessels in optic disc localization.
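The template-building step above can be sketched as follows. This is a minimal illustration: the 80 × 80 patch size, 256 bins, and the below-200 intensity mask follow the text, while the function and variable names are my own:

```python
import numpy as np

def build_template_histograms(patches, threshold=200):
    """Mean per-channel histogram over a set of optic-disc patches."""
    templates = np.zeros((3, 256))
    for patch in patches:                      # patch: (80, 80, 3) uint8 array
        for c in range(3):                     # red, green, blue channels
            channel = patch[:, :, c]
            # Ignore very bright pixels (>= threshold) so exudates and
            # other high-intensity regions do not dominate the template.
            kept = channel[channel < threshold]
            hist, _ = np.histogram(kept, bins=256, range=(0, 256))
            templates[c] += hist
    return templates / max(len(patches), 1)    # mean histogram per channel
```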
So far, we have determined three histograms as the template for localizing the center of the optic disc. To localize it, in the first step an average filter of size 6 × 6 pixels is applied to the retinal image to reduce the effect of noise. Then, an 80 × 80 pixel window is moved across the retinal image. At each window position, we separate the channels (red, green, and blue) and obtain the histogram of each channel. We then calculate the correlation between the histogram of each channel in the moving window and the histogram of the corresponding channel in the template. For this purpose, a correlation or cross-correlation function can be used to measure the similarity of two histograms; however, the optic disc centers obtained using these measures alone are not accurate.
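The sliding-window search can be sketched as below. The window size follows the text; the step size and the scoring by summed per-channel Pearson correlation are assumptions, and, as the text notes, correlation alone does not yield fully accurate centers:

```python
import numpy as np

def histogram_correlation(h1, h2):
    """Pearson correlation between two histograms."""
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    denom = np.sqrt((h1 ** 2).sum() * (h2 ** 2).sum())
    return (h1 * h2).sum() / denom if denom > 0 else 0.0

def localize(image, templates, win=80, step=8):
    """Slide a win x win window over an RGB image and score each position
    against the (3, 256) template histograms; return the best center."""
    h, w, _ = image.shape
    best, best_center = -np.inf, (0, 0)
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            window = image[y:y + win, x:x + win]
            score = sum(
                histogram_correlation(
                    np.histogram(window[:, :, c], bins=256,
                                 range=(0, 256))[0].astype(float),
                    templates[c])
                for c in range(3))
            if score > best:
                best, best_center = score, (y + win // 2, x + win // 2)
    return best_center
```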
In this section, the performance of the proposed algorithms is discussed. All experiments were run in MATLAB 2014a on a machine with 4.0 GB RAM and an i3 processor, using multiple images taken from medical databases, various websites, and textbooks. We tested several sets of MR and CT images, with general images serving as key images. Figs. 4 and 6 show the original MR image; the two key images, lena.jpg (a true-color image) and graylena.jpg (a grayscale image); the encrypted MR image; and the decrypted MR images. The encrypted image cannot be decrypted if any one of the key matrices is unavailable. The decrypted image is almost identical to the original image that was encrypted using the HMB algorithm. Histograms of the original and decrypted MR images are shown in Fig. 5. From Figs. 4, 5, 6, and 7, we can conclude that the proposed HMB algorithm is a lossless cryptographic system.
In this article, a novel affine-invariant descriptor, the R-histogram, is proposed to describe the relative attitude relationship between shapes. The shapes are treated as longitudinal segments parallel to the line connecting the centroids of the two shapes, and the R-histogram is constructed from the length ratios of collinear longitudinal segments from the two shapes. In the shape-matching algorithm, the R-histograms of the original shape pairs are first computed in an offline preprocessing phase. Then, in the matching phase, to improve shape-matching accuracy, a voting strategy is applied to the candidate corresponding shape pairs discovered by R-histogram matching. The proposed algorithm has four advantages. First, the contours of the shapes do not need to be extracted; second, the new descriptor is robust to affine transformation and noise; third, it is simple, with low computational complexity; finally, it guarantees high shape-matching accuracy by voting over all candidate correspondences with minimal R-histogram matching error. The R-histogram of a shape pair is insensitive to the distance along the line connecting the centroids of the shapes, so shape pairs with the same attitudes but different distances generate indistinguishable R-histograms. One
This paper describes an efficient heart sound retrieval methodology based on principles of psychoacoustics. A novel histogram matching algorithm built on structural and perceptual features (Mel-frequency cepstral coefficients (MFCCs), pitch, loudness, timbre, etc.) is discussed. The algorithm extracts spectral features of heart sounds based on human perception and retrieves similar sounds from the heart sound database. Experimental results on example-based queries showed that the algorithm can achieve a search accuracy of about 97%.
In digital image processing, enhancement is used to improve the appearance of a picture, and contrast enhancement plays an important role in it. There are two types of approaches to contrast enhancement: context-sensitive and context-free. In the context-sensitive approach, contrast is defined in terms of the rate of change in intensity between neighboring pixels; this approach is prone to artifacts such as ringing and amplified noise. The context-free approach does not alter the local waveform on a pixel-by-pixel basis. Contrast improvement is commonly performed by histogram equalization. Other methods such as contrast stretching, BPBHE, DSIHE, and CLAHE are also used for brightness-preserving enhancement. Histogram equalization is one of the most prevalent, computational
Global image enhancement uses the histogram information of the entire input image for its transformation function. Though this global approach is suitable for overall enhancement, it fails to adapt to the local brightness features of the input image. If some gray levels in the image have very high frequencies, they dominate the gray levels with lower frequencies. In that situation, GHE remaps the gray levels in such a way that contrast stretching is concentrated in a few dominating gray levels with large histogram components, causing significant contrast loss for the smaller ones.
DQM results are displayed using different methods. Using the offline event display package BesVis, the reconstructed events can be shown in real time. The Online Histogram Present (OHP) is used for real-time histogram monitoring. Run information such as the center-of-mass energy, cross section, and Interaction Point (IP) position, which are calculated after the end of a run, can be checked from the web page. Real-time IP information is also sent to DIM so that the BEPC monitoring system can easily retrieve it.
To evaluate the efficacy of the proposed designs, the proposed and existing designs were implemented and simulated in MATLAB and in Verilog. The histograms generated by the different histogram generator architectures were compared; the simulation results show that the histogram produced by the proposed algorithm provides the same information and exhibits a structure identical to that of the conventional design. Further, the designs implemented in Verilog were synthesized, and the post-synthesis results show that the proposed architectures consume less area than the conventional histogram generator. The area in terms of LUTs, registers, and LUT-BUF pairs was evaluated and compared. Finally, the delay metric was also computed and compared for the different histogram architectures, where the proposed architectures show smaller delay.
the frequently used preprocessing task in several image enhancement techniques. Software implementation of the histogram is hardware-inefficient and provides poor performance; to improve performance, a hardware implementation is used. The existing algorithms/architectures are inefficient. Therefore, two new low-complexity histogram algorithms are proposed in this paper. The proposed algorithms, prop1 and prop2, reduce the number of counters by 50% and 75% and reduce the delay by 23.8% and 31.04%, respectively. The proposed algorithms can therefore be applied effectively to achieve high-performance designs.
Chakraborti et al. developed a vessel pattern extraction filter that self-adapts to variations in the retinal samples. A combination of a highly sensitive vessel extraction filter and a histogram orientation method is used for vessel structure extraction. The Hessian matrix is applied in an eigen-analysis over different intensity-based scales, covering variable intensity ranges. Scalable Gaussian filtering is arranged linearly over the preprocessed samples, with Hessian-based eigen-analysis for precise pattern outlining. The low sensitivity values (72% for the DRIVE database, 67% for STARE, and 53% for CHASE) indicate a high density of false-negative cases, which is a possible area of improvement toward a robust blood vessel extraction method.
5.4 Integrated Minimum Cost Sub-block Matching (IMCSBM) Integrated Minimum Cost Sub-block Matching (IMCSBM) is a technique that uses sub-block properties to measure the similarity between images. It is developed to avoid the drawbacks associated with Integrated Region Matching (IRM). IMCSBM allows a sub-block of one sample to be matched to a number of sub-blocks of another sample; that is, the matching of sub-blocks between any two images is a many-to-many relationship, applicable when the images have a fixed number of fixed-size sub-blocks. There is no need to compute a weight for each sub-block because all sub-blocks are of equal size; the significance of each sub-block is taken as one unit.
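A minimal sketch of many-to-many matching over equal-size, unit-significance sub-blocks follows. The feature choice (per-block mean and standard deviation) and the symmetric nearest-neighbor cost are illustrative stand-ins; the exact IMCSBM cost function may differ:

```python
import numpy as np

def subblock_features(image, grid=4):
    """Split a grayscale image into grid x grid equal sub-blocks and use
    each block's (mean, std) as a toy feature vector."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.append([block.mean(), block.std()])
    return np.array(feats)

def subblock_distance(f1, f2):
    """Many-to-many matching cost: each sub-block is matched to its nearest
    sub-block in the other image, in both directions, and the minimum
    costs are averaged. All sub-blocks carry equal (unit) significance."""
    d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=2)  # pairwise
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```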
In this section, we turn our attention to the matching of the region adjacency graph for the CSMPs. This is the most complex part of the representation and, therefore, the most difficult and expensive part to match. Many graph-matching methodologies have been reported in the literature, and it is not our intention here to investigate them. However, most of the reported methods are tailored to finding a detailed pattern of correspondences between pairs of graphs. From the standpoint of computational expense, these are not well suited to finding the graph in a large database that is most similar to the query. Recently, Huet and Hancock reported a framework for measuring the similarity of attributed relational graphs for object recognition from large structural libraries. The method uses a variant of the Hausdorff distance as a simple and efficiently computed measure of graph similarity.
Analyzing minority costumes from the perspective of computer vision has great practical significance for the protection and inheritance of minority culture. As a first exploration of minority costume image retrieval, this paper proposes a novel image feature representation method to describe the rich information in minority costume images. First, the color histogram and edge orientation histogram are calculated for each sub-block of the minority costume image. Then, the final feature vector for the image is formed by an effective fusion of the color histogram and the edge orientation histogram. Finally, an improved Canberra distance is introduced to measure the similarity between the query image and each retrieval image. We evaluated the performance of the proposed algorithm on a self-built minority costume image dataset, and the experimental results show that our method can effectively express the integrated features of minority costume images, including color, texture, shape, and spatial information. Compared with some
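The fusion-and-distance step can be sketched as below. The plain Canberra distance and the 50/50 fusion weighting are stand-ins; the paper's improved Canberra distance and actual fusion weights are not specified here:

```python
import numpy as np

def canberra(h1, h2, eps=1e-12):
    """Plain Canberra distance between two feature vectors."""
    num = np.abs(h1 - h2)
    den = np.abs(h1) + np.abs(h2)
    return float(np.sum(num / (den + eps)))

def fused_feature(color_hist, edge_hist, w=0.5):
    """Concatenate normalized color and edge-orientation histograms into a
    single feature vector with an illustrative 50/50 weighting."""
    c = color_hist / max(color_hist.sum(), 1e-12)
    e = edge_hist / max(edge_hist.sum(), 1e-12)
    return np.concatenate([w * c, (1 - w) * e])
```

Ranking a dataset then amounts to sorting images by `canberra(fused_feature(...), query_feature)` in ascending order.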
color feature extraction consists of the color space, the color quantization, and the kind of similarity measurement. Color features can be extracted using color moments, the color histogram, and the Color Coherence Vector (CCV). The color histogram is commonly based on the intensity of three channels: it represents the number of pixels whose colors fall in each of a fixed list of color ranges. Color moments are used to overcome the quantization effect of the color histogram; color similarity is then calculated by a weighted Euclidean distance. Color sets are used for fast search over large collections of images, based on the selection of colors from a quantized color space.
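The color moments mentioned above, and the weighted Euclidean comparison, can be sketched as follows (uniform weights and the cube-root form of the skewness are illustrative conventions, not a fixed standard):

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, standard deviation, skewness) per
    channel; unlike a histogram, these avoid hard quantization boundaries."""
    moments = []
    for c in range(image.shape[2]):
        ch = image[:, :, c].astype(np.float64).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # cube root keeps units comparable
        moments.extend([mean, std, skew])
    return np.array(moments)

def weighted_euclidean(m1, m2, weights=None):
    """Weighted Euclidean distance between two moment vectors."""
    w = np.ones_like(m1) if weights is None else np.asarray(weights)
    return float(np.sqrt(np.sum(w * (m1 - m2) ** 2)))
```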
Histogram equalization is a simple and effective method for image enhancement. Based on the original gray-level distribution of the image, its histogram is reshaped into one with an approximately uniform distribution in order to increase the contrast. In essence, histogram equalization decreases the number of occupied gray levels so that the contrast of the image can be enhanced.
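For an 8-bit grayscale image, the reshaping above is the standard normalized-CDF lookup table, which can be sketched as:

```python
import numpy as np

def equalize(image, levels=256):
    """Global histogram equalization for a grayscale uint8 image: remap
    gray levels through the normalized cumulative histogram so the output
    distribution is approximately uniform."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]       # CDF at first occupied level
    n = image.size
    lut = np.round((cdf - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[image]
```

A low-contrast input whose values occupy a narrow band is stretched across the full 0–255 range, which is exactly the contrast gain described above.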
SURF: SURF stands for Speeded Up Robust Features. It is a robust local feature detection algorithm, partially inspired by the SIFT (Scale-Invariant Feature Transform) descriptor. The basic version of SURF is about three times faster than SIFT, and its authors claim it is more robust to various image transformations than SIFT. SURF is based on sums of 2-D Haar wavelet responses and makes effective use of integral images. It computes an integer approximation of the determinant of the Hessian blob detector, which can be evaluated quickly with an integral image. For features, SURF uses the sum of the Haar wavelet responses around each interest point. Here, SURF is used for extracting the relevant features and descriptors from images. In SURF, a descriptor vector of length 64 is constructed using a histogram of gradient orientations in the local neighbourhood around each interest point. This scheme is preferred over previous schemes because of its concise descriptor length of 64 floating-point values, which is why the SURF algorithm is chosen over other algorithms for matching.
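The integral image trick that SURF relies on, constant-time box sums after a single linear pass, can be sketched as:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]. Padded with a zero
    row/column so box sums need no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in four lookups, independent of box size --
    the property SURF exploits to evaluate Haar-like box filters quickly."""
    return int(ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0])
```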
The currently common steganography method, based on the insensitivity of TCP timestamps to their least significant bit (LSB), can increase the timestamp; however, a slight delay in TCP packet transmission can make the modified timestamp consistent with the actual time. Moreover, because many factors can cause a timestamp to change, network monitoring and detection techniques cannot determine all the reasons for a change in the TCP timestamp, so timestamps make it easier to hide information. Very few methods have been reported for effectively detecting covert channels constructed in this way. Additionally, RTP/RTCP runs over the user datagram protocol (UDP) or TCP [13,14] to ensure real-time data transmission and protocol control, as shown in Fig. 1, and there are even fewer results on hiding and detecting information in RTP/RTCP timestamps. Therefore, this paper presents a detection method for RTP-timestamp LSB steganography based on difference histogram similarity matching, and a second detection method based on clustering using the area difference between two best-fit curves.
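The difference-histogram similarity idea can be sketched as follows. The bin count, value range, and cosine similarity are illustrative assumptions, not the paper's exact formulation; the premise is that LSB embedding perturbs the distribution of timestamp increments relative to a clean stream:

```python
import numpy as np

def diff_histogram(timestamps, bins=16, rng=(0, 1600)):
    """Normalized histogram of first differences of a timestamp sequence."""
    d = np.diff(np.asarray(timestamps, dtype=np.int64))
    h, _ = np.histogram(d, bins=bins, range=rng)
    return h / max(h.sum(), 1)

def histogram_similarity(h1, h2):
    """Cosine similarity between two difference histograms; a stream whose
    similarity to a clean reference falls below a threshold would be
    flagged as a possible covert channel."""
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(np.dot(h1, h2) / denom) if denom > 0 else 0.0
```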