Virtual Lab for histogram-based binarization of gray level images: Heuristic threshold

The creation of a Virtual Lab, an integrated and interactive software package for histogram-based binarization of gray-scale images, is reported. In an attempt to optimize binarization, the virtual lab introduces a heuristic binarization threshold. At processing time, the module extracts the histogram of the input gray-scale image and computes a heuristic threshold as the weighted average of the foreground gray levels of the image. The pixels of the input image whose gray levels lie above this threshold are then highlighted. Although not yet experimentally optimal, this heuristic threshold provides a first approximation towards automatic optimum binarization of gray-scale images. For comparison purposes, when an input image is operated on by the binarization module, up to three different binarizations can be visualized on screen simultaneously, each for a different binarization threshold fixed by the user. Keywords: Virtual Lab, digital image processing, automatic binarization, gray scale histogram, heuristic threshold.
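
A minimal sketch of this kind of histogram-weighted threshold follows. The abstract does not define "foreground gray levels" precisely, so the sketch assumes they are the levels above the global mean intensity; that assumption, the function names, and the 0-255 range are illustrative only.

    import numpy as np

    def heuristic_threshold(gray):
        """Weighted average over the presumed foreground gray levels.

        `gray` is a 2-D uint8 array. Foreground levels are assumed to be
        those above the global mean intensity (an assumption; the virtual
        lab may define 'foreground' differently).
        """
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        levels = np.arange(256)
        fg = levels > gray.mean()                     # assumed foreground levels
        return np.average(levels[fg], weights=hist[fg].astype(float))

    def binarize(gray, threshold):
        """Highlight pixels whose gray level lies above the threshold."""
        return (gray > threshold).astype(np.uint8) * 255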

Enhancement of Old Images and Documents by Hybrid Binarization Techniques

In computer vision and image processing, Otsu's method, named after Nobuyuki Otsu, is used to automatically perform clustering-based image thresholding, that is, the reduction of a gray-level image to a binary image. The algorithm assumes that the image contains two classes of pixels following a bimodal histogram (foreground pixels and background pixels); it then calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal, or equivalently (because the intra-class and inter-class variances sum to the constant total variance) so that their inter-class variance is maximal. Consequently, Otsu's method is roughly a one-dimensional, discrete analog of Fisher's Discriminant Analysis.
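
A compact NumPy sketch of the exhaustive search described above, maximizing the inter-class variance over all candidate thresholds:

    import numpy as np

    def otsu_threshold(gray):
        """Return the threshold maximizing inter-class variance.

        `gray` is a 2-D uint8 array with a (presumed) roughly bimodal histogram.
        """
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        prob = hist / hist.sum()
        levels = np.arange(256)

        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (levels[:t] * prob[:t]).sum() / w0
            mu1 = (levels[t:] * prob[t:]).sum() / w1
            between = w0 * w1 * (mu0 - mu1) ** 2      # inter-class variance
            if between > best_var:
                best_var, best_t = between, t
        return best_t

OpenCV's cv2.threshold with the cv2.THRESH_OTSU flag performs the same search in optimized form.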

Combined Method of Level set with impact on Pre processing for binarization of document images in Tamil Script

Global methods treat each pixel independently, converting each to black or white based on a single threshold value. These are usually not suitable for degraded document images, because such images do not have a clear bimodal pattern separating foreground and background. In clean document images, however, where there is sufficient dissimilarity between foreground and background, global methods do work well. If a pixel's intensity is higher than the global threshold it is assigned one value; otherwise it is assigned the opposite value. In contrast, local methods make use of the intensity information of nearby pixels to determine an appropriate threshold for each particular pixel. Typically the two colors used for a binary image are black and white, though any two colors can be used. The simplest form of image binarization is to choose a threshold value and classify all pixels with values above this threshold as white and all other pixels as black. The problem lies in selecting the precise threshold, which remains an unsolved problem due to the different types of document degradation, image contrast variation, bleed-through, and smear. Other factors that make binarization difficult are the presence of strong noise, complex patterns, and/or variable modalities in the gray-scale histogram.
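
A short sketch contrasting the two families of methods. The global function is exactly the single-threshold rule described above; the local function uses a Niblack-style mean-plus-k-times-standard-deviation threshold, which is one common choice and not necessarily the local method used in this paper. The window size and k value are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def global_binarize(gray, t):
        """Global method: every pixel compared against one threshold."""
        return np.where(gray > t, 255, 0).astype(np.uint8)

    def local_binarize(gray, window=25, k=-0.2):
        """Local method (Niblack-style, illustrative): threshold at each
        pixel is mean + k * std over a neighbourhood window."""
        g = gray.astype(float)
        mean = uniform_filter(g, size=window)
        mean_sq = uniform_filter(g * g, size=window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0))
        return np.where(g > mean + k * std, 255, 0).astype(np.uint8)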

3D Reconstruction of Multi view Images Based on Modificated Histogram Equalization

The image-based 3D reconstruction process mainly comprises point cloud reconstruction and surface reconstruction. Sparse reconstruction of the point cloud involves the extraction and matching of feature points, camera calibration, etc. David Lowe proposed an operator that describes local features, the Scale-Invariant Feature Transform (SIFT), which essentially finds feature points in different scale spaces and computes their orientations [4]. Bay proposed the SURF algorithm on the basis of the SIFT algorithm. SURF maintains invariance under both scale and affine transformation, runs several times faster than SIFT, and retains SIFT's advantages of rich image information and strong matching ability [5].
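
A minimal sketch of the feature-extraction-and-matching front end of sparse reconstruction, using OpenCV's SIFT. The filenames and the 0.75 ratio are illustrative assumptions; cv2.SIFT_create is available in recent opencv-python builds.

    import cv2

    # Detect and match SIFT features between two views (hypothetical files).
    img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} putative correspondences for the calibration step")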

Binarization of medical images based on the recursive application of mean shift filtering: Another algorithm

It is possible to observe that, in this case, the proposed segmentation algorithm is a direct extension of the filtering algorithm, which finishes when the entropy reaches stability. Note the simplification of this algorithm compared with the one proposed by Rodriguez and colleagues (2005) (see the algorithm steps in the next section). Some comments on this algorithm follow. In (Christoudias et al 2002), it was stated that the recursive application of the mean shift property yields a simple mode detection procedure; the modes are the local maxima of the density. Therefore, with the new segmentation algorithm, convergence is guaranteed by recursively applying mean shift. Indeed, the proposed algorithm is a straightforward extension of the filtering process, and Comaniciu (2000) proved that the mean shift procedure converges. In other words, one can consider the new segmentation algorithm as a concatenated application of individual mean shift filtering operations; therefore, if the whole process is considered linear, the recursive algorithm converges. Binarization is carried out after the segmented image is obtained.
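
A hedged sketch of the idea described here, recursive mean shift filtering stopped when the image entropy stabilizes, using OpenCV's pyrMeanShiftFiltering as a stand-in for the authors' filter. The spatial/range bandwidths (sp, sr) and the tolerance eps are illustrative values, not the authors' parameters.

    import cv2
    import numpy as np

    def entropy(gray):
        """Shannon entropy of the gray-level histogram."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def recursive_mean_shift(bgr, sp=10, sr=20, eps=1e-3, max_iter=20):
        """Apply mean shift filtering repeatedly until the entropy of the
        result stops changing by more than eps (a sketch of the stopping
        rule described above)."""
        prev = entropy(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
        for _ in range(max_iter):
            bgr = cv2.pyrMeanShiftFiltering(bgr, sp, sr)
            cur = entropy(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
            if abs(cur - prev) < eps:
                break
            prev = cur
        return bgr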

Extraction of Tongue Crack Based on Gray Level and Texture

Tongue crack is a special kind of surface texture in the tongue image and the most changeable one. It can be viewed as a curve structure on the surface of the tongue, so methods of line detection can be used, which are roughly divided into three categories: contour-based, center line-based and area-based [3-5]. Yang obtained a line response image using the tongue image's gray-level and color information as well as the pixel distance gradient, and extracted tongue cracks from the line response image with a hysteresis threshold algorithm [6]. Liu proposed an improved line-width detector which avoids the effects of undesirable width and uneven illumination [7]. There are also extraction algorithms using region characteristics such as the color features of cracks. Chen used color features in L*a*b and regional division and merging to divide the tongue image into several regions, then extracted cracks by adaptive thresholding [8]. Yang used kernel false-color transformation to increase the image contrast, then calculated the gradient image of the G component, and finally used the lag threshold method to obtain crack images [9]. The results of recent research are difficult to apply directly in the subsequent analysis of tongue cracks. This study aims to improve the accuracy of crack extraction by combining the characteristics of the crack's color and texture.

Histogram Based Block Classification Scheme of Compound Images: A Hybrid Extension

The aim of JPEG 2000 is not only to improve compression performance over JPEG but also to add (or improve) features such as scalability and editability [9]. In fact, JPEG 2000's improvement in compression performance relative to the original JPEG standard is rather modest and should not ordinarily be the primary consideration when evaluating the design. Very low and very high compression rates are supported in JPEG 2000; indeed, the graceful ability of the design to handle a very large range of effective bit rates is one of its strengths. For example, to reduce the number of bits for a picture below a certain amount, the advisable thing to do with the first JPEG standard is to reduce the resolution of the input image before encoding it [11]. That is unnecessary with JPEG 2000, because it already does this automatically through its multi-resolution decomposition structure. Compared to the previous JPEG standard, JPEG 2000 delivers a typical compression gain on the order of 20%, depending on the image characteristics. Higher-resolution images tend to benefit more, since JPEG 2000's exploitation of spatial redundancy can contribute more to the compression process [12-14]. In very low-bitrate applications, studies have shown JPEG 2000 to be outperformed by the intra-frame coding mode of H.264. Implementing JPEG 2000 makes the compression of picture blocks easier and more effective.

Privacy Protection In Medical Images Using Histogram Shifting Based RDH

Abstract - In the medical field, patients' confidential medical records need to be transmitted securely. One approach is to embed these confidential data in medical images and extract them at the other end, but the embedding and extraction must cause no data loss or image distortion. This project therefore presents a method of embedding medical data in medical images using histogram-shifting-based reversible data hiding, with an embedding process that exploits the difference between adjacent pixels in the image. Since external data is embedded into the original image, noticeable changes can occur in the image, which might compromise the security of the data. To avoid this, after data embedding the output image should be visually indistinguishable from the original, and after extraction the doctor should be able to provide proper treatment using the images and the extracted data. Reversible data hiding is a recently developed branch of data hiding and watermarking research, and histogram shifting (HS) is a useful technique for reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently, and both the original image and the hidden data can be perfectly recovered. The method is reversible in the sense that the extraction process is exactly the inverse of the embedding process. Finally, performance can be evaluated by mean square error, peak signal-to-noise ratio, structural similarity index, etc. In the future, more meaningful shifting and embedding functions are expected to push forward the capacity-distortion behaviour of RDH.
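
A minimal sketch of the histogram-shifting embedding mechanism (Ni et al. style) on the plain intensity histogram. The paper above applies the same shift-and-embed idea to the histogram of adjacent-pixel differences; this simpler variant, its function name, and its assumptions (the empty bin lies above the peak bin and is truly empty, so no overflow bookkeeping is needed) are illustrative only.

    import numpy as np

    def hs_embed(gray, bits):
        """Embed a list of 0/1 bits into `gray` (2-D uint8) by histogram
        shifting. Returns the marked image plus the (peak, zero) pair that
        the receiver needs to extract the bits and restore the image."""
        img = gray.astype(np.int32)
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        peak = int(hist.argmax())                            # most populated level
        zero = int(np.argmin(hist[peak + 1:])) + peak + 1    # emptiest level above it
        # Shift levels strictly between peak and zero up by one,
        # freeing the bin peak + 1 for embedding.
        img[(img > peak) & (img < zero)] += 1
        # Pixels equal to the peak carry one bit each: 1 -> peak+1, 0 -> peak.
        flat = img.ravel()
        carriers = np.flatnonzero(flat == peak)[:len(bits)]  # capacity-limited
        flat[carriers] += np.asarray(bits[:len(carriers)], dtype=np.int32)
        return img.astype(np.uint8), peak, zero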

Contrast Enhancement of Gray and Color Images based on DWT and SVD

In this paper, an existing algorithm is modified to obtain a weighted sum of the singular values of the LL sub-band images of the input low-contrast image and its histogram-equalized version. The proposed method is implemented for gray images and for color images in the RGB and HSI color spaces, and is compared with GHE, CLAHE and the specified existing DWT-SVD based methods. Mean, standard deviation and entropy are used for objective analysis, and histogram plots and subjective results are also compared. The results show that the proposed method achieves better contrast enhancement than the specified existing DWT-SVD based methods in terms of standard deviation.
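
A sketch of the general DWT-SVD enhancement scheme that this paper modifies: scale the singular values of the LL sub-band of the input by a correction factor derived from the LL sub-band of its histogram-equalized version, then invert the DWT. The particular weighted sum proposed in the paper is not given in this abstract, so the single-factor scaling below is an assumption, and the wavelet choice is illustrative.

    import cv2
    import numpy as np
    import pywt

    def dwt_svd_enhance(gray, wavelet="haar"):
        """Baseline DWT-SVD contrast enhancement of a 2-D uint8 image."""
        eq = cv2.equalizeHist(gray)

        LL, details = pywt.dwt2(gray.astype(float), wavelet)
        LL_eq, _ = pywt.dwt2(eq.astype(float), wavelet)

        U, S, Vt = np.linalg.svd(LL, full_matrices=False)
        S_eq = np.linalg.svd(LL_eq, compute_uv=False)
        xi = S_eq.max() / S.max()                 # correction coefficient

        LL_new = U @ np.diag(xi * S) @ Vt         # rescaled LL sub-band
        out = pywt.idwt2((LL_new, details), wavelet)
        return np.clip(out, 0, 255).astype(np.uint8)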

Automatic histogram threshold using fuzzy measures

where B_O and F_O are, respectively, the background and foreground of the original (ground-truth) image, B_T and F_T are the background and foreground pixels of the resulting image, and |·| is the cardinality of a set. This parameter varies from 0% for a totally wrong output image to 100% for a perfectly binarized image. The performance measure for every algorithm is listed in Table III, together with the mean and standard deviation. The methods indicated by IM1 and IM2 represent the improved method without and with histogram equalization, respectively. Comparing the results, the improved method with histogram equalization provides, in general, satisfactory results, particularly for images with imprecise edges.
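
The equation this "where" clause refers to is not reproduced in the excerpt. One common region-overlap accuracy consistent with the 0%-100% range described (the complement of the misclassification error) is sketched below as an assumption, not as the paper's exact formula.

    import numpy as np

    def binarization_accuracy(gt, result):
        """Accuracy between a ground-truth binary image `gt` and a
        binarization `result` (boolean arrays, True = foreground).
        An assumed form; the excerpt omits the actual equation."""
        B_O, F_O = ~gt, gt            # background / foreground of ground truth
        B_T, F_T = ~result, result    # background / foreground of result
        correct = np.count_nonzero(B_O & B_T) + np.count_nonzero(F_O & F_T)
        return 100.0 * correct / gt.size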

A Novel Histogram-Based Multi-Threshold Searching Algorithm for Multilevel Color Thresholding

Various multilevel thresholding techniques have been proposed for grey images. They can roughly be divided into parametric and nonparametric approaches [2]. In parametric approaches [3, 4], the probability density function (pdf) of each object's grey-level distribution is required, but estimating the pdf of the grey-level distribution is typically a nonlinear optimization problem, which usually leads to an inefficient algorithm with high computational complexity and poor performance. By contrast, nonparametric approaches directly determine the threshold values by optimizing certain cost functions [5-8]. For instance, Otsu proposed a thresholding method that determines the optimal thresholds by maximizing a between-class variance criterion using an exhaustive search [5]. To speed up the maximization process of the conventional Otsu method, Liao et al. proposed a fast multilevel global thresholding algorithm that maximizes a modified between-class variance with a look-up-table acceleration approach [6]. However, this method still takes too much time for multilevel threshold selection since it requires evaluating all possible solutions. The authors in [7] proposed a new criterion for automatic multilevel thresholding with low computational complexity. Recently, Gao et al. used a quantum-behaved particle swarm optimization technique to improve the convergence rate of the Otsu method [8]. These nonparametric multilevel thresholding approaches produce satisfactory segmentation results for grey images; however, the papers [5-8] consider only grey-scale (i.e., single-channel) images and are not directly applicable to multi-channel images.

Intensity Enhancement in Gray Level Images using HSV Color Coding Technique

Reducing noise and blur and increasing the contrast range are examples of enhancement operations that can improve an image. The original image might have areas of very high and very low intensity that mask details; an adaptive enhancement algorithm reveals these details. Adaptive algorithms adjust their operation based on the image information (pixels) being processed; in this case, the mean intensity, contrast, and sharpness (amount of blur removal) could be adjusted based on the pixel intensity statistics in various areas of the image. Images are produced by a variety of physical devices, including still and video cameras, x-ray devices, electron microscopes, radar, and ultrasound, and are used for a variety of purposes, including entertainment, medical, business (e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific. The goal in each case is for an observer, human or machine, to extract useful information about the scene being imaged. Often the raw image is not directly suitable for this purpose and must be processed in some way; such processing is called image enhancement, while processing by an observer to extract information is called image analysis. Enhancement and analysis are distinguished by their output (images versus scene information) and by the challenges faced and methods employed. Image enhancement has been done by chemical, optical, and electronic means, while analysis has been done mostly by humans and electronically. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Digital image processing, whether as enhancement for human observers or as autonomous analysis, offers advantages in cost, speed, and flexibility, and with the rapidly falling price and rising performance of personal computers it has become the dominant method in use.

Binarization of Historical Document Images Using Phase Congruency

ABSTRACT: When the Fourier components of an image are maximally in phase, image features such as edges, lines and Mach bands are present at that point. For detecting image edges, a phase-based method is more suitable than a gradient-based method because phase is a dimensionless quantity: a change in phase does not change the brightness or contrast of the image. This provides a threshold value which can be applied over an image to obtain a binarized image. Here, phase congruency features are calculated using wavelets. The existing theory developed for signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An effective method for identifying and compensating for the level of noise in an image is presented. Finally, it is argued that high-pass filtering should be used to obtain image information at different scales. With this approach, the choice of scale only affects the relative significance of features without degrading their localization.

Virtual Communication for Lab-based Science Teaching: A Case Study

strategies for setting up learning communities in large classes, which include creating small peer working groups; group laboratory experiments, field work and poster presentations; specially designed card and board games; and computer-aided learning materials designed for use in peer groups [5]. It is clear from the student responses to surveys that we are using the technology as an adjunct to their learning process, allowing them to learn in a way that suits their lifestyle and which, we hope, will enhance opportunities for participation in higher education. Moving part of the total course materials to the Web stimulated us to design web-based communication.

3D Gray Level Co Occurrence Matrix Based Classification of Favor Benign and Borderline Types in Follicular Neoplasm Images

Since the efficiency of treatment of thyroid disorders depends on the risk of malignancy, indeterminate follicular neoplasm (FN) images should be classified. Diagnosis has traditionally been done by visual interpretation by experienced pathologists; however, it is difficult to separate the favor-benign from the borderline type. This paper therefore presents a classification approach based on a 3D nuclei model to classify favor-benign and borderline types of follicular thyroid adenoma (FTA) in cytological specimens. The proposed method utilizes a 3D gray level co-occurrence matrix (GLCM) and a random forest classifier, and was applied to 22 data sets of FN images. Furthermore, the use of 3D GLCM was compared with 2D GLCM to evaluate the classification results. In the experiments, the proposed system achieved a classification accuracy of 95.45%, and 3D GLCM outperformed 2D GLCM in terms of classification accuracy. Consequently, the proposed method could help pathologists as a prescreening tool.
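
A sketch of the GLCM-plus-random-forest pipeline in its 2D form; the paper's contribution is the 3D extension of the same statistics computed over reconstructed nuclei. The distances, angles and estimator settings below are illustrative, and the training data shown in the usage comment is hypothetical.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image
    from sklearn.ensemble import RandomForestClassifier

    PROPS = ["contrast", "correlation", "energy", "homogeneity"]

    def glcm_features(gray):
        """Haralick-style features from a 2-D GLCM of a uint8 image."""
        glcm = graycomatrix(gray, distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

    # Hypothetical usage: `images` is a list of nucleus crops and `labels`
    # marks 0 = favor benign, 1 = borderline.
    # X = np.array([glcm_features(img) for img in images])
    # clf = RandomForestClassifier(n_estimators=100).fit(X, labels)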

Gray-level Co-Occurrence Matrix application to Images Processing of crushed Olives fruits.

Abstract This paper reports the results obtained from processing digitized images, taken with a smartphone, of 56 samples of crushed olives, using the gray-level co-occurrence matrix (GLCM) methodology. The appropriate direction (θ) and distance (D) at which two pixels with given gray tones are considered neighbours are defined in order to extract the parameters Contrast, Correlation, Energy and Homogeneity. The values of these parameters are correlated with characteristic components of the olive mass, oil content (RGH) and water content (HUM), whose values lie in the usual ranges found during processing to obtain virgin olive oil in mills and which contribute to generating different mechanical textures in the mass according to their HUM/RGH relationship. The results indicate significant correlations of the Contrast, Energy and Homogeneity parameters with RGH and HUM, which made it possible, by means of multiple linear regression (MLR), to obtain equations that predict both components with high correlation coefficients, r = 0.861 and r = 0.872 for RGH and HUM respectively. These results suggest the feasibility of textural analysis using GLCM to extract features of interest from digital images of the olive mass, quickly and non-destructively, as an aid in decision making to optimize the production process of virgin olive oil.
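
A short sketch of the regression step described above: fitting a multiple linear regression of one mass component on the GLCM texture parameters. The arrays below are clearly-labeled placeholders, not the study's data; in the study each row would hold the Contrast, Correlation, Energy and Homogeneity values of one of the 56 images.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.random.rand(56, 4)          # placeholder GLCM features per image
    y = np.random.rand(56) * 30        # placeholder RGH values (%)

    mlr = LinearRegression().fit(X, y)
    r = np.corrcoef(mlr.predict(X), y)[0, 1]   # correlation coefficient, as reported
    print(f"r = {r:.3f}")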

Improving Degraded Document Images Using Binarization Technique

This paper presents an adaptive image-contrast-based document image binarization technique that is tolerant to a variety of document degradations such as uneven illumination and document smear. The proposed technique is simple and robust, involves only a few parameters, and works for different kinds of degraded document images. It makes use of the local image contrast, which is evaluated from the local maximum and minimum. The proposed method has been tested on various datasets; experiments show that it outperforms most reported document binarization methods in terms of F-measure, pseudo F-measure, PSNR, NRM, MPM and DRD.
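
A minimal sketch of the local contrast quantity described above, evaluated from the local maximum and minimum. The window size and the exact normalization are illustrative; the adaptive combination used by the paper is not shown here.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def local_contrast(gray, window=3, eps=1e-6):
        """Local image contrast (max - min) / (max + min) per pixel."""
        g = gray.astype(float)
        local_max = maximum_filter(g, size=window)
        local_min = minimum_filter(g, size=window)
        return (local_max - local_min) / (local_max + local_min + eps)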

Estimating Watermarking Capacity in Gray Scale Images Based on Image Complexity

The segmentation phase plays an important role in this method: given an image segmentation, the heterogeneity of an image can be expressed using the JS-divergence applied to the probability distribution of each segment. For comparison with other methods, ICC values are normalized to (0, 1). 2.2. Quad Tree Method. We introduced this measure in our previous work [19]. Briefly, the quad tree representation was introduced for binary images, but it can be obtained for gray-scale images as well. For a gray-scale image, we use the intensity variance within blocks as a measure of contrast. If the variance is lower than a predefined threshold, there is not much detail in that block (i.e., its pixels are very similar to each other), so the block is not divided further. Otherwise, the block is divided into four blocks, and the division continues until either a block cannot be divided any more or a block size of one pixel is reached.
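
A sketch of the variance-driven quad tree split just described. It assumes a square image with a power-of-two side length, and the variance threshold is left as a parameter; deriving a complexity score from the resulting leaves (e.g., from their count) is up to the caller.

    import numpy as np

    def quadtree_blocks(gray, var_thresh, min_size=1):
        """Recursive quad-tree split of a gray-scale image. A block whose
        intensity variance is below `var_thresh` is kept whole; otherwise
        it is divided into four sub-blocks, down to one-pixel blocks.
        Returns the list of (row, col, size) leaf blocks."""
        leaves = []

        def split(r, c, size):
            block = gray[r:r + size, c:c + size]
            if size <= min_size or block.var() < var_thresh:
                leaves.append((r, c, size))
                return
            half = size // 2
            for dr in (0, half):
                for dc in (0, half):
                    split(r + dr, c + dc, half)

        split(0, 0, gray.shape[0])   # assumes square, power-of-two side
        return leaves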

Virtual Lab Using SCADA

Nowadays, the increasing accessibility of the internet has generated great interest in the educational community in developing web-based assessment and self-assessment tools, to support not only distance learning but also learning in traditional classes. Self-assessment that provides feedback facilitates the active involvement of students in improving their knowledge of each subject step by step. Adaptive self-assessment enables tailoring tests to a student's preferences and interpretation, and a web-based self-assessment tool allows students to access self-assessment tests using a browser on their home computers. This paper starts by introducing the reader to assessment issues, regarding assessment as a tool for learning. The advantages of computerized assessment, adaptive assessment and web-based assessment are discussed; web-based adaptive self-assessment systems are of particular importance and lead to improvements in student performance. In the past, assessment was used to decide to what extent students had reached the learning objectives. Testing was traditionally used to give qualification of a student's current

A clustering based transfer function for volume rendering using gray-gradient mode histogram

In terms of histogram-based volume rendering, Zhou et al. proposed a clustering method to classify volume data using a gradation-gradient color histogram, where the identified information was transformed into the hue, saturation, and lightness (HSL) color space [17]. An S-KA histogram-based multi-dimensional transfer function was given by Chen et al. to classify internal voxels and find the class of boundary voxels [18]. Yang et al. used a multi-dimensional transfer function based on the f-LH histogram to improve the accuracy of boundary visualization, for which a modified Low and High (LH) histogram construction algorithm was presented [19]. Meanwhile, Xia et al. introduced a hybrid transfer function using both a gradient histogram and a size histogram to present spatial information, contributing to better image quality [20]. A transfer function based on a two-dimensional histogram was introduced by Shen et al. to depict tissue boundaries via gradient magnitude and gray level, helping to retain more detailed information.
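
A minimal sketch of the kind of two-dimensional gray-versus-gradient-magnitude histogram that this family of transfer functions is built from. The bin count and the central-difference gradient are illustrative choices.

    import numpy as np

    def gray_gradient_histogram(volume, bins=256):
        """Joint histogram of intensity and gradient magnitude for a
        3-D scalar volume; rows index gray level, columns index gradient."""
        gx, gy, gz = np.gradient(volume.astype(float))
        grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
        hist, gray_edges, grad_edges = np.histogram2d(
            volume.ravel(), grad_mag.ravel(), bins=bins)
        return hist, gray_edges, grad_edges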
