image-data volume reduction

Top PDF results for "image-data volume reduction":

Image-Based Transfer Function Design for Data Exploration in Volume Visualization

We define a volume to be a 3D array of voxels in an intensity (scalar) field, and a color volume (or RGBA volume) to be a 3D array of voxels in an RGBA (color and opacity) field. A color volume can be directly displayed using a volume rendering algorithm such as raycasting [9] or 3D texture mapping [3], without the need for a transfer function. Therefore, we define a transfer function to be a two-step process. The first step consists of a sequence of intensity mappings in a volume's intensity field, and performs tasks such as information filtering, noise reduction, surface extraction, etc. The second step is essentially the "coloring" process, which generates colors and opacity values directly from intensity values using either an intensity-to-RGBA color look-up table or a shading procedure. In the color look-up table, a linear ramp should be used for the opacity component and the appropriate (depending on the desired colors) color components, since the necessary "intensity processing" is done in the first step. Shading can be done by computing the gradients of the resulting intensity field of the first step, and using the gradients as local "surface normals" in computing the lighting effects in the rendering process. Since the "coloring" step is a fairly straightforward process, we will only consider the transfer function design problem in the volume's intensity field.
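
A minimal sketch of the two-step idea described in this excerpt, assuming a NumPy intensity volume normalized to [0, 1]. The window parameters and the colormap are illustrative placeholders, not the paper's method; only the linear opacity ramp follows the text.

```python
import numpy as np

def intensity_step(volume, low=0.3, high=0.8):
    """Step 1: an example intensity mapping (simple window/clamp)."""
    return np.clip((volume - low) / (high - low), 0.0, 1.0)

def coloring_step(volume, n_entries=256):
    """Step 2: look up RGBA values; the opacity component is a linear ramp."""
    lut = np.zeros((n_entries, 4), dtype=np.float32)
    ramp = np.linspace(0.0, 1.0, n_entries)
    lut[:, 0] = ramp          # red increases with intensity (arbitrary choice)
    lut[:, 2] = 1.0 - ramp    # blue decreases (arbitrary choice)
    lut[:, 3] = ramp          # opacity: linear ramp, as the text suggests
    idx = np.clip((volume * (n_entries - 1)).astype(int), 0, n_entries - 1)
    return lut[idx]           # RGBA volume of shape (..., 4)

rgba = coloring_step(intensity_step(np.random.rand(32, 32, 32)))
```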

High speed, high volume, optimal image subtraction for large volume astronomical data pipelines

Computing power continues to grow in approximate agreement with Moore's Law (~2x every 1.5-2 years), and astronomy data acquisition rates are growing as fast or faster. Driven by advances in camera design, image sizes have increased to over a gigabyte per exposure in some cases, and the exposure times per image have decreased from hours to seconds, or even down to video frame rates of multiple exposures per second. All of this is resulting in terabytes of science data per night in need of reduction and analysis. New telescopes and cameras already under construction will increase those rates by an order of magnitude. The application of emerging low-cost parallel computing methods to OIS and other image processing techniques provides a present and practical solution to this data crisis. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer

Structural Medical Image Analyses using Consistent Volume and Surface Image Processing

To decrease overall computational complexity without compromising segmentation quality, new approaches have emerged to minimize CCNR. One of the most common methods is atlas selection [97, 100-102], which reduces the CCNR by keeping only the most representative atlases. In recent years, researchers have even tried to eliminate the CCNR by employing non-local label fusion methods [91-93, 103, 104]. However, the reduction of CCNR is typically accompanied by a large increase in CCNC. To minimize the CCNC further, other researchers have attempted to use learning-based schemes, which capture the non-local correspondences offline [74-79]. Once the model is trained, it can be applied to the target image efficiently. However, these learning-based algorithms are still limited, since the models are applied and tested on homogeneous small-scale datasets (typically fewer than 200 subjects from the same source) without using the large amount of available heterogeneous data (from different sources, e.g. different studies and scanners). As a result, previous learning-based schemes have mostly been applied to segmenting a single anatomical region or subcortical regions rather than the whole brain. When applied to the whole brain, non-rigid registration (high CCNR) is still essential to compensate for the large inter-subject variation, given the small size of the dataset.

Kernel Principal Component Analysis for the Classification of Hyperspectral Remote Sensing Data over Urban Areas

that is, all samples are considered neighbors, so only one cluster can be identified. Several strategies can be used, from cross-validation to density estimation [38]. The choice of σ should reflect the range of the variables, so as to distinguish samples that belong to the same cluster from those that belong to other clusters. A simple yet effective strategy was employed in this experiment: stretch the variables between 0 and 1, and fix σ to a value that provides good results according to some criterion. For a remote sensing application, the number of extracted KPCs should be of the same order as the number of species/classes in the image. From our experiments, σ was fixed at 4 for all data sets.
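
A hedged sketch of the σ-selection strategy described in this excerpt: rescale each variable to [0, 1], then run an RBF-kernel PCA with a fixed σ (here 4, the value reported in the text) and keep roughly as many components as there are classes. It uses scikit-learn rather than the authors' own pipeline, and the sample data and class count are illustrative.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import KernelPCA

def kpca_features(X, n_classes, sigma=4.0):
    X01 = MinMaxScaler().fit_transform(X)             # stretch variables to [0, 1]
    gamma = 1.0 / (2.0 * sigma ** 2)                   # RBF kernel exp(-||x-y||^2 / (2 sigma^2))
    kpca = KernelPCA(n_components=n_classes, kernel="rbf", gamma=gamma)
    return kpca.fit_transform(X01)                     # kernel principal components

# Toy usage: 500 samples, 100 spectral bands, 6 assumed classes.
features = kpca_features(np.random.rand(500, 100), n_classes=6)
```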

The Learning Architecture of FPGA Based on CMAC Coding Schemes Programed by VHDL

N inputs. A significant property of CMAC is that the learning algorithm also changes the output values for nearby inputs, so similar inputs lead to similar outputs even for unlearned inputs. This property is called generalization, which is of great use in CMAC-based coding. Moreover, we can control the degree of generalization by changing the size of K: the larger K is, the wider the generalization region. The generalization region of a CMAC with K=4 is shown in Figure 1. If a value d(9,9)=4 is given as the desired output for the input, the input data maps to four hypercubes denoted by Cc, Hh, Mm and Rr, and the four weights stored in those hypercubes are updated by Equations (1) and (2). Let the initial values of all weights be zero; then the error e(9,9) equals 4 before learning. The neighbouring outputs of the input vector v=(9,9), including the output at v itself, are all updated, and their values become "4", "3", "2", and "1". More precisely, the outputs indicated by "4", "3", "2", and "1" are updated by (4e/4), (3e/4), (2e/4), and (e/4), respectively, where the error e is e(9,9), the difference between the desired value d(9,9) and the output value g(9,9) before the update. We find that the neighbouring outputs change even for unlearned inputs after updating with the CMAC learning algorithm; this is the generalization property. The generalization property of CMAC will be used to compress image data in later sections.
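
A hedged sketch of a tiny 2-D CMAC with K tilings, illustrating the update rule and the graded generalization described above (values 4, 3, 2, 1 around a single trained point with d(9,9)=4 and all weights initially zero). The diagonal tiling/offset scheme is a common textbook choice, not necessarily the exact coding scheme of the paper.

```python
import numpy as np

class TinyCMAC:
    def __init__(self, k=4, grid=8):
        self.k = k
        self.w = np.zeros((k, grid, grid))             # one weight table per tiling

    def _cells(self, x, y):
        # Hypercube (tile) index of (x, y) in each of the K diagonally offset tilings.
        return [(i, (x + i) // self.k, (y + i) // self.k) for i in range(self.k)]

    def output(self, x, y):
        return sum(self.w[i, r, c] for i, r, c in self._cells(x, y))

    def learn(self, x, y, target):
        e = target - self.output(x, y)                 # error before the update
        for i, r, c in self._cells(x, y):
            self.w[i, r, c] += e / self.k              # spread the error over the K hypercubes

cmac = TinyCMAC(k=4)
cmac.learn(9, 9, target=4.0)                           # d(9,9) = 4, all weights start at 0
print(cmac.output(9, 9))                               # 4.0 at the trained input
print(cmac.output(8, 9), cmac.output(6, 9))            # smaller outputs (3.0, 1.0) for nearby, unlearned inputs
```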

Dimensionality Reduction for Data Visualization

Dimensionality reduction is one of the basic operations in the toolbox of data analysts and designers of machine learning and pattern recognition systems. Given a large set of measured variables but few observations, an obvious idea is to reduce the degrees of freedom in the measurements by representing them with a smaller set of more "condensed" variables. Another reason for reducing the dimensionality is to reduce the computational load in further processing. A third reason is visualization. "Looking at the data" is a central ingredient of exploratory data analysis, the first stage of data analysis where the goal is to make sense of the data before proceeding with more goal-directed modeling and analyses. It has turned out that although these tasks seem alike, their solutions require different tools. In this article we show that dimensionality reduction for data visualization can be cast as an information retrieval task, where the quality of a visualization can be measured by precision and recall and their smoothed extensions, and that the visualization can be optimized to directly maximize this quality for any desired tradeoff between precision and recall, yielding very well-performing visualization methods.
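
A hedged sketch of the information-retrieval view described in this abstract: treat each point's k nearest neighbours in the original space as the "relevant" items and its neighbours in the low-dimensional embedding as the "retrieved" items, then report mean precision and recall. This is the plain, unsmoothed version of the idea, not the authors' smoothed measures; the data and the naive two-coordinate "embedding" are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]   # drop the point itself
    return [set(row) for row in idx]

def precision_recall(X_high, X_low, k_relevant=10, k_retrieved=10):
    relevant = knn_sets(X_high, k_relevant)
    retrieved = knn_sets(X_low, k_retrieved)
    tp = np.array([len(r & t) for r, t in zip(relevant, retrieved)], dtype=float)
    return (tp / k_retrieved).mean(), (tp / k_relevant).mean()  # mean precision, mean recall

X = np.random.rand(200, 50)
P, R = precision_recall(X, X[:, :2])   # naive 2-D "embedding": first two coordinates
print(P, R)
```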

Gravity Gradiometer Data Reduction

assuming a Taylor series expansion for the gravitational field over the station and fitting the coefficients of this field using a least squares technique based on the relatively accurat[r]
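
A hedged sketch of the fitting idea this fragment mentions: expand the gravity vector around the station as a first-order Taylor series, g(r) ≈ g0 + Γ·(r − r0), and solve for g0 and the gradient tensor Γ by linear least squares. The measurement layout, units and noise model below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(50, 3))           # offsets from the station
g0_true = np.array([0.0, 0.0, -9.81])
gamma_true = rng.normal(scale=1e-3, size=(3, 3))
gamma_true = 0.5 * (gamma_true + gamma_true.T)              # gradient tensor is symmetric
g_meas = g0_true + positions @ gamma_true.T + rng.normal(scale=1e-4, size=(50, 3))

# Design matrix: each measured component is linear in the unknowns (g0 and the columns of Γ).
A = np.hstack([np.ones((50, 1)), positions])                # [1, dx, dy, dz] per sample
coef, *_ = np.linalg.lstsq(A, g_meas, rcond=None)
g0_est, gamma_est = coef[0], coef[1:].T                     # recovered g0 and gradient tensor
```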

Dimensional Reduction of Hyperspectral Image Data Using Band Clustering and Selection Based on Statistical Characteristics of Band Images

We have used the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [13] for our research work. The Cuprite image is used to compare and evaluate the proposed work; the image scene is shown in Fig. 2 and is available online [14]. It was collected over the Cuprite mining site, Nevada, in 1997, with 224 spectral bands at 10 nm spectral resolution; Cuprite is a mining area in the south of Nevada with exposed minerals and little vegetation. The geologic summary and mineral map can be found in [15]. Cuprite has been widely used for experiments in remote sensing and has become a standard test site for comparing different techniques of hyperspectral image analysis. In our work, we use a sub-image of size 350×350 with 224 bands from a data set taken on the AVIRIS flight of June 19, 1997. The AVIRIS instrument covers the 0.41–2.45 µm region in 224 bands with a 10 nm bandwidth; flying at an altitude of 20 km, it has an instantaneous field of view (IFOV) of 20 m and views a swath over 10 km wide. Prior to the analysis of the AVIRIS Cuprite image data, the low-SNR bands 1–3, 105–115 and 150–170 were removed, and the remaining 189 bands are used for the experiments. The ground-truth spatial positions of four pure pixels corresponding to the four minerals alunite (A), buddingtonite (B), calcite (C) and kaolinite (K) are labeled and encircled by "A", "B", "C", and "K", respectively. Endmembers extracted by an endmember algorithm are verified using these labeled spatial positions. The USGS signatures of "A", "B", "C" and "K" are also shown in Fig. 3.
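
A hedged sketch of the band-removal step described above: drop the low-SNR AVIRIS bands 1–3, 105–115 and 150–170 (1-based numbering) from a Cuprite cube of shape (rows, cols, 224), leaving 189 bands. Loading the actual file is assumed to happen elsewhere; the random array is a stand-in.

```python
import numpy as np

cube = np.random.rand(350, 350, 224)                        # placeholder for the AVIRIS sub-image

bad = set(range(1, 4)) | set(range(105, 116)) | set(range(150, 171))   # 1-based band numbers
keep = [b - 1 for b in range(1, 225) if b not in bad]                  # convert to 0-based indices
reduced = cube[:, :, keep]

assert reduced.shape[-1] == 189                             # 224 - (3 + 11 + 21) bands remain
```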

Hybrid Approach for Improving Data Security and Size Reduction in Image Steganography

corresponding DNA sequence S is selected and the hidden message M is incorporated into it so that S' is produced. S' is then transferred to the receiver, who is able to recognize and extract the message M hidden in S'. Finally, experimental results indicate better performance of the proposed methods in comparison to traditional methods. W. Kan et al. [19] describe a technique based on manipulating the quantization table and the quantized discrete cosine transform coefficients. Experimental reports show that the proposed method attains both high storage capacity and high image quality without any damage. G. X. Z. D. Yang et al. [4] discuss the discrete STA, which has four basic operators (swap, shift, symmetry and substitute) as well as the "risk and restore in probability" strategy. The main concern is first a parametric study of the restore probability p1 and the risk probability p2. To deal effectively with the head-pressure constraints, they investigate the effect of the penalty coefficient on the performance of the algorithm. Based on the experience gained from training on the Two-Loop network problem, the discrete STA achieves the best known results for the Hanoi and New York problems. S. K. K. et al. [16] propose a scheme that compresses encoded data with the help of subservient data and Huffman coding. The encoded data is compressed using a quantization mechanism and Huffman coding; subservient data obtained by the data owner is used to quantize the image, and the quantized values are then coded with Huffman coding. Experimental results show that the compression ratio-distortion performance of this method is superior to traditional techniques. T. Turker et al. [18] discuss a new data hiding technique based on hidden sharing

Speckle Noise Reduction in SAR Image: A Survey

According to [21], polarimetric SAR image smoothing requires preserving the target polarimetric signature. Each element of the image should be filtered, similarly to multilook processing, by averaging the covariance matrices of neighboring pixels, and homogeneous regions in the neighborhood should be adaptively selected to preserve resolution, edges and image quality. The second requirement, i.e. selecting homogeneous areas given a similarity criterion, is a common problem in pattern recognition: it boils down to identifying observations from different stationary stochastic processes. Usually, the Boxcar filter is the standard choice because of its simple design; however, it has poor performance since it does not discriminate different targets. Lee et al. [18, 19] propose techniques for speckle reduction based on the multiplicative noise model using the minimum mean square error (MMSE) criterion. Lee et al. [20] proposed a methodology for selecting neighboring pixels with similar scattering characteristics, known as the Refined Lee filter. Other techniques use the local linear minimum mean squared error (LLMMSE) criterion proposed by Vasile et al. [37] in a similar adaptive technique, but the decision to select homogeneous areas is based on the intensity information of the polarimetric coherency matrices, namely the intensity-driven adaptive-neighborhood (IDAN) filter. Çetin and Karl [4] presented a technique for image formation based on regularized image reconstruction. This approach employs a tomographic model which allows the incorporation of prior information about, among other features, the sensor; the resulting images have many desirable properties, reduced speckle among them. Osher et al. [26] presented a novel iterative regularization method for inverse problems based on the use
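
A hedged sketch of the standard Boxcar filter this survey mentions: every element of the polarimetric covariance (or coherency) matrix image is replaced by the plain average over a square neighbourhood, which mimics multilook processing but does not adapt to edges or distinct targets. The 3×3 matrix layout, window size and random data are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar(cov, size=5):
    """cov: complex array of shape (rows, cols, 3, 3); returns the filtered matrix image."""
    out = np.empty_like(cov)
    for i in range(cov.shape[2]):
        for j in range(cov.shape[3]):
            # ndimage filters expect real arrays, so average Re and Im parts separately.
            out[..., i, j] = (uniform_filter(cov[..., i, j].real, size)
                              + 1j * uniform_filter(cov[..., i, j].imag, size))
    return out

filtered = boxcar(np.random.rand(64, 64, 3, 3) + 1j * np.random.rand(64, 64, 3, 3))
```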

'On the fly' dimensionality reduction for hyperspectral image acquisition

However, such large volumes of data require complex analysis. For that reason, HSI hypercubes and related data are usually subject to a feature extraction process, where different techniques are used to extract salient features [3-6]. This also includes dimensionality reduction, where the high correlation between adjacent spectral bands is addressed by classical and well-known techniques such as principal component analysis (PCA), independent component analysis (ICA), and maximum noise fraction (MNF) [3, 6]. In partic-
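
A hedged sketch of the classical dimensionality-reduction step named above: flatten the hypercube to (pixels, bands) and keep the first few principal components. PCA is only one of the options the text lists (alongside ICA and MNF), and the cube shape and component count below are arbitrary illustrations.

```python
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(100, 100, 200)                        # placeholder hypercube (rows, cols, bands)
pixels = cube.reshape(-1, cube.shape[-1])
scores = PCA(n_components=10).fit_transform(pixels)         # decorrelated spectral features
reduced_cube = scores.reshape(cube.shape[0], cube.shape[1], -1)
```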

Color Image Reduction using Genetic Algorithm

factor for segmentation, compression, presentation and transmission of images. The main purpose of CRI is to cut down image storage space and computation time [3]. Technology today allows color images to represent colors in 'truecolor' mode (24-bit color, 16 million colors) [6]. In the color reduction process, representative colors are selected by considering the color distribution of the input image, so that the error between the approximated image and the original one is minimized [7]. This generation of a set of color displays with increasing contrast enhancement allows us to gradually distinguish the different materials present in the scene, particularly those with very similar spectral signatures, making the interpretation and quick overview of such multidimensional hyperspectral images easier and more reliable [2]. When the images are in color, i.e. typically coded as discrete RGB, CMY, or HSL values, it is customary to average the values in the respective channels; it is not immediately clear that this is appropriate, or what the other ways of averaging color values are [4]. Image segmentation is an important preprocessing step which consists of dividing the image scene into spatially coherent regions sharing similar attributes [8].

VISION SYSTEM NVS9000 UNMATCHED PERFORMANCES CAMERA SYSTEMS. IDENTIFICATION 1D Bar code - 2D Bar code

NVS9000™ utilizes a state-of-the-art high-speed linear CCD sensor, achieving 33,000 scans per second. Compared to standard devices running at a lower 19,000 scans per second, the NVS9000™ gives users the flexibility to increase item speed or adjust the system for better-quality image capturing.

Color Attenuation Prior for Removing the Haze in Single Image

the human brain can quickly identify the hazy area in natural scenery without any additional information. This inspired us to conduct a large number of experiments on various hazy images to find the statistics and seek a new prior for single-image dehazing. Interestingly, we find that the brightness and the saturation of pixels in a hazy image vary sharply with the haze concentration. Figure 2 gives an example with a natural scene to show how the brightness and the saturation of pixels vary within a hazy image. As illustrated in Figure 2(d), in a haze-free region the saturation of the scene is quite high, the brightness is moderate, and the difference between the brightness and the saturation is close to zero. But it is observed from Figure 2(c) that the saturation of the patch decreases sharply while the color of the scene fades under the influence of the haze, and the brightness increases at the same time, producing a high value of the difference. Furthermore, Figure 2(b) shows that in a dense-haze region it is even more difficult to recognize the inherent color of the scene, and the difference is even higher than that in Figure 2(c). It seems that the three properties (the brightness, the saturation and the difference) are prone to vary regularly in
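
A hedged sketch of the statistic this excerpt discusses: compute the per-pixel brightness, saturation and their difference for an RGB image (HSV value and saturation), so that dense-haze regions show a large brightness-saturation gap. The conversion below is the standard HSV definition, not the paper's own code, and the input image is a placeholder.

```python
import numpy as np

def brightness_saturation_gap(rgb):
    """rgb: float array in [0, 1] of shape (H, W, 3); returns brightness, saturation, difference."""
    v = rgb.max(axis=-1)                                    # HSV value (brightness)
    c_min = rgb.min(axis=-1)
    s = np.where(v > 0, (v - c_min) / np.maximum(v, 1e-8), 0.0)   # HSV saturation
    return v, s, v - s                                      # the gap grows with haze concentration

v, s, gap = brightness_saturation_gap(np.random.rand(480, 640, 3))
```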

The Sixth Sense Technology

Indexed: Most color images use only a subset of the more than sixteen million possible colors. For ease of storage and file handling, the image has an associated color map, or color palette, which is simply a list of all the colors used in that image. Each pixel has a value associated with it, but unlike in an RGB image this value does not give the color directly; instead, it gives an index into the color map. This is convenient for images with 256 colors or fewer, since each index value then requires only one byte of storage. Some image file formats, such as GIF, allow only 256 colors.
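
A hedged sketch of the indexed-color representation described above: a palette of at most 256 RGB entries plus a one-byte index per pixel, so each pixel costs one byte instead of three. The palette construction below is a crude placeholder, not what any particular format (e.g. GIF) actually does.

```python
import numpy as np

def to_indexed(rgb_u8):
    """rgb_u8: uint8 array (H, W, 3) with at most 256 distinct colors."""
    flat = rgb_u8.reshape(-1, 3)
    palette, inverse = np.unique(flat, axis=0, return_inverse=True)
    assert len(palette) <= 256, "indexed mode only fits 256 colors or fewer"
    indices = inverse.astype(np.uint8).reshape(rgb_u8.shape[:2])
    return palette, indices                                 # color map + one-byte indices

def from_indexed(palette, indices):
    return palette[indices]                                 # look each pixel's color up in the map

img = np.random.randint(0, 4, size=(64, 64, 3), dtype=np.uint8) * 64   # few colors by construction
palette, idx = to_indexed(img)
assert np.array_equal(from_indexed(palette, idx), img)      # round-trip is lossless
```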

Three Dimensional Ultrasonography of the Eye and Measurement of Optical Nerve Sheet Diameter in Dog

By using different probe angles, it was easily possible to obtain different scans of the complete lens, cornea, iris, ciliary body and optic disc. However, it was easier to see all the longitudinal, horizontal and transverse planes by moving the cursor on the monitor into the desired plane (Fig. 1). The 3DU image acquisition required less than 10 seconds using this advanced ultrasound machine. Finally, a 3D rotating animation of the ocular structures at the desired angle could be reconstructed for better visualization and recognition of the different parts of the eye (Fig. 2). The optic nerve values in the obtained 3D images were measured (Fig. 3) and are shown in Table 1. There was no significant difference (p<0.05) between the ocular nerve measurements of male and female dogs, or between the left and right eyes.

Efficacy of bronchoscopic lung volume reduction: a meta-analysis

Analyses of secondary outcomes were related to the safety of a particular device or procedure. As the complications associated with each procedure were distinct from each other, we were not able to pool the data for a common outcome across different subgroups. For one-way valves, we included the incidence rates of pneumonia distal to the valve, pneumothorax lasting more than 7 days, and migration of valves. For the BioLVR, we included the incidence rates of pneumonia and COPD exacerbations. For the LVRCs, we only included the incidence rate of COPD exacerbations. Data from the studies on airway bypass stents and BTVA were not sufficient to analyze.

Traditional Scan Based Design For Atpg Of A Feedbach Shift Register Using Lbist

Abstract: Testing cost is one of the major contributors to the manufacturing cost of integrated circuits. Logic Built-In Self-Test (LBIST) offers test cost reduction by using smaller and cheaper ATE, test data volume reduction due to on-chip test pattern generation, and test time reduction due to at-speed test pattern application. However, it is difficult to reach sufficient test coverage with affordable area overhead using LBIST. Also, excessive power dissipation during test, due to the random nature of LBIST patterns, causes yield-decreasing problems such as IR-drop and overheating. In this dissertation, we present techniques and algorithms addressing these problems. In order to increase the test coverage of LBIST, we propose to use on-chip circuitry to store and generate the "top-off" deterministic test patterns. First, we study the synthesis of Registers with Non-Linear Update (RNLUs) as on-chip sequence generators. We present algorithms constructing RNLUs which generate completely and incompletely specified sequences. Then, we evaluate the effectiveness of RNLUs generating deterministic test patterns on-chip. Our experimental results show that we are able to achieve higher test coverage with less area overhead compared to test point insertion. Finally, we investigate the possibilities of integrating the presented on-chip deterministic test pattern generator with existing Design-for-Testability (DFT) techniques in a case study. The problem of excessive test power dissipation is addressed with a scan partitioning algorithm which reduces capture power for delay-fault LBIST. The traditional S-graph model for scan partitioning does not quantify the dependency between scan cells. We present an algorithm using a novel weighted S-graph model in which the weights are scan cell dependencies determined by signal probability analysis. Our experimental results show that, on average, the presented method reduces average capture power by 50% and peak capture power by 39% with less than a 2% drop in transition fault coverage. By comparing the proposed algorithm to the original scan partitioning, we show that the proposed method achieves higher capture power reduction with a smaller fault coverage drop.

Dynamic hashing technique for bandwidth reduction in image transmission

The ability to create a dynamic hashed message is closely related to the integrity and robustness of the image steganography. As a significant verification method, a digital signature algorithm provides a way to endorse the contents of a message and to confirm that it has not been altered during the communication process (Filler et al., 2009), which increases the receiver's confidence that the message is unchanged. Two drawbacks of using digital signature schemes are the extra bandwidth and the larger file size during transmission. Implementing an encryption algorithm in a spatial-domain steganographic method can increase the degree of security. Unfortunately, there is a wide variety of attacks that affect the quality of image steganography; although methods for data hiding exist, they are still very weak in resisting these attacks.
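
A hedged sketch of the integrity idea discussed above, reduced to its simplest form: hash the stego-image bytes before transmission and let the receiver recompute and compare the digest. A real digital-signature scheme would additionally sign the digest with a private key; this is not the paper's dynamic hashing technique, and the payload below is a placeholder.

```python
import hashlib

def digest(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

sent = b"...stego image payload..."                         # placeholder for the transmitted image
tag = digest(sent)                                          # sent alongside (or embedded in) the image

received = sent                                             # at the receiver
assert digest(received) == tag, "image was altered in transit"
```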

A Web Middleware Architecture for Dynamic Customization of Content for Wireless Clients

system with both a local and remote proxy, and found that Web surfing waiting times can be reduced by a factor of 3-7 depending upon the time of day. According to [22], using remote processing to reduce the number of connections across a wireless link when browsing pages with images can reduce response time significantly as the number of images in a page increases. For a page with 16 images, the average waiting time is reduced by approximately 30%. They also did experiments with remote compression and showed a 48% compression rate of .au audio files and a 94% compression rate for .mid audio files. In the PowerBrowser project, which uses a proxy filter to modify HTML pages into a special format to improve information retrieval time on a PDA with a stylus, the authors showed a 45% savings in time to complete tasks involving finding information on the Web [6]. Fox et al. show a major reduction in end-to-end latency over a dial-up connection for image distillation that reduces the size and color-depth of images [8].
