Piracy: Copyright infringement (or piracy) is the unauthorized use of material that is covered by copyright law, in a manner that violates one of the copyright owner's exclusive rights, such as the right to reproduce or perform the copyrighted work, or to make derivative works. Especially for electronic and audio-visual media, unauthorized reproduction and distribution is occasionally referred to as piracy. Generally, piracy behavior can be classified into two types: unauthorized access and unauthorized distribution. The former denotes unauthorized users accessing the multimedia content, while the latter means that users redistribute the accessed multimedia content to other unauthorized users. Privacy: Privacy is sometimes related to anonymity, the wish to remain unnoticed or unidentified in the public realm. When something is private to a person, it usually means there is something within them that is considered inherently special or personally sensitive. The degree to which private information is exposed therefore depends on how the public will receive this information, which differs between places and over time. Privacy can be seen as an aspect of security, one in which trade-offs between the interests of one group and another can become particularly clear. Almost all countries have laws which in some way limit privacy. An example would be laws concerning taxation, which normally require the sharing of information about personal income or earnings. In multimedia information systems, some personal information is private, such as user login information, subscription information, user profiles, and interaction records. Additionally, in some social networks, such as User Generated Content sharing networks, users can produce or post multimedia content that is shared with other users. This user generated content may also be private.
reason is that color is often closely related to the objects or scenes contained in the image. In addition, compared with other visual features, the color feature depends less on the size, orientation, and viewing angle of the image itself, so it has higher stability and robustness, and its calculation is simple; for these reasons it is widely used at present. Users can input the color features they want to query and match them against the information in the color feature library. Color-based feature extraction methods can represent the color information of an image well. At present, the main methods of color feature extraction include the color histogram [1, 2], color moments [3, 4], color sets, the color coherence vector [6, 7], and the color correlogram. (2) Retrieval based on image texture features: texture features are visual features that reflect the homogeneity of the image independently of color or brightness. Texture is a common intrinsic feature of all surfaces. Texture features contain important information about the structure and arrangement of a surface and its relationship with the surrounding environment. Because of this, texture features are widely used in content-based image retrieval, and users can find other images that contain similar textures
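As a minimal sketch of the color histogram feature mentioned above: the image is represented here as a plain list of (r, g, b) tuples, the channels are quantized into a few bins, and two histograms are compared with histogram intersection. The bin count and the intersection measure are illustrative choices, not prescribed by the text.

```python
# Quantized RGB color histogram, a common color feature for
# content-based image retrieval.

def color_histogram(pixels, bins_per_channel=4):
    """Return a normalized histogram over quantized RGB bins."""
    bin_size = 256 // bins_per_channel
    hist = [0.0] * (bins_per_channel ** 3)
    for r, g, b in pixels:
        idx = ((r // bin_size) * bins_per_channel + (g // bin_size)) \
              * bins_per_channel + (b // bin_size)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

red_image  = [(250, 10, 10)] * 100    # toy 100-pixel "image"
pink_image = [(250, 120, 120)] * 100
print(histogram_intersection(color_histogram(red_image),
                             color_histogram(pink_image)))
```

Because the histogram discards all spatial arrangement, it is stable under rotation and scaling of the image, which matches the robustness argument made above.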
To utilize color as a visual cue in multimedia, image processing, graphics, and computer vision applications, an appropriate method for representing the color signal is needed. Color model literature can be found across the modern sciences, such as physics, engineering, artificial intelligence, and computer science. In a color format, a digital image can record and provide more information than a gray-scale image does.
Over the last six years of evolving digital camera technologies, FUJITSU has upgraded the core color processing engines of four generations of digital devices. With each generation, our devices have satisfied consumer demand for more pixels, faster processing speed, improved image quality, high-function developments, reduced power consumption, and lower cost. We have also improved overall efficiency, sped up operating frequencies, upgraded various functions to enhance image quality (such as noise reduction and edge enhancement), and perfected the functionality of multimedia technologies such as MPEG-4 and audio processing. All of these improvements culminate in our new major upgrade, the MB91680, which debuted in the spring of 2006.
Finally, we measured the effect of the number of embedded bits on the PSNR and on image quality, and found that as the number of embedded bits increases, the PSNR decreases and the image quality degrades. We also measured the effect of high- and low-resolution images, together with the number of embedded bits, on image quality. We can conclude that if the resolution of the image is high and the number of embedded bits is high or medium, there is a noticeable effect on image quality; if the number of embedded bits is low, i.e. 1, the effect cannot be detected. If the resolution of the image is low and the number of embedded bits is low or medium, the image quality is not much affected; but if more than 4 bits are embedded, a noticeable effect on image quality appears.
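The relationship described above can be sketched as follows: a random payload is embedded into the k least-significant bits of each pixel and the PSNR against the original is computed. The image here is a synthetic 1-D list of 8-bit values; the embedding scheme is a generic LSB substitution, shown only to illustrate why PSNR falls as k grows.

```python
import math
import random

def embed_lsb(pixels, payload_bits, k):
    """Replace the k least-significant bits of each pixel with payload bits."""
    out = []
    it = iter(payload_bits)
    for p in pixels:
        bits = 0
        for _ in range(k):
            bits = (bits << 1) | next(it, 0)
        out.append((p & ~((1 << k) - 1)) | bits)
    return out

def psnr(original, modified, peak=255):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

random.seed(0)
image = [random.randrange(256) for _ in range(10000)]
for k in (1, 2, 4):
    payload = [random.randrange(2) for _ in range(k * len(image))]
    stego = embed_lsb(image, payload, k)
    print(k, round(psnr(image, stego), 1))  # PSNR drops as k increases
```

Replacing 1 bit perturbs each pixel by at most 1, while replacing 4 bits perturbs it by up to 15, so the mean squared error, and hence the PSNR loss, grows quickly with k, matching the observation above.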
• Complement image: The Image Complement block computes the complement of a binary, intensity, or RGB image. For binary images, the block replaces pixel values equal to 0 with 1 and pixel values equal to 1 with 0. For an intensity or RGB image, the block subtracts each pixel value from the maximum value that can be represented by the input data type and outputs the difference.
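A minimal sketch of this operation, with pixels held in plain Python lists; a binary image uses max_value=1, while an 8-bit intensity image or RGB channel uses the default 255:

```python
def complement(image, max_value=255):
    """Subtract each pixel from the maximum representable value.
    With max_value=1 this swaps 0 and 1, i.e. the binary case."""
    return [max_value - v for v in image]

print(complement([0, 1, 1], max_value=1))   # binary: [1, 0, 0]
print(complement([0, 100, 255]))            # 8-bit:  [255, 155, 0]
```

Note that the operation is an involution: complementing twice returns the original image.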
The fundamental concept behind these matrices is the spatial distribution of grey-level elements. In this approach, a set of matrices is created that gives the probability that a pair of brightness values (i, j) occurs at a certain separation (Δx, Δy) from each other. The assumption is that the textural dependence lies at angles of 0°, 45°, 90°, or 135° (with 0° being to the right and 90° above) from the original pixel, which means four GLCM matrices have to be created. Consider an image to be analysed with resolution Nx horizontally and Ny vertically, where the grey tone appearing in each resolution cell is quantized to Ng levels. Lx × Ly is the set of resolution cells of the image ordered by row and column, and an image I can be represented as a function which assigns some grey tone in G to each cell. We assume that the texture-context information in an image I is contained in the overall, or average, spatial relationship which the grey tones in I have to one another. Texture-context information is adequately specified by a matrix of relative frequencies P(i, j) with which two neighbouring pixels separated by a distance d occur in the image. Such grey-tone spatial-dependence matrices are a function of the angular relationship between neighbouring cells as well as of the distance between them. Since all texture information is present in these grey-tone spatial-dependence matrices, all texture features are extracted from them. There are 14 such measures in total, but it is still difficult to say which measure describes which property of texture. Three of the 14 that define the textural characteristics are the Angular Second Moment (ASM), Contrast (CON), and Correlation (COR). These metrics are calculated using each of the four GLCMs, and a final texture value is usually calculated as an average over all four.
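The construction above can be sketched in code. The 4×4 test image and the single horizontal offset (Δx=1, Δy=0) are illustrative; a full implementation would build GLCMs for all four directions and average the resulting features.

```python
def glcm(image, dx, dy, levels):
    """Normalized grey-level co-occurrence matrix for offset (dx, dy)."""
    ny, nx = len(image), len(image[0])
    P = [[0.0] * levels for _ in range(levels)]
    count = 0
    for y in range(ny):
        for x in range(nx):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < nx and 0 <= y2 < ny:
                P[image[y][x]][image[y2][x2]] += 1
                count += 1
    return [[v / count for v in row] for row in P]

def haralick_features(P):
    """Angular Second Moment, Contrast, and Correlation from a GLCM."""
    n = len(P)
    asm = sum(p * p for row in P for p in row)
    con = sum((i - j) ** 2 * P[i][j] for i in range(n) for j in range(n))
    pi = [sum(P[i][j] for j in range(n)) for i in range(n)]  # row marginal
    pj = [sum(P[i][j] for i in range(n)) for j in range(n)]  # column marginal
    mi = sum(i * pi[i] for i in range(n))
    mj = sum(j * pj[j] for j in range(n))
    si = sum((i - mi) ** 2 * pi[i] for i in range(n)) ** 0.5
    sj = sum((j - mj) ** 2 * pj[j] for j in range(n)) ** 0.5
    cor = sum((i - mi) * (j - mj) * P[i][j]
              for i in range(n) for j in range(n)) / (si * sj)
    return asm, con, cor

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
asm, con, cor = haralick_features(glcm(image, 1, 0, 4))
print(asm, con, cor)
```

A high ASM indicates a uniform texture (few dominant co-occurrences), a high Contrast indicates large local grey-level differences, and Correlation measures the linear dependency of grey levels on their neighbours.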
It is obvious that these measurements can be computationally expensive, especially as the quantization level becomes large. For many applications it may be beneficial to quantize the image into a smaller number of grey levels prior to creating the GLCMs.
Another use of fractals is the application of 1-D fractal analysis to 2-D patterns. It is possible to transform a 2-D pattern in such a way as to obtain a 1-D pattern, which is then analysed using methods applied in signal analysis. For example, grey-level images are segmented to produce the corresponding binary image. Strips can then be taken of the binary image with total length N pixels and height M pixels, with N several times greater than M. At each point t ∈ [1, N] along the long axis, the fraction of 'white' pixels in the column orthogonal to the long axis is calculated as w(t) = (number of white pixels in column t) / M.
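A sketch of this 2-D-to-1-D transformation on a toy 3×4 binary strip (M = 3 rows, N = 4 columns); the function name w(t) follows the description above:

```python
def white_fraction_profile(strip):
    """Given an M x N binary strip (list of M rows of 0/1 values),
    return w(t) for t = 1..N: the fraction of white (1) pixels
    in the column orthogonal to the long axis."""
    M = len(strip)
    N = len(strip[0])
    return [sum(row[t] for row in strip) / M for t in range(N)]

strip = [[1, 0, 1, 1],
         [1, 0, 0, 1],
         [0, 0, 1, 1]]
print(white_fraction_profile(strip))  # one value per column
```

The resulting sequence w(1), ..., w(N) is a 1-D signal that can then be fed into standard signal-analysis methods, including 1-D fractal analysis.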
This paper achieves a better segmentation result based on principles of image compression. It accurately counts the number of bits required to describe a natural image in terms of its textures and boundaries. The proposed algorithm determines the coding length for the texture features exactly; it is mainly based on the number of non-overlapping windows and on the probability distribution of the regions to be segmented. To adapt to the various shapes and scales in an image, a hierarchy of windows of multiple sizes is integrated. The minimum description length criterion analyses the clusters for a mixed set of data. This algorithm is used for segmenting natural images, and it incorporates a novel clustering technique called compression-based texture merging.
These strategies combine decisions coming from text-based and visual-based systems by means of aggregation functions or classical combination algorithms, which do not take into account the different semantic level of each modality. At best, some of them use weighted factors to assign different levels of confidence to each mode. In contrast, ours is an asymmetric multimedia fusion strategy, which exploits the complementarity of each mode. The schema consists of a textual prefiltering step, which semantically reduces the collection for the visual retrieval, followed by a monomodal results fusion phase. Finally, we show how retrieval performance is improved, while the task is made scalable thanks to the significant reduction of the collection. The idea behind multimedia fusion is to exploit the individual advantages of each mode and use the different sources as complementary information to accomplish a particular search task. In an image retrieval task, multimedia fusion tries to help solve the semantic gap problem while obtaining accurate results.
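A minimal sketch contrasting the two ideas discussed above: classical weighted late fusion versus a textual prefilter followed by visual ranking. The score dictionaries, weights, and threshold are illustrative assumptions, not values from the paper.

```python
def weighted_fusion(text_scores, visual_scores, w_text=0.6, w_visual=0.4):
    """Classical late fusion: a fixed weighted sum of per-document scores."""
    docs = set(text_scores) | set(visual_scores)
    return {d: w_text * text_scores.get(d, 0.0) +
               w_visual * visual_scores.get(d, 0.0) for d in docs}

def text_prefilter_then_visual(text_scores, visual_scores, threshold=0.1):
    """Asymmetric schema: the text step prunes the collection, and the
    visual step then ranks only the surviving candidates."""
    candidates = [d for d, s in text_scores.items() if s >= threshold]
    return sorted(candidates, key=lambda d: visual_scores.get(d, 0.0),
                  reverse=True)

text   = {"img1": 0.9, "img2": 0.2}   # hypothetical text-retrieval scores
visual = {"img1": 0.3, "img3": 0.8}   # hypothetical visual-retrieval scores
fused = weighted_fusion(text, visual)
print(sorted(fused, key=fused.get, reverse=True))   # fused ranking
print(text_prefilter_then_visual(text, visual))     # prefiltered ranking
```

Note that in the asymmetric schema the visual system only ever scores the documents that survived the textual filter, which is what makes the approach scalable when the collection is large.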
Actually, and increasingly often, we can see through various research works that data compression is being used for applications in image processing. In fact, perhaps surprisingly, many authors use data compression techniques for classifying and/or segmenting images, filtering or denoising images, detecting artifacts in images, detecting altered images, etc. The analysis of digital images is of great importance in many fields, for which there are many methods as shown in , and if compression can help achieve this purpose more easily, it is a breakthrough.
Because getImage() acquires an image asynchronously, the entire Image object might not be fully loaded when drawImage() is called. The ImageObserver interface provides the means for a component to be told asynchronously when additional information about the image is available. The Component class implements the imageUpdate() method (the sole method of the ImageObserver interface), so that method is inherited by any component that renders an image. Therefore, when you call drawImage(), you can pass this as the final argument; the component on which you are drawing serves as the ImageObserver for the drawing process. The communication between the image observer and the image consumer happens behind the scenes; you never have to worry about it, unless you want to write your own imageUpdate() method that does something special as the image is being loaded.
S. Kranthi et al. discuss two algorithms, the Edge Finding Method and the Window Filtering Method, for the development of a number plate detection system. In the Edge Finding Method the original image is converted to a high-contrast grey-scale image, and then the horizontal location of the number plate, i.e. the row in which it is present, is identified. The algorithm first determines the extent of intensity variation for each row, and in a second step it selects the adjacent rows. In the Window Filtering Method an appropriate window size is considered. The original image, with its complex background, is filtered, and the filtered image shows the high-contrast regions apart from the number plate. A window is used to exclude the surroundings from the image and concentrate on the actual plate; the window size is estimated on the basis of the expected size of the number plate. The best result is obtained only if the window size equals the width of the number plate, but smaller window dimensions provide fairly good values too.
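The first step of the Edge Finding Method, measuring the intensity variation of each row, can be sketched as follows. The toy grey-scale image and the choice of summed absolute horizontal differences as the variation measure are illustrative assumptions; the plate-like rows alternate between dark and light like characters on a plate.

```python
def row_intensity_variation(gray):
    """For each row, sum the absolute differences between horizontally
    adjacent pixels; rows crossing the high-contrast plate region
    score highest."""
    return [sum(abs(row[x + 1] - row[x]) for x in range(len(row) - 1))
            for row in gray]

# Toy grey-scale image: rows 2 and 3 alternate dark/light like
# plate characters, the other rows are flat background.
img = [[50] * 8,
       [50] * 8,
       [0, 255] * 4,
       [255, 0] * 4,
       [50] * 8]
var = row_intensity_variation(img)
plate_rows = [i for i, v in enumerate(var) if v == max(var)]
print(plate_rows)  # indices of the rows with the strongest variation
```

The second step described above would then keep these adjacent high-variation rows as the candidate horizontal band containing the plate.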
The use of experts in identifying landmarks is common in medical image processing, but due to intra-expert and inter-expert variability, it is often desirable to find an automatic method. The paper by Sanchez Castro et al. entitled "A Cross Validation Study of Deep Brain Stimulation Targeting: From Experts to Atlas-Based, Segmentation-Based and Automatic Registration Algorithms" presents a validation study comparing expert performance with non-rigid registration in the task of identifying the subthalamic nuclei. Since the subthalamic nuclei are usually not clearly identifiable in clinical MRI, the issue of an appropriate reference standard is raised. In this work, landmark localization performance is assessed both in a limited test data set where the subthalamic nuclei are clearly visible, and by examination of the influence of alignment of the surrounding anatomy upon the accuracy of localization of the subthalamic nuclei. The validation study carried out by the authors enables them to conclude that automatic localization of the subthalamic nuclei can be achieved with an accuracy not different from that of interactive localization by experts. In "Generalised Overlap Measures for Evaluation and Validation in Medical Image Analysis", Crum et al. present a framework in which a single figure-of-merit and a complementary measure of error (the Overlap Distance) can be used to capture the extent of non-overlapping parts when registering MR brain images. The process is demonstrated by constructing ground truth for a set of brain atlas images that can then be used to evaluate various segmentation algorithms that others may wish to use for algorithm performance comparisons. Deligianni et al. also deal with registration issues in their paper "Non-Rigid 2D/3D Registration for Patient Specific Bronchoscopy Simulation with Statistical Shape Modelling: Phantom Validation".
This paper proposes and validates a practical 2D/3D registration framework that incorporates patient-specific deformations captured by 3D tomographic imaging and catheter tip electromagnetic tracking. The incorporation of data from the catheter tip tracking reduces the number of parameters that control airway deformation (modelled by an Active Shape Model), significantly simplifying the optimization problem.
Image processing is spreading into various fields. Image processing is a method commonly used to improve raw images received from various sources. It is a technique for transforming an image into digital form and performing certain operations on it, in order to create an improved image or to extract valuable information from it. It is a kind of signal processing in which the input is an image and the output is also an image or features associated with that image. The purposes of image processing can be divided into several groups, which are given below.
Similar to Efros and Leung, pixels are synthesised one at a time using a heuristic measurement taken from the input image. In  the stability of  is improved by introducing the following features. Firstly, the Manhattan distance is used when computing the similarity between two neighbouring blocks, as it is more forgiving to outliers. Secondly, the order in which pixels are synthesised in the new image is changed. In the Efros and Leung algorithm, pixels are synthesised based on the number of known neighbours in their spatial neighbourhood. Pixels with a higher number of known neighbours are synthesised before those with a lower number. In  the synthesising order is determined by a priority value assigned to each location in the output image. Higher priority sites are synthesised before those of lower priority. To assign a priority value to each site, a normalised weighting is defined which indicates the relative amount of information a pixel gives about each of its neighbours. The priority of each empty location in the output image is then defined as the sum of the weightings from neighbouring pixels. To create the weightings, interactions between neighbouring pixels in the input image are analysed. This order-based synthesis addresses to some extent the scale dependency of , however it does not remove it entirely. The computational burden associated with the exhaustive nearest neighbour searching inherent in  is also removed in . Rather, a kd-tree structure is introduced in order to reduce the search time. As with most nearest neighbour searching approximations, the reduction in search time comes at the expense of increased memory storage.
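To see why the Manhattan (L1) distance is more forgiving to outliers than the squared (L2) distance when comparing pixel neighbourhoods, consider two candidate blocks that sit at the same L1 distance from a reference block; the pixel values are contrived for illustration.

```python
def manhattan_distance(block_a, block_b):
    """L1 distance between two flattened pixel neighbourhoods."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def squared_distance(block_a, block_b):
    """Squared (L2-style) distance; squaring amplifies large deviations."""
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b))

a = [10, 10, 10, 10]
b = [10, 10, 10, 110]   # identical except one outlier pixel
c = [35, 35, 35, 35]    # uniformly different by 25 everywhere

print(manhattan_distance(a, b), manhattan_distance(a, c))  # 100 100
print(squared_distance(a, b), squared_distance(a, c))      # 10000 2500
```

Under L1 the two candidates are equally distant, but the squared distance penalises the single-outlier block four times as heavily, so an L2-based search would reject an otherwise excellent match over one bad pixel.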
This research work is about watermarking in image processing. The increasing amount of research on watermarking over the past decade has been largely driven by its important applications in digital copyright management and protection. One of the first applications of watermarking was broadcast monitoring: it is often crucially important to be able to track when a specific video is being broadcast by a TV station. This is important to advertising agencies that want to ensure that their commercials are getting the air time they paid for, and watermarking can be used for this purpose. The research work provides a study of the existing research in the field of watermarking in image processing, and it analyzes the loopholes of that existing research. This research explains the watermarking technology more fundamentally. Finally, images with embedded watermarks were tested, and the features and quality of the embedded watermarks were analyzed. The survival abilities of the watermarks were verified, and a description of the testing process and results is given.