A fast algorithm for mesh-based motion estimation employing uniform triangular patches is proposed. The technique utilises an embedded block model to estimate the motion of the mesh grid points. Without the need for time-consuming evaluation, the algorithm reduces the number of search iterations according to the inherent motion. A block-wise coding approach is taken for the motion information, permitting any picture degradation caused by the fast algorithm to be successfully compensated by the residue coding. Simulations on three classes of test sequence show that the proposed algorithm achieves better PSNR-rate performance than the hexagonal matching algorithm. Moreover, a reduction of up to 91% in mesh iterations is obtained.
To test the different motion estimation algorithms in the spatial domain, they were applied within a video compression algorithm, as demonstrated in Figure 3, which illustrates how motion estimation is used in compression. As can be seen in Figure 3, the difference between the current frame and the previous frame, compensated with one of the previously mentioned methods, is sent to the encoder. By sending only the difference between the two frames, the minimum amount of information is transmitted to the decoder. The residual image is then encoded and decoded by the SPIHT method, which is based on the wavelet transform. While the motion estimation process is carried out, the motion vectors obtained by these three methods are sent to the decoder. In the decoder, the previous frame, which was transmitted in full, produces the restored image by means of the motion vectors. This image is added to the encoder's residual image, and in this way the original image is reproduced. This process is performed for every image in a sequence, so that all images are restored by this method.
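The closed loop above — a motion-compensated prediction from the previous frame, a residual sent to the encoder, and reconstruction at the decoder — can be sketched as follows. This is a minimal sketch with whole-pixel block translation and an uncoded residual standing in for SPIHT coding; the frame size, block size and motion vectors are illustrative.

```python
import numpy as np

def compensate(prev, mv, block=8):
    """Build a motion-compensated prediction by shifting each block of the
    previous frame by its motion vector (whole-pixel model, clipped borders)."""
    h, w = prev.shape
    pred = np.zeros_like(prev)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = mv[by // block][bx // block]
            ys = np.clip(np.arange(by, by + block) + dy, 0, h - 1)
            xs = np.clip(np.arange(bx, bx + block) + dx, 0, w - 1)
            pred[by:by + block, bx:bx + block] = prev[np.ix_(ys, xs)]
    return pred

# Encoder side: only the residual is transmitted (uncoded here).
prev = np.random.randint(0, 256, (16, 16)).astype(np.int16)
curr = np.roll(prev, 2, axis=1)                  # pure horizontal motion
mv = [[(0, 2), (0, 2)], [(0, 2), (0, 2)]]        # one vector per 8x8 block
residual = curr - compensate(prev, mv)

# Decoder side: prediction from the reference frame plus decoded residual.
restored = compensate(prev, mv) + residual
```

Because the residual is computed against the same prediction the decoder builds, the reconstruction is exact even if the motion vectors are inaccurate — any prediction error simply migrates into the residual.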
Template-based methods exploit a template model as a prior of the subject to be captured. Template models have been widely used mostly for estimating correspondences and for augmenting missing information in the captured data. The first full-body dynamic capture of human subjects was presented by Sand and others in 2003. Although they did not use a geometric model as a template, a predefined skeletal structure together with a primitive deformation model was provided to assist in estimating the configuration of the subjects. Allen and others built a general template model to estimate the deformation space with respect to different body shapes. In their work, they defined a small set of predefined landmarks on the template model to ease the estimation of correspondences between captured data. For the same purpose, predefined landmarks have been commonly used in other methods, such as those in –. However, these were mostly limited to acquiring static objects or a few sequences of dynamic objects, owing to the complexity of automatically building correspondences. In 2008, two interesting works were presented simultaneously by two different groups, with similar capture setups: one by Vlasic and others, the other by de Aguiar and others. Both use template models extensively for correspondences and for filling in missing information. In particular, one of them exploited a lower-resolution version of the volumetric template model to track fast and complex non-rigid motion effectively. Most recently, Li and others presented a method for capturing complex dynamic motion in only a single-view setup. They also used their template at a lower resolution so as to achieve robustness and efficiency of capture.
direction, the algorithm must identify relevant features (often line segments) and define a grouping strategy that allows the identification of feature sets, each of which may correspond to an object of interest (e.g. a potential vehicle or road obstacle). Vertical edges are more likely to form dominant line segments corresponding to the vertical boundaries of the profile of a road obstacle. Moreover, a dominant line segment of a vehicle must have other line segments in its neighborhood that are detected in nearly perpendicular directions. Thus, the detection of vehicles and/or obstacles can simply consist of finding the rectangles that enclose the dominant line segments and their neighbors in the image plane [2,30]. To improve the shape of object regions, Refs. [32,33] employ the Hough transform to extract consistent contour lines, and morphological operations to restore small breaks in the detected contours. Symmetry provides an additional useful feature for relating these line segments, since vehicle rears are generally contour- and region-symmetric about a vertical central line. Edge-based vehicle detection is often more effective than background removal or thresholding approaches, since the edge information remains significant even under variations in ambient lighting.
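The core idea — strong vertical edges bounding a rectangular object region — can be sketched with numpy only. A simple horizontal-difference filter stands in for a proper edge detector, and the grouping step is reduced to one enclosing rectangle; the threshold and the toy frame are assumptions, not the cited papers' parameters.

```python
import numpy as np

def vertical_edge_map(img):
    """Horizontal intensity gradient; strong responses mark vertical edges
    such as the side boundaries of a vehicle profile."""
    gx = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:].astype(float) - img[:, :-2].astype(float)
    return np.abs(gx)

def enclosing_rect(edges, thresh):
    """Bounding rectangle of all strong vertical-edge pixels — a stand-in
    for grouping dominant segments with their perpendicular neighbours."""
    ys, xs = np.nonzero(edges > thresh)
    if len(ys) == 0:
        return None
    return (ys.min(), xs.min(), ys.max(), xs.max())

# Hypothetical toy frame: a bright 'vehicle' rectangle on a dark road.
frame = np.zeros((40, 60), dtype=np.uint8)
frame[10:30, 20:45] = 200
rect = enclosing_rect(vertical_edge_map(frame), thresh=100)
```

In a real system the Hough transform and morphological closing mentioned above would replace the naive thresholding before the rectangle is fitted.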
A new fast full search motion estimation algorithm for optimal motion estimation is proposed in this paper. Two methods are presented: the Fast Computing Method (FCM), which calculates tighter boundaries faster by exploiting computational redundancy, and the Best Initial Matching Error Predictive Method (BIMEPM), which predicts the best initial matching error and thereby enables the early rejection of highly improbable candidate blocks. By combining FCM and BIMEPM, the proposed algorithm attains the optimal solution with fewer computations. Experimental results show that the proposed algorithm outperforms previous optimal motion estimation algorithms such as the Successive Elimination Algorithm (SEA), the Multilevel Successive Elimination Algorithm (MSEA) and Fine Granularity Successive Elimination (FGSE) on several video sequences. The number of operations for the proposed algorithm is reduced to 1/52 of Full Search (FS), whereas the MSEA and FGSE algorithms reduce computations to 1/40 and 1/42 of FS, respectively. Finally, the proposed algorithm is modified into a suboptimal motion estimation algorithm that introduces only a small average PSNR drop of around 0.2 dB but achieves very fast computation. The superiority of this suboptimal algorithm over several fast motion estimation algorithms is also demonstrated experimentally.
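FCM and BIMEPM refine successive-elimination-style bounds. The basic SEA rejection test that this family of algorithms builds on can be sketched as follows — an illustrative sketch of the classic SEA bound (|Σref − Σcand| ≤ SAD, by the triangle inequality), not the paper's exact method.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (8, 8))
best = sad(ref, rng.integers(0, 256, (8, 8)))   # error of the best match so far

cand = rng.integers(0, 256, (8, 8))
bound = abs(int(ref.sum()) - int(cand.sum()))   # SEA lower bound on SAD
# Since bound <= SAD(ref, cand), a candidate whose bound already exceeds
# the current best error cannot win and is rejected without a full SAD.
if bound < best:
    best = min(best, sad(ref, cand))
```

A tighter bound (as in MSEA) splits the blocks into sub-blocks and sums the per-sub-block bounds, trading a little extra work for many more early rejections.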
In this paper, we present a channel assignment scheme for multi-radio WMNs (Wireless Mesh Networks) to provide high-throughput paths, especially for heavily loaded nodes with the best connectivity to the gateway (e.g. in terms of highest rate, lowest interference, or both). We observed the flows on the links and the data packets at each wireless access point in an existing wireless mesh backbone, using log files of traffic flows generated at the gateway. From these observations, we estimate the traffic load on each network link using a load estimation algorithm. We then assign the links carrying the maximum load to the minimum-interference (i.e. non-interfering) channel, based on IEEE 802.11. We also highlight some problems arising in WMNs and discuss possible strategies to resolve them.
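A greedy sketch of the load-aware assignment idea follows; the link loads are hypothetical, and using the least-loaded channel as a proxy for least interference is a simplification, not the paper's exact algorithm.

```python
# Links with the highest estimated load are assigned first, each to the
# channel currently carrying the least load (≈ least interference).
link_load = {"A-B": 40.0, "B-C": 12.5, "A-C": 30.0}   # Mbps, from flow logs
channels = [1, 6, 11]                                  # orthogonal 802.11b/g channels

usage = {c: 0.0 for c in channels}
assignment = {}
for link, load in sorted(link_load.items(), key=lambda kv: -kv[1]):
    ch = min(channels, key=lambda c: usage[c])  # least-used channel
    assignment[link] = ch
    usage[ch] += load
```

With the illustrative loads above, the heaviest link gets a channel to itself, which is the behaviour the scheme aims for on the gateway-adjacent links.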
The flowchart of the proposed method is shown in Figure 3.1. The captured video frames will contain up-and-down vibrations caused by the user's walking. First, the vibration is eliminated in order to obtain stable images. The stabilised frames are passed to the next stage, where strong feature points are detected. The relevant feature points are extracted and then tracked using feature trackers, which yield the motion vectors used to detect moving objects. By analysing the motion vectors, the system judges whether a detected moving object is moving towards the user or away from the user. If it moves towards the user, the size of the object in the image frame is calculated. If the area exceeds a threshold value, there is a chance of collision and the system notifies the user with the sentences "object is coming towards", "object is going away" and "chance of collision". The system continuously speaks "chance of collision" until the person moves from his/her position to a safer area.
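The decision logic of the final stage can be sketched as follows; the area threshold and the growth test are assumed placeholders, not values from the system described above.

```python
AREA_THRESHOLD = 5000  # pixels; hypothetical value

def warn(area_prev, area_curr):
    """Map the tracked object's apparent area change to the three spoken
    messages: shrinking means receding, growth past the threshold means
    an imminent collision."""
    if area_curr <= area_prev:
        return "object is going away"
    if area_curr > AREA_THRESHOLD:
        return "chance of collision"
    return "object is coming towards"
```

The apparent area grows roughly with the inverse square of distance, which is why a simple area threshold works as a proximity alarm.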
and random destinations is Θ(W/√(n log n)) bps. If optimal conditions are assumed, the transport capacity (bit-distance product) is Θ(W√(An)) bit-meters per second and, as a result, the throughput is Θ(W/√n) bps. In their analysis, two types of networks are used – arbitrary and random networks, where node locations and traffic destinations are arbitrary and random, respectively. For both types of networks, two wireless transmission-reception models are used – the protocol and physical models. In the protocol model, a successful transmission is determined based on the ratio of distances. In the physical model, a transmission is successful when the signal-to-interference-and-noise ratio (SINR) is greater than a threshold. They used the product of bits and distance as an indicator of transport capacity. Although they presented the asymptotic capacity of wireless networks through rigorous theoretical analyses, their work lacks practical applicability because they did not consider coordinated access to the wireless channel. However, we borrow their concept of protocol and physical models, which turns out to be useful in modeling wireless transmission and reception in our estimation of WMN capacity.
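The physical-model reception test can be written directly from its definition; all powers and the threshold β below are illustrative values, not figures from the analysis above.

```python
def sinr_ok(signal, interference, noise, beta):
    """Physical model: a transmission succeeds when the received signal
    power over noise plus summed interference power meets the SINR
    threshold beta."""
    return signal / (noise + sum(interference)) >= beta
```

With signal power 1.0, noise 0.01 and two weak interferers (0.01, 0.02), the SINR is 25, so the transmission succeeds at β = 10; a single strong interferer of 0.2 drops the SINR below 5 and the same transmission fails.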
Despite the minor importance of degraded filtering accuracy, an experiment comparing motion estimation degradation was carried out to evaluate the loss of accuracy in the overall algorithm. As benchmarks, we used a couple of synthetic sequences widely accepted in this context: the 'diverging tree' and the 'translating tree', both created by David Fleet at Toronto University. The 'diverging tree' shows an expansive motion of a tree (in camera-zoom mode) with an asymmetric velocity range depending on the pixel position (null at the central focus, and 1.4 pixels/frame and 2 pixels/frame at the left and right boundaries, respectively). The 'translating tree' shows the translational motion of a tree with an asymmetric velocity range depending on the pixel position (zero to 1.73 pixels/frame and zero to 2.3 pixels/frame at the left and right borders, respectively). As an error metric, we used that of Barron, considered one of the most accepted metrics in the specialised literature.
ABSTRACT: The extent of video forgery on the Internet has grown with the rise of software that makes it possible for any user to upload, download and share audio, images and video online. With the wide availability of powerful media editing tools, it has become much easier to manipulate or even tamper with digital media without leaving any perceptible traces. This leads to increasing concern about the trustworthiness of digital media content, and there is a pressing need to develop effective forensic techniques to verify the authenticity, originality, and integrity of media content. Digital media production and editing technologies have led to widespread forgeries and unauthorized sharing of digital video. The Gaussian Mixture Model (GMM) is a well-known algorithm that is robust against repetitive motion, illumination changes and long-term scene changes. Adaptive Noise Cancellation (ANC) is another algorithm with significant robustness against shadow, noise, lighting changes, etc. In this paper, a background is built for each frame by the GMM method and used in place of the previous frame in the ANC algorithm. This background is much closer to the real background than the previous frame used by ANC. Simulation results show that the proposed algorithm detects motion more efficiently than other algorithms. This paper presents a method to detect forgery by a motion estimation algorithm.
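As an illustration of background-model-based motion detection, the following keeps a single running Gaussian mean per pixel — a simplified stand-in for the per-pixel mixture of the GMM described above; the learning rate and threshold are assumed values.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update; alpha is an assumed learning rate."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background model are flagged as moving."""
    return np.abs(frame.astype(float) - bg) > thresh

bg = np.zeros((8, 8))
frame = np.zeros((8, 8)); frame[2:5, 2:5] = 255.0   # a bright moving object
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

A full GMM keeps several Gaussians per pixel and so also absorbs repetitive motion (e.g. swaying trees), which this single-mode sketch cannot.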
Moving Object Tracking using Optical Flow and Motion Vector Estimation observes that moving object detection and tracking is a growing research field, with vast applications in traffic inspection, 3D surveillance, motion analysis (human and non-human), activity recognition, medical imaging, etc. It designs a modern object perception and tracking algorithm that uses optical flow together with motion vector estimation to detect and track objects across successive frames. The optical flow gives significant information about object movement even when no quantitative parameters are computed. The motion vector estimation strategy provides an estimate of object position from successive frames, which improves the accuracy of the algorithm and yields robust results independent of image blur and changing backgrounds. Adding a median filter makes the algorithm faster in the presence of noise, but the proposed tracker detects a person with only slight movement, and indeed the moving shadow of the same person, as a moving object. Performance of Optical Flow Techniques evaluates strategies on a model set of real and standard image sequences. Results on various common problems of optical flow techniques, covering differential, matching, energy-based and phase-based methods, are reported. The comparisons are mainly experimental and focus on the accuracy, reliability and density of the velocity measurements; they show that performance can vary substantially among the implemented methods. Segmentation of Vehicles Based on Motion Estimation applies the Lucas-Kanade algorithm to evaluate displacement vectors and identify motion, separating vehicles from the background by a threshold operation; the movement of vehicles is then tracked using blobs in a video sequence.
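The Lucas-Kanade step mentioned last can be sketched for a single patch — an illustrative least-squares solve of the brightness-constancy constraint, not the cited papers' implementations:

```python
import numpy as np

def lucas_kanade_patch(I0, I1):
    """Single-patch Lucas-Kanade: solve the normal equations of
    Ix*dx + Iy*dy = -It in least squares over the whole patch."""
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy) in pixels

# Toy example: a smooth intensity ramp shifted one pixel to the right.
x = np.arange(16.0)
I0 = np.tile(x, (16, 1))
I1 = np.tile(x - 1.0, (16, 1))   # same ramp displaced by +1 px in x
dx, dy = lucas_kanade_patch(I0, I1)
```

Real trackers run this over small windows inside an image pyramid; the single-patch version shows only the core linear solve.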
In this paper, we propose an RDO-based rate control scheme for H.264 with two-step QP determination but single-pass encoding, in order to maximize video quality by appropriately determining the QP for each macroblock; the scheme is based on our previous work. To break the chicken-and-egg dilemma resulting from QP-dependent rate-distortion optimization (RDO) in H.264, a pre-analysis phase is conducted to gather the necessary source information, and a coarse QP is then decided for R-D estimation. After QP-dependent motion estimation (with the coarse QP), we further refine the QP of each mode based on the actual standard deviation of the motion-compensated residues. Using this standard deviation, a QP can be calculated for each possible mode. These QPs are then used in the comparison of each mode's rate-distortion (RD) cost (RDcost), and the encoder chooses the mode with the minimum value. Carefully selected QPs thus ensure accurate bit allocation to individual MBs according to their actual needs. The QP refinement process helps achieve good video quality for a given bit budget. In addition, the header bits and coefficient bits are estimated separately, so that the rate control accuracy is further enhanced. In the encoding process, RDO is performed only once for each macroblock (hence one-pass), while QP determination is conducted twice. Therefore, the increase in computational complexity is small compared to that of the JM 9.3 software. Experimental results indicate that our rate control scheme not only effectively improves the average PSNR but also controls the target bit rates well.
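The mode comparison above reduces to minimising the Lagrangian RD cost J = D + λR. A toy sketch with illustrative distortion/rate values and an assumed λ (in H.264, λ is derived from the QP):

```python
# Candidate modes with hypothetical distortion (SSD) and rate (bits) values.
modes = {
    "inter16x16": {"D": 120.0, "R": 300},
    "inter8x8":   {"D": 90.0,  "R": 520},
    "intra4x4":   {"D": 60.0,  "R": 900},
}
lam = 0.2  # Lagrange multiplier; assumed value for illustration

def rdcost(m):
    """RDcost J = D + lambda * R for mode m."""
    return modes[m]["D"] + lam * modes[m]["R"]

best_mode = min(modes, key=rdcost)  # the encoder picks the minimum-J mode
```

Note how a larger λ (higher QP, tighter bit budget) shifts the winner toward cheap-rate modes even at higher distortion — exactly the trade-off the refined per-mode QPs steer.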
dividing a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait – often proximity according to some defined distance measure. Many clustering schemes are categorized by their special characteristics, such as the hard clustering scheme and the soft (fuzzy) clustering scheme. The conventional hard clustering scheme restricts each point of the data set to exactly one cluster. As a consequence, the result of segmentation with this approach is very crisp, i.e., each pixel of the image belongs to exactly one class. However, in many real situations, image issues such as poor contrast, limited spatial resolution, overlapping intensities, intensity inhomogeneity variations and noise make such hard (crisp) segmentation difficult. In fuzzy (soft) clustering, data elements can belong to more than one cluster, with the degree of belonging described by a membership function, as in fuzzy set theory. The most popular of the fuzzy clustering methods is the fuzzy c-means (FCM) algorithm, because it gives much more information than hard segmentation methods and is robust to ambiguity.
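For concreteness, the standard FCM membership update (with fuzzifier m) is u_ik = 1 / Σ_j (d_ik/d_jk)^(2/(m−1)); a minimal 1-D implementation with illustrative data:

```python
import numpy as np

def fcm_memberships(data, centers, m=2.0):
    """Standard fuzzy c-means membership update for 1-D data:
    u_ik = 1 / sum_j (d_ik / d_ij) ** (2 / (m - 1))."""
    d = np.abs(data[:, None] - centers[None, :]) + 1e-12   # N x C distances
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

data = np.array([0.0, 0.1, 5.0, 5.2])
centers = np.array([0.05, 5.1])
u = fcm_memberships(data, centers)
```

Unlike a hard assignment, every point carries a full membership vector summing to one, so pixels near a class boundary keep graded memberships in both classes.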
Abstract: Parametric motion estimation is an important task for various video processing applications, such as analysis, segmentation, and coding. However, the main disadvantage of standard approaches to parametric motion estimation (PME) is their increased computational complexity for higher-degree motion models, compared to block-based local motion estimation approaches. This paper proposes a new low-complexity PME algorithm in which full-precision images are replaced with 1 bit-per-pixel images, allowing many of the arithmetic operations in the standard PME approach to be replaced with logic operations.
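The core idea can be sketched as follows; mean-threshold binarisation is an assumed stand-in for the paper's 1-bit transform, but the matching cost really does become a pure logic operation:

```python
import numpy as np

def one_bit(img):
    """Binarise an image to 1 bit per pixel (assumed mean-threshold rule)."""
    return img > img.mean()

def bit_mismatch(a_bits, b_bits):
    """Matching cost as the count of differing bits: XOR plus popcount,
    replacing the additions/subtractions of a full-precision SAD."""
    return int(np.count_nonzero(a_bits ^ b_bits))

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (16, 16))
same = bit_mismatch(one_bit(block), one_bit(block))          # identical blocks
diff = bit_mismatch(one_bit(block), one_bit(255 - block))    # inverted block
```

Packing the bit-planes into machine words would let one XOR compare 64 pixels at a time, which is where the complexity saving of the 1-bit representation comes from.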
the sequence free of occlusions, a temporal average of this aligned data would be optimal, with the noise variance decreasing as 1/M, where M is the number of adjacent frames involved in the process. In general, however, this will not be the case: inaccuracies and errors in the computed flow, and the presence of occlusions, make this temporal average likely to blur the sequence and produce artifacts near occlusions. The proposed approach addresses these limitations. Occlusions are detected from the divergence of the computed flow: negative divergence values indicate occlusions. Additionally, the color difference is checked after flow compensation; a large difference indicates occlusion, or at least failure of the color constancy assumption. We combine both criteria for a pixel x = (x, y) and the flow computed between I0 and I1. The occluded points, having a negative flow divergence and a large color difference after flow compensation, are located near the discontinuities of the motion field. Motion compensation is then performed patch-wise.
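The two occlusion criteria combine into a simple mask; the thresholds below are assumed values, and the toy flow field is constructed so that two columns of pixels converge (negative divergence) at an occlusion boundary:

```python
import numpy as np

def occlusion_mask(u, v, color_diff, div_thresh=-0.1, col_thresh=20.0):
    """Flag pixels with strongly negative flow divergence AND a large
    colour difference after flow compensation."""
    div = np.gradient(u, axis=1) + np.gradient(v, axis=0)   # div of the flow
    return (div < div_thresh) & (color_diff > col_thresh)

# Toy flow converging toward a vertical occlusion boundary at x = 8.
x = np.arange(16.0)
u = np.tile(np.where(x < 8, 1.0, -1.0), (16, 1))  # opposing horizontal motion
v = np.zeros((16, 16))
color_diff = np.full((16, 16), 50.0)              # constancy fails everywhere
mask = occlusion_mask(u, v, color_diff)
```

Only the two columns straddling the motion discontinuity are flagged, matching the observation that occluded points sit near the discontinuities of the motion field.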
In this study, a PC with a dual-core CPU and 4 GB of memory is used as the hardware platform in all experiments, with VC6.0 as the programming environment. Images (352 × 288) acquired by a fixed camera are tested; the test source is surveillance video collected in a natural environment. The algorithm in this study can effectively filter the background noise and obtain partial optical flow information in the smooth region of the black car in the ROI. HS optical flow computation takes 121 ms, while the time consumed by the proposed algorithm is reduced to 45 ms, so the proposed algorithm effectively guarantees real-time performance. The error rates of the two methods can be calculated using the proximity membership grade between the estimated motion vector and the exact vector, as illustrated in Figure 4. The error rate of the improved optical flow algorithm is 15% lower than that of the HS algorithm, and its time consumption is improved by nearly three times, as shown in Figure 5. Here, the testing times and moving velocity are increased linearly. The experimental results of the two methods are compared across different vehicle types; four common types are adopted in this study, namely SUV, car, bus, and jeep. The comparison of time consumption (IHS/HS) is shown in Figure 6. IHS has obvious advantages, which shows the universality of the improved optical flow algorithm.
and reduce the total transmission bit rate, which plays an important role in the video compression process. The block-matching algorithm (BMA), as shown in Figure 1, is the most common and efficient method for motion estimation. In this algorithm, each frame is divided into equal-sized non-overlapping blocks, and each block of the current frame F is compared with the corresponding block and its neighbors within a search window in the previous frame F-1 to find the best matching block. Several criteria can be used to select the candidate block, such as the Mean Absolute Difference (MAD), Mean Squared Error (MSE) and Sum of Absolute Differences (SAD). The proposed work uses Equation (1) to calculate the MAD.
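The MAD criterion (presumably what Equation (1) defines) is the mean of absolute pixel differences between the current block and a candidate block:

```python
import numpy as np

def mad(cur, cand):
    """Mean Absolute Difference: (1/N) * sum |current - candidate|
    over all N pixels of the block."""
    return float(np.abs(cur.astype(int) - cand.astype(int)).mean())

cur = np.array([[10, 20], [30, 40]])
cand = np.array([[12, 18], [33, 44]])
cost = mad(cur, cand)   # (2 + 2 + 3 + 4) / 4
```

SAD is the same quantity without the division by N, which is why the two criteria always rank candidate blocks identically.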
DR. YU HUANG was born in the PR China. He received his B.S. degree in 1990 from the Dept. of Information & Control Engineering, Xi'an Jiaotong University, Xi'an, PR China, his M.S. degree in 1993 from the Department of Electrical Engineering, Xidian University, Xi'an, PR China, and his PhD degree in engineering in 1997 from the Institute of Information Science, Northern Jiaotong University, Beijing, PR China. From April 1997 to April 1999, he was an assistant professor and postdoctoral fellow at the Dept. of Computer Science & Technology, Tsinghua University, Beijing, PR China. In Sept. 1999 he joined the Chair for Pattern Recognition, University of Erlangen-Nürnberg, Erlangen, Germany, as a research fellow of Prof. H. Niemann, supported by the Alexander von Humboldt Foundation. His interests include motion-based segmentation, video indexing, vision-based HCI and augmented reality.
This process is known as motion compensation (MC), and the prediction so produced is called the motion-compensated prediction (MCP) or the displaced frame (DF). In this case, the coded prediction error signal is called the displaced-frame difference (DFD). A block diagram of a motion-compensated coding system is illustrated in Figure 1.3; this is the most commonly used inter-frame coding method. The reference frame employed for ME can occur temporally before or after the current frame; the two cases are known as forward prediction and backward prediction, respectively. The prediction can be observed in Figure 1.