
Hardware Efficient Mean Shift Clustering Algorithm Implementation on FPGA


Volume 3, Issue 5, May 2014

Page 460

Abstract

Image processing is essential because it provides vital steps for many visual perception and tracking applications. Image segmentation using mean shift clustering is widely recognized as one of the most computationally demanding tasks in image processing, and it suffers from poor scalability with respect to both the number of iterations and the image size. In this work we focus on building a finely tuned, high-performance hardware architecture for the computational requirements of the mean shift algorithm. By mapping the algorithm to reconfigurable hardware we can efficiently cluster many pixels independently. By implementing the mean shift architecture on an FPGA, we achieve a speedup of three orders of magnitude with respect to our software baseline.

Keywords: Mean shift, Clustering, Reconfigurable hardware, Image segmentation.

1.

INTRODUCTION

The contributions of Comaniciu and Meer in the early 2000s demonstrated the performance advantages of the mean shift algorithm by efficiently applying it to image processing applications, chiefly image segmentation, tracking and edge detection. For image segmentation, the mean shift algorithm has become an increasingly popular solution, yielding good results and providing a solid stepping stone for high-level vision tasks. Image segmentation maps the input image to smaller component regions that share a common feature. This process helps in analyzing and interpreting the image, transforming it from a random collection of pixels into a distinct arrangement of perceptible objects. Image segmentation is used to locate and separate objects from the background and from one another, and to find boundaries, contours or any general region of interest. It is also instrumental in a multitude of domains, with applications in computer/machine vision, medical imaging and diagnosis, satellite imaging and face recognition, to name just a few. However, Fukunaga and Hostetler stated in their initial publication that their algorithm "may be expensive in terms of computer time and storage". Even on today's much improved computational platforms, their assessment remains true. The task is especially challenging because the algorithm scales poorly with both the number of pixels and the number of iterations. Speeding up high-dimensional clustering in statistical data analysis has been the focus of numerous papers that have used different techniques to accelerate clustering algorithms. These techniques primarily concentrate on incorporating better distance functions, better techniques for stopping early, and data-dimensionality reduction. Other approaches have transitioned from conventional CPU platforms to GPUs and many-node hardware platforms in an effort to accelerate convergence.
Recently, efforts to accelerate the mean shift algorithm have targeted FPGA technology and have achieved the most promising results so far. In this paper, we tackle the problem of prohibitive execution time by first breaking the algorithm down to its smallest grain, which is the movement of a single pixel. We begin by showing that the fine granularity of the algorithm allows all pixels to be shifted in parallel towards their respective clusters because of their computational independence. By first designing a pipeline dedicated to the individual movement of one pixel, and then scaling this approach to include hundreds more, we demonstrate that the algorithm can be accelerated without incurring significant overhead. The computational platform we propose is ideal for this scenario, consisting of gate-array fabric on which these pipelines can be replicated, so that each one processes an individual pixel's movement over multiple iterations. This embarrassingly parallel approach is first replicated to effectively utilize all FPGA resources and then further scaled up to a board-level design. The speedup we attain, by unrolling the outer loop and clustering pixels in parallel (wide parallelism) while pipelining the inner loop and accumulating pairwise pixel interactions every clock cycle (deep parallelism), is over one-thousand fold.

2.

BACKGROUND

The mean shift algorithm has been intuitively described as a "hill climbing" technique, because the data points move in the direction of the estimated gradient. It is also commonly called a "mode seeking" algorithm, because the data points cluster around the nearest density maxima (the modes of the data). The underlying approach of mean shift is to transform the sampled feature space into a probability density space and find the peaks of the probability density function (PDF). The regions of highest density are the cluster centers. This method is considered an unsupervised clustering method because the number of clusters and their locations are determined solely by the distribution of the data set.

Kiran B.V¹, Sunil Kumar K.H²

¹PG scholar, CMRIT, Bangalore, Karnataka, India


3.

MATHEMATICAL FRAMEWORK

The Parzen window technique is a very popular approach to estimating the PDF of a random variable. The kernel density estimator for a d-dimensional space (d = 3 for our pixel features), given n samples, can be expressed as:

f(x) = (1 / (n h^d)) Σ_{i=1}^{n} K((x − x_i) / h)    (1)

where h is the kernel size or bandwidth parameter. A wide range of kernel functions can be employed in estimating the PDF. The most frequently used kernel, and the one we have chosen to evaluate the PDF with, is the Gaussian kernel:

K(x) = (2π)^(−d/2) exp(−‖x‖² / 2)    (2)

The next step is to calculate the gradient of the PDF and set it equal to zero to locate the peaks of the density function. The resulting mean shift equation using the Gaussian kernel is:

m(x) = Σ_{i=1}^{n} x_i exp(−‖x − x_i‖² / (2h²)) / Σ_{i=1}^{n} exp(−‖x − x_i‖² / (2h²))    (3)

The term m(x) is the new location of point x, while the distance traveled by the point from location x to m(x) is called the "mean shift" distance.
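The update in Eq. (3) can be sketched in software as follows. This is an illustrative Python model (the function names and the convergence tolerance are our own, not from the paper), not the hardware implementation:

```python
import numpy as np

def mean_shift_step(x, points, h):
    """One Gaussian mean-shift update (Eq. 3): move x toward the
    weighted mean of all samples, weighted by a Gaussian kernel of
    bandwidth h (Eq. 2)."""
    d2 = np.sum((points - x) ** 2, axis=1)   # squared distance to every sample
    w = np.exp(-d2 / (2.0 * h ** 2))         # Gaussian kernel weights
    return (w[:, None] * points).sum(axis=0) / w.sum()  # m(x)

def mean_shift(x, points, h, tol=1e-3, max_iter=100):
    """Iterate Eq. 3 until the mean shift distance falls below tol."""
    for _ in range(max_iter):
        m = mean_shift_step(x, points, h)
        if np.linalg.norm(m - x) < tol:
            return m
        x = m
    return x
```

Starting a point at 1.0 amid samples clustered near 0 and near 10 (with h = 1), the iteration converges to the mode of the nearer cluster, illustrating the "mode seeking" behavior described in Section 2.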

4.

APPROACH

We have chosen to focus on the Gaussian mean shift (GMS) algorithm for several key reasons. GMS is primarily a non-parametric and unsupervised clustering technique. This makes it a good candidate for an autonomous setting where human guidance is limited. GMS does not require any prior information with respect to the number of clusters, their shape, or the distribution of the data. The mean shift algorithm is also embarrassingly parallel thanks to its fine granularity, and it can be efficiently mapped to hardware for faster execution without sacrificing accuracy. Pixels move in the direction of the estimated PDF gradient at every iteration, but the gradient vector estimation (Eq. 3) can be evaluated at every pixel location in parallel. This key advantage is leveraged in our hardware design.

Each pixel in the dataset maps to a unique point in the feature space based on its grayscale value together with its x and y coordinate location (pixel(x, y, grayscale)). Even though this is an iterative algorithm, as pixels in the feature space converge toward their individual clusters, their new locations are never written back into the original image (as they are in the blurring variant of GMS). The gradient vector is estimated from the pixel distribution of the original image, allowing us to achieve hazard-free parallelization across multiple devices as long as every device holds a copy of the whole image in memory. Every FPGA can therefore segment a different region of the image without encountering boundary issues.
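The pixel-to-feature-space mapping described above can be sketched as follows, assuming the image is a 2-D array of grayscale values (the helper name is ours):

```python
import numpy as np

def image_to_features(img):
    """Map an HxW grayscale image to an (H*W, 3) feature matrix of
    (x, y, grayscale) points: each row is one pixel's location in the
    joint spatial-range feature space described in the text."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]          # per-pixel coordinate grids
    return np.stack([xs.ravel(), ys.ravel(), img.ravel()], axis=1).astype(float)
```

Because the feature matrix is built once from the original image and never modified during the iterations, any device holding a copy of it can cluster its own subset of rows independently.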

By successfully exploiting the inherent mean shift parallelism, we have designed a scalable hardware architecture capable of clustering many pixels in parallel. Every pixel benefits from its own dedicated pipeline and moves independently of all other pixels towards its cluster. This approach can easily be scaled to include multiple FPGAs without incurring significant overhead, as can be observed in the experimental results section. By parallelizing the mean shift algorithm and clustering regions of the image consisting of many pixels in parallel, we have leveraged the benefits of reconfigurable computing and effectively reduced computation time (loop unrolling).

5.

PIPELINE ARCHITECTURE

Image segmentation requires the pairwise distance computations of all pixels for evaluating the PDF. Since we use the Gaussian kernel to estimate the PDF, we designed a hardware block (Figure. 1) that can quickly compute squared Euclidean distances.
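A simple software model of such a squared-distance block, assuming the 3-D (x, y, grayscale) points described above (the function name is hypothetical):

```python
def squared_euclidean(p_i, p_j):
    """Software model of the distance stage: three parallel
    subtract-and-square units followed by an adder tree. Skipping the
    square root keeps the hardware small, and the Gaussian kernel
    only ever needs the squared distance."""
    dx = p_i[0] - p_j[0]
    dy = p_i[1] - p_j[1]
    dz = p_i[2] - p_j[2]
    return dx * dx + dy * dy + dz * dz
```

In hardware, each of the three subtract-square lanes works in parallel and the stage is fully pipelined, so one pixel pair enters and one squared distance emerges every clock cycle.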


The pipelined design allows a new pixel to be input every clock cycle, and enables us to efficiently stream pixel values from memory without stalling. The Euclidean distance block is further integrated in the pipeline design for evaluating the Gaussian kernel.

The input pixel_i (x, y and z values) is the pixel being moved by the pipeline to a new location, while pixel_j (x, y and z values) represents streaming pixel values from memory. All computational blocks are fully pipelined and can handle a steady stream of input pixels from memory.

Figure. 2 shows a more conservative approach to hardware resource utilization. In this architecture each pixel has its own dedicated pipeline, and each pipeline has its own dedicated divider, so there is no stalling while moving a pixel from one location to the next.

The application is scaled by replicating the pipelines to efficiently allocate all the available FPGA hardware resources. Each pipeline is dedicated to servicing the movement of one pixel.

The algorithm follows a regular pattern. The pixels being moved towards their modes are read from memory and kept in the on-board input delay registers. The entire image is then streamed from memory to each pipeline (requiring low memory bandwidth), and the output results are stored in the output delay registers. This process is repeated for multiple iterations until each pixel converges to a mode. A new batch of pixels is then read from memory, until the entire image has been serviced.
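The batch schedule described above can be sketched as a behavioral model (the function and parameter names are ours, and this is not the RTL):

```python
import numpy as np

def cluster_in_batches(points, h, n_pipelines, n_iters):
    """Behavioral model of the batch schedule: each 'pipeline'
    services one moving pixel; the full image is streamed past every
    pipeline once per iteration (the feedback loop of Figure. 2);
    when a batch converges, the next batch is loaded."""
    modes = points.copy()
    for start in range(0, len(points), n_pipelines):
        batch = modes[start:start + n_pipelines]      # input delay registers
        for _ in range(n_iters):                      # feedback iterations
            # one Gaussian weight per streamed pixel, per pipeline
            d2 = ((batch[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * h ** 2))
            batch = (w @ points) / w.sum(axis=1, keepdims=True)  # Eq. 3
        modes[start:start + n_pipelines] = batch      # output delay registers
    return modes
```

Note that the weights are always computed against the original `points`, matching the non-blurring scheme in which pixel positions are never written back into the image being streamed.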

Figure. 2: Feedback loop closed for running multiple iterations.

As seen in the above design, increasing the number of pipelines per FPGA increases the speed of operation; but performance does not depend on the number of pipelines alone, since the hardware resources used and the processing time at each stage also matter. The two contributing factors to the achieved performance are the pipelined design (deep parallelism) together with the parallelized pipeline approach (wide parallelism). Not only can we cluster multiple pixels in parallel, we can also stream a new pixel from memory into every pipeline without stalling.

6.

RESULTS

Performance results:


Figure. 3: Speedup achieved as the number of pipelines is increased.

Segmentation visual results:

Two grayscale images of varying resolution and complexity were segmented with our fixed-point architecture, and the fixed-point hardware results were compared with a floating-point implementation. Another measure of clustering accuracy is the misclassification error. The misclassification errors for the two images show that fewer than 10% of pixels converge to a different cluster in hardware.
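The misclassification error quoted above can be computed by comparing the fixed-point cluster labels against the floating-point reference; a minimal sketch, assuming the label arrays are already matched cluster-for-cluster (the function name is ours):

```python
def misclassification_error(labels_float, labels_fixed):
    """Fraction of pixels whose fixed-point cluster assignment
    differs from the floating-point reference -- the accuracy metric
    used above."""
    assert len(labels_float) == len(labels_fixed)
    mismatches = sum(a != b for a, b in zip(labels_float, labels_fixed))
    return mismatches / len(labels_float)
```

A value below 0.10 corresponds to the "less than 10% of pixels" result reported for the two test images.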


Figure. 4: Segmentation results.

7.

CONCLUSION

This paper proposes a novel method that is more efficient and incurs less delay by making use of a pipelined design and a parallelized pipeline design. The hardware, implemented on a Xilinx FPGA, gives satisfying results compared with previous work.

REFERENCES

[1] S. Craciun, G. Wang, A. D. George, H. Lam, Jose C., "A Scalable RC Architecture for Mean Shift Clustering," Department of Electrical and Computer Engineering, University of Florida, Gainesville, Florida.


[3] D. Comaniciu, P. Meer, "Mean Shift Analysis and Applications," Proc. Seventh International Conference on Computer Vision, pp. 1197-1203, Sept. 1999.

[4] P. Bai, C. Fu, M. Cao, Y. Han, "Improved Mean Shift Segmentation Scheme for Medical Ultrasound Images," Fourth International Conference on Bioinformatics and Biomedical Engineering (iCBBE), pp. 1-4, June 2010.

[5] W. Al-Nuaimy, Y. Huang, A. Eriksen, V. T. Nguyen, “Automatic Feature Selection for Unsupervised Image Segmentation,” Applied Physics Letters, Vol. 77, Issue: 8, pp. 1230-1232, Aug. 2000.

[6] A. Yamashita, Y. Ito, T. Kaneko, H. Asama, "Human Tracking with Multiple Cameras Based on Face Detection and Mean Shift," IEEE International Conference on Robotics and Biomimetics, pp. 1664-1671, Dec. 2011.


[8] H. Wang, J. Zhao, H. Li, J. Wang, "Parallel Clustering Algorithms for Image Processing on Multi-Core CPUs," International Conference on Computer Science and Software Engineering, Vol. 3, pp. 450-453, 2008.

[9] U. Ali, M. B. Malik, K. Munawar, "FPGA/Soft-Processor Based Real-Time Object Tracking System," Fifth Southern Conference on Programmable Logic (SPL), pp. 33-37, April 2009.

Figure. 1: Pipelined Euclidean distance hardware diagram. [1]
