
2017 2nd International Conference on Computer Science and Technology (CST 2017) ISBN: 978-1-60595-461-5

Moving Object Speed Estimation Method

Hao LONG1,2,a,*, Yi-nung CHUNG2, Chun-te LI2 and Chin-chung YU2

1College of Robotics, Beijing Union University, No. 97 Beisihuan East Road, Chaoyang District, Beijing, China

2Department of Electrical Engineering, National Changhua University of Education, No. 2, Shi-Da Road, Changhua, Taiwan

alonghao315@126.com

*Corresponding author

Keywords: Aerial image, Image motion analysis, HSV color space, Subtraction techniques.

Abstract. This paper presents a method for detecting road vehicles and estimating their speed in images acquired by unmanned aerial vehicles (UAVs). UAV photography, which conveniently captures images of vehicles and saves the time needed to set up fixed cameras, has become an important part of intelligent transportation systems (ITS). The proposed method starts with a transformation to the HSV color space in order to reduce the influence of lighting changes, and uses the saturation feature to remove shadows in the aerial images. The lane is then used as a scale to estimate vehicle speed, reducing the error introduced by variations in UAV flight altitude. The experimental results show that the method performs well at a height of 40 meters, with a vehicle speed estimation error of less than 3.3%.

Introduction

Today, local administrations exploit ground-based imaging technologies to estimate the speed of vehicles. Because the images are taken from directly overhead, speed estimation by means of a UAV avoids the shadowing and occlusion problems that traditional roadside cameras may encounter. This method can be extremely interesting for local administrations seeking to optimize urban traffic planning and management. Moreover, aerial technologies could be especially useful in emergency situations where specific roads must be reached and monitored in a very short time.

In the current literature, it is possible to find several vehicle detection techniques for both low- and high-resolution images [1-4]. However, only very few papers deal with the estimation of vehicle speed from aerial images. Yamazaki et al. [5] suggest deriving the speed of cars by exploiting their shadows. In [6], the authors explore the use of a set of invariant features (i.e., SIFT features) to locate vehicles and estimate their speed in two successive UAV images.

The remainder of this paper is organized as follows. The Proposed Method section describes the detection and speed estimation procedure, the Experimental Results section presents estimation results obtained on a real case, and the Summary section is devoted to the conclusions of this paper and to future developments.

Proposed Method

The procedure of the main method in this paper is discussed in detail in the following.

For foreground extraction, the temporal differencing method [7] is used, which is not affected by the motion of the UAV camera and does not fail when judging moving objects. In order to reduce the influence of lighting changes, this paper converts the images from the RGB color space to the HSV color space, and uses the saturation feature to remove shadows. The conversion from the RGB color space to the HSV color space is shown in Eq. 1.

$$
s =
\begin{cases}
0, & \text{if } \max(r,g,b) = 0 \\
\dfrac{\max(r,g,b) - \min(r,g,b)}{\max(r,g,b)}, & \text{otherwise}
\end{cases}
\qquad
v = \dfrac{\max(r,g,b)}{255}
$$

$$
h =
\begin{cases}
0^{\circ}, & \text{if } \max(r,g,b) = \min(r,g,b) \\
60^{\circ} \times \dfrac{g-b}{\max(r,g,b)-\min(r,g,b)} + 0^{\circ}, & \text{if } \max(r,g,b) = r \text{ and } g \ge b \\
60^{\circ} \times \dfrac{g-b}{\max(r,g,b)-\min(r,g,b)} + 360^{\circ}, & \text{if } \max(r,g,b) = r \text{ and } g < b \\
60^{\circ} \times \dfrac{b-r}{\max(r,g,b)-\min(r,g,b)} + 120^{\circ}, & \text{if } \max(r,g,b) = g \\
60^{\circ} \times \dfrac{r-g}{\max(r,g,b)-\min(r,g,b)} + 240^{\circ}, & \text{if } \max(r,g,b) = b
\end{cases}
\tag{1}
$$

In Eq. 1, r, g, and b represent the three color components of the image, max(r,g,b) represents the maximum of the three components, and min(r,g,b) represents the minimum.
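As an illustration, the foreground-extraction and color-conversion steps above can be sketched in Python. The difference threshold is an assumed value (the paper does not report the one it uses), and `rgb_to_hsv` implements Eq. 1 for a single pixel:

```python
import numpy as np

def temporal_difference(frame_prev, frame_curr, threshold=25):
    """Foreground mask by temporal differencing of two grayscale frames.

    frame_prev, frame_curr: 2-D uint8 arrays of equal shape.
    threshold: minimum intensity change counted as motion (assumed value).
    Returns a boolean mask that is True where motion is detected.
    """
    diff = np.abs(frame_prev.astype(np.int16) - frame_curr.astype(np.int16))
    return diff > threshold

def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel (components in 0-255) to (h, s, v) per Eq. 1.

    h is in degrees, s and v are in [0, 1].
    """
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx / 255
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0
    elif mx == r:
        h = 60 * (g - b) / (mx - mn) + (0 if g >= b else 360)
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:  # mx == b
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v
```

Shadow pixels in the foreground mask can then be suppressed by thresholding the saturation channel, e.g. by discarding pixels whose s value falls below an assumed cutoff.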

Horizontal projection [8-10] is used in this paper to determine what percentage of the whole image the lane occupies along the vertical axis. The lane's horizontal projection result is shown in Fig. 1 (a), in which there are eight extremes with large numerical variation. It is obvious that the values between the lane positions vary regularly. Fig. 1 (b) illustrates the corresponding binary image of the lane.

Feature Extraction
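The horizontal projection itself is simply a row-wise count of foreground pixels in the binary lane image; peaks in the resulting profile mark the lane positions, and the spacing between them gives the lane width in pixels. A minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def horizontal_projection(binary_img):
    """Row-wise count of foreground pixels in a 0/1 binary image.

    Peaks in the returned profile correspond to rows occupied by the
    lane markings; their spacing yields the lane width in pixels.
    """
    return np.asarray(binary_img).sum(axis=1)
```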

In this paper, the procedure for determining the vehicle category is as follows. If a moving object's width is greater than one-fourth of the lane width and its aspect ratio is between 1.5 and 3, it is regarded as a vehicle; this range covers small vehicles (motorcycles, bicycles, etc.) and medium-sized vehicles (minivans, cars, recreational vehicles, etc.). Objects whose width is greater than half of the lane width and whose aspect ratio is more than 3.5 are regarded as large vehicles (including buses and big trucks). Objects that do not meet the above conditions are determined to be non-vehicle moving objects and are excluded.
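The classification rules above can be sketched as a small function. The bounding-box inputs and the choice of height/width as the aspect ratio are assumptions, since the paper does not state which side is the numerator:

```python
def classify_moving_object(width, height, lane_width):
    """Classify a moving object from its bounding box, per the rules above.

    width, height: bounding-box size in pixels.
    lane_width: lane width in pixels (from the horizontal projection).
    Aspect ratio is taken as height / width (an assumption).
    """
    aspect = height / width
    # Large vehicles: wider than half a lane, aspect ratio above 3.5.
    if width > lane_width / 2 and aspect > 3.5:
        return "large vehicle"
    # Small/medium vehicles: wider than a quarter lane, aspect ratio 1.5-3.
    if width > lane_width / 4 and 1.5 <= aspect <= 3:
        return "small/medium vehicle"
    # Everything else is treated as a non-vehicle moving object.
    return "non-vehicle"
```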

This paper presents a more convenient and rapid calculation method, which uses the lane width as a scale to compute the actual moving distance.

When a moving object is tracked, the method calculates the centroid of the target, which defines the position of the object. By comparing the centroids in two consecutive frames, the moving distance of the vehicle in pixels can be estimated using the Euclidean distance formula. The pixel distance is multiplied by the actual distance represented by each pixel, using the lane width estimated from the horizontal projection as the scale, to obtain the actual vehicle moving distance. Dividing this distance by the time difference between the two frames yields the vehicle speed.
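The centroid-based speed estimate described above reduces to a few lines. The real lane width in metres is an assumed input here; the paper uses the lane only as a relative scale:

```python
import math

def estimate_speed_kmh(c1, c2, lane_width_px, lane_width_m, dt_s):
    """Estimate vehicle speed from centroids in two consecutive frames.

    c1, c2: (x, y) centroids in pixels.
    lane_width_px: lane width in pixels, from the horizontal projection.
    lane_width_m: real lane width in metres (assumed known, e.g. 3.5 m).
    dt_s: time between the two frames in seconds.
    """
    # Euclidean distance moved, in pixels.
    dist_px = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    # Scale: metres represented by one pixel.
    metres_per_px = lane_width_m / lane_width_px
    # Distance over time, converted from m/s to km/hr.
    return dist_px * metres_per_px / dt_s * 3.6
```

For example, a 50-pixel displacement between frames 0.1 s apart, with a 3.5 m lane spanning 100 pixels, gives 63 km/hr.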


Figure 1. Horizontal projection of the lane: (a) Horizontal projection of the lane (b) Binary image of the lane.

Experimental Results

This section shows the results obtained by applying the proposed technique to a sequence of images acquired at different altitudes.

Experimental Platform and Parameters


Figure 2. Mobile phone APP interface.

Results and Analysis

Several speed-estimation results, captured from experiments with different driving directions, are displayed in Fig. 3 for an actual driving speed of 60 km/hr.

Figure 3. Speed estimated results at the altitude of 40 meters: (a)-(e).

Table 1. Speed estimated accuracy analysis.

No.  Results (km/hr)  Relative error (%)  Average relative error (%)  Average precision (%)
a    57.3             4.5                 3.3                         96.7
b    59.0             1.6
c    58.0             3.3
d    56.7             5.5
e    58.9             1.6

From the accuracy analysis results (Table 1), the average relative error at the altitude of 40 meters is 3.3%.
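The relative errors in Table 1 follow directly from the estimated speeds; for instance, run (a) gives |60 - 57.3| / 60 = 4.5%. A one-line check (illustrative only):

```python
def relative_error_pct(estimated, actual):
    """Relative error in percent of an estimated speed against the true one."""
    return abs(actual - estimated) / actual * 100

err_a = relative_error_pct(57.3, 60.0)  # run (a): 4.5 %
```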

Summary

In this paper, a method to detect vehicles and estimate vehicle speed in aerial images acquired by a UAV sensor is presented. The road lane is used as a scale to reduce the error introduced by variations in UAV flight altitude. The temporal differencing method is used to separate moving objects (vehicles) from the background. By finding the centroids of vehicles and calculating the moving distance expressed in pixels, vehicle speed is estimated accurately. The experimental results show that the method achieves accurate moving vehicle detection, with an average relative error of less than 3.3%. The approach presented in this paper is able to quickly process real data, which will enable moving vehicle speed estimation to be performed automatically by UAVs. Moreover, it is more convenient for planning and analysis of the surrounding transportation for a big event, since it does not require time to set up a fixed camera or to mark feature points on a road.

Future work therefore focuses on making the proposed technique work more accurately. Starting from this work, several developments can be identified to make the method more stable across all possible altitudes. In particular, the lane width calculation, which is of central importance for correctly extracting the frame differences, could be improved to obtain better results even when the considered images are acquired with larger temporal differences.

Acknowledgement

This work is financially supported by the Science and Technique Program of Beijing Municipal Education Commission-KM201711417010, and the National Science Council under Grant NSC 101-2221-E-018-031- and NSC 102-2221-E-018-018-.

References

[1] T. Zhao, R. Nevatia. “Car detection in low resolution aerial imageries,” in Proc. IEEE Int. Conf. Comput. Vis., vol. 1, pp. 710–717, Jul. 2001.


[4] Z. Zheng, G. Zhou, Y. Wang, Y. Liu, X. Li, X. Wang and L. Jiang. “A Novel Vehicle Detection Method with High Resolution Highway Aerial Image,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 6, pp. 1-6, 2013.

[5] F. Yamazaki, W. Liu and T. Vu. “Vehicle Extraction and Speed Detection from Digital Aerial Images,” IEEE International Geoscience and Remote Sensing Symposium, vol. 3, pp. III-1334, 2008.

[6] T. Moranduzzo, F. Melgani. “Car Speed Estimation Method for UAV Images,” IEEE Geoscience and Remote Sensing Symposium, pp. 4942-4945, Jul. 2014.

[7] S. Murali, R. Girisha. “Segmentation of Motion Objects from Surveillance Video Sequences Using Temporal Differencing Combined with Multiple Correlation,” Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 472-477, Sep. 2009.

[8] I. Barbancho, C. Segura, L. J. Tardon, A. M. Barbancho. “Automatic selection of the region of interest in ancient scores,” in MELECON 2010 - 15th IEEE Mediterranean Electrotechnical Conference, pp. 326-331, Apr. 2010.

[9] Z. K. Li. “Applying Image Identification Technology to Building Resident Security Management System,” Master Thesis, Department of Electrical Engineering, National Changhua University of Education, 2014.

