
Microscopic Traffic Data Collection by Remote Sensing

S. P. Hoogendoorn1, H. J. Van Zuylen1, M. Schreuder2, B. Gorte3, and G. Vosselman3

Abstract. To gain more insight into the behavior of drivers during congestion, and to develop and test theories and models describing congested driving behavior, very detailed data are needed. This paper describes a new data collection system prototype for determining individual vehicle trajectories from sequences of digital aerial images.

Software was developed to detect and track vehicles from image sequences. Besides the longitudinal and lateral positions as a function of time, the system can also determine the vehicle lengths and widths. Before vehicle detection and tracking can be achieved, the software handles correction for lens distortion, radiometric correction, and orthorectification of the image.

The software was tested on data collected from a helicopter, using a digital camera gathering high-resolution monochrome images covering 280 m of a Dutch motorway. From the test, it is concluded that the techniques for analyzing the digital images can be applied automatically without many problems. However, given the limited stability of the helicopter, only 210 m of the motorway could be used for vehicle detection and tracking. Furthermore, the poor weather conditions at the time of data collection had a significant influence on the accuracy and reliability of the collected data: 94% of the vehicles could be detected and tracked automatically, with a resolution of 22 cm. This percentage will increase substantially under better weather conditions. Furthermore, equipment stabilizing the camera – a so-called gyroscopic mounting – and the use of color images can be applied to further improve the system.

Keywords: congestion, data collection, vehicle trajectories, remote sensing, vehicle tracking

Word count

Abstract 246

Main text 3970

Figures and tables (7 x 250) 1750

Total 5966

BACKGROUND

Congestion is an important issue from both an economic and a societal viewpoint. Understanding its causes, in particular the influence of driver behavior on the origin of congestion, is very important and may lead to improved measures that either prevent congestion or reduce its negative effects.

Research at the microscopic level, however, requires much more detailed data collection than research at the macroscopic level in order to be useful. Empirical study of driver behavior is not possible without real-life data that consist of complete vehicle trajectories recorded over a long roadway stretch and at a high frequency, as well as information on the vehicles themselves. Longitudinal and lateral positions thus have to be collected very precisely in order to determine the exact lane changing and acceleration/deceleration behavior of the vehicles.

On top of this, data are usually only collected at a number of fixed locations where detectors are positioned, rather than over an entire stretch. Among the drawbacks of such local measurements is the fact that instantaneous characteristics, such as distance headways, densities, and space-mean speeds, cannot be determined directly, let alone their dynamics.

1 Transportation and Traffic Engineering Section, Faculty of Civil Engineering and Geosciences, Delft University of Technology.

2 Traffic Research Center, Dutch Ministry of Transportation.

3 Photogrammetry and Remote Sensing Section, Faculty of Civil Engineering and Geosciences, Delft University of Technology.


In most cases, microscopic simulation models are calibrated and validated using macroscopic traffic data. That is, the microscopic parameters are tuned such that they yield macroscopic flow characteristics consistent with the traffic flow data. Given the large number of unobservable microscopic parameters, and their complex and non-linear relation with the resulting macroscopic flow characteristics, such indirect model calibration may not always be good practice. For this reason, Brackstone and McDonald (1995, 1998) argue that calibration of true microscopic simulation models (microscopic representation and microscopic behavioral rules describing the interaction between vehicles) requires detailed information on the dynamic behavior of vehicle pairs.

The existence of stable or metastable traffic states in congested conditions and the behavioral hypotheses postulated to explain these phenomena (Kerner, 1999) can also be studied in detail when detailed microscopic information is available. Finally, the microscopic processes that cause congestion to occur 500 – 1000 m downstream of an on-ramp (Cassidy and Bertini, 1999), rather than at the location of the on-ramp itself, can be analyzed in detail. The availability of such data is limited and dated (Treiterer and Myers, 1974).

Research questions

Typical research questions for studying driver behavior during congestion are the following:

§ How can the car-following behavior of drivers be described? Are the models developed during the 1960s and 1970s (Herman and Rothery, 1963) still valid? Are different parameter settings required?

§ Is the gap that drivers accept when entering the motorway different from the gap that is maintained when following a vehicle (Dijker, 1997)?

§ How does the overtaking behavior of drivers depend on the width of the lanes and on obstacles and obstructions? How does driver behavior change in the case of narrow lanes? What about behavior at roadworks?

§ Are multiple stable states identifiable during congested traffic flow?

§ Can microscopic phenomena be identified that can explain macroscopic traffic states and the transitions between these macroscopic states?

Currently available data collection systems, such as inductive loops, pneumatic tubes, dGPS, video, etc., are not suitable to answer these fundamental research questions satisfactorily.

Study objective

To answer these research questions, a new approach to individual traffic data collection is needed that enables studying the dynamics of individual vehicles and the interdependencies between them. The objective of this study is to develop a data collection method to collect vehicle trajectories (the longitudinal and lateral position of the center of the vehicle, represented by a rectangle, as a function of time) and individual vehicle characteristics (vehicle length and width), in particular during congested traffic flow conditions.

System requirements

Given the fundamental requirements of research into driver behavior during congestion, specific demands on the monitoring system pertain to both the temporal and the spatial resolution. For the latter, it was decided that the final system must have a resolution of 0.4 m. The roadway length that can be observed by a single camera is thus determined by the resolution of the camera. A high-resolution B&W digital camera has a resolution of 1300 × 1030 pixels, implying that 1300 × 0.4 m = 520 m is the maximum roadway length that can be observed.

Given the average headways between vehicles and their average speeds, it was decided that the time between two observations should not exceed 0.1 s. It can be shown that if the specifications above are met, the locations of the vehicles can be determined with an accuracy of ¼ pixel (= 0.1 m). The resolution of the speeds that are determined from the vehicle positions is thus 1 m/s.
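As an illustrative check of the last figure (using only the ¼-pixel position accuracy and the 0.1 s sampling interval stated above), the speed resolution follows from differencing successive positions:

\[
\Delta v = \frac{\Delta x}{\Delta t} = \frac{0.1\ \text{m}}{0.1\ \text{s}} = 1\ \text{m/s}.
\]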

MEASUREMENT AND EXPERIMENTAL SET-UP

Having considered a number of alternative systems, such as taking video measurements from VMS gantries, it was decided that the data must be collected from an elevated position, more specifically from a helicopter, to which a digital camera system was attached. The camera system had to meet very high standards with respect to the resolution of the images as well as the frequency at which the images could be collected.

Camera and helicopter

The camera used in the measurement system provides grayscale images at a resolution of 1300 by 1030 pixels with a maximum frequency of 8.6 Hz. The camera, a Basler A101f, is very sensitive to light, allowing a short integration time and thus little loss of image quality due to the vibrations of the helicopter. The camera is able to collect color images as well; in that case, its resolution is reduced to 650 by 515 pixels.

Given the number of pixels in the longitudinal direction (1300 pixels) and the spatial resolution (40 cm per pixel), in theory the roadway length that can be observed equals 1300 × 0.4 = 520 m. The area that each pixel represents in reality (in this case, 40 × 40 cm) is determined by the specifications of the camera (light-sensitive chip and lens) and the height at which the images are collected. To decrease the probability that clouds obstruct the observations, it was decided not to fly higher than 500 m. It was decided to use a camera with a 2/3" chip and a lens of 16 mm. Regrettably, a camera with a different chip was delivered. As a result, each pixel represents an area of approximately 20 × 20 cm, leaving only 280 m of the roadway observed.
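For reference, the pixel footprint (ground sampling distance) follows from the standard photogrammetric relation between pixel pitch, flying height, and focal length; the symbols below are generic and the back-calculated pixel pitch is only an illustrative consistency check, not a value reported by the authors:

\[
\text{GSD} = \frac{p \cdot H}{f}
\]

where \(p\) is the pixel pitch on the chip, \(H\) the flying height, and \(f\) the focal length. With the 16 mm lens and a flying height of 500 m, a 20 cm footprint corresponds to a pixel pitch of roughly \(0.20 \times 0.016 / 500 \approx 6.4\ \mu\text{m}\).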

The camera itself does not have sufficient memory to store all the images needed for the analyses. On top of this, compression of the images is not an option due to the loss of image quality. Instead, a personal computer equipped with a frame grabber was attached to the camera, enabling real-time storage of the digital images.

The helicopter – a Bell 206 JetRanger – and its pilot were hired from the Dutch firm Heli Holland. The camera was attached to the helicopter and was fixed. No gyroscopic mounting was used to attach the camera, assuming that the resulting vibrations and movements of the helicopter would not influence the quality of the collected data too much.

Measurement location

The data was collected at different motorway sites near the Dutch city of Utrecht, in particular on the A2 motorway. The sites were selected for the very high probability of congestion occurring during the afternoon peak (between 15:30 and 17:30), the type of bottleneck causing the congestion, and the possibility of observing the traffic without too many obstructions present (e.g. viaducts). The list below describes the main measurement sites (see Figure 1):

1. Merge / bottleneck near the South-East of Rijnsweerd, on-ramp De Meern, at heights of 500 m and 300 m (see top-left inset of Figure 1).

2. Queue spilling backwards above the viaduct of the A27 over the A2, at heights of 500 m and 300 m (see lower-right inset of Figure 1).


Figure 1 Overview of the study location near the Dutch city of Utrecht. Data was collected mostly on the A2 motorway.

Weather conditions

The data was collected on the 25th of April 2002. Although it was rather cloudy, the images from the digital camera looked sufficiently clear to go ahead with the data collection. During the flight, weather conditions were constantly changing: at times the cloud cover was rather thin, while at other times it was too thick to see the road at all. Due to the varying weather conditions, the ambient light conditions also changed constantly. As a result, the shadows of the vehicles were not the same in all image sequences.

Finally, the wind conditions were unfavorable and it was difficult for the pilot to keep the helicopter at a fixed location. Together with the instability of the helicopter itself (in terms of its pitch and yaw), the movements of the helicopter reduced the constantly observed part of the roadway substantially (approximately 200 m instead of 280 m). Figure 2 shows an example of the raw image data.


Figure 2 Example reflecting the movement of the helicopter and its effect on the collected images. The time gap between the two images shown equals 6 seconds.

The instability of the images increases the requirements for the vehicle detection software, as will be discussed in the remainder of the paper. In addition, the duration of the usable sequence was rather short (approximately 35 s).

PROCESSING OF THE IMAGES

The objective of the image data analysis is to automatically determine the vehicle trajectories from the raw images. Before the vehicles can be detected, the following operations are applied to correct the raw images and to convert them into 'standard' images:

1. Lens distortion correction
2. Orthorectification
3. Radiometric correction

These steps are described in the remainder of this chapter. After these steps have been completed, the images can be used to detect and track the vehicles, which is described in the following chapter. Subsequently, the screen coordinates are converted into world coordinates.

Correction for lens distortion

Due to the movements of the helicopter, the distortion of the lens complicated the orthorectification of the images: the considered part of the roadway is sometimes at the top of the image and sometimes at the bottom. As a result, an essential step is to correct for the distortion of the lens. For this specific camera, a radial distortion was present: the corners of the image were 7 pixels (1%) too far to the 'inside'.
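As a hedged illustration of this correction step (not the routine actually used by the authors), a radial distortion of this kind can be removed with a standard camera model; the camera matrix and distortion coefficients below are hypothetical placeholders that would normally come from a lens calibration:

```python
import cv2
import numpy as np

# Hypothetical calibration values for illustration only; real values would
# come from a calibration of the camera actually used.
K = np.array([[1650.0, 0.0, 650.0],   # focal lengths (px) and principal point
              [0.0, 1650.0, 515.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.01, 0.0, 0.0, 0.0, 0.0])  # small radial (k1) distortion

def correct_lens_distortion(image):
    """Undistort a raw frame so that straight road markings stay straight."""
    return cv2.undistort(image, K, dist)
```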

Orthorectification

In an aerial photograph of a rectangular object, the image of this object will only be rectangular if the camera is located exactly above the middle of the rectangle (neglecting the lens distortion discussed above). Otherwise, the perspective of the image will be distorted, depending on the location and the angle of the camera. On top of this, the size of the rectangle will depend on the height at which the images are collected, and the image will be rotated around the vertical axis.

During orthorectification, the perspective distortion, the scale, and the rotation of the images are adjusted such that the objects in the image are projected at the same location as the same objects in the reference image R.

Orthorectification needs control points, which are points that are visible in both the reference image and the processed image. In theory, only 4 control points are needed. However, since the location of the control points cannot be determined with 100% reliability, 10 to 30 points were used instead of four, which also gives an indication of the accuracy of the process.
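A minimal sketch of what orthorectification with matched control points can look like, assuming the point correspondences are already available as N × 2 arrays; OpenCV's homography estimation stands in here for the authors' own implementation:

```python
import cv2
import numpy as np

def orthorectify(image, pts_image, pts_reference, reference_size):
    """Warp `image` into the frame of the reference image R.

    pts_image, pts_reference: corresponding control points (N x 2, N >= 4);
    using 10-30 points lets the robust fit absorb unreliable matches.
    reference_size: (width, height) of the reference image.
    """
    H, _ = cv2.findHomography(np.float32(pts_image),
                              np.float32(pts_reference), cv2.RANSAC)
    return cv2.warpPerspective(image, H, reference_size)
```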


To handle the fact that the control points in the reference image and the processed image may be far apart, a special process was developed that uses the information of the control point location in image I-1 to determine the location of these points in image I. This process starts from reference image R. If R is not the first image of the sequence, the process is also performed backwards (for images R-1, R-2, etc.).

Two sets of control points have been used. The first set, the so-called characteristic control points, is used for coarse matching. The areas around these points are unique within the entire image and can thus be used to match images where the amount of perspective distortion is large (i.e. the objects in the image and the objects in the reference image are far apart). Typical objects in this set are lamp posts, gantries, etc. The second set – the road-surface control points – contains points on the roadway surface, i.e. on the reference plane, such as the lane markings. These points are not characteristic and are used for fine matching only. Figure 3 illustrates both sets.

The two phases of the process are:

1. Coarse matching finds the control points in the characteristic set of the reference image R in image I. This is achieved using an iterative approach that maximizes the cross-correlation coefficient of the pixels around the control points in the characteristic set. The search window contains 50 × 50 pixels. The accuracy of coarse matching is approximately 1 pixel. The results of coarse matching are used to determine the transformation of image I-1 to image I.

2. Fine matching uses the same approach as coarse matching. In this case, however, the road-surface control points are used instead of the characteristic control points, while the search window is only 7 × 7 pixels (a minimal sketch of this correlation search is given below).
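A minimal sketch of the correlation search, assuming grayscale numpy frames and an already-cut template around each control point; window sizes of 50 px and 7 px would correspond to the coarse and fine phases (this is an illustration, not the authors' code):

```python
import cv2

def match_control_point(image, template, predicted_xy, window=50):
    """Find the best match of a control-point template near a predicted position.

    The search region around the prediction is scanned and the location that
    maximizes the normalized cross-correlation is returned (top-left corner of
    the template, in full-image coordinates).
    """
    x, y = predicted_xy
    h, w = template.shape[:2]
    x0, y0 = max(x - window, 0), max(y - window, 0)
    region = image[y0:y0 + 2 * window + h, x0:x0 + 2 * window + w]
    scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)
    return x0 + best[0], y0 + best[1]
```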

Figure 3 The two sets of control points: characteristic control points (left) and road surface control points (right).

Radiometric correction

Since the ambient conditions changed during data collection, some images are brighter than others. The vehicle detection is based on differences in the intensity of the pixels of image I and the background image (an image of the roadway without vehicles on it). It turned out that the vehicle detection process was very sensitive to these differences in intensity. This is why it was decided to normalize the images using so-called histogram matching: by comparing the histograms of the reference image and the current image, the intensities of the current image are adjusted such that they agree with the reference image as much as possible.
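A minimal sketch of histogram matching for 8-bit grayscale frames (a common recipe, not necessarily the exact normalization used by the authors):

```python
import numpy as np

def match_histogram(image, reference):
    """Adjust `image` intensities so its histogram resembles `reference`."""
    # Empirical CDFs of both images (8-bit grayscale assumed).
    img_hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    img_cdf = np.cumsum(img_hist) / image.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each gray level, find the reference level with the closest CDF value.
    mapping = np.interp(img_cdf, ref_cdf, np.arange(256))
    return mapping[image].astype(np.uint8)
```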

VEHICLE DETECTION AND TRACKING

The previous sections discussed the operations that need to be applied to the image sequence before the actual vehicle detection and tracking can be performed. In this section, we briefly discuss the approach to detection and tracking itself. The approach consists of the following steps:

1. Determination of background

2. Vehicle detection (determination of the center of each vehicle, its length, and its width)
3. Vehicle tracking


Determination of background image

In this first step, the background image (i.e. the empty roadway) is determined. The approach that was used is very straightforward: for each pixel of an image sequence, the different intensity values are stored. For these intensity values, the median value is assumed to be the intensity value of the background. Figure 4 shows the result of this operation. Note that the main assumption is that the probability that the roadway surface is empty is larger than the probability that a vehicle is present. This implies that the applicability of the method may be restricted to periods where congestion is not too severe. Other approaches (such as morphological operations) are not restricted by this requirement.

Figure 4 Background image
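A minimal sketch of this per-pixel median, assuming the rectified frames have been stacked into a single numpy array:

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel median over a stack of rectified frames (time, height, width).

    This recovers the empty roadway as long as each pixel shows road surface
    (rather than a vehicle) in more than half of the frames.
    """
    return np.median(frames, axis=0).astype(frames.dtype)
```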

Vehicle detection

For any image, vehicle detection is based on the difference between the current image I and the background image B. A first approximation is to use a threshold value to decide whether a pixel represents a vehicle or not. If so, neighboring pixels can likewise be identified as belonging to a vehicle or not.

In practice, a number of complicating factors will occur:

1. Both light and dark vehicles cast shadows, which are generally darker than the roadway surface.
2. Light vehicles have dark spots (the windshield, etc.).

3. On occasion, a small vehicle drives completely in the shadow of a big vehicle (e.g. a truck or a bus). As a result, the shadow of the small vehicle disappears. Furthermore, the intensity of the vehicle itself may be close to the intensity of the background image.

The biggest problems are caused by vehicles that have the same intensity as the roadway surface or vehicles that have the same intensity as their shadow. Different approaches have been implemented to resolve these issues (morphological grayscale operations, binary morphological operations, split-and-merge image segmentation, etc.). Table 1 shows an example; a sketch of these steps in code is given after the table. No definite algorithm could be chosen based on performance: in most cases, most vehicles are detected (about 94%). It is likely that under better weather conditions, and with the use of color images, nearly 100% of all vehicles will be detected and subsequently tracked.


Table 1 Overview of vehicle detection algorithm (in this case, for light vehicles only). A similar approach can be applied to detect the dark vehicles (only step 6 is skipped, since the shadows will also be identified as part of the dark vehicle).

1. Determine the difference image D by subtracting the background image B from the current image.
2. Determine the pixels in D with values larger than a threshold value.
3. Morphological opening.
4. Morphological closing.
5. Determine bounding boxes.
6. Expand the bounding boxes to include the vehicle shadows.
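The sketch below mirrors the steps of Table 1 for light vehicles using standard OpenCV operations; the threshold, kernel size, and minimum area are illustrative values rather than the calibrated ones used by the authors, and step 6 (shadow handling) is omitted:

```python
import cv2
import numpy as np

def detect_light_vehicles(frame, background, thresh=30, min_area=50):
    """Difference, threshold, morphological opening/closing, bounding boxes."""
    diff = cv2.subtract(frame, background)                         # step 1
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # step 2
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)          # step 3
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)         # step 4
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:                                             # step 5
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:
            boxes.append((x, y, w, h))
    return boxes                 # step 6 (shadow expansion) is omitted here
```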

Once the vehicles are detected, their positions as well as their lengths and widths can be established easily. Figure 5 shows an example of the final result of vehicle detection, where the vehicles are described by rectangles.

Figure 5 Vehicle positions and their dimensions resulting from vehicle detection. In this example, all vehicles are detected. However, the figure also shows a false detection, namely the shadow of a truck. Moreover, some of the vehicle dimensions are not correctly detected.

Vehicle tracking

The aim of vehicle tracking is to follow the vehicles detected in an image, i.e. to determine their position in the other images. In most cases, tracking is done using an approach similar to the control-point approach used in the orthorectification step (using both coarse matching and fine matching). For the application at hand, only coarse matching was required, since it provided sufficient accuracy for vehicle tracking. Application of coarse matching yields a unique label for all vehicles detected during the vehicle detection step, enabling determination of the vehicle trajectories in the following step. Figure 6 shows the results of the vehicle tracking process.

Figure 6 Vehicle tracking in four subsequent images. The lines behind the vehicles are indicative of the speed of the vehicles. Note that vehicle detection is only performed at designated time instants and not on all collected images; this is why some vehicles in the figure have not yet been detected.
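A minimal sketch of tracking by coarse matching only, in which the detected vehicle pixels serve as the template that is correlated frame by frame (illustrative, not the authors' implementation):

```python
import cv2

def track_vehicle(frames, first_box, window=50):
    """Follow one detected vehicle through an image sequence by correlation.

    `first_box` is (x, y, w, h) from vehicle detection in the first frame;
    the vehicle pixels themselves serve as the matching template.
    """
    x, y, w, h = first_box
    template = frames[0][y:y + h, x:x + w]
    trajectory = [(x, y)]
    for frame in frames[1:]:
        # Correlate within a window around the last known position.
        x0, y0 = max(x - window, 0), max(y - window, 0)
        region = frame[y0:y0 + 2 * window + h, x0:x0 + 2 * window + w]
        scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)
        x, y = x0 + best[0], y0 + best[1]
        trajectory.append((x, y))
    return trajectory
```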

Conversion of image coordinates to world coordinates

In the final step, the image vehicle coordinates are translated into world coordinates. Scaling and translation determine both the longitudinal position of the rear bumper of the vehicle relative to an arbitrary location on the roadway, and the lateral location of the vehicle relative to the right lane demarcation. To this end, maps of the roadway are used. Furthermore, the length and the width of the vehicles are determined as well.
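A minimal sketch of this scaling-and-translation step; the pixel size and reference point below are hypothetical placeholders for values that would be derived from the roadway maps:

```python
import numpy as np

PIXEL_SIZE_M = 0.22                   # meters per pixel in the rectified images
ORIGIN_PX = np.array([40.0, 980.0])   # hypothetical roadway reference point (px)

def image_to_world(xy_px):
    """Convert rectified-image pixel coordinates to roadway coordinates (m).

    Longitudinal position is measured along the road from the reference point;
    lateral position is measured from the right lane demarcation (image y is
    assumed to increase towards the right lane edge).
    """
    rel = (np.asarray(xy_px, dtype=float) - ORIGIN_PX) * PIXEL_SIZE_M
    longitudinal, lateral = rel[0], -rel[1]
    return longitudinal, lateral
```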

Application example

Figure 7 shows an example of the results of applying the data collection approach described in this contribution. Besides the trajectories, indicating the longitudinal positions of the vehicles on the roadway as well as the lane they occupy, the approach also yields the lateral positions and the dimensions of the vehicles.


Figure 7 Example of trajectories derived from observations at an on-ramp during congested traffic flow operations. The figure distinguishes the right lane, the left lane, and the on-ramp, and marks a vehicle entering the main road and an overtaking maneuver.

Verification of data collection

To verify the approach, two image sequences were analyzed. The first sequence is 45 s long and pertains to a situation where traffic flow is near capacity, but not yet congested. The images were collected from 500 m height; the observed roadway was 210 m long. Since the position of the vehicles can be determined at a one-pixel accuracy, the spatial resolution for the first sequence equals 22 cm.

The results of the first sequence were very positive: after correcting for lens distortion, the images from the sequence could be rectified using the approach described in the previous sections. Application of the vehicle detection procedure to 40 images of the sequence showed that 98% of the vehicles were detected. After detection, vehicle tracking was applied, determining the trajectories of the vehicles with an accuracy of 1 pixel. It should be noted that the error made while tracking a vehicle across the images is cumulative.

The second sequence pertains to an on-ramp situation where traffic conditions are congested (see Figure 7). The sequence lasts 52 s and was collected from a height of 300 m. As a result, only 120 m of the roadway was observed, while the spatial resolution is approximately 13 cm.

The ambient conditions during the second sequence were less favorable than during the first sequence; this was mainly caused by clouds frequently moving in front of the sun. The radiometric correction could not fully compensate for this. This is why only 90% of the vehicles were correctly detected and tracked. Also, the cumulative error while tracking slow-moving vehicles caused a maximum error in the position of the vehicle of 1.3 m at the time the vehicle left the roadway section.

CONCLUSIONS AND LESSONS LEARNED

This paper describes a new data collection system that was developed to determine individual vehicle trajectories from high-resolution grayscale images. These images were collected using a digital camera mounted underneath a helicopter and stored on a personal computer. After applying a number of photogrammetric operations to the images, approximately 94% of the vehicles could be detected and tracked, yielding both vehicle positions and vehicle dimensions. The spatial resolution was 22 cm and the temporal resolution 8.6 Hz. The spatial resolution is even finer than the required 40 cm, due to the fact that a different chip was used than anticipated. Using the measurement set-up and the detection and tracking algorithms described in this paper, it was possible to track vehicles over an area of 210 m length. On top of this, the maximum duration of the usable image sequence was only 35 s. There appears to be no limit to the number of vehicles that can be detected.

Firstly, it turned out that the weather conditions had quite an adverse effect on the quality of the collected data, thereby reducing the accuracy of detection and tracking. Furthermore, the windy conditions caused the helicopter to move even more.

Secondly, the use of a gyroscopic mounting can increase the stability of the images. Besides increasing the effective part of the images that can be used for vehicle detection and tracking, the increased stability will also substantially increase the duration of the sequences that can be used. Using a different camera set-up and gyroscopic stabilizing devices, the observed area (using a single camera) may be increased to 500 m (with a resolution of 40 cm), and the sequence duration will be longer than 15 minutes.

Thirdly, the approach is labor intensive, especially the data collection. The data collection effort can be reduced somewhat by using unmanned helicopters or by collecting data from a fixed location (e.g. a high building). The analysis of the images and the trajectories will in time be completely automated.

Future research is aimed at further refining the data collection approach. To this end, a second helicopter flight is planned to collect data under more favorable conditions, i.e. fewer clouds, less wind, a gyroscopic mounting, etc. The resulting footage will be of higher quality and will be used to fine-tune the methods. We will also consider whether post-processing the data (e.g. by Kalman filtering) can further improve the quality of the data. After the second flight and the algorithmic refinements, the maximum length of the measured roadway as well as the maximum duration of the data collection can be determined.

The collected microscopic traffic data will be used for scientific research (theory building and model development, e.g. the microscopic origin of congestion, gap-acceptance for different traffic states, etc.), practical research (such as ex-post evaluation studies of measures, such as behavior on narrow lanes), and the calibration and validation of microscopic simulation models. It is important to note that traffic is not influenced during data collection. Furthermore, current research is aimed at investigating the applicability of the system as an on-line monitoring system, in particular in urban areas (e.g. traffic monitoring from a high building for intersection control).

Acknowledgements - The research described here was performed by the Delft University of Technology on behalf of the Traffic Research Center AVV of the Dutch Ministry of Transportation, Public Works and Water Management.

REFERENCES

Brackstone, M. and M. McDonald (1995). The microscopic modelling of traffic flow: weaknesses and potential developments. Traffic and Granular Flow.

Brackstone, M. and M. McDonald (1998). Modeling of motorway operations. Transportation Research Record 1485.

Cassidy, M.J., and R.L. Bertini (1999). Observations at a freeway bottleneck. Proceedings of the 14th International Symposium on Transportation and Traffic Theory, Jerusalem.

Dijker, T. (1997). Verkeersafwikkeling bij congestie. Graduation Thesis (in Dutch), Transportation and Traffic Engineering Section, Delft University of Technology.

Herman, R. and R.W. Rothery (1963). Car-following and Steady-State Flow. Theory of Traffic Flow Symposium Proceedings, 1-11.

Hoogendoorn, S.P., and T.P. Alkim (1999). Expert views on traffic flow operations during congestion. Research Report on behalf of the Traffic Research Centre of the Dutch Ministry of Transport.

Kerner, B.S. (1999). Theory of Congested Traffic Flow: Self-Organization without Bottlenecks. Proceedings of the 14th International Symposium on Transportation and Traffic Theory, 147-172.

Treiterer, J. and J.A. Myers (1974). The hysteresis phenomenon in traffic flow. Proceedings of the 6th International Symposium on Transportation and Traffic Theory.
