MONITORING OF GAS TRANSMISSION PIPELINES
– A CUSTOMER DRIVEN CIVIL UAV APPLICATION
, Werner Zirnig§, and Gunter Schreier&
¶German Aerospace Center (DLR), D-82234 Weßling, Germany; §Ruhrgas AG, Halterner Straße 125, D-46284 Dorsten, Germany; &Definiens Imaging GmbH, Trappentreustrasse 1-3, D-80339 Munich
Remote Sensing, Gas Transmission Pipeline Monitoring, Security, Infrastructure Monitoring, Unmanned Aerial Vehicles; Optics, SAR, Image Processing, Feature Extraction
It is in the interest of any gas company to maintain the value of its pipelines and to protect them effectively against damage caused by third parties. As a result of global progress in high-resolution remote sensing and image processing technology, it is now possible to design natural gas pipeline monitoring systems with remote sensors and context-oriented image processing software. Recent developments in UAV technology show their suitability as platforms for such customer driven missions [1,2]. Two different scenarios for a UAV based gas pipeline monitoring system will be discussed.
The legal framework in place to ensure the safe operation and supervision of oil & gas transmission pipelines differs substantially from country to country; in some cases, the differences are quite significant. Regardless of the requirements imposed by authorities, oil & gas companies themselves take a host of measures to ensure that their pipeline systems are operated in a safe, economic and environmentally friendly way and that they are protected effectively against damage caused by third parties. In terms of network size, the European oil pipeline segment is rather small compared with the 200,000 km of natural gas transmission pipelines. Thus, the focus of this study is on the monitoring tasks of the European natural gas transmission network; a later expansion to other networks would enlarge the application of such remote sensing systems. The monitoring methods most widely used for natural gas transmission pipelines include foot patrols along the pipeline route and aerial surveillance using small planes or helicopters. These patrols prevent developments and events which could place high pressure pipelines, the surroundings of pipelines or the security of supplies at risk. Although these methods ensure a high level of safety in pipeline operation, the cost is also very high.
4. Monitoring tasks for pipeline observation [3,4]
Ruhrgas is Germany’s leading gas transmission company and the largest natural gas importer in Europe. It has a total gas send out of some 600 billion kilowatt hours and operates a high-pressure transmission system totalling some 11,000 km of pipeline. This pipeline system covers the whole of Germany, forming the central link of the integrated European gas grid. The European natural gas transmission network includes high pressure pipelines with a total length of about 200,000 km. Most pipelines are under a soil cover of about 1 m and depending on pipeline diameter, 4 m to 10 m wide ROW (Right-of-Way) is specified above the pipelines. Buildings and large trees with deep roots are not admissible in these pipeline corridors. Work in the ROW is not admissible unless prior approval has been obtained from the relevant pipeline operator.
Today the pipeline corridors are monitored by regular foot and vehicle patrols and by air patrols carried out using small fixed-wing aircraft and helicopters. These patrols prevent developments and events which could place the natural gas pipelines, the surroundings of pipelines or the security of supplies at risk.
The monitoring tasks break down into object and situation detection to prevent third party interference, monitoring of soil movement and detection of gas emissions. Along the pipeline routes, the following situations have to be detected in a strip of 20 m on both sides of the centreline:
• Construction work, earth movement and excavation, laying of cables, sewers, drainage systems and pipes, erection of buildings, foundations, pylons, etc., boring and pressing,
• Preparing activities, such as building sheds and pickets, assembling machinery and laying drainage cables,
• Soil upheaval, erosion, deep vehicle tracks, water-logged surfaces,
• Planting of new shrubs and trees,
• Discolouring of vegetation above the pipeline,
• Temporary deposition of materials and agricultural products.
In addition, any transport and work carried out within a 200 m-wide strip must be reported if there is reason to believe that it may affect the pipeline route at a later stage.
A gas emission detection system must be capable of identifying possible small gas escapes with flow rates of 0.01 - 10 m³/hr at an early stage. Any major gas emission caused by severe damage to a pipeline is detected and reported directly by other systems.
These monitoring tasks have to be carried out throughout the year at regular intervals, largely regardless of weather conditions. Although the areas monitored differ quite significantly in terms of soil characteristics, vegetation and building density, it is important that future remote sensing methods be usable in almost all types of terrain.
On the basis of a rapid automatic data fusion and evaluation process, the future remote monitoring system must be capable of identifying objects and situations representing threats to the pipeline. Its price/performance ratio must at least match, or better, that of the methods presently in use.
5. Suitable optical and radar sensor technology for monitoring of gas pipelines
Optical and infrared remote sensing systems
Optical sensors measure, via an optical lens system, the amount of sunlight reflected by the earth's surface. Normally only light in a specific spectral window is gathered, e.g. red, green or blue light. High resolution optical systems sense the light intensity for very small areas on the earth's surface, in the order of 0.5 to several metres. Given this principle, optical sensors can be characterised by a number of important features:
1. Optical sensors correspond to the human eye in many aspects, which makes interpretation easy.
2. Optical sensors can reach relatively high spatial resolutions. Resolutions of less than 0.5 m from space and of some centimetres from the air are possible. Limiting factors for the spatial resolution are the sensitivity of the detectors, the size of the lens or mirror, and diffraction.
3. Optical sensors can only be used in daytime, since they rely on a passive light source, the sun.
4. Optical sensors are hindered by cloud cover, which represents an important limitation from an operational point of view, since Europe in particular is frequently covered with clouds.
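The diffraction limit mentioned under point 2 can be made concrete with the Rayleigh criterion: the finest resolvable ground detail is roughly 1.22·λ·H/D for wavelength λ, sensor height H and aperture diameter D. The sketch below is purely illustrative; the numbers are assumptions, not parameters of any sensor discussed here.

```python
def diffraction_limited_gsd(wavelength_m, altitude_m, aperture_m):
    """Rayleigh-criterion estimate of the finest resolvable ground detail."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Green light (550 nm) from 100 m altitude through 5 cm optics gives about
# 1.3 mm, so at such altitudes the detector, not diffraction, is the
# practical limit on spatial resolution.
gsd = diffraction_limited_gsd(550e-9, 100.0, 0.05)
```

At satellite altitudes (hundreds of kilometres) the same formula shows why sub-metre resolution requires large optics.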
For a digital optical/IR system different mechanisms can be distinguished for imaging a large area:
• Rotating (whiskbroom) scanner: With this mechanism there is only one sensor element per spectral channel. In order to build up a total image, the element scans the earth perpendicular to the flight direction by means of a rotating mirror. The image is built up in flight direction as a consequence of the platform's speed over the earth.
• Pushbroom scanner: In this case the sensor consists of a long row of light-sensitive elements (CCDs) with which a complete image line is observed at one moment. The second image direction is built up as a consequence of the platform's speed over the earth's surface.
• Matrix sensors: These sensors have a matrix of CCD elements (a focal plane array, FPA), so that a complete image can be gathered at one moment.
Optical sensors can have a different number of spectral bands with different bandwidths:
• Panchromatic sensors. They have only one, relatively broad, spectral band, generally covering the region of 500 to 700 nm and resulting in a 'black and white' image. The image therefore contains little spectral (colour) information: differences between objects with different colours but the same intensities cannot be observed. On the other hand, very high spatial resolutions can be reached, because the wide spectral band collects relatively much energy. The band from 500 to 700 nm corresponds to the human eye; the blue region (400 to 500 nm) is not included because this region is often disturbed by atmospheric influences. Sometimes the spectral band is extended up to 800 nm. This has the advantage that much of the energy reflected by vegetation is included and that vegetation can clearly be recognised in the images.
• Multispectral sensors. These sensors contain a limited number of spectral bands (3 to 10), defined by spectral filters which only pass light within a certain spectral region, generally with a bandwidth of 10 to 100 nm, depending on the spatial resolution to be obtained and on the spectral information to be gathered. Because of the narrower spectral bands, the spatial resolution (and/or the radiometric resolution) of multispectral images is in general lower than that of panchromatic images. Most multispectral sensors have at least three bands: green (550 nm), red (650 nm) and near infrared (850 nm). These three bands are represented as a false colour image, in blue, green and red respectively. Such images contain most of the information sensed by the human eye, plus information on the vegetation, which is represented in the near infrared channel. Many sensor systems have only these three bands. Some multispectral sensors have more spectral bands:
• Blue band (400 nm). Combined with the green and red bands this gives natural colour images. This band also contains information on the atmosphere, so that it can be used for the atmospheric correction of the other spectral bands.
• Bands in the infrared (1 – 15 µm), where spectral bands in the near and middle infrared (900 to 2500 nm) mainly contain information on vegetation and soil moisture, whereas the spectral bands around 10 µm characterise especially the temperature.
Developments in high resolution optical systems have progressed rapidly in recent years. For spaceborne systems, several commercial satellites have become available with panchromatic resolutions better than 1 m and multispectral resolutions better than 3 m. For airborne platforms, high resolution optical digital sensors have already been available for a long time. Currently a subdivision can be made between four types of systems:
• High quality photogrammetric systems. Currently the first operational photogrammetric systems are coming to market. The resolution and geometric requirements of these systems are very high. Main applications are large scale mapping and DEM generation activities. The processing software for these systems is very sophisticated and almost fully automated. Examples are the Z/I Imaging systems. For the application of pipeline monitoring, these systems are probably of too high quality and therefore of too high cost.
• Multispectral scanning systems. Multispectral airborne scanners have existed for about 20 years already, and more and more operational systems become available. Different examples exist of multispectral sensors operating in the VIS/NIR region, both pushbroom scanners and matrix cameras. Examples also exist of operational sensors operating in the VIS, NIR and MIR region, e.g. the Daedalus (whiskbroom) scanners, which operate with 10 to 20 channels. An important aspect of these systems is that the data processing software is strongly tied to the system (sensor, radiometry, geometry) and therefore needs to be fully operational.
• Digital camera based systems. The quality of commercial digital cameras is increasing rapidly, which means that these cameras are more and more suited for operational use. Especially the high end cameras provide good detail and radiometric sensitivity, and provide possibilities to influence the spectral band definition, quantisation and automatic read out of the imagery. Examples exist of operational systems of one or a cluster of digital cameras, including a highly automated and operational processing chain.
• Digital video based systems. The quality of digital video cameras has also increased during the last years. Still, the number of pixels and the sensitivity remain a limitation. It is expected that within a number of years very high resolution video cameras will enter the market with non-interlaced images of 1000 × 1500 pixels. These might be very well suited for airborne monitoring systems. Examples already exist, however, of operational video based airborne monitoring systems, even with false colour video.
Experimental optical demonstration sensor:
The experimental investigations shown below have been performed with an optical/infrared multispectral scanner (Daedalus AADS 1268) characterised by the following sensor parameters:
• Spectral range: 0.4 - 14 µm
• Bandwidth: 0.02 - 5 µm
• FOV: ±43°
• IFOV: 2.5 mrad
• Spatial resolution: up to 0.8 m
• Bands: 11
• Max. scan frequency: 100 Hz
• Digitisation: 8 bit
• On-board calibration: 2 black bodies
• Initial operation: 1986
This airborne sensor is a bulky instrument (see Fig. 2); however, it is an operational system (including the data processing chain), it has bands from the visible to the thermal infrared wavelength range, and, most important, it is a highly economic and technically simple system. These are the main reasons for utilising the Daedalus as a suitable optical and infrared sensor system in a demonstration campaign. With this configuration, quite a range of different airborne and spaceborne platforms and mission scenarios, as well as wavelength ranges and other sensor parameters, can be investigated. However, the Daedalus system is not suitable for operation on a UAV.
For the utilisation on a UAV platform, suitable lightweight IR sensor technology is available on the market, e.g. an AGEMA 570 infrared camera with a real-time data acquisition system (Fig. 3).
• Principle: uncooled microbolometer array, temperature stabilised
• Detector: vanadium oxide, 320 × 240 pixels
• Spectral range: 7.5 - 13 µm
• FOV: 12° × 9° resp. 24° × 18°
• Spatial resolution: 0.65 mrad resp. 1.3 mrad (1 m at 1.5 km resp. 0.7 km)
• Temperature resolution: 0.15 K @ 300 K
• Frame rate: 50 Hz
• Weight: ca. 2 kg without lenses and power supply
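The spatial resolution figures quoted above follow directly from the small-angle relation: ground pixel size ≈ IFOV (in radians) × range. A one-line check (ranges here are the approximate values implied by the specification):

```python
def ground_resolution_m(ifov_mrad, range_m):
    """Ground pixel size of a scanning/staring sensor: IFOV times range."""
    return ifov_mrad * 1e-3 * range_m

# 0.65 mrad at ~1.5 km and 1.3 mrad at ~0.77 km both give about 1 m
res_narrow = ground_resolution_m(0.65, 1540.0)
res_wide = ground_resolution_m(1.3, 770.0)
```

The same relation links the Daedalus IFOV of 2.5 mrad to its quoted 0.8 m resolution at an altitude of roughly 320 m.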
Optical and IR data processing
The Daedalus data have to be preprocessed in several steps before they can be used in a change detection algorithm:
• System corrections
• Radiometric corrections
• Geometric rectification
All corrections have to be performed for every single image strip.
System corrections: The system corrections account for lost lines, which are restored by spline interpolation between the neighbouring lines. They also account for the scan-angle-dependent sensitivity of the sensor, which has been measured in the laboratory before the flight. The measured values are applied multiplicatively to adapt each whole line to the nadir value.
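The lost-lines step can be sketched as a column-wise interpolation over neighbouring scan lines. The function below is a simplified illustration only (a polynomial fit stands in for the spline actually used, and all names are assumptions):

```python
import numpy as np

def restore_lost_lines(img, lost_rows, k=2):
    """Restore dropped scan lines by column-wise polynomial interpolation
    over up to k valid neighbouring lines on each side (a simplified
    stand-in for the spline interpolation described in the text)."""
    img = img.astype(float).copy()
    lost = set(lost_rows)
    good = np.array([r for r in range(img.shape[0]) if r not in lost])
    for r in lost_rows:
        below = good[good < r][-k:]          # nearest valid lines below
        above = good[good > r][:k]           # nearest valid lines above
        rows = np.concatenate([below, above])
        deg = min(3, rows.size - 1)          # cubic where enough neighbours
        for c in range(img.shape[1]):
            coeff = np.polyfit(rows, img[rows, c], deg)
            img[r, c] = np.polyval(coeff, r)
    return img
```

In an operational chain the scan-angle sensitivity correction would be applied afterwards as a per-column multiplicative gain.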
Radiometric corrections: The radiometric correction procedure is an (empirical) method to account for the viewing-angle-dependent properties of the image data in the case of large fields of view (the Daedalus field of view, for example, is 86°). From general considerations of radiation transfer from the ground target to the sensor (under some assumptions), the empirical image-based radiometric correction method EMRACO has been developed at DLR. It corrects the radiances of pixels at the sensor for sensor viewing angle effects and indirectly for BRDF effects of the surfaces. The radiance at any viewing angle (off-nadir) is normalised to the radiance at the selected reference angle, usually in nadir direction. In order to implement this type of correction, the objects (or classes or surface types) have to be extracted from the image data.
A selected region of the image (usually symmetric on both sides of nadir) is used to initialise the whole procedure, which consists of several steps. First, the initial region of an image is clustered by an extended k-means algorithm, which automatically determines the number of clusters (classes) depending on the complexity of the image. Then, for each cluster, an average intensity profile along the scan direction is calculated. These profiles (initially defined in a central part of an image line) are extrapolated to the whole swath width of the image by a polynomial approximation. Finally, applying a linear regression method over all clusters to the radiation transfer equation results in a radiometric correction function for each sensor viewing angle, with which the pixel intensity across the scan line can be adjusted relative to the pixel intensity of the reference viewing direction. The procedure is iterative: the correction is first performed for a narrow central part of the image; then the procedure is initialised with this corrected image region and repeated until the whole image swath width is corrected. This object-based correction method allows a relative correction of the local radiometric distortions.
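The core of such an image-based scan-angle correction can be sketched in a few lines: per-cluster column profiles, polynomial smoothing, and normalisation to the nadir column. This is a deliberately simplified single-pass sketch in the spirit of EMRACO, not the DLR implementation (the gain averaging across clusters is an assumption):

```python
import numpy as np

def scan_angle_correction(img, labels, deg=3):
    """Simplified image-based scan-angle correction: for each cluster,
    compute the mean intensity per scan column, smooth it with a
    polynomial, and normalise every column to the nadir column."""
    n_cols = img.shape[1]
    nadir = n_cols // 2              # reference viewing direction
    cols = np.arange(n_cols)
    gains = []
    for k in np.unique(labels):
        # average intensity profile of this cluster along the scan direction
        prof = np.full(n_cols, np.nan)
        for c in cols:
            members = img[:, c][labels[:, c] == k]
            if members.size:
                prof[c] = members.mean()
        valid = ~np.isnan(prof)
        # polynomial approximation extrapolates the profile over the swath
        smooth = np.polyval(np.polyfit(cols[valid], prof[valid], deg), cols)
        gains.append(smooth[nadir] / smooth)
    # combine the per-cluster correction functions (here: plain average)
    gain = np.mean(gains, axis=0)
    return img.astype(float) * gain[None, :]
```

Applied to a strip with a smooth multiplicative brightness falloff towards the swath edges, this recovers a radiometrically flat image.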
Geometric rectification: The line scanner imagery is geometrically distorted with respect to a mapping frame. The upcoming high precision direct georeferencing systems, consisting of a combined GPS/IMU and one or more imaging sensors, can be used for orthoimage production, provided a digital elevation model (DEM) is available in the case of single imagery. The utilisation of direct measurements of the image exterior orientation parameters by a GPS/IMU system for image rectification is called direct georeferencing and allows a fast automatic ortho-rectification of the remotely sensed data.
In a first step of the orthoimage production, the six parameters of the exterior orientation (position and attitude data, normally corrected by the lever arm calibration values) are synchronised with each image line. The orthoimage processor takes as input the exterior orientation for each scan line, the model of the sensor (whiskbroom system) in combination with the calibrated interior orientation, and a digital elevation model (DEM). Applying the rigorous collinearity equation, the intersection point of each sensor look direction with the DEM is iteratively calculated. The resulting irregular grid is filled with bilinearly interpolated pixel values. The orthoimage processor RECTIFY supports a multitude of map projections (including local topocentric or ECEF coordinate systems) and geodetic datum transformations as well as ellipsoid-to-geoid transformations.
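The iterative ray/DEM intersection at the heart of this step can be sketched as follows: the look ray is projected down to a height guess, the DEM height at the footprint is looked up, and the guess is updated until it converges. This is a minimal sketch under assumed conventions (regular DEM grid, no bounds checking, flat starting height), not the RECTIFY implementation:

```python
import numpy as np

def intersect_ray_with_dem(origin, direction, dem, cell=1.0,
                           z0=0.0, tol=0.1, max_iter=50):
    """Iteratively intersect a downward sensor look ray with a DEM grid.
    origin: (x, y, z) sensor position; direction: look vector with z < 0."""
    def dem_height(x, y):
        # bilinear interpolation in the DEM grid (no bounds checking)
        i, j = y / cell, x / cell
        i0, j0 = int(np.floor(i)), int(np.floor(j))
        di, dj = i - i0, j - j0
        h = dem[i0:i0 + 2, j0:j0 + 2]
        return (h[0, 0] * (1 - di) * (1 - dj) + h[0, 1] * (1 - di) * dj
                + h[1, 0] * di * (1 - dj) + h[1, 1] * di * dj)

    z_guess = z0
    for _ in range(max_iter):
        # shoot the ray down to the current height guess
        t = (z_guess - origin[2]) / direction[2]
        x, y = origin[0] + t * direction[0], origin[1] + t * direction[1]
        z_new = dem_height(x, y)
        if abs(z_new - z_guess) < tol:
            break
        z_guess = z_new
    return np.array([x, y, z_new])
```

For moderately sloped terrain this fixed-point iteration converges in a few steps; steep slopes near layover require more careful ray marching.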
Synthetic Aperture Radar (SAR) remote sensing systems
Radar sensors can be operated regardless of weather and light conditions and therefore have a high rate of availability. Synthetic Aperture Radar (SAR) systems provide a two-dimensional microwave image of the area scanned by the radar. A three-dimensional image is obtained in the interferometric mode, which involves acquisition and combination of two or more images of the same area from slightly different observation angles.
During the flight, the SAR sensor periodically transmits microwave pulses orthogonal to its flight direction, which are scattered back by the illuminated targets and finally received and stored by the system. The geometry, the surface roughness and the electromagnetic properties of the illuminated object, as well as the frequency and polarisation of the radar, influence the magnitude and phase of the backscattered signal. The geometric resolution in flight direction is determined by the length of the synthetic aperture and is therefore independent of the distance between object and sensor. In cross-track direction the resolution is defined by the bandwidth of the transmitted signal and again is independent of the distance between object and sensor.
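The bandwidth/resolution relation stated here is the standard δr = c/(2B) in slant range, which projects onto the ground by the sine of the incidence angle. A minimal numeric check (bandwidth value taken from the system table later in this section):

```python
import math

C = 2.998e8  # speed of light in m/s

def slant_range_resolution_m(bandwidth_hz):
    """Cross-track (slant range) resolution set by the pulse bandwidth."""
    return C / (2.0 * bandwidth_hz)

def ground_range_resolution_m(bandwidth_hz, incidence_deg):
    """Projection of the slant range resolution cell onto the ground."""
    return slant_range_resolution_m(bandwidth_hz) / math.sin(math.radians(incidence_deg))

# 400 MHz bandwidth gives about 0.37 m in slant range, consistent with
# the 0.5 m ground resolution quoted for the X-band system below at
# moderate incidence angles.
res = slant_range_resolution_m(400e6)
```

Note that neither expression contains the sensor-to-object distance, illustrating the range independence stated in the text.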
SAR acquisitions can be carried out in different modes according to the user's requirements. The most popular Stripmap mode allows contiguous strips to be mapped with reasonable resolution. In Spotlight mode, higher azimuth resolution can be achieved, whereas in ScanSAR mode wider swaths are obtained, yet at the expense of a significantly lower resolution.
SAR systems are operated on airborne as well as on spaceborne platforms. Spaceborne systems presently provide a geometric resolution of up to 10 m × 10 m (Radarsat 1, stripmap mode), allowing strips with a width of 50 km. Due to this extended coverage and typical repeat cycles of only a few days, spaceborne systems are ideally suited for operational global monitoring. In addition, adequate repeat intervals together with the stability of satellite platforms enable the use of the differential interferometry technique (D-InSAR), allowing changes in terrain elevation to be detected in the centimetre range. With today's standard airborne systems, geometric resolutions up to 0.5 m can be achieved, depending on several parameters such as the carrier frequency. Those systems usually illuminate strips a few km wide. In general, airborne systems are characterised by their high geometric resolution, the large number of available frequency and polarisation modes, and their high flexibility of use. In terms of performance, today's airborne systems can be seen as the predecessors of future satellite systems.
The following paragraphs provide a brief description of the SAR system geometry and standard SAR signal processing approach valid for both airborne and spaceborne systems.
Imaging Geometry of Sidelooking Radar
Fig. 4 shows the typical geometry of a SAR system. The carrier flies at height h along azimuth direction x. The radar illuminates a strip of width SW in range direction r by transmitting short pulses with a certain pulse repetition frequency (PRF), so that each object on the ground is illuminated many times according to PRF and carrier speed. The size of the illuminated area depends on the angle of incidence and the antenna beamwidths θa and θe. The absolute location of the illuminated area is determined by the antenna depression angle θdr and the squint angle θda. The geometry described applies to both airborne and spaceborne systems. In the latter case, however, the earth's curvature has an impact on the imaging geometry.
Due to the sidelooking characteristics of the imaging geometry, shadowing (similar to optical observations) as well as layover may occur in the final images, which is a fundamental limitation, especially for areas with steep terrain slopes or man-made structures.
SAR Signal Processing
In order to obtain images of the earth's surface, the received radar signal has to undergo an image formation (or processing) step. The main part of this processing is the SAR focusing. Due to the two key principles of SAR imaging, the synthetic aperture principle and the transmit frequency modulation technique, a target's backscatter information is distributed over a two-dimensional area in raw data space, so that a visualisation of the raw data looks like an image containing pure noise. Using a technique called matched filtering, the information is compressed so that each target appears with its true radiometric and geometric properties according to the system parameters.
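The matched filtering step can be illustrated in one dimension: a long linear FM (chirp) pulse, spread over many samples in the raw data, is compressed to a sharp peak at the target delay by correlating with a replica of the transmitted pulse. This toy example (all parameter values are arbitrary assumptions) is not a full SAR processor, which applies the same principle in both range and azimuth:

```python
import numpy as np

fs = 1.0e6                 # sample rate (arbitrary)
T, B = 1.0e-3, 1.0e5       # pulse length and chirp bandwidth (arbitrary)
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # transmitted LFM pulse

delay = 300                                     # target position in samples
raw = np.zeros(4096, dtype=complex)
raw[delay:delay + chirp.size] = chirp           # noise-free received echo

# matched filter = correlation with the conjugated, time-reversed replica;
# the spread-out echo collapses to a single sharp peak at the target delay
compressed = np.abs(np.convolve(raw, np.conj(chirp[::-1]), mode="valid"))
peak = int(np.argmax(compressed))               # equals `delay`
```

The compression gain (pulse length times bandwidth) is what allows SAR to combine long, low-power pulses with fine resolution.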
Additional processing methods are needed to correct for SAR-specific distortions concerning the image geometry and radiometry. Due to the sidelooking principle, geometric distortions are caused by topographic variations and the earth's curvature. They can be removed using a priori knowledge of the surface topography. The brightness of the individual objects depends on a variety of parameters of the system, the imaging geometry and the object itself. A final radiometric calibration ensures an object-inherent signal strength according to the backscatter properties, which is a prerequisite for reliable automatic classification.
Operational SAR System
For the acquisition of the SAR data shown below, an airborne X-band SAR system designed and constructed by Aero-Sensing Radarsysteme GmbH and now operated by Intermap Technologies GmbH has been used. This SAR system represents the state of the art in the respective frequency ranges, X- and P-band, with respect to end-to-end performance and operational use. Aero-Sensing/Intermap Technologies has designed, constructed and operated airborne SAR and interferometric SAR systems since 1996. The X-band system has been in operational use for commercial flight campaigns since 1996, the P-band system since 1998. Both have been operated in numerous commercial flight campaigns. Among others, extended mapping projects have been carried out in Venezuela (268,000 km²), Indonesia (45,000 km²), Brazil (10,000 km²), the USA (40,000 km²) and Jamaica (16,000 km²). The following table summarises the key system parameters.
System parameter      X-band                 P-band
Operating frequency   9.55 GHz               415 MHz
Bandwidth             400 MHz                70 MHz
Polarisation          HH                     HH, HV, VH, VV
Ground resolution     0.5 m                  2.5 m
InSAR baseline        0.6, 1.5 and 2.4 m     variable (repeat-pass)
Flight altitude       500 ... 9000 m
Aircraft              Gulfstream Commander
6. Automated feature recognition using object-oriented image analysis technology
To take full benefit of the digital image data stream supplied by the imaging sensors of a satellite, aircraft or UAV, the data should be analysed by automated techniques rather than by pure visual inspection. Features which have changed along the pipeline track and possibly interfere with the buried pipe should be automatically detected, based on the supplied digital imagery, and forwarded to an alarming system. While this is possibly an easy task for a skilled human analyst, software mimicking human visual understanding is currently not used for this task. Traditional image analysis software is either targeted at evaluating the numeric signal contained in the single image pixel or at finding a specific target based on a fixed image pixel template by means of template-to-image matching.
However, the human visual system does not look at pixels but immediately grasps the content of an image by analysing the "objects" contained in the image and taking their "spatial relations" into account. Moreover, the information about "objects" is weighted according to its reliability. The sum of all this information (including the background knowledge of the analyst) determines the interpretation of an image. A new algorithmic approach uses exactly this procedure to find potentially dangerous objects in digital imagery.
This approach is implemented in the software eCognition (REF). Within eCognition, digital raster images are "segmented" to generate a hierarchy of objects, i.e. homogeneous areas (such as houses and streets, but also cars on streets) are treated as "objects" and no longer as sets of individual pixels. All identified objects are annotated with their colour, colour statistics, size, shape and neighbourhood relations to other objects.
In order to tell eCognition what kind of objects are of interest, "rule bases" specify the visual appearance of objects, are able to incorporate other information (such as the position of the pipeline corridor supplied in a GIS structure) and apply certain reliability weights to the defined information (e.g. a "car" has a minimum and a maximum size). The reliability is introduced by means of fuzzy logic reasoning. The rules thus defined can then be applied to the network of image objects automatically generated by eCognition. The result is the identification of the image objects defined in the rule base (Figure 5). Due to the fuzzy logic involved, identified image objects are labelled with a reliability of extraction. Based on the quality of the input image data and the unambiguous definition of rule bases, objects can carry certain degrees of "danger" to be imposed on the pipeline system. At a certain "danger" level, these objects are subject to further, possibly human, intervention and inspection. Because eCognition identifies objects with a clear outline, object outlines can directly be vectorised and exported to Geographic Information Systems (GIS) for further management and analysis.
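The fuzzy reliability weighting can be sketched with trapezoidal membership functions. The "car" thresholds below are invented for illustration; this is not eCognition's actual rule syntax:

```python
def trapezoid(x, a, b, c, d):
    """Fuzzy membership: 0 below a, ramps up to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def car_reliability(area_m2, elongation):
    """Hypothetical rule: a 'car' object is 10-25 m² and elongated."""
    mu_area = trapezoid(area_m2, 5.0, 10.0, 25.0, 40.0)
    mu_shape = trapezoid(elongation, 1.2, 1.8, 3.5, 5.0)
    # fuzzy AND (minimum operator) combines the rule terms into one
    # reliability of extraction for the object
    return min(mu_area, mu_shape)
```

Objects near the class boundaries receive intermediate reliabilities instead of hard yes/no labels, which is exactly what allows a downstream "danger" level to trigger human inspection only where warranted.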
Initial tests to classify objects within the pipeline corridor based on high resolution spaceborne optical imagery have been performed (Figure 6). Due to the use of shape and neighbourhood relations in object-oriented image classification, the dependence on the spectral reflection of objects can be diminished (in parts even allowing the use of radar instead of optical images). Rule sets are identified which classify a scene as independently as possible of the sensor technology used and which are only updated (in an operational system automatically) by certain functions based on specific parameters of the sensor (including image geometry) and the scene (e.g. the sun elevation and the corresponding casting of shadows play an important role in the identification of objects).
Among the operational advantages of pipeline monitoring is the general knowledge of the scene, due to the frequent observation of the pipelines. An operational image analysis system therefore has to concentrate less on identifying the entire scene as such and more on addressing the differential changes in the scene which might have occurred since the last observation. As a multitude of sensors and imaging geometries would be used by an operational system, the "normalisation" of the general scene information to a standard GIS format would be appropriate. The task is then to extract the changes within the updated imagery with reference to a GIS (and not the changes of one recent optical image with respect to a previous radar image of the same scene) (Figure 7). Again, the advantage of object-oriented feature analysis is the concentration on object appearance and neighbourhood parameters and therewith the wide independence from imaging scale, geometry and sensor technology.
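The GIS-referenced change detection described above reduces, at its core, to object matching: objects extracted from the new image are compared with the stored reference layer, and unmatched detections are flagged for inspection. The data structures and the distance threshold below are assumptions for illustration:

```python
def detect_changes(detected, baseline, max_dist=5.0):
    """Flag changes between newly extracted objects and a GIS baseline.
    detected/baseline: lists of (x, y, class_label) tuples in map
    coordinates; two objects match if same class and within max_dist."""
    def close(a, b):
        return a[2] == b[2] and ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= max_dist
    # objects in the new image with no counterpart in the reference layer
    new_objects = [d for d in detected if not any(close(d, b) for b in baseline)]
    # reference objects that are no longer found in the new image
    vanished = [b for b in baseline if not any(close(d, b) for d in detected)]
    return new_objects, vanished
```

Because the comparison happens at object level against the GIS, the new image may come from a different sensor or geometry than the data that built the baseline, which is the independence argued for in the text.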
7. Experimental optical and SAR data
The suitability of optical and SAR sensors for the inspection of gas pipelines has been shown in a first demonstration flight campaign in the EU project PRESENSE (http://www.presense.net/): different objects (e.g. excavators) were placed on sites typical of gas pipeline surroundings (Figs. 8 and 9). After the observation of these scenarios with the airborne optical and SAR sensors, the objects were moved to different places, and data were taken for the changed scenarios. The changes can clearly be seen by eye (see Fig. 10). Assuming a (future, automatically operating) feature extraction system, this type of data can be processed to produce the correct alarms in cases of real threats to pipelines.
8. Two possible UAV scenarios for a future pipeline inspection system
Civil unmanned aerial vehicles (UAVs) seem to have great potential to contribute to the improvement of the quality of life for the European public and beyond. Civil UAVs can perform where manned flight is too dangerous, expensive or monotonous. There are numerous applications where the use of Civil UAVs is highly preferable over manned flight, such as detecting storms and forest fires, or inspecting large infrastructures such as pipelines or electric power lines. However, the use of Civil UAVs is presently highly limited by the lack of regulations, standards and procedures necessary to operate them in a civil ATC/ATM environment. The establishment of airworthiness and operational certification standards is necessary to open the airspace for Civil UAVs.
These airworthiness standards for UAVs must ensure that the appropriate safety level (with respect to potential risks to persons and property on the ground) is met and that UAVs gain public trust as well as social and political acceptance. UAV airspace integration will bring new operational aspects to the ATC/ATM community because of the differences in control, performance and flight objectives between Civil UAVs and general aviation. It can be expected that all these safety-related problems can be solved within the next few years.
Once the safety and guidance issues have been solved and operational certification procedures exist, Civil Unmanned Aerial Vehicles will be ready to operate in civil airspace and to serve numerous civil applications: UAVs appear to be suitable airborne platforms for the task of regular inspection of gas pipelines. The requirements with respect to sensor type and size for operational monitoring of gas pipelines, however, depend on the type of UAV. For this purpose, two different scenarios for UAV-based pipeline monitoring systems will be considered:
1. Small and lightweight, low-altitude UAVs with a limited sensor and weight capacity, e.g. the LUNA X-2000 system (Fig. 11)
2. Medium-size, mid-altitude UAVs with a weight capacity sufficient for multi-sensor applications, e.g. the EAGLE system (Fig. 12).
UAV scenario 1 – small and lightweight low-altitude UAV
A small, lightweight, low-altitude UAV can be characterised by the following parameters:
Altitude: low
Payload (typical): <25 kg
Endurance: 5-6 hrs
Such a UAV is operated in the uncontrolled (lower) airspace and therefore requires appropriate sense-and-avoid technology to avoid collisions with obstacles such as buildings, electrical power lines or low-flying objects (balloons). For the purpose of pipeline inspection, an altitude of ca. 100 m might be appropriate, which is generally below the clouds. In this case, an optical/IR sensor system will be sufficient. Furthermore, high resolution can be achieved even with small optics. From the technical system and platform point of view, all components for UAV-based pipeline inspection are available:
• The LUNA X-2000 platform has been operated successfully in Kosovo and Iraq.
• Lightweight optical/IR sensors are commercially available for airborne applications.
• On-board image pre-processing tools (hardware and software) exist.
• Data transmission to ground stations is no major technical problem.
• The image processing and feature extraction effort is relatively simple, since the processing is restricted to one sensor.
However, safe automatic operation in uncontrolled low airspace still has to be developed, which is a necessary condition for a feasible technical solution.
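The resolution claim for the low-altitude scenario can be illustrated with the standard ground sampling distance relation GSD = H·p/f for a nadir-looking frame camera. The camera parameters below (35 mm focal length, 5 µm pixel pitch) are illustrative assumptions, not the specification of any actual LUNA payload:

```python
def ground_sampling_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Ground sampling distance in metres per pixel for a nadir-looking
    frame camera: GSD = H * p / f (altitude H, pixel pitch p, focal
    length f, all converted to metres)."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# At 100 m altitude, even a small camera resolves roughly 1.4 cm per pixel,
# far finer than needed to recognise an excavator near the pipeline:
print(round(ground_sampling_distance(100, 35, 5), 4))  # 0.0143
```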
UAV scenario 2 – medium-size and medium-weight mid-altitude UAV
A typical MALE (Medium Altitude Long Endurance) UAV is characterised by the following parameters:
Payload (typical): 200 kg
Endurance: up to 30 hrs
Such a UAV is operated in controlled airspace and must, therefore, be integrated into a full ATM/ATC environment. Since the UAV is operated above 1000 m, i.e. in general not below the clouds, a radar (SAR) sensor is required, which could be complemented by an optical/IR sensor system. This, in turn, requires an appropriate payload capacity. From the technical system and platform point of view, all components for UAV-based pipeline inspection are available:
• The EAGLE UAV has demonstrated successful operation, e.g. in campaigns in Northern Scandinavia in 2002.
• One possible (and tested) payload was a radar/SAR system, thus proving the respective feasibility.
• Due to the high radar data rates in the Gbit/s range, data transmission to ground stations requires advanced IT technology; however, this problem has been shown to be solvable.
• The image processing and feature extraction effort is much more complicated than for scenario 1, since (i) radar data require more sophisticated processing steps and (ii) these data generally have to be combined/fused with data from other sensors.
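The data-rate problem mentioned above can be made concrete with a back-of-the-envelope calculation: a raw SAR stream in the Gbit/s range cannot be downlinked in real time over a typical line-of-sight data link, so on-board compression, buffering or processing is needed. The rates used below are illustrative assumptions, not measured figures from the EAGLE system:

```python
def required_compression_ratio(sensor_rate_gbps, downlink_rate_mbps):
    """Minimum compression ratio needed to stream a sensor's raw data
    rate (in Gbit/s) over a downlink (in Mbit/s) in real time."""
    return (sensor_rate_gbps * 1000.0) / downlink_rate_mbps

# A 1 Gbit/s SAR raw-data stream over a 50 Mbit/s link needs at least
# 20:1 compression, or equivalent on-board processing/buffering:
print(required_compression_ratio(1.0, 50.0))  # 20.0
```

This is one reason why on-board pre-processing (e.g. SAR focusing and object extraction before downlink) is attractive for scenario 2.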
As in scenario 1, the certification standards, safety regulations and ATM/ATC regulations still have to be developed, which is again a necessary condition for the operation of a UAV-based pipeline monitoring system.
For both UAV scenarios, the technical feasibility of the overall system and platform is only the first step: Once all (future) requirements and standards for safe operation are met, the crucial aspect of cost-effectiveness can be considered, which has to take into account all cost categories, including potentially high expenses for certification and operation.
9. Summary and Outlook
UAVs provide appropriate platforms for a remote sensing based inspection system: Suitable small and medium-size UAVs have been developed, and their operation is technically feasible in controlled as well as uncontrolled airspace. The technology for sensors, as well as for data and information processing, seems to be suitable for this task. On the other hand, several key issues have to be solved before a UAV-based pipeline inspection system can be deployed:
1. The data and information processing has to be developed to an operational standard; the capability of near-real-time determination of threats with high probability and low false-alarm rates does not yet exist.
2. A complete operational system consisting of UAV platform, sensors, data processing and alarm detection has not yet been demonstrated and proven feasible in a complete mission.
3. Finally, the certification and operation standards for safe and efficient operation of UAVs in controlled as well as uncontrolled airspace may be the most crucial problem.
This work has been accomplished partly in the framework of the EU projects PRESENSE (http://www.presense.net/), Contract No. ENK6-CT2001-00553, and UAVNET (http://www.uavnet.com/), Contract No. G4RT-CT-2001-05053. The respective inputs from both consortia, especially from Intermap Technologies and NLR, are kindly acknowledged. Special thanks to Marcus Schwäbisch (Intermap Technologies) and Peter Reinartz (DLR) for processing the sensor data.
[1] Hausamann, D., "Civil Applications of UAVs – User Approach", Shephard's Civil UAV Symposium, London, UK, 17–19 July (2002).
[2] Hausamann, D., Brokx, W., "User Driven UAV Applications – Pipeline Monitoring and other Examples", Proc. First European Conference on the Applied Scientific Use of UAV Systems, Kiruna, SW, 10–11 June (2002).
[3] Zirnig, W., Hausamann, D., Schreier, G., "A Concept For Natural Gas Transmission Pipeline Monitoring Based on New High-Resolution Remote Sensing Technologies", Contributed Paper, International Gas Research Conference 2001, Amsterdam, 5th–8th November (2001).
[4] Zirnig, W., Hausamann, D., Schreier, G., "High-Resolution Remote Sensing Used to Monitor Natural Gas Pipelines", Earth Observation Magazine, 11, pp. 12–17 (2002).
[5] eCognition (2003); most recent information under www.definiens-imaging.com.
[6] Benz, U. C., Schreier, G. (2001): OSCAR – object oriented segmentation and classification of advanced radar allows automated information extraction. In: Proceedings of IGARSS 2001, July 2001, Sydney.
[7] Blaschke, T., Strobl, J. (2001): What's wrong with pixels? Some recent developments interfacing remote sensing and GIS. In: GeoBIT/GIS 6: 12–17. http://www.definiens-imaging.com/down/GIS200106012.pdf
Figure 1: European gas transmission pipeline network
Figure 2: Airborne Optomechanical Multispectral Scanner Daedalus AADS 1268
Figure 4: SAR viewing geometry (A: perspective view, B: side view, C: top view; point target at (r0, x0), range direction r, azimuth direction x)
Figure 5: Processing steps and data flow within object-oriented eCognition: Raster data, collateral vector data and human knowledge stored in "rule bases" are turned into a hierarchy of objects by automatic object-hierarchy generation. Pre-defined rule bases – applied to this object hierarchy – classify and label the objects. The result is information from images ready to use in geographical information systems.
Figure 6: Identification of objects near a pipeline track based on 1 m resolution IKONOS images. The right image contains the eCognition object classes, separated into areas outside and inside the pipeline surveillance corridor.
Figure 7: Sketch of a fully operational image analysis system for pipeline monitoring: satellite and airborne imagery feed feature extraction and data fusion; the extracted features are compared with reference feature and scenery (GIS) information available for the pipeline track; change detection then triggers the alarming system. Some identified objects (such as excavators) would raise an alarm.
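The comparison principle of Figure 7 can be sketched in a few lines: objects extracted from the imagery are checked against the reference GIS objects known for the pipeline corridor, and any unknown object inside the corridor raises an alarm. All object identifiers, classes, coordinates and the corridor extent below are illustrative assumptions:

```python
def find_alarms(detected, reference_ids, corridor):
    """Return alarms for detected objects inside the surveillance corridor
    that are not listed in the reference GIS.

    detected:      list of (object_id, object_class, x, y) tuples
    reference_ids: set of object ids known from the GIS reference
    corridor:      (xmin, xmax) extent of the surveillance corridor
    """
    xmin, xmax = corridor
    alarms = []
    for obj_id, obj_class, x, y in detected:
        inside = xmin <= x <= xmax
        if inside and obj_id not in reference_ids:
            alarms.append((obj_id, obj_class))
    return alarms

reference = {"road_7", "building_3"}           # known GIS objects
detected = [
    ("road_7", "road", 12.0, 5.0),             # known object, no alarm
    ("obj_99", "excavator", 10.0, 8.0),        # unknown object in corridor
    ("obj_42", "vehicle", 250.0, 3.0),         # outside corridor, ignored
]
print(find_alarms(detected, reference, (0.0, 50.0)))  # [('obj_99', 'excavator')]
```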
Figure 8: Pipeline Monitoring Test Site
Figure 9: Test Scenario with Excavator
Figure 10: Pipeline scenario with an excavator in different positions: a,c,e) position 1; b,d,f) position 2. a,b) airborne optical images (DLR Daedalus); c,d) airborne infrared images (DLR Daedalus); e,f) airborne SAR images (Intermap Technologies X-SAR)
Figure 11: LUNA X-2000 UAV
Figure 12: EAGLE UAV