
Rochester Institute of Technology

RIT Scholar Works

Theses

Thesis/Dissertation Collections

8-1-1990

Merging panchromatic multispectral images for enhanced image analysis

Curtis K. Munechika

Follow this and additional works at:

http://scholarworks.rit.edu/theses

This Thesis is brought to you for free and open access by the Thesis/Dissertation Collections at RIT Scholar Works. It has been accepted for inclusion in Theses by an authorized administrator of RIT Scholar Works. For more information, please contact ritscholarworks@rit.edu.


MERGING PANCHROMATIC AND

MULTISPECTRAL IMAGES

FOR

ENHANCED IMAGE ANALYSIS

by:

CURTIS K. MUNECHIKA, Captain, USAF

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science

in the Center for Imaging Science in the College of Graphic Arts and Photography at the

Rochester Institute of Technology

August 1990

Signature of the Author

Accepted by

Center for Imaging Science
College of Graphic Arts and Photography

Rochester Institute of Technology Rochester, New York

CERTIFICATE OF APPROVAL

M.S. DEGREE THESIS

The M.S. degree thesis of Curtis Munechika has been examined and approved by the thesis committee as satisfactory for the thesis requirement

for the Master of Science degree

Dr. John R. Schott, Thesis Advisor

Lt. Col. Phil Datema

Mr. Carl Salvaggio


Center for Imaging Science

College of Graphic Arts and Photography
Rochester Institute of Technology

Rochester, New York

THESIS RELEASE PERMISSION FORM

Title of Thesis: Merging Panchromatic and Multispectral Images for Enhanced Image Analysis

I, Curtis K. Munechika, grant permission to the Wallace Memorial Library of the Rochester Institute of Technology to reproduce this thesis in whole or in part, provided any reproduction will not be of commercial use or for profit.

MERGING PANCHROMATIC AND

MULTISPECTRAL IMAGES FOR

ENHANCED IMAGE ANALYSIS

by

Curtis K. Munechika

Submitted to the Center for Imaging Science in
partial fulfillment of the requirements for the
Master of Science degree at the
Rochester Institute of Technology

ABSTRACT

This study evaluates several methods that enhance the spatial resolution of multispectral images using a finer resolution panchromatic image. The resultant hybrid, high resolution, multispectral data set shows improved visible interpretation and classification accuracy, while preserving the radiometry of the original multispectral images. These

ACKNOWLEDGEMENTS

This work could not have been completed without the support of many people. I would first like to thank the United States Air Force for allowing me the opportunity to study at the Center of Imaging Science. The experience has been exceptionally rewarding.

I would also like to thank my committee members for all their assistance:

to Dr. John Schott, for providing the overall guidance, focus, and an occasional kick in the pants;

to Lt Col Phil Datema, for his time, patience, and support in maintaining a long distance working relationship; and

to Carl Salvaggio, for answering all of my not-so-quick questions.

Special thanks also to: Jim Warnick, who first started this study and was my best sounding board; to Dr. Roger Easton, whose company at the oddest hours was appreciated; to Steve Schultz, who rescued my work from a terminally ill computer; and to Laura Abplanalp and Leslie Genther, who helped in putting this together.

Lastly, I wish to acknowledge my comrades-in-arms for their encouragement, assistance, and abuse: Ranjit Bhaskar, Betsy Fry, Gopal Sundaramoorthy, Wendy Rosenblum, Steve

DEDICATION

This work is dedicated to my family...

To my parents, for instilling my desire to learn (and for their babysitting);

To my brother, for his insight and time (as well as use of his computer);

and especially

to

Table of Contents

List of Figures
List of Tables

1.0 Introduction 1

1.1 Spatial, Spectral, Radiometric Resolution 2

1.1.1 Resolution and Classification Accuracy 3

1.1.2 Resolution Trade-offs 5

1.2 Overview on Merging Images 7

1.2.1 Merging for Display 7

1.2.2 Merging by Separate Manipulation of Spatial Information 9

1.2.3 Merging and Maintaining Radiometry 10

2.0 Merging Methods and Modifications 12

2.1 DIRS Method for Merging 12

2.1.1 Results Using the DIRS Merging Method 13

2.1.2 Concerns on the DIRS Merging Method 13

2.2 General Modifications to the DIRS Method 17

2.2.1 Creating a TM Panchromatic Image 17

2.2.2 Interpolated TM Input Images 24

2.2.3 A Technique for Radiometric Post-Correction 25

2.3 DIRS Enhancement 1 Method for Merging 27

2.4 DIRS Enhancement 2 Method for Merging 29

2.4.1 A Modification to DIRS Enhancement 2 Method 36

2.5 Price's Method for Merging 37

2.5.1 Case 1 -- Correlated Input Images 37

2.5.2 Case 2 -- Weakly Correlated Bands 40

2.6 Price's Modification -- A Method of Handling Weakly Correlated Bands 42

3.0 Experimental Approach 46

3.1 Selection and Preparation of Test Imagery 46

3.1.1 Registration of Images 47

3.1.2 Blurred Image Sets 49

3.1.3 Figures of Images 50

3.2 Classification of Test Imagery 54

3.2.1 Training Samples for Classification 54

3.2.2 Measuring Classification Accuracy 55

3.2.3 Testing the Significance of Overall Classification Accuracy 56

3.3 Radiometric Error Analysis 57

3.4 Running the Methods and Modifications 59

3.4.1 Running DIRS Enhancement 2 59

3.5 Experimental Procedure 61

3.5.1 Part 1 -- Merging with Coarser Resolution Data Sets 61

3.5.2 Part 2 -- Merging with Coarser Resolution Data Sets with a Lower Registration Error 64

3.5.3 Part 3 -- Merging with Original Resolution Data Sets 64

4.0 Results 65

4.1 The New Synthetic Panchromatic Image 65

4.2 Computing the Error in Reflectance 70

4.3 Results for Part 1 -- Merging with Coarser Resolution Data Sets 75

4.3.1 Observations from Part 1 -- Merging with Coarser Resolution Data Sets

4.4 Results for Part 2 -- Merging with Coarser Resolution Data Sets with a Lower Registration Error 80

4.4.1 Observations from Part 2 -- Merging with Coarser Resolution Data Sets with a Lower Registration Error 84

4.5 Results for Part 3 -- Merging with Original Resolution Data Sets 85

4.5.1 Observations from Part 3 -- Merging with Original Resolution Data Sets 90

4.6 Other Comparisons Between Merging Methods 90

4.6.1 Visual Comparisons 90

4.6.2 Implementation Comparisons 99

5.0 Conclusions 101

6.0 Recommendations 103

6.1 Improvements to the Price Methods 103

6.2 Improvements to the DIRS Methods 104

6.2.1 Recommendations in Processing Correlated Bands 104
6.2.2 Recommendations in Handling Weakly Correlated Bands 106
6.2.3 Improvements to the Interpolated Inputs 107
6.2.4 Improvements to the DIRS Enhancement 2 Method 107

6.3 General Considerations 109

Appendix A -- Classification Results for Original TM Data
Appendix B -- Classification Results for Hybrid Data (30m) using Blurred TM and SPOT Data
Appendix C -- Radiometric Results for the Hybrid Data (30m) using Blurred TM and SPOT Data
Appendix D -- Classification Results for Hybrid Data (30m) using Blurred TM and Re-registered SPOT Data
Appendix E -- Radiometric Results for the Hybrid Data (30m) using Blurred TM and Re-registered SPOT Data
Appendix F -- Classification Results for Hybrid Data (10m) using TM and Re-registered SPOT Data
Appendix G -- Statistical Tests
Appendix H -- Input Parameters to Compute the Weighting Factors for a Synthetic Panchromatic Image
Appendix I

List of Figures

Figure 1-1 Pradines' Merging Method
Figure 2-1 Superpixel and subpixels
Figure 2-2 Reflectance Spectra for Selected Urban Targets
Figure 2-3 Reflectance Spectra for Selected Soil Targets
Figure 2-4 Reflectance Spectra for Selected Water Targets
Figure 2-5 Reflectance Spectra for Selected Tree Targets
Figure 2-6 Reflectance Spectra for Selected Grass Targets
Figure 2-7 Pixel Replication
Figure 2-8 Interpolated Input
Figure 2-9 DIRS Nearest Neighbor Correction Routine
Figure 2-10 DIRS Enhancement 1 Correction Routine
Figure 2-11 Connections between the center subpixel
Figure 2-12 Flow Diagram of DIRS Enhancement 2 Merging Method
Figure 2-13 Linear Regression of TM Bands and the Panchromatic Image
Figure 2-14 Price's Merging Method -- Case 1
Figure 2-15 Creation of a Look-up Table (LUT)
Figure 3-1 Original TM (30m GIFOV) Bands 5,3,2 in RGB
Figure 3-2 Blurred TM (90m GIFOV) Bands 5,3,2 in RGB
Figure 3-3 Original SPOT (10m GIFOV)
Figure 4-1 Plot of TM1 DC versus Estimated Reflectance
Figure 4-2 Plot of TM2 DC versus Estimated Reflectance
Figure 4-3 Plot of TM3 DC versus Estimated Reflectance
Figure 4-4 Plot of TM4 DC versus Estimated Reflectance
Figure 4-5 Comparison of Coarser Resolution Images
Figure 4-6 Comparison of Original Resolution Images
Figure 4-7 Replicated TM Bands 5,3,2 (30m GIFOV) (magnification 4x)
Figure 4-8 Interpolated TM Bands 5,3,2 (30m GIFOV) (magnification 4x)
Figure 4-9 DIRS Method with Interpolated Input (magnification 4x)
Figure 4-10 DIRS Method with Interpolated Input and Post-fixing (magnification 4x)
Figure 4-11 DIRS Enhancement 1 with Post-fixing (magnification 4x)
Figure 4-12 DIRS Enhancement 2 with TM4, TM5 modification (magnification 4x)
Figure 4-13 Price's Method (LUT) (magnification 4x)

List of Tables

Table 1-1 A Comparison of Multispectral Scanners (MSS) and Thematic Mapper (TM)
Table 3-1 Scene Acquisition and Ephemeris Data
Table 3-2 Scene Statistics
Table 3-3 Transformation Coefficients
Table 3-4 New Transformation Coefficients
Table 3-5 Summary Table of Merging Methods
Table 4-1 Summary of Weighting Factors used to Generate the TM Panchromatic Image
Table 4-2 Summary of the Histogram Statistics for the Panchromatic Images
Table 4-3 Summary of the Linear Coefficients used for Adjustment
Table 4-4 Summary of the Error Differences
Table 4-5 Selected Control Points
Table 4-6 Averaged DC and Estimated Reflectance
Table 4-7a Summary of Classification Breakdown by Percentage (30m hybrid)
Table 4-7b Summary of Classification Accuracy with Independent Data Set 1 (30m hybrid)
Table 4-7c Summary Table for 30m Hybrid Results
Table 4-8a Summary of Classification Breakdown by Percentage (re-registered 30m hybrid)
Table 4-8b Summary of Classification Accuracy with Independent Data Set 1 (re-registered 30m hybrid)
Table 4-8c Summary of Classification Accuracy with a Random Data Set 1 (re-registered 30m hybrid)
Table 4-8d Summary Table for Selected 30m Hybrid Sets using a Re-registered SPOT image as Input
Table 4-9a Summary of Classification Breakdown by Percentage (10m hybrid)
Table 4-9b Summary of Classification Accuracy with a Random Data Set 1 (10m hybrid)
Table 4-9c Summary of Classification Accuracy with
Table 4-9d Summary of Classification Accuracy with Independent Data Set 2 (10m hybrid)

1.0 INTRODUCTION

Spatial resolution is an important parameter in image interpretation. However, to capture multispectral images or images within a narrower spectral bandpass, spatial resolution is often diminished. In general, if a sensing system has fine spectral discrimination, then it is physically difficult to also have fine spatial resolution.

The emphasis of this study is to enhance the spatial resolution of multispectral images using data from another sensor. In particular, this study evaluates several techniques to merge the medium resolution Thematic Mapper (TM) multispectral images with a high resolution panchromatic image captured from the SPOT satellite. The performance of each merging technique is measured by classification accuracy on the hybrid images, as well as by how well the hybrid images maintain the radiometric and spectral information of the TM data.

The results of this study show that these merging techniques can produce high resolution multispectral images for enhanced image analysis. Not only is visible interpretation clearly improved, but classification accuracy is also increased. In addition, the resultant hybrid images adequately maintain the TM radiometry.

For the remainder of this introductory chapter, section 1.1 discusses some concepts on resolution in the remote sensing arena. It covers the effects that different types of

1.1 Spatial, Spectral, and Radiometric Resolution

In general terms, resolution can be thought of as the ability of a system to distinguish fine detail. Most often, we think of resolution in terms of "spatial resolution", or how well we can resolve the spatial detail in an image. As such, there are many ways to measure or quantify the spatial resolution of an imaging system (i.e., by the modulation transfer function, ground resolvable distance, etc.).

For simplicity, the ground instantaneous field-of-view (GIFOV) is used in this study. The GIFOV is the projection of the limiting detector aperture onto the ground. It is similar to the ground sample distance and has units of length. Thus the smaller the GIFOV, the smaller the sampling distance on the ground, and the finer the spatial resolution of the system.

However, in the field of remote sensing, and especially with multispectral data, there are other forms of resolution that are equally as important.

"Radiometric resolution" is determined by the number of effective grey levels that are available to the system to represent the scene brightness. For example, an 8-bit system would be able to record an image in 256 levels of brightness. Its radiometric resolution would be finer and more precise than a 6-bit system which can only work with 64 levels.

"Spectral resolution" can be thought of as the width of the bandpass over which radiance is measured. The more narrow this wavelength interval, the finer the spectral resolution. Intuitively, the finer your spectral resolution, the more spectral bands we could

more spectral information will be available.

1.1.1 Resolution and Classification Accuracy

The effect of these three types of resolution on classifying images has been studied, particularly since the introduction of the Thematic Mapper.

The Thematic Mapper (TM) was launched aboard Landsat-4 on July 16, 1982. In comparison with the older Multispectral Scanners (MSS), TM provided finer spatial resolution, narrower and more optimally placed spectral bands, and finer radiometric precision. A comparison chart is shown below.

Table 1-1
A Comparison of Multispectral Scanners (MSS) and Thematic Mapper (TM)

                         MSS              TM
    GIFOV (m)            80               30 (bands 1-5, 7)
                                          120 (band 6)
    Spectral Bandpass    0.5 - 0.6        0.45 - 0.52
    (micron)             0.6 - 0.7        0.52 - 0.60
                         0.7 - 0.8        0.63 - 0.69
                         0.8 - 1.1        0.76 - 0.90
                                          1.55 - 1.75
                                          10.40 - 12.50
                                          2.08 - 2.35
    Quantization Levels  64 (6 bits)      256 (8 bits)

determine their effect on classification accuracy. Even before the TM was launched, Sadowski et al [77] used simulated data to find that classification accuracy would be significantly enhanced with the additional TM spectral bands and the increased number of quantization levels. However, they also discovered that classification accuracy actually decreased as spatial resolution became finer. These results were corroborated by Morgenstern et al [77], Latty and Hoffer [81], Markham and Townshend [81], and Williams et al [84].

The reason behind the apparent lack of effect that finer spatial resolution has on classification is because of two offsetting effects. Improving spatial resolution will sharpen boundaries and reduce the amount of mixed pixels in an image. This reduction in mixed pixels contributes to an increase in classification accuracy. However, improving spatial resolution also increases the within-class spectral variance, causing a decrease in classification accuracy [Landgrebe 77, and Irons et al 85]. Thus, depending on the input image, increasing spatial resolution may or may not aid in computer classification -- even though the advantages of increased spatial resolution appear obvious when conducting manual photointerpretation.

Many of these studies concluded that the commonly-used per-pixel Gaussian maximum likelihood (GML) classifier did not effectively use the additional information that increased spatial resolution provides. However, in these cases, only spectral bands were used as input to the GML classifier.

To better use this information in high spatial resolution images, current classifiers have been following either of two paths, one using the textural features found in the image and the other using context to support classification. For two of the more well-known papers on using texture and context for classification, refer to Haralick [79] and Gurney

Using these types of classifiers which apply spatial information, classification accuracies have increased with more spatial detail [DiZenzo et al 87, Hjort and Mohn 84, Warnick et al 89]. Rosenblum [90] found that the optimal set of input features for classification of high resolution air photos contains both spectral and textural information.

Generally, if a classifier can effectively use all spatial, spectral, and radiometric information, classification accuracies can be improved with finer spatial, spectral, and radiometric resolution.

1.1.2 Resolution Trade-offs

Unfortunately, as in many relationships, there are trade-offs between the different types of resolution. For example, if we wished to have a system with fine spatial resolution, it would be very difficult for this system to additionally have fine spectral resolution. To improve the spatial resolution of our scanning system, we would require a smaller GIFOV. A smaller GIFOV means the energy reaching the sensor has originated from a smaller ground area and -- if other parameters remain constant -- this means less energy reaching the sensor. Lower levels of energy mean a lower signal level available to the sensor, resulting in a lower signal-to-noise ratio (SNR). To produce useful output, the sensor must be at or above a threshold SNR to distinguish a signal from the noise. To compensate for this lower SNR due to a smaller GIFOV, we could broaden the spectral bandpass to allow more energy through -- i.e., spectral resolution is sacrificed to compensate for the gain in spatial resolution.

Conversely, if we wished our data to be captured within a narrower spectral band, again less energy would be incident on the detector. Increasing the GIFOV (losing spatial

acceptable SNR is maintained.

Another trade-off mentioned by Green [88] is a simple data handling problem. Suppose we had 8 Mbits of digital storage. We could approximately store either a:

    1000 x 1000 pixel image, 8 bits/pixel, 1 spectral band;
or
    380 x 380 pixel images, 8 bits/pixel, 7 spectral bands;
or
    1000 x 1000 pixel images, 4 bits/pixel, 2 spectral bands.

The first system provides high spatial resolution but limited spectral information. The second example provides a significant amount of spectral data, but at a low spatial resolution. The third system provides some spectral resolution at high spatial resolution, but at much lower radiometric precision than the first two systems. Other similar data handling constraints that could drive information trade-offs are limitations in transmission and acquisition time.
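As a quick check of the arithmetic in Green's storage example, a few lines of Python (illustrative only, not part of the thesis) confirm that each configuration stays near the 8 Mbit budget:

```python
# Quick arithmetic check of the 8 Mbit storage examples (illustrative only).
configs = [
    ("1000 x 1000, 8 bits/pixel, 1 band",  1000, 1000, 8, 1),
    ("380 x 380, 8 bits/pixel, 7 bands",   380,  380,  8, 7),
    ("1000 x 1000, 4 bits/pixel, 2 bands", 1000, 1000, 4, 2),
]

for name, rows, cols, bits, bands in configs:
    mbits = rows * cols * bits * bands / 1.0e6   # total storage in megabits
    print(f"{name}: {mbits:.2f} Mbits")
# Each case comes out at roughly 8 Mbits (the 380 x 380 case is ~8.1 Mbits).
```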

All these system trade-offs are dependent on the current state of technology. With time and technological advances, information trade-offs will become less severe. But given existing data and existing remote sensing systems, we can overcome these current trade-offs by combining data from two complementary systems. The resultant hybrid data set would contain the "best"

1.2 Historical Overview on Merging Images

To improve the relatively coarse spatial resolution of multispectral images, the idea of merging data from different sensors has been attempted. Early studies reported successful merging with the Landsat MSS images (80 m GIFOV). Radar images from airborne systems and the Shuttle Imaging Radar (SIR-A) have been merged with MSS images to enhance geological interpretation [Daily 79; Chavez 83]. Lauer and Todd [81] combined MSS and RBV images to obtain a hybrid product, while Schowengerdt [82] merged MSS with the Heat Capacity Mapping Mission (HCMM) data.

With the introduction of the TM and SPOT sensor systems, which have finer spatial and spectral resolutions, further studies on merging were reported. All of these merging techniques, as well as those done with MSS images, can basically be classified into 3 categories.

1.2.1 Merging for Display

Price [87] coined the first category of merging techniques as the "ad hoc" approaches. Since the primary concern of these methods was to optimize image display, some unusual operations were done on the data. However, the resultant hybrid images appeared to have increased spatial resolution. Welsh et al [87] summarized these ad hoc methods into two equations:

    M'_i = a_i × (M_i × P)^(1/2) + b_i                                        (1-1)
or
    M'_i = a_i × (w_1 × M_i  <op>  w_2 × P) + b_i                             (1-2)

where:
    M'_i = the digital count (DC) for a pixel in the i-th band of the merged image;
    M_i = DC for the corresponding pixel in the i-th multispectral image;
    P = DC for the corresponding panchromatic reference image pixel;
    w_1, w_2 = weighting factors;
    a_i, b_i = scaling factors to optimize the dynamic range; and
    <op> = operator which could be addition, subtraction, multiplication, ratio, etc.
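A minimal sketch of one such ad hoc display merge, using the geometric-mean form of equation (1-1); the function name and arguments are illustrative and assume two co-registered arrays of the same size:

```python
import numpy as np

def adhoc_merge(ms_band, pan, a=1.0, b=0.0):
    """Ad hoc display merge in the style of eq. (1-1): geometric mean of a
    multispectral band and the panchromatic image, then a linear stretch."""
    merged = a * np.sqrt(ms_band.astype(float) * pan.astype(float)) + b
    return np.clip(merged, 0, 255).astype(np.uint8)
```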

Using these methods and simulated SPOT data, Cliche [85] integrated the panchromatic channel into the multispectral channels to significantly improve visible interpretation. Chavez [84] added edge enhanced (simulated) SPOT data to TM images. Chavez [86] also merged TM data with a digitized panchromatic photograph. Hashim [88] simply "overlaid" registered TM and MSS images to obtain a hybrid product.

Currently, many Geographical Information Systems (GIS), which contain layers of information in their database, implement some sort of data combination for display to the user [Welsh 85; Walsh et al 87]. Since a high priority of a GIS is to optimize image display, "ad hoc" approaches to merging are frequently used. One of the simpler methods for an enhanced RGB display is to place the high resolution panchromatic image into the green channel, and two low resolution multispectral bands into the red and blue channels. Since the green channel contributes the most to the intensity component, the

1.2.2 Merging by separate manipulation of spatial information

A multispectral image can be thought of as having a spectral component and a spatial component. The second type of merging algorithm first tries to separate these components, then manipulates the spatial component to obtain spatially enhanced images without touching the spectral information.

One way to separate spatial/spectral information is by looking at the spatial frequencies of the image. An image, according to Schowengerdt, can also be considered to be the sum of a low spatial frequency component and a high spatial frequency component [Schowengerdt 80].

    image = lowpass(image) + highpass(image)                                  (1-3)

The primary assumption in this technique is that the spectral information is contained in the lowpass component and that the spatial information is in the highpass component. If the edges, or the high spatial frequencies, of the images to be merged are correlated, then the highpass component of the finer spatial resolution image could be substituted for the highpass component of the lower spatial resolution image. Assuming that the majority of the spectral information is contained in the low frequency component, the resultant hybrid image would maintain its spectral content while gaining improved spatial resolution.
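A minimal sketch of this high-pass substitution idea, assuming co-registered arrays on the same grid and a simple box filter for the lowpass step (the function and parameter names are illustrative, not from the thesis):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def highpass_substitution(ms_band, pan, kernel=3):
    """Replace the highpass component of a coarse multispectral band with
    the highpass component of the fine-resolution panchromatic image."""
    ms = ms_band.astype(float)
    p = pan.astype(float)
    lowpass_ms = uniform_filter(ms, size=kernel)         # spectral content
    highpass_pan = p - uniform_filter(p, size=kernel)    # spatial detail
    return lowpass_ms + highpass_pan
```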

Schowengerdt used this technique to reconstruct compressed MSS images by extrapolating the edge information from the high resolution bands to the low resolution (compressed) bands. Tom et al [85] used a variation of this technique to sharpen TM band

Another way to separate spatial and spectral information is based on the intensity-hue-saturation (IHS) color transformation [Hayden et al 82]. The IHS transformation allows spatial information, contained in the intensity component, to be treated separately from the spectral information which is embedded in the hue and saturation components. The user can then manipulate the spatial information via the intensity component while maintaining the overall color balance of the original scene.

With the IHS method, three carefully selected multispectral bands are transformed into the IHS domain. The digital counts (DC) of the intensity component can then be modified using the high resolution (panchromatic) reference image DCs. The modified data is then transformed back into the red-green-blue (RGB) color domain and displayed.

Using IHS methods, Carper et al [87] and Welsh and Ehlers [87] obtained visually superior results to those obtained using Cliche's (ad hoc) methods. Care has to be taken, though, to ensure that the high spatial resolution reference image is highly correlated to the other input images. Having high correlation between two images means that the linear relationship between the images is very strong.
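A compact sketch of the substitution idea behind the IHS approach, assuming three co-registered bands already resampled to the panchromatic grid and scaled to [0, 1]; the HSV transform here is a stand-in for the IHS transform used in the cited papers:

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_style_merge(red, green, blue, pan):
    """Replace the intensity-like channel with the panchromatic image,
    leaving the hue/saturation (color) information untouched."""
    rgb = np.dstack([red, green, blue])
    hsv = rgb2hsv(rgb)          # hue, saturation, value
    hsv[..., 2] = pan           # inject the high-resolution spatial detail
    return hsv2rgb(hsv)         # back to RGB for display
```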

1.2.3 Merging and Maintaining Radiometry

All of these methods up to now produced visually enhanced images. Spatial resolution appeared to take on the reference (panchromatic) image resolution while retaining most of the spectral information. However, in all cases, the specific radiometric values of the multispectral images were lost. The third type of merging algorithm is similar to the ad hoc approach, but is more statistically-based and attempts to maintain radiometric integrity.

One method suggested by Pradines [86] to keep radiometric quality of the

    Panchromatic block:        Hybrid block:
      P1  P2                     XP1  XP2
      P3  P4                     XP3  XP4

    where XP(J) = X × P(J) / (P1 + P2 + P3 + P4),  J = 1..4,
    and X is the DC of the corresponding multispectral pixel.

Figure 1-1. Pradines' Merging Method

By using this method, the aggregate of the four hybrid pixels will return the radiometry of the original multispectral image. However, the high spatial resolution image (panchromatic channel) must be correlated with the individual multispectral images. For those bands not correlated with the reference image, this merging algorithm cannot be used.
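A short sketch of Pradines' distribution rule as given in Figure 1-1, assuming one multispectral pixel per 2x2 panchromatic block (the helper name and the example values are illustrative):

```python
import numpy as np

def pradines_block(x, pan_block):
    """Distribute one multispectral DC `x` over its 2x2 panchromatic block
    in proportion to the panchromatic DCs (Figure 1-1)."""
    pan_block = pan_block.astype(float)
    return x * pan_block / pan_block.sum()

# Example: a multispectral DC of 100 spread over pan DCs 20, 30, 25, 25.
hybrid = pradines_block(100.0, np.array([[20, 30], [25, 25]]))
# The four hybrid values sum back to the original DC of 100.
```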

Two other merging techniques which try to maintain the radiometry of the multispectral images are the Price [87] and the DIRS [Warnick 89] methods. These

2.0 MERGING METHODS AND MODIFICATIONS

This study evaluates the third type of merging techniques -- those techniques which attempt to preserve the integrity of the multispectral data. The two primary techniques under test are the DIRS and the Price methods. In addition, several modifications and enhancements have been incorporated to address some of the known shortcomings of these techniques. These modified versions are also included for comparison.

2.1 The DIRS Method

The Digital Imaging and Remote Sensing Laboratory (DIRS) conducted a proof-of-concept study on merging multi-date, multi-sensor, multi-resolution images for enhanced image analysis [Warnick, et al, 1989]. Specifically, they merged a high resolution panchromatic image (SPOT-1 panchromatic channel, 10m GIFOV) with medium resolution multispectral images (Landsat-5, TM bands 1-5, 7, 30m GIFOV).

The DIRS method can be summarized in the following steps:

(1) The SPOT image is geometrically registered to the TM images.

(2) A medium resolution panchromatic image is created from a weighted average of TM bands 1 through 4. This synthetic image approximates the same spectral characteristics as the high resolution SPOT panchromatic channel.

(3) The histogram of the SPOT panchromatic image is then linearly adjusted to the histogram of the synthetic TM panchromatic image. This transformation will, to the first order, account for the differing atmospheric and sensor effects between the

(4) The images are then merged to create a high resolution, multiband hybrid image. The merging algorithm is:

    DC_HybridMultiband(i) = DC_SPOTPan × ( DC_TM(i) / DC_SynTMPan )           (2-1)

where:
    DC_HybridMultiband(i) is the digital count of the i-th band in the hybrid multiband image;
    DC_SPOTPan is the digital count in the adjusted panchromatic SPOT image;
    DC_TM(i) is the DC in the i-th band of the original multispectral image; and
    DC_SynTMPan is the digital count in the synthetic TM panchromatic image.

The DIRS method is applied on a pixel-by-pixel basis, and therefore each of the above terms also has a pixel location. These locations are left off for easier reading.
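A minimal sketch of the per-pixel merge in equation (2-1), assuming the TM band has already been replicated to the 10m grid and the SPOT histogram has been adjusted (all array and function names are illustrative):

```python
import numpy as np

def dirs_merge(tm_band_10m, spot_pan_adj, syn_tm_pan_10m, eps=1e-6):
    """Equation (2-1): scale each adjusted SPOT pan pixel by the ratio of the
    TM band DC to the synthetic TM panchromatic DC at that location."""
    ratio = tm_band_10m.astype(float) / (syn_tm_pan_10m.astype(float) + eps)
    return np.clip(spot_pan_adj.astype(float) * ratio, 0, 255)
```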

2.1.1 Results Using the DIRS Merging Method

In addition to the visible improvement of the images, the overall classification accuracy was approximately 10 percentage points higher for the hybrid data set than for the original TM images.

2.1.2 Concerns on the DIRS Merging Method

Because the DIRS method was developed only as a proof-of-concept study, the authors identified several areas for further study.

(1) Concerns on the synthetic TM panchromatic image:

The synthetic, medium resolution panchromatic image was produced as a weighted average of TM bands 1 through 4. The weighting factor for each band is determined by finding the common area of the TM spectral response curve and the SPOT panchromatic spectral response curve. The common area under the two curves is then divided by the total area under the TM curve considered. The resultant quotient is the weighting factor for that TM band. By using integration to determine areas, the weighting factor can be represented as:

    w_i' = ∫ min( TM_RSR,i(λ), SPOT_RSR(λ) ) dλ  /  ∫ TM_RSR,i(λ) dλ          (2-2)

where:
    i refers to the TM band number, i = 1..4;
    w_i' is the weighting factor for the i-th band;
    TM_RSR,i(λ) is the relative spectral response curve of the i-th band of the TM scene; and
    SPOT_RSR(λ) is the relative spectral response curve of the SPOT panchromatic channel.

Once the weighting factors are determined for the 4 TM bands, the factors are normalized:

    w_i = w_i' / Σ (i=1..4) w_i'                                              (2-3)

The synthetic TM panchromatic image can now be created as:

    TM_SynPan = Σ (i=1..4) w_i × TM_i                                         (2-4)
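A short sketch of equations (2-2) through (2-4), assuming the TM and SPOT relative spectral response curves are sampled on a common wavelength grid (all names are illustrative):

```python
import numpy as np

def band_weights(wavelengths, tm_rsr_bands, spot_rsr):
    """Equations (2-2) and (2-3): overlap area of each TM band's relative
    spectral response with the SPOT pan response, normalized to sum to 1.
    `tm_rsr_bands` is a list of four response curves on `wavelengths`."""
    raw = np.array([np.trapz(np.minimum(rsr, spot_rsr), wavelengths) /
                    np.trapz(rsr, wavelengths) for rsr in tm_rsr_bands])
    return raw / raw.sum()

def synthetic_pan(tm_bands, weights):
    """Equation (2-4): weighted average of TM bands 1-4."""
    return sum(w * b.astype(float) for w, b in zip(weights, tm_bands))
```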

However, it was found that the synthetic TM panchromatic image was not sensitive in the 700 to 750 nm range, while the SPOT panchromatic sensor is still relatively responsive. Since the reflectivity of vegetation is active in this region, the SPOT panchromatic image responds to this vegetation reflectance information while the synthetic TM panchromatic did not.

To correct this discrepancy, the spectral response curve for TM band 4 was "shifted" from a bandpass of (760, 900) nm down to (710, 850) nm. This shift increased the weighting factor for TM band 4, thereby increasing the responsivity in this region and providing a "truer" approximation of the SPOT panchromatic image. However, the authors suggested that another method of producing a synthetic TM panchromatic image should be explored.

(2) Concerns on the weak correlation between the SPOT panchromatic channel and TM bands 4, 5, and 7:

The merging algorithm used in the DIRS method, as previously shown in equation (2-1), is:

    DC_HybridMultiband(i) = DC_SPOTPan × ( DC_TM(i) / DC_SynTMPan )

where i represents the TM bands 1 through 5 and 7. The concern is when the images for TM bands 4, 5, and 7 are merged with the SPOT panchromatic image. These bands are only weakly correlated with the SPOT panchromatic channel and should probably be merged in a different fashion.

(3) Concerns on Radiometry:

The high resolution, multiband hybrid images can be thought of as TM multiband images that have been spatially enhanced. The enhancement is the result of the integration of the SPOT panchromatic image, but the resultant mean radiances should still be the same as the original TM data. As illustrated below, the original TM images have one pixel to describe a 30m by 30m area, while the hybrid image has 9 pixels to describe the same area. For consistent nomenclature, the 30m by 30m area will be referred to as a "superpixel", and a 10m by 10m pixel as a "subpixel".

    (a) 30m TM pixel ("superpixel")      (b) Nine high resolution "subpixels":
                                              1 2 3
                                              4 5 6
                                              7 8 9

Figure 2-1 Superpixel and subpixels

Assuming linear modeling, the following condition must hold to maintain radiometric integrity:

    DC_TM = (1/9) Σ (j=1..9) DC_Hybrid(j)                                     (2-5)

However, no such check was made or enforced in the DIRS method.

2.2 General Modifications to the DIRS Method

This section describes three general modifications to the DIRS method which address some of the concerns raised in the previous section. These modifications are grouped together because they do not alter the merging algorithm. Instead, they affect either the input images to the merging algorithm, or the hybrid output images.

2.2.1 Creating a TM panchromatic image

In the DIRS study, the weighting factors for each TM band were obtained by computing the overlapping areas of the SPOT panchromatic and the TM bands' spectral response curves. However, the synthetic TM panchromatic image had a lack of sensitivity in the 700-750 nm region. Consequently, the TM band 4 spectral response curve was "shifted" to provide a stronger weighting factor and more responsivity in that bandpass.

Another method to obtain the weighting factors has been used by Suits [Suits et al

Suits' approach, new TM weighting factors were computed using a multivariate regression on estimated sensor signals.

Estimating Sensor Signals

Given the reflectance spectrum of a target, some atmospheric parameters, and an atmospheric model (such as LOWTRAN), we can estimate the radiance that reaches the sensor in a bandpass of interest. This radiance parameter is designated L_λ, and is a function of wavelength, λ. With L_λ we can cascade the sensor's spectral response function, β(λ), to get the effective radiance seen by the sensor. This effective radiance, L_s, is computed as:

    L_s = ∫ L_λ β(λ) dλ  /  ∫ β(λ) dλ                                         (2-6)

The output signal (in DCs) can now be estimated using L_s and the known gain and offset of each sensor:

    DC_i = ( L_s,i - offset_i ) / gain_i                                      (2-7)

where:
    DC_i = estimated DC of TM band i;
    L_s,i = effective radiance seen by the i-th band; and
    gain_i, offset_i = the gain and offset of the i-th band.
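A small sketch of equations (2-6) and (2-7), assuming the at-sensor spectral radiance and the band's relative response are sampled on a common wavelength grid (variable names are illustrative):

```python
import numpy as np

def effective_radiance(wavelengths, radiance, response):
    """Equation (2-6): band-effective radiance as a response-weighted mean."""
    return (np.trapz(radiance * response, wavelengths) /
            np.trapz(response, wavelengths))

def estimated_dc(l_s, gain, offset):
    """Equation (2-7): convert effective radiance to an estimated digital count."""
    return (l_s - offset) / gain
```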

These simulated signals are then regressed to determine the weighting factors. This regression can be represented as:

    [ DC_SPOT-1 ]     [ DC_TM1-1  DC_TM2-1  DC_TM3-1  DC_TM4-1 ]   [ w_1 ]     [ e_1 ]
    [ DC_SPOT-2 ]     [ DC_TM1-2  DC_TM2-2  DC_TM3-2  DC_TM4-2 ]   [ w_2 ]     [ e_2 ]
    [ DC_SPOT-3 ]  =  [ DC_TM1-3  DC_TM2-3  DC_TM3-3  DC_TM4-3 ] × [ w_3 ]  +  [ e_3 ]
    [    ...    ]     [    ...       ...       ...       ...   ]   [ w_4 ]     [ ... ]
    [ DC_SPOT-n ]     [ DC_TM1-n  DC_TM2-n  DC_TM3-n  DC_TM4-n ]               [ e_n ]     (2-8)

where:
    DC is the estimated signal of the subscripted sensor;
    w_m, m = 1..4 are the weighting factors for TM1, TM2, TM3, and TM4 respectively;
    n is the number of samples in the regression; and
    e is the error vector which is minimized when solving for w.
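A sketch of solving equation (2-8) with ordinary least squares, assuming `dc_tm` is an n x 4 array of simulated TM signals and `dc_spot` the matching n-vector of simulated SPOT signals (both names are illustrative):

```python
import numpy as np

def regress_weights(dc_tm, dc_spot):
    """Least-squares solution of equation (2-8); returns the four TM band
    weights that best reproduce the simulated SPOT panchromatic signal."""
    weights, *_ = np.linalg.lstsq(dc_tm, dc_spot, rcond=None)
    return weights
```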

To compute the simulated signals from TM1, TM2, TM3, TM4 and SPOT, LOWTRAN 7 (an atmospheric propagation model) was modified. The first modification to LOWTRAN 7 was to allow the program to access a target reflectance spectrum. LOWTRAN 7 in its original form uses only one reflectance value (called the surface albedo or 'SALB') for all wavelengths. This limitation implies that all target reflectance spectra are uniform and flat. The modified version now allows LOWTRAN 7 to access a file containing reflectance information. A reflectance value is then accessed (or computed via interpolation) for each wavelength run by LOWTRAN.

The second modification was available from previous work by Carl Salvaggio. His modification integrated the sensor's spectral response curves with the radiance values

equation (2-7).

Having modified LOWTRAN 7 to produce simulated sensor DCs, 25 target reflectance spectra were chosen. These 25 targets were comprised of 5 major classes -- urban, soil, water, trees, and grass -- with 5 samples each. These spectra are plotted in Figures 2-2 through 2-6 on the following pages.

[Figure 2-2 Reflectance Spectra of Selected Urban Targets: asphalt ave, concrete ave, slate ave, gravel, roof asphalt; reflectance (%) versus wavelength (um), 0.3-1.1 um]

[Figure 2-3 Reflectance Spectra of Selected Soil Targets: clay ave, loam dry ave, loam wet ave, sand ave, soil ave; reflectance versus wavelength (um), 0.3-1.1 um]

[Figure 2-4 Reflectance Spectra of Selected Water Targets: water1, water2, water3, water4, water ave; reflectance versus wavelength (um)]

[Figure 2-5 Reflectance Spectra of Selected Tree Targets: ash, beech, maple, oak, pine; reflectance versus wavelength (um), 0.3-1.1 um]

[Figure 2-6 Reflectance Spectra of Selected Grass Targets: clover, coarse grass, grass ave, orchard grass, swamp grass; reflectance versus wavelength (um)]

The modified LOWTRAN 7 was then run with these 25 reflectance spectra at 3 different atmospheres of varying haze. These 75 samples were then regressed via equation (2-8) and new weighting factors for TM1, TM2, TM3 and TM4 were obtained. These results are described in section 4.1. For further information on the LOWTRAN input

2.2.2 Interpolated Input Images

In the original DIRS method, each TM 30m superpixel was replicated into nine smaller 10m pixels to match the SPOT's pixel resolution (see Figure 2-7).

    (a) 30m pixels:          (b) 10m pixels after replication:
         3   9                     3  3  3  9  9
        15  21                     3  3  3  9  9
                                   3  3  3  9  9
                                  15 15 15 21 21
                                  15 15 15 21 21

Figure 2-7 Pixel Replication
(a) 512 by 512 original TM image with 30m pixels.
(b) After replication, the image is now 1536 by 1536 with 10m pixels. The covered ground area is the same.

These replicated TM images were then used as input to the merging algorithm. In hopes of reducing the subsequent "blocky" appearance in the hybrid images (caused in part by this replication), a 3 by 3 averaging filter was applied to the replicated input images. This simple filtering technique has been used to reduce the blocky appearance when enlarging images [Bernstein 79], and when merging multi-resolution images [Chavez 84, 86]. Thus, given a replicated image as in Figure 2-7, the resultant input image is shown in Figure 2-8 below.

    (a) 3 by 3 averaging kernel:     (b) resultant interpolated input:
         1 1 1                             3  3  5  7  9
         1 1 1                             3  3  5  7  9
         1 1 1                             7  7  9 11 13
                                          11 11 13 15 17
                                          15 15 17 19 21

Figure 2-8 Interpolated Input

By using these averaged TM inputs, the merging algorithm does not change, although the term DC_TM(i) no longer has the same value over the entire superpixel.

For this study, these replicated images that were smoothed by the averaging filter are referred to as the "interpolated" input images. This term is used to distinguish these images from other images that will be "blurred", "smoothed", or "averaged".
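A brief sketch of the replication and 3 by 3 smoothing steps described above (a hypothetical helper; the box filter stands in for the averaging kernel of Figure 2-8):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def interpolated_input(tm_band_30m, factor=3):
    """Replicate each 30m TM pixel into a factor x factor block of 10m pixels,
    then smooth with a 3x3 box filter to reduce the blocky appearance."""
    replicated = np.kron(tm_band_30m.astype(float), np.ones((factor, factor)))
    return uniform_filter(replicated, size=3)
```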

2.2.3 A Technique for Radiometric Post-correction

Although the interpolation of the input data reduces the blocky appearance, the

the original data radiometric integrity is lost. In addition, the DIRS method does not check or enforce whether the average DC within the SPOT panchromatic superpixel equals the corresponding superpixel in the synthetic TM panchromatic image. Again, the precise radiometric value is lost.

Rather than alter the basic merging algorithm of the DIRS method (or any method), radiometric integrity can be restored at the superpixel level by multiplying each hybrid superpixel (a block of nine 10m pixels) by a correction factor. This correction factor is defined as:

    CF(i) = TM_i,30m  /  [ (1/9) Σ (j=1..9) Hybrid(j)_i,10m ]                 (2-9)

where:
    TM_i,30m = DC from one original 30m pixel from TM band i; and
    Hybrid(j)_i,10m = DC from the j-th 10m pixel (out of 9) from the hybrid superpixel that corresponds to TM_i,30m.
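A minimal sketch of this superpixel post-fix, assuming the hybrid band is stored on the 10m grid and each 3x3 block corresponds to one original 30m TM pixel (names illustrative):

```python
import numpy as np

def postfix_correction(hybrid_10m, tm_30m, factor=3, eps=1e-6):
    """Equation (2-9): rescale each 3x3 hybrid block so that its mean DC
    matches the corresponding original 30m TM pixel."""
    hybrid = hybrid_10m.astype(float).copy()
    rows, cols = tm_30m.shape
    for r in range(rows):
        for c in range(cols):
            block = hybrid[r*factor:(r+1)*factor, c*factor:(c+1)*factor]
            cf = tm_30m[r, c] / (block.mean() + eps)   # correction factor
            block *= cf
    return hybrid
```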

This post-fix operation corrects for differences in the SPOT panchromatic and the synthetic TM panchromatic images, as well as the radiometric change to the interpolated TM input images. This correction, however, re-introduces some of the block appearances since the

2.3 The DIRS Enhancement 1 Merging Method

In a simple effort to reduce the "blocky" appearance of the hybrid data, the DIRS report suggested implementing a nearest neighbor-type of correction scheme. In this correction routine, each subpixel was compared to its own superpixel's center value and to its neighbor's center value (see Figure 2-9).

[Figure 2-9 DIRS nearest neighbor correction routine: corner subpixels, side subpixels, and the center subpixel, with the neighboring superpixel centers they are compared against.]

Corner subpixels compared values with the diagonal neighbor, and side subpixels compared to the adjacent neighbor superpixel's center value. If the subpixel's DC was closer to its own center pixel's value, then the merging algorithm remained the same as in equation (2-1):

    DC_HybridMultiband(i) = DC_SPOTPan × ( DC_TM(i) / DC_SynTMPan )

If the subpixel's DC was closer to the neighbor's DC, then the neighbor's ratio was used in the merging algorithm:

    DC_HybridMultiband(i) = DC_SPOTPan × ( DC_NeighborTM(i) / DC_NeighborSynTMPan )        (2-10)

The DIRS Enhancement 1 merging method follows this same correction scheme, except that the corner pixels also compare their values against the adjacent superpixels' center values (not just the diagonal superpixel). The comparison scheme for one corner subpixel looks as below:

[Figure 2-10 DIRS Enhancement 1 Correction Routine: a corner subpixel is compared against the center values of its own superpixel, the diagonal neighbor, and both adjacent neighbors.]

The superpixel whose center pixel value was closest to the subpixel value contributed the ratio in the merging algorithm.

subpixels is because the correlation between adjacent pixels is much higher than between diagonal pixels [Ahern 86]. This better correlation means there is an increased chance that a corner subpixel will find a better, more accurate match for substituting ratios. Correcting the corner subpixels obviously reduces the overall block appearance.
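A simplified sketch of the ratio selection idea behind Enhancement 1, for a single subpixel; the thesis restricts which neighbors a corner or side subpixel may consult, while this illustration just picks whichever allowed candidate center is closest:

```python
def choose_ratio(subpixel_dc, own_center_dc, own_ratio,
                 neighbor_center_dcs, neighbor_ratios):
    """Return the TM/synthetic-pan ratio of the superpixel whose center DC
    is closest to this subpixel's DC (own superpixel or an allowed neighbor)."""
    best_ratio, best_diff = own_ratio, abs(subpixel_dc - own_center_dc)
    for center_dc, ratio in zip(neighbor_center_dcs, neighbor_ratios):
        diff = abs(subpixel_dc - center_dc)
        if diff < best_diff:
            best_ratio, best_diff = ratio, diff
    return best_ratio
```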

Despite these changes, the DIRS Enhancement 1 method still has several limitations. These include:

- The center subpixel of a superpixel is never changed and will always use its own ratio.

- Each and every non-center subpixel is checked, regardless of whether it resides in a non-varying superpixel, or whether the neighboring superpixels have a high variance.

- TM bands 4, 5, and 7 are still handled as though they are correlated to the SPOT panchromatic image.

- Radiometry is still not precisely preserved.

2.4 DIRS Enhancement 2 Merging Method

DIRS Enhancement 2 is an attempt to correct some of the deficiencies in the previous DIRS methods. This method is described below and can be followed in the flow chart diagram in Figure 2-12 at the end of this section.

The basic premise of this method is that the high resolution SPOT panchromatic image provides the spatial information on which segmentation and computational decisions

images. The SPOT panchromatic image is handled in 3 by 3 blocks which correspond to the replicated TM input image superpixels.

The steps for each 3 by 3 panchromatic block are outlined as follows:

1. Check to see if the SPOT block is "mixed" or "pure". A block is considered mixed if the variance among its nine pixels is higher than an established threshold; a "pure" block would be one with a variance lower than this threshold (a short sketch of this test follows the step list).

a) If the superpixel block is pure, then the DIRS original merging algorithm (section 2.1) with a post-fix correction routine (section 2.2.3) is run to create the corresponding hybrid block. This merging operation will be referred to as the "pure merge" in this algorithm.

b) If the block has a high variance, then the algorithm continues processing the mixed block.

2. Check if all the neighboring blocks are also mixed. If all are mixed, then the algorithm defaults to do the pure merge (the original DIRS method) over the superpixel block of interest. If some of the neighboring superpixel blocks are "pure", then the algorithm continues.

3. Connect the panchromatic superpixel. Each subpixel within the pan superpixel is compared to its adjacent subpixels. If the difference between the two subpixels is lower than a set threshold, then they are considered "connected". If two subpixels are connected, then each subpixel is allowed to compare to the other's adjacent subpixels for further connections. Thus a network of the superpixel can be constructed. An exploded view of

[Figure 2-11 Connections between the center subpixel]

4. Find the subpixels within the mixed SPOT block that can be matched to a pure neighboring superpixel. A match can occur when the subpixel DC and the neighboring superpixel's mean value are within a set threshold. Rather than limit the subpixels to compare only with those neighboring superpixels they can touch (as in DIRS Enhancement 1), this method also allows the subpixels to compare to any of the eight neighboring superpixels provided that: (a) the neighboring superpixel is pure; and (b) the subpixel can be "connected" to the superpixel. If the subpixel has more than one superpixel to which it could be matched, then the superpixel whose mean value is the closest to the subpixel value is chosen. These matched subpixels are then referenced as "pure" subpixels.

5. Create the corresponding TM hybrid subpixels by merging these pure subpixels using the matching superpixel's ratio and correction factor. This merge is similar to the

6. Derive a new TM value for the remaining (mixed) subpixels by removing the hybrid values computed in the step before. This can be described as:

    NewTM_i = [ 9 × TM_i  -  Σ (j=1..n) hybrid_i(j) ] / (9 - n)               (2-11)

where:
    NewTM_i = new DC for the remaining (mixed) subpixels within the superpixel of interest in TM band i;
    TM_i = original DC of the superpixel of interest in TM band i;
    n = number of pure subpixels;
    hybrid_i(j) = those hybrid subpixels computed from the pure subpixels.

If all the subpixels were found to be pure (n = 9), then the algorithm skips to step 10.

7. Check to see if the new TM values are valid. The new TM values cannot be less than 0 or greater than 255.

a) If the values are invalid, then the least pure hybrid subpixel is removed and reclassified as a mixed subpixel. The least pure hybrid subpixel is the subpixel which has the largest difference between itself and its matching superpixel mean. The least pure hybrid subpixel will continue to be removed until there are no pure hybrid subpixels left, or new valid values for TM are computed. If there are no pure hybrid subpixels left, then the algorithm defaults to do a pure merge on the superpixel.

TM DCs. Using the new valid TM DCs and the weighting factors developed in section 2.1.1, the new synthetic TM panchromatic can be described as:

    NewTM_SynPan = Σ (i=1..4) w_i × NewTM_i                                   (2-12)

9. The remaining subpixels are now merged using the new TM DC and the new synthetic TM pan DC, using the same basic DIRS algorithm:

    DC_Hybrid(i) = DC_SPOTPan × ( NewTM_i / NewTM_SynPan )                    (2-13)

10. Radiometrically correct these mixed hybrid subpixels generated in step 9 using the post-fix operation.

11. Check to see if this final hybrid block (pure and mixed subpixels) is "reasonable". Some checks for being reasonable are: (a) if the hybrid band is correlated with the SPOT panchromatic image, the order of the 9 hybrid subpixels should be the same as the order of the original SPOT subpixels; (b) the standard deviation among the subpixels of the hybrid superpixel should be within a threshold factor of the standard deviation of the original SPOT superpixel.

12. If the hybrid superpixel is found to be unreasonable, then the least pure subpixel is removed and the algorithm returns to step 6. If the hybrid superpixel is found to be reasonable, the block is accepted and processing moves on to the next superpixel.
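As the sketch promised in step 1, a hypothetical variance check over 3 by 3 SPOT blocks might look like this (threshold and names are illustrative, not values from the thesis):

```python
import numpy as np

def classify_blocks(spot_pan_10m, threshold, factor=3):
    """Label each 3x3 SPOT panchromatic block as 'pure' (low variance) or
    'mixed' (variance above the threshold), per step 1 of Enhancement 2."""
    rows = spot_pan_10m.shape[0] // factor
    cols = spot_pan_10m.shape[1] // factor
    labels = np.empty((rows, cols), dtype=object)
    for r in range(rows):
        for c in range(cols):
            block = spot_pan_10m[r*factor:(r+1)*factor, c*factor:(c+1)*factor]
            labels[r, c] = "mixed" if block.var() > threshold else "pure"
    return labels
```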

Note that with this new method, the following improvements to DIRS Enhancement 1 were made:

(1) The center subpixel is not constrained to only use its superpixel ratio and correction factor. If it can match to a connecting superpixel, then the center subpixel can use the matching superpixel's parameters. In addition, the other subpixels can also match with any of the eight neighboring superpixels provided that a connection exists between the two and the superpixel is pure.

(2) Rather than running comparisons for all subpixels, only those SPOT pan superpixel blocks that are identified as "mixed" (have a large variance) will be processed. In addition, only those neighboring superpixels that are pure will be able to contribute their ratio and correction factor. These changes should help in speed and in avoiding improper substitutions for superpixel parameters.

(3) Radiometry is preserved. Those subpixels that are identified as "pure" (are matched to a pure neighboring superpixel) produce the pure hybrid subpixels. The remaining subpixels are then computed to maintain the radiometric value of the original TM value. In simplistic terms, it can be described as:

    known_answer = pure_values_subpix + mixed_values_subpix

Given a known answer (original TM value) and the pure subpixel component (determined), the mixed component is simply what's left.

However, DIRS Enhancement 2 still does not handle the weakly correlated TM

[Figure 2-12 Flow Diagram of DIRS Enhancement 2 Merging Method: get the high resolution panchromatic superpixel; connect the subpixels; find the pure subpixels (those which are close to a pure neighbor's mean value); merge the pure subpixels using the pure neighbor's ratio and correction factor; remove the least pure hybrid subpixel; derive new TM DCs for the remaining mixed subpixels.]

2.4.1 A Modification to the DIRS Enhancement 2 Method

This small modification is a quick fix to handle the TM bands that are weakly correlated to the panchromatic image. For this modification, the only change to the DIRS Enhancement 2 method is to the "pure merge" operation for the weakly correlated bands. Instead of pure merging these bands using the DIRS method:

    DC_HybridMultiband(i) = DC_SPOTPan × ( DC_TM(i) / DC_SynTMPan )

the hybrid value now simply takes the value of the original TM superpixel:

    DC_HybridUncorrelatedBand(i) = DC_TM(i)

Thus if a subpixel is matched to a neighboring pure superpixel, the hybrid subpixel for these weakly correlated bands would contain the TM value for that matching superpixel. All other (correlated) bands are manipulated in the identical fashion as before.

The methods described so far in section 2 are variations of the basic DIRS merging algorithm. The remainder of this section describes Price's merging method and some of its

2.5 Price's Merging Technique

Price's technique for merging multispectral and panchromatic images is split into two cases. The first case is for multispectral bands that are correlated with the panchromatic image. The second case is for weakly correlated bands.

2.5.1 Case 1 -- Correlated Input Images

Because the images are correlated, the relationship can be described as linear. On an individual band basis, the linear coefficients can be determined using a simple linear regression. This is illustrated in Figure 2-13, where the DCs of an averaged SPOT panchromatic image are plotted against the DCs of a Thematic Mapper (TM) band.

    TM_i = ( a_i × Pan_ave ) + b_i

[Figure 2-13 Linear Regression of TM bands and the Panchromatic Image: TM_i DC plotted against Pan_ave DC, with the fitted line TM_i = a_i × Pan_ave + b_i.]

where:
    TM_i = DC of a superpixel in the TM i-th band;
    Pan_ave = the average of the DCs in the corresponding superpixel of the SPOT panchromatic image;
    a_i, b_i = the linear regression coefficients (slope and intercept) for the i-th band.

After solving for a_i and b_i, we can compute a high resolution estimate of the i-th band using the original high resolution panchromatic channel:

    TM_est,i = a_i × Pan_10m + b_i                                            (2-14)

The merging algorithm then uses the high resolution estimate to create the hybrid image:

    Hybrid_i = TM_i × ( TM_est,i / TM_est,i,ave )                             (2-15)

where:
    Hybrid_i = DC of the hybrid i-th band;
    TM_i = DC from the original TM i-th band;
    TM_est,i = DC from the high resolution estimate of the TM i-th band;
    TM_est,i,ave = average of the DCs in the high resolution estimate over the superpixel corresponding to the TM_i pixel.
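A sketch of Price's correlated-band case under the assumptions stated above: the panchromatic image is block-averaged to the 30m grid, one linear fit per band gives (a_i, b_i), and equations (2-14)/(2-15) are applied per superpixel (all array names are illustrative):

```python
import numpy as np

def price_correlated_merge(tm_30m, pan_10m, factor=3, eps=1e-6):
    """Price's Case 1: regress TM DCs against block-averaged pan DCs, build a
    10m estimate of the band, then scale it so each superpixel keeps the
    original TM DC on average (equations 2-14 and 2-15)."""
    # Block-average the 10m pan image down to the 30m TM grid.
    r, c = tm_30m.shape
    pan_ave = pan_10m.reshape(r, factor, c, factor).mean(axis=(1, 3))

    # Linear regression TM_i = a * Pan_ave + b over all superpixels.
    a, b = np.polyfit(pan_ave.ravel(), tm_30m.ravel().astype(float), 1)

    tm_est_10m = a * pan_10m.astype(float) + b                 # eq. (2-14)
    tm_est_ave = np.kron(a * pan_ave + b, np.ones((factor, factor)))
    tm_rep = np.kron(tm_30m.astype(float), np.ones((factor, factor)))
    return tm_rep * tm_est_10m / (tm_est_ave + eps)            # eq. (2-15)
```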

For example, the merging of TM multispectral images (30m GIFOV) with a SPOT

[Figure 2-14 Price's Merging Method -- Case 1: the original 10m panchromatic pixels P1-P9 within a 30m superpixel, the original TM superpixel, the high resolution estimate of TM (H1-H9), and the resulting hybrid block, where Hybrid(J)_i = TM_i × H(J)_i / TM_est,i,ave with TM_est,i,ave = (1/9) Σ (J=1..9) H(J)_i, J = 1..9.]

Note the similarity to Pradines' method, except that Price is using an estimate of a multispectral band instead of the panchromatic data directly for the merging operation. In

multispectral band, while Pradines has the sum of the hybrid DCs equal the original multispectral value. Finally, Price suggests a method to handle multispectral bands that are weakly correlated with the panchromatic channel.

2.5.2 Case 2 -- Weakly Correlated Bands

Because some bands are not linearly related (i.e., visible and near infrared channels), a more general relationship was used by Price. After registering the images, Price again averaged the panchromatic channel to the same spatial resolution as the multispectral images. Price then computed the expected (mean) value for the weakly correlated band for each given value in the 30m panchromatic image.

For example, Price would find all the pixels in the 30m panchromatic image that had a DC of X (0 to 255). He would then find the average of the DCs of the corresponding pixels in the weakly correlated band. Thus a simple look-up-table (LUT) can be generated. This process is depicted in Figure 2-15 on the following page. Using this LUT, a high resolution estimate of the multispectral band can be computed from the original high resolution panchromatic image. This high resolution estimate is now used as in the correlated case, and is merged with the original multispectral data.
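A small sketch of the LUT construction and its application for a weakly correlated band, under the same assumptions as before (30m-averaged pan; names illustrative):

```python
import numpy as np

def build_lut(pan_ave_30m, tm_band_30m):
    """For each pan DC (0-255), store the mean DC of the weakly correlated
    TM band over all 30m pixels having that pan DC (Figure 2-15)."""
    lut = np.zeros(256)
    for dc in range(256):
        mask = (pan_ave_30m == dc)
        if mask.any():
            lut[dc] = tm_band_30m[mask].mean()
    # Pan DCs that never occur are simply left at 0 in this sketch.
    return lut

def lut_estimate(pan_10m, lut):
    """Apply the LUT to the full-resolution pan image to get a 10m estimate
    of the weakly correlated band."""
    return lut[pan_10m.astype(np.uint8)]
```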

2.5.3 Price's Results

Price tested his procedure using SPOT simulation data with a 10m panchromatic and three 20m multispectral channels. He then averaged the channels to obtain 20m panchromatic and 40m multispectral channels. Applying the above procedures to the averaged data, Price produced hybrid 20m values which were then compared to the original

[Figure 2-15 Creation of a Look-up Table (LUT). Example: an averaged 30m pan image (0 0 0 / 0 0 2 / 1 1 2) and the weakly correlated 30m TM band (8 8 8 / 7 9 16 / 20 24 14); the TM DCs corresponding to each pan DC are collected (0 -> 8, 8, 8, 7, 9; 1 -> 20, 24; 2 -> 16, 14) and their means form the LUT entries: 0 -> 8, 1 -> 22, 2 -> 15.]

Multispectral channels 1 and 2 were highly correlated with the panchromatic channel, and produced residual errors of around two digital counts. This is quite accurate since the standard deviation of the original data was around 20 DCs.

For the third channel, which is not correlated, the procedure was only "moderately successful". The predicted values accounted for only about 75% of the variance in the original data, as compared to 99% in the correlated case.

Price did not test the effect his technique had on classification accuracy.

2.6 Price Modification -- A Method of Handling Weakly Correlated Input Images

One of the biggest concerns in all of the merging techniques to date has been the merging of multispectral images that are weakly correlated with the reference panchromatic channel. In our case, TM bands 4, 5, and 7 are not strongly correlated with the panchromatic channel.

This modification to Price's technique implements a method by Tom et al [85]. In his paper, Tom improved the spatial resolution of the TM band 6 from 120m to 30m using the other TM bands. His technique is based on an adaptive multiband least squares method which computes an optimal image estimate. His approach relies on the assumption that registered TM data are correlated across the bands in small local areas. By using the local correlation property, Tom used visible and IR bands to predict the thermal IR image data (band 6) in 30m resolution. The thermal image estimate was formed by a weighted linear

image.

By modifying Tom's technique, we could obtain high resolution estimates of TM bands 4, 5, and 7. To compute the high resolution TM band 7 estimate, the adaptive multiband enhancement procedure would take these general steps:

(1) Average filter the panchromatic channel to obtain 30m resolution.

(2) Using the averaged 30m panchromatic image and the original TM 1, 2, and 3 images as input, we can implement the adaptive least squares method to generate the linear prediction coefficients for band 7 at each pixel location. For a given location of the sliding 3 by 3 window, we can represent this as:

    [ TM7_1 ]     [ 1  TM1_1  TM2_1  TM3_1  Pan_1 ]   [ b_0 ]     [ e_1 ]
    [ TM7_2 ]     [ 1  TM1_2  TM2_2  TM3_2  Pan_2 ]   [ b_1 ]     [ e_2 ]
    [ TM7_3 ]  =  [ 1  TM1_3  TM2_3  TM3_3  Pan_3 ] × [ b_2 ]  +  [ e_3 ]
    [  ...  ]     [ ...  ...    ...    ...    ... ]   [ b_3 ]     [ ... ]
    [ TM7_9 ]     [ 1  TM1_9  TM2_9  TM3_9  Pan_9 ]   [ b_4 ]     [ e_9 ]     (2-16)

The least squares solution is computed by solving for the set of coefficients (the vector b) that minimizes the error vector (e). Note that each pixel location will have its own set of coefficients. One way to handle this data is to have matching coefficient "images" for each input/reference image.

(3) Obtain 10m resolution estimates of TM bands 1, 2, and 3 by previous methods (i.e., the Price or DIRS methods).

(4) Generate an optimal estimate of TM band 7 using the prediction coefficients and the high resolution input images (panchromatic, Hybrid bands 1, 2, 3). For each 10m

    TM7_est = (b_1 × Hyb_1) + (b_2 × Hyb_2) + (b_3 × Hyb_3) + (b_4 × Pan_10m) + b_0        (2-17)

where Hyb_1, Hyb_2 and Hyb_3 are the 10m hybrid pixels derived from using Price's technique for correlated bands (a short sketch of this two-step estimate appears after step (5) below).

(5) Merge this high resolution estimate of TM band 7 with the original band 7 using Price's method with correlated images (section 2.5.1) to obtain a 10m Hyb_7 image.
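A condensed sketch of the adaptive multiband least-squares estimate for one 3x3 window position, following equations (2-16) and (2-17); the array names are illustrative and the window bookkeeping is simplified:

```python
import numpy as np

def window_coefficients(tm7_win, tm1_win, tm2_win, tm3_win, pan30_win):
    """Equation (2-16): least-squares prediction coefficients for TM band 7
    from TM1-3 and the 30m-averaged pan, over one 3x3 window (9 samples)."""
    design = np.column_stack([
        np.ones(9),
        tm1_win.ravel(), tm2_win.ravel(), tm3_win.ravel(), pan30_win.ravel(),
    ])
    b, *_ = np.linalg.lstsq(design, tm7_win.ravel().astype(float), rcond=None)
    return b   # [b0, b1, b2, b3, b4]

def tm7_estimate(b, hyb1, hyb2, hyb3, pan10):
    """Equation (2-17): apply the window's coefficients to the 10m hybrid
    bands and the original 10m pan to get the high resolution TM7 estimate."""
    b0, b1, b2, b3, b4 = b
    return b0 + b1 * hyb1 + b2 * hyb2 + b3 * hyb3 + b4 * pan10
```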

After obtaining Hyb_7, we can use this band as another input reference to compute Hyb_5. As before, a vector b is determined, this time with one more dimension for the additional input reference image.

    [ TM5_1 ]     [ 1  TM1_1  TM2_1  TM3_1  TM7_1  Pan_1 ]   [ b_0 ]     [ e_1 ]
    [ TM5_2 ]     [ 1  TM1_2  TM2_2  TM3_2  TM7_2  Pan_2 ]   [ b_1 ]     [ e_2 ]
    [ TM5_3 ]  =  [ 1  TM1_3  TM2_3  TM3_3  TM7_3  Pan_3 ] × [ b_2 ]  +  [ e_3 ]
    [  ...  ]     [ ...  ...    ...    ...    ...    ... ]   [ b_3 ]     [ ... ]
    [ TM5_9 ]     [ 1  TM1_9  TM2_9  TM3_9  TM7_9  Pan_9 ]   [ b_4 ]     [ e_9 ]
                                                             [ b_5 ]                      (2-18)

With this vector b, the high resolution estimate for TM5 can be computed similar to equation (2-17):

    TM5_est = (b_1 × Hyb_1) + (b_2 × Hyb_2) + (b_3 × Hyb_3) + (b_4 × Hyb_7) + (b_5 × Pan_10m) + b_0        (2-19)

The hybrid image for TM5 is then obtained by merging this high resolution estimate of TM5 with the original TM5 image using Price's method with correlated images. Once the hybrid for TM5 (Hyb_5) is found, this band as well as Hyb_7 are used to compute the high

The order of computing band 7, 5, and then 4 is chosen because this is the order of decreasing band correlation with the panchromatic image. Tom et al [85] additionally showed that increasing the number of input reference bands decreased the high resolution estimate error. Therefore, band 4 -- the weakest correlated band -- is computed last to use the 2 additional reference bands.

Section 2 described the two primary methods of merging that are investigated in this study -- the DIRS and the Price methods. In addition, several modifications and enhancements have also been introduced. The following section describes how these

3.0 EXPERIMENTAL APPROACH

In comparing these merging methods, several variations of the input images were used. These variations are described in section 3.1. The two primary tests for evaluation -- classification accuracy and radiometric error -- are described in sections 3.2 and 3.3 respectively. Finally, the procedure for this study is presented in section 3.4.

3.1 Selection and Preparation of Test Imagery

The SPOT and TM images selected for this study are the identical images used in the DIRS proof-of-concept study [Warnick 89]. The images were selected from a scene of greater Rochester, NY,


References

Related documents

SPONTANEOUS SPEECH COLLECTION FOR THE CSR CORPUS S P O N T A N E O U S S P E E C H C O L L E C T I O N F O R T H E C S R C O R P U S J a r e d B e r n s t e i n a n d D e n i s e D a n i

RESEARCH Open Access Sero survey of rubella IgM antibodies among children in Jos, Nigeria Surajudeen A Junaid*, King J Akpan and Atanda O Olabode Abstract Sero survey of rubella

Focusing on the 8 most COVID-19 clinically relevant blood measurements, we performed PRS-PheWAS analysis across 44 disease antigen measurements (n = 6,643 unrelated individuals

Harrington [7] for example, conducts a study with short injection holes (both an experimental and CFD study), showing how short injection lengths affect correlations (e.g. Sellers

28. The general role of courts in the development and reform of the tort system is not without controversy. Decisions of state supreme courts interpreting state constitutions have

The above case study showed that students applied the known strategies to solve the type of a task familiar to them according to things which they thought that the teacher

In accordance with the (ASME Section VIII div.1) rules for construction of pressure vessel of the relevant provisions, implementation of Radiography Testing of pressure vessels

The minimum specific assumptions of the reciprocal chiasmata hypothesis for conjunction of the sex chromosomes in male Drosophila appear to be the fol- lowing+ ( I )