
11-9-2012

## Analytical Modeling, Performance Analysis, and Optimization of Polarimetric Imaging System

Lingfei Meng

Follow this and additional works at: http://scholarworks.rit.edu/theses

This Dissertation is brought to you for free and open access by the Thesis/Dissertation Collections at RIT Scholar Works. It has been accepted for inclusion in Theses by an authorized administrator of RIT Scholar Works.


## Analytical Modeling, Performance Analysis, and Optimization of Polarimetric Imaging System

by

Lingfei Meng

B.E. Tianjin University, 2007

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Chester F. Carlson Center for Imaging Science

College of Science

Rochester Institute of Technology

November 9, 2012

Signature of the Author

Accepted by

### ROCHESTER, NEW YORK

CERTIFICATE OF APPROVAL

Ph.D. DEGREE DISSERTATION

The Ph.D. Degree Dissertation of Lingfei Meng has been examined and approved by the dissertation committee as satisfactory for the dissertation required for the Ph.D. degree in Imaging Science.

Dr. John P. Kerekes, Dissertation Advisor

Dr. Zoran Ninkov

Dr. Alan D. Raisanen

Dr. Karl D. Hirschman

Date

## Analytical Modeling, Performance Analysis, and Optimization of Polarimetric Imaging System

by

Lingfei Meng

Submitted to the Chester F. Carlson Center for Imaging Science in partial fulfillment of the requirements for the Doctor of Philosophy Degree at the Rochester Institute of Technology

### Abstract

Polarized light can provide additional information about a scene that cannot be obtained directly from intensity or spectral images. Rather than treating the optical field as scalar, polarization images seek to capture the vector nature of the optical field from the scene. Polarimetry has thus been found to be useful in several applications, including material classification and target detection. Recently, optical polarization has been identified as an emerging technique and has shown promising applications in passive remote sensing. Compared with the traditional spectral content of the scene, polarimetric signatures are much more dependent on the scene geometry and the polarimetric bidirectional reflectance distribution function (pBRDF) of the objects. Passive polarimetric scene simulation has been shown to be helpful in better understanding such phenomenology. However, the combined effects of the scene characteristics, the sensor noise and optical imperfections, and the different processing algorithm implementations on the overall system performance have not been systematically studied. To better understand the effects of various system attributes and to help optimize the design and use of polarimetric imaging systems, an analytical model has been developed to predict the system performance.

A detailed introduction of the analytical model is first presented. The model propagates the first- and second-order statistics of radiance from a scene model through the sensor and processing models to the final measurements. It has been shown that the analytical model is able to predict the general polarization behavior and data trends with different scene geometries.

Based on the analytical model, we then define several system performance metrics to evaluate the polarimetric signatures of different objects as well as target detection performance. Parameter tradeoff studies have been conducted for analysis of potential system performance.

Finally, based on the analytical model and the system performance metrics, we investigate optimal filter configurations to sense polarization. We develop an adaptive polarimetric target detector to determine the optimum analyzer orientations for a multichannel polarization-sensitive optical system. Compared with several conventional operation methods, we find that better target detection performance is achieved with our algorithm.

### Acknowledgments

My deepest gratitude goes first and foremost to my advisor Dr. John Kerekes, for his constant encouragement and guidance. My dissertation would not have been possible without his consistent support.

Second, I would like to express my heartfelt gratitude to Dr. Zoran Ninkov, Dr. Alan Raisanen and Dr. Karl Hirschman for their willingness to serve on my doctoral committee and for their valuable advice.

In addition to the committee members, there are several individuals I would like to acknowledge. First and foremost, I would like to thank Dr. Michael Gartley who was always willing to share his knowledge and to provide his instruments to me. I would also like to thank Dr. Michael Presnar, Dr. Brian Flusche and Weihua Sun for their help on data collection. I also want to thank Cindy Schultz and Sue Chan for their patience and for helping me through many hurdles. Finally, I would like to thank my manager Dr. Kathrin Berkner in Ricoh for her continuous support.

Last but certainly not least, I want to thank my family for their unconditional moral support. Thank you Mom, Dad and Ruoyin.

## Contents

**1 Introduction and Objectives** **1**

1.1 Background and Objective . . . 1

1.2 Related Work . . . 3

1.3 Thesis Organization. . . 6

**2 Overview of Polarization** **8**
2.1 Types of Polarization . . . 8

2.2 Representation of Polarized Light . . . 9

2.3 Mueller Matrix . . . 11

2.4 Measurement of the Stokes Vector . . . 11

2.5 Fresnel Equations . . . 12

**3 Analytical Model of Polarimetric Imaging System** **15**
3.1 Scene Model . . . 17

3.1.1 Optical Properties of Object Surface . . . 17

3.1.2 Simulations of pBRDF Model Determined Polarization . . . . 25

3.1.3 Solar and Atmosphere Modeling . . . 27

3.1.4 Simulations of Atmospheric Polarization . . . 32

3.1.5 Statistics of Radiance at Sensor Aperture . . . 34

3.1.6 Simulation of Scene Geometry Variation . . . 40

3.2 Sensor Model . . . 42

3.2.1 Formulation of Sensor Model. . . 42

3.2.2 Simulation of Sensor Signal Output . . . 46

3.2.3 Sensor Noise Characterization . . . 46

3.2.3.1 Camera Model . . . 46

3.2.3.2 Sensor Noise Estimation . . . 47

3.2.3.3 Example of Sensor Noise Characterization . . . 48

3.3 Processing Model . . . 51

3.3.1 Formulation of Processing Model . . . 51

3.3.2 Monte Carlo Simulations . . . 55

3.4 Model Validation . . . 55

3.5 Summary . . . 70

**4 System Performance Evaluation** **71**
4.1 System Performance Metrics . . . 71

4.1.1 Strength of Polarimetric Signature . . . 71

4.1.2 Target Detection Performance . . . 72

4.2 System Performance Evaluation . . . 73

4.2.1 Polarization Contributions of Radiometric Sources . . . 73

4.2.2 Effect of Sensor Platform Altitude. . . 75

4.2.3 Effect of Scene Geometry Variation . . . 78

4.2.4 Effect of Object Surface Roughness . . . 83

4.2.5 Effect of Scene Geometry. . . 84

4.2.6 Effect of Sensor Noise. . . 90

4.2.7 Effect of Sensor Integration Time . . . 90

4.3 Summary . . . 94

**5 Optimization of Polarimetric Imaging System** **95**
5.1 Signal-to-clutter Ratio . . . 95

5.2 Stokes Vector Statistical Model . . . 96

5.3 Optimal Multi-channel Combination . . . 98

5.4 Adaptive Polarimetric Target Detector . . . 99

5.5 Validation of APTD Algorithm with Synthetic Images. . . 103

5.7 Summary . . . 110

**6 Conclusions and Future Work** **112**

**Bibliography** **113**

**A Computation of Scene Geometry Variation** **125**

**B Prediction of Covariance Matrix for Scene Non-uniformity** **129**

## List of Figures

1.1 Parking lot polarization image. . . 2

1.2 Framework of FASSP model.. . . 4

2.1 Fresnel reflectance. . . 14

3.1 Framework of analytical model of polarimetric imaging system.. . . . 16

3.2 Light reflection geometry for BRDF. . . 17

3.3 Optical scatter from object surface. . . 19

3.4 Specular and diffuse components. . . 20

3.5 Scattering geometry of a single microfacet. . . 21

3.6 pBRDF model determined DoLP at different surface roughness. . . . 26

3.7 pBRDF model determined DoLP of different materials. . . 28

3.8 Solar and atmospheric effects on radiance reaching sensor aperture. . 29

3.9 Downwelled radiance integration over hemisphere . . . 31

3.10 MODTRAN-P predicted Stokes vector of downwelled radiance. . . 33

3.11 Solar zenith angle settings for simulations in Fig. 3.12. . . 34

3.12 MODTRAN-P predicted DoLP of downwelled radiance. . . 35

3.13 Sources of scene variation. . . 37

3.14 Radiance variation due to sensor viewing geometry. . . 38

3.15 Computation of scene geometry variation. . . 39

3.16 Scene geometry variation simulation. . . 41

3.17 Framework of sensor model. . . 43

3.18 Sensor signal output simulation. . . 47

3.19 Lab setup for sensor noise calibration.. . . 49

3.20 Sensor noise parameter fitting. . . 52

3.21 Sensor noise characteristics of a CCD camera. . . 53

3.22 Monte Carlo simulation for DoLP statistics i. . . 56

3.23 Monte Carlo simulation for DoLP statistics ii. . . 57

3.24 Outdoor polarization image collection i.. . . 58

3.25 Outdoor polarization image collection ii. . . 59

3.26 Refractive index of objects in the scene.. . . 60

3.27 DHR of objects in the scene. . . 61

3.28 Sensor response. . . 61

3.29 Analytical model predicted intensity statistics i. . . 64

3.30 Analytical model predicted intensity statistics ii. . . 65

3.31 Model validation with paints i.. . . 66

3.32 Model validation with paints ii. . . 67

3.33 Model validation with asphalt i. . . 68

3.34 Model validation with asphalt ii.. . . 69

4.1 Polarization contribution of radiometric sources i . . . 76

4.2 Polarization contribution of radiometric sources ii . . . 77

4.3 DoLP SNR at different platform altitude. . . 79

4.4 ROC curve comparison at different platform altitude. . . 80

4.5 Variation of reflected radiance.. . . 81

4.6 Variation of upwelled radiance. . . 82

4.7 ROC curve with scene geometry variation effect. . . 83

4.8 Effect of surface roughness on DoLP SNR. . . 84

4.9 ROC curve comparison with different levels of surface roughness.. . . 85

4.10 DoLP SNR of black paint at different scene geometries. . . 86

4.11 DoLP SNR of glass at different scene geometries. . . 87

4.12 ROC curve comparison at different scene geometry settings. . . 88

4.14 Effect of sensor noise factor on DoLP SNR of black paint.. . . 91

4.15 Effect of sensor noise factor on DoLP SNR of glass. . . 92

4.16 ROC curve comparison with different sensor noise factors. . . 93

4.17 Effect of sensor integration time on DoLP SNR. . . 93

4.18 ROC curve comparison with different sensor integration time. . . 94

5.1 SCR values at different sensor integration time. . . 96

5.2 Distribution of SCR for a three-channel system. . . 100

5.3 Flowchart of proposed APTD algorithm. . . 102

5.4 Synthetic Stokes parameter images. . . 104

5.5 ROC curve of target detection on synthetic intensity images. . . 107

5.6 ROC curve of target detection on synthetic Stokes parameter images . . . 108

5.7 Panchromatic image of testing scenario . . . 109

5.8 ROC curve of target detection on outdoor Stokes images. . . 111

A.1 Scene geometry at central point. . . 126

## List of Tables

3.1 Parameter setting for simulations shown in Fig. 3.6 . . . 25

3.2 Parameter setting for simulations shown in Fig. 3.7. . . 27

3.3 Parameter setting of simulation in Fig. 3.10. . . 32

3.4 Sensor noise parameter estimation. . . 51

3.5 Stokes vector reconstruction based on three common approaches.. . . 54

3.6 Input parameters in analytical model. . . 62

3.7 RMSE of the model predicted mean and standard deviation. . . 66

4.1 Input parameters for baseline scenario. . . 74

4.2 DoLP variation at different field of view and *φ* values . . . 81

5.1 Parameter setting for simulations shown in Fig. 5.2 . . . 101

5.2 Comparison of SCRs with different operation methods. . . 110

## Chapter 1

Introduction and Objectives

1.1 Background and Objective

The primary properties of light are intensity, frequency or wavelength, coherence, and polarization. The intensity information normally captured by a conventional panchromatic sensor, and the spectral images captured by multispectral or hyperspectral systems at narrow spectral bands, have shown tremendous applications in the field of remote sensing. Polarization of light provides valuable information about scenes that cannot be obtained directly from intensity or spectral images. Rather than treating the optical field as scalar, polarization images seek to capture the vector nature of the optical field from the scene [1,2]. Polarized light reflected from scenes has been found to be useful in several applications, such as material classification [3-5], computer vision [6,7], and target detection [8-12]. Recently, a study on polarimetric imaging combined with spectral imaging has demonstrated the capability of remote sensing platforms to detect [13] and track vehicles [14,15]. The benefit of using polarization to detect man-made objects is illustrated in Fig. 1.1, where a degree of linear polarization (DoLP) image is compared to the intensity image. In the DoLP image, the man-made objects tend to show higher polarization, and may be easily distinguished from the background.

Polarimetric imaging is, however, a relatively new and not fully explored field for remote sensing, as pointed out in [2]. The relatively unstable polarimetric signatures, which are highly dependent on the source-target-sensor geometry, as well as the lack of available polarimetric imaging data, leave the utility of polarimetric imaging an open question for remote sensing applications.

Figure 1.1: Parking lot images (a) intensity and (b) DoLP collected from a rooftop.

The goal of gaining a better understanding of the potential performance and limitations of polarimetric imaging technology, and of moving toward more optimal designs of polarimetric imaging systems, motivates the research presented in this dissertation.

The objectives of this dissertation are listed below:

• develop an analytical model of the polarimetric imaging system for end-to-end system performance analysis;

• based on the analytical model, define system performance metrics specified for different applications;

• optimize the polarimetric imaging system using the analytical model as well as the system performance metrics.

## 1.2 Related Work

In this section, a short overview of the related work in imaging system modeling, scene modeling, polarimeter design and optimization, and processing of polarimetric data is presented.

The system approach to analyzing a remote sensing system has been addressed in [16,17]. The forecasting and analysis of spectroradiometric system performance (FASSP) model presented in [17] is a mathematical model for spectral imaging system analysis. A framework of the FASSP model is shown in Fig. 1.2. This model divides the remote sensing system into three subsystems: scene, sensor, and processing. This mathematical modeling approach generates statistical representations of the scene, and analytically propagates the statistics through the whole system. The model was then extended to cover the full spectrum from visible to longwave infrared [18]. The FASSP model has been found to be extremely helpful in the study of different aspects, such as land cover classification [19] and target detection [17,20]. The statistical approach was also adopted by [21] for analyzing multispectral imaging systems in mine detection, by [22] for Mars hyperspectral image analysis, and by [23] for performance prediction of ship detection from microsatellite electro-optical imagers. Another approach to imaging system analysis is the image simulation model, such as DIRSIG [24], which produces synthetic images.

The study of polarimetric imaging has aroused great interest among researchers, and a lot of studies have been done on the scene simulation, polarimetric sensor design, and polarimetric data processing.

In the study of scene simulation, characterization of the optical properties of materials has always been challenging. The polarimetric BRDF (pBRDF) is often used to fully characterize the polarization behavior of an object surface. The acquisition of pBRDF is often based on empirical modeling or experimental measurement. A pBRDF model based on microfacet theory is given in [25,26]. Recently this model has been extended by including a shadowing/masking function and also a Lambertian component [27]. The pBRDF model has been implemented in image simulation and extended to

Figure 1.2: Framework of FASSP model [17].

cover the longwave infrared wavelength [30]. Based on the pBRDF model, polarization image simulations at the micro scale and with contaminated surfaces are presented in [31]. An approach for measuring the first column of the pBRDF matrix while also taking into account the GSD-based variability is given in [29,32]. A method to

determine pBRDF using in-scene calibration materials is shown in [33]. Some early work on measuring the polarization of natural backgrounds, such as soil, vegetation and snow, can be found in [34-39]. Several models are available to characterize the pBRDF of natural background materials. In [40,41] pBRDF models were developed for general vegetation. A polarized reflectance representation for soil can be found in [42,43]. Recently, modeling and simulation of polarimetric hyperspectral images was presented in [44], including skylight polarization and polarized reflectance models.

A polarization-sensitive optical system can be used to measure the polarization state of the light, and Stokes parameter images are often constructed to reveal the polarization properties of the scene. Different types of polarimeters have been designed, and some of them are summarized and compared in [1,2,45]. One commonly used design is the division of time polarimeter (DoTP), which sequentially acquires polarization images by rotating polarization elements in front of the camera. A multispectral passive polarimeter based on a DoTP architecture is introduced in [48]. A division of amplitude polarimeter (DoAmP) acquires images simultaneously by splitting a beam into three or four beams, which are then passed through different analyzers and refocused on different focal planes [49]. The division of aperture polarimeter (DoAP) acquires all of the polarization data simultaneously by including a microlens array in the optics design [50]. The division of focal-plane polarimeter (DoFP), or division of wavefront polarimeter [51-54], fabricates a micropolarizer array directly onto the focal plane array. The micropolarizer array has a superpixel design such that each subpixel passes a different polarization state of the light.

The measurement of a Stokes vector is corrupted by noise and systematic errors. In [55] two architecture designs of DoFP are compared by analyzing the signal-to-noise ratio associated with each Stokes parameter. A number of studies have reported on designing an optimum polarimeter for Stokes vector measurement by minimizing various condition numbers of the system measurement matrices [56-61]. Recently, several methods have been proposed for optimizing the polarization state of illumination in active Stokes imaging by maximizing the contrast function in the presence of additive Gaussian noise [62] or maximizing the Euclidean distance between the target and background Stokes parameters [63]. In the case of a single-target/single-background scenario, the maximal contrast can be achieved in a scalar image obtained with the optimal illumination and analysis states [64].

Regarding the processing of polarimetric data, some studies have focused on reconstruction of the Stokes vector from intensity images. A maximum likelihood blind deconvolution algorithm was proposed in [65] for the estimation of unpolarized and polarized components, as well as the angle of polarization and point spread function. A Stokes parameter restoration algorithm was proposed in [66] based on penalized-likelihood image reconstruction techniques. A framework for jointly estimating Stokes parameters and aberrations was presented in [67]. Another Stokes image restoration algorithm, using spatial Gaussian mixture models, can be found in [68]. The images collected by a DoTP system may contain misregistration or motion artifacts; a method for registering the images using Fourier transform techniques was shown in [69]. Another algorithm for compensating the motion artifacts using optical flow was presented in [70].

Other interesting studies on polarimetric data processing address material classification and object detection. Techniques using polarization to improve material classification or segmentation can be found in [71-73]. Material classification in turbulence-degraded polarimetric imagery using the blind-deconvolution algorithm [65] was introduced in [5,74]. Target detection using generalized likelihood ratio test (GLRT) detectors is shown in [75]. Man-made target detection based on RX and Topological Anomaly Detection (TAD) approaches was compared using synthetic polarimetric images generated by DIRSIG [11] and lab measurements [76]. An image fusion approach for object detection can be found in [77].

The related work presented above is very helpful for obtaining a better understanding of each subsystem of polarimetric imaging. However, the combined effects of the scene characteristics, the sensor configurations, and the different processing algorithm implementations on the overall system performance have not been systematically studied, and a convenient system approach to analyze the end-to-end performance of a polarimetric imaging system does not exist.

## 1.3 Thesis Organization

In this chapter, the objectives of this research were stated after some background introduction. A short overview of the related work in imaging system modeling, scene modeling, polarimeter design and optimization, and the processing of polarimetric data was also provided. In Chapter 2, an overview will be provided of the mathematical representations of polarization, as well as some basic formulations that will be used throughout this thesis. The analytical model of the polarimetric imaging system will be introduced in detail in Chapter 3. Some intermediate results associated with each of the subsystem models (scene, sensor, and processing) will also be presented. In Chapter 4, different system performance metrics will be defined, and the system performance will be evaluated. In Chapter 5, a new approach will be introduced to optimize a polarimetric imaging system based on the target detection performance metric as well as the analytical model. Conclusions and future work are given in Chapter 6.

## Chapter 2

Overview of Polarization

2.1 Types of Polarization

The electric field propagating in the $z$ direction at time $t$ can be mathematically expressed as

$$\vec{\mathcal{E}}(z,t) = \mathcal{E}_x(z,t)\,\hat{x} + \mathcal{E}_y(z,t)\,\hat{y} \tag{2.1}$$

The transverse components of the electric field $\vec{\mathcal{E}}$ can be defined as

$$\mathcal{E}_x = \mathcal{E}_{0x}\cos(\tau + \phi_x) \tag{2.2}$$

$$\mathcal{E}_y = \mathcal{E}_{0y}\cos(\tau + \phi_y) \tag{2.3}$$

where $\mathcal{E}_{0x}$ and $\mathcal{E}_{0y}$ are the magnitudes in the $x$ and $y$ directions, $\tau$ is the propagation term, and $\phi_x$ and $\phi_y$ are the phase angles of $\mathcal{E}_x$ and $\mathcal{E}_y$.

The state of polarization refers to the orientation of the electric field $\vec{\mathcal{E}}$. Elliptical polarization is the most general case of polarization, and can be represented in the form

$$\frac{\mathcal{E}_x^2}{\mathcal{E}_{0x}^2} + \frac{\mathcal{E}_y^2}{\mathcal{E}_{0y}^2} - \frac{2\,\mathcal{E}_x\mathcal{E}_y}{\mathcal{E}_{0x}\mathcal{E}_{0y}}\cos\phi = \sin^2\phi \tag{2.4}$$

where

$$\phi = \phi_x - \phi_y \tag{2.5}$$

Linear polarization and circular polarization are two special cases of elliptical polarization. Linear polarization occurs under the condition that

$$\phi = 0 \;\text{or}\; \pi, \tag{2.6}$$

and Eq. (2.4) reduces to

$$\mathcal{E}_y = \pm\frac{\mathcal{E}_{0y}}{\mathcal{E}_{0x}}\,\mathcal{E}_x \tag{2.7}$$

Circular polarization occurs when

$$\mathcal{E}_{0x} = \mathcal{E}_{0y} = \mathcal{E}_0, \tag{2.8}$$

$$\phi = \pm\frac{\pi}{2}, \tag{2.9}$$

and Eq. (2.4) reduces to

$$\frac{\mathcal{E}_x^2}{\mathcal{E}_0^2} + \frac{\mathcal{E}_y^2}{\mathcal{E}_0^2} = 1 \tag{2.10}$$
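As a numerical sanity check on Eqs. (2.2)-(2.5), the sketch below (Python with NumPy; the amplitudes and phase angles are arbitrary illustrative values, not taken from the text) samples the transverse field components over one period and verifies that the left-hand side of Eq. (2.4) stays equal to $\sin^2\phi$, i.e. that the tip of the field vector traces an ellipse:

```python
import numpy as np

# Transverse field components (Eqs. 2.2-2.3) for a sampled propagation term tau.
E0x, E0y = 1.0, 0.5          # illustrative amplitudes (assumed values)
phi_x, phi_y = 0.9, 0.3      # illustrative phase angles (assumed values)
phi = phi_x - phi_y          # Eq. (2.5)

tau = np.linspace(0.0, 2.0 * np.pi, 1000)
Ex = E0x * np.cos(tau + phi_x)
Ey = E0y * np.cos(tau + phi_y)

# Left-hand side of the ellipse equation (2.4); it equals sin^2(phi)
# for every tau, confirming the field vector traces an ellipse.
lhs = (Ex / E0x) ** 2 + (Ey / E0y) ** 2 \
    - 2 * Ex * Ey / (E0x * E0y) * np.cos(phi)
assert np.allclose(lhs, np.sin(phi) ** 2)
```

Setting `phi_x == phi_y` collapses the ellipse to the line of Eq. (2.7), and `E0x == E0y` with `phi = pi/2` gives the circle of Eq. (2.10).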

The elliptical polarization representation is only valid for fully polarized light, and thus has certain limitations. Therefore the polarization state of light is often represented in other forms.

## 2.2 Representation of Polarized Light

The Stokes vector can be expressed as

$$\mathbf{S} = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix} = \begin{bmatrix} \mathcal{E}_{0x}^2 + \mathcal{E}_{0y}^2 \\ \mathcal{E}_{0x}^2 - \mathcal{E}_{0y}^2 \\ 2\,\mathcal{E}_{0x}\mathcal{E}_{0y}\cos\phi \\ 2\,\mathcal{E}_{0x}\mathcal{E}_{0y}\sin\phi \end{bmatrix} \tag{2.11}$$

where $S_0$, $S_1$, $S_2$, and $S_3$ are the Stokes parameters, which are also commonly denoted as $I$, $Q$, $U$, and $V$ in some literature. $S_0$ can be thought of as the intensity of the light, $S_1$ represents horizontal or vertical polarization, $S_2$ represents diagonal polarization, and $S_3$ represents circular polarization. The degree of polarization (DoP) is defined as

$$DoP = \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0} \tag{2.12}$$

In most passive remote sensing scenarios, little circular polarization is observed [1,2]. Considering only linear polarization, the Stokes vector can be expressed as

$$\mathbf{S} = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \end{bmatrix}, \tag{2.13}$$

and accordingly the degree of linear polarization (DoLP) is found as

$$DoLP = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}. \tag{2.14}$$

The angle of polarization $\psi$ is defined as

$$\tan 2\psi = \frac{S_2}{S_1} \tag{2.15}$$
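Eqs. (2.14)-(2.15) translate directly into code. A minimal sketch (Python/NumPy; `arctan2` is used so the angle is recovered in the correct quadrant, which the tangent of Eq. (2.15) alone leaves ambiguous):

```python
import numpy as np

def dolp(S):
    """Degree of linear polarization, Eq. (2.14), for S = [S0, S1, S2]."""
    return np.sqrt(S[1] ** 2 + S[2] ** 2) / S[0]

def aop(S):
    """Angle of polarization psi, Eq. (2.15), in radians."""
    return 0.5 * np.arctan2(S[2], S[1])

# Horizontally polarized light: S = [1, 1, 0] -> DoLP = 1, psi = 0.
S = np.array([1.0, 1.0, 0.0])
print(dolp(S))  # 1.0
print(aop(S))   # 0.0

# Unpolarized light: S = [1, 0, 0] -> DoLP = 0.
print(dolp(np.array([1.0, 0.0, 0.0])))  # 0.0
```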

## 2.3 Mueller Matrix

The polarization of light is characterized using the Stokes vector representation. The optical interaction with media that transmit or reflect light can be represented using a Mueller matrix. The Mueller matrix $\mathbf{M}$ relates the output Stokes vector $\mathbf{S}_{out}$ to the incident Stokes vector $\mathbf{S}_{in}$ as a linear transformation:

$$\mathbf{S}_{out} = \mathbf{M}\,\mathbf{S}_{in} \tag{2.16}$$

The Mueller matrix for an ideal linear polarizer rotated to angle $\theta$, for example, can be expressed as

$$\mathbf{M} = \frac{1}{2}\begin{bmatrix} 1 & \cos 2\theta & \sin 2\theta \\ \cos 2\theta & \cos^2 2\theta & \sin 2\theta\cos 2\theta \\ \sin 2\theta & \sin 2\theta\cos 2\theta & \sin^2 2\theta \end{bmatrix} \tag{2.17}$$
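As a minimal sketch of Eq. (2.17), assuming an ideal polarizer and the linear (three-element) Stokes convention used here, the function below builds $\mathbf{M}(\theta)$ and applies Eq. (2.16); for horizontally polarized input the transmitted intensity reproduces Malus's law, $I(\theta) = \cos^2\theta$:

```python
import numpy as np

def polarizer_mueller(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta, Eq. (2.17)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1.0, c,     s],
                           [c,   c * c, s * c],
                           [s,   s * c, s * s]])

# Horizontally polarized input S_in = [1, 1, 0] through a polarizer at theta:
# transmitted intensity S0_out = cos^2(theta), i.e. Malus's law.
for theta in (0.0, np.pi / 4, np.pi / 2):
    S_out = polarizer_mueller(theta) @ np.array([1.0, 1.0, 0.0])
    assert np.isclose(S_out[0], np.cos(theta) ** 2)
```

For unpolarized input `[1, 0, 0]` the same matrix transmits half the intensity at any angle, as expected for an ideal polarizer.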

## 2.4 Measurement of the Stokes Vector

There are several conventional methods to measure the incident Stokes vector using linear polarizing filters. By rotating the polarizing filter to at least three nonredundant angles, the incident Stokes vector can be reconstructed. If $I_\theta$ is the measurement when the polarizing filter is rotated to angle $\theta$, the incident Stokes vector can be obtained using Pickering's method [37]

$$S_0 = I_0 + I_{90}, \tag{2.18}$$

$$S_1 = I_0 - I_{90}, \tag{2.19}$$

$$S_2 = 2I_{45} - I_0 - I_{90}, \tag{2.20}$$

or, using measurements at 0°, 60°, and 120°,

$$S_0 = \tfrac{2}{3}\left(I_0 + I_{60} + I_{120}\right), \tag{2.21}$$

$$S_1 = \tfrac{2}{3}\left(2I_0 - I_{60} - I_{120}\right), \tag{2.22}$$

$$S_2 = -\tfrac{2}{\sqrt{3}}\left(I_{120} - I_{60}\right), \tag{2.23}$$

or using the modified Pickering's method

$$S_0 = \tfrac{1}{2}\left(I_0 + I_{45} + I_{90} + I_{135}\right), \tag{2.24}$$

$$S_1 = I_0 - I_{90}, \tag{2.25}$$

$$S_2 = I_{45} - I_{135}. \tag{2.26}$$
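The reconstruction methods above can be sketched and cross-checked in a few lines. In the sketch below (Python/NumPy), the hypothetical helper `intensity` simulates an ideal, noise-free measurement behind a polarizer at angle $\theta$, i.e. the first row of Eq. (2.17) applied to a linear Stokes vector, $I_\theta = \tfrac{1}{2}(S_0 + S_1\cos 2\theta + S_2\sin 2\theta)$:

```python
import numpy as np

def intensity(S, theta):
    """Ideal measured intensity behind a polarizer at angle theta (radians)."""
    return 0.5 * (S[0] + S[1] * np.cos(2 * theta) + S[2] * np.sin(2 * theta))

def pickering(I0, I45, I90):
    """Pickering's method, Eqs. (2.18)-(2.20)."""
    return np.array([I0 + I90, I0 - I90, 2 * I45 - I0 - I90])

def modified_pickering(I0, I45, I90, I135):
    """Modified Pickering's method, Eqs. (2.24)-(2.26)."""
    return np.array([0.5 * (I0 + I45 + I90 + I135), I0 - I90, I45 - I135])

# Both methods recover the same (noise-free) linear Stokes vector.
S_true = np.array([1.0, 0.3, -0.2])
d = np.deg2rad
I = {a: intensity(S_true, d(a)) for a in (0, 45, 60, 90, 120, 135)}
assert np.allclose(pickering(I[0], I[45], I[90]), S_true)
assert np.allclose(modified_pickering(I[0], I[45], I[90], I[135]), S_true)
```

With real measurements the three methods differ in noise behavior, which is the subject of the sensor noise analysis later in this thesis.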

## 2.5 Fresnel Equations

Fresnel equations characterize the behavior of light moving between media with different refractive indices.

For a dielectric material, the Fresnel reflection coefficient for polarized radiation perpendicular to the plane of incidence can be expressed as

$$r_s(\theta_i) = \frac{\cos\theta_i - \left(n^2 - \sin^2\theta_i\right)^{1/2}}{\cos\theta_i + \left(n^2 - \sin^2\theta_i\right)^{1/2}}, \tag{2.27}$$

and for polarized radiation parallel to the plane of incidence as

$$r_p(\theta_i) = \frac{n^2\cos\theta_i - \left(n^2 - \sin^2\theta_i\right)^{1/2}}{n^2\cos\theta_i + \left(n^2 - \sin^2\theta_i\right)^{1/2}}, \tag{2.28}$$

where $\theta_i$ is the angle of incidence and $n$ is the relative refractive index. The coefficients $r_s$ and $r_p$ are also denoted $r_\perp$ and $r_\parallel$ in some literature [78]. The corresponding reflectances are

$$R_s(\theta_i) = r_s^2(\theta_i) \tag{2.29}$$

$$R_p(\theta_i) = r_p^2(\theta_i) \tag{2.30}$$

For a metal, which has a complex refractive index ($\tilde{n} = n + ik$), the Fresnel reflectances can be found as [29]

$$R_s(\theta_i) = \frac{(A - \cos\theta_i)^2 + B^2}{(A + \cos\theta_i)^2 + B^2} \tag{2.31}$$

$$R_p(\theta_i) = R_s(\theta_i)\left[\frac{(A - \sin\theta_i\tan\theta_i)^2 + B^2}{(A + \sin\theta_i\tan\theta_i)^2 + B^2}\right] \tag{2.32}$$

where

$$A = \sqrt{\frac{\sqrt{C} + D}{2}} \tag{2.33}$$

$$B = \sqrt{\frac{\sqrt{C} - D}{2}} \tag{2.34}$$

$$C = 4n^2k^2 + D^2 \tag{2.35}$$

$$D = n^2 - k^2 - \sin^2\theta_i \tag{2.36}$$

Figure 2.1: Fresnel reflectance $R_s$ and $R_p$ of glass and copper as a function of incident angle $\theta_i$.
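The Fresnel reflectances are straightforward to compute. The sketch below (Python/NumPy) implements Eqs. (2.27)-(2.30) for a dielectric and Eqs. (2.31)-(2.36) for a metal, and checks two well-known limits: a dielectric has $R_p = 0$ at Brewster's angle $\theta_B = \arctan n$, and $R_s = R_p$ at normal incidence. The copper index values are approximate assumed values for illustration only.

```python
import numpy as np

def fresnel_dielectric(theta_i, n):
    """Reflectances (Rs, Rp) for a dielectric, Eqs. (2.27)-(2.30)."""
    c = np.cos(theta_i)
    g = np.sqrt(n ** 2 - np.sin(theta_i) ** 2)
    rs = (c - g) / (c + g)
    rp = (n ** 2 * c - g) / (n ** 2 * c + g)
    return rs ** 2, rp ** 2

def fresnel_metal(theta_i, n, k):
    """Reflectances (Rs, Rp) for a metal with index n + ik, Eqs. (2.31)-(2.36)."""
    D = n ** 2 - k ** 2 - np.sin(theta_i) ** 2
    C = 4 * n ** 2 * k ** 2 + D ** 2
    A = np.sqrt((np.sqrt(C) + D) / 2)
    B = np.sqrt((np.sqrt(C) - D) / 2)
    c, st = np.cos(theta_i), np.sin(theta_i) * np.tan(theta_i)
    Rs = ((A - c) ** 2 + B ** 2) / ((A + c) ** 2 + B ** 2)
    Rp = Rs * ((A - st) ** 2 + B ** 2) / ((A + st) ** 2 + B ** 2)
    return Rs, Rp

# Glass (n ~ 1.5): at Brewster's angle Rp vanishes, so the reflected
# light is fully s-polarized; at normal incidence Rs = Rp = 0.04.
n = 1.5
Rs_b, Rp_b = fresnel_dielectric(np.arctan(n), n)
assert np.isclose(Rp_b, 0.0) and Rs_b > 0.0
Rs0, Rp0 = fresnel_dielectric(0.0, n)
assert np.isclose(Rs0, 0.04) and np.isclose(Rp0, 0.04)

# Metal (assumed approximate copper index): Rs = Rp at normal incidence.
Rs_m, Rp_m = fresnel_metal(0.0, 0.62, 2.57)
assert np.isclose(Rs_m, Rp_m)
```

This strong s/p imbalance of the dielectric near Brewster's angle is what makes reflections from smooth man-made surfaces highly polarized, as exploited throughout this thesis.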

## Chapter 3

Analytical Model of Polarimetric

Imaging System

To better understand the effects of various system attributes and to help optimize the design and use of polarimetric imaging systems, an analytical model is developed to predict the system performance. The framework of the analytical model is illustrated in Fig. 3.1. The polarimetric imaging system is analyzed from the perspective of scene, sensor, and processing [79]. In the scene model, we analyze how the light interacts with the object surface, and how the polarized light is reflected to the sensor aperture. In the sensor model, we consider how the light is passed through optical components and then digitized by the camera. In the image processing model, we study the transformations of the polarization images to different vector spaces or features. Finally, some system performance metrics are generated to help evaluate the end-to-end system performance and support system optimization. The analytical model is introduced below in detail.

Figure 3.1: Framework of analytical model of polarimetric imaging system.

Figure 3.2: Light reflection geometry for BRDF.

## 3.1 Scene Model

3.1.1 Optical Properties of Object Surface

The optical properties of object surfaces are often characterized by the bidirectional reflectance distribution function (BRDF) in spectral image analysis. The BRDF is defined as the ratio of the radiance scattered into the direction with orientation angles ($\theta_r$, $\phi_r$) to the incident irradiance from ($\theta_i$, $\phi_i$), i.e.

$$f = \frac{L(\theta_r, \phi_r)}{E(\theta_i, \phi_i)} \tag{3.1}$$

where $E(\theta_i, \phi_i)$ is the incident irradiance and $L(\theta_r, \phi_r)$ is the reflected radiance. The definition of BRDF is also illustrated in Fig. 3.2.

When considering polarization, the BRDF should be extended to a polarimetric bidirectional reflectance distribution function (pBRDF). In general the pBRDF can be represented by a Mueller matrix as

$$\mathbf{F} = \begin{bmatrix} f_{00} & f_{01} & f_{02} \\ f_{10} & f_{11} & f_{12} \\ f_{20} & f_{21} & f_{22} \end{bmatrix} \tag{3.2}$$

which is a 3×3 matrix when only linear polarization is considered. The incident irradiance and reflected radiance are accordingly extended to Stokes vectors, and are related by the pBRDF matrix as

$$\begin{bmatrix} L_0 \\ L_1 \\ L_2 \end{bmatrix} = \begin{bmatrix} f_{00} & f_{01} & f_{02} \\ f_{10} & f_{11} & f_{12} \\ f_{20} & f_{21} & f_{22} \end{bmatrix} \begin{bmatrix} E_0 \\ E_1 \\ E_2 \end{bmatrix} \tag{3.3}$$

where $[L_0, L_1, L_2]^T$ ($T$ denotes transpose) is the Stokes vector of the reflected radiance, and $[E_0, E_1, E_2]^T$ is the Stokes vector of the incident irradiance.
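To make Eq. (3.3) concrete, the short sketch below applies a hypothetical 3×3 linear pBRDF matrix to an unpolarized incident Stokes vector; the matrix values are illustrative only, not measurements of any real material. It shows how a surface with nonzero off-diagonal terms induces partial polarization on reflection:

```python
import numpy as np

# Hypothetical 3x3 linear pBRDF matrix F (units 1/sr); values are
# illustrative only, not measurements of any real material.
F = np.array([[0.20, 0.05, 0.00],
              [0.05, 0.18, 0.00],
              [0.00, 0.00, 0.15]])

# Unpolarized incident irradiance Stokes vector [E0, E1, E2].
E = np.array([100.0, 0.0, 0.0])

# Eq. (3.3): reflected radiance Stokes vector L = F @ E.
L = F @ E
print(L)  # [20.  5.  0.] -> reflection has induced partial polarization

# DoLP of the reflected light, Eq. (2.14).
print(np.sqrt(L[1] ** 2 + L[2] ** 2) / L[0])  # 0.25
```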

The pBRDF can be estimated based on an empirical approach or an analytical approach. In the empirical approach the pBRDF matrix is found from measurement. In [80] a framework for the outdoor measurement of pBRDF is introduced, but only the first column of the pBRDF matrix is considered. The full measurement of pBRDF can be achieved based on different permutations of polarization states of the light generator and analyzer [2,81], and also at different scene geometries. The empirical approach is therefore not very flexible and requires a large number of measurements. Alternatively, the pBRDF can be analytically estimated using physics-based models. In our work the pBRDF is predicted based on microfacet theory, which is commonly used in the modeling of BRDF [82,83] and has recently been extended to the modeling of pBRDF [26,27,29,30].

The general idea of microfacet theory is that the object surface is not perfectly smooth, but has a microscopic level of detail, as illustrated in Fig. 3.3. This microscopic level of detail is assumed to contain a statistically large distribution of microfacets. Each microfacet is perfectly mirror-like, and its reflection can be characterized using the Fresnel equations introduced in Section 2.5.

The light scattered from a material surface may follow different paths, as illustrated in Fig. 3.3:

• Light waves may be directly reflected off the object surface, resulting in single scattering. This is shown as type A reflection in Fig. 3.3.

Figure 3.3: Optical scatter from object surface.

• Light waves may penetrate the surface, scatter within the material, and then reflect back out to generate uniform radiance reflection. This multiple scattering is illustrated as the type B reflection in Fig. 3.3.

• Light waves may go through multiple reflections among multiple microfacets, which also results in uniform radiance reflection. This is indicated as type C reflection in Fig. 3.3.

The type A reflection is called the specular component of reflection, and is normally polarized. A nearly specular surface has its specular reflection concentrated within a cone in the hemisphere, as shown in Fig. 3.4. The type B and C reflections are called the diffuse component of reflection. This diffuse reflection can be thought of as a depolarizing process, and is normally unpolarized. The pBRDF is therefore divided into a polarized specular component **F**_s and an unpolarized diffuse component **F**_d:

$$\mathbf{F} = \mathbf{F}_s + \mathbf{F}_d \quad (3.4)$$

Figure 3.4: Specular and diffuse components of light reflected from a glass-like surface. The diffuse component is magnified three times.

The specular component is modeled based on microfacet theory, and its reflection is described by a Mueller matrix. The microfacet model is considered accurate when the microfacet area is larger than the wavelength [84]. The scattering geometry of a single microfacet is illustrated in Fig. 3.5, and its Mueller matrix can be expressed as

$$\mathbf{M}(\beta) = \frac{1}{2}\begin{bmatrix} r_s^2 + r_p^2 & r_s^2 - r_p^2 & 0 \\ r_s^2 - r_p^2 & r_s^2 + r_p^2 & 0 \\ 0 & 0 & 2 r_s r_p \end{bmatrix} \quad (3.5)$$

where r_s and r_p are the Fresnel reflection coefficients defined in Section 2.5. The coefficients r_s and r_p are calculated at the scattering half angle β, which can be found from the scene geometry as

$$\cos 2\beta = \cos\theta_i \cos\theta_r + \sin\theta_i \sin\theta_r \cos\phi \quad (3.6)$$

where θ_i is the incident zenith angle, θ_r is the reflected zenith angle, and φ is the relative azimuthal angle between the incident and reflected directions. The definitions of these angles are illustrated in Fig. 3.5.

Figure 3.5: Scattering geometry of a single microfacet, showing the incident and reflected zenith angles θ_i and θ_r, the facet tilt angle α, the scattering half angle β, and the facet and object surface normals.
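A minimal numerical sketch of Eqs. (3.5) and (3.6) follows, assuming a dielectric facet of refractive index n. The Fresnel amplitude coefficients r_s and r_p are computed here with the standard dielectric formulas; the exact sign convention is defined in Section 2.5 of the dissertation, so the convention used below is an assumption.

```python
import numpy as np

def fresnel_amplitudes(beta, n=1.5):
    """Fresnel amplitude reflection coefficients at incidence angle beta
    (radians) on a dielectric of refractive index n (standard convention)."""
    theta_t = np.arcsin(np.sin(beta) / n)  # Snell's law
    rs = ((np.cos(beta) - n * np.cos(theta_t))
          / (np.cos(beta) + n * np.cos(theta_t)))
    rp = ((n * np.cos(beta) - np.cos(theta_t))
          / (n * np.cos(beta) + np.cos(theta_t)))
    return rs, rp

def facet_mueller(beta, n=1.5):
    """Eq. (3.5): Mueller matrix of a single mirror-like microfacet."""
    rs, rp = fresnel_amplitudes(beta, n)
    return 0.5 * np.array([
        [rs**2 + rp**2, rs**2 - rp**2, 0.0],
        [rs**2 - rp**2, rs**2 + rp**2, 0.0],
        [0.0, 0.0, 2.0 * rs * rp],
    ])
```

At normal incidence (β = 0) the off-diagonal elements vanish, so a flat facet viewed head-on produces no linear polarization, consistent with the simulations later in this section.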

The Mueller matrix calculated in Eq. (3.5) is based on the local coordinates of each microfacet surface. The scene geometry angles θ_i, θ_r, and φ are defined in global coordinates, whose reference normal is the object surface normal. The Stokes vector of the incident irradiance should be rotated from the global coordinates to the local coordinates, interacted with the microfacet, and rotated back to the global coordinates to generate the Stokes vector of the reflected radiance [2,31]. Accordingly the Mueller matrix can be modified to account for this coordinate transformation as

$$\mathbf{M}'(\beta) = \mathbf{R}(-\eta_r)\,\mathbf{M}(\beta)\,\mathbf{R}(\eta_i) \quad (3.7)$$

where **R**(η) is the rotation matrix, expressed as

$$\mathbf{R}(\eta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos 2\eta & \sin 2\eta \\ 0 & -\sin 2\eta & \cos 2\eta \end{bmatrix} \quad (3.8)$$

The rotation angles η_i and η_r are determined from the angle α between the microfacet normal and the object surface normal, together with the scattering half angle β, as

$$\cos\eta_i = \frac{\cos\alpha - \cos\theta_i \cos\beta}{\sin\theta_i \sin\beta} \quad (3.9)$$

$$\cos\eta_r = \frac{\cos\alpha - \cos\theta_r \cos\beta}{\sin\theta_r \sin\beta} \quad (3.10)$$

where α can also be calculated from the scene geometry and β as

$$\cos\alpha = \frac{\cos\theta_i + \cos\theta_r}{2\cos\beta} \quad (3.11)$$
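The chain from scene geometry to the microfacet angles in Eqs. (3.6) and (3.9)-(3.11) can be sketched as follows. The clipping guards against floating-point round-off pushing the cosine arguments outside [-1, 1]; it is an implementation choice, not part of the model.

```python
import numpy as np

def microfacet_angles(theta_i, theta_r, phi):
    """Eqs. (3.6), (3.9)-(3.11): scattering half angle beta, facet tilt
    alpha, and rotation angles eta_i, eta_r from scene geometry (radians)."""
    cos2b = (np.cos(theta_i) * np.cos(theta_r)
             + np.sin(theta_i) * np.sin(theta_r) * np.cos(phi))
    beta = 0.5 * np.arccos(np.clip(cos2b, -1.0, 1.0))
    alpha = np.arccos(np.clip((np.cos(theta_i) + np.cos(theta_r))
                              / (2.0 * np.cos(beta)), -1.0, 1.0))
    eta_i = np.arccos(np.clip((np.cos(alpha) - np.cos(theta_i) * np.cos(beta))
                              / (np.sin(theta_i) * np.sin(beta)), -1.0, 1.0))
    eta_r = np.arccos(np.clip((np.cos(alpha) - np.cos(theta_r) * np.cos(beta))
                              / (np.sin(theta_r) * np.sin(beta)), -1.0, 1.0))
    return beta, alpha, eta_i, eta_r
```

A quick sanity check: for a specular geometry (θ_i = θ_r, φ = 180°) the contributing facet is aligned with the surface normal, so α should be zero and β should equal the incidence angle.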

The orientation of each microfacet is often assumed to follow a certain probability density function. In our analytical model the statistical distribution of microfacet orientations is assumed to be Gaussian:

$$p(\alpha) = \frac{1}{2\pi\sigma_n^2 \cos^3\alpha}\, e^{-\tan^2\alpha / 2\sigma_n^2} \quad (3.12)$$

The surface roughness σ_n is characterized using the surface height standard deviation σ_h and the surface correlation length *l* [27,84] as

$$\sigma_n = \frac{\sigma_h}{l} \quad (3.13)$$

The surface roughness σ_n therefore increases with increasing height standard deviation σ_h or decreasing surface correlation length *l*, and decreases in the opposite cases. The surface roughness σ_n is commonly estimated empirically using data fitting techniques [29,82].

A shadowing/masking function is often employed to account for the incident and reflected light blocked by adjacent microfacets [27,85]. The shadowing/masking function is expressed as

$$G(\theta_i, \theta_r, \phi) = \min\!\left(1,\; \frac{2\cos\alpha\cos\theta_i}{\cos\beta},\; \frac{2\cos\alpha\cos\theta_r}{\cos\beta}\right) \quad (3.14)$$

The specular component of the pBRDF is then formed from the transformed Mueller matrix **M**′, the distribution function of microfacets p(α), and the shadowing/masking function G:

$$\mathbf{F}_s = \frac{p(\alpha)\,\mathbf{M}'(\beta)}{4\cos\theta_i\cos\theta_r}\, G(\theta_i, \theta_r, \phi) \quad (3.15)$$
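Putting Eqs. (3.7), (3.8), (3.12), (3.14), and (3.15) together, the specular pBRDF assembly can be sketched as below. The facet Mueller matrix M is passed in already evaluated at β, all angles are assumed precomputed in radians, and the function names are my own.

```python
import numpy as np

def rotation_mueller(eta):
    """Eq. (3.8): Mueller rotation matrix (3x3 linear-polarization form)."""
    c, s = np.cos(2 * eta), np.sin(2 * eta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def facet_pdf(alpha, sigma_n):
    """Eq. (3.12): Gaussian distribution of microfacet tilt angles."""
    return (np.exp(-np.tan(alpha) ** 2 / (2 * sigma_n ** 2))
            / (2 * np.pi * sigma_n ** 2 * np.cos(alpha) ** 3))

def shadowing(theta_i, theta_r, alpha, beta):
    """Eq. (3.14): shadowing/masking factor."""
    return min(1.0,
               2 * np.cos(alpha) * np.cos(theta_i) / np.cos(beta),
               2 * np.cos(alpha) * np.cos(theta_r) / np.cos(beta))

def specular_pbrdf(M, theta_i, theta_r, alpha, beta, eta_i, eta_r, sigma_n):
    """Eq. (3.15): F_s built from the rotated facet Mueller matrix M'."""
    M_rot = rotation_mueller(-eta_r) @ M @ rotation_mueller(eta_i)  # Eq. (3.7)
    g = shadowing(theta_i, theta_r, alpha, beta)
    return (facet_pdf(alpha, sigma_n) * M_rot * g
            / (4 * np.cos(theta_i) * np.cos(theta_r)))
```

Note that the rotation matrices reduce to the identity when η = 0, so in the specular plane the facet Mueller matrix passes through unchanged.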

The diffuse component of the pBRDF, **F**_d, is unpolarized, and therefore only has a non-zero value in the first element of its matrix:

$$\mathbf{F}_d = \begin{bmatrix} f_{00}^d & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad (3.16)$$

The diffuse term is constrained by the directional hemispherical reflectance (DHR). The DHR is the ratio of the total energy reflected from the surface to the total energy incident from a given direction. From the conservation of energy, the DHR can be defined as

$$DHR(\theta_i) = \int_0^{2\pi}\!\!\int_0^{\pi/2} f_{00} \cos\theta_r \sin\theta_r \, d\theta_r \, d\phi \quad (3.17)$$

The diffuse component can be assumed to be perfectly Lambertian, and the diffuse term f^d_00 is generated from the DHR and f^s_00, the first element of **F**_s:

$$f_{00}^d = \frac{1}{\pi}\left[ DHR(\theta_i) - \int_0^{2\pi}\!\!\int_0^{\pi/2} f_{00}^s \cos\theta_r \sin\theta_r \, d\theta_r \, d\phi \right] \quad (3.18)$$
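The hemispherical integral in Eq. (3.18) has no closed form for a general specular lobe, so in practice it is evaluated numerically. A midpoint-rule sketch follows; the grid resolution is an arbitrary choice, and `f00s_fn` is a hypothetical callable returning the specular element f^s_00 at a given reflection direction.

```python
import numpy as np

def diffuse_f00(dhr, f00s_fn, n_theta=90, n_phi=180):
    """Eq. (3.18): Lambertian diffuse term from DHR minus the numerically
    integrated specular hemispherical reflectance."""
    d_theta = (np.pi / 2) / n_theta
    d_phi = (2 * np.pi) / n_phi
    # Midpoint samples in zenith and azimuth.
    thetas = (np.arange(n_theta) + 0.5) * d_theta
    phis = (np.arange(n_phi) + 0.5) * d_phi
    integral = 0.0
    for t in thetas:
        for p in phis:
            integral += f00s_fn(t, p) * np.cos(t) * np.sin(t) * d_theta * d_phi
    return (dhr - integral) / np.pi
```

With a negligible specular lobe the diffuse term reduces to DHR/π, the familiar Lambertian albedo-to-BRDF conversion.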

Finally, the full pBRDF is generated by substituting Eq. (3.15) and Eq. (3.16) into Eq. (3.4).

The pBRDF model introduced above is commonly applied to man-made target objects. Some types of natural backgrounds, such as vegetation, instead exhibit a layered structure. Several researchers have derived physical models [34,40,42,43,86] to analyze the polarized reflection of a vegetation canopy. Following the model derived in [40], a simplified model assuming a uniform leaf inclination distribution is given by [42] as

$$\rho_p(\theta_i, \theta_r, \phi) = \frac{F_p}{4(\cos\theta_i + \cos\theta_r)} \quad (3.19)$$

where F_p is the polarized reflection, which is equivalent to the second element of the Mueller matrix in Eq. (3.5). A Gaussian or Beta distribution function can alternatively be used as the leaf inclination distribution. This model was validated by field measurement [40] using a radiometer with a footprint of 80 cm in diameter, which may be a common resolution for an airborne sensor. The pBRDF of vegetation can also be estimated accordingly by replacing the polarized reflection F_p with the

Table 3.1: Parameter settings for the simulations shown in Fig. 3.6.

| Parameter | Setting in simulation |
| --- | --- |
| Refractive index *n* | 1.52 |
| *DHR* | 0.1 |
| Surface roughness σ_n | 0.1, 0.3, 0.5, 0.7 |
| Incident angle θ_i | 60° |
| Observation zenith angle θ_r | 0°–90° |
| Observation azimuth angle φ | 0°–180° |
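The simplified canopy model of Eq. (3.19) is a one-line computation; in the sketch below, F_p would come from the second element of the facet Mueller matrix of Eq. (3.5), and the function name is my own.

```python
import numpy as np

def vegetation_polarized_reflection(theta_i, theta_r, Fp):
    """Eq. (3.19): polarized reflection of a vegetation canopy under a
    uniform leaf inclination distribution; angles in radians."""
    return Fp / (4.0 * (np.cos(theta_i) + np.cos(theta_r)))
```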

## 3.1.2 Simulations of pBRDF Model Determined Polarization

In this section the polarization behavior of an object surface is analyzed based on the pBRDF model when the incident light is randomly polarized. In this case the DoLP can be calculated simply as

$$DoLP = \frac{\sqrt{f_{10}^2 + f_{20}^2}}{f_{00}} \quad (3.20)$$
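Eq. (3.20) translates directly to code; the three arguments are the first-column pBRDF elements, and the example values below are arbitrary.

```python
import numpy as np

def dolp_from_pbrdf(f00, f10, f20):
    """Eq. (3.20): DoLP of reflected light under randomly polarized
    illumination, from the first column of the pBRDF matrix."""
    return np.sqrt(f10 ** 2 + f20 ** 2) / f00

d = dolp_from_pbrdf(0.5, 0.3, 0.4)  # arbitrary illustrative elements
```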

We first analyze the polarization behavior of glass when its surface presents different levels of roughness. The DoLP of glass determined by the pBRDF with different levels of surface roughness and at different observation geometries (θ_r, φ) is shown in Fig. 3.6. The parameter settings used in this group of simulations are listed in Table 3.1. In Fig. 3.6, the radial coordinate corresponds to the observation zenith angle, and the angular coordinate corresponds to the observation azimuth angle. For a relatively smooth surface with low surface roughness, the polarization tends to be higher in the specular plane (φ = 180°), but the polarization can only be observed within a specular lobe. As the surface roughness increases, the polarization spreads across the whole hemisphere, but its magnitude decreases.

Figure 3.6: pBRDF model determined DoLP of glass with different levels of surface roughness: (a) σ_n = 0.1, (b) σ_n = 0.3, (c) σ_n = 0.5, and (d) σ_n = 0.7. The incident angle θ_i is 60°. The radial coordinate corresponds to the observation zenith angle, and the angular coordinate corresponds to the observation azimuth angle.

Table 3.2: Parameter settings for the simulations shown in Fig. 3.7.

| Parameter | Glass | Vegetation |
| --- | --- | --- |
| Refractive index *n* | 1.52 | 1.5 |
| Reflection factor | *DHR* = 0.1 | f^d_00 = 0.04/π |
| Surface roughness σ_n | 0.1 | − |
| Incident angle θ_i | 20°, 40°, 60° | 20°, 40°, 60° |
| Observation zenith angle θ_r | 0°–90° | 0°–90° |
| Observation azimuth angle φ | 0°–180° | 0°–180° |

The polarization behaviors of glass and vegetation at different light incident angles are compared in Fig. 3.7. The parameter settings for this group of simulations are listed in Table 3.2; the parameters for the vegetation simulation are adopted from [40]. In Fig. 3.7 it can be observed that glass shows higher polarization than vegetation because of its smoother surface, and therefore much higher specular reflection, but the high polarization observations are restricted to the specular plane. The DoLP value of glass is much higher at an incident angle of 60°, which is close to the Brewster angle (≈57°) for glass.

## 3.1.3 Solar and Atmosphere Modeling

In this section, the modeling of solar illumination and atmospheric effects on optical polarization is discussed. The illumination and atmospheric effects that may appear in a remote sensing system are illustrated in Fig. 3.8, where several factors are seen to contribute to the radiance received by the sensor.

• The first path along which the light propagates to the sensor's aperture is indicated as type A illumination in Fig. 3.8. This is the direct solar illumination reflected off the object surface toward the sensor, and the resulting radiance is called the direct solar reflected radiance.

Figure 3.7: pBRDF model determined DoLP of glass (a, c, e) and vegetation (b, d, f) at different incident angles: (a, b) θ_i = 20°, (c, d) θ_i = 40°, and (e, f) θ_i = 60°.

Figure 3.8: Solar and atmospheric effects on radiance reaching sensor aperture.

• The type B radiance also originates from sunlight, which is scattered by the atmosphere and then reflected by the Earth to the sensor. The sunlight scattered by the atmosphere is commonly referred to as skylight, and this type B radiance is called the downwelled radiance.

• The type C radiance is the solar irradiance scattered by the atmosphere directly into the sensor's aperture without ever reaching the Earth. This is called the upwelled or path scattered radiance.

• Other radiance paths, such as sunlight reflected off nearby surfaces (the adjacency effect) or background radiation, may also exist in a remote sensing system, but are not currently considered in our model.

As mentioned previously in Eq. (3.3), the incident irradiance and reflected radiance are modeled mathematically based on the pBRDF. The radiance calculations are functions of the incident and reflected zenith angles (θ_i, θ_r) and the relative azimuth angle φ between sun and sensor. All radiance calculations are also functions of wavelength λ, but the subscript has been dropped for clarity.

The direct solar irradiance is randomly polarized, so the solar reflected radiance can be calculated by considering only the first column of the pBRDF Mueller matrix, [f_00, f_10, f_20]^T, and is expressed as

$$\mathbf{L}_r = \begin{bmatrix} L_{r0} \\ L_{r1} \\ L_{r2} \end{bmatrix} = \tau \cos\theta_i\, E_s \begin{bmatrix} f_{00} \\ f_{10} \\ f_{20} \end{bmatrix} \quad (3.21)$$

where **L**_r = [L_r0, L_r1, L_r2]^T is the Stokes vector representation of the direct solar reflected radiance, τ is the transmittance along the surface-to-sensor path, and E_s is the solar irradiance incident on the surface from direct transmission through the atmosphere.

Due to Rayleigh scattering, the downwelled radiance distributed over the entire sky is polarized. The polarization state of the skylight depends on the incident zenith angle θ_d and azimuth angle φ_d. The variability of the skylight polarization can be seen in the simulations shown in Fig. 3.10, which are explained in detail later. Because the pBRDF is considered, the object surface also shows varying polarization behavior as the light arrives from different directions. It is therefore necessary to integrate the skylight reflected off the surface over the whole hemisphere, and the surface reflected downwelled radiance can be expressed as

$$\mathbf{L}_d = \begin{bmatrix} L_{d0} \\ L_{d1} \\ L_{d2} \end{bmatrix} = \tau \int_\Omega \mathbf{F}(\theta_d, \phi_d)\, \mathbf{L}_{sky}(\theta_d, \phi_d) \cos\theta_d \, d\Omega \quad (3.22)$$

where **L**_d = [L_d0, L_d1, L_d2]^T is the Stokes vector representation of the reflected downwelled radiance, and **L**_sky(θ_d, φ_d) is the downwelled radiance originating from direction (θ_d, φ_d), as illustrated in Fig. 3.9.

Figure 3.9: An illustration of the downwelled radiance integration over the hemisphere.
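In practice the hemispheric integration in Eq. (3.22) is evaluated as a discrete sum over sky directions. A Riemann-sum sketch follows; the pBRDF and sky radiance are supplied as hypothetical callables, and the grid resolution is an arbitrary choice.

```python
import numpy as np

def reflected_downwelled(tau, pbrdf_fn, sky_fn, n_theta=18, n_phi=36):
    """Eq. (3.22): discrete hemispheric integration of sky radiance
    reflected by the surface. pbrdf_fn(theta_d, phi_d) -> 3x3 Mueller
    matrix; sky_fn(theta_d, phi_d) -> Stokes 3-vector."""
    d_theta = (np.pi / 2) / n_theta
    d_phi = (2 * np.pi) / n_phi
    L_d = np.zeros(3)
    for t in (np.arange(n_theta) + 0.5) * d_theta:      # midpoint zeniths
        for p in (np.arange(n_phi) + 0.5) * d_phi:      # midpoint azimuths
            d_omega = np.sin(t) * d_theta * d_phi       # solid-angle element
            L_d += pbrdf_fn(t, p) @ sky_fn(t, p) * np.cos(t) * d_omega
    return tau * L_d
```

As a sanity check, a Lambertian surface with f_00 = 1/π under a unit unpolarized sky returns an unpolarized reflected radiance equal to τ.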

The upwelled radiance is also polarized, and is expressed as

$$\mathbf{L}_u = \begin{bmatrix} L_{u0} \\ L_{u1} \\ L_{u2} \end{bmatrix} \quad (3.23)$$

The solar irradiance and the magnitudes of the downwelled and upwelled radiance (the L_0 component) are often estimated using an atmospheric scattering code such as MODerate resolution atmospheric TRANsmission (MODTRAN) [87]. A polarized version of MODTRAN (MODTRAN-P) has been developed to include the polarization influence due to Rayleigh scattering, and is used to estimate the Stokes vectors of atmospheric radiance [88]. An alternative approach to estimating the Rayleigh scattering polarization is the use of the Coulson table [89,90]. Comparisons with the measurements by Coulson can be found in [2,91,92]. Other comparisons between MODTRAN-P simulations and direct sky polarization measurements can be found in [93]. The all-sky polarization can also be captured using a digital device composed of a fisheye lens, liquid-crystal variable retarders, and a CCD camera [94,95].

Table 3.3: MODTRAN-P input parameter settings for the simulation in Fig. 3.10.

| Parameter | Setting |
| --- | --- |
| Atmospheric model | 1976 US Standard |
| Haze parameter | Rural Extinction |
| Surface meteorological range | 23 km |
| Scattering type | Single |
| Solar zenith angle | 46° |
| Day of the year | 170 |
| Center wavelength | 450 nm |

## 3.1.4 Simulations of Atmospheric Polarization

As mentioned in the last section, the atmospheric radiance is polarized. In this section we present simulations of the downwelled radiance generated by MODTRAN-P. In Fig. 3.10 we present a simulation of the Stokes vector and DoLP of downwelled radiance obtained by iteratively running MODTRAN-P 370 times. The zenith angle is varied from 0° to 90° in 10° steps, and the azimuth angle is varied from 0° to 360°, also in 10° steps. The MODTRAN-P input parameter settings are listed in Table 3.3. It is seen that the polarization of skylight varies over the hemisphere.

The polarization state of the downwelled radiance is also dependent on the solar incident angle. In Fig. 3.12 we present a group of MODTRAN-P predicted DoLP maps of skylight with the solar zenith angle fixed at different values. The selection of the solar zenith angles is illustrated in Fig. 3.11. The stronger polarization of skylight appears roughly 90° away from the sun, and the DoLP values fall off from this maximum.

Figure 3.10: MODTRAN-P predicted Stokes vector and DoLP of downwelled radiance. (a) S_0, (b) S_1, (c) S_2, and (d) DoLP. The radial coordinate corresponds to the observation zenith angle, and the angular coordinate corresponds to the observation azimuth angle.

Figure 3.11: The solar zenith angle settings for the simulations in Fig. 3.12: (a) θ_i = 20°, (b) θ_i = 40°, (c) θ_i = 60°, and (d) θ_i = 80°.

## 3.1.5 Statistics of Radiance at Sensor Aperture

Based on the pBRDF model and the solar and atmosphere models introduced in Sections 3.1.1 and 3.1.3, the statistics of the radiance at the sensor aperture can be generated. The radiance reaching the sensor aperture is the combination of the solar reflected radiance, the downwelled radiance, and the upwelled radiance:

$$\mathbf{L}_S = \mathbf{L}_r + \mathbf{L}_d + \mathbf{L}_u \quad (3.24)$$
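Eq. (3.24) is a component-wise sum of Stokes vectors. The values below are invented purely for illustration; the DoLP of the total is then computed as in Eq. (3.20) applied to the summed vector.

```python
import numpy as np

# Illustrative (made-up) Stokes vectors for the three radiance paths.
L_r = np.array([5.0, 0.8, 0.2])   # direct solar reflected, Eq. (3.21)
L_d = np.array([1.5, 0.1, 0.05])  # reflected downwelled, Eq. (3.22)
L_u = np.array([0.9, 0.05, 0.0])  # upwelled (path) radiance, Eq. (3.23)

# Eq. (3.24): total sensor-reaching Stokes radiance.
L_S = L_r + L_d + L_u
dolp = np.hypot(L_S[1], L_S[2]) / L_S[0]
```

Note that because the upwelled path adds mostly unpolarized intensity, it tends to dilute the DoLP of the surface-leaving signal.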

Figure 3.12: MODTRAN-P predicted DoLP of downwelled radiance with the solar zenith angle set at (a) 20°, (b) 40°, (c) 60°, and (d) 80°. The radial coordinate corresponds to the observation zenith angle, and the angular coordinate corresponds to the observation azimuth angle. An illustration of the parameter settings is shown in Fig. 3.11.

convolved with the radiance in the scene model. The spectral response function will be revisited in the sensor model. In this dissertation, we are particularly interested in the polarization image captured in gray scale (a panchromatic image) over visible wavelengths. The radiance received by the sensor is therefore an intensity that is no longer specified as a function of wavelength.

The radiance is represented by its first and second order statistics. The expected value or mean of a discrete random variable *X* is defined as [96]

$$E\{X\} = \bar{X} = \sum_i p_i x_i \quad (3.25)$$

where p_i is the probability that *X* takes the value x_i.

The variance of *X* is defined as

$$\mathrm{Var}\{X\} = \sigma^2 = E\{(X - \bar{X})^2\} \quad (3.26)$$

The covariance of two random variables *X* and *Y* is defined as

$$\mathrm{Cov}\{X, Y\} = E\{(X - \bar{X})(Y - \bar{Y})\} \quad (3.27)$$

If *X* and *Y* are vector valued random variables, their covariance is expressed as

$$\mathrm{Cov}\{X, Y\} = E\{(X - \bar{X})(Y - \bar{Y})^T\} \quad (3.28)$$
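In code, the sample analogues of Eqs. (3.25)-(3.28) for Stokes-vector-valued samples are one-liners with NumPy. The data below are synthetic, with arbitrary location and scale, just to exercise the estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic samples of a Stokes-vector random variable (rows = samples),
# drawn around an arbitrary mean with arbitrary spread.
samples = rng.normal(loc=[7.4, 0.95, 0.25], scale=0.1, size=(10000, 3))

mean = samples.mean(axis=0)          # sample analogue of Eq. (3.25)
cov = np.cov(samples, rowvar=False)  # sample analogue of Eq. (3.28)
```

With `rowvar=False`, `np.cov` treats each column as a variable, producing the 3×3 Stokes covariance matrix used throughout the rest of this section.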

The mean of the radiance is also represented as a Stokes vector:

$$\bar{\mathbf{L}}_S = \begin{bmatrix} \bar{L}_{S0} \\ \bar{L}_{S1} \\ \bar{L}_{S2} \end{bmatrix} \quad (3.29)$$

The variation associated with **L**_S is described by a covariance matrix Σ_{L_S}. The variation of the sensor reaching radiance values may come from different sources, as shown in Fig. 3.13. The scene variation can be modeled as a combination of viewing

geometry variation and texture variation, which are introduced in detail below.

Figure 3.13: Sources of scene variation. The figure decomposes the scene variation into a scene geometry variation component and a texture variation component.

The scene variation may come from the scene geometry effect, which is illustrated in Fig. 3.14. If the scene is viewed by a sensor with a large field of view, which is very common for airborne sensors, it is possible for the surface to exhibit various polarization properties, because both the pBRDF and the atmospheric polarization depend on the sun-surface-sensor geometry.

To compute the variation of the sensor reaching radiance resulting from the sensor viewing geometry, a discrete approach is employed, as illustrated in Fig. 3.15. The focal plane of the sensor, or the field of view for scanning systems, is first projected on the ground. In this research we only consider a sensor with a 2-D array. The whole focal plane, or a region of interest (ROI) that covers the object of interest, is then divided into different tiles. Each tile is associated with a different sensor viewing geometry. The scene geometry of each tile is fed into the scene model to calculate the Stokes vector radiance reaching the sensor's aperture for that tile. Assuming the focal plane or the ROI is equally divided into *K* different tiles within a viewing angle range of (Δθ_x, Δθ_y) with its center pointed at *O* as shown in Fig. 3.15, then the mean of the *i*th component of the sensor reaching radiance Stokes vector **L**_S can be found as

$$\bar{L}_{Si} = \frac{1}{K}\sum_{g=1}^{K} L_{Si}(\theta_{i,g}, \theta_{r,g}, \phi_g) \quad (3.30)$$

Figure 3.14: Radiance variation due to sensor viewing geometry.

where θ_{i,g} is the solar zenith angle, θ_{r,g} is the sensor's zenith angle, and φ_g is the relative azimuth angle, all computed at the *g*th tile of the focal plane. The parameter *K* can be selected based on the number of pixels of the focal plane, or the number of pixels in the ROI of a real detector array, but a larger *K* value increases the computation time. In this research we choose a smaller *K* value to speed up the simulation. With the help of parallel computing, choosing a *K* value that matches the physical size of a real detector array would make the estimation of the scene geometry variation more accurate, but this is not considered here. The pBRDF, solar illumination, and atmospheric effects are computed for each geometry (θ_{i,g}, θ_{r,g}, φ_g), and the sensor reaching radiance L_{Si}(θ_{i,g}, θ_{r,g}, φ_g) is then generated accordingly using Eq. (3.24). A derivation of computing θ_{i,g}, θ_{r,g}, and φ_g is presented in Appendix A.

A covariance matrix due to this viewing geometry can be found accordingly as

$$\Sigma_g = E\{(\mathbf{L}_S - \bar{\mathbf{L}}_S)(\mathbf{L}_S - \bar{\mathbf{L}}_S)^T\} \quad (3.31)$$

Figure 3.15: Computation of sensor viewing geometry related scene variation.
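The tile mean and the geometry covariance matrix Σ_g described above can be sketched as below. Both `geometry_statistics` and the `stokes_fn` callable are hypothetical names; the full scene-model evaluation at each tile geometry is abstracted into the callable.

```python
import numpy as np

def geometry_statistics(stokes_fn, geometries):
    """Mean and covariance of the sensor-reaching Stokes radiance over K
    focal-plane tiles. stokes_fn maps (theta_i, theta_r, phi) to a Stokes
    3-vector; geometries is a list of K such angle tuples (radians)."""
    L = np.array([stokes_fn(*g) for g in geometries])  # K x 3 samples
    L_mean = L.mean(axis=0)                            # tile-averaged mean
    diff = L - L_mean
    sigma_g = diff.T @ diff / len(geometries)          # geometry covariance
    return L_mean, sigma_g
```

Because the covariance is formed from only *K* geometry samples, a coarse tiling underestimates the true within-scene variation, which is the accuracy trade-off noted above.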