A Generic Framework for Combining Multiple Segmentations in Geographic Object-Based Image Analysis

Abstract: The Geographic Object-Based Image Analysis (GEOBIA) paradigm relies strongly on the segmentation concept, i.e., partitioning an image into regions or objects that are then further analyzed. Segmentation is a critical step, for which a wide range of methods, parameters and input data are available. To reduce the sensitivity of the GEOBIA process to the segmentation step, we consider here that a set of segmentation maps can be derived from remote sensing data. Inspired by the ensemble paradigm that combines multiple weak classifiers to build a strong one, we propose a novel framework for combining multiple segmentation maps. The combination leads to a fine-grained partition of segments (super-pixels) built by intersecting the individual input partitions, and each segment is assigned a segmentation confidence score that relates directly to the local consensus between the different segmentation maps. Furthermore, each input segmentation can be assigned a local or global quality score based on expert assessment or automatic analysis. These scores are then taken into account when computing the confidence map that results from the combination, so that the process is less affected by incorrect segmentation inputs, either at the local scale of a region or at the global scale of a map. In contrast to related work, the proposed framework is fully generic and does not rely on specific input data to drive the combination process. We assess its relevance through experiments conducted on the ISPRS 2D Semantic Labeling benchmark. Results show that the confidence map is a valuable by-product of combining segmentations, and that fusion at the object level is competitive w.r.t. fusion at the pixel or decision level. (Correspondence: sebastien.lefevre@irisa.fr. Received 30 July 2018; accepted 27 January 2019; published 30 January 2019.)
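
The intersection-based combination described in the abstract can be sketched in a few lines of NumPy. This is only an illustration under assumed conventions (label maps given as same-shape integer arrays; the confidence score approximated by the weighted fraction of input maps that mark a boundary at each pixel), not the paper's exact formulation:

```python
import numpy as np

def intersect_segmentations(label_maps):
    """Fine-grained partition obtained by intersecting several segmentation
    maps (each an HxW array of integer region labels): pixels sharing the
    same tuple of labels across all maps form one combined super-pixel."""
    stacked = np.stack(label_maps, axis=-1)          # H x W x K
    flat = stacked.reshape(-1, stacked.shape[-1])    # (H*W) x K label tuples
    _, combined = np.unique(flat, axis=0, return_inverse=True)
    return combined.reshape(label_maps[0].shape)

def boundary_consensus(label_maps, weights=None):
    """Per-pixel consensus score: (weighted) fraction of input maps that
    place a region boundary at each pixel.  Illustrative scoring only."""
    if weights is None:
        weights = [1.0] * len(label_maps)
    votes = np.zeros(label_maps[0].shape, dtype=float)
    for w, lab in zip(weights, label_maps):
        b = np.zeros_like(lab, dtype=bool)
        b[:, :-1] |= lab[:, :-1] != lab[:, 1:]       # boundary with right neighbour
        b[:-1, :] |= lab[:-1, :] != lab[1:, :]       # boundary with lower neighbour
        votes += w * b
    return votes / sum(weights)
```

The `weights` argument hints at how per-segmentation quality scores could be folded into the consensus, in the spirit of the local or global quality scores mentioned above.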

From pixels to grixels: a unified functional model for geographic object based image analysis

Geographic Object-Based Image Analysis (GEOBIA) aims to better exploit remotely sensed Earth imagery by focusing on building image-objects that resemble real-world objects instead of using raw pixels as the basis for classification. Due to the recentness of the field, concurrent and sometimes competing methods, terminologies, and theoretical approaches are evolving. This risk of babelization has been identified as one of the central threats to GEOBIA, as it could hinder scientific discourse and the development of a generally accepted theoretical framework. This paper contributes to the definition of such an ontology by proposing a general functional model of remote sensing image analysis. The model compartmentalizes the remote sensing process into six stages: (i) sensing the Earth's surface in order to derive pixels, which represent incomplete data about real-world objects; (ii) pre-processing the pixels in order to remove atmospheric, geometric, and radiometric distortions; (iii) grouping the pre-processed pixels (prixels) to produce image-objects (grouped pixels, or grixels) at one or several scales; (iv) feature analysis to examine and measure relevant spectral, geometric and contextual properties and relationships of grixels in order to produce feature vectors (vexcels) and decision rules for subsequent discrimination; (v) assignation of grixels to pre-defined qualitative or quantitative land cover classes, thus producing pre-objects (preliminary objects); and (vi) post-processing to refine the previous results and output the geographic objects of interest. The grouping stage may be analysed from two different perspectives: (i) discrete segmentation, which produces well-defined image-objects, and (ii) continuous segmentation, which produces image-fields with indeterminate boundaries. The proposed generic model is applied to analyze two specific GEOBIA software implementations. A functional decomposition of discrete segmentation is also discussed and tested. It is concluded that the proposed framework enhances the evaluation and comparison of different GEOBIA approaches and thereby helps to establish a generally accepted ontology.

Segmentation: The Achilles' heel of object-based image analysis?

KEY WORDS: digital cartography, spatial framework, segmentation, land cover. ABSTRACT: Much of the background material on geo-object-based image analysis (GEOBIA) begins with a description of image segmentation and suggests it as the first stage of the process. Although this approach may be reasonable in some cases, the crucial first step should be more generically defined as obtaining a set of objects which represent the features of interest in the image. In the context of land cover mapping these objects will be fields, lakes, patches of woodland, etc. The application of image segmentation might be considered a ‘black art’, due to its dependence on the image data and the limited amount of control available to the user. The resulting segments reflect the spectral structure of the image rather than the presence of true boundary features in the landscape. For instance, two adjacent fields with the same crop could be combined into a single object even though they may be owned by different farmers. Conversely, single fields containing natural and acceptable variability may give multiple objects per field. Segments also retain the inherent area sampling of the original image. This lack of a direct one-to-one relationship between real-world objects and segments has prevented GEOBIA from reaching its full potential. Rarely today is any environmental analysis begun on a blank canvas; rather, it takes place in the context of existing mapping of some form, often in digital format. In regions of the world with high-quality, large-scale cartographic mapping, an obvious question is why this information is not used to control the GEOBIA process. Much of the earlier GEOBIA work was within the raster processing domain, and only recently have fully structured digital vector cartography datasets and the necessary software tools become available. It is proposed that the GEOBIA process be made more generic so as to use the best existing real-world feature datasets as the starting point for the process before segmentation is considered. Such an approach would increase opportunities for integration with other datasets and improve the results of map update initiatives. This approach will be demonstrated in the context of an operational national land cover mapping exercise.

Efficient methodology for object based image analysis

Owing to geometric and radiometric variations during different acquisition times, the same objects tend to have different spectral characteristics between flight lines, resulting in reduced classification accuracies. Yulan Guo et al. [2] worked on an accurate and robust range image registration algorithm for 3D object modeling. The method is based on pairwise coarse registration and multi-view registration algorithms. Pairwise coarse registration is usually accomplished by finding point correspondences through the matching of local features, and a pairwise registration algorithm is then performed to produce a more accurate solution. A multi-view coarse registration algorithm involves two tasks: the first is to recover the overlap information between the input range images, and the second is to calculate the rigid transformations between any two overlapping range images. Multi-view registration algorithms aim to minimize the registration error across all overlapping range images. Thomas Blaschke et al. [3] worked on object-based image analysis and digital terrain analysis for locating landslides, establishing a semi-automated object-based image analysis (OBIA) methodology for this purpose; landslide detection and delineation through Earth observation data are considered a promising application domain. Fatemeh Tabib Mahmoudi et al. [4] worked on object recognition based on context-aware decision-level fusion in multi-view imagery. The method addresses spectral similarities and spatial adjacencies between various kinds of objects, shadow and occluded areas behind high-rise objects, as well as the complex relationships between various object types, and reduces difficulties and ambiguities in object recognition in urban areas. Using a knowledge base containing the contextual information together with the multi-view imagery may improve the object recognition results. Yang Shu et al. [5] worked on object-based unsupervised classification of very high resolution panchromatic satellite images by combining the HDP and IBP on multiple scenes. This method performs classification by capturing the classes from the data without requiring training data; thus, unsupervised classification methods are widely used to extract geographic thematic information from remote sensing images when a training sample is not available.

Object based Classification of Satellite Images by Combining the HDP, IBP and k mean on Multiple Scenes

Abstract: The goal of this system is to analyze remote sensing images and classify objects into land cover or land use classes. The project performs object-based unsupervised classification of remotely sensed very high resolution (VHR) panchromatic and multispectral satellite images using the hierarchical Dirichlet process (HDP), the Indian buffet process (IBP) and the k-means clustering algorithm on multiple scenes. In this framework, a VHR satellite image is first over-segmented into basic processing units and divided into a set of subimages. The hierarchical structure of the model implicitly transmits the spatial information from the original image to the scene layer and provides useful cues for classification via k-means clustering. Clustering, such as the k-means technique, is a popular tool for exploratory data analysis; the k-means algorithm is used to partition the data into the required clusters. After clustering, the color frequencies are determined using the HDP and IBP techniques and are then fed to a support vector machine (SVM), which performs the image classification. After classification, the system displays the spatial information derived from the color frequencies, together with the percentage of each spatial class.
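
As a rough illustration of the later stages of this pipeline (the HDP/IBP scene modelling, which is the core of the method, is not reproduced here), a minimal sketch with hypothetical data shapes might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Toy stand-ins for a VHR multispectral tile (hypothetical shapes and data).
H, W, B, K = 64, 64, 4, 50
image = np.random.rand(H, W, B)
pixels = image.reshape(-1, B)

# Over-segment the image into basic processing units with k-means.
segments = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(pixels)
segment_features = np.array([pixels[segments == s].mean(axis=0) for s in range(K)])

# Hypothetical labelled segment features with known land-cover classes,
# used to train the SVM that performs the final classification step.
train_X, train_y = np.random.rand(200, B), np.random.randint(0, 5, 200)
clf = SVC(kernel="rbf").fit(train_X, train_y)
segment_classes = clf.predict(segment_features)
```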

From pixels to grixels: a unified functional model for geographic-object-based image analysis

Figure 5: Discrete segmentation as implemented in Definiens.

6 CONCLUSIONS. This paper proposes a unified framework for the geographic object-based image analysis process. It is based on recent theoretical developments of geographic representation in GIS and aims to contribute to a better integration between remote sensing and GIS applications. It is able to accommodate different approaches to estimating land surface properties, either the qualitative classification of land cover or the quantitative estimation of bio-geo-physical variables. It uses a bottom-up description of the image analysis process in which structural properties of the image and abstraction are changed by generic functions or stages. The proposed framework may serve to analyze different GEOBIA software implementations and to compare the particular ways they accomplish each stage of the process. More importantly, critical processes of the GEOBIA image analysis approach, such as the segmentation stage, can be extended and generalized to allow the existence of both discrete image-objects and continuous image-fields.

Combining Multiple Resolutions into Hierarchical Representations for kernel-based Image Classification

The geographic object-based image analysis (GEOBIA) framework has gained increasing interest recently, especially in the case of very high resolution remote sensing images (Blaschke et al., 2014). One of the key features of the GEOBIA framework is the hierarchical image representation through a tree structure, where objects-of-interest can be revealed at various scales, and where the topological relationships between objects (e.g., A is part of B, or B consists of A) can be easily modeled. In the classification context, however, most papers in the literature address one scale only, as pointed out in a recent survey paper (Blaschke, 2010). Features extracted from multiple scales are important for improving object-based classification accuracy, as the underlying tree structure models the hierarchical relationship among the objects (Blaschke, 2010). Two important types of topological information across scales can be extracted from the hierarchical representation: context features and object spatial arrangement features.
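
A minimal sketch of such a hierarchical representation, with a helper that stacks a region's features with those of its coarser-scale ancestors (the class and the bookkeeping are assumptions for illustration, not taken from the cited papers):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Region:
    """One node of a hierarchical image representation: a region at a given
    scale, its feature vector, and its finer-scale children."""
    region_id: int
    scale: int
    features: List[float]
    children: List["Region"] = field(default_factory=list)

def multi_scale_features(leaf: Region, parent_of: Dict[int, Region]) -> List[float]:
    """Concatenate a fine region's features with those of its ancestors,
    yielding a multi-scale descriptor in the spirit of the text above."""
    feats, node = list(leaf.features), leaf
    while node.region_id in parent_of:
        node = parent_of[node.region_id]
        feats.extend(node.features)
    return feats
```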

OBJECT BASED IMAGE ANALYSIS FOR AUTOMATED INFORMATION EXTRACTION A SYNTHESIS

Object based image analysis is becoming a widespread methodology; some would argue that it is also a new paradigm in image processing. The authors conclude that the sharp increase in application numbers was correlated with both the advent of high resolution data sets and the availability of an operational software product which followed a very different strategy compared to most software products available on the market, which have been developed over the last twenty years or so. This development was clearly demand driven and software driven. As with the development of GIS, it can be argued that a theory deficit is very likely in such a situation. The 2006 OBIA conference brought together many researchers in this field. It turns out that there is still no comprehensive theoretical framework, but there is more and more work related to a scientific foundation of OBIA. Industry demands repeatable and transferable solutions, and such solutions will only be put in place when they are based on a sound methodology. The challenge is developing a flexible approach for transferring the domain knowledge of a feature extraction model from image to image that is capable of adapting to changing conditions (image resolution, pixel radiometric values, landscape seasonal changes, and the complexity of feature representation).

Multiple object detection of workpieces based on fusion of deep learning and image processing

Building on a convolutional neural network (CNN) pruning strategy [4], an improved filter-pruning strategy is designed for the residual structure of Darknet-53, which is the backbone network of YOLOv3. For multiple consecutive residual identity blocks [5], the kernels of the last layer in each block need to be pruned in the same order. In this strategy, the hyperparameter α is set as the evaluation weight, and the L1-norms of the corresponding kernels are weighted and accumulated according to the evaluation weights in order of hierarchy. The unified pruning order is determined after the integrated result is sorted. We use the pruning strategy to test Darknet-53 (tiny) on the CIFAR-100 dataset [6] and to test YOLOv3 on the workpieces dataset. Finally, based on the detected rectangular bounding boxes, a cropping bias is set according to the type of workpiece to increase the pixel coverage area, and the original image is cropped to extract the contours and appropriate gripping points of the workpieces through traditional image processing.
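
One way to read the unified pruning order described here is the following sketch (an interpretation, not the authors' code): compute the L1-norm of every filter in the last layer of each residual block, weight it by the evaluation weight α according to the block's depth, accumulate across blocks, and sort once so that all blocks are pruned in the same order.

```python
import numpy as np

def unified_pruning_order(last_layer_weights, alpha=0.9):
    """`last_layer_weights`: one array per consecutive residual block, each of
    shape (out_channels, in_channels, kH, kW) for the block's last conv layer;
    all blocks are assumed to share the same number of output channels."""
    scores = np.zeros(last_layer_weights[0].shape[0])
    for depth, w in enumerate(last_layer_weights):
        l1 = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)  # L1-norm per filter
        scores += (alpha ** depth) * l1                      # integrate by hierarchy
    return np.argsort(scores)  # filters with the smallest scores are pruned first

# Example: three residual blocks with 256 output filters each (random weights).
blocks = [np.random.randn(256, 256, 3, 3) for _ in range(3)]
order = unified_pruning_order(blocks, alpha=0.8)
```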

A Multiple Random Feature Extraction Algorithm for Image Object Tracking

In this paper, the proposed tracking algorithm is shown in Figure 1. In order to solve the drift problem caused by occlusion, we follow [4] and apply sub-region classifiers in our algorithm. We reduce the number of sub-region classifiers to speed up the algorithm: only nine sub-region classifiers are used for tracking. In addition, the locations of the sub-region classifiers are evenly distributed in order to avoid their excessive concentration or excessive dispersion. During tracking, each sub-region classifier independently tracks its assigned part of the target. If the target is partially occluded, only the tracking of the occluded part will be affected; as a result, the drift problem due to occlusion can be avoided. In the classifier update phase, each sub-region classifier decides whether to update based on its own classifier score, in order to prevent the target model from being corrupted by occlusions. If the score of the classifier is less than zero, the probability that the region belongs to a non-target object is relatively large; therefore, the target model of that sub-region is not updated, so as to retain the object information.
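
The selective update rule can be summarised in a short sketch; the classifier class below is a stub standing in for whatever scoring model the tracker actually uses:

```python
class SubRegionClassifier:
    """Stub for one of the nine sub-region classifiers."""
    def __init__(self, region):
        self.region = region      # image patch coordinates this classifier tracks
        self.model = None
    def score(self, frame):
        return 0.0                # placeholder: a real tracker returns a confidence here
    def update(self, frame):
        self.model = frame        # placeholder: refresh the target model from the frame

def update_subregion_models(classifiers, frame):
    """Update a sub-region classifier only when its score is non-negative;
    a negative score suggests occlusion, so the old target model is kept."""
    for clf in classifiers:
        if clf.score(frame) >= 0:
            clf.update(frame)
```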

Combining multiple imputation and meta-analysis

Abstract: Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, both within-study and multilevel imputation models can be proposed to impute missing data on covariates. It is not clear how to choose between imputation models, or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider sporadically missing data in a single covariate, and discuss how the results would be applicable to the case of systematically missing data. We find that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then this heterogeneity should be incorporated into the imputation model in order to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, missing data should be imputed and Rubin's rules should be applied at the study level prior to meta-analysis, rather than meta-analysing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. The results are illustrated using data from the Emerging Risk Factors Collaboration.
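
The recommended ordering (Rubin's rules within each study first, inverse-variance weighting across studies second) is easy to express. A minimal sketch with toy numbers, assuming a fixed-effect meta-analysis:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m imputation-specific estimates within one study (Rubin's rules)."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    w_bar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    return q_bar, w_bar + (1 + 1 / m) * b    # total variance

def ivw_meta_analysis(study_estimates, study_variances):
    """Fixed-effect inverse-variance weighted meta-analysis across studies."""
    est, var = np.asarray(study_estimates), np.asarray(study_variances)
    w = 1 / var
    return (w * est).sum() / w.sum(), 1 / w.sum()

# Pool by Rubin's rules within each study first, then meta-analyse the
# study-level results (toy numbers for illustration only).
studies = [([0.30, 0.35, 0.28], [0.010, 0.012, 0.011]),
           ([0.22, 0.25, 0.20], [0.020, 0.018, 0.021])]
pooled_per_study = [rubins_rules(e, v) for e, v in studies]
overall_estimate, overall_variance = ivw_meta_analysis(
    [p for p, _ in pooled_per_study], [v for _, v in pooled_per_study])
```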

Multiple segmentations of Thai sentences for neural machine translation

The following row, +1000, shows the score of the model trained with the character-wise split together with the split using BPE with 1000 merge operations (so there are two instances of each source sentence). The last row of the subtable (+20000) indicates the performance of the model trained with the five datasets concatenated (five instances of each source sentence). In the table we observe that combining datasets with different segmentations is beneficial. In each subtable of Table 2 we see that the score goes up when we add more datasets with different splits. For example, the model using data split character-wise achieves a score of only 20.70 (first subtable). When we concatenate the same dataset split using 1000 merge operations, the score increases to 38.23 (an 85% improvement), and the score keeps increasing as we add sentences with different splits. This effect is observed in all three subtables. The results also show that the lowest scores are achieved on those datasets in which a character-wise split is performed. For example, the performance when using the dataset split with BPE using 1000 operations is 39.88 (BPE 1000 row in Table 2), whereas when used in combination with the character-split set (+1000 row in the first subtable of Table 2) the score is 38.23. Another indication that character-wise splitting hurts performance is the comparison between the subtables of Table 2: if we compare the rows +1000, +5000, +10000 and +20000 of each subtable, the lower scores are seen in the first subtable.
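
A sketch of how the concatenated multi-segmentation training data could be produced, using SentencePiece as a stand-in BPE implementation; the file name `train.th` and the use of vocabulary size as a proxy for the number of merge operations are assumptions, not details from the paper:

```python
import sentencepiece as spm

# Train one BPE model per segmentation granularity (hypothetical corpus file).
for size in (1000, 5000, 10000, 20000):
    spm.SentencePieceTrainer.train(
        input="train.th", model_prefix=f"bpe_{size}",
        vocab_size=size, model_type="bpe")

def multi_segment(sentence, sizes=(1000, 5000, 10000, 20000)):
    """Return one segmentation of `sentence` per BPE model, plus a naive
    character-wise split, mirroring the concatenated datasets above."""
    variants = [" ".join(sentence)]  # character-wise split (simplistic)
    for size in sizes:
        sp = spm.SentencePieceProcessor(model_file=f"bpe_{size}.model")
        variants.append(" ".join(sp.encode(sentence, out_type=str)))
    return variants
```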

A generic framework for ontology-based information retrieval and image retrieval in web data

Though the results show an improvement over existing information retrieval techniques, an independent standard benchmark is required for evaluating semantic search systems, and the lack of such exclusive and specific benchmarks made it difficult for us to evaluate our system. Currently, the web-based document retrieval system is in its most primitive state. Later, we will try to add semantic links within web pages to other web resources, along with the integration of information using the ontologies of the target web resources. We developed an algorithm that determines the most apt triplets to be displayed with each web link, and a service that determines the mind-set and searching patterns of users by developing ontologies that enhance their search experience. The use of ontologies to connect web resources can also be used to validate the classification groups that an entity belongs to using web resources. Also, the keyword proximity score ranking technique is just one of multiple techniques that can be employed while ranking images. In future work, we will try to correlate O-A-V triplets extracted from text with features extracted from image processing to further improve image retrieval.

Tissue-like P system for region-based and edge-based image segmentations

Later, Christinal et al. (2010a) computed algebraic-topological information for two-dimensional (2D) and three-dimensional (3D) images in a general and parallel manner with P systems, presenting two areas implemented by tissue-like P systems using communication rules. First, edge-based segmentation of 2D images was performed employing 4-adjacency, while 26-adjacency was used for 3D digital images; their results show that a constant number of computation steps (9 steps) suffices to segment an image. Second, homology: this work paved the way for another area of study, in which efficiency and power are used in topological processes for the first time, by presenting new rules to obtain the homology groups of 2D digital images in logarithmic time with respect to the input data.

Combining Texture Synthesis & Inpainting for unwanted object removal and image completion

The algorithm presented here is based on research along similar lines. M. Bertalmio, L. Vese and G. Sapiro [8] presented a work that decomposes the original image into two components, one of which is processed by inpainting and the other by texture synthesis; the output image is the sum of the two processed components. This approach is limited to the removal of small image gaps, because the diffusion process continues to blur the filled regions, and the approach also avoids automatic switching between a “pure texture” and a “pure structure” mode.

Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

Another active obstacle presented by today's remotely sensed images is the volume of information produced by modern sensors with high spatial and temporal resolution. For instance, over this decade, it is projected that 353 Earth observation satellites from 41 countries will be launched. Timely production of geo-spatial information from these large volumes is a challenge, because in traditional methods the underlying representation and information processing are still primarily pixel-based, which implies that as the number of pixels increases, so does the computational complexity. To overcome this bottleneck created by pixel-based representation, this thesis proposes a dart-based discrete topological representation (DBTR), which differs from pixel-based methods in its use of a reduced, boundary-based representation. Intuitively, the efficiency gains arise from the observation that it is cheaper to represent a region by its boundary (darts) than by its area (pixels). We found that our implementation of DBTR not only improved computational efficiency, but also enhanced our ability to encode and extract spatial information.

Analysis of fuzzy clustering and a generic fuzzy rule-based image segmentation technique

1. INTRODUCTION. Classical, so-called “crisp” image segmentation techniques, while effective when an image contains well-defined structures such as edges and regular shapes, do not perform so well in the presence of ill-defined data; in such circumstances, processing images that possess ambiguities produces fuzzy regions. Fuzzy image segmentation techniques can cope well with imprecise data and can be broadly classified into five general classes: fuzzy clustering, fuzzy rule-based, fuzzy geometry, fuzzy thresholding, and fuzzy integral-based image segmentation techniques [1]. Of these, the foremost are fuzzy clustering and fuzzy rule-based segmentation. The most popular and extensively used fuzzy clustering techniques are the fuzzy c-means (FCM) [2-3] and possibilistic c-means (PCM) algorithms [4], although both techniques are unable to integrate either human expert knowledge or spatial relation information. Fuzzy rule-based image segmentation techniques, in contrast, are able to integrate human expert knowledge. They are also less computationally expensive than fuzzy clustering and able to interpret linguistic as well as numeric variables [5]. They are, however, very much application dependent, and it is difficult to define fuzzy rules that cover all pixels. In most approaches, the structure of the membership functions is predefined and their parameters are manually or automatically determined [5-9]. In addition to the advantages above, any fuzzy rule-based image segmentation technique should ideally be both application- and image-independent.
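
For reference, a compact sketch of the standard fuzzy c-means iteration mentioned above (alternating centre and membership updates with fuzzifier m; this is the textbook algorithm, not code from the cited papers):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: X is (n_samples, n_features); returns the
    cluster centres and the fuzzy membership matrix of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # u_ik proportional to d_ik^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)
    return centres, u
```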

Object-based image analysis for forest-type mapping in New Hampshire

An alternative to pixel-based classification is object-based image analysis (OBIA), a type of image processing and classification that has provided better results when using high-resolution imagery.


Leveraging machine learning to extend Ontology-Driven Geographic Object-Based Image Analysis (O-GEOBIA): a case study in forest-type mapping

with real-world features into the GEOBIA process [1,3,4] in a manner that is formal, objective and transferable. One approach to this challenge is to employ an ontology to formally capture and represent the expert knowledge [4,5], referred to here as Ontology-driven GEOBIA (O-GEOBIA). Ontology helps to reduce the semantic gap between high-level knowledge and low-level information: it integrates qualitative (e.g., “forest canopy is dense”) and subjective (descriptions provided by a subject, i.e., experts) high-level knowledge with the quantitative (e.g., spectral band values represented by digital numbers) and objective (information referring to the object, i.e., the segmented image object) low-level information [4]. A consequential challenge is to manage decisions regarding the extent to which a prescribed ontology—both the relationships defined by that ontology and any quantitative attributes associated with those relationships—can be treated as transferable or generalisable across different study sites or data sources [6]. In turn, the question that arises is how best to generate the classification rules that cannot be derived from domain knowledge alone but may be discoverable in the data on a site-specific basis.

Data Warehouse Requirements Analysis Framework: Business-Object Based Approach

d) Identification of Materialized Views: For user-oriented DW requirements engineering, it is also important to analyze how the user will efficiently interact with the DW system to perform the necessary analysis activities. Materialized views are the central issue for the usability of the DW system. DW data are organized multi-dimensionally to support OLAP, and a DW can be seen as a set of materialized views defined over the source relations. Those views are frequently evaluated by user queries, and the materialized views need to be updated when the source relations change. During DW analysis and design, the initial materialized views need to be selected to make the user's interactions simple and efficient in terms of accomplishing the user's analysis objectives. In the proposed requirements engineering framework, the domain boundary has been drawn through identification of Fact BOs, Dimension BOs, Actor BOs, and the interactions between them in the first phase, which have been further refined in this phase. The list of analysis activities that may be performed by Actor BOs based on their roles, and the Event BOs, have been identified in the same phase. Moreover, the feature tree concept explores the constraint requirements of interest to the domain. Based on those identifications, the different materialized views can be identified in this step: each materialized view is represented semantically in the context of some Fact BO and in terms of the actors along with their roles, the analysis activities that may be performed, the events that may occur, the related Dimension BOs involved, and the related constraints. Related to one Fact BO there may exist several materialized views, to minimize view-level dependency and to meet the analytical evaluation requirements of the stakeholders. Semantically, a materialized view will be represented using a View Template. The Interface Template will contain the information on the view name, identification, analysis objectives, target Fact BO, Actor BOs, roles, related activities, related Dimension BOs (to realize the source relations), related Event BOs and related constraints. Any view template is reusable and modifiable through an iterative process to accommodate updates to the materialized view.