Table II-1: Results of the restrictive and non-restrictive classifications. The sizes of the four classes (vegetation, dry vegetation, dark roof, and street) enable fitting empirical correction models to the brightness gradients of the respective surface types. Specular reflectors, water, and cast shadows are masked prior to classification.

Table II-2: Standard deviations of classes and unclassified pixels after restrictive classification for all view-angles (SD all) and mean standard deviation of 4° view-angle intervals (SD angles). Values are averaged over all bands.

Table III-1: Distribution of training pixels by classes.

Table III-2: Reference pixels of the five land cover classes as distributed over the urban structure types.

Table III-4: Producer's and user's accuracies [%] of the classes vegetation, built-up, impervious, and pervious, and the overall accuracy by urban structure types in the pixel-based approach. Values for n < 20 are not shown.

Table III-5: Accuracies of the segment-based classifications and the multi-level approach by urban structure types. The highest accuracy for each region is shown in bold.

Table IV-1: Comparison of spatial properties and physical file size of the HyMap image before and after geocoding. The physical file size relates to 114 spectral bands in 16-bit format.

Table V-1: Reference pixels of five land cover classes as distributed in urban structure types.

Table V-2: Surface categories for detailed assessment of the land cover classification with corresponding description and area for nine field survey areas. The class to which the surfaces were assigned in the training data for classification is indicated.

Table V-4: Distribution of land cover for stratified areas of building outlines and street network.

Figures

Figure I-1: Image data from the Museumsinsel in Berlin-Mitte and spectral curves for six surface materials. The colored circles indicate the positions of the sample spectra. The Quickbird data (bottom) have a slightly higher spatial resolution and show more detail. The spectral resolution and wavelength coverage of the HyMap data (top) go far beyond those of Quickbird. Note: for comparison, the Quickbird spectra were resampled based on the HyMap spectra, since different acquisition dates, illumination conditions, and radiometric preprocessing do not allow a direct comparison.

Figure I-3: Workflow for the use of parametric classifiers with hyperspectral data. Traditional classifiers that assume certain class distributions and rely on statistical parameters require the hyperspectral feature space to be modified and reduced (modified from Kuo and Landgrebe, 2004).

Figure II-1: Illumination and viewing geometry of the corrected image and the reference image.

Figure II-2: Class-wise and weighted class-wise correction of brightness gradients in individual bands of HyMap data. Results from a SAM with restrictive angular thresholds are used to model the brightness gradients and to generate the compensation layers for the classified surface types. Rule images are then used to assign these compensation layers to individual pixels in a discrete or weighted manner. The numbers in brackets refer to the corresponding equations in the text.

Figure II-3: Histogram of a rule image from the class vegetation during SAM classification. The vertical lines indicate angular thresholds of the restrictive (left) and non-restrictive (right) classifications. The grey area shows the transition zone as used to transform the rule image for the weighted class-wise correction.

Figure II-4: Brightness gradients and empirical models of four spectral classes. Gradients are illustrated by average brightness of 4° view-angle intervals for three spectral bands at 661.6 (diamonds), 828.5 (triangles) and 1647.8 nm (squares); fitted models are displayed as solid lines. The sun incident angle θi is 34°.

Figure II-5: Comparison of the brightness gradients before and after multiplicative class-wise correction of pixels that were not classified during the restrictive classification in spectral bands at 661.6 (diamonds/thick solid), 828.5 (triangles/solid) and 1647.8 nm (squares/dashed). The sun incident angle θi is 34°.

Figure II-6: Subsets of the corrected image before and after the class-wise correction (R = 828.5 nm; G = 1647.8 nm; B = 661.6 nm). In the uncorrected data (a), the bright surfaces to the right (backscatter direction) lead to obvious gradients over the entire FOV. These gradients do not exist after the multiplicative class-wise correction (b); the performance of the other approaches appears similar at this scale. (c) illustrates the classification of the entire image; (d) shows the restrictive SAM classification (including previously masked areas) that was used to fit the empirical models. The advantages of the weighted class-wise approach (e) over the multiplicative (f) and additive (g) class-wise approaches are obvious in transition zones with mixed pixels; the original subset (h) is shown for comparison. The full image is displayed on the left. Note that north is rotated in all images.

Figure II-7: Spectra from six selected surfaces at large view-angles in HyMap data before and after correction with the multiplicative global and class-wise approaches. Spectra from the same surfaces in the nadir area of the reference image are shown for comparison.

Figure III-1: Flowchart of the pixel-based (left), segment-based (center), and multi-level (right) approaches. The SVMs for both the pixel- and segment-based approaches were trained on pixel data. For details on SVM classification see Section 2.2.

Figure III-3: Subsets of classified data at different levels. The pixel level, segment sizes of 3.4, 8.5, and 13.1, and the multi-level classification are displayed (top to bottom).

Figure IV-1: Three different workflows for mapping land cover from hyperspectral data. In the pixel-based alternative workflow (top), geocoding constitutes the last processing step, and the increase in physical file size is moved to the end of the workflow. In the traditional workflow (bottom), the SVM classification is performed on the large geocoded data set. The segment-compressed workflow (middle) further decreases the amount of data by separating spatial and spectral information and performing geocoding and SVM classification independently.

Figure IV-4: Number of interpolated pixels per land cover class for the workflows with nearest neighbor resampling and with bilinear interpolation of gaps in the mapping array.

Figure IV-5: Subsets of the geocoded land cover maps from the traditional workflow (top), the alternative workflow (middle), and the segment-compressed workflow (bottom). The impact of image segmentation on land cover classification at different levels of aggregation before geocoding is discussed in Chapter III.

Figure IV-6: Producer's accuracies for five land cover classes based on reference data from the field survey for classification results from the three different processing workflows.

Figure V-1: Reflectance spectra from the airborne Hyperspectral Mapper (HyMap) for different surfaces. Gaps are due to atmospheric absorption.

Figure V-2: Image acquisition by the large FOV airborne line scanner HyMap in urban areas. Façades appear in the image at large view-angles and are differently illuminated.

Figure V-3: Study area and municipal boundary of Berlin. The outlines of the study area are determined by the extent of the airborne image data set. Image data is shown after preprocessing in false-color composite (R = 829 nm; G = 1648 nm; B = 662 nm).

Figure V-5: Hyperspectral image analysis steps, reference products, and the data sets they are based on. The different reference products were derived from orthophotos, digital cadastral information, and field surveys. They are used to assess different steps of the image analysis (dotted lines). Italic numbers indicate which research question is addressed by the corresponding assessment.

Figure V-6: Reference maps derived from ground mapping, shown for one of nine subsets. The 12 land cover related surface categories were used for a spatially continuous assessment of the land cover map (left). The four categories of surface types related to imperviousness were used to assess the sensor view in comparison to the true ground cover (right).

Figure V-8: Building positions from land cover mapping compared to polygons from the cadastre. Roof-tops at large view-angles north of the nadir region are shifted northwards (upper left); south of the nadir region they are shifted southwards and façades are not illuminated (bottom). Buildings near nadir exhibit no shift (upper right).