2.3.2 DATA PROCESSING AND ANALYSIS
Raw lidar data are stored as point clouds: simple vector points with XYZ locations (plus attributes such as intensity and return number), typically in LAS format (ASPRS 2013; LAZ files are compressed versions of LAS files), although other formats exist (e.g. XYZ, PLY, OBJ, PCD). Though a relatively simple file type, lidar point clouds are large datasets because they can contain millions of 3D points. Even for small‐area collections, lidar data are broken up into tiles, much like orthoimagery, to reduce the computational burden of handling the data. Not surprisingly, visualizing and processing these datasets can be difficult, though Geographical Information System (GIS) and remote sensing software packages continue to improve at integrating them with other geospatial information. Due to these issues, conversion from point cloud to raster format persists as a way to simplify the data into a more usable format (also, most analysis tools process raster data rather than raw point clouds).
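The point-record layout and the tiling step described above can be sketched with NumPy; the field names and the 1 km tile size below are illustrative assumptions, not the actual LAS binary specification:

```python
import numpy as np

# Illustrative point-record layout (a stand-in for LAS point records).
point_dtype = np.dtype([
    ("x", "f8"), ("y", "f8"), ("z", "f8"),
    ("intensity", "u2"), ("return_number", "u1"),
])

# Synthetic point cloud over a 2 km x 2 km extent.
rng = np.random.default_rng(0)
n = 10_000
points = np.zeros(n, dtype=point_dtype)
points["x"] = rng.uniform(0, 2000, n)   # metres (easting)
points["y"] = rng.uniform(0, 2000, n)   # metres (northing)
points["z"] = rng.uniform(150, 300, n)  # elevation

def tile_points(points, tile_size=1000.0):
    """Split a point cloud into square tiles keyed by (column, row) index."""
    cols = (points["x"] // tile_size).astype(int)
    rows = (points["y"] // tile_size).astype(int)
    return {key: points[(cols == key[0]) & (rows == key[1])]
            for key in set(zip(cols.tolist(), rows.tolist()))}

tiles = tile_points(points)  # 2 km x 2 km extent at 1 km tiles -> 4 tiles
```

Each tile can then be processed independently, which is what makes tiling effective against the computational burden of large collections.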
Point clouds, without any processing, provide powerful visualizations that enable geospatial professionals and nonexperts alike to view and better understand the 3D layout of the built‐up environment in an urban space. For example, Figure 2.2 shows both the raw point cloud data for downtown Austin, Texas in 2015 as well as a simplified version consisting of extruded buildings derived from 2006 lidar data. These visualizations and underlying datasets enable analyses related to urban planning such as solar radiation/interaction (Yu et al. 2009), potential for solar panel placement on building rooftops (Lukac et al. 2014), and more. Other researchers use the point cloud data directly to algorithmically detect and characterize specific built‐up shapes (Dorninger and Pfeifer 2008; Golovinskiy et al. 2009; Babahajiani et al. 2015). This type of analysis remains difficult in terms of algorithm development and computational needs. Further, for change detection, point cloud comparison analyses are becoming more commonly supported by open‐source software such as CloudCompare. Even in these cases, though, point cloud data are often transformed to 3D mesh models (like triangular irregular networks, or TINs, in GIS) for analysis. Related to this and other approaches, Figure 2.3 summarizes the common lidar data workflows, data products, and eventual analyses conducted within urban remote sensing. Notice that when analyzing the point cloud directly, point cloud filtering is often still required.
FIGURE 2.2 3D lidar‐derived visualizations of downtown Austin, Texas looking northwest using raw point cloud data from 2015 (a), and extruded building footprints from 2006 (b).
Point cloud filtering is a process whereby all individual points within the point cloud are assigned to a class to better differentiate the point data (Shan and Toth 2018). The basic approach assigns points to either ground or nonground classes using a filtering algorithm based on trends in point heights (Axelsson 1999). Return number for individual points can also be utilized to aid this filtering effort. LAS file specifications allow points to be assigned to many other classes (e.g. high vegetation, building, low point noise, etc.) through more nuanced algorithms and/or manual efforts. Once filtered and assigned class designations, points can be analyzed directly as discussed above (upper‐right of Figure 2.3) or further processed to create raster Digital Elevation Models (DEMs).
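As a rough illustration of height‐based ground/nonground filtering, the sketch below labels points near the lowest elevation in each grid cell as ground. This is a much cruder stand‐in for operational algorithms such as Axelsson's progressive densification, and the cell size and tolerance are invented for the example:

```python
import numpy as np

def classify_ground(xyz, cell=20.0, tol=0.5):
    """Crude ground filter: within each grid cell, points within `tol`
    metres of the cell's lowest point are labelled ground (True); all
    others nonground. Cells must be large enough to contain true ground."""
    cols = (xyz[:, 0] // cell).astype(int)
    rows = (xyz[:, 1] // cell).astype(int)
    ground = np.zeros(len(xyz), dtype=bool)
    for key in set(zip(cols.tolist(), rows.tolist())):
        mask = (cols == key[0]) & (rows == key[1])
        zmin = xyz[mask, 2].min()
        ground[mask] = xyz[mask, 2] <= zmin + tol
    return ground

# Synthetic scene: flat terrain at 100 m with one 10 m tall "building".
rng = np.random.default_rng(1)
n = 2000
xyz = np.column_stack([rng.uniform(0, 50, n),
                       rng.uniform(0, 50, n),
                       np.full(n, 100.0)])
bldg = ((xyz[:, 0] >= 10) & (xyz[:, 0] < 20) &
        (xyz[:, 1] >= 10) & (xyz[:, 1] < 20))
xyz[bldg, 2] += 10.0

ground = classify_ground(xyz)  # building points fall in the nonground class
```

Note the stated caveat in the docstring: a cell entirely covered by a roof has no true ground returns, which is exactly the kind of case that motivates the more nuanced algorithms and manual efforts mentioned above.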
FIGURE 2.3 Lidar data processing workflows, data products, and analysis approaches for urban remote sensing. For the purposes of this figure, DSM = Digital Surface Model.
Lidar‐derived raster surfaces, referred to generally as DEMs, provide a more approachable way in which to utilize lidar data. Note that DEMs can also be created from other elevation data and are not lidar‐specific datasets. Specific types of DEMs include the following:
Digital Terrain Model (DTM): a raster representing the bare Earth surface. Absolute elevation values from mean sea level are stored in pixels.
Digital Surface Model (DSM): a raster representing the bare Earth surface as well as all surface features such as buildings, tree canopies, etc. Absolute elevation values from mean sea level are stored in pixels. For this chapter, we elect not to use the DSM acronym for this dataset because it conflicts with another acronym we use in upcoming sections.
Digital Height Model (DHM): a raster surface containing all features like the DSM but with relative elevation values from ground‐level stored in pixels. DHMs are also referred to as Normalized Digital Surface Models (nDSMs).
DTMs are generated using heights of ground‐classified points (which may or may not require spatial interpolation to fill data gaps depending on point density and types of features within the area) while DSMs utilize heights of all points (ground and nonground) to create the raster surface. The DHM is calculated by subtracting the DTM from the DSM (see Eq. (2.1)):

DHM = DSM − DTM (2.1)
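A minimal sketch of this rasterization and the Eq. (2.1) subtraction, assuming an already‐classified synthetic point cloud and gap‐filling the DTM under the building with the mean ground elevation (a crude stand‐in for proper spatial interpolation):

```python
import numpy as np

def build_surfaces(points, ground, extent=20, res=1.0):
    """DTM from the lowest ground point per cell, DSM from the highest
    point (all returns) per cell, DHM = DSM - DTM (Eq. (2.1))."""
    ncell = int(extent / res)
    dtm = np.full((ncell, ncell), np.nan)
    dsm = np.full((ncell, ncell), np.nan)
    rows = np.clip((points[:, 1] / res).astype(int), 0, ncell - 1)
    cols = np.clip((points[:, 0] / res).astype(int), 0, ncell - 1)
    for i in range(len(points)):
        r, c, z = rows[i], cols[i], points[i, 2]
        if ground[i]:
            dtm[r, c] = z if np.isnan(dtm[r, c]) else min(dtm[r, c], z)
        dsm[r, c] = z if np.isnan(dsm[r, c]) else max(dsm[r, c], z)
    # Gap-fill DTM cells under buildings (stand-in for interpolation).
    dtm = np.where(np.isnan(dtm), np.nanmean(dtm), dtm)
    return dtm, dsm, dsm - dtm

# Synthetic scene: flat 100 m terrain, 10 m tall building over [5, 10)^2.
rng = np.random.default_rng(2)
n = 20_000
pts = np.column_stack([rng.uniform(0, 20, n),
                       rng.uniform(0, 20, n),
                       np.full(n, 100.0)])
bldg = ((pts[:, 0] >= 5) & (pts[:, 0] < 10) &
        (pts[:, 1] >= 5) & (pts[:, 1] < 10))
pts[bldg, 2] = 110.0

dtm, dsm, dhm = build_surfaces(pts, ~bldg)
# dhm is ~10 m over the building footprint and ~0 m elsewhere
```

The DHM is normalized by construction: building pixels hold height above ground rather than absolute elevation, which is what makes it the preferred input for the built‐up analyses discussed next.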
Figure 2.4 provides examples of each of these datasets at a 1 m spatial resolution for Detroit, Michigan. The DSM (Figure 2.4b) and DHM (Figure 2.4d) appear similar because they both contain surface features, but a difference can be spotted between the two moving inland (to the north), where the low‐lying areas of the DHM (i.e. streets, residential yards) appear black rather than gray. The DTM is a smoother surface representing the bare Earth without the surface features (Figure 2.4c), and in this case includes artifacts such as highway overpasses and bridges, attesting to the complexity of point cloud filtering.
As for built‐up analyses using lidar‐derived raster data (refer back to Figure 2.3), the DHM provides the ideal dataset because it is normalized and conveys building height data from ground level. Pixel values for buildings, therefore, are representative and useful. Using building footprints (vector polygons), individual building heights can be extracted and extruded to visualize only the built‐up environment as solid objects (see Figure 2.2). Building footprints are highly useful ancillary data for urban analyses and are often freely available through local cadastral mapping sources or can be generated using the DHM (and other data such as aerial imagery) through Object‐Based Image Analysis (OBIA). OBIA segmentation provides a semi‐automatic procedure to create vector polygons of ground features. In the urban environment, especially where buildings are quite tall and protrude from the surrounding landscape features, OBIA segmentation is effective (Teo and Shih 2013). Imagery‐lidar fusion (i.e. adding the DHM data as an additional band within an image stack) improves the accuracy of OBIA classification results within urban areas compared to imagery alone (Ellis and Mathews 2019). Lidar intensity information is also useful as an additional band for further differentiation of surface features in OBIA analyses.
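In array terms, the fusion step (adding the DHM as an additional band within the image stack) is a simple concatenation; the band count and raster shapes below are illustrative:

```python
import numpy as np

# Hypothetical co-registered inputs: a 4-band image (e.g. R, G, B, NIR)
# and a 1-band DHM covering the same 100 x 100 pixel extent at the same
# spatial resolution (co-registration is assumed, not shown).
imagery = np.random.rand(4, 100, 100)
dhm = np.random.rand(100, 100) * 30.0   # heights in metres

# Stack the DHM (and optionally lidar intensity) as extra band(s)
# before OBIA segmentation/classification.
stack = np.concatenate([imagery, dhm[np.newaxis]], axis=0)
# stack.shape -> (5, 100, 100)
```

The same pattern extends to an intensity band; the real work lies upstream, in ensuring the imagery and lidar rasters share extent, resolution, and alignment.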
Building footprint data are also helpful in the calculation of built‐up volume. As Figure 2.5 illustrates, the input point cloud data (a) are used to create a DHM raster that is further altered by extracting only pixels within building footprint extents – notice the tree canopies along the streets (b) are no longer visible within the clipped DHM (c). Importantly, this removes all nonbuilt‐up pixels from the analysis for accurate volume estimation. At this stage, volume calculation is conducted at a per‐pixel or per‐building scale (refer to Figure 2.3). In the example provided in Figure 2.5, the height values are in meters and the spatial resolution is 1 m making the per‐pixel volume the same as the height value (though in m3). The per‐building alternative would sum the values of all pixel centroids falling within individual building footprint extents (e.g. zonal statistics in a GIS) and spatially join the per‐building sum value to the building footprint polygons as an attribute.
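The per‐pixel and per‐building volume calculations above can be sketched as zonal sums over a DHM clipped by footprint labels; the grid sizes and heights below are made up for the example:

```python
import numpy as np

# Hypothetical 1 m resolution DHM (heights in metres) and a footprint
# label raster (0 = background, 1..n = building IDs).
dhm = np.zeros((10, 10))
dhm[1:4, 1:4] = 12.0          # building 1: 3 x 3 footprint, 12 m tall
dhm[6:9, 5:9] = 8.0           # building 2: 3 x 4 footprint, 8 m tall
dhm[0, 9] = 15.0              # a tree canopy pixel outside any footprint

footprints = np.zeros((10, 10), dtype=int)
footprints[1:4, 1:4] = 1
footprints[6:9, 5:9] = 2

# Clipping drops nonbuilt-up pixels (the tree) from the volume estimate.
clipped = np.where(footprints > 0, dhm, 0.0)

def building_volumes(dhm, footprints, pixel_area=1.0):
    """Per-building volume: zonal sum of heights x pixel area (m^3)."""
    return {int(bid): float(dhm[footprints == bid].sum() * pixel_area)
            for bid in np.unique(footprints) if bid != 0}

vols = building_volumes(dhm, footprints)
# building 1: 9 px x 12 m = 108 m^3; building 2: 12 px x 8 m = 96 m^3
```

At 1 m resolution the pixel area is 1 m², so each per‐pixel volume equals its height value, exactly as noted for Figure 2.5; the dictionary plays the role of the spatially joined per‐building attribute.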
Multi‐temporal lidar data analysis facilitates the quantification of built‐up change within urban areas (Vu et al. 2004; Ellis and Mathews 2019) including characterization of the types of changes occurring (Teo and Shih 2013; Dong et al. 2018). Although many change detection options exist (too many to cover here), direct differencing of the multi‐temporal DHMs is a common approach. The difference of DHMs (or differential DHM), simply dDHM, is calculated by Eq. (2.2) where t1 indicates the first lidar data acquisition (time 1) and t2 represents the second (time 2):

dDHM = DHMt2 − DHMt1 (2.2)
FIGURE 2.4 Lidar‐derived rasters (1 m spatial resolution) for Detroit, Michigan, 2004: (a) reference map (Source: OpenStreetMap), (b) Digital Surface Model, (c) Digital Terrain Model (DTM), and (d) Digital Height Model (DHM).
FIGURE 2.5 Lidar workflow to obtain building‐only volume: raw point cloud data (a), lidar‐derived DHM raster (b), DHM clipped by red building footprints (c); white represents high buildings, black signifies low/ground. The clipped dataset (c) is used for urban built‐up volume (m3 summed per 1 m pixel).
FIGURE 2.6 An example of built‐up change in southeast San Antonio, Texas: 2003 DHM (a), 2012 DHM (b), and difference of DHM (dDHM) (c). For the DHMs (a, b), pixels with higher values are shown with white (i.e. tall buildings) whereas lower values are black (i.e. ground). For the dDHM, red color indicates decreases in height (e.g. demolished buildings, removed trees) between the two years whereas blue denotes increases in height (e.g. new buildings, new or larger tree canopies).
The resulting dDHM (see ahead to Figures 2.6 and 2.7 for examples) rapidly visualizes the extreme changes within the urban environment, namely newly constructed or demolished buildings. The differences identified are then summarized in a number of different ways (e.g. amount of built‐up volume increase between t1 and t2, net increase/decrease of built‐up volume). Other approaches attach height and/or volume attributes to multi‐temporal building footprints for a discrete vector analysis of change. Types of changes (e.g. from vegetation to building, building to demolished building) are characterized using criteria such as roof pitches/slopes, volumetric change metrics, and other parameters (Teo and Shih 2013; Dong et al. 2018). Multi‐temporal lidar analyses, though, like any change detection work within remote sensing, are not without challenges and limitations. Specifically, comparison of multiple datasets is difficult when acquisition parameters differ (e.g. point densities, number of returns, time of year), when data are pre‐processed and only provided in raster format (creating alignment issues between datasets that must be resolved), when ancillary datasets are unavailable, and more.
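A toy sketch of the dDHM differencing of Eq. (2.2) and the gain/loss summaries just described; the grid, heights, and noise threshold are invented for the example:

```python
import numpy as np

dhm_t1 = np.zeros((5, 5))
dhm_t1[0:2, 0:2] = 10.0   # building standing at t1, demolished by t2
dhm_t2 = np.zeros((5, 5))
dhm_t2[3:5, 3:5] = 6.0    # building newly constructed between t1 and t2

ddhm = dhm_t2 - dhm_t1    # Eq. (2.2)

noise = 0.5               # ignore sub-half-metre differences
gained = ddhm[ddhm > noise].sum()    # new built-up volume (m^3, 1 m pixels)
lost = -ddhm[ddhm < -noise].sum()    # demolished volume
net = gained - lost
# gained = 4 px x 6 m = 24 m^3; lost = 4 px x 10 m = 40 m^3; net = -16 m^3
```

In practice the threshold absorbs small co‐registration and point‐density differences between acquisitions, though it cannot fix the larger alignment issues noted above.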
FIGURE 2.7 Citywide built‐up change (shown with dDHM) in San Antonio, Texas, 2003–2013 shown with blue indicating increased height values (i.e. new build‐up) and red decreased height values (i.e. demolished buildings). Vegetation is included in this example.