1.2 ADVANCES IN URBAN REMOTE SENSING
Over the past decade, we have witnessed rapid advances in urban remote sensing research amid evolving innovations in the broad arena of Earth Observation (EO). The purpose here, however, is not to provide a comprehensive review of progress in urban remote sensing. Rather, we simply highlight some essential and emerging areas that are shifting the directions of urban remote sensing research. The major areas of progress are summarized below.
Firstly, a variety of remote sensors and systems have provided data essential for urban applications. These include not only advanced ones, such as high‐resolution satellite systems, hyperspectral remote sensing, high‐resolution synthetic aperture radar (SAR), light detection and ranging (LIDAR), and nighttime satellite systems, but also new and emergent ones, such as unmanned aerial systems (UASs) and social sensing (including street views). Inexpensive UASs (or drones) equipped with digital cameras (or even LIDAR units) and lightweight GPS units offer spatial flexibility in producing sophisticated maps to support various urban applications (e.g. Kalantar et al., 2017; Khan et al., 2017; Dodge, 2018). Social sensing relies upon humans or mobile devices to collect “geotagged” information that can enrich image interpretation with additional human context (e.g. Jiang et al., 2016; Hu et al., 2016; Cai et al., 2017). Street views, such as the Google Street View (GSV) service publicly launched in 2007, offer street‐level imagery of city streetscapes that can help map urban tree cover and other features (e.g. Li et al., 2015b; Berland and Lange, 2017; Seiferling et al., 2017; Dodge, 2018).
Secondly, multi‐temporal analysis of remote sensor imagery has rapidly gained popularity. It is essential for evolving beyond fast‐paced change detection (such as land conversion) and into monitoring of continuous land use activities with slower change rates (such as land modifications) using satellite time series. This has been an emergent research area in remote sensing of environment since the early 2010s, given the availability of satellite imagery archives (e.g. Landsat and Sentinel data) at no charge (Wulder et al., 2019), the progression of advanced image processing infrastructures (such as high‐end computing systems and cloud computing platforms), and the increasing scientific need to understand continuous changes (e.g. Schneider, 2012; Li et al., 2015a; Fan et al., 2017; Huang et al., 2017; Zhu, 2017; Arévalo et al., 2019; Stokes and Seto, 2019; Zhu et al., 2019; Chen et al., 2020).
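The core step in such continuous-change monitoring approaches (e.g. the CCDC family surveyed in Zhu, 2017) is fitting a harmonic model to each pixel's time series and flagging observations that depart from it. The following sketch is illustrative only and not taken from the chapter; the function name and the synthetic NDVI-like series are our own assumptions.

```python
# Illustrative sketch (not from the chapter): fitting an intercept + trend +
# annual-harmonic model to a single pixel's time series, the basic building
# block of continuous-change monitoring. All data below are synthetic.
import numpy as np

def fit_harmonic(t_years, values):
    """Least-squares fit of intercept, linear trend, and one annual cycle."""
    X = np.column_stack([
        np.ones_like(t_years),        # intercept
        t_years,                      # long-term (gradual) trend
        np.cos(2 * np.pi * t_years),  # annual seasonality
        np.sin(2 * np.pi * t_years),
    ])
    coef, *_ = np.linalg.lstsq(X, values, rcond=None)
    return coef, X @ coef

# Synthetic 5-year NDVI-like series: baseline 0.4, slow trend, seasonality
t = np.linspace(0, 5, 120)
rng = np.random.default_rng(0)
obs = 0.4 + 0.01 * t + 0.2 * np.sin(2 * np.pi * t) + rng.normal(0, 0.02, t.size)

coef, fitted = fit_harmonic(t, obs)
residual_rms = np.sqrt(np.mean((obs - fitted) ** 2))
```

In operational algorithms, an abrupt change (e.g. land conversion) is flagged when a run of new observations deviates from the fitted model by more than a few multiples of the residual level, while a drifting trend coefficient indicates gradual land modification.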
Thirdly, image merging or fusion in urban areas has been moving beyond pan‐sharpening and into other forms, including: multi‐sensor data merging, such as merging multispectral imagery with LIDAR point clouds (e.g. Meng et al., 2012) and merging optical images with SAR data (e.g. Errico et al., 2015); multi‐temporal data merging, which combines images of the same area acquired on different dates into a composite (e.g. Schneider, 2012; Kabisch et al., 2019); spatial and temporal image fusion, which generates a new dataset with both high spatial and high temporal resolution from one dataset with high spatial but low temporal resolution and another with low spatial but high temporal resolution (e.g. Chen et al., 2015; Wang and Atkinson, 2018); and merging of imagery with ancillary data to improve image classification (e.g. Lai and Yang, 2020; Zhang and Yang, 2020).
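The intuition behind spatial-temporal fusion can be sketched in its simplest form: predict the fine-resolution image at a later date from the fine image at an earlier date plus the temporal change observed in the coarse imagery. This is a deliberately minimal toy version of the idea underlying STARFM-family methods (cf. Chen et al., 2015); real algorithms add spectral similarity weighting and moving windows. All arrays and names below are our own toy assumptions.

```python
# Minimal toy sketch of spatio-temporal fusion (not an operational method):
# fine image at t2 = fine image at t1 + temporal change seen in coarse data.
import numpy as np

def upsample(coarse, factor):
    """Nearest-neighbour upsampling of a coarse grid onto the fine grid."""
    return np.kron(coarse, np.ones((factor, factor)))

def fuse(fine_t1, coarse_t1, coarse_t2, factor):
    """Add the upsampled coarse-image change to the earlier fine image."""
    change = upsample(coarse_t2 - coarse_t1, factor)
    return fine_t1 + change

# Toy data: 2x2 coarse pixels, each covering a 2x2 block of fine pixels
fine_t1 = np.arange(16, dtype=float).reshape(4, 4)
coarse_t1 = np.array([[1.0, 2.0], [3.0, 4.0]])
coarse_t2 = coarse_t1 + 0.5          # uniform brightening between dates
fine_t2 = fuse(fine_t1, coarse_t1, coarse_t2, factor=2)
```

Because the change between dates here is uniform, the predicted fine image is simply the original shifted by the same amount; heterogeneous change is where the weighting schemes of real fusion algorithms matter.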
Fourthly, artificial intelligence has developed beyond shallow learning algorithms and into deep learning models, whose many processing layers learn representations of data with hierarchical abstraction and can thus help discover complex structure in large remote sensor datasets (LeCun et al., 2015). Deep convolutional nets have brought about breakthroughs in image classification over complex urban areas (e.g. Maggiori et al., 2017; Sharma et al., 2017), whereas recurrent nets have demonstrated their effectiveness in processing satellite time series, leading to improved performance in pattern recognition (e.g. Sharma et al., 2018). In addition, deep residual networks are easier to train than plain deep convolutional networks and thus represent one of the most promising deep network architectures for image classification (He et al., 2016).
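The trainability advantage of residual networks comes from their skip connection: each block outputs its input plus a learned transformation, so a block only needs to learn a residual and can fall back to the identity. A minimal numpy sketch of a residual block's forward pass (weights are random placeholders, not a trained model):

```python
# Sketch of a residual block's forward pass in the spirit of He et al. (2016):
# y = relu(x + W2 · relu(W1 · x)). The skip connection adds x back, so with
# zero weights the block reduces to the identity for non-negative inputs.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two-layer transformation plus a skip connection."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(42)
x = rng.normal(size=8)
w1 = rng.normal(scale=0.1, size=(8, 8))   # placeholder weights
w2 = rng.normal(scale=0.1, size=(8, 8))

y = residual_block(x, w1, w2)

# Zero weights: the block passes a non-negative input through unchanged,
# which is why stacking many such blocks does not block gradient flow.
identity_out = residual_block(relu(x), np.zeros((8, 8)), np.zeros((8, 8)))
```

Real residual networks use convolutional layers and batch normalization in place of these dense placeholder weights, but the identity-shortcut structure is the same.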
Fifthly, with more advanced pattern classifiers being used for urban feature extraction, there has been a trend moving beyond single classifiers and into multiple classifier systems (or classifier ensembles). Several relatively novel classifiers, such as support vector machines and random forests, are quite promising, but their performance can be compromised by their inability to account for classification errors arising from class ambiguity caused by mixed pixels, within‐class variability, dynamic zones, transitional zones, and topographic shading (Smits, 2002). In contrast, a multiple classifier system can generate a better outcome for a classification task by combining a set of single classifiers (as base classifiers), under the assumption that each individual classifier performs well over at least certain regions of the feature space and tends to make independent prediction errors (see Du et al., 2012; Shi and Yang, 2017; Patidar and Keshari, 2018; Shen et al., 2018).
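The simplest combination rule for such an ensemble is a majority vote: each base classifier labels every pixel, and the most common label wins, so independent errors tend to cancel. The sketch below is illustrative; the base-classifier predictions are made up rather than produced by real classifiers.

```python
# Illustrative majority-vote combination for a multiple classifier system:
# rows are base classifiers, columns are pixels, entries are class labels.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) array of integer labels.
    Returns the per-sample label with the most votes (ties go to the
    lowest-numbered class, a common simple convention)."""
    n_classes = predictions.max() + 1
    # Count votes per class for each sample column, then pick the winner
    votes = np.apply_along_axis(np.bincount, 0, predictions, None, n_classes)
    return votes.argmax(axis=0)

# Three hypothetical base classifiers labelling five pixels
# (0 = water, 1 = urban, 2 = vegetation)
preds = np.array([
    [0, 1, 2, 1, 0],
    [0, 1, 1, 1, 0],
    [2, 1, 2, 0, 0],
])
combined = majority_vote(preds)
```

Here the first classifier's error on pixel 3 and the third classifier's errors on pixels 0 and 3 are outvoted by the other two, illustrating why the independence of base-classifier errors is the key assumption.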
Sixthly, big data, characterized by volume, variety, and velocity, challenge data acquisition, storage, querying, sharing, analysis, visualization, updating, and information privacy. In recent years, various cloud computing platforms have been developed to address these challenges, and they are increasingly used to execute large‐scale spatial data processing and services. Google Earth Engine (GEE; https://earthengine.google.com/) and NASA Earth Exchange (NEX; https://c3.nasa.gov/nex/) are two of the most widely used open cloud computing platforms supporting large‐scale Earth science data and analysis (e.g. Patel et al., 2015; Huang et al., 2017; Gorelick et al., 2017; Liu et al., 2018; Soille et al., 2018).
Lastly, integration of remote sensing and relevant geospatial data and technologies has supported a variety of innovative applications in urban areas, such as urban growth analysis (e.g. Huang et al., 2017), unplanned and informal settlement mapping (e.g. Kuffer et al., 2016), global urban settlement mapping (e.g. Corbane et al., 2017), urbanization impacts upon vegetation phenology (e.g. Zipper et al., 2016; Li et al., 2017), urban greenness and health (e.g. Mennis et al., 2018), urban heat island (UHI) and thermal sensing (e.g. Wang et al., 2016), urban climate (Johnson and Shepherd, 2018), urban hazards (e.g. Costanzo et al., 2016), urban planning (e.g. Norton et al., 2015), and urban sustainability (e.g. Bonafoni et al., 2017). Remote sensing applications are also evolving beyond observing spatio‐temporal patterns and into analyzing socio‐environmental processes and, ultimately, supporting urban sustainability (Seto et al., 2017).