Viewing Angle

Viewing angle is often used to refer to one or both of the following angles:

 The maximum angle of the IFOV of the sensor, measured from one edge of the sensor’s view to the other, as shown in figure 3.19. Traditional film-based aerial survey cameras often used wide-angle lenses with a 90-degree IFOV. When they took photographs vertically, features at the edges of the frame were captured at an angle of about 45 degrees from vertical. With the advent of digital photography, many digital aerial survey cameras have a narrower IFOV, and coverage is achieved by taking more images. Most satellite imagery is collected with an even narrower IFOV. For example, a vertical WorldView-3 scene captures a strip about 13.1 km wide from an altitude of 617 km, an IFOV of only about 1 degree (see the geometry sketch after this list).

 The pointing angle of the sensor, measured from directly beneath the sensor (0°, or nadir) to the center of the area on the ground being imaged. This angle is also referred to as the off-nadir or look angle; the related elevation angle is its complement, measured up from the ground horizon to the sensor. Sensor viewing angles are categorized as vertical or oblique, with oblique further divided into high oblique (images that include the horizon) and low oblique (images that do not include the horizon), as shown in figure 3.20.
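
The relationship among altitude, IFOV, and ground coverage in the first definition can be sketched with simple flat-ground trigonometry. The Python snippet below is a minimal illustration, not from the book; the function names and the 3 km aircraft flying height are assumptions, while the WorldView-3 figures come from the text.

```python
import math

def swath_width_km(altitude_km: float, fov_deg: float) -> float:
    """Swath on flat ground for a nadir-pointed sensor with the given
    full field-of-view angle (ignores Earth curvature)."""
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg / 2.0))

def fov_deg_from_swath(altitude_km: float, swath_km: float) -> float:
    """Invert the relationship: full angle (degrees) from swath and altitude."""
    return 2.0 * math.degrees(math.atan((swath_km / 2.0) / altitude_km))

# WorldView-3 figures from the text: a 13.1 km swath from 617 km altitude.
print(round(fov_deg_from_swath(617.0, 13.1), 2))  # ~1.22 degrees

# A 90-degree film camera at an assumed 3 km flying height covers a wide strip:
print(round(swath_width_km(3.0, 90.0), 1))  # ~6.0 km
```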

Traditionally, with aircraft imagery, images captured with the sensor pointed within ±3 degrees of nadir are considered vertical, and images collected at more than ±3 degrees off nadir are considered oblique (Paine and Kiser, 2012). However, with the plethora of pointable high-resolution satellites, satellite companies tend to define images captured with a sensor viewing angle within ±20 degrees of nadir as vertical, and images collected at sensor angles greater than ±20 degrees as oblique.
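
These two conventions are easy to express directly. A minimal sketch, with illustrative function and parameter names:

```python
def viewing_category(off_nadir_deg: float, platform: str = "aircraft") -> str:
    """Classify a viewing angle as vertical or oblique using the thresholds
    given in the text: ~3 degrees for traditional aircraft imagery
    (Paine and Kiser, 2012) and ~20 degrees as used by satellite companies."""
    threshold_deg = 3.0 if platform == "aircraft" else 20.0
    return "vertical" if abs(off_nadir_deg) <= threshold_deg else "oblique"

print(viewing_category(2.0))                         # vertical
print(viewing_category(15.0))                        # oblique under the aircraft rule
print(viewing_category(15.0, platform="satellite"))  # vertical under the satellite rule
```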

Viewing angle is important because it affects the amount of area captured in an image, whether only the top of an object or also its sides are visible, and the spatial resolution of the imagery. The larger the viewing angle from the sensor to the object, the longer the distance to the ground and the coarser the spatial resolution of the pixels. For example, the spatial resolution of DigitalGlobe’s WorldView-3 panchromatic band is 0.31 meter at nadir and 0.34 meter at 20 degrees off nadir. Spatial resolution and scale also change more rapidly across an oblique image than across a vertical image.
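
A rough flat-earth approximation shows why resolution coarsens off nadir: the slant range grows as 1/cos(θ), and the pixel footprint in the look direction stretches by another 1/cos(θ). The sketch below is an approximation only (published figures also reflect Earth curvature), and the function name is illustrative.

```python
import math

def gsd_off_nadir_m(gsd_nadir_m: float, off_nadir_deg: float) -> float:
    """Approximate look-direction ground sample distance at an off-nadir
    angle, under a flat-earth model: GSD(theta) = GSD(0) / cos(theta)**2."""
    return gsd_nadir_m / math.cos(math.radians(off_nadir_deg)) ** 2

# WorldView-3 panchromatic band, 0.31 m at nadir (from the text):
print(round(gsd_off_nadir_m(0.31, 20.0), 2))  # ~0.35 m vs. the published 0.34 m
```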

The primary advantage of a vertical image is that its scale and illumination are more constant throughout the image than those of an oblique image. While a vertical image’s scale will be affected by terrain and by the slightly off-nadir pixels at the edge of the frame or scan line, it will always have more uniform scale than an oblique image. As a result, measurements are easier, and directions can be more easily determined, allowing the image to approximate a map and be used for navigation (as long as the impacts of topography are considered).
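
The effect of terrain on scale follows from the standard photogrammetric scale relation, scale = f / (H − h), where f is the focal length, H the flying height above the datum, and h the terrain elevation. A minimal sketch, with illustrative focal length and flying height values that are assumptions rather than figures from the text:

```python
def photo_scale(focal_length_m: float, flying_height_m: float,
                terrain_elevation_m: float = 0.0) -> float:
    """Scale of a vertical photograph: f / (H - h). Varies with terrain
    elevation even when the camera is perfectly vertical."""
    return focal_length_m / (flying_height_m - terrain_elevation_m)

# Assumed values: a 152 mm lens flown 3,000 m above the datum.
print(f"1:{1 / photo_scale(0.152, 3000.0):,.0f}")         # ~1:19,737 at the datum
print(f"1:{1 / photo_scale(0.152, 3000.0, 500.0):,.0f}")  # ~1:16,447 on a 500 m ridge
```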

On the other hand, an oblique image shows the sides of an object instead of just the top, allowing for realistic 3D rendering. Because humans spend much of their time on the ground, an oblique view is more intuitive to us, and we can easily judge distances to objects seen in it (Paine and Kiser, 2012). Much imagery for military surveillance applications has been captured obliquely, with the advantages of reaching objects farther away and showing more of the sides of features, which often provide significant detail for interpretation.

The very first aerial photographs were mostly oblique. However, for the next 70 years, vertical photographs were the basis for most maps because the geometric relationship between the sensor and the ground is fairly straightforward to determine for vertical images. In addition, the scale and illumination of vertical images are relatively constant within the image, and stereo models can be easily created by overlapping vertical images. Usually, the photographs were collected with at least 50 percent forward overlap to enable stereo viewing and photogrammetric measurements (see chapter 6 for more detail on photogrammetry). Similarly, since the first launch in 1972, all nine Landsat satellites have been designed to collect vertical images, and the systems are incapable of stereo except at higher latitudes, where adjacent orbital passes overlap enough to allow some stereo collection.
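
The forward overlap determines the air base, the ground distance between successive exposure stations: B = W × (1 − overlap), where W is the ground coverage of one frame. The sketch below uses an illustrative frame coverage, flying height, and a 60 percent overlap; the text itself specifies only that at least 50 percent was typical.

```python
def air_base_m(ground_coverage_m: float, forward_overlap: float) -> float:
    """Distance between successive exposure stations: B = W * (1 - overlap).
    The base-to-height ratio B/H governs the strength of the stereo geometry."""
    return ground_coverage_m * (1.0 - forward_overlap)

# Assumed values: each frame covers 4,600 m on the ground, flown at 3,000 m.
base_m = air_base_m(4600.0, 0.60)
print(base_m, round(base_m / 3000.0, 2))  # 1840.0 m air base, B/H of ~0.61
```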

When a vertical object such as a building is viewed at nadir, only the top of the building is visible; its sides are not. If the object is not located directly below the sensor at nadir, then one or more of its sides will be visible in the image; this is termed an off-nadir view, and the effect is called relief displacement. We can refer to the angle between the nadir and the ray of light between the sensor and the vertical object as the off-nadir angle. This angle can result from a ray being off the center of a vertical image, meaning that it is not along the principal axis of the image, or from a ray in an oblique image. In either case, you can see the side of the vertical object, and this view allows for height measurements; it is also the basis for parallax between two images, which provides stereo imagery. Parallax is the apparent displacement of the position of an object relative to a reference point due to a change in the point of observation. The off-nadir angle remains small across the image when the imagery is vertical and the IFOV is small. Larger off-nadir angles occur when the imagery is captured obliquely or when the camera has a large IFOV.
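
The height measurement mentioned above follows from the standard relief-displacement relation on a vertical photograph, d = r × h / H, rearranged to h = d × H / r. The photo measurements below are illustrative assumptions:

```python
def height_from_displacement_m(d: float, r: float, flying_height_m: float) -> float:
    """Object height from relief displacement on a vertical photo:
    h = d * H / r, where d is the displacement of the object's top from its
    base and r is the radial distance of the top from the nadir point
    (d and r in the same photo units)."""
    return d * flying_height_m / r

# Assumed measurements: 2.0 mm of displacement, top imaged 80.0 mm from the
# nadir point, flown 3,000 m above the building's base.
print(height_from_displacement_m(2.0, 80.0, 3000.0))  # 75.0 m
```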

These geometric shifts due to sensor perspective and collection geometry enable useful capabilities such as stereo imagery, but they also cause occlusion of objects and variation from image to image that can adversely affect image classification and other automated processes if elevation is not modeled with high fidelity.

The 1986 introduction of the French SPOT systems brought off-nadir pointability to civilian satellite image collection, allowing the collection of off-nadir stereo pairs of imagery to support the creation of DEMs. Now, most very-high-spatial-resolution satellites and airborne systems are able to collect nadir, off-nadir, and oblique imagery, either by pointing the system as shown in figure 3.20 or by using multiple sensors on the platform, some collecting at nadir and others collecting off nadir, as shown in figure 3.21. Recently, with advances in photogrammetry and computing power, airborne and terrestrial oblique images have been used to create detailed and accurate 3D representations of the landscape.
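
The benefit of pointability for DEM generation can be sketched with the base-to-height ratio of a stereo pair: under a flat-earth approximation, B/H = tan(θ₁) + tan(θ₂) when the same ground point is imaged from two opposite pointing angles. The ±26-degree angles below are illustrative, not figures from the text.

```python
import math

def stereo_base_to_height(angle1_deg: float, angle2_deg: float) -> float:
    """Base-to-height ratio of a stereo pair imaging the same ground point
    from two opposite pointing angles: B/H = tan(a1) + tan(a2).
    Larger B/H means stronger parallax and better height sensitivity."""
    return math.tan(math.radians(angle1_deg)) + math.tan(math.radians(angle2_deg))

print(round(stereo_base_to_height(26.0, 26.0), 2))  # ~0.98
```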


Figure 3.19. The concepts of the instantaneous field of view (IFOV), nadir, and off-nadir angles of a vertically pointed sensor


Figure 3.20. Examples of a framing camera’s vertical, low-oblique, and high-oblique viewing angles


Figure 3.21. Diagram showing how pointable satellites can pitch forward and backward along their orbit to collect off-nadir stereo images. Source: DigitalGlobe


Figure 3.22. Conceptual diagram of an aircraft with a Leica ADS100 with three beam splitters: two tetrachroid beam splitters in the forward and backward directions with multispectral red, green, blue, and near-infrared (RGBN) bands, and one bi-tetrachroid beam splitter at nadir with RGBN bands plus staggered green bands. Source: Hexagon
