Open House 1998

Remote Sensing and Visualization
of "Earth from Space"


Welcome to Lamont's new Remote Sensing and Visualization Laboratory!

This facility was established under a grant from the National

Aeronautics and Space Administration (NASA), a grant of computer

equipment from the Intel Corporation, and matching funds contributed

by Lamont and Columbia University. Satellite remote sensing images

collected over the past 25 years provide a way for scientists to

assess environmental changes, such as deforestation in tropical

regions, and relate them to areas of rapid population growth. Our

"RSVLab", as it is called, contains 21 top-of-the-line color graphics

workstations manufactured by Sun Microsystems, Apple Computer, and

Intel Corporation. Software packages running on these machines provide

an intuitive, menu-driven approach to image processing and analysis

tasks that is extremely helpful to the first-time user.


Today, we illustrate the way in which we can combine satellite remote

sensing imagery with topographical data to make 3-dimensional

stereographic renderings of the Earth's surface. We will use remote

sensing data from the Landsat instruments, which have been flown

aboard a series of U.S. satellites since 1972. Landsat, flying

about 500 miles above the Earth's surface, follows a near-polar

orbit that keeps pace with the Sun as the Earth rotates beneath it.

Reflected sunlight, both in the visible band that we see, and in the

infra-red band that our eyes cannot see, is recorded by the Landsat

sensor across a ground swath 100 miles wide as the satellite travels

along its orbit. An image is built row-by-row by gathering reflected

sunlight from small 250' x 250' areas positioned side-by-side across

the entire swath, then moving on to the next row of the swath.
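As a quick back-of-the-envelope check on the numbers above, a 100-mile swath sampled in 250-foot squares works out to about two thousand pixels per image row. This small sketch uses only the figures quoted in this text, not Landsat's official specifications:

```python
# Rough scan-geometry sketch using the illustrative numbers from the
# text above (100-mile swath, 250' x 250' ground samples).
FEET_PER_MILE = 5280

swath_width_ft = 100 * FEET_PER_MILE   # width of the ground swath, in feet
pixel_size_ft = 250                    # one 250' x 250' sample

pixels_per_row = swath_width_ft // pixel_size_ft
print(pixels_per_row)  # 2112 samples side-by-side across the swath
```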


For the 3-D stereo example, we take Landsat imagery from part of a

swath crossing the Red Sea coast of Arabia. The small green box locates

the area we will display. We assign the visible green component of

reflected sunlight to a "blue" channel, the visible red component to a

"green" channel, and the reflected infra-red to a "red" channel. Then we

combine the blue, green and red channels to form a false-color image

of the Landsat data for our area in much the same way as a television

combines blue, green and red components into a color television

picture. To make a 3-D stereo rendering of the Landsat data, we need

a digital representation of the Earth's topographic surface for the

region that includes the Landsat image area. Fortunately, we can use a

280' x 280' grid of topographic height covering southern Arabia

and the Horn of Africa. The most time-consuming part of the process

is to identify distinctive points in both the height grid and the

Landsat imagery that correspond to the same locations on the ground in

both data sets. Not surprisingly, these common points are called

"ground control points" (GCPs). Once we have identified sufficient

GCPs in each data set, we warp or stretch the Landsat data so that it

"fits" the height data. The Landsat and topography data are then said

to be co-registered, that is, placed in the same map projection.
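The false-color recipe described above can be sketched in a few lines of numpy. The band arrays and their values here are made up for illustration; real Landsat bands would be read from the data files:

```python
import numpy as np

# Hypothetical single-band arrays (rows x cols, scaled 0-255) standing
# in for the green, red, and infra-red measurements described above.
rows, cols = 4, 5
green_band = np.full((rows, cols), 60, dtype=np.uint8)
red_band = np.full((rows, cols), 120, dtype=np.uint8)
infrared_band = np.full((rows, cols), 200, dtype=np.uint8)

# The assignment from the text: infra-red is displayed as red,
# visible red as green, and visible green as blue.
false_color = np.dstack([infrared_band, red_band, green_band])

print(false_color.shape)  # (4, 5, 3): one red/green/blue triple per pixel
```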


Using the gridded elevation data, we make a 3-D perspective view of

the Earth's surface for the small area along the Arabian coast shown

in Figure 1. We first construct the elevated, perspective view of the

terrain using the elevation data. Then we "drape" the Landsat imagery

over that surface, rather like placing a carpet over a bumpy floor. So

now we have a perspective view in which the false-color Landsat

imagery tells us about the land cover present in the area, and the

cover type at any particular point can be related to the

elevation at that point using the height data. You're looking northeast

across the coastal plain towards an elevated interior plateau.
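Once the two data sets are co-registered, the draping step is essentially a lookup: each cell of the elevation grid takes its color from the corresponding image pixel, producing a textured surface for the renderer to draw in perspective. A toy sketch, with made-up array names and sizes:

```python
import numpy as np

# Made-up, co-registered arrays: an elevation grid and a false-color
# image defined on the same rows x cols grid.
rows, cols = 3, 4
elevation = np.arange(rows * cols, dtype=float).reshape(rows, cols)
image = np.zeros((rows, cols, 3), dtype=np.uint8)
image[..., 0] = 200  # a uniform reddish false-color image

# "Draping": build one textured vertex per grid cell, carrying
# (x, y, height, r, g, b). A 3-D renderer would draw this surface.
ys, xs = np.mgrid[0:rows, 0:cols]
vertices = np.dstack([xs, ys, elevation, image]).reshape(-1, 6)

print(vertices.shape)  # (12, 6): 12 grid cells, 6 values each
```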


We have one final step left in making a stereoscopic view of our area.

You'll notice in Figure 2 that even though it is a perspective view of a

3-D surface, it looks kind of "flat". The modern computer hardware and

software tools in the RSVLab allow us to make "virtual reality"-style 3-D

renderings of the landscape. First, we make two new images. In one,

each pixel in Figure 2 is shifted to the left by an amount related to the

height value at that pixel. This image is meant for viewing with your

right eye. In the other, each pixel is shifted to the right according to the

height value for viewing with your left eye.
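A minimal sketch of that pixel-shifting step, assuming a numpy image and a co-registered height array. The function name and the simple rule of one column of shift per ten units of height are illustrative choices, not the lab's actual algorithm:

```python
import numpy as np

def make_eye_view(image, heights, direction):
    """Shift each pixel horizontally in proportion to its height:
    direction=-1 shifts left (right-eye view), direction=+1 shifts
    right (left-eye view). Toy rule: one column per 10 height units."""
    rows, cols = heights.shape
    out = np.zeros_like(image)
    for r in range(rows):
        for c in range(cols):
            new_c = c + direction * (heights[r, c] // 10)
            if 0 <= new_c < cols:   # pixels shifted off the edge are lost
                out[r, new_c] = image[r, c]
    return out

# Made-up one-row example: terrain rises from left to right.
heights = np.array([[0, 0, 10, 10, 20, 20]])
image = np.array([[1, 2, 3, 4, 5, 6]])

right_eye = make_eye_view(image, heights, direction=-1)
left_eye = make_eye_view(image, heights, direction=+1)
print(right_eye)  # higher terrain moved further left
print(left_eye)   # higher terrain moved further right
```

Note that shifted pixels can land on top of one another, which is the toy version of nearer terrain hiding what lies behind it.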


In a moment you'll want to move to the Sun workstation called "Vega"

to view the stereo 3-D rendering of our area. Perched atop Vega's

tower is a small infra-red signal emitter. Its job is to control the

CrystalEyes glasses so that your eyes alternately see the left and

right images on the display screen. Your brain does the job of

combining the left and right images into the full 3-D perspective

view of Landsat remote sensing data from the Arabian coast.


So walk over to the computer "Vega", don a pair of stereo glasses,

and enjoy the view!