Remote sensing—the acquisition of information from a distance—has had a profound impact on human affairs in modern history. This image of British Beach (the WWII code name for one landing spot of the June 1944 Normandy invasion) taken from a specially equipped US Army F5, reveals rifle troops on the beach coming in from various large and small landing craft. Seven decades later—even as its application has expanded to unimaginable reaches—remote sensing remains the most significant of reconnaissance and earth observation technologies.
Humans have always sought the high vantage point above the landscape. Throughout history, whether from a treetop, a mountain peak, or a rocky cliff, the view from above allowed our ancestors to answer important questions: Where is there water? Where is the best hunting ground? Where are my enemies? The balloonist Gaspard-Félix Tournachon took the first aerial photograph over Paris in 1858, and with the advent of practical powered flight in the early twentieth century, the advantage of holding the high ground took a quantum leap forward: the field of remote sensing was born.
The technology came of age rapidly during World War I as a superior new military capability. From 1914 to 1918, aerial reconnaissance evolved from virtually nothing into a rigorous, complex science. Many of the remote sensing procedures, methods, and terminology still in use today had their origins in this period. Throughout World War II, the science and accuracy of remote sensing continued to advance.
The next big evolutionary step came with spaceflight and digital photography. Satellite technology allowed the entire globe to be repeatedly imaged, and digital image management and transmission made these expanding volumes of images more useful and directly applicable. Today’s diverse human endeavors require a steady flow of imagery, much of which finds its way onto the web within moments of capture.
The first aerial photograph was taken in 1858, a century before the term “remote sensing” came into existence. Long before satellites and digital image capture became available, people were taking pictures of the earth’s surface from afar, documenting many crucial moments in history for posterity.
Modern imagery is captured from a broad range of altitudes, ranging from ground level to more than 22,000 miles above Earth. The images from each altitude offer distinct advantages for different applications. While not meant to be an exhaustive inventory, let’s take a look at some of the most commonly used sensor altitudes:
Geosynchronous satellites (about 22,000 miles): Satellites that match Earth’s rotation appear stationary in the sky to ground observers. While most commonly used for communications, geosynchronous orbiting satellites like the hyperspectral GIFTS imager are also useful for monitoring changing phenomena such as weather conditions. NASA’s Syncom, launched in the early 1960s, was the first successful “high flyer.”
Sun-synchronous satellites: Satellites in this orbit keep the angle of sunlight on the surface of the earth as consistent as possible, which means that scientists can compare images from the same season over several years, as with Landsat imagery. This is the bread-and-butter zone for earth observing sensors.
Atmospheric satellites: Also known as pseudo-satellites, these unmanned vehicles skim the highest edges of the detectable atmosphere. NASA’s experimental Helios craft measured solar flares before crashing in the Pacific Ocean near Kauai.
High-altitude jets: Jet aircraft flying at 30,000 feet and higher can be over a disaster area in a very short time, making them a good platform for certain types of optical and multispectral imaging applications.
Low-altitude aircraft: Small aircraft able to fly at low speed and low altitude have long been the sweet spot for high-quality aerial photography and orthophotography. From Cessnas to ultralights to helicopters, these are the workhorses of urban optical imagery.
Drones and UAVs: Drones are the new kids on the block. Their ability to fly low, hover, and be remotely controlled offers attractive advantages for aerial photography, with resolution finer than 1 inch. Military UAVs range from small drones to full-sized aircraft.
Ground-based sensors: Increasingly, imagery captured at ground level is finding its way into GIS workflows. Street-level imagery from Google Street View, HERE, and Mapillary; handheld multispectral imagers; and other terrestrial sensors are finding applications in areas like pipelines, security, tourism, real estate, natural resources, and entertainment.
As the authoritative record of changing conditions on the ground, remote sensing imagery has a broad array of applications in traditional terrestrial human activities that involve the management of land. As such, industries like forestry, agriculture, mining, and exploration were among the early adopters of remote sensing, funding its growth.
Access to up-to-date imagery shows the creation of the Zaatari refugee camp over a nine-day period in July 2012. Designed to hold over 60,000 people, its population skyrocketed to over 150,000 before new camps relieved some of the pressure. The story map The Uprooted tells the tale.
A passive imaging sensor captures energy reflected or emitted from the scene it views. Reflected sunlight is the most common source of electromagnetic energy measured by passive sensors. These sensors provide the ability to obtain global observations of Earth and its atmosphere.
A higher-resolution panchromatic image is created when the imaging sensor is sensitive to a wide range of wavelengths of light, typically spanning the entire visible part of the spectrum, with the result stored and displayed as a single-band grayscale image. Because the wide band gathers more light, the sensor can use smaller pixels, producing a sharper image than the typical multispectral sensors on the same system.
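A common way to combine the two is pan-sharpening: the sharp panchromatic band injects spatial detail into the coarser multispectral bands. As a minimal sketch (not any particular sensor's pipeline), here is the simple Brovey transform in Python with NumPy; the array shapes, band count, and synthetic data are assumptions for illustration:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Sharpen upsampled multispectral bands with a panchromatic band.

    ms  : float array, shape (bands, H, W), already resampled to the pan grid
    pan : float array, shape (H, W), higher-resolution panchromatic image

    Each band is rescaled so the per-pixel sum of the bands matches the
    pan value, preserving the band ratios (i.e., the color) at each pixel.
    """
    total = ms.sum(axis=0) + eps      # per-pixel intensity of the MS bands
    return ms * (pan / total)         # broadcast the pan detail into each band

# Tiny synthetic example: 3 bands of 4 x 4 pixels
rng = np.random.default_rng(0)
ms = rng.random((3, 4, 4))
pan = rng.random((4, 4))
sharp = brovey_pansharpen(ms, pan)
```

After the transform, the per-pixel sum of the sharpened bands tracks the panchromatic image while the ratios between bands are unchanged, which is why the simple Brovey method can shift colors when the pan band's spectral response differs from the sum of the multispectral bands.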
An active sensor is an instrument that emits energy and senses radiation that is reflected back from the earth’s surface or another target. It is used for a variety of applications related to meteorology and atmosphere, such as radar to measure echoes from certain objects (such as rain clouds), lidar for capturing detailed surface elevation values, and sonar to measure seafloor depth.
There are over 3,300 earth-observing satellites orbiting the globe, and the number is growing continuously. These myriad “eyes in the sky” are delivering an unprecedented payload of image data into the hands of spatial analysts, finding application in virtually all aspects of human activity. They occupy low, medium, and high (geosynchronous) earth orbits. They’re operated by government agencies (like NASA and the European Space Agency) and by private companies (like DigitalGlobe and Airbus). Their sensors span the electromagnetic spectrum, from ultraviolet through natural color to near, mid, and thermal infrared, along with active microwave instruments such as radar.
But space is getting crowded. In addition to the 3,000-plus active spacecraft, the world’s space agencies collectively track another 10,000-plus pieces of “space junk”—the spent boosters, battery-dead satellites, tools dropped by astronauts, and other debris from various events and mishaps.
As private launching and microsatellites gain favor, we can expect the number of sensors to continue to grow. The increasingly dense sensor grid offers promise for a wide range of applications, but it will bring serious challenges when it comes to effectively utilizing and disseminating the unprecedented flow of raw information.
Not all geography is from the top down. Oblique-angle views provide a unique perspective that has particular application in reconnaissance and real estate, to name but two application areas. Street-level imagery, popularized by Google Street View, is another rich form of spatial data that creates an immersive and integrated navigation experience.
An important concept in imagery is ground resolution, which the imagery community refers to as ground sample distance (GSD). Every image has a ground resolution: the distance on the ground represented by one side of a square pixel, expressed in ground units such as feet or meters.
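For a simple frame camera looking straight down, GSD can be estimated from the flying height, the lens focal length, and the physical pixel size on the sensor. The function and example values below are illustrative assumptions, not the parameters of any particular system:

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Approximate GSD for a nadir-pointing frame camera.

    GSD = altitude * pixel pitch / focal length, with all quantities
    converted to meters. Assumes flat terrain and a vertical view;
    oblique angles and relief make the true footprint larger and uneven.
    """
    focal_length_m = focal_length_mm / 1000.0   # mm -> m
    pixel_pitch_m = pixel_pitch_um / 1e6        # micrometers -> m
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical example: a 100 mm lens with 5-micron pixels flown at 3,000 m
gsd = ground_sample_distance(3000, 100, 5)   # -> 0.15 m per pixel
```

The same relationship explains why satellites hundreds of miles up need long focal lengths to reach sub-meter GSD, while a low-flying drone achieves inch-level GSD with an ordinary camera.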
Like a great piece of artwork, imagery reveals its character and structure in complex ways—always awe-inspiring, sometimes subtle, sometimes puzzling. First comes the astonishment of its raw beauty—stark glaciers in Greenland, the delicate branching of a redwood’s lidar profile, a jagged edge of a fault line in radar, the vivid greens of the tropics, the determined lines of human impact, the rebirth of Mount St. Helens’ forests, the jiggly wiggly croplands of Asia and Africa, the lost snows of Kilimanjaro. Each image entices us to discover more, to look again and again.
After the first glimpse, we begin to explore. What’s creating that unique spectral response? Why are the trees on north-facing slopes and shrubs on south-facing slopes in this area? Are the locations of different tree species related to slope and elevation? Why did this house burn and the one next door is untouched by flames? How many people live in this village? What crops are grown here? Will there be enough food to feed these people? How did the landscape change so dramatically? Who changed it?
Then, through the power of GIS, we discover the connections. If we’re lucky, we travel to the field with our Collector apps to see for ourselves how the landscape varies in relationship to the imagery and other GIS layers. We use ArcGIS to organize and coregister the layers of information, and we mine for the variables that are most predictive. We learn how to tease out information about each object’s location, height, shape, texture, context, shadow, tone, and color from the imagery and GIS data. And then we make maps—we inventory resources and monitor how they change over time.
Imagery has been my ticket to the world. Through it I have traveled the globe, heard amazing stories, and met fascinating people—all passionate about their endeavors and their communities. I am very fortunate to have found the beauty of imagery, and through it discovered the work I was clearly meant to do.
The best way to get going is to first get a sense of how imagery can be leveraged in the ArcGIS platform by seeing it in action solving real problems (or at least informing those problems). The following story maps provide guided, curated views into the world of imagery and its important application to solving some of the planet’s most pressing problems.
At the end of each story map, you’ll find links to the source data that was used and some best practices for getting the data working properly in ArcGIS.
The Global Ecological Land Units (ELU) map portrays a systematic division and classification of the biosphere using ecological and physiographic land surface features. Because it’s a global dataset, it is an ideal data source to analyze using ArcGIS Earth.
In this lesson you’ll open ArcGIS Earth, a lightweight app to access and display ELU data that will reveal patterns of change on Earth’s surface. You’ll analyze different areas of the planet and see how well your own notions of these areas compare with the actual empirical data.