Segment the imagery

To determine which parts of the ground are pervious and impervious, you'll classify the imagery into land-use types. Impervious surfaces are generally human-made: buildings, roads, parking lots, brick, or asphalt. Pervious surfaces include vegetation, water bodies, and bare soil. However, if you try to classify an image in which almost every pixel has a unique combination of spectral characteristics, you're likely to encounter errors and inaccuracies.

Before you classify the imagery, you'll change the band combination to distinguish features clearly. Then, you'll group pixels into segments, which will generalize the image and significantly reduce the number of spectral signatures to classify. Once you segment the imagery, you'll perform a supervised classification of the segments. You'll first classify the image into broad land-use types, such as roofs or vegetation. Then, you'll reclassify those land-use types into either impervious or pervious surfaces.
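The final reclassification step described above amounts to a simple lookup from broad land-use classes to the two surface types. A minimal Python sketch (the class names here are hypothetical placeholders, not the schema you'll actually use):

```python
# Hypothetical land-use classes mapped to the two surface types.
SURFACE_TYPE = {
    "Roof": "Impervious",
    "Road": "Impervious",
    "Driveway": "Impervious",
    "Vegetation": "Pervious",
    "Water": "Pervious",
    "Bare Earth": "Pervious",
}

def reclassify(land_use_classes):
    """Collapse broad land-use classes into pervious/impervious."""
    return [SURFACE_TYPE[c] for c in land_use_classes]

print(reclassify(["Roof", "Vegetation"]))  # → ['Impervious', 'Pervious']
```

The hard part of the workflow is producing accurate land-use classes in the first place; once you have them, the pervious/impervious split is just this kind of mapping.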

Download and open the project

Before you begin, you'll download data supplied by the local government of Louisville, Kentucky. This data includes imagery of the study area and land parcel features.

  1. Download the Surface_Imperviousness compressed folder.
  2. Locate the downloaded file on your computer.
    Note:

    Depending on your web browser, you may have been prompted to choose the file's location before you began the download. Most browsers download to your computer's Downloads folder by default.

  3. Right-click the file and extract it to a location you can easily find, such as your Documents folder.
  4. Open the Surface_Imperviousness folder.

    Surface_Imperviousness folder

    The folder contains several subfolders, an ArcGIS Pro project file (.aprx), and an ArcGIS Toolbox (.tbx). Before you explore the other data, you'll open the project file.

  5. If you have ArcGIS Pro installed on your machine, double-click Surface Imperviousness (without the underscore) to open the project file. If prompted, sign in using your licensed ArcGIS account.
    Note:

    If you don't have ArcGIS Pro or an ArcGIS account, you can sign up for an ArcGIS free trial.

    Default project

    The project contains a map of a neighborhood near Louisville, Kentucky. The map includes a 6-inch resolution, 4-band aerial photograph of the area and a feature class of land parcels. Next, you'll look at the rest of the data that you downloaded.

  6. In the Catalog pane, expand Folders and expand the Surface_Imperviousness folder.
    Note:

    If the Catalog pane is not open, go to the ribbon and click the View tab. In the Windows group, click the Catalog arrow and choose Catalog Pane.

    Project data

    The other folders that you downloaded are connected to the Surface Imperviousness project and can be accessed within it. The Index folder contains project metadata and reusable templates. The remaining folders contain the data, files, and tools you'll use during the project.

  7. Expand the Louisville_Imagery folder, the Training_Samples folder, and the Neighborhood_Data geodatabase.

    Project data expanded

    The Louisville_Neighborhood TIFF image and the Parcels feature class are already on the map. The Louisville_Training_Samples shapefile and the Accuracy_Points feature class are premade versions of data you'll create during your analysis (you'll learn more about them later).

Extract spectral bands

The multiband imagery of the Louisville neighborhood currently uses the natural color band combination, which displays the area as the human eye would see it. You'll change the band combination to better distinguish urban features, such as concrete, from natural features, such as vegetation. While you can change the band combination by right-clicking the bands in the Contents pane, later parts of the workflow require imagery with only three bands, so you'll create a new image by extracting the three bands that you want to show from the original image.

  1. In the Contents pane, click the Louisville_Neighborhood layer to select it.
  2. On the ribbon, click the Imagery tab. In the Analysis group, click Raster Functions.

    Raster Functions

    The Raster Functions pane opens. Raster functions apply an operation to a raster image on the fly, meaning that the original data is unchanged and no new dataset is created. The output takes the form of a layer that exists only in the project in which the raster function was run. You'll use the Extract Bands function to create a new image with only three bands to distinguish between impervious and pervious surfaces.

  3. In the Raster Functions pane, search for and click the Extract Bands function.

    The Extract Bands function opens. The bands you extract will include Near Infrared (Band 4), which emphasizes vegetation; Red (Band 1), which emphasizes human-made objects and vegetation; and Blue (Band 3), which emphasizes water bodies.

  4. For Raster, choose the Louisville_Neighborhood image. Confirm that Method is set to Band IDs.

    Extract Bands Raster and Method parameters

    The Method parameter determines the type of keyword used to refer to bands when you enter the band combination. You can choose Band IDs, Band Names, or Band Wavelengths. For this data, Band IDs (a single number for each band) are the simplest way to refer to each band.

  5. For Combination, delete the existing text and type 4 1 3 (with spaces). Confirm that Missing Band Action is set to Best Match.

    Extract Bands Band and Combination parameters

    Tip:

    You can also choose the bands one by one using the Band parameter.

    The Missing Band Action parameter specifies what action occurs if a band listed for extraction is unavailable in the image. Best Match chooses the best available band to use instead, while Fail causes the function to fail.

  6. Click Create new layer.

    The new layer, called Extract_Bands_Louisville_Neighborhood, is added to the map. It displays only the extracted bands. The yellow Parcels layer covers the imagery and can make some features difficult to see. You won't need the Parcels layer until later in the project, so you'll turn it off for now.

  7. In the Contents pane, uncheck the Parcels layer box to turn it off.

    Extract Bands output

    The Extract Bands layer shows the imagery with the band combination that you chose (4 1 3). Vegetation appears as red, roads appear as gray, and roofs appear as shades of gray or blue. By emphasizing the difference between natural and human-made surfaces, you'll be able to more easily classify them later.

    Caution:

    Although the Extract Bands layer appears in the Contents pane, it has not been saved as data in any of your folders. If you remove the layer from the map, you'll lose it.
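Conceptually, Extract Bands just subsets and reorders the values stored for each pixel. The pure-Python sketch below illustrates the idea on a tiny flattened image; it is not the actual raster function, and the fallback shown is a simplified stand-in for the real Best Match behavior:

```python
def extract_bands(pixels, combination, band_count, best_match=True):
    """Keep only the requested bands (1-based band IDs) for each pixel.

    pixels: list of tuples, one value per band per pixel.
    combination: band IDs to keep, e.g. [4, 1, 3].
    best_match: if a requested band doesn't exist, substitute the
    nearest existing band ID instead of failing (a simplified stand-in
    for the Missing Band Action parameter).
    """
    indices = []
    for band in combination:
        if 1 <= band <= band_count:
            indices.append(band - 1)
        elif best_match:
            # Clamp the requested ID to the nearest existing band.
            indices.append(min(max(band, 1), band_count) - 1)
        else:
            raise ValueError(f"Band {band} is not in the image")
    return [tuple(px[i] for i in indices) for px in pixels]

# Two pixels of a 4-band image, stored as (Red, Green, Blue, NIR).
image = [(120, 130, 90, 200), (60, 80, 70, 40)]
print(extract_bands(image, [4, 1, 3], band_count=4))
# → [(200, 120, 90), (40, 60, 70)]
```

The combination 4 1 3 maps (NIR, Red, Blue) to the display's red, green, and blue channels, which is why vegetation, highly reflective in near infrared, appears red in the output.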

Configure the Classification Wizard

Next, you'll open the Classification Wizard and configure its default parameters. The Classification Wizard walks you through the steps for image segmentation and classification.

  1. In the Contents pane, make sure that the Extract_Bands_Louisville_Neighborhood layer is selected.
  2. On the Imagery tab, in the Image Classification group, click the Classification Wizard button.

    Classification Wizard

    Note:

    If you want to open the individual tools available in the wizard, you can access them from the same tab. In the Image Classification group, click Classification Tools and choose the tool you want.

    The Image Classification Wizard pane opens. The wizard's first page (indicated by the blue circle at the top of the wizard) contains several basic parameters that determine the type of classification to perform. These parameters affect which subsequent steps appear in the wizard. You'll use the supervised classification method, which relies on user-defined training samples that indicate how certain pixels or segments should be classified. (An unsupervised classification, by contrast, relies on the software to group pixels algorithmically, without training samples.)

  3. Confirm that Classification Method is set to Supervised and that Classification Type is set to Object based.

    The object-based classification type uses a process called segmentation to group neighboring pixels based on the similarity of their spectral characteristics. Next, you'll choose the classification schema, a file that specifies the classes that will be used in the classification. A schema is saved in an Esri classification schema (.ecs) file, which uses JSON syntax. For this workflow, you'll modify the default schema, NLCD2011, which is based on land cover types used by the United States Geological Survey.

  4. For Classification Schema, choose Use default schema.

    Classification Schema

    The next parameter determines the output location, which is the workspace that stores all the outputs created in the wizard. These outputs include training data, segmented images, custom schemas, accuracy assessment information, intermediate products, and the final classification results.

  5. Confirm that Output Location is set to Neighborhood_Data.gdb.

    You won't enter anything for Segmented Image, because you'll create a new segmented image in the next step. Likewise, you'll create new training samples using the wizard, so you'll leave the Training Samples parameter blank. The last parameter is Reference Dataset. A reference dataset contains known classes and tests the accuracy of a classification. You haven't classified this data before, so you don't have a reference dataset for it. You'll test your classification's accuracy later in the workflow.

  6. Click Next.
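To make "supervised" concrete, here is a toy nearest-centroid classifier in plain Python. It is a drastic simplification of the wizard's classifiers, and the class names and spectral values are made up for illustration:

```python
def train(samples):
    """Compute one mean spectral signature (centroid) per class.

    samples: {class_name: [spectral tuples drawn by the analyst]}
    """
    centroids = {}
    for name, sigs in samples.items():
        n = len(sigs)
        centroids[name] = tuple(sum(v) / n for v in zip(*sigs))
    return centroids

def classify(segment, centroids):
    """Assign a segment's mean signature to the closest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: dist2(segment, centroids[name]))

# Hypothetical training samples: mean (NIR, Red, Blue) values per class.
samples = {
    "Vegetation": [(200, 60, 40), (190, 70, 50)],
    "Roof":       [(60, 120, 130), (70, 110, 140)],
}
centroids = train(samples)
print(classify((195, 65, 45), centroids))  # → Vegetation
```

The training samples you draw in the wizard play the role of the `samples` dictionary here: they tell the classifier what each class "looks like" spectrally, and every segment is then assigned to the class it most resembles.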

Segment the image

Next, you'll group adjacent pixels with similar spectral characteristics into segments. Doing so will generalize the image and make it easier to classify. Instead of classifying thousands of pixels with unique spectral signatures, you'll classify a much smaller number of segments. The optimal number of segments, and the range of pixels grouped into a segment, changes depending on the image size and the intended use of the image.

To control how your imagery is segmented, you'll adjust three parameters. The first parameter is Spectral detail. It sets the level of importance given to spectral differences between pixels on a scale of 1 to 20. A higher value means that pixels must be more similar to be grouped together, creating a higher number of segments. A lower value creates fewer segments. Because you want to distinguish between pervious and impervious surfaces (which generally have very different spectral signatures), you'll use a lower value.

  1. For Spectral detail, replace the default value with 8.

    The next parameter is Spatial detail. It sets the level of importance given to the proximity between pixels on a scale of 1 to 20. A higher value means that pixels must be closer to each other to be grouped together, creating a higher number of segments. A lower value creates fewer segments that are more uniform throughout the image. You'll use a low value because not all similar features in your imagery are clustered together. For example, houses and roads are not always close together and are located throughout the full image extent.

  2. For Spatial detail, replace the default value with 2.

    The next parameter is Minimum segment size in pixels. Unlike the other parameters, this parameter is not on a scale of 1 to 20. Segments with fewer pixels than the value specified in this parameter will be merged into a neighboring segment. You don't want segments that are too small, but you also don't want to merge pervious and impervious segments into one segment. The default value will be acceptable in this case.

  3. For Minimum segment size in pixels, confirm that the value is 20.

    The final parameter, Show Segment Boundaries Only, determines whether the segments are displayed with black boundary lines. This is useful for distinguishing adjacent segments with similar colors but may make smaller segments more difficult to see. Some of the features in the image, such as the houses or driveways, are fairly small, so you'll leave this parameter unchecked.

  4. Confirm that Show Segment Boundaries Only is unchecked.

    Show Segment Boundaries Only unchecked

  5. Click Next.

    A preview of the segmentation is added to the map. It is also added to the Contents pane with the name Preview_Segmented.

    Segmentation preview

    At the full extent, the output layer does not appear to have been segmented the way you wanted. Features such as vegetation seem to have been grouped into many segments that blur together, especially on the left side of the image. Tiny segments that seem to encompass only a handful of pixels dot the area as well. However, this image is being generated on the fly, which means the processing will change based on the map extent. At full extent, the image is generalized to save time. You'll zoom in to reduce the generalization, so you can better see what the segmentation looks like with the parameters you chose.

  6. Zoom to the neighborhood in the middle of the image.

    Zoomed segmentation preview

    The segmentation runs again. With a smaller map extent, the segmentation more accurately reflects the parameters you used, with fewer segments and smoother outputs.

    Note:

    If you're unhappy with how the segmentation turned out, you can return to the previous page of the wizard and adjust the parameters. Because processing the full segmentation can take a long time, it is only previewed on the fly, so it's worth testing different combinations of parameters until you find a result you like.

  7. On the Quick Access Toolbar, click the Save button to save the project.
    Caution:

    Saving the project does not save your location in the wizard. If you close the project before you complete the entire wizard, you'll lose your spot and have to start the wizard over from the beginning. Avoid closing the software before moving to the next lesson.
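The ideas behind these parameters can be sketched in plain Python. Below is a toy, single-band region-growing segmentation; the wizard's algorithm is far more sophisticated. Note that `spectral_tol` is a tolerance, so it plays the inverse role of Spectral detail (a smaller tolerance, like a higher Spectral detail value, produces more segments), while `min_size` mimics Minimum segment size in pixels:

```python
from collections import Counter

def segment(grid, spectral_tol, min_size):
    """Group 4-connected pixels whose values differ by <= spectral_tol,
    then merge segments smaller than min_size into a neighboring segment.

    grid: 2D list of single-band pixel values.
    Returns a 2D list of segment IDs.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Flood fill: grow a segment outward from this seed pixel.
            stack, labels[r][c] = [(r, c)], next_id
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and abs(grid[ny][nx] - grid[y][x]) <= spectral_tol):
                        labels[ny][nx] = next_id
                        stack.append((ny, nx))
            next_id += 1
    # Crude minimum-size pass: fold undersized segments into a neighbor.
    sizes = Counter(l for row in labels for l in row)
    for r in range(rows):
        for c in range(cols):
            if sizes[labels[r][c]] < min_size:
                for ny, nx in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] != labels[r][c]):
                        labels[r][c] = labels[ny][nx]
                        break
    return labels

print(segment([[10, 10, 50], [10, 10, 50]], spectral_tol=5, min_size=1))
# → [[0, 0, 1], [0, 0, 1]]
```

With a tolerance of 5, the 10s and 50s fall into two segments; raising the tolerance to 50 would merge the whole grid into one segment, just as lowering Spectral detail merges more of your imagery together.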

In this lesson, you extracted spectral bands to emphasize the distinction between pervious and impervious features. You then grouped pixels with similar spectral characteristics into segments, simplifying the image so that features can be more accurately classified by broad land-use types. In the next lesson, you'll classify the imagery by perviousness or imperviousness.