Segment the imagery

To determine which parts of the ground are pervious and impervious, you'll classify the imagery into land-use types. Impervious surfaces are generally human-made: buildings, roads, parking lots, brick, or asphalt. Pervious surfaces include vegetation, water bodies, and bare soil. However, if you try to classify an image in which almost every pixel has a unique combination of spectral characteristics, you're likely to encounter errors and inaccuracies.

Before you classify the imagery, you'll change the band combination to distinguish features clearly. Then, you'll group pixels into segments, which will generalize the image and significantly reduce the number of spectral signatures to classify. Once you segment the imagery, you'll perform a supervised classification of the segments. You'll first classify the image into broad land-use types, such as roofs or vegetation. Then, you'll reclassify those land-use types into either impervious or pervious surfaces.

Download and open the project

Before you begin, you'll download data supplied by the local government of Louisville, Kentucky. This data includes imagery of the study area and land parcel features.

  1. Download the Surface_Imperviousness compressed folder.
  2. Locate the downloaded file on your computer.
    Note:

    Depending on your web browser, you may have been prompted to choose the file's location before you began the download. Most browsers download to your computer's Downloads folder by default.

  3. Right-click the file and extract it to a location you can easily find, such as your Documents folder.
  4. Open the Surface_Imperviousness folder.

    Surface_Imperviousness folder

    The folder contains several subfolders, an ArcGIS Pro project file (.aprx), and an ArcGIS Toolbox (.tbx). Before you explore the other data, you'll open the project file.

  5. If you have ArcGIS Pro installed on your machine, double-click Surface Imperviousness (without the underscore) to open the project file. If prompted, sign in using your licensed ArcGIS account.
    Note:

    If you don't have ArcGIS Pro or an ArcGIS account, you can sign up for an ArcGIS free trial.

    Default project

    The project contains a map of a neighborhood near Louisville, Kentucky. The map includes a 6-inch resolution, 4-band aerial photograph of the area and a feature class of land parcels. Next, you'll look at the rest of the data that you downloaded.

  6. In the Catalog pane, expand Folders and expand the Surface_Imperviousness folder.
    Note:

    If the Catalog pane is not open, go to the ribbon and click the View tab. In the Windows group, click the Catalog arrow and choose Catalog Pane.

    Project data

    The other folders that you downloaded are connected to and can be accessed within the Surface Imperviousness project. The Index folder contains project metadata and reusable templates. The other folders contain the data, files, and tools you'll use during the project.

  7. Expand the Louisville_Imagery folder, the Training_Samples folder, and the Neighborhood_Data geodatabase.

    Project data expanded

    The Louisville_Neighborhood TIFF image and the Parcels feature class are already on the map. The Louisville_Training_Samples shapefile and the Accuracy_Points feature class are premade versions of data you'll create during your analysis (you'll learn more about them later).

Extract spectral bands

The multiband imagery of the Louisville neighborhood currently uses the natural color band combination to display the imagery the way the human eye would see it. You'll change the band combination to better distinguish urban features such as concrete from natural features such as vegetation. While you can change the band combination by right-clicking the bands in the Contents pane, later parts of the workflow will require you to use imagery with only three bands. You'll create a new image by extracting the three bands that you want to show from the original image.

  1. In the Contents pane, click the Louisville_Neighborhood layer to select it.
  2. On the ribbon, click the Imagery tab. In the Analysis group, click Raster Functions.

    Raster Functions

    The Raster Functions pane opens. Raster functions apply an operation to a raster image on the fly, meaning that the original data is unchanged and no new dataset is created. The output takes the form of a layer that exists only in the project in which the raster function was run. You'll use the Extract Bands function to create a new image with only three bands to distinguish between impervious and pervious surfaces.

  3. In the Raster Functions pane, search for and click the Extract Bands function.

    The Extract Bands function opens. The bands you extract will include Near Infrared (Band 4), which emphasizes vegetation; Red (Band 1), which emphasizes human-made objects and vegetation; and Blue (Band 3), which emphasizes water bodies.

  4. For Raster, choose the Louisville_Neighborhood image. Confirm that Method is set to Band IDs.

    Extract Bands Raster and Method parameters

    The Method parameter determines the type of keyword used to refer to bands when you enter the band combination. You can choose Band IDs, Band Names, or Band Wavelengths. For this data, Band IDs (a single number for each band) are the simplest way to refer to each band.

  5. For Combination, delete the existing text and type 4 1 3 (with spaces). Confirm that Missing Band Action is set to Best Match.

    Extract Bands Band and Combination parameters

    Tip:

    You can also choose the bands one by one using the Band parameter.

    The Missing Band Action parameter specifies what action occurs if a band listed for extraction is unavailable in the image. Best Match chooses the best available band to use instead, while Fail causes the function to fail.

  6. Click Create new layer.

    The new layer, called Extract_Bands_Louisville_Neighborhood, is added to the map. It displays only the extracted bands. The yellow Parcels layer covers the imagery and can make some features difficult to see. You won't need the Parcels layer until later in the project, so you'll turn it off for now.

  7. In the Contents pane, uncheck the Parcels layer box to turn it off.

    Extract Bands output

    The Extract Bands layer shows the imagery with the band combination that you chose (4 1 3). Vegetation appears as red, roads appear as gray, and roofs appear as shades of gray or blue. By emphasizing the difference between natural and human-made surfaces, you'll be able to more easily classify them later.

    Caution:

    Although the Extract Bands layer appears in the Contents pane, it has not been saved as a dataset in any of your folders. If you remove the layer from the map, it is gone and you'll need to run the raster function again to re-create it.
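
    If you'd rather script this step than use the Raster Functions pane, the Image Analyst module in ArcPy exposes a comparable ExtractBand function. The sketch below is only an illustration: the file paths are placeholders, and it assumes an Image Analyst license and the band ID signature described in the ArcPy documentation.

        import arcpy
        from arcpy.ia import ExtractBand

        arcpy.CheckOutExtension("ImageAnalyst")
        arcpy.env.workspace = r"C:\Data\Surface_Imperviousness"  # placeholder path

        # Band IDs 4, 1, 3 correspond to Near Infrared, Red, and Blue in this imagery.
        extracted = ExtractBand("Louisville_Imagery/Louisville_Neighborhood.tif", [4, 1, 3])

        # The raster function output is on the fly; save it only if you want a dataset on disk.
        extracted.save("Extract_Bands_Louisville.tif")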

Configure the Classification Wizard

Next, you'll open the Classification Wizard and configure its default parameters. The Classification Wizard walks you through the steps for image segmentation and classification.

  1. In the Contents pane, make sure that the Extract_Bands_Louisville_Neighborhood layer is selected.
  2. On the Imagery tab, in the Image Classification group, click the Classification Wizard button.

    Classification Wizard

    Note:

    If you want to open the individual tools available in the wizard, you can access them from the same tab. In the Image Classification group, click Classification Tools and choose the tool you want.

    The Image Classification Wizard pane opens. The wizard's first page (indicated by the blue circle at the top of the wizard) contains several basic parameters that determine the type of classification to perform. These parameters affect which subsequent steps will appear in the wizard. You'll use the supervised classification method. This method is based on user-defined training samples, which indicate what types of pixels or segments should be classified in what way. (An unsupervised classification, by contrast, relies on the software to decide classifications based on algorithms.)

  3. Confirm that Classification Method is set to Supervised and that Classification Type is set to Object based.

    The object based classification type uses a process called segmentation to group neighboring pixels based on the similarity of their spectral characteristics. Next, you'll choose the classification schema. The classification schema is a file that specifies the classes that will be used in the classification. A schema is saved in an Esri classification schema (.ecs) file, which uses JSON syntax. For this workflow, you'll modify the default schema, NLCD2011. This schema is based on land cover types used by the United States Geological Survey.

  4. For Classification Schema, choose Use default schema.

    Classification Schema

    The next parameter determines the output location, which is the workspace that stores all the outputs created in the wizard. These outputs include training data, segmented images, custom schemas, accuracy assessment information, intermediate outputs, and the final classification results.

  5. Confirm that Output Location is set to Neighborhood_Data.gdb.

    You won't enter anything for Segmented Image, because you'll create a new segmented image in the next step. Likewise, you'll create new training samples using the wizard, so you'll leave the Training Samples parameter blank. The last parameter is Reference Dataset. A reference dataset contains known classes and tests the accuracy of a classification. You haven't classified this data before, so you don't have a reference dataset for it. You'll test your classification's accuracy later in the workflow.

  6. Click Next.

Segment the image

Next, you'll group adjacent pixels with similar spectral characteristics into segments. Doing so will generalize the image and make it easier to classify. Instead of classifying thousands of pixels with unique spectral signatures, you'll classify a much smaller number of segments. The optimal number of segments, and the range of pixels grouped into a segment, changes depending on the image size and the intended use of the image.

To control how your imagery is segmented, you'll adjust three parameters. The first parameter is Spectral detail. It sets the level of importance given to spectral differences between pixels on a scale of 1 to 20. A higher value means that pixels must be more similar to be grouped together, creating a higher number of segments. A lower value creates fewer segments. Because you want to distinguish between pervious and impervious surfaces (which generally have very different spectral signatures), you'll use a lower value.

  1. For Spectral detail, replace the default value with 8.

    The next parameter is Spatial detail. It sets the level of importance given to the proximity between pixels on a scale of 1 to 20. A higher value means that pixels must be closer to each other to be grouped together, creating a higher number of segments. A lower value creates fewer segments that are more uniform throughout the image. You'll use a low value because not all similar features in your imagery are clustered together. For example, houses and roads are not always close together and are located throughout the full image extent.

  2. For Spatial detail, replace the default value with 2.

    The next parameter is Minimum segment size in pixels. Unlike the other parameters, this parameter is not on a scale of 1 to 20. Segments with fewer pixels than the value specified in this parameter will be merged into a neighboring segment. You don't want segments that are too small, but you also don't want to merge pervious and impervious segments into one segment. The default value will be acceptable in this case.

  3. For Minimum segment size in pixels, confirm that the value is 20.

    The final parameter, Show Segment Boundaries Only, determines whether the segments are displayed with black boundary lines. This is useful for distinguishing adjacent segments with similar colors but may make smaller segments more difficult to see. Some of the features in the image, such as the houses or driveways, are fairly small, so you'll leave this parameter unchecked.

  4. Confirm that Show Segment Boundaries Only is unchecked.

    Show Segment Boundaries Only unchecked

  5. Click Next.

    A preview of the segmentation is added to the map. It is also added to the Contents pane with the name Preview_Segmented.

    Segmentation preview

    At the full extent, the output layer does not appear to have been segmented the way you wanted. Features such as vegetation seem to have been grouped into many segments that blur together, especially on the left side of the image. Tiny segments that seem to encompass only a handful of pixels dot the area as well. However, this image is being generated on the fly, which means the processing will change based on the map extent. At full extent, the image is generalized to save time. You'll zoom in to reduce the generalization, so you can better see what the segmentation looks like with the parameters you chose.

  6. Zoom to the neighborhood in the middle of the image.

    Zoomed segmentation preview

    The segmentation runs again. With a smaller map extent, the segmentation more accurately reflects the parameters you used, with fewer segments and smoother outputs.

    Note:

    If you're unhappy with how the segmentation turned out, you can return to the previous page of the wizard and adjust the parameters. The segmentation is previewed on the fly because processing the full segmentation can take a long time, so it's worth testing different parameter combinations until you find a result you like.
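
    For reference, the same segmentation can also be produced outside the wizard with the Segment Mean Shift geoprocessing tool, using the parameter values you chose here. The sketch below is an approximation: the layer name and output path are assumptions, and it requires a Spatial Analyst or Image Analyst license.

        import arcpy
        from arcpy.sa import SegmentMeanShift

        arcpy.CheckOutExtension("Spatial")

        # Spectral detail 8, spatial detail 2, minimum segment size 20 pixels,
        # matching the values entered in the Classification Wizard.
        segmented = SegmentMeanShift("Extract_Bands_Louisville_Neighborhood", 8, 2, 20)

        segmented.save(r"C:\Data\Neighborhood_Data.gdb\Segmented_Louisville")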

  7. On the Quick Access Toolbar, click the Save button to save the project.
    Caution:

    Saving the project does not save your location in the wizard. If you close the project before you complete the entire wizard, you'll lose your spot and have to start the wizard over from the beginning. Avoid closing the software before moving to the next lesson.

In this lesson, you extracted spectral bands to emphasize the distinction between pervious and impervious features. You then grouped pixels with similar spectral characteristics into segments, simplifying the image so that features can be more accurately classified by broad land-use types. In the next lesson, you'll classify the imagery by perviousness or imperviousness.


Classify the imagery

In the previous lesson, you segmented the imagery to simplify it for classification. Next, you'll perform a supervised classification of the segments. A supervised classification is based on user-defined training samples, which indicate what types of pixels or segments should be classified in what way. (An unsupervised classification, by contrast, relies on the software to decide classifications based on algorithms.) You'll first classify the image into broad land-use types, such as vegetation or roads. Then, you'll reclassify those land-use types into either pervious or impervious surfaces.

Create training samples

To perform a supervised classification, you need training samples. Training samples are polygons that represent distinct sample areas of the different land-cover types in the imagery. They indicate that segments with similar spectral characteristics should be classified together as the same land-use type. First, you'll modify the default schema (which you chose when you configured the wizard) to contain two parent classes: Impervious and Pervious. Then, you'll add subclasses to each class that represent types of land cover. If you attempted to classify the segmented image into only pervious and impervious surfaces, the classification would be too generalized and would likely have many errors. By classifying the image based on more specific land-use types, you'll create a more accurate classification. Later, you'll be able to reclassify these subclasses into their parent classes.

  1. On the Training Samples Manager page of the wizard, right-click each of the default classes and click Remove Class. For each class, click Yes in the Remove Class window.
  2. Right-click NLCD2011 and choose Add New Class.

    Add New Class

  3. In the Add New Class window, for Name, type Impervious. For Value, type 20, and for Color, choose Gray 30%. Click OK.

    Settings for Impervious class

  4. Right-click NLCD2011 again and choose Add New Class. Add a class named Pervious with a value of 40 and a color of Quetzal Green.

    Settings for Pervious class

    Next, you'll add a subclass for gray roof surfaces.

  5. Right-click the Impervious parent class and choose Add New Class. Add a class named Gray Roofs with a value of 21 and a color of Gray 50%.

    Next, you'll create a training sample on the map using this class.

  6. Click the Gray Roofs class to select it. Then, click the Polygon button.

    Polygon button

  7. Zoom to the cul-de-sac to the northwest of the neighborhood.
    Tip:

    You can enable navigation tools while the Polygon tool is active by holding down the C key.

    Northwest neighborhood

  8. On the northernmost roof in the cul-de-sac, draw a polygon. Make sure the polygon covers only pixels that comprise the roof.

    Training sample

    A row is added to the wizard for your new training sample.

    Row added to wizard

    When creating training samples, you want to cover a high number of pixels for each land-use type. For now, you'll create more training samples to represent the roofs of the houses.

  9. Draw more polygons on some of the nearby houses.

    Training samples

    Every training sample that you make is added to the wizard. Although you have only drawn training samples on roofs, each training sample currently exists as its own class. You'll eventually want all gray roofs to be classified as the same value, so you'll merge the training samples that you've created into one class.

  10. In the wizard, click the first row to select it. Press Shift and click the last row to select all the training samples.
  11. Above the list of training samples, click the Collapse button.

    Collapse button

    The training samples collapse into one class. You can continue to add more training samples for gray roofs and merge them into the Gray Roofs class. Ultimately, the Gray Roofs class should have training samples on roofs throughout the entire image (not every roof needs a training sample, but more coverage is more likely to yield a satisfactory classification).

  12. Create two more impervious subclasses based on the following table:

    Subclass     Value   Color
    Roads        22      Cordovan Brown
    Driveways    23      Nubuck Tan

    Impervious subclasses

  13. Create four pervious subclasses based on the following table:

    Subclass     Value   Color
    Bare Earth   41      Medium Yellow
    Grass        42      Medium Apple
    Water        43      Cretan Blue
    Shadows      44      Sahara Sand

    Pervious subclasses

    Note:

    These seven classes are specific to the land-use types for this image. Images of different locations may have different types of land use or ground features that should be represented in a classification. For example, a different location may have houses with both gray roofs and red roofs. Because the spectral signatures of both roof types are very different, it would be more accurate to classify gray roofs and red roofs as two classes.

    Shadows are not actual surfaces and cannot be either pervious or impervious. However, shadows are usually cast by tall objects such as houses or trees and are more likely to cover grass or bare earth, which are pervious surfaces. Some shadows cover roads or driveways, but you'll factor these into your accuracy assessment later in the workflow.

  14. Draw training samples throughout the image to represent these seven main land-use types. Zoom and pan throughout the image as needed.

    Training samples

  15. Collapse training samples that represent the same types of land use into one class.

    Collapsed classes

  16. When you're satisfied with your training samples, click the Save button.

    Save button

    Your customized classification schema is saved in case you want to use it again.

  17. Click Next.

Classify the image

Now that you've created the training samples, you'll choose the classification method. Each classification method uses a different statistical process involving your training samples. You'll use the Support Vector Machine classifier, which can handle larger images and is less susceptible to discrepancies in your training samples. Then, you'll train the classifier with your training samples and create a classifier definition file. This file will be used during the classification. Once you create the file, you'll classify the image. Lastly, you'll reclassify the pervious and impervious subclasses into their parent classes, creating a raster with only two classes.

  1. Confirm that Classifier is set to Support Vector Machine.

    For the next parameter, you can specify the maximum number of samples to use for defining each class. Setting the value to 0 removes the limit, so all of your training samples are used.

  2. For Maximum Number of Samples per Class, type 0.

    Settings for Classifier and Maximum Number of Samples per Class

    Lastly, you have the option to choose statistical attributes to include in the attribute table of any raster dataset created using the classifier. While these statistics can be interesting, you won't need any of them for your purposes, so you'll leave the default parameters unchanged. Next, you'll train the classifier and display a preview.

  3. Click Run.

    This step may take a while, because several operations run in sequence. First, the image is segmented (previously, you only segmented the image on the fly, which isn't permanent). Then, the classifier is trained and the classification is performed. When the process finishes, a preview of the classification is displayed on the map.

    Classification preview

    Depending on your training samples, your classification preview should appear to be fairly accurate (the colors in the dataset correspond to the colors you chose for each training sample class). However, you may notice that some features were classified incorrectly. For instance, in the example image, the muddy pond south of the neighborhood was incorrectly classified as a gray roof, when it is actually water. Classification is not an exact science and rarely will every feature be classified correctly. However, because this classification will be used to determine storm water fees for landowners, a high degree of accuracy is expected. If you see only a few inaccuracies, you can correct them manually later in the wizard. If you see a large number of inaccuracies, you may need to create more training samples. Later, you'll run tools to assess the accuracy of your classification.

    Note:

    To go back in the wizard and create more training samples, click the Previous button until you return to the correct page.
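
    The wizard handles training and classification for you, but the equivalent Spatial Analyst tools can also be scripted. The sketch below is illustrative only: the segmented image, training sample, and output paths are assumptions, and the value of 0 mirrors the Maximum Number of Samples per Class setting you entered earlier. Check the parameter order against the tool reference for your version.

        import arcpy
        from arcpy.sa import TrainSupportVectorMachineClassifier, ClassifyRaster

        arcpy.CheckOutExtension("Spatial")

        # Train the SVM classifier on the segmented image using your training samples.
        # The third argument is the output classifier definition (.ecd) file;
        # None skips the optional additional raster, and 0 means use all samples.
        TrainSupportVectorMachineClassifier(
            r"C:\Data\Neighborhood_Data.gdb\Segmented_Louisville",
            r"C:\Data\Training_Samples\Louisville_Training_Samples.shp",
            r"C:\Data\louisville_svm.ecd",
            None,
            0,
        )

        # Apply the trained classifier definition to produce the classified raster.
        classified = ClassifyRaster(
            r"C:\Data\Neighborhood_Data.gdb\Segmented_Louisville",
            r"C:\Data\louisville_svm.ecd",
        )
        classified.save(r"C:\Data\Classified_Louisville.tif")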

  4. If you're satisfied with the classification preview, click Next.

    The next page is the Classify page. You'll use this page to run the actual classification and save it in your geodatabase.

  5. For Output Classified Dataset, change the output name to Classified_Louisville.tif.

    The remaining parameters are optional. They allow you to create additional outputs, such as a classifier definition file or a segmented image. You've already created these files, so you don't need to create them again.

  6. Leave the remaining parameters unchanged and click Run.

    The process runs and the classified raster is added to the map. It looks similar to the preview.

  7. Click Next.

    The next page is the Merge Classes page. You'll use this page to merge subclasses into their parent classes. Your raster currently has seven classes, each representing a type of land use. While these classes were essential for an accurate classification, you're only interested in whether each class is pervious or impervious. You'll merge the subclasses into the Pervious and Impervious parent classes to create a raster with only two classes.

  8. For each class, in the New Class column, choose either Pervious or Impervious.

    New Class column

    When you change the first class, a preview is added to the map. The preview shows what the reclassified image will look like. When you change all of the classes, the preview should only have two classes, representing pervious and impervious surfaces.

  9. Click Next.

Reclassify errors

The final page of the wizard is the Reclassifier page. This page includes tools for reclassifying small errors in the raster dataset. You'll use this page to fix an incorrect classification in your raster.

  1. In the Contents pane, uncheck all layers except the Preview_Reclass and Louisville_Neighborhood.tif layers. Click the Preview_Reclass layer to select it.
  2. On the ribbon, click the Appearance tab. In the Effects group, click Swipe.

    Swipe

  3. Drag the pointer across the map to visually compare the preview to the original neighborhood imagery.

    One inaccuracy that you may notice is the muddy pond south of the neighborhood. Because the pond is muddy, it has a different spectral signature than the other water bodies on the map, so it will likely be classified incorrectly even with thorough training samples. This pond is not connected to any other impervious objects, so you can reclassify it with relative ease.

  4. Zoom to the muddy pond area.

    Muddy pond area

  5. In the wizard, click Reclassify within a region.

    Reclassify within a region

    With this tool, you can draw a polygon on the map and reclassify everything within the polygon.

  6. In the Remap Classes section, confirm that Current Class is set to Any. Change New Class to Pervious.

    Remap Classes

    With these settings, any pixels in the polygon will be reclassified to pervious surfaces. Next, you'll reclassify the muddy pond.

  7. Draw a polygon around the muddy pond. Make sure you don't include any other impervious surfaces in the polygon.

    Polygon drawn around the muddy pond

    The pond is immediately reclassified as a pervious surface.

    Reclassified pond

    Note:

    If you make a mistake, you can undo the reclassification by unchecking it in the Edits Log.

    While you likely noticed other inaccuracies in your classification, for the purposes of this lesson, you won't make any more edits.

  8. For Final Classified Dataset, type Louisville_Impervious.tif (including the .tif extension).
  9. Click Run. Then, click Finish.

    Reclassify output

    The tool runs and the reclassified raster is added to the map.

  10. On the Quick Access Toolbar, click Save to save the project.

In this lesson, you classified imagery of a neighborhood in Louisville to determine land cover that was pervious and land cover that was impervious. In the next lesson, you'll perform an accuracy assessment on your classification to determine if it is within an acceptable range of error. Then, you'll calculate the area of impervious surfaces per land parcel so the local government can assign storm water fees.


Calculate impervious surface area

In the previous lesson, you classified an image to show impervious surfaces. In this lesson, you'll assess the accuracy of your classification by statistically comparing it to the original image. After confirming that the classification has an acceptable accuracy, you'll calculate the area of impervious surface per parcel and symbolize the parcels accordingly.

Create accuracy assessment points

Visually comparing the classified image to the original doesn't provide a statistical measurement of the classification's accuracy. Because storm water bills will be based on your analysis, you'll perform a more rigorous assessment by creating randomly generated accuracy assessment points throughout the image. You'll then compare the classified value of the image at the location of each point with the actual land-use type, or ground truth, of the original image.

  1. If necessary, open the Surface Imperviousness project in ArcGIS Pro. In the Catalog pane, expand the Tasks folder and open the Calculate Surface Imperviousness task.
  2. In the Tasks pane, expand the Assess Classification Accuracy task group. Double-click the Create accuracy assessment points task to open it.

    Create accuracy assessment points task

    The first step of the task opens the Create Accuracy Assessment Points tool. This tool generates random points throughout an image and gives the points an attribute based on the classified value of the image at the point's location. The accuracy assessment points will also have a field for the ground truth of the original image, which you'll manually fill in for each point.

  3. For Input Raster or Feature Class Data, choose the Louisville_Impervious layer.
  4. For Output Accuracy Assessment Points, click the Browse button. Browse to the Neighborhood_Data geodatabase and save the output layer as My_Accuracy_Points.

    Create Accuracy Assessment Points parameters

    Next, you'll determine the characteristics of the points. The Target Field parameter determines whether the attribute table of the points describes the classification value or the ground truth value. Your input image is the classified raster, so the points should contain the classification values. The Number of Random Points parameter determines how many points are created. For a small image with only two classes, a relatively small number of points is acceptable.

    Lastly, the Sampling Strategy parameter determines how points are randomly distributed across the image. The points can be distributed proportionally to the area of each class, equally between each class, or completely at random. Because your primary interest is in the accuracy of impervious surfaces (the smaller of the two classes), you'll distribute the points equally between the classes to better represent impervious surfaces in the assessment.

  5. Change the remaining parameters:

    • Target Field: Classified
    • Number of Random Points: 100
    • Sampling Strategy: Equalized stratified random

    Create Accuracy Assessment Points parameters

  6. Click Run.

    Create Accuracy Assessment Points output

    One hundred accuracy points are added to the map (they may be difficult to see in the example image) and the task continues to the next step. The tool also added attributes to the points. Specifically, the points attribute table contains the class value of the classified image for each point location. You'll now use the accuracy points data to compare the classified image to the ground truth of the original image.
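
    If you prefer to script this step instead of using the task, the same tool is available in ArcPy. The sketch below is a rough equivalent of the parameters you just set; the paths are placeholders, and the keyword strings follow the Spatial Analyst tool reference.

        import arcpy
        from arcpy.sa import CreateAccuracyAssessmentPoints

        arcpy.CheckOutExtension("Spatial")

        # 100 points, equalized stratified random sampling,
        # carrying the classified value of Louisville_Impervious at each location.
        CreateAccuracyAssessmentPoints(
            r"C:\Data\Louisville_Impervious.tif",
            r"C:\Data\Neighborhood_Data.gdb\My_Accuracy_Points",
            "CLASSIFIED",
            100,
            "EQUALIZED_STRATIFIED_RANDOM",
        )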

  7. In the Contents pane, right-click the My_Accuracy_Points layer and choose Attribute Table.

    Open Attribute Table

    The attribute table opens.

    Attribute Table

    Other than the ObjectID and Shape fields, the points have two attributes: Classified and GrndTruth (or Ground Truth). The Classified field has values that are either 20 or 40. These numbers represent the classes in the image: 20 is impervious; 40 is pervious. For the GrndTruth field, however, every value is -1 by default. You'll edit the GrndTruth attributes to either 20 or 40 depending on the type of terrain that the point covers in the original image.

  8. In the Contents pane, uncheck all layers except My_Accuracy_Points and Louisville_Neighborhood.tif.
  9. In the attribute table, click the row header (the small gray square) next to the first record to select the feature. Right-click the row header and choose Zoom To.

    Zoom To

    The map zooms to the selected point. (Your point will be in a different location than the point in the example.)

    Accuracy assessment point

    In this example, the point appears to be on either grass or bare earth. Either way, the surface is pervious, so you would change the GrndTruth attribute for this point to 40. If your first point appears to be on an impervious surface such as a road or roof, you'll change the GrndTruth attribute to 20 instead.

    Note:

    Depending on your map extent and the location of the point, you may not have zoomed close enough to the point to determine what kind of ground cover it is on. Feel free to zoom closer to better determine the point's ground truth.

  10. In the attribute table, in the GrndTruth column, double-click the value for the selected feature to edit it. Replace the default value with either 40 or 20, depending on the point's location, and press Enter.

    Attribute table

  11. Select the next point in the attribute table. Right-click the point and choose Pan To.

    The map pans to the corresponding point.

  12. Depending on the location of the point, change the GrndTruth value to either 20 or 40.

    It may be difficult to tell the ground truth for some of the points due to ambiguous features on the map. The most rigorous accuracy assessment would involve on-site verification of accuracy assessment points, but in many cases traveling to the actual location being analyzed is infeasible. Edit each point with your best guess based on the imagery.

  13. Repeat the process for the first ten points.

    Under normal circumstances, you would need to examine and edit each accuracy point. However, to save time in this lesson, you will not continue to repeat this process for the rest of your points. The data that you downloaded at the beginning of the project includes an accuracy assessment point feature class with the GrndTruth field populated for you. You'll use the provided feature class for subsequent analysis in this lesson.

  14. Close the attribute table. In the Contents pane, right-click the Louisville_Neighborhood layer and choose Zoom To Layer.

    The map returns to the full extent of the imagery.

  15. In the Tasks pane, click Next Step.

    Although you'll use the provided accuracy points for the remainder of the project, you'll still save the edits that you made to your own points.

  16. Click Run. In the Save Edits window, click Yes to save all edits.
  17. In the Tasks pane, click Finish.

Compute a confusion matrix

After creating accuracy assessment points and populating their attributes with ground truth data, you'll use the points to create a confusion matrix. A confusion matrix is a table that compares the Classified and GrndTruth attributes of accuracy assessment points and determines the percentage of accuracy between them. If the areas that were classified as impervious actually represent impervious areas in the original imagery, the confusion matrix will have a high percentage and indicate high accuracy of the classification.

  1. In the Tasks pane, double-click the Compute a confusion matrix task to open it.

    Compute a confusion matrix task

    The task opens the Compute Confusion Matrix tool. The tool has only two parameters: an input and an output.

  2. For Input Accuracy Assessment Points, click Browse. Browse to the Neighborhood_Data geodatabase and select Accuracy_Points.
  3. For Output Confusion Matrix, click Browse. Save the output in the Neighborhood_Data geodatabase as Confusion_Matrix.

    Compute Confusion Matrix tool

  4. Click Finish.

    The tool runs and the confusion matrix is added to the Contents pane. Because the confusion matrix is a table with no spatial data, it does not appear on the map.
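
    For reference, the same comparison can be scripted with the Compute Confusion Matrix tool in ArcPy. A short sketch, with assumed paths:

        import arcpy
        from arcpy.sa import ComputeConfusionMatrix

        arcpy.CheckOutExtension("Spatial")

        # Compare the Classified and GrndTruth fields of the assessment points
        # and write the cross-tabulation to a stand-alone table.
        ComputeConfusionMatrix(
            r"C:\Data\Neighborhood_Data.gdb\Accuracy_Points",
            r"C:\Data\Neighborhood_Data.gdb\Confusion_Matrix",
        )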

  5. In the Contents pane, under Standalone Tables, right-click Confusion_Matrix and choose Open.
    Note:

    Because you have so many layers in the Contents pane, you may need to scroll down to find the confusion matrix. If you want to reduce the amount of space that the imagery layers take up in the Contents pane, click the arrows next to the layer name to collapse the layer symbology.

    Open confusion matrix

    The confusion matrix opens.

    Confusion matrix

    The values in the ClassValue column serve as row headers in the table. C_20 and C_40 correspond to the two classes in the classified raster: 20 for impervious surfaces and 40 for pervious surfaces. The C_20 and C_40 columns represent points with a ground truth of 20 or 40, while the C_20 and C_40 rows represent points that were classified as 20 or 40. For instance, when using the example points, 47 points that had a ground truth of 20 were also classified as 20, while one point with a ground truth of 20 was misclassified as 40. Out of a total of 100 points, four were misclassified (three were misclassified as impervious, and one was misclassified as pervious).

    U_Accuracy stands for user's accuracy. It represents the fraction of points in each classified category that were classified correctly (correct points divided by the row total). P_Accuracy stands for producer's accuracy and represents the fraction of points in each ground truth category that were classified correctly (correct points divided by the column total). For instance, 50 points were classified as impervious, of which 47 were classified correctly, leading to a user's accuracy of 0.94 (or 94 percent). Meanwhile, 48 points had a ground truth of impervious, of which 47 were classified correctly, leading to a producer's accuracy of approximately 0.98 (or 98 percent).

    The final attribute is Kappa. It provides an overall measure of the classification's accuracy by comparing the observed agreement between the classified and ground truth values with the agreement expected by chance. In the example above, the Kappa is 0.92, or 92 percent. While not perfect, an overall accuracy of 92 percent is fairly reliable. If you used your own accuracy points instead of the example points, you might receive different values. For the purposes of this lesson, you'll assume your classification was fairly accurate.
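
    If you want to verify the statistic yourself, the short calculation below reproduces the example value. It uses the counts implied by the example matrix (47 and 49 correct points, 3 and 1 misclassified points); your own matrix will have different counts.

        # Example confusion matrix counts: rows are classified values, columns are ground truth.
        #                 GT impervious (20)   GT pervious (40)
        # Classified 20          47                  3
        # Classified 40           1                 49
        correct = 47 + 49
        total = 47 + 3 + 1 + 49                      # 100 points
        observed = correct / total                   # 0.96 overall agreement

        row_totals = (47 + 3, 1 + 49)                # points classified as 20, 40
        col_totals = (47 + 1, 3 + 49)                # points with ground truth 20, 40
        expected = sum(r * c for r, c in zip(row_totals, col_totals)) / total ** 2   # 0.50

        kappa = (observed - expected) / (1 - expected)
        print(round(kappa, 2))                       # 0.92, matching the example matrix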

    Note:

    If your Kappa is below 85 to 90 percent, your classification may not be accurate enough. There are two parts of the workflow that may contribute to classification error. The first is segmentation. If your segmentation parameters generalize the original image too heavily or not enough, features may be misclassified. Try tweaking the segmentation parameters for a better segmentation. Alternatively, the majority of error may have been caused by your training samples. Having too few training samples, or training samples that cover too wide a variety of spectral signatures, may also lead to classification error. Adding either more samples or more classes may increase the accuracy.

  6. Close the confusion matrix.

Tabulate the area

Now that you've assessed your classification's accuracy, you'll determine the area of impervious surfaces within each parcel of land in the neighborhood. You'll first calculate the area and store the results in a stand-alone table. Then, you'll join the table to the Parcels layer.

  1. In the Tasks pane, expand the Calculate Impervious Surface Area task group. Double-click the Tabulate the area task to open it.

    Tabulate the area task

    The first step of the task opens the Tabulate Area tool. This tool calculates the area of classes within zones that can be defined by an integer raster or a feature layer.

  2. For Input raster or feature zone data, choose the Parcels layer. Confirm that the Zone field parameter populates with the Parcel ID field.

    Tabulate Area parameters

    The zone field is an attribute field that identifies each zone for which area will be calculated. You want the zones to correspond to the parcel features, so you'll use a zone field that is unique for each parcel. The Parcel ID field has a unique identification number for each feature, so you'll leave the parameter unchanged.

  3. For Input raster or feature class data, choose the Louisville_Impervious layer.
  4. For Class field, choose Class_name.

    Tabulate Area parameters

    The class field specifies the classes for which area will be summarized within each zone. You want to know the area of each class in your reclassified raster (pervious and impervious), so the Class_name field is appropriate.

  5. For Output table, confirm that the output location is the Neighborhood_Data geodatabase and change the output name to Impervious_Area.

    Tabulate Area parameters

    The final parameter, Processing cell size, determines the cell size for the area calculation. By default, the cell size is the same as the input raster: half a foot (in this case). You'll leave this parameter unchanged.

  6. Click Run.

    The tool runs and the table is added to the Contents pane. The task continues to the next step and opens the Join Field tool. Before you continue, you'll take a look at the table that you created.

  7. In the Contents pane, right-click the Impervious_Area table and click Open.

    Area table

    The table has a standard ObjectID field, as well as three other fields. The first is the Parcel_ID field from the Parcels layer, showing the unique identification number for each parcel. The next two are the class fields from the Louisville_Impervious raster. Impervious shows the area (in square feet) of impervious surfaces per parcel, while Pervious shows the area of pervious surfaces.
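
    The same table can be produced with a script. The sketch below mirrors the parameters you just entered; the paths are placeholders and assume the Parcel_ID and Class_name field names described above.

        import arcpy
        from arcpy.sa import TabulateArea

        arcpy.CheckOutExtension("Spatial")

        # Sum the area of each class (Impervious, Pervious) within each parcel.
        TabulateArea(
            r"C:\Data\Neighborhood_Data.gdb\Parcels",         # zone data
            "Parcel_ID",                                      # zone field
            r"C:\Data\Louisville_Impervious.tif",             # class data
            "Class_name",                                     # class field
            r"C:\Data\Neighborhood_Data.gdb\Impervious_Area", # output table
        )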

  8. Close the table.

    You now have the area of impervious surfaces per parcel, but only in a stand-alone table. Next, you'll join the stand-alone table to the Parcels attribute table. A table join updates the input table with the attributes from another table based on a common attribute field. Because you created the Impervious_Area table with the Parcel_ID field from the Parcels layer, you'll perform the join based on that field.

  9. In the Tasks pane, for Input Table, choose the Parcels layer.
  10. For Input Join Field, choose Parcel ID.
  11. For Join Table, choose the Impervious_Area table.
  12. For Output Join Field, choose Parcel_ID.

    Join Field parameters

    With the final parameter, Join Fields, you can choose specific fields from the join table to include in the result. If left empty, all fields from the join table will be included. The join table has only three fields, so you'll include them all.

  13. Click Finish.

    The tool runs and the task ends.
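
    If you're scripting the workflow, the same join can be run with the Join Field tool in ArcPy. The paths and field names below are assumptions matching those used above.

        import arcpy

        # Join the tabulated areas to the parcels using the shared Parcel_ID field.
        # Leaving the fields list out includes every field from the join table.
        arcpy.management.JoinField(
            r"C:\Data\Neighborhood_Data.gdb\Parcels",          # input table
            "Parcel_ID",                                       # input join field
            r"C:\Data\Neighborhood_Data.gdb\Impervious_Area",  # join table
            "Parcel_ID",                                       # join table field
        )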

  14. In the Contents pane, open the attribute table for the Parcels layer. Confirm that the attribute table includes the following fields:

    • Parcel_ID_1
    • Impervious
    • Pervious

  15. Close the table.

Symbolize the parcels

Now that the tables have been joined, you'll change the field aliases to be more informative. Then, you'll symbolize the parcels by impervious surface area to depict the area attribute on the map.

  1. In the Tasks pane, double-click the Clean up the table and symbolize the data task to open it.

    Clean up the table and symbolize the data task

    The first step of the task is to clean up the Parcels attribute table.

  2. In the Contents pane, click the Parcels layer to select it (it may already be selected). In the Tasks pane, click Run.

    The Fields view for the Parcels attribute table opens. With the Fields view, you can add or delete fields, as well as rename them, change their aliases, or adjust other settings. First, you'll remove the redundant Parcel_ID_1 field.

  3. Right-click the gray square to the left of the Parcel_ID_1 field and choose Delete.

    Delete field

    Next, you'll change the field aliases of the two area fields to be more informative.

  4. Change the alias of the Impervious field to Impervious Area (Feet).
  5. Change the alias of the Pervious field to Pervious Area (Feet).

    Rename field

  6. On the ribbon, on the Fields tab, in the Changes group, click Save.

    Save button

    The changes to the attribute table are saved.
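
    The same cleanup can be done with geoprocessing tools instead of the Fields view. A brief sketch, assuming the path and field names match those shown above:

        import arcpy

        parcels = r"C:\Data\Neighborhood_Data.gdb\Parcels"

        # Remove the redundant join key, then give the area fields clearer aliases.
        arcpy.management.DeleteField(parcels, "Parcel_ID_1")
        arcpy.management.AlterField(parcels, "Impervious", new_field_alias="Impervious Area (Feet)")
        arcpy.management.AlterField(parcels, "Pervious", new_field_alias="Pervious Area (Feet)")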

  7. Close the Fields view. In the Tasks pane, click Next Step.

    The second step of the task is to symbolize the Parcels layer. First, however, you need to turn the Parcels layer back on. You'll also turn off layers that are unnecessary for visualizing the data.

  8. In the Contents pane, uncheck the My_Accuracy_Points layer to turn it off. Check the Parcels layer to turn it on and confirm that the layer is selected.
  9. In the Tasks pane, click Run.

    The Symbology pane for the Parcels layer opens. Currently, the layer is symbolized with a single symbol. You'll symbolize the layer so that parcels with high areas of impervious surfaces appear differently than those with low areas.

  10. In the Symbology pane, for Primary symbology, choose Graduated Colors.

    Graduated Colors symbology

    A series of parameters becomes available. First, you'll change the field that determines the symbology.

  11. For Field, choose Impervious Area (Feet).

    The symbology on the layer changes automatically. However, there is little variety between the symbology of the parcels because of the low number of classes.

  12. Change Classes to 7. Change the Color scheme to Yellow to Red.

    Symbology parameters

    The layer symbology changes again.

    Final map

    The parcels with the highest area of impervious surfaces appear to be the ones that correspond to the location of roads. These parcels are very large and almost entirely impervious. In general, larger parcels tend to contain more impervious surface area. While you could symbolize the layer by the percentage of area that is impervious, most storm water fees are based on total area, not percentage of area.
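
    The graduated colors renderer can also be applied with the arcpy.mp module. The sketch below is an approximation of the settings you chose; the map index, field name, and color ramp name are assumptions and may differ in your project.

        import arcpy

        aprx = arcpy.mp.ArcGISProject("CURRENT")
        louisville_map = aprx.listMaps()[0]
        parcels_layer = louisville_map.listLayers("Parcels")[0]

        # Switch the layer to graduated colors based on impervious area, with 7 classes.
        sym = parcels_layer.symbology
        sym.updateRenderer("GraduatedColorsRenderer")
        sym.renderer.classificationField = "Impervious"
        sym.renderer.breakCount = 7
        sym.renderer.colorRamp = aprx.listColorRamps("Yellow to Red")[0]
        parcels_layer.symbology = sym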

  13. Close the Symbology pane. In the Tasks pane, click Finish.
  14. Save the project.

In this project, you classified an aerial image of a neighborhood in Louisville, Kentucky, to show areas that were pervious and impervious to water. You then assessed the accuracy of your classification and determined the area of impervious surfaces per land parcel. With the information that you derived in this lesson, the local government would be better equipped to determine storm water bills. While your classification was not perfect, it was accurate enough that the local government could have reasonable confidence in your results.

You can use the tasks and tools in this project with your own data. As long as you have high-resolution, multispectral imagery of an area, you can classify its surfaces. This ArcGIS Pro task is designed to quickly replicate the workflow described by these lessons.

Try some of the other lessons in the Learn ArcGIS Gallery to discover more capabilities of ArcGIS.