Segment the imagery
To determine which parts of the ground are pervious and impervious, you will classify the imagery into land-use types. Impervious surfaces are generally human-made: buildings, roads, parking lots, brick, or asphalt. Pervious surfaces include vegetation, water bodies, and bare soil. However, if you try to classify an image in which almost every pixel has a unique combination of spectral characteristics, you are likely to encounter errors and inaccuracies.
Before you classify the imagery, you will change the band combination to distinguish features clearly. Then, you will group pixels into segments, which will generalize the image and significantly reduce the number of spectral signatures to classify. Once you segment the imagery, you will perform a supervised classification of the segments. You will first classify the image into broad land-use types, such as roofs or vegetation. Then, you will reclassify those land-use types into either impervious or pervious surfaces.
Download and open the project
Before you begin, you will download data supplied by the local government of Louisville, Kentucky. This data includes imagery of the study area and land parcel features.
- Download the Surface_Imperviousness compressed folder.
Locate the downloaded file on your computer.
Depending on your web browser, you may have been prompted to choose the file's location before you began the download. Most browsers download to your computer's Downloads folder by default.
- Right-click the file and extract it to a location you can easily find, such as your Documents folder.
- Open the Surface_Imperviousness folder.
The folder contains several subfolders, an ArcGIS Pro project file (.aprx), and an ArcGIS Toolbox (.tbx). Before you explore the other data, you will open the project file.
- If you have ArcGIS Pro installed on your machine, double-click Surface Imperviousness (without the underscore) to open the project file. If prompted, sign in using your licensed ArcGIS account.
If you don't have ArcGIS Pro or an ArcGIS account, you can sign up for an ArcGIS free trial.
The project contains a map of a neighborhood near Louisville, Kentucky. The map includes a 6-inch resolution, 4-band aerial photograph of the area and a feature class of land parcels. Next, you will look at the rest of the data that you downloaded.
- In the Catalog pane, expand Folders and expand the Surface_Imperviousness folder.
If the Catalog pane is not open, go to the ribbon and click the View tab. In the Windows group, click the Catalog arrow and choose Catalog Pane.
The folders that you downloaded are connected to the Surface Imperviousness project and can be accessed within it. They contain the data, files, and tools you will use during the project.
- Expand the Louisville_Imagery folder, the Training_Samples folder, and the Neighborhood_Data geodatabase.
The Louisville_Neighborhood TIFF image and the Parcels feature class are already on the map. The Louisville_Training_Samples shapefile and the Accuracy_Points feature class are premade versions of data you will create during your analysis.
Extract spectral bands
The multiband imagery of the Louisville neighborhood currently uses the natural color band combination to display the imagery the way the human eye would see it. You will change the band combination to better distinguish urban features such as concrete from natural features such as vegetation. While you can change the band combination by right-clicking the bands in the Contents pane, later parts of the workflow will require you to use imagery with only three bands. You will create a new image by extracting the three bands that you want to show from the original image.
- In the Contents pane, click the Louisville_Neighborhood.tif layer to select it.
- On the ribbon, click the Imagery tab. In the Analysis group, click Raster Functions.
The Raster Functions pane opens. Raster functions apply an operation to a raster image on the fly, meaning that the original data is unchanged and no new dataset is created. The output takes the form of a layer that exists only in the project in which the raster function was run. You will use the Extract Bands function to create a new image with only three bands to distinguish between impervious and pervious surfaces.
- In the Raster Functions pane, search for and click the Extract Bands function.
The Extract Bands function opens. The bands you extract will include Near Infrared (Band 4), which emphasizes vegetation; Red (Band 1), which emphasizes human-made objects and vegetation; and Blue (Band 3), which emphasizes water bodies.
- For Raster, choose the Louisville_Neighborhood image. Confirm that Method is set to Band IDs.
The Method parameter determines the type of keyword used to refer to bands when you enter the band combination. You can choose Band IDs, Band Names, or Band Wavelengths. For this data, Band IDs (a single number for each band) are the simplest way to refer to each band.
- For Combination, delete the existing text and type 4 1 3 (with spaces). Confirm that Missing Band Action is set to Best Match.
You can also choose the bands one by one using the Band parameter.
The Missing Band Action parameter specifies what action occurs if a band listed for extraction is unavailable in the image. Best Match chooses the best available band to use instead, while Fail causes the function to fail.
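Conceptually, extracting bands is just selecting and reordering channels from a multiband image. The following pure-Python sketch (not the raster function's actual implementation; the dictionary image representation and the `extract_bands` helper are illustrative assumptions) shows the idea, including a fail-style check for missing bands:

```python
# Toy illustration of what the Extract Bands function does, assuming a
# 4-band image stored as {band_id: 2D list of pixel values}.
def extract_bands(image, combination):
    """Return a new image containing only the requested bands, in order."""
    missing = [b for b in combination if b not in image]
    if missing:
        # This mirrors the 'Fail' missing-band behavior.
        raise ValueError(f"Bands not in source image: {missing}")
    return {new_id: image[band_id]
            for new_id, band_id in enumerate(combination, start=1)}

# A 2x2 image with 4 bands (Band 1 = Red, Band 3 = Blue, Band 4 = NIR).
source = {1: [[10, 11], [12, 13]],
          2: [[20, 21], [22, 23]],
          3: [[30, 31], [32, 33]],
          4: [[40, 41], [42, 43]]}

false_color = extract_bands(source, [4, 1, 3])  # NIR, Red, Blue
print(list(false_color))     # [1, 2, 3] -- three output bands
print(false_color[1][0][0])  # 40 -- output Band 1 is the source NIR band
```

The 4 1 3 combination you type in the wizard plays the same role as the `combination` list here: output band 1 displays near infrared, band 2 red, and band 3 blue.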
- Click Create new layer.
The new layer, called Extract Bands_Louisville_Neighborhood.tif, is added to the map. It displays only the extracted bands. The yellow Parcels layer covers the imagery and can make some features difficult to see. You will not use the Parcels layer until later in the project, so you will turn it off for now.
- In the Contents pane, uncheck the Parcels layer box to turn it off.
The Extract Bands layer shows the imagery with the band combination that you chose (4 1 3). Vegetation appears as red, roads appear as gray, and roofs appear as shades of gray or blue. By emphasizing the difference between natural and human-made surfaces, you can more easily classify them later.
Although the Extract Bands layer appears in the Contents pane, it has not been added as data to any of your folders. If you remove the layer from the map, it will be deleted, and you would have to re-create it by running the raster function again.
Configure the Classification Wizard
Next, you will open the Classification Wizard and configure its default parameters. The Classification Wizard walks you through the steps for image segmentation and classification.
- In the Contents pane, make sure that the Extract Bands_Louisville_Neighborhood.tif layer is selected.
- On the Imagery tab, in the Image Classification group, click the Classification Wizard button.
If you want to open the individual tools available in the wizard, you can access them from the same tab. In the Image Classification group, click Classification Tools and choose the tool you want.
The Image Classification Wizard pane opens. The wizard's first page (indicated by the blue circle at the top of the wizard) contains several basic parameters that determine the type of classification to perform. These parameters affect which subsequent steps will appear in the wizard. You will use the supervised classification method. This method is based on user-defined training samples, which indicate what types of pixels or segments should be classified in what way. (An unsupervised classification, by contrast, relies on the software to decide classifications based on algorithms.)
- Confirm that Classification Method is set to Supervised and that Classification Type is set to Object based.
The object-based classification type uses a process called segmentation to group neighboring pixels based on the similarity of their spectral characteristics. Next, you will choose the classification schema. The classification schema is a file that specifies the classes that will be used in the classification. A schema is saved in an Esri classification schema (.ecs) file, which uses JSON syntax. For this workflow, you'll modify the default schema, NLCD2011. This schema is based on land cover types used by the United States Geological Survey.
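Because an .ecs file uses JSON syntax, a classification schema is essentially a nested structure of class names, values, and colors. The sketch below builds a minimal two-parent-class schema as a Python dictionary; the field names are illustrative assumptions, not the exact .ecs specification:

```python
import json

# Hypothetical sketch of a two-class schema in the spirit of an .ecs file.
# The field names here are illustrative assumptions; the real .ecs format
# may use different keys.
schema = {
    "name": "Imperviousness",
    "classes": [
        {"name": "Impervious", "value": 20, "color": [77, 77, 77],
         "subclasses": [
             {"name": "Gray Roofs", "value": 21, "color": [128, 128, 128]}]},
        {"name": "Pervious", "value": 40, "color": [56, 168, 0],
         "subclasses": []},
    ],
}

text = json.dumps(schema, indent=2)   # serialize to JSON, as an .ecs file would
parsed = json.loads(text)
print(parsed["classes"][0]["name"])   # Impervious
```

Later in the wizard, you will build exactly this kind of structure interactively: two parent classes (Impervious, value 20; Pervious, value 40) with land-use subclasses nested beneath them.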
- For Classification Schema, choose Use default schema.
The next parameter determines the output location, which is the workspace that stores all the outputs created in the wizard. These outputs include training data, segmented images, custom schemas, accuracy assessment information, intermediate outputs, and the final classification results.
- Confirm that Output Location is set to Neighborhood_Data.gdb.
You will not enter anything for Segmented Image, because you will create a new segmented image in the next step. Likewise, you will create new training samples using the wizard, so you will leave the Training Samples parameter blank. The last parameter is Reference Dataset. A reference dataset contains known classes and tests the accuracy of a classification. You have not classified this data before, so you do not have a reference dataset for it. You will test your classification's accuracy later in the workflow.
- Click Next.
Segment the image
Next, you will group adjacent pixels with similar spectral characteristics into segments. Doing so will generalize the image and make it easier to classify. Instead of classifying thousands of pixels with unique spectral signatures, you will classify a much smaller number of segments. The optimal number of segments, and the range of pixels grouped into a segment, changes depending on the image size and the intended use of the image.
To control how your imagery is segmented, you will adjust three parameters. The first parameter is Spectral detail. It sets the level of importance given to spectral differences between pixels on a scale of 1 to 20. A higher value means that pixels must be more similar to be grouped together, creating a higher number of segments. A lower value creates fewer segments. Because you want to distinguish between pervious and impervious surfaces (which generally have very different spectral signatures), you will use a lower value.
- For Spectral detail, replace the default value with 8.
The next parameter is Spatial detail. It sets the level of importance given to the proximity between pixels on a scale of 1 to 20. A higher value means that pixels must be closer to each other to be grouped together, creating a higher number of segments. A lower value creates fewer segments that are more uniform throughout the image. You will use a low value because not all similar features in your imagery are clustered together. For example, houses and roads are not always close together and are located throughout the full image extent.
- For Spatial detail, replace the default value with 2.
The next parameter is Minimum segment size in pixels. Unlike the other parameters, this parameter is not on a scale of 1 to 20. Segments with fewer pixels than the value specified in this parameter will be merged into a neighboring segment. You do not want segments that are too small, but you also do not want to merge pervious and impervious segments into one segment. The default value will be acceptable in this case.
- For Minimum segment size in pixels, confirm that the value is 20.
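The interplay between spectral similarity and minimum segment size can be sketched in plain Python. This is a toy 1-D illustration of the idea only, not the mean-shift algorithm that ArcGIS Pro actually uses; note that a higher Spectral detail setting corresponds to a smaller similarity tolerance here:

```python
# Toy 1-D "segmentation": group adjacent pixels whose values differ by no
# more than a tolerance, then merge segments smaller than min_size into a
# neighbor. (Conceptual sketch only -- not ArcGIS Pro's mean-shift method.)
def segment(pixels, tolerance, min_size):
    segments = [[pixels[0]]]
    for p in pixels[1:]:
        if abs(p - segments[-1][-1]) <= tolerance:
            segments[-1].append(p)   # similar enough: same segment
        else:
            segments.append([p])     # too different: start a new segment
    merged = []
    for seg in segments:
        if merged and len(seg) < min_size:
            merged[-1].extend(seg)   # too small: merge into the neighbor
        else:
            merged.append(seg)
    return merged

row = [10, 11, 12, 50, 51, 13]
print(len(segment(row, tolerance=5, min_size=1)))   # 3 segments
print(len(segment(row, tolerance=40, min_size=1)))  # looser tolerance: 1 segment
print(len(segment(row, tolerance=5, min_size=2)))   # small segment merged: 2
```

Tightening the tolerance (analogous to raising Spectral detail) splits the row into more segments, while raising the minimum size absorbs tiny segments into their neighbors, just as the wizard's parameters do in two dimensions.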
The final parameter, Show Segment Boundaries Only, determines whether the segments are displayed with black boundary lines. This is useful for distinguishing adjacent segments with similar colors but may make smaller segments more difficult to see. Some of the features in the image, such as the houses or driveways, are fairly small, so you will leave this parameter unchecked.
- Confirm that Show Segment Boundaries Only is unchecked.
- Click Next.
A preview of the segmentation is added to the map. It is also added to the Contents pane with the name Preview_Segmented.
At the full extent, the output layer does not appear to have been segmented the way you wanted. Features such as vegetation seem to have been grouped into many segments that blur together, especially on the left side of the image. Tiny segments that seem to encompass only a handful of pixels dot the area as well. However, this image is being generated on the fly, which means the processing will change based on the map extent. At full extent, the image is generalized to save time. You will zoom in to reduce the generalization, so you can better see what the segmentation looks like with the parameters you chose.
- Zoom to the neighborhood in the middle of the image.
The segmentation runs again. With a smaller map extent, the segmentation more accurately reflects the parameters you used, with fewer segments and smoother outputs.
If you are unhappy with how the segmentation turned out, you can return to the previous page of the wizard and adjust the parameters. The segmentation is only previewed on the fly because it can take a long time to process the actual segmentation, so it is good practice to test different combinations of parameters until you find a result you like.
- In the Contents pane, right-click Preview_Segmented and choose Zoom To Layer.
- On the Quick Access Toolbar, click the Save button to save the project.
Saving the project does not save your location in the wizard. If you close the project before you complete the entire wizard, you will lose your spot and have to start the wizard over from the beginning. Avoid closing the software before moving to the next lesson.
You have extracted spectral bands to emphasize the distinction between pervious and impervious features. You also grouped pixels with similar spectral characteristics into segments, simplifying the image so that features can be more accurately classified by broad land-use types. Next, you will classify the imagery by perviousness or imperviousness.
Classify the imagery
Previously, you segmented the imagery to simplify it for classification. Next, you will perform a supervised classification of the segments. A supervised classification is based on user-defined training samples, which indicate what types of pixels or segments should be classified in what way. (An unsupervised classification, by contrast, relies on the software to decide classifications based on algorithms.) You will first classify the image into broad land-use types, such as vegetation or roads. Then, you will reclassify those land-use types into either pervious or impervious surfaces.
Create training samples
To perform a supervised classification, you need training samples. Training samples are polygons that represent distinct sample areas of the different land-cover types in the imagery. The training samples then signify that segments with certain spectral characteristics should be classified together to represent the same land-use type. First, you will modify the default schema (which you chose when you configured the wizard) to contain two parent classes: Impervious and Pervious. Then, you will add subclasses to each class that represent types of land cover. If you attempted to classify the segmented image into only pervious and impervious surfaces, the classification would be too generalized and likely have many errors. By classifying the image based on more specific land-use types, you will create a more accurate classification. Later, you will reclassify these subclasses into their parent classes.
- On the Training Samples Manager page of the wizard, right-click each of the default classes and click Remove Class. For each class, click Yes in the Remove Class window.
- Right-click NLCD2011 and choose Add New Class.
- In the Add New Class window, for Name, type Impervious. For Value, type 20, and for Color, choose Gray 30%. Click OK.
- Right-click NLCD2011 again and choose Add New Class. Add a class named Pervious with a value of 40 and a color of Quetzal Green.
Next, you'll add a subclass for gray roof surfaces.
- Right-click the Impervious parent class and choose Add New Class. Add a class named Gray Roofs with a value of 21 and a color of Gray 50%.
Next, you'll create a training sample on the map using this class.
- Click the Gray Roofs class to select it. Then, click the Polygon button.
- Zoom to the cul-de-sac to the northwest of the neighborhood.
You can enable navigation tools while the Polygon tool is active by holding down the C key.
- On the northernmost roof in the cul-de-sac, draw a polygon. (Double-click to finish the drawing.) Make sure the polygon covers only pixels that are part of the roof.
A row is added to the wizard for your new training sample.
When creating training samples, you want to cover a high number of pixels for each land-use type. For now, you'll create more training samples to represent the roofs of the houses.
- Draw more polygons on the roofs of some of the nearby houses.
Every training sample that you make is added to the wizard. Although you have only drawn training samples on roofs, each training sample currently exists as its own class. Eventually, you want all gray roofs to be classified as the same value, so you will merge the training samples that you created into one class.
- In the wizard, click the first row to select it. Press Shift and click the last row to select all the training samples.
- Above the list of training samples, click the Collapse button.
The training samples collapse into one class. You can continue to add more training samples for gray roofs and merge them into the Gray Roofs class. Ultimately, the Gray Roofs class should have training samples on roofs throughout the entire image (not every roof needs a training sample, but more coverage is more likely to yield a satisfactory classification).
- Create two more impervious subclasses based on the following table:
Subclass Value Color
- Create four pervious subclasses based on the following table:
Subclass Value Color
These seven classes are specific to the land-use types for this image. Images of different locations may have different types of land use or ground features that should be represented in a classification. For example, a different location may have houses with both gray roofs and red roofs. Because the spectral signatures of both roof types are very different, it would be more accurate to classify gray roofs and red roofs as two classes.
Shadows are not actual surfaces and cannot be either pervious or impervious. However, shadows are usually cast by tall objects such as houses or trees and are more likely to cover grass or bare earth, which are pervious surfaces. Some shadows cover roads or driveways, but you will factor these into your accuracy assessment later in the workflow.
- Draw training samples throughout the image to represent these seven main land-use types. Zoom and pan throughout the image as needed.
- Collapse training samples that represent the same types of land use into one class.
- When you are satisfied with your training samples, click the Save button.
Your customized classification schema is saved in case you want to use it again.
- Click Next.
Classify the image
Now that you have created the training samples, you will choose the classification method. Each classification method uses a different statistical process involving your training samples. You will use the Support Vector Machine classifier, which can handle larger images and is less susceptible to discrepancies in your training samples. Then, you will train the classifier with your training samples and create a classifier definition file. This file will be used during the classification. Once you create the file, you will classify the image. Lastly, you will reclassify the pervious and impervious subclasses into their parent classes, creating a raster with only two classes.
- Confirm that Classifier is set to Support Vector Machine.
For the next parameter, you can specify the maximum number of samples to use for defining each class. You want to use all your training samples, so you will change the maximum number of samples per class to 0; a value of 0 removes the cap and ensures that every training sample is used.
- For Maximum Number of Samples per Class, type 0.
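The semantics of this parameter can be sketched in a few lines of Python. The `cap_samples` helper is a hypothetical illustration of the rule, not the classifier's actual code:

```python
import random

# Sketch of the "maximum samples per class" rule: a value of 0 (or less)
# means no cap, so every training sample in the class is used.
def cap_samples(samples, max_per_class):
    if max_per_class <= 0:
        return samples  # 0 = use all samples
    return random.sample(samples, min(len(samples), max_per_class))

roof_samples = ["s1", "s2", "s3", "s4", "s5"]
print(len(cap_samples(roof_samples, 0)))  # 5 -- all samples kept
print(len(cap_samples(roof_samples, 2)))  # 2 -- randomly capped
```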
Lastly, you have the option to choose statistical attributes to include in the attribute table of any raster dataset created using the classifier. While these statistics can be interesting, you will not use them for your purposes, so you will leave the default parameters unchanged. Next, you will train the classifier and display a preview.
- Click Run.
The process may take a long time, as multiple processes are run. First, the image is segmented (previously, you only segmented the image on the fly, which is not permanent). Then, the classifier is trained and the classification is performed. When the process finishes, a preview of the classification is displayed on the map.
Depending on your training samples, your classification preview should appear to be fairly accurate (the colors in the dataset correspond to the colors you chose for each training sample class). However, you may notice that some features were classified incorrectly. For instance, in the example image, the muddy pond south of the neighborhood was incorrectly classified as a gray roof, when it is actually water. Classification is not an exact science and rarely will every feature be classified correctly. However, because this classification will be used to determine storm water fees for landowners, a high degree of accuracy is expected. If you see only a few inaccuracies, you can correct them manually later in the wizard. If you see a large number of inaccuracies, you may need to create more training samples. Later, you will run tools to assess the accuracy of your classification.
- If you are satisfied with the classification preview, click Next.
The next page is the Classify page. You will use this page to run the actual classification and save it in your geodatabase.
- For Output Classified Dataset, change the output name to Classified_Louisville.tif.
The remaining parameters are optional. They allow you to create additional outputs, such as a classifier definition file or a segmented image. You have already created these files, so you do not have to create them again.
- Leave the remaining parameters unchanged and click Run.
The process runs and the classified raster is added to the map. It looks similar to the preview.
- Click Next.
The next page is the Merge Classes page. You will use this page to merge subclasses into their parent classes. Your raster currently has seven classes, each representing a type of land use. While these classes were essential for an accurate classification, you are only interested in whether each class is pervious or impervious. You will merge the subclasses into the Pervious and Impervious parent classes to create a raster with only two classes.
- For each class, in the New Class column, choose either Pervious or Impervious.
When you change the first class, a preview is added to the map. The preview shows what the reclassified image will look like. When you change all of the classes, the preview should only have two classes, representing pervious and impervious surfaces.
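Under the hood, merging subclasses into parent classes is a simple value remap. In the sketch below, the Gray Roofs value (21) and the parent values (20 and 40) come from this lesson, while the other subclass values are hypothetical placeholders for the classes in your schema:

```python
# Merging subclasses into parent classes is a value remap. 21 (Gray Roofs),
# 20 (Impervious), and 40 (Pervious) are from the lesson; the other subclass
# values here are hypothetical placeholders.
remap = {21: 20, 22: 20, 23: 20,          # impervious subclasses -> 20
         41: 40, 42: 40, 43: 40, 44: 40}  # pervious subclasses -> 40

classified_row = [21, 22, 41, 44, 23]     # one row of the classified raster
merged_row = [remap[v] for v in classified_row]
print(merged_row)               # [20, 20, 40, 40, 20]
print(sorted(set(merged_row)))  # [20, 40] -- only two classes remain
```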
- Click Next.
The final page of the wizard is the Reclassifier page. This page includes tools for reclassifying small errors in the raster dataset. You will use this page to fix an incorrect classification in your raster.
- In the Contents pane, uncheck all layers except the Preview_Reclass and Louisville_Neighborhood.tif layers. Click the Preview_Reclass layer to select it.
- On the ribbon, click the Appearance tab. In the Effects group, click Swipe.
- Drag the pointer across the map to visually compare the preview to the original neighborhood imagery.
One inaccuracy that you may notice is the muddy pond south of the neighborhood. Because the pond is muddy, it has a different spectral signature than the other water bodies on the map, so it will likely be classified incorrectly even with thorough training samples. This pond is not connected to any other impervious objects, so you can reclassify it with relative ease.
- Zoom to the muddy pond area.
- In the wizard, click Reclassify within a region.
With this tool, you can draw a polygon on the map and reclassify everything within the polygon.
- In the Remap Classes section, confirm that Current Class is set to Any. Change New Class to Pervious.
With these settings, any pixels in the polygon will be reclassified to pervious surfaces. Next, you will reclassify the muddy pond.
- Draw a polygon around the muddy pond. Make sure you do not include any other impervious surfaces in the polygon.
The pond is automatically reclassified as a pervious surface.
If you make a mistake, you can undo the reclassification by unchecking it in the Edits Log.
While you likely noticed other inaccuracies in your classification, for the purposes of this lesson, you will not make any more edits.
- Zoom to the full extent of the data.
- In the Image Classification Wizard, for Final Classified Dataset, type Louisville_Impervious.tif (including the .tif extension).
- Click Run. Then, click Finish.
The tool runs and the reclassified raster is added to the map.
- On the Quick Access Toolbar, click Save to save the project.
You have classified imagery of a neighborhood in Louisville to determine which land cover is pervious and which is impervious. Next, you will perform an accuracy assessment on your classification to determine if it is within an acceptable range of error. Then, you will calculate the area of impervious surfaces per land parcel so the local government can assign storm water fees.
Calculate impervious surface area
Previously, you classified an image to show impervious surfaces. Next, you will assess the accuracy of your classification by statistically comparing it to the original image. After confirming that the classification has an acceptable accuracy, you will calculate the area of impervious surface per parcel and symbolize the parcels accordingly.
Create accuracy assessment points
Visually comparing the classified image to the original does not provide a statistical measurement of the classification's accuracy. Because storm water bills will be based on your analysis, you will perform a more rigorous assessment by creating randomly generated accuracy assessment points throughout the image. You will then compare the classified value of the image at the location of each point with the actual land-use type, or ground truth, of the original image.
- If necessary, open the Surface Imperviousness project in ArcGIS Pro. In the Catalog pane, expand the Tasks folder and open the Calculate Surface Imperviousness task.
- In the Tasks pane, expand the Assess Classification Accuracy task group. Double-click the Create accuracy assessment points task to open it.
The first step of the task opens the Create Accuracy Assessment Points tool. This tool generates random points throughout an image and gives the points an attribute based on the classified value of the image at the point's location. The accuracy assessment points will also have a field for the ground truth of the original image, which you will manually fill in for each point.
- For Input Raster or Feature Class Data, choose the Louisville_Impervious layer.
- For Output Accuracy Assessment Points, click the Browse button. Browse to the Neighborhood_Data geodatabase and save the output layer as My_Accuracy_Points.
Next, you will determine the characteristics of the points. The Target Field parameter determines whether the attribute table of the points describes the classification value or the ground truth value. Your input image is the classified raster, so the points should contain the classification values. The Number of Random Points parameter determines how many points are created. For a small image with only two classes, a relatively small number of points is acceptable.
Lastly, the Sampling Strategy parameter determines how points are randomly distributed across the image. The points can be distributed proportionally to the area of each class, equally between each class, or absolutely randomly. Because your primary interest is in the accuracy of impervious surfaces (the smaller of the two classes), you will equally distribute the points between each class to better represent impervious surfaces in the assessment.
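The effect of an equalized stratified sample can be sketched in plain Python. The `equalized_sample` helper below is an illustrative assumption, not the tool's implementation; the point is that each class contributes the same number of points regardless of how much area it covers:

```python
import random

# Sketch of an equalized stratified random sample: the same number of
# points is drawn from each class regardless of how much area it covers.
def equalized_sample(points_by_class, total):
    per_class = total // len(points_by_class)
    sample = []
    for points in points_by_class.values():
        sample.extend(random.sample(points, min(per_class, len(points))))
    return sample

random.seed(0)
# Impervious (class 20) covers far fewer cells than pervious (class 40).
candidates = {20: [f"imp{i}" for i in range(200)],
              40: [f"per{i}" for i in range(1000)]}
points = equalized_sample(candidates, total=100)
print(len(points))                               # 100
print(sum(p.startswith("imp") for p in points))  # 50, despite the 1:5 area ratio
```

A purely random strategy would instead draw roughly one impervious point for every five pervious points here, under-representing the class you most care about.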
- Change the remaining parameters:
- Target Field: Classified
- Number of Random Points: 100
- Sampling Strategy: Equalized stratified random
- Click Run.
One hundred accuracy points are added to the map (they may be difficult to see in the example image) and the task continues to the next step. The tool also added attributes to the points. Specifically, the points' attribute table contains the class value of the classified image at each point's location. You will now use the accuracy points data to compare the classified image to the ground truth of the original image.
- In the Contents pane, right-click the My_Accuracy_Points layer and choose Attribute Table.
The attribute table opens.
Other than the ObjectID and Shape fields, the points have two attributes: Classified and GrndTruth (or Ground Truth). The Classified field has values that are either 20 or 40. These numbers represent the classes in the image: 20 is impervious; 40 is pervious. For the GrndTruth field, however, every value is -1 by default. You will edit the GrndTruth attributes to either 20 or 40 depending on the type of terrain that the point covers in the original image.
- In the Contents pane, uncheck all layers except My_Accuracy_Points and Louisville_Neighborhood.tif.
- In the attribute table, click the row header (the small gray square) next to the first record to select the feature. Right-click the row header and choose Zoom To.
The map zooms to the selected point. (Your point will be in a different location than the point in the example.)
In this example, the point appears to be on either grass or bare earth. Either way, the surface is pervious. You would change the GrndTruth attribute for this point to 40 for pervious. If your first point appears to be on an impervious surface such as roads or roofs, you'll change the GrndTruth attribute to 20 for impervious.
Depending on your map extent and the location of the point, you may not have zoomed close enough to the point to determine what kind of ground cover it is on. Feel free to zoom closer to better determine the point's ground truth.
- In the attribute table, in the GrndTruth column, double-click the value for the selected feature to edit it. Replace the default value with either 40 or 20, depending on the point's location, and press Enter.
- Select the next point in the attribute table. Right-click the point and choose Pan To.
The map pans to the corresponding point.
- Depending on the location of the point, change the GrndTruth value to either 20 or 40.
It may be difficult to tell the ground truth for some of the points due to ambiguous features on the map. The most rigorous accuracy assessment would involve on-site verification of accuracy assessment points, but in many cases traveling to the actual location being analyzed is infeasible. Edit each point with your best guess based on the imagery.
- Repeat the process for the first ten points.
Under normal circumstances, you would need to examine and edit each accuracy point. However, to save time in this lesson, you will not continue to repeat this process for the rest of your points. The data that you downloaded at the beginning of the project includes an accuracy assessment point feature class with the GrndTruth field populated for you. You will use the provided feature class for subsequent analysis in this lesson.
- Close the attribute table. In the Contents pane, right-click the Louisville_Neighborhood layer and choose Zoom To Layer.
The map returns to the full extent of the imagery.
- In the Tasks pane, click Next Step.
Although you will use the provided accuracy points for the remainder of the project, you will still save the edits that you made to your own points.
- Click Run. In the Save Edits window, click Yes to save all edits.
- In the Tasks pane, click Finish.
Compute a confusion matrix
After creating accuracy assessment points and populating their attributes with ground truth data, you will use the points to create a confusion matrix. A confusion matrix is a table that compares the Classified and GrndTruth attributes of the accuracy assessment points and quantifies how well they agree. If the areas that were classified as impervious actually represent impervious areas in the original imagery, the confusion matrix will report a high percentage, indicating an accurate classification.
- In the Tasks pane, double-click the Compute a confusion matrix task to open it.
The task opens the Compute Confusion Matrix tool. The tool has only two parameters: an input and an output.
- For Input Accuracy Assessment Points, click Browse. Browse to the Neighborhood_Data geodatabase and select Accuracy_Points.
- For Output Confusion Matrix, click Browse. Save the output in the Neighborhood_Data geodatabase as Confusion_Matrix.
- Click Finish.
The tool runs and the confusion matrix is added to the Contents pane. Because the confusion matrix is a table with no spatial data, it does not appear on the map.
- In the Contents pane, under Standalone Tables, right-click Confusion_Matrix and choose Open.
Because you have so many layers in the Contents pane, you may need to scroll down to find the confusion matrix. If you want to reduce the amount of space that the imagery layers take up in the Contents pane, click the arrows next to the layer names to collapse their symbology.
The confusion matrix opens.
The values in the ClassValue column serve as row headers in the table. C_20 and C_40 correspond to the two classes in the classified raster: 20 for impervious surfaces and 40 for pervious surfaces. The C_20 and C_40 columns represent points with a ground truth of 20 or 40, while the C_20 and C_40 rows represent points that were classified as 20 or 40. For instance, when using the example points, 47 points that had a ground truth of 20 were also classified as 20, while one point with a ground truth of 20 was incorrectly classified as 40. Out of a total of 100 points, four were misclassified (three were misclassified as impervious, and one was misclassified as pervious).
U_Accuracy stands for user's accuracy: of all the points classified as a given class, the fraction that were classified correctly (the correct count divided by the row total). P_Accuracy stands for producer's accuracy: of all the points with a given ground truth, the fraction that were classified correctly (the correct count divided by the column total). For instance, 50 points were classified as impervious, of which 47 were correct, giving a user's accuracy of 0.94 (or 94 percent). Meanwhile, 48 points had a ground truth of impervious, of which 47 were classified correctly, giving a producer's accuracy of approximately 0.98 (or 98 percent).
The final attribute is Kappa. It gives an overall assessment of the classification's accuracy by comparing the observed agreement between the classified and ground truth values to the agreement that would be expected by chance. In the example above, the Kappa is 0.92, or 92 percent. While not perfect, an overall accuracy of 92 percent is fairly reliable. If you used your own accuracy points instead of the example points, you may receive different values. For the purposes of this lesson, you'll assume your classification was fairly accurate.
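The arithmetic behind these three metrics can be sketched in plain Python, using the example counts from this lesson. This is an illustration of the formulas only, not the Compute Confusion Matrix tool itself:

```python
# Example counts from the lesson: rows = classified, columns = ground truth.
# 20 = impervious, 40 = pervious.
matrix = {
    "C_20": {"C_20": 47, "C_40": 3},   # classified impervious
    "C_40": {"C_20": 1,  "C_40": 49},  # classified pervious
}

def users_accuracy(matrix, cls):
    """Correct classifications divided by all points classified as cls (row total)."""
    row = matrix[cls]
    return row[cls] / sum(row.values())

def producers_accuracy(matrix, cls):
    """Correct classifications divided by all points whose ground truth is cls (column total)."""
    col_total = sum(row[cls] for row in matrix.values())
    return matrix[cls][cls] / col_total

def kappa(matrix):
    """Observed agreement corrected for the agreement expected by chance."""
    classes = list(matrix)
    total = sum(sum(row.values()) for row in matrix.values())
    observed = sum(matrix[c][c] for c in classes) / total
    expected = sum(
        (sum(matrix[c].values()) / total) *          # fraction classified as c
        (sum(row[c] for row in matrix.values()) / total)  # fraction with ground truth c
        for c in classes
    )
    return (observed - expected) / (1 - expected)

print(users_accuracy(matrix, "C_20"))                 # 0.94
print(round(producers_accuracy(matrix, "C_20"), 2))   # 0.98
print(round(kappa(matrix), 2))                        # 0.92
```

Here the observed agreement is 96 of 100 points (0.96) and the chance agreement is 0.50, so Kappa is (0.96 − 0.50) / (1 − 0.50) = 0.92, matching the example value above.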
If your Kappa is below 85 to 90 percent, your classification may not be accurate enough. Two parts of the workflow commonly contribute to classification error. The first is segmentation: if your segmentation parameters generalize the original image too heavily or not enough, features may be misclassified, so try adjusting the parameters. The second is your training samples: having too few samples, or samples that cover too wide a variety of spectral signatures, can also lead to error. Adding either more samples or more classes may increase the accuracy.
- Close the confusion matrix.
Tabulate the area
Now that you have assessed your classification's accuracy, you will determine the area of impervious surfaces within each parcel of land in the neighborhood. You will first calculate the area and store the results in a stand-alone table. Then, you will join the table to the Parcels layer.
- In the Tasks pane, expand the Calculate Impervious Surface Area task group. Double-click the Tabulate the area task to open it.
The first step of the task opens the Tabulate Area tool. This tool calculates the area of classes within zones that can be defined by an integer raster or a feature layer.
- For Input raster or feature zone data, choose the Parcels layer. Confirm that the Zone field parameter populates with the Parcel ID field.
The zone field is an attribute field that identifies each zone for which area will be calculated. You want the zones to correspond to the parcel features, so you will use a zone field that is unique for each parcel. The Parcel ID field has a unique identification number for each feature, so you will leave the parameter unchanged.
- For Input raster or feature class data, choose the Louisville_Impervious layer.
- For Class field, choose Class_name.
The class field specifies the classes for which area will be calculated. You want to know the area of each class in your reclassified raster (pervious and impervious), so the Class_name field is appropriate.
- For Output table, confirm that the output location is the Neighborhood_Data geodatabase and change the output name to Impervious_Area.
The final parameter, Processing cell size, determines the cell size for the area calculation. By default, the cell size matches that of the input raster (in this case, half a foot). You'll leave this parameter unchanged.
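Conceptually, Tabulate Area overlays the zone data and the class data and totals cell area per class within each zone. The following is a minimal sketch of that arithmetic on a hypothetical 4 x 4 grid (the zone and class values are made up for illustration; the actual tool operates on the full rasters):

```python
from collections import defaultdict

CELL_SIZE = 0.5  # feet, matching the lesson's half-foot cells

zones = [  # hypothetical parcel IDs per cell
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 2, 2],
    [3, 3, 2, 2],
]
classes = [  # 20 = impervious, 40 = pervious, as in the lesson
    [20, 20, 40, 40],
    [20, 40, 40, 40],
    [40, 40, 20, 20],
    [40, 40, 20, 20],
]

# Total area per class within each zone: cell count times cell size squared.
area = defaultdict(lambda: defaultdict(float))
for zrow, crow in zip(zones, classes):
    for zone, cls in zip(zrow, crow):
        area[zone][cls] += CELL_SIZE ** 2  # each cell covers 0.25 square feet

print(dict(area[1]))  # {20: 0.75, 40: 0.25}
```

Each output row corresponds to one zone (parcel), with one column of total area per class, which is exactly the table structure you will see in the next steps.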
- Click Run.
The tool runs and the table is added to the Contents pane. The task continues to the next step and opens the Join Field tool. Before you continue, you will take a look at the table that you created.
- In the Contents pane, right-click the Impervious_Area table and click Open.
The table has a standard ObjectID field, as well as three other fields. The first is the Parcel_ID field from the Parcels layer, showing the unique identification number for each parcel. The next two are the class fields from the Louisville_Impervious raster: Impervious shows the area (in square feet) of impervious surfaces per parcel, while Pervious shows the area of pervious surfaces.
- Close the table.
You now have the area of impervious surfaces per parcel, but only in a stand-alone table. Next, you will join the stand-alone table to the Parcels attribute table. A table join updates the input table with the attributes from another table based on a common attribute field. Because you created the Impervious_Area table with the Parcel_ID field from the Parcels layer, you will perform the join based on that field.
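The join itself is conceptually simple: for each input row, look up the matching row in the join table by the key field and append its attributes. Here is a minimal sketch with hypothetical parcel IDs and area values (not taken from the actual data):

```python
# Hypothetical parcel records (the input table).
parcels = [
    {"PARCEL_ID": "081G00210000", "OWNER": "A"},
    {"PARCEL_ID": "081G00220000", "OWNER": "B"},
]

# Hypothetical per-parcel areas keyed by parcel ID (the join table).
impervious_area = {
    "081G00210000": {"IMPERVIOUS": 1250.0, "PERVIOUS": 3400.0},
    "081G00220000": {"IMPERVIOUS": 980.0,  "PERVIOUS": 2100.0},
}

# For each input row, find the matching join-table row and append its fields.
for row in parcels:
    row.update(impervious_area.get(row["PARCEL_ID"], {}))

print(parcels[0]["IMPERVIOUS"])  # 1250.0
```

Because the Impervious_Area table was built from the same Parcel_ID values as the Parcels layer, every input row finds exactly one match.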
- In the Tasks pane, for Input Table, choose the Parcels layer.
- For Input Join Field, choose Parcel ID.
- For Join Table, choose the Impervious_Area table.
- For Output Join Field, choose PARCEL_ID.
With the final parameter, Join Fields, you can choose specific fields from the join table to include in the result. If left empty, all fields from the join table will be included. The join table has only three fields, so you will leave this parameter empty and include them all.
- Click Finish.
The tool runs and the task ends.
- In the Contents pane, open the attribute table for the Parcels layer. Confirm that the attribute table includes the following fields:
- Close the table.
Symbolize the parcels
Now that the tables have been joined, you will change the field aliases to be more informative. Then, you will symbolize the parcels by impervious surface area to depict the area attribute on the map.
- In the Tasks pane, double-click the Clean up the table and symbolize the data task to open it.
The first step of the task is to clean up the Parcels attribute table.
- In the Contents pane, click the Parcels layer to select it (it may already be selected). In the Tasks pane, click Run.
The Fields view for the Parcels attribute table opens. With the Fields view, you can add or delete fields, as well as rename them, change their aliases, or adjust other settings. First, you will remove the redundant PARCEL_ID_1 field.
- Right-click the gray square to the left of the PARCEL_ID_1 field and choose Delete.
Next, you will change the field aliases of the two area fields to be more informative.
- Change the alias of the IMPERVIOUS field to Impervious Area (Feet).
- Change the alias of the PERVIOUS field to Pervious Area (Feet).
- On the ribbon, on the Fields tab, in the Changes group, click Save.
The changes to the attribute table are saved.
- Close the Fields view. In the Tasks pane, click Next Step.
The second step of the task is to symbolize the Parcels layer. First, however, you need to turn the Parcels layer back on. You'll also turn off layers that are unnecessary for visualizing the data.
- In the Contents pane, uncheck the My_Accuracy_Points layer to turn it off. Check the Parcels layer to turn it on and confirm that the layer is selected.
- In the Tasks pane, click Run.
The Symbology pane for the Parcels layer opens. Currently, the layer is symbolized with a single symbol. You will symbolize the layer so that parcels with high areas of impervious surfaces appear differently than those with low areas.
- In the Symbology pane, for Primary symbology, choose Graduated Colors.
A series of parameters becomes available. First, you will change the field that determines the symbology.
- For Field, choose Impervious Area (Feet).
The symbology on the layer changes automatically. However, there is little visual variation among the parcels because of the low number of classes.
- Change Classes to 7. Change the Color scheme to Yellow to Red.
The layer symbology changes again.
The parcels with the highest area of impervious surfaces appear to be the ones that correspond to the location of roads. These parcels are very large and almost entirely impervious. In general, larger parcels tend to have more impervious surface area. While you could symbolize the layer by the percentage of area that is impervious, most storm water fees are based on total area, not percentage of area.
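Under the hood, a graduated-colors renderer assigns each parcel's value to one of the seven classes using class breaks. As a sketch, here is an equal-interval classifier; ArcGIS Pro's graduated colors defaults to a different method (natural breaks), but the principle is the same, and the value range below is hypothetical:

```python
def equal_interval_class(value, vmin, vmax, n_classes=7):
    """Return a 0-based class index for value within [vmin, vmax]."""
    width = (vmax - vmin) / n_classes          # each class spans an equal range
    idx = int((value - vmin) / width)
    return min(idx, n_classes - 1)             # clamp the maximum value into the top class

# Hypothetical impervious-area values in square feet, range 0 to 7000.
print(equal_interval_class(0, 0, 7000))        # 0  (lightest symbol)
print(equal_interval_class(3500, 0, 7000))     # 3  (middle class)
print(equal_interval_class(7000, 0, 7000))     # 6  (darkest symbol)
```

Each class index then maps to one color in the Yellow to Red scheme, so parcels with more impervious area draw in progressively darker reds.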
- Close the Symbology pane. In the Tasks pane, click Finish.
- Save the project.
In this lesson, you classified an aerial image of a neighborhood in Louisville, Kentucky, to show areas that were pervious and impervious to water. You then assessed the accuracy of your classification and determined the area of impervious surfaces per land parcel. With the information that you derived in this lesson, the local government would be better equipped to determine storm water bills. While your classification was not perfect, it was accurate enough that the local government could have reasonable confidence in your results.
You can use the tasks and tools in this project with your own data. As long as you have high-resolution, multispectral imagery of an area, you can classify its surfaces. The ArcGIS Pro tasks are designed to let you quickly replicate the workflow described in this lesson.
You can find more lessons in the Learn ArcGIS Lesson Gallery.