Prepare the data

Training a model requires two inputs: accurate and high-resolution imagery and good training sample data that identifies your features of interest. You'll review the provided training samples using aerial imagery taken after the fire and Los Angeles County building footprints.

Review training data

First, you'll review the building footprints that have been classified as damaged or undamaged to ensure that they are accurate for training a deep learning model. In this lesson, the building footprints have been provided. In other scenarios, you may have to manually digitize features for training a model.

  1. Download the Automate_Fire_Damage.zip file and extract its contents to a suitable location on your computer.
  2. Open the Automate_Fire_Damage_Assessment folder. Open the Woolsey Fire folder. Double-click the Woolsey Fire ArcGIS Pro project file.

    The ArcGIS Pro project opens. If prompted, sign in to your ArcGIS Online or ArcGIS Enterprise account.


    If you don't have an organizational account, you can sign up for an ArcGIS free trial.

    The project opens to the area where the Woolsey wildfire occurred. In the Contents pane, the Training Samples layer contains polygons of building footprints that have already been classified as damaged, in red, or undamaged, in green. A large input dataset is necessary to train the model to detect structures and classify them properly.

    Damaged and Undamaged features

    Before reviewing the buildings on the map, you'll explore the two building class attributes and how they are stored in the Training Samples feature class.

  3. In the Contents pane, right-click the Training Samples layer, point to Data Design, and choose Fields.

    Fields option

    The Fields view appears. You can see this dataset's fields.

    Fields view

    The Class Value field is used to classify the buildings as Damaged or Undamaged. It has a Domain named class_value that controls the values that can be entered for this attribute field.

    You already know the two classes used, but you'll explore how the domain is storing these values.

  4. On the ribbon, on the Fields tab, in the Data Design group, click Domains.

    Domains button

    The Domains view appears. It displays all of the domains in this project's geodatabase. The only domain is class_value, the one you saw earlier, and it is already selected.

    Under Description, the values of Damaged and Undamaged are listed. However, the Code value that is stored in the geodatabase is 1 or 2.

    Domains tab

    Although this may seem like a small detail, the deep learning model that you'll create needs to use numeric values as inputs for the different classes. This domain allows you to see readable values, such as Damaged or Undamaged, while feeding values of 1 and 2 into your training data for the deep learning tools to process.
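Conceptually, the coded-value domain behaves like a two-way lookup table between stored codes and readable descriptions. A minimal Python sketch of that mapping (the dictionary below is only an illustration of the idea, not how the geodatabase actually stores the domain):

```python
# A coded-value domain pairs each stored code with a readable description.
# Codes 1 and 2 follow the class_value domain in this lesson.
CLASS_VALUE_DOMAIN = {1: "Damaged", 2: "Undamaged"}

def describe(code):
    """Translate a stored code into its human-readable description."""
    return CLASS_VALUE_DOMAIN[code]

def encode(description):
    """Translate a description back into the code the geodatabase stores."""
    lookup = {desc: code for code, desc in CLASS_VALUE_DOMAIN.items()}
    return lookup[description]

print(describe(1))          # Damaged
print(encode("Undamaged"))  # 2
```

The deep learning tools consume the numeric codes, while the Attributes pane shows you the descriptions.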

  5. Close the Domains view and Fields view.

    Now that you understand how the values are being stored, you'll ensure that the Training Samples polygons are properly classified.

  6. On the ribbon, click the Edit tab. In the Selection group, click Attributes.

    Attributes button

    The Attributes pane appears. You'll cycle through the features in this layer and update their attributes.

  7. In the Attributes pane, click the Layers tab. From the drop-down list, choose Training Samples.

    Choose Training Samples.

    A list of features appears, and the first building is selected. Its attributes are shown below the list.

    Feature attributes

    You'll go through some of these buildings, one at a time, and confirm that the Class Value attribute reflects each building's current status.

  8. Click the Step Forward button to cycle through the buildings in this layer.

    Step Forward button

  9. Click the Step Forward button until you see the building with the OBJECTID value of 8.

    The OBJECTID value is listed on the Attributes tab.

    Building with an OBJECTID value of 8

    Compare the Class Value attribute for this building to what you see on the map: this building has been incorrectly classified as undamaged. You'll correct this by editing the Class Value attribute.

    Incorrectly classified building

  10. For the Class Value attribute, click the drop-down list and choose Damaged.

    Change Class Value to Damaged.


    If you do not have Auto Apply turned on, you'll need to click the Apply button in the Attributes pane.

    The symbol for the feature changes to red, showing that it's damaged.

    Updated building

    Buildings 9 and 10 have also been incorrectly classified. Errors such as these in the training data will not necessarily break your model, but they will reduce its overall accuracy. You'll update these buildings as well.

  11. Click the Step Forward button. Edit the Class Value attribute for building 9.
  12. Update the Class Value attribute for building 10.

    Building 10 was incorrectly labeled as damaged. It should be labeled as undamaged.

  13. On the ribbon, on the Edit tab, in the Manage Edits group, click Save. Click Yes to save your edits.

    Save edits.

  14. Close the Attributes pane.
  15. On the Quick Access Toolbar, click Save.

    Save your project.

Export training samples

A deep learning model cannot process the raw aerial imagery and the Training Samples polygons directly. Instead, you'll split the image into smaller tiles, called chips, and convert the polygons to labels. The chips are smaller pieces of imagery for the software to process, and the labels are the classes of features you identified: damaged and undamaged. You'll use the Export Training Data for Deep Learning geoprocessing tool to do this.

  1. On the ribbon, in Command Search, type Export Training Data for Deep Learning. Choose Export Training Data for Deep Learning.

    Command Search

    The Geoprocessing pane appears. You'll configure the tool to create the training labels. First, you'll select the imagery used to determine the class value for each building.

  2. For Input Raster, choose USAA Imagery (11/13/18).

    Input Raster parameter

    Next, you'll create a folder to store the results of this tool.

  3. For Output Folder, type buildings_training_samples.

    Output Folder parameter

    Next, you'll input the feature class of buildings and the attribute field that identifies each building as damaged or undamaged.

  4. For Input Feature Class Or Classified Raster Or Table, choose Training Samples. For Class Value Field, choose Class Value.

    Input Feature Class Or Classified Raster Or Table and Class Value Field parameters

    The next set of parameters, Tile Size, is used to make the chips created by this tool bigger or smaller. The default size is 256 by 256 pixels. You'll make these chips larger. The Stride value determines how much overlap there is between the chips that are created. By default, each chip overlaps by 128 pixels. Increased overlap will increase the number of chips created. You'll only update the Tile Size parameters.
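As a rough sketch of how tile size and stride interact (the 2,560-pixel image dimensions below are made up for illustration, and the actual export tool may also handle partial edge tiles):

```python
def chips_per_axis(image_pixels, tile_size, stride):
    """Number of full tiles along one axis, ignoring any partial edge tile."""
    if image_pixels < tile_size:
        return 0
    return (image_pixels - tile_size) // stride + 1

def chip_count(width, height, tile_size, stride):
    """Total chips for a rectangular image."""
    return (chips_per_axis(width, tile_size, stride)
            * chips_per_axis(height, tile_size, stride))

# Defaults: 256 x 256 tiles with a 128-pixel stride (50 percent overlap).
print(chip_count(2560, 2560, 256, 128))  # 361
# Larger 448 x 448 tiles with the same 128-pixel stride.
print(chip_count(2560, 2560, 448, 128))  # 289
```

A smaller stride (more overlap) produces more chips from the same image; a larger tile size produces fewer, bigger chips.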

  5. For Tile Size X and Tile Size Y, type 448.

    Tile Size X and Tile Size Y parameters

    Now, you'll set the format used for this project's type of image classification. Depending on your deep learning workflow, you'll choose the appropriate output format.

  6. For Metadata Format, choose Labeled Tiles.

    Metadata Format parameter

    Finally, you'll set the cell size for the tool to use. A deep learning model performs best when the training and prediction images are of the same resolution. Cell size is the dimensions of the image pixels in real-world units. For instance, if you are using a projected coordinate system with linear units of meters, setting the cell size to 0.3 makes each pixel in the output 0.3 meters wide.
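The arithmetic is simple. For example, a sketch of the ground footprint of one chip at this cell size:

```python
def tile_ground_size(tile_pixels, cell_size_m):
    """Ground distance covered by one tile edge, in meters."""
    return tile_pixels * cell_size_m

# A 448 x 448 chip at a 0.3-meter cell size covers about 134 meters per side.
print(tile_ground_size(448, 0.3))  # 134.4
```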

  7. Click the Environments tab.

    Environments tab

  8. Under Raster Analysis, for Cell Size, type 0.3.

    Cell Size parameter

  9. Click Run.
  10. Save your project.

You've exported a set of image tiles with labels that identify the buildings as damaged and undamaged. You'll use these results to train a deep learning model to automatically identify buildings in each class.

Train the model

Now, you'll train a deep learning model using the tiles you created.

Depending on your computer's hardware, training the model can take up to half an hour or more. It's recommended that your computer be equipped with a dedicated Graphics Processing Unit (GPU). If you do not want to train the model, a deep learning model has been provided to you in the project's Provided Results folder. Optionally, you can skip ahead to the Use the trained model to classify features section of this lesson.


Using the deep learning tools requires that you have the correct Deep Learning Libraries installed on your computer. If you do not have these files installed, save your project, close ArcGIS Pro, and follow the instructions to install deep learning frameworks for ArcGIS. Once installed, reopen your project and continue with the lesson.

Train a feature classifier model

You'll use the chips and labels you exported to create a deep learning model for classifying buildings as damaged and undamaged.

  1. On the ribbon, in Command Search, type Train Deep Learning Model. Choose Train Deep Learning Model.

    First, you'll set the exported training data from the previous section.

  2. For Input Training Data, click Browse. In your project's folder, select buildings_training_samples, and click OK.

    Folder directory

    The folder is shown in the geoprocessing tool's parameter.

    Input Training Data parameter

    Next, you'll set an output location for your model.

  3. For Output Model, type buildings_classify.

    Output Model parameter

    Now, you'll set the number of epochs for your model to train. Each epoch is one pass of all the training data forward and backward through a neural network. The more times you pass the data through this network, the better your results will be, although after a while the returns from doing so will decrease.
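A toy illustration of what an epoch is, unrelated to the ArcGIS tooling: fitting a one-parameter model by gradient descent, where each epoch is one full pass over the training data and the loss shrinks with diminishing returns.

```python
# Fit y = w * x by gradient descent on a tiny dataset (true answer: w = 2).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def train(epochs, lr=0.05):
    w = 0.0
    losses = []
    for _ in range(epochs):
        for x, y in data:              # one forward/backward pass per sample
            error = w * x - y
            w -= lr * 2 * error * x    # gradient of the squared error
        losses.append(sum((w * x - y) ** 2 for x, y in data))
    return w, losses

w, losses = train(20)
print(round(w, 3))             # 2.0
print(losses[0] > losses[-1])  # True: later epochs improve less and less
```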

  4. For Max Epochs, confirm that it is set to 20.

    Max Epochs parameter

    Next, you'll choose the type of model to produce. This choice is based on the format of your training data from the previous section.

  5. Expand Model Parameters. For Model Type, choose Feature classifier (Object classification).

    Model Type parameter

    Next, you'll set the Batch Size value. This parameter is the number of training samples that will be processed at a time.

  6. For Batch Size, type 8.
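As a rough illustration of what batch size controls (the 289-chip count below is hypothetical):

```python
import math

def batches_per_epoch(num_samples, batch_size):
    """How many batches one epoch processes (the last batch may be smaller)."""
    return math.ceil(num_samples / batch_size)

# Hypothetical 289 exported chips, processed 8 at a time:
print(batches_per_epoch(289, 8))  # 37
# Smaller batches mean more, lighter steps and less GPU memory per step:
print(batches_per_epoch(289, 2))  # 145
```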

    Finally, if you have a GPU, you'll set this tool to run on your computer's GPU for faster processing. Otherwise, skip the next step.

  7. Optionally, if your computer has a GPU, click the Environments tab. For Processor Type, choose GPU.

    Processor Type parameter

  8. Click Run.

    If the model fails to run, you may have to adjust the Batch Size parameter. The ability to process larger batch sizes is based on your computer's GPU. If you have a powerful GPU, you can process up to 64 training samples at a time. If you have a less powerful GPU, you may have to set this parameter to 8, 4, or 2 and rerun the tool.

    The tool will run and the time it takes to complete is dependent upon your computer's hardware. The training will stop automatically when the accuracy of the model stops improving or when the number of epochs reaches the specified limit.
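The stopping behavior can be sketched generically with a patience rule. This is only an illustration: the patience value and accuracy history below are invented, and ArcGIS Pro's exact stopping criterion may differ.

```python
def early_stop_epoch(accuracies, patience=5, max_epochs=20):
    """Epoch at which training stops: when accuracy has not improved for
    `patience` epochs, or when max_epochs is reached."""
    best, best_epoch = float("-inf"), 0
    for epoch, acc in enumerate(accuracies[:max_epochs], start=1):
        if acc > best:
            best, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return min(len(accuracies), max_epochs)

# Accuracy plateaus after epoch 6, so training halts before epoch 20.
history = [0.52, 0.61, 0.70, 0.78, 0.83, 0.86,
           0.86, 0.85, 0.86, 0.85, 0.86, 0.85]
print(early_stop_epoch(history))  # 11
```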

    While the tool is running, you'll see how the model is progressing through the training.

  9. Click View Details.

    View Details button

  10. Click Messages.

    Train Deep Learning Model details


    Since the model trains differently each time this tool is run, your model's messages may vary from the image above.

    For each of the epochs that run, Messages displays the model's performance. Specifically, you'll begin to see the accuracy of the model increase or move toward 1.

    When the training completes, you'll observe its results.

  11. Close the Train Deep Learning Model details window.
  12. In File Explorer, open the project's folder. Open the buildings_classify folder.

    If you did not run the Train Deep Learning Model tool, the results have been provided to you. In the project's folder, open the Provided Results folder. Then open the buildings_classify folder.

    Deep learning model components

    The folder contains a number of outputs that you can use to process imagery:

    • buildings_classify.pth is a trained model and is usually saved in a PyTorch format.
    • buildings_classify.emd is a model definition file that contains model information about the tile size, classes, and model type.
    • model_metrics.html contains details about the learning rate used and the final accuracy of the model.
    • buildings_classify.dlpk is a complete package of all the files stored in the model output folder, including the trained model, the model definition file, and the model metrics file. This package can be shared to ArcGIS Online and ArcGIS Enterprise as an item for others to access.
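The model definition file is plain JSON, so it can be inspected with any text editor or a few lines of Python. The snippet below parses a hypothetical, heavily trimmed example; the field names and layout are illustrative and may not match the actual .emd schema.

```python
import json

# Hypothetical, trimmed-down model definition for illustration only.
emd_text = """
{
  "ModelType": "ObjectClassification",
  "ImageHeight": 448,
  "ImageWidth": 448,
  "Classes": [
    {"Value": 1, "Name": "Damaged"},
    {"Value": 2, "Name": "Undamaged"}
  ]
}
"""
emd = json.loads(emd_text)
print(emd["ImageHeight"])                   # 448
print([c["Name"] for c in emd["Classes"]])  # ['Damaged', 'Undamaged']
```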

    Next, you'll explore the model_metrics.html page to see how your model performed during training.

  13. Open the model_metrics.html file.

    Your default web browser opens.


    Since the model trains differently each time this tool is run, your model's metrics may vary from the images below.

    The Training and Validation loss section displays a graph of the amount of error that was present as the model trained over time. When the model ran, some of your input data was used to train the model and some was used to validate the model, or test the model to determine its accuracy. Ideally, you would see these values decrease as the number of images processed increases over the course of the 20 epochs.

    Training and Validation loss graph
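The train/validation split can be sketched in plain Python. The 90/10 fraction and the chip stand-ins below are illustrative; the tool's actual split may differ.

```python
import random

def train_validation_split(samples, validation_fraction=0.1, seed=42):
    """Shuffle and split samples so part of the data is held out to
    validate the model rather than train it."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - validation_fraction))
    return shuffled[:cut], shuffled[cut:]

chips = list(range(100))  # stand-ins for exported image chips
train, validation = train_validation_split(chips)
print(len(train), len(validation))  # 90 10
```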

    The Confusion matrix graph visualizes how well the model distinguished the two classes, comparing the actual, or input, labels with the predicted results.

    Confusion matrix
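The idea behind the matrix can be reproduced in a few lines of Python. The labels below are invented for illustration: diagonal entries are correct predictions, off-diagonal entries are mistakes.

```python
from collections import Counter

def confusion_matrix(actual, predicted, classes):
    """Counts of (actual, predicted) pairs, one row per actual class."""
    pairs = Counter(zip(actual, predicted))
    return [[pairs[(a, p)] for p in classes] for a in classes]

actual    = ["Damaged", "Damaged", "Undamaged", "Undamaged", "Damaged", "Undamaged"]
predicted = ["Damaged", "Undamaged", "Undamaged", "Undamaged", "Damaged", "Damaged"]

matrix = confusion_matrix(actual, predicted, ["Damaged", "Undamaged"])
print(matrix)  # [[2, 1], [1, 2]]

# Overall accuracy is the diagonal total over all samples.
accuracy = (matrix[0][0] + matrix[1][1]) / len(actual)
print(round(accuracy, 2))  # 0.67
```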

    The Ground truth/Predictions section provides some examples of the training's results. Images are shown side-by-side with the known results on the left and the predictions on the right.

    Ground truth/Predictions from deep learning training

  14. Close the model_metrics browser tab.
  15. In ArcGIS Pro, save the project.

You trained a deep learning model and examined its metrics. Next, you'll use your model to automatically classify buildings in a different part of the provided imagery.

Use the trained model to classify features

With a model trained, you'll use it to help expedite the process of identifying, or inferencing, damaged and undamaged buildings. Then you'll observe the results.

Apply your feature classifier model

First, you'll use your model on a subset of the imagery provided to you. You'll look at a part of the map that has not yet been classified.


If you did not train a model in the previous module, a deep learning package has been provided for you in the project's folder.

Classifying features is a GPU-intensive process and can take a while to complete depending on your computer's hardware. If you choose not to classify the buildings, results have been provided and you may skip ahead to the Review the results section.

  1. In the Contents pane, turn off the Training Samples layer and turn on the Building Features layer.

    The Building Features layer has additional building footprints in the southeast part of the map that need to be classified.

  2. On the ribbon, click the Map tab, click Bookmarks, and choose Inference Area.

    Inference Area bookmark

    You'll use the trained model to classify the buildings in this extent.

    Inference Area map extent

  3. On the ribbon, in Command Search, type Classify Objects Using Deep Learning. Choose Classify Objects Using Deep Learning.

    This tool uses a trained model to make classification predictions. The input raster gets split into tiles, which are fed in batches to the model to classify features. The classifications, after some postprocessing, are written into a specified output feature class.
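The batching idea behind inferencing can be sketched in plain Python. The stub model and feature list below are placeholders for illustration, not the ArcGIS API.

```python
def batched(items, batch_size):
    """Yield successive batches from a list of features."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def classify_features(features, model, batch_size=8):
    """Run a model over features one batch at a time and collect the labels."""
    labels = []
    for batch in batched(features, batch_size):
        labels.extend(model(batch))
    return labels

# Stand-in model that labels every feature Undamaged.
stub_model = lambda batch: ["Undamaged"] * len(batch)
print(len(classify_features(list(range(20)), stub_model)))  # 20
```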

    First, you'll set the raster that the model will inference.

  4. For Input Raster, choose USAA Imagery (11/13/18).

    Input Raster parameter

    Next, you'll choose the features to be classified. In this case it will be the unclassified buildings.

  5. For Input Features, choose Building Features.

    Input Features parameter

    The tool creates an output feature class dataset. You'll name it.

  6. For Output Classified Objects Feature Class, type Inferenced_Buildings.

    Output Classified Objects Feature Class parameter

    Now, you'll input the model you trained.

  7. For Model Definition, click Browse. Open the buildings_classify folder. Click the buildings_classify.dlpk deep learning model package file. Click OK.

    If you did not train a deep learning model, browse to the project's folder. Open Provided Results. Open buildings_classify. Click the buildings_classify.dlpk deep learning model package file. Click OK.

    Model Definition window with deep learning package

    The next parameter specifies the name of the field in the new layer that stores how each feature will be classified.

  8. For Class Label Field, keep the default value ClassLabel.

    Class Label Field parameter

    Again, you'll set the Batch Size value, which is how many sets of image chips are processed at a time.

  9. Under Model Arguments, for Batch Size, type 8.

    Batch Size parameter

    Before running the tool, you'll set some environments.

  10. Click the Environments tab.
  11. Under Processing Extent, set Extent to Current Display Extent.

    Extent parameter

    After you choose Current Display Extent, the coordinates of the extent's geographic bounding box are displayed.

  12. Under Raster Analysis, for Cell Size, type 0.3.
  13. Under Processor Type, set Processor Type to GPU.
  14. Click Run.

    This tool can take 30 minutes or more to run depending on your computer's hardware.


    The ability to set the batch size to 8 depends on your GPU. If the tool fails to run, try changing this parameter to 4 or 2.

    The results are added to the map.

    Results from inferencing


    The color of your classified buildings may differ from the image above.

  15. In the Contents pane, turn off the Building Features layer.
  16. Save your project.

Review the results

Now that your results have been added to the map, you'll see how accurate they are. To help with this process, you'll change the symbology so that the features are different colors based on their damaged or undamaged classification. This will be done with a provided layer file containing an appropriate symbology.


If you did not run the model to classify the buildings, a classified dataset of buildings has been provided. To add the Inferenced_Buildings feature class to the map, on the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Woolsey Fire folder and to the Provided Results folder, open the Results geodatabase, and double-click the Inferenced_Buildings feature class.

  1. In the Contents pane, click the Inferenced_Buildings layer to select it.
  2. On the ribbon, click the Appearance tab. In the Drawing group, click Import.

    Import button

    The Import Symbology window appears.

  3. For Symbology Layer, click Browse and go to the Woolsey Fire folder. Select Inferenced Buildings Symbology.lyrx, and click OK.

    Symbology Layer window

    The tool's parameters are automatically populated in the Import Symbology window.

    Import Symbology parameters

  4. Click OK.

    The Inferenced_Buildings layer now shows the damaged buildings in red and the undamaged buildings in green.

    Symbolized results

    Next, you'll use the Attributes pane to review your model's results.

  5. On the ribbon, click the Edit tab. In the Selection group, click Attributes.
  6. In the Attributes pane, if necessary, click the Layers tab. Choose Inferenced_Buildings.
  7. Click the Play button to start cycling through the building footprints.

    Play button


    You can change the play speed by clicking the drop-down list next to Play and choosing a different speed.

    Observe how well your model classified the buildings as damaged or undamaged.

    If you want to change the classification for a feature, click Pause and update the ClassLabel attribute.

  8. Save your edits.
  9. Save your project.

In this lesson, you prepared data to train a deep learning model, trained the model, classified a set of features using your model, and examined the results.

Being able to train a model to automatically classify features can save an organization valuable time and money while reducing the possibility of human error. This is especially critical when time is limited and people's lives and property are at risk.

You can find more lessons in the Learn ArcGIS Lesson Gallery.