Generate training data

In this workflow, you'll learn the process of training SAMLoRA to identify features of interest in your imagery, in this case informal settlement buildings. First, you need to produce a small amount of training data showing examples of these buildings in your imagery. You'll set up the ArcGIS Pro project, review the examples provided, and export the training data to the format that SAMLoRA expects.

Set up the project

You'll download a project that contains all the data for this tutorial and open it in ArcGIS Pro.

  1. Download the Alexandra_Informal_Settlements.zip file and locate the downloaded file on your computer.
    Note:

    Most web browsers download files to your computer's Downloads folder by default.

  2. Right-click the Alexandra_Informal_Settlements.zip file and extract it to a location on your computer, such as a folder on your C: drive.
  3. Open the extracted Alexandra_Informal_Settlements folder. Double-click Alexandra_Informal_Settlements.aprx to open the project in ArcGIS Pro.

    Project folder

  4. If prompted, sign in with your ArcGIS account.
    Note:

    If you don't have access to ArcGIS Pro or an ArcGIS organizational account, see options for software access.

    The project opens.

    Initial overview

    On the map, a drone imagery layer represents a neighborhood of the Township of Alexandra in South Africa. The imagery is high resolution, with each pixel representing a square of about 2 by 2 centimeters on the ground. It was captured by South Africa Flying Labs. The layer is stored in ArcGIS Online as an image tile service.

    Note:

    South Africa Flying Labs is a nonprofit organization that produces drone imagery in South Africa and seeks to empower local communities with the knowledge and skills necessary to solve social problems in that country.

    This True Ortho image layer was derived from multiple original drone images. It was generated in the Site Scan for ArcGIS application and saved to ArcGIS Online directly from Site Scan.

    To apply the workflow proposed in this tutorial to your own imagery, see the tips provided in the Apply this workflow to your own imagery section.

  5. Zoom in and out with the mouse wheel button to observe the built-up areas in the image.

    Built-up area detail

    Many of these built-up areas are informal settlements where buildings are built very close to each other, forming intricate patterns. Their roofs are made of corrugated metal sheets of varied colors and maintenance states. For these reasons, traditional deep learning models can have difficulty identifying such buildings with a high degree of accuracy. The SAMLoRA approach that you'll learn in this tutorial is a good alternative to obtain high-quality results without requiring high computing power.

Explore informal settlement examples

You'll review the training examples that were provided with your project.

  1. In the Contents pane, check the box next to the Training_Area layer to turn it on.

    Training_Area layer turned on

    An orange polygon appears on the west side of the imagery. It represents the area chosen to train SAMLoRA on what informal settlements look like.

    Training_area polygon on the west side of the imagery

    Note:

    In this case, there is only one training area, but it is also possible to have several training areas, each one represented by a different polygon. You'll see an example of multiple training areas later in the tutorial.

  2. In the Contents pane, right-click the Training_Area layer and choose Zoom To Layer.

    Zoom To Layer menu option

    The map zooms in to the training area.

  3. Turn on the Informal_Settlements_Examples layer.

    Informal_Settlements_Examples layer turned on

    The Informal_Settlements_Examples layer represents all the buildings in the training area as light gray polygons.

    Informal_Settlements_Examples layer displayed on the map

    Note:

    You can find step-by-step instructions on how to create a training example layer in the Prepare training samples for transfer learning section of the Improve a deep learning model with transfer learning tutorial.

    Every building in the training area was captured as a polygon. If even a few buildings were missing, the chips would contain misleading information and SAMLoRA would not train optimally. In the following example images, on the left, the training set is complete and ready to be used for training. In contrast, on the right, some buildings are missing, which would yield poor training performance.

    Complete and incomplete training set

    Next, you'll inspect the Informal_Settlements_Examples attribute fields.

  4. In the Contents pane, right-click the Informal_Settlements_Examples layer and choose Attribute Table.

    Attribute Table menu option

    Every line in the table represents one of the building polygons. The Class attribute has the value Building.

    Class attribute with the value Building.

    Note:

    While this is not an essential part of this workflow, note that the value Building is actually a label for the underlying numeric value 1. See the Apply this workflow to your own imagery section later in this tutorial for more information on the topic.

    In a case where the training examples represent several feature types, the Class field would list those types, for instance, Building, Road, or Tree. This approach enables SAMLoRA to learn how to recognize different feature types in your imagery.

  5. Close the table.
  6. In the Quick Access Toolbar, click the Save Project button to save your project.

    Save Project button

Learn about training chips and cell size

You'll use the example polygon layer and the imagery layer to generate training data in a specific format. A deep learning model can't train over a large area in one pass; it can only handle small cutouts of the image, known as chips. A chip is made of an image tile and a corresponding label tile that shows where the objects of interest (in this case, buildings) are located. These chips are fed to the deep learning model during the training process.

Training chip with image and label tiles
A training chip, with its image tile (left) and its corresponding label tile (right).

When producing chips, one important decision is their optimum size. To detect objects that are close to each other, as is the case with informal settlement buildings, a good guideline is that a chip should include between six and twelve features. Another way to think about it is that a chip should contain at least one or two complete features in its center and show a good amount of context (or background) around them.

The following example image contains three chips of different sizes:

  • Chip 1 is too small; it contains no complete features and almost no context.
  • Chip 2 is a good size; there are two complete features in the center, as well as a few incomplete features and context around them.
  • Chip 3 is too large; it contains a large number of features (over 25).

Chips of different sizes

How can you modify the size of the chips? It is customary to have chips that measure 256 by 256 pixels (or cells), so you'll keep that default value. However, you can obtain chips of different sizes by varying the size of the cells that compose them. For instance, the three tiles shown above were generated with cell sizes of:

  • Chip 1—0.02 meters (2 centimeters)
  • Chip 2—0.05 meters (5 centimeters)
  • Chip 3—0.1 meter (10 centimeters)
Note:

When a chip has a cell size of 5 centimeters, it means that each of its cells represents a square of 5 by 5 centimeters on the ground.

The optimum chip size will depend on the size of the features you want to identify. For instance, agricultural fields, houses, cars, and solar panels have widely different sizes. One approach to find the optimum chip size is trial and error. You can choose a cell size, generate the chips, and evaluate the chips visually. There are more advanced approaches to decide the chip and cell sizes, but they are beyond the scope of this tutorial. For this tutorial, it was determined that a 5-centimeter cell size would generate chips of the optimum size to identify informal settlement buildings.
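The arithmetic behind these numbers is straightforward: a chip's ground footprint is its tile size in pixels multiplied by its cell size. The following is an illustrative sketch in plain Python (not part of the tutorial data):

```python
def chip_ground_size(tile_size_px, cell_size_m):
    """Ground edge length (in meters) covered by a square training chip."""
    return tile_size_px * cell_size_m

# The three example chips, all 256 by 256 pixels:
for cell_size in (0.02, 0.05, 0.1):
    edge = chip_ground_size(256, cell_size)
    print(f"cell size {cell_size} m -> chip covers {edge:.2f} x {edge:.2f} m")
```

With 5-centimeter cells, a 256-pixel chip covers about 12.8 by 12.8 meters on the ground, enough to hold a handful of small buildings plus surrounding context.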

Export training data

Next, you'll generate the training chips using the Export Training Data For Deep Learning tool.

  1. On the ribbon, click the Analysis tab. In the Geoprocessing group, click Tools.

    Tools button

  2. In the Geoprocessing pane, search for Export Training Data For Deep Learning. In the list of results, click Export Training Data For Deep Learning.

    Export Training Data For Deep Learning tool search

  3. Set the following tool parameters:
    • For Input Raster, choose Alexandra_Orthomosaic.
    • For Output Folder, type Informal_Settlements_256_5cm. (This will create a subfolder in your project to hold the training data.)
    • For Input Feature Class Or Classified Raster or Table, choose Informal_Settlements_Examples.
    • For Class Value Field, choose Class.
    • For Input Mask Polygons, choose Training_Area.

    Export Training Data For Deep Learning parameters

    Specifying the Training_Area layer as a mask is crucial: it ensures that the image chips will only be created within the area where all the buildings are labeled.

    Note:

    If you need to create a Training_Area layer for your own data, you can use the Create Feature Class geoprocessing tool. After creating the feature class, on the ribbon, on the Edit tab, click the Create tool to trace one or several rectangle polygons outlining the areas where you want to provide training examples.

  4. For Tile Size X and Tile Size Y, keep the default values of 256 pixels.

    Tile Size X and Tile Size Y parameter values

  5. For Metadata Format, choose Classified Tiles.

    Metadata Format parameter set to Classified Tiles

    Classified Tiles is the metadata format expected for training a SAMLoRA model.

    Note:

    To learn more about supported metadata formats for different deep learning model types, refer to the Deep learning model architectures page. To learn more about the parameters listed in the tool, refer to the Export Training Data For Deep Learning documentation page.

  6. In the tool pane, click the Environments tab.

    Environments tab

  7. For Cell Size, type 0.05.

    Cell Size set to 0.05

    This cell size is 0.05 meters, or 5 centimeters. As indicated in the previous section, this cell size will ensure that the tiles generated are of a suitable size to identify informal settlement buildings.

    Note:

    Choosing the cell size for your image chips will not change the original resolution of your input imagery. The tool will resample the data on the fly to produce image chips of the desired cell size.

    See the Apply this workflow to your own imagery section later in this tutorial for more information on how to identify your imagery's properties, including its cell size. You can also learn more about imagery cell size and resolution in the Explore imagery - Spatial resolution tutorial.

  8. Click Run.

    After a few moments, the tool finishes running.
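The same export can be scripted with the tool's ArcPy equivalent. The following is a sketch, not a definitive implementation: it assumes the ArcGIS Pro Python environment (arcpy with the Image Analyst extension) and the layer and folder names used in this tutorial, and it simply prints its parameters when arcpy is unavailable:

```python
# Sketch only: assumes the ArcGIS Pro Python environment (arcpy with the
# Image Analyst extension) and the layer names used in this tutorial.
params = {
    "in_raster": "Alexandra_Orthomosaic",
    "out_folder": "Informal_Settlements_256_5cm",
    "in_class_data": "Informal_Settlements_Examples",
    "image_chip_format": "TIFF",
    "tile_size_x": 256,
    "tile_size_y": 256,
    "metadata_format": "Classified_Tiles",
    "class_value_field": "Class",
    "in_mask_polygons": "Training_Area",
}

try:
    import arcpy
    arcpy.env.cellSize = 0.05  # 5-centimeter cells, set on the Environments tab
    arcpy.ia.ExportTrainingDataForDeepLearning(**params)
except ImportError:
    # Outside ArcGIS Pro, just show what would be submitted.
    for name, value in params.items():
        print(f"{name} = {value}")
```

Scripting the export this way makes it easy to regenerate chips later with a different cell size.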

Examine the training data

Next, you'll examine the training chips you generated. First, you'll turn off the building examples layer to reduce clutter on the map.

  1. In the Contents pane, uncheck the box next to Informal_Settlements_Examples to turn the layer off.

    Informal_Settlements_Examples layer turned off

    You'll browse to the folder where the image and label chips are stored.

  2. On the ribbon, click the View tab. In the Windows group, click Catalog Pane.

    Catalog Pane button

  3. In the Catalog pane, expand Folders, Alexandra_Informal_Settlements, and Informal_Settlements_256_5cm.

    Informal_Settlements_256_5cm folder expanded

    The Informal_Settlements_256_5cm folder contains the images and labels folders.

  4. Expand the labels folder.

    Labels folder

    This folder contains a total of 128 label tiles in the TIFF format.

  5. Right-click the 000000000000.tif file and choose Add To Current Map.

    Add To Current Map menu option

  6. If you are prompted to calculate statistics, click Yes.

    The label tile appears on the map. It is a raster where each cell can have a value of 0 or 1. Cells with a value of 1 appear in white and indicate building areas. Cells with a value of 0 appear in black and indicate non-building areas.

    Label tile on the map

  7. In the Catalog pane, expand the images folder.

    Images folder

    This folder contains a total of 128 image tiles in the TIFF format. Each label tile has a corresponding image tile. For instance, the 000000000000.tif image tile is a small imagery cutout that matches the 000000000000.tif label tile, as shown in the following example image.

    Example of an image tile

    The SAMLoRA deep learning model will use the label tiles to learn what the buildings look like and where they are located.

  8. Optionally, turn off the Alexandra_Orthomosaic layer and add image tiles to the map to examine them. You can also add more label tiles to the map.
  9. In the Contents pane, right-click the 000000000000.tif label tile and choose Remove. Remove any other label and image tiles you added to the map.
    Tip:

    If you want to see more numeric information about the label tiles, in the Catalog pane, under the labels folder, right-click the stats.txt file and choose Show In File Explorer. This file contains statistics, such as the minimum, mean (average), and maximum number of features (in this case, buildings) per chip.

  10. In ArcGIS Pro, press Ctrl+S to save the project.
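Each label tile must have a matching image tile. If you want to double-check an exported chip folder programmatically, a small standard-library sketch like the following can list any unpaired tiles (the folder layout assumed here matches the Informal_Settlements_256_5cm output):

```python
from pathlib import Path

def unpaired(image_names, label_names):
    """Tile names present in only one of the two collections."""
    return sorted(set(image_names) ^ set(label_names))

def unpaired_tiles(chip_folder):
    """Compare the images and labels subfolders of an exported chip folder."""
    root = Path(chip_folder)
    return unpaired(
        (p.name for p in (root / "images").glob("*.tif")),
        (p.name for p in (root / "labels").glob("*.tif")),
    )

# With a complete export, every image tile has a label tile:
print(unpaired(["000000000000.tif", "000000000001.tif"],
               ["000000000000.tif", "000000000001.tif"]))  # -> []
# A missing label shows up immediately:
print(unpaired(["000000000000.tif", "000000000001.tif"],
               ["000000000000.tif"]))  # -> ['000000000001.tif']
```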

So far, you set up the project, explored a polygon layer representing informal settlement examples, learned about training chips and cell size, generated label and image training chips, and examined the output.


Train a SAMLoRA informal settlement model

Next, you'll use the training data you generated to train the SAMLoRA deep learning foundational model and teach it to identify the informal settlement buildings in your imagery. Then, you'll review the trained model to better understand it.

Train the SAMLoRA model

First, you'll train the model with the Train Deep Learning Model tool.

  1. At the bottom of the Catalog pane, click the Geoprocessing tab.

    Geoprocessing tab

  2. In the Geoprocessing pane, click the Back button until you return to the search box.

    Back button

  3. Search for and open the Train Deep Learning Model tool.

    Train Deep Learning Model tool search

  4. For Input Training Data, click the Browse button. Browse to Folders > Alexandra_Informal_Settlements, select Informal_Settlements_256_5cm, and click OK.
  5. For Output Folder, type Informal_Settlements_256_5cm_SAMLoRA.

    This parameter will create a subfolder in your project (under Folders > models) to hold the resulting trained model.

  6. For Max Epochs, type 50.

    An epoch refers to one complete pass of the entire training dataset.

  7. For Model Type, choose SAMLoRA (Pixel classification).

    Train Deep Learning Model tool parameters

  8. Expand Data Preparation. For Batch Size, if your NVIDIA GPU dedicated memory is 4 GB, type 2. If it is 8 GB, type 4.

    Data Preparation section

    The batch size value will only change the speed of the training process, not the quality of the output.

    Tip:

    To decide on the Batch Size value, a good guideline is to start with a batch size equal to half your GPU's dedicated memory in GB. For instance, if you have 16 GB of GPU dedicated memory, you can start with a batch size of 8. If you are not sure, start with 2.

    To be more precise, you can monitor your GPU memory usage live. Open the command prompt from the Windows Start menu and run the command nvidia-smi -l 5, which refreshes the readout every 5 seconds. While running deep learning tools, observe how much memory you are using.

    GPU memory usage

    If you aren't reaching the maximum, you can increase the batch size the next time you run the tool.
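    The guideline in this tip can be expressed as a small helper. This is an illustrative sketch in plain Python, not part of the tutorial project:

```python
def suggested_batch_size(gpu_memory_gb=None):
    """Rule of thumb from this tip: start with half the GPU's dedicated
    memory in GB; if the specs are unknown, fall back to 2."""
    if gpu_memory_gb is None:
        return 2
    return max(2, int(gpu_memory_gb) // 2)

for memory in (4, 8, 16):
    print(f"{memory} GB GPU -> batch size {suggested_batch_size(memory)}")
# 4 GB -> 2, 8 GB -> 4, 16 GB -> 8, matching the values in this section.
```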

  9. Expand Advanced. For Backbone Model, choose ViT-B.

    B stands for Base. This option will train the SAMLoRA model on a smaller (base) neural network.

  10. For Monitor Metric, confirm that Validation loss is selected.

    This metric measures how well the model generalizes what it has learned to new data.

  11. Confirm that the Stop when model stops improving option is checked to avoid model overfitting.
    Note:

    For cases when you have a larger training dataset and your GPU dedicated memory is 8 GB or more, you can consider training the deep learning model on a larger Backbone Model, such as ViT-L (large) or ViT-H (huge).

    Advanced section

    You are now ready to run the tool.

    Note:

    This tutorial recommends using an NVIDIA GPU with a minimum of 8 GB of dedicated memory. Based on whether your computer has a GPU and what its specifications are, this process may take from under 2 minutes to 20 minutes or more. Alternatively, you can choose to use a model that was already trained for you. In that case, don't run the tool and read until the end of this section. Later, there will be instructions to retrieve the provided model.

    If you are not sure whether your computer has a GPU and what its specifications are, see the Check for GPU availability section in the Get ready for deep learning in ArcGIS Pro tutorial.

  12. If you choose to run the process yourself, click Run. As the tool runs, click View Details to see more information about the training process.

    Run button and View Details link

    Tip:

    You can retrieve the same information in the History pane. On the ribbon, on the Analysis tab, in the Geoprocessing group, click History. In the History pane, right-click the Train Deep Learning Model process and choose View Details.

  13. In the details window, click the Messages tab and monitor the validation loss metric.

    Messages tab

    The smaller the validation loss value, the better. It gradually decreases with each epoch as the model improves its ability to successfully identify building areas. When its value no longer changes significantly, the training stops. In parallel, the accuracy and Dice metrics (third and fourth columns) steadily increase. These metrics measure the model performance.

    Note:

    Since deep learning training is a non-deterministic process, the information you obtain on the Messages tab might look different than the example images.

  14. When the training ends, review the accuracy and Building precision numbers.

    Accuracy and Building precision numbers

    In the example image, the overall accuracy is 8.9250e-01 or 89.25 percent. For this use case, it should be between 85 and 95 percent. Because you are specifically interested in identifying buildings, the precision value for Building is the best measure of the model's performance. In this case, it is 0.8913 or 89.13 percent. (The values you obtain might be different.)

  15. Close the details window.
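The accuracy and precision values reported on the Messages tab follow the standard pixel-count definitions. The following sketch illustrates them with made-up pixel counts, not the tutorial model's actual confusion matrix:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all pixels that were classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Of the pixels predicted as Building, the fraction that truly are."""
    return tp / (tp + fp)

# Made-up pixel counts (tp = true positive, fp = false positive, and so on):
tp, tn, fp, fn = 8_200, 9_700, 1_000, 1_100
print(f"accuracy  = {accuracy(tp, tn, fp, fn):.4f}")   # 0.8950
print(f"precision = {precision(tp, fp):.4f}")          # 0.8913
```

Precision is the better measure here because it focuses on the Building class alone, while accuracy also rewards correctly classified background pixels.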

Review the trained SAMLoRA model

One way to learn more about your trained SAMLoRA model is to look at it through the Review Deep Learning Models tool.

  1. On the ribbon, click the Imagery tab. In the Image Classification group, click Deep Learning Tools and choose Review Deep Learning Models.

    Review Deep Learning Models menu option

  2. In the Deep Learning Model Reviewer pane, for Model, click the Browse button.

    Browse button

  3. Browse to Folders > Alexandra_Informal_Settlements > models, select Informal_Settlements_256_5cm_SAMLoRA, and click OK.

    Informal_Settlements_256_5cm_SAMLoRA selected

  4. In the Deep Learning Model Reviewer pane, locate the graph under Training and Validation Loss.

    Training and Validation Loss graph

    This graph shows the detail of how the model learned. Training Loss (in blue) shows how well the model learned on the training data, and Validation Loss (in orange) shows how well the model was able to generalize what it had learned to new data. (Your graph might look different.) In the last part of the training, the training and validation loss curves flatten into asymptotic lines.

    Asymptotic lines

    This phenomenon is called convergence. If the training continues beyond that phase, the model might start performing better on training data than on validation data, a sign that it is overfitting the training data and losing its ability to generalize to new data.

  5. Review the other information provided in the pane, including Model Type, Backbone, overall Accuracy, and Epochs Details.
    Note:

    Learn more about the information in this pane in the Deep learning model review documentation page.

  6. When you're done reviewing the model information, close the Deep Learning Model Reviewer pane.
  7. Press Ctrl+S to save the project.

You have now trained the SAMLoRA foundational deep learning model to teach it how to identify informal settlement buildings.
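The Stop when model stops improving behavior you enabled is essentially early stopping on the validation loss: training halts once the loss fails to improve for several epochs in a row. A minimal sketch of that logic, with synthetic loss values for illustration:

```python
def last_improving_epoch(val_losses, patience=3):
    """Return the epoch of the last validation-loss improvement once the
    loss has stalled for `patience` epochs, or None if it never stalls."""
    best, best_epoch, since_best = float("inf"), None, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, since_best = loss, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:
                return best_epoch
    return None

# Synthetic validation losses: improving through epoch 4, then plateauing.
losses = [0.95, 0.70, 0.50, 0.42, 0.41, 0.43, 0.44, 0.45]
print(last_improving_epoch(losses))  # -> 4
```

Stopping at the last improving epoch is what prevents the overfitting pattern described above, where training loss keeps falling while validation loss rises.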


Classify and extract informal settlements with SAMLoRA

You are now ready to extract informal settlements from your imagery. First, you'll apply the trained SAMLoRA model to your imagery to classify pixels as building or no building. This process is known as inferencing. Then, you'll derive building footprint polygons. Finally, you'll explore expanded results to understand the power of the SAMLoRA approach to extract different object types for larger extents.

Classify informal settlements

First, you'll apply the trained SAMLoRA model to your imagery using the Classify Pixels Using Deep Learning tool. To expedite the workflow in this tutorial, you'll only run the process on a small extent. However, in real life, you could process very large amounts of imagery.

  1. On the ribbon, click the Map tab. In the Navigate group, click Bookmarks and choose the Inferencing Area bookmark.

    Inferencing Area bookmark

    The map zooms to the bookmark.

    Inferencing area

  2. In the Geoprocessing pane, click the Back button. Search for and open the Classify Pixels Using Deep Learning tool.

    Classify Pixels Using Deep Learning tool search

  3. Set the following tool parameters:
    • For Input Raster, choose Alexandra_Orthomosaic.
    • For Output Raster Dataset, type Informal_Settlements_Raster.
    • For Model Definition, click the Browse button. Browse to Folders > Alexandra_Informal_Settlements > models > Informal_Settlements_256_5cm_SAMLoRA, select Informal_Settlements_256_5cm_SAMLoRA.dlpk, and click OK.
    • For batch_size, type the same value you used previously (for instance, 2 if your GPU dedicated memory is 4 GB).
    Note:

    If you didn't train the model yourself, for Model Definition, use the provided model located at Folders > Alexandra_Informal_Settlements > Provided_Data > models > Informal_Settlements_256_5cm_SAMLoRA.

    Classify Pixels Using Deep Learning tool parameters

  4. In the tool pane, click the Environments tab.
  5. Under Processing Extent, click the Current Display Extent button.

    Current Display Extent button

    This parameter ensures that only the area currently displayed on the map will be processed.

  6. For Cell Size, type 0.05 to match the cell size you used when training SAMLoRA.

    Cell Size set to 0.05

    Note:

    This tool process can take from under a minute up to 15 minutes, depending on whether you have a GPU and what its specifications are. If you prefer to not run the process yourself, you can use a provided output. In the Catalog pane, browse to Databases > Provided_Data.gdb, right-click Informal_Settlements_Raster, and choose Add To Current Map.

  7. If you chose to run the process yourself, click Run.

    When the process is complete, the new raster layer appears on the map.

    Informal_Settlements_Raster on the map

    Every pixel in the imagery for your chosen extent was classified, and the result was captured in the output raster: areas pertaining to an informal settlement building were assigned a value of 1—symbolized in light orange—and non-building areas were assigned a value of 0—symbolized as transparent.

    Note:

    The color is assigned at random and may vary.
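Like the export step, inferencing can be scripted with the tool's ArcPy equivalent. The following is a sketch under stated assumptions: it expects the ArcGIS Pro Python environment with the Image Analyst extension, reuses the model path and names from this tutorial, and simply prints its parameters when arcpy is unavailable:

```python
# Sketch only: assumes the ArcGIS Pro Python environment (arcpy with the
# Image Analyst extension); paths and names are the ones used in this tutorial.
params = {
    "in_raster": "Alexandra_Orthomosaic",
    "in_model_definition": (
        r"models\Informal_Settlements_256_5cm_SAMLoRA"
        r"\Informal_Settlements_256_5cm_SAMLoRA.dlpk"
    ),
    "arguments": "batch_size 4",  # match the batch size used for training
}

try:
    import arcpy
    arcpy.env.cellSize = 0.05  # match the cell size used to train SAMLoRA
    # In a script, also set arcpy.env.extent to limit processing to a test area.
    result = arcpy.ia.ClassifyPixelsUsingDeepLearning(**params)
    result.save("Informal_Settlements_Raster")
except ImportError:
    for name, value in params.items():
        print(f"{name} = {value}")
```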

Derive and clean up building footprint polygons

Now that you generated an output raster with GeoAI tools, you'll perform some post-processing on it. The goal is to obtain a layer representing the building footprints as polygons. You'll use a custom tool that automates this conversion.

First, you'll retrieve the toolbox that contains the tool, which is hosted in ArcGIS Online.

  1. In the Catalog pane, click the Portal tab and the ArcGIS Online button. Type Post Deep Learning Workflows owner:Esri_Tutorials in the search box and press Enter.

    Toolbox search

    In the search results, right-click Post Deep Learning Workflows and choose Add To Project.

    Add To Project menu option

    The toolbox downloads to your local project.

  2. In the Catalog pane, click the Project tab. Expand Toolboxes and PostDeepLearning.pyt.

    The toolbox contains two tools: one to post-process buildings, which you'll use, and another to post-process roads.

    Note:

    The tool to post-process buildings should work for any type of building, not just informal settlements.

    Optionally, you can look at the source code for this custom tool on GitHub.

  3. Right-click Post Processing Buildings from Raster Output and choose Open.

    Open menu option

  4. For Input Raster, choose Informal_Settlements_Raster.
  5. For Field Name for Raster to Polygon, choose Class.

    This field contains the Building class value.

  6. For Unique Value of Selected Field, choose Building.

    The tool will only focus on the raster cells labeled with this value.

  7. For Output Feature Class, type Informal_Settlements_Final.

    This will be the final output feature class. It will be saved in your project's default geodatabase.

    Post Processing Buildings from Raster Output tool parameters

  8. Click Run.

    The output layer appears. You'll give the layer the same symbology as the Informal_Settlements_Examples layer that you used earlier in the workflow.

  9. In the Contents pane, right-click Informal_Settlements_Examples and choose Copy.

    Copy menu option

  10. Right-click Informal_Settlements_Final and choose Paste Properties.

    Paste Properties menu option

    The layer updates to the light gray symbology. You'll turn off the raster layer to reduce clutter in the map.

  11. In the Contents pane, turn off the Informal_Settlements_Raster layer.

    Informal_Settlements_Raster layer turned off

    You'll use the Swipe tool to compare the final output layer to the original imagery.

  12. In the Contents pane, click the Informal_Settlements_Final layer to select it.

    Informal_Settlements_Final layer selected

  13. On the ribbon, click the Feature Layer tab. In the Compare group, click Swipe.

    Swipe button

  14. On the map, drag from top to bottom to peel off the Informal_Settlements_Final layer and reveal the Alexandra_Orthomosaic imagery underneath.

    Swipe cursor

    The building footprint polygons match the buildings present in the imagery with a high level of accuracy.

  15. On the ribbon, click the Map tab. In the Navigate group, click the Explore button.

    Explore button

Explore expanded results

So far, you've extracted informal settlements for a small extent to expedite the workflow. Next, you'll examine the output for the entire drone imagery extent. You'll also look at the possibility of extracting more than one feature type with the SAMLoRA approach. In this case, the following features were extracted:

  • Different types of buildings—from informal and smaller to more standard and larger.
  • Different types of roads—from narrow and unpaved to larger and paved.

First, you'll open a map containing these examples.

  1. In the Catalog pane, expand Maps. Right-click Explore Outputs and choose Open.

    Open menu option

    The Explore Outputs map appears. For now, only the Alexandra_Orthomosaic imagery layer is turned on. You'll turn on other layers and review them.

  2. In the Contents pane, turn on the Expanded_Training_Areas and Buildings_and_Roads_Examples layers.

    Expanded_Training_Areas and Buildings_and_Roads_Examples layers turned on

    This time, there are several orange polygons, each delineating a different training area. In these training areas, the gray examples capture different types of buildings and roads.

    Expanded_Training_Areas and Buildings_and_Roads_Examples layers displayed on the map

  3. Zoom in and pan to observe the building and road examples.
  4. In the Contents pane, right-click Buildings_and_Roads_Examples and choose Attribute Table.

    Attribute Table menu option

  5. In the Buildings_and_Roads_Examples attribute table, scroll down to examine the Class values.

    Class attribute

    In this case, there are two possible values: Building and Road. SAMLoRA will use that information to learn to identify the two different feature types.

  6. Close the table.

    Following the same workflow as this tutorial, these examples were used as the inputs to generate training data. Then, the SAMLoRA model was trained to learn to identify these features. The model was applied to the entire imagery, and finally feature layers for buildings and roads were derived. You'll examine the resulting outputs.

  7. In the Contents pane, turn off the Expanded_Training_Areas and Buildings_and_Roads_Examples layers.

    Expanded_Training_Areas and Buildings_and_Roads_Examples layers turned off

  8. Turn on the Building_and_Roads_Raster_Full_Extent layer.

    Building_and_Roads_Raster_Full_Extent layer turned on

    The output raster identifies the areas pertaining to buildings (light orange) and roads (navy blue).

    Building_and_Roads_Raster_Full_Extent layer on the map

  9. Zoom in and pan to observe the raster layer.

    The variety of buildings and roads identified is impressive. The model could be applied to a much larger imagery extent, covering an entire city or region.

    Next, you'll review the two derived feature layers:

    • A polygon layer for buildings, derived using the Post Processing Buildings from Raster Output tool that you used earlier.
    • A polyline layer for roads, derived using the Post Processing Roads from Raster Output tool also included in the Post Deep Learning Workflows toolbox.
  10. In the Contents pane, turn off Building_and_Roads_Raster_Full_Extent. Turn on Buildings_Full_Extent and Roads_Full_Extent.

    Building_and_Roads_Raster_Full_Extent layer turned off; Buildings_Full_Extent and Roads_Full_Extent layer turned on

  11. Zoom in and pan to observe the building and road feature layers.

    Buildings_Full_Extent and Roads_Full_Extent layers on the map

    Such a detailed map could be used by your local nonprofit to provide better services to the community.

  12. Press Ctrl+S to save the project.

Apply this workflow to your own imagery — Optional

To apply this workflow to your own imagery, keep the following tips in mind.

  • Where to store your imagery—In this tutorial, you used an image layer that was generated in Site Scan for ArcGIS out of raw high-resolution drone imagery and saved to ArcGIS Online directly from Site Scan. When working with your own data, you can similarly host it on ArcGIS Online. See the Publish hosted imagery layers documentation page to learn more. Another option is to use imagery that is stored on your local computer.
  • Using consistent imagery throughout the workflow—When working with the SAMLoRA model, ensure that you use similar imagery to both train and apply the model. In particular, the spectral bands (such as red, green, and blue), pixel depth (such as 8-bit), and cell size should be the same.
  • Finding your imagery properties—If you are not certain what your imagery properties are, in the Contents pane, right-click your imagery layer and choose Properties. In the Layer Properties window, click the Source tab. Under Raster Information, find the Number of Bands, Cell Size X, Cell Size Y, and Pixel Depth values.
  • Adapting your imagery input—Should you need to adapt your imagery to use as input to an already trained SAMLoRA model (such as choosing a subset of bands or changing the pixel depth), see the Select relevant imagery bands section in the Improve a deep learning model with transfer learning tutorial for step-by-step instructions on the topic. You can also learn how to resample (or change the cell size) of your imagery in the Explore imagery – Spatial resolution tutorial.
  • Creating a training examples polygon layer—Use the step-by-step instructions in the Prepare training samples for transfer learning section of the Improve a deep learning model with transfer learning tutorial.
  • Using an attribute domain—In the training examples polygon layer, the value Building in the Class attribute is actually a label for the underlying numeric value 1. While this is not an essential part of the workflow, you should know that this is implemented with an attribute domain. You can learn more about this technique in the Apply subtypes and domains to Vienna hiking trails tutorial. Alternatively, you could use numeric values for your classes; the output will also be numeric, the extracted features being named 1 instead of Building.
  • Creating a training area layer—You can use the Create Feature Class geoprocessing tool. Then, on the ribbon, on the Edit tab, click the Create tool to trace one or several rectangle polygons outlining the areas where you want to provide training examples.
  • Choosing the cell size—As explained in the Learn about training chips and cell size section, remember to experiment with the cell size to generate tiles that are optimized for the features you plan to extract.
  • Experimenting on a small extent—While experimenting, you can limit the processing to a small extent for faster results. On the Environments tab, under Processing Extent, click the Draw Extent button and draw a small polygon on the map. Alternatively, zoom in on the map, and click the Current Display Extent button.
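The consistency checks described in these tips (band count, cell size, pixel depth) can be captured in a small helper. The property names below are hypothetical stand-ins for the values you read from the layer's Source properties:

```python
def imagery_mismatches(training_props, target_props):
    """List the properties that differ between the imagery used to train
    SAMLoRA and the imagery you want to apply it to. The keys are
    hypothetical stand-ins for values read from the layer's Source tab."""
    keys = ("number_of_bands", "cell_size_m", "pixel_depth_bits")
    return [k for k in keys if training_props.get(k) != target_props.get(k)]

training = {"number_of_bands": 3, "cell_size_m": 0.05, "pixel_depth_bits": 8}
target   = {"number_of_bands": 4, "cell_size_m": 0.05, "pixel_depth_bits": 8}
print(imagery_mismatches(training, target))  # -> ['number_of_bands']
```

An empty list means the imagery can be used with the trained model as-is; any listed property needs to be adapted first, as described in the tips above.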

In this tutorial, you used the SAMLoRA approach to identify informal settlements in your imagery. You generated training data and used it to train the foundational model. You applied the trained model to classify informal settlements in your imagery and then derived and cleaned up building footprint polygons. Finally, you explored expanded results.

You can find more tutorials like these in the Try deep learning in ArcGIS series.

You can find more tutorials in the tutorial gallery.