Create training samples

Inventorying and assessing the health of each palm tree on the Kolovai, Tonga, plantation would take a lot of time and a large workforce. To simplify the process, you'll use a deep learning model in ArcGIS Pro to identify trees and then calculate their health based on a measure of vegetation greenness. The first step is to find imagery that shows Kolovai, Tonga, and has a fine enough spatial and spectral resolution to identify trees. Once you have the imagery, you'll create training samples and convert them to a format that a deep learning model can use. For the model to recognize what it's tasked with finding, you need to label example palm trees in the imagery so that it can learn to identify similar pixels and tree sizes.

Note:

Using the deep learning tools in ArcGIS Pro requires that you have the correct deep learning libraries installed on your computer. If you do not have these files installed, ensure that ArcGIS Pro is closed and follow the steps in the Get ready for deep learning in ArcGIS Pro instructions. Those instructions also show you how to check whether your computer's hardware and software can run deep learning workflows, and include other useful tips. Once done, you can continue with this tutorial.

Download the imagery

Accurate and high-resolution imagery is essential when extracting features. The model will only be able to identify the palm trees if the pixel size is small enough to distinguish palm canopies. Additionally, to calculate tree health, you'll need an image with spectral bands that will enable you to generate a vegetation health index. You'll find and download the imagery for this study from OpenAerialMap, an open-source repository of high-resolution, multispectral imagery.

  1. Go to the OpenAerialMap website.
  2. Click Start Exploring.

    In the interactive map view, you can zoom, pan, and search for imagery available anywhere on the planet. The map is divided into a grid. When you point to a grid box, a number appears, indicating how many images are available for that box.

  3. In the search box, type Kolovai and press Enter. In the list of results, click Kolovai.

    The map zooms to Kolovai. This is a town on the main island of Tongatapu with a coconut plantation.

  4. If necessary, zoom out until you see the label for Kolovai on the map. Click the grid box directly over Kolovai.

    Kolovai on the map

  5. In the side pane, click Kolovai UAV4R Subset (OSM-Fit) by Cristiano Giovando.

    Choose Kolovai image tile.

  6. Click the download button to download the raw .tif file. Save the image to a location of your choice.

    Download imagery

    Because of the file size, the download may take a few minutes.

    The default name of the file is 5b1b6fb2-5024-4681-a175-9b667174f48c.

Explore the data

To begin the classification process, you'll download an ArcGIS Pro project containing a few bookmarks to guide you through the process of creating training samples.

  1. Download the Palm_Tree_Detection .zip file and extract its contents to a suitable location on your computer.

    Because of the file size, the download may take a few minutes.

  2. If necessary, open the extracted Palm_Tree_Detection folder. Open the Kolovai folder. Double-click the Kolovai ArcGIS project file.

    If prompted, sign in to your ArcGIS Online or ArcGIS Enterprise account.

    Note:

    If you don't have an organizational account, see options for software access.

    The project opens with a blank map; you will add the imagery you downloaded.

  3. On the ribbon, on the Map tab, in the Layer group, click Add Data.

    Add data to the map.

    The Add Data window appears.

  4. In the Add Data window, under Computer, browse to the Kolovai (recall that the file is named 5b1b6fb2-5024-4681-a175-9b667174f48c) image you downloaded from OpenAerialMap. Select the .tif file and click OK.
    Note:

    If the Calculate statistics window appears, click Yes.

    The Kolovai image is added to your map. The layer is listed in the Contents pane by its unique identifier, which isn't meaningful. It's best practice to rename the layer to something you understand.

  5. In the Contents pane, click the current layer name two times and type Imagery. Press Enter.

    Rename the layer.

  6. Pan and zoom around the map to get an idea of what the palm farm looks like.

    A large number of coconut palm trees are in this image. Counting them individually, in the field or by visually inspecting the image, would take days. To enable a deep learning model to do this work for you, you'll create a sample of palm trees to use for training your model.

Create training schema

Creating good training samples is essential when training a deep learning model, or any image classification model. It is also often the most time-consuming step in the process. To provide your deep learning model with the information it needs to extract all the palm trees in the image, you'll create features for a number of palm trees to teach the model what the size, shape, and spectral signature of coconut palms may be. These training samples are created and managed through the Label Objects for Deep Learning tool.

Note:

Creating a training dataset entails digitizing hundreds of features and can be time consuming. If you do not want to create the training samples, a dataset has been provided in the Results geodatabase in the Provided Results folder. You can advance to the Create image chips section.

  1. In the Contents pane, make sure the Imagery layer is selected.
  2. On the ribbon, click the Imagery tab. In the Image Classification group, click Classification Tools and choose Label Objects for Deep Learning.

    Image classification tool

    The Image Classification pane appears with a blank schema. You'll create a schema with only one class because you're only interested in extracting coconut palm trees from the imagery.

  3. In the Image Classification pane, right-click New Schema and choose Edit Properties.

    Edit Properties button

  4. For Name, type Coconut Palms.

    Name parameter

  5. Click Save.

    The schema is renamed in the Image Classification pane. You can now add classes to it.

  6. Right-click Coconut Palms and choose Add New Class.

    Add a new class to the schema.

    The Add New Class pane appears. You will set some parameters for the class that will train the model.

  7. For Name, type Palm.

    Name parameter

    Next is the value, or the code used by the computer when you train a model. The palm trees will be given a value of 1.

  8. For Value, type 1.

    Value parameter

    Finally, you'll choose the color used when you identify features. The color selected is arbitrary, but since you are digitizing features on imagery that is mostly green, yellow is highly visible.

  9. For Color, choose a bright yellow, such as Solar Yellow.
    Tip:

    To see the name of a color, point to the color square.

    Color parameter

  10. Click OK.

    New Palm class

    The Palm class is added to the Coconut Palms schema in the Image Classification pane. You'll create features with the Palm class to train the deep learning model.

Create training samples

To make sure you're capturing a representative sample of trees in the area, you'll digitize features throughout the image. These features are read into the deep learning model in a specific format called image chips. Image chips are small blocks of imagery cut from the source image. Once you've created a sufficient number of features in the Image Classification pane, you'll export them as image chips with metadata.
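The chipping step can be pictured as laying a regular grid of fixed-size tiles over the source image. The sketch below is purely illustrative (it is not how ArcGIS Pro is implemented); the `chip_origins` function name is hypothetical, and the 448-pixel tile size matches the value used later in this tutorial.

```python
# Illustrative sketch: compute the upper-left (x, y) corner of each
# fixed-size chip that fits inside a source image. With no stride
# argument, chips tile the image without overlapping.
def chip_origins(width, height, tile=448, stride=None):
    stride = stride or tile
    origins = []
    for y in range(0, height - tile + 1, stride):
        for x in range(0, width - tile + 1, stride):
            origins.append((x, y))
    return origins

# A 1344 x 896 image yields a 3 x 2 grid of 448-pixel chips.
print(len(chip_origins(1344, 896)))  # 6
```

Each chip is exported together with metadata recording which labeled features fall inside it, which is what makes the chips usable as training data.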

  1. On the ribbon, click the Map tab. In the Navigate group, click Bookmarks and choose Training Location 1.

    Training Location 1 bookmark

    The map zooms to the first area of sample palm trees that you'll identify.

  2. In the Image Classification pane, select the Palm class and click the Circle tool.

    Circle tool

    You'll use this tool to draw circles around each palm tree in your current display. Circles are drawn from the center of the feature outward, measuring the radius of the feature.

  3. On the map, click the center of a palm tree and draw a circle around a single tree.

    Make training samples.

    A new palm record is added in the Labeled Objects group of the Image Classification pane. You'll create a palm record for every tree you can to ensure there are many image chips with all the palm trees marked.

  4. Draw circles around each tree in the map display.
    Note:

    If you would like extra guidance to help you understand how to draw these circles, or if you would like to skip digitizing the trees, a training sample dataset is available in the folder you downloaded. On the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Databases folder and double-click the Results geodatabase. Click PalmTraining and click OK.

    When you're finished with this first bookmark's extent, you'll have approximately 180 samples recorded in the Training Samples Manager pane.

    Palm tree training samples

    Here are a few details to help you as you identify the trees:

    • You can zoom and pan around the map to make digitizing easier but be sure to digitize as many of the trees within the extent of the bookmark as you can.
    • If you are not sure about the exact location of a tree, it is OK to skip it. You want to ensure that you create accurate training samples.
    • It is OK if the circles you draw overlap.
    • Your final model will take into account the size of the trees you identify, so be sure to mark both small and large palm trees.

  5. Create training samples for every palm tree on each of the six remaining Training Location bookmarks.

    Overview of training data

    Digitizing training samples can be a time-consuming process, but it pays off to have a large number of samples. The more samples you provide the model with as training data, the more accurate the results will be.

    As an example, the training dataset used to train the model provided with this tutorial had more than 600 samples.

  6. When you're done creating samples, in the Image Classification pane, click Save.

    Save the training samples.

  7. In the Save current training samples window, under Project, click Databases and double-click the default project geodatabase, Kolovai.gdb.
  8. Name the feature class PalmTraining and click Save.
  9. Close the Image Classification pane. If the Label Objects window appears, click Yes.

    Although you saved the training samples to a geodatabase, you need to refresh the geodatabase to be able to access this dataset.

  10. On the ribbon, click the View tab. In the Windows group, click Catalog Pane.

    Catalog Pane button

    The Catalog pane appears.

  11. Expand Databases. Right-click Kolovai and choose Refresh.

    Your PalmTraining feature class is now visible.

    PalmTraining feature class

  12. On the Quick Access Toolbar, click Save.

    Save button

Create image chips

The last step before training the model is exporting your training samples to the correct format as image chips.

  1. At the top of the ArcGIS Pro application window, in Command Search, type Export Training Data for Deep Learning. Click Export Training Data for Deep Learning.

    Command Search

    The Geoprocessing pane appears.

    You'll set the parameters for creating image chips. First, you'll choose the imagery used for training.

  2. For Input Raster, choose Imagery.

    Input Raster parameter

    Next, you'll create a folder to store the image chips.

  3. For Output Folder, type imagechips.

    Output Folder parameter

    Next, you'll select the feature class containing the training samples you created.

  4. For Input Feature Class Or Classified Raster Or Table, browse to the Kolovai geodatabase. Click PalmTraining and click OK.

    Input Feature Class Or Classified Raster Or Table parameter

    Note:

    If you did not draw the training samples, a dataset has been provided for you to use. Browse to Databases and open the Results geodatabase. Select PalmTraining and click OK.

    Next, you'll select the field from your training data that holds the class value for each feature you drew. Recall that your palm class value was 1.

  5. For Class Value Field, choose Classvalue.

    Class Value Field parameter

    Next, you'll choose the output format for your chips. The format you choose is based on the type of deep learning model you want to train.

  6. For Image Format, choose JPEG.

    Image Format parameter

    Next, you'll set the size, in pixels, for each of your image chips. The image chip size is determined by the size of the features you are trying to detect. If a feature is larger than the tile's x and y dimensions, your model will not provide good results.

  7. For Tile Size X and Tile Size Y, type 448.

    Tile Size parameters

    Now, you'll ensure that your output format is correct. This, too, is dependent on the type of deep learning model that you are creating.

  8. For Metadata Format, ensure that PASCAL Visual Object Classes is chosen.

    Metadata Format parameter

    Before you run the tool and create image chips, you'll set the tool's environments. In particular, you need to know the resolution of the imagery. It's a best practice to create image chips at the same resolution as your input imagery.

  9. Click the Environments tab.

    Environments tab

  10. Under Raster Analysis, for Cell Size, choose Same as layer Imagery.

    Cell Size parameter

  11. Click Run.

    Depending on your computer's hardware, the tool will take a few minutes to run.

    The image chips are created and are ready to be used for training a deep learning model.

  12. Save your project.

In this module, you downloaded and added open-source imagery to a project, created training samples using the Training Samples Manager pane, and exported them to a format compatible with a deep learning model for training. Next, you'll create a deep learning model and identify all the trees on the plantation.


Detect palm trees with a deep learning model

Before you can begin to detect palm trees, you need to train a model. Training a model entails taking your training sample data and putting it through a neural network over and over again. This computationally intensive process will be handled by a geoprocessing tool, but this is how the model will learn what a palm tree is and is not. Once you have a model, you'll apply it to your imagery to automatically identify trees.

Train a deep learning model

The Train Deep Learning Model geoprocessing tool uses the image chips you labeled to determine what combinations of pixels in a given image represent palm trees. You'll use these training samples to train a single-shot detector (SSD) deep learning model.

Depending on your computer's hardware, training the model can take more than an hour. It's recommended that your computer be equipped with a dedicated graphics processing unit (GPU). If you do not want to train the model, a deep learning model has been provided to you in the project's Provided Results folder. Optionally, you can skip ahead to the Palm tree detection section of this tutorial.

  1. On the ribbon, in Command Search, type Train Deep Learning Model. Select Train Deep Learning Model.

    The Geoprocessing pane appears.

    First, you'll set the tool to use your training samples.

  2. In the Geoprocessing pane, for Input Training Data, browse to the Kolovai project folder. Select the imagechips folder and click OK.

    Input Training Data parameter

    The folder may take a few seconds to load.

    The imagechips folder contains two folders, two text files, a .json file, and an .emd file created by the Export Training Data for Deep Learning tool. The esri_model_definition.emd file is a template that is filled in when the model is trained with information such as the deep learning framework, the file path to the trained model, class names, model type, and the image specifications of the imagery used for training. The .emd file is the bridge between the trained model and ArcGIS Pro.

    Next, you'll create a folder to store your model.

  3. For Output Model, type classify_palms.

    Output Model parameter

    Next, you'll set the number of epochs that your model will run. An epoch is a full cycle through the training dataset. During each epoch, the training dataset you stored in the imagechips folder will be passed forward and backward through the neural network one time.

  4. For Max Epochs, type 50.

    Max Epochs parameter

    Next, you'll ensure that you are training the correct model type for detecting objects in imagery. The model type will determine the deep learning algorithm and neural network that you will use to train your model. In this case, you're using the single-shot detector method because it's optimized for object detection.

  5. Expand Model Parameters and make sure Model Type is set to Single Shot Detector (Object detection).

    Model Type parameter

    Next, you'll set the batch size. This parameter determines the number of training samples that will be trained at a time.

  6. For Batch Size, type 8.

    Batch Size parameter
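Epochs and batch size together determine how many optimization steps training performs: each epoch processes every chip once, in batches. The arithmetic below is a generic back-of-the-envelope sketch, not ArcGIS-specific code; the 600-sample figure echoes the training dataset size mentioned earlier in this tutorial.

```python
import math

# Rough sketch: total optimization steps for a training run.
# Each epoch covers all samples once, ceil(n / batch) batches at a time.
def training_steps(n_samples, batch_size, epochs):
    steps_per_epoch = math.ceil(n_samples / batch_size)
    return steps_per_epoch * epochs

# ~600 chips, batch size 8, 50 epochs:
print(training_steps(600, 8, 50))  # 3750
```

Larger batch sizes mean fewer steps per epoch but more GPU memory per step, which is why reducing the batch size is the usual fix when training fails on limited hardware.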

    Next, you'll ensure that the model runs for all 50 epochs.

  7. Expand Advanced and uncheck Stop when model stops improving.

    Stop when model stops improving parameter

  8. Accept the rest of the default parameters.

    Model arguments, the parameter values used to train the model, vary based on the model type you choose, and can be customized. For more information about choosing model arguments, see the Train Deep Learning Model documentation.

    Finally, if you have a GPU, you'll set this tool to run on your computer's GPU for faster processing. Otherwise, skip the next step.

  9. Optionally, if your computer has a GPU, click the Environments tab. Under Processor Type, for Processor Type, choose GPU.

    Processor Type parameter

  10. Click Run.
    Note:

    This tool can take more than an hour to run.

    If the model fails to run, reducing the Batch Size parameter can help. You may have to set this parameter to 4 or 2 and rerun the tool. However, this may reduce the quality of your trained model's results.

Palm tree detection

The bulk of the work in extracting features from imagery is preparing the data, creating training samples, and training the model. Now that these steps have been completed, you'll use a trained model to detect palm trees throughout your imagery. Object detection is a process that typically requires multiple tests to achieve the best results. There are several parameters that you can alter to allow your model to perform best. To test these parameters quickly, you'll try detecting trees in a small section of the image. Once you're satisfied with the results, you'll extend the detection tools to the full image.

Note:

If you did not train a model in the previous section, a deep learning package has been provided for you in the Provided Results folder.

Classifying features is a GPU-intensive process and can take a while to complete depending on your computer's hardware. If you choose not to detect the palm trees, results have been provided and you may skip ahead to the Refine detected features section.

  1. On the ribbon, click the Map tab. In the Navigate group, click Bookmarks. Choose Detection Area.
  2. On the ribbon, in Command Search, type Detect Objects Using Deep Learning. Choose Detect Objects Using Deep Learning.

    First, you'll set the imagery from which you want to detect features.

  3. In the Detect Objects Using Deep Learning tool, for Input Raster, choose Imagery.

    Input Raster parameter

    Next, you'll name the feature class of detected objects.

  4. For Output Detected Objects, type DetectedPalms.

    Output Detected Objects parameter

    Next, you'll choose the model you created to detect the palm trees.

  5. For Model Definition, browse to the classify_palms folder. Click the classify_palms.dlpk deep learning model package file. Click OK.
    Note:

    If you did not train a deep learning model, browse to the project's folder. Open Provided Results. Open classify_palms. Click the classify_palms.dlpk deep learning model package file. Click OK.

    Model Definition parameter

    Next, you'll set some of the model's arguments. Arguments are used to adjust how the model runs for optimal results.

    When a convolutional neural network performs convolution on imagery, the output shrinks, and pixels at the edge of the image contribute much less to the analysis than inner pixels do. The padding parameter adds an additional boundary of pixels to the outside edges of the image, which reduces both the shrinking and the loss of information from the valid edge pixels. You'll leave this as the default.

    Padding pixels
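The effect of padding on output size can be shown with the standard convolution arithmetic. This is a generic illustration of the concept, not code from the tool; the function name is hypothetical.

```python
# Spatial size of a convolution's output for input size n, kernel
# size k, stride s, and padding p (standard formula).
def conv_output_size(n, k, s=1, p=0):
    return (n + 2 * p - k) // s + 1

# A 448-pixel tile with a 3x3 kernel loses a pixel border without
# padding; one pixel of padding preserves the full size.
print(conv_output_size(448, 3, p=0))  # 446
print(conv_output_size(448, 3, p=1))  # 448
```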

    The threshold argument is the confidence threshold—how much confidence is acceptable to label an object a palm tree? This number can be tweaked to achieve desired accuracy.

  6. For threshold, type 0.2.

    threshold parameter
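Conceptually, the threshold is a simple filter on the model's per-detection confidence scores. The sketch below illustrates that idea only; the scores and the `apply_threshold` function are made-up, not part of the ArcGIS tool.

```python
# Sketch of a confidence threshold: detections scoring below the
# cutoff are discarded before any further processing.
def apply_threshold(detections, threshold=0.2):
    return [d for d in detections if d["confidence"] >= threshold]

candidates = [
    {"id": 1, "confidence": 0.91},
    {"id": 2, "confidence": 0.35},
    {"id": 3, "confidence": 0.12},  # below 0.2 -> dropped
]
print([d["id"] for d in apply_threshold(candidates)])  # [1, 2]
```

A lower threshold keeps more candidate trees (including more false positives); a higher threshold keeps only confident detections (and may miss real trees).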

    Next, you'll set the nms_overlap argument. This controls how much two detected features can overlap before they are considered duplicates of the same object. A lower value tolerates less overlap, so one of two overlapping detections is more likely to be removed.

  7. For nms_overlap, keep the default value of 0.1.

    nms_overlap parameter

    Next, you'll set the batch size.

  8. For batch_size, type 8.

    batch_size parameter

    Before running the tool, you'll set some environments.

  9. Click the Environments tab.

    Next, you'll set a processing extent. This parameter forces the tool to only process the imagery that falls within the current map extent. Since the object detection process is hardware intensive, it is best to run the tool on a smaller area to test your parameters before running it on a full imagery dataset.

  10. Under Processing Extent, set Extent to Current Display Extent.

    Extent parameter

    After you choose Current Display Extent, the coordinates of the extent's geographic bounding box are displayed.

  11. Under Raster Analysis, for Cell Size, choose Same as layer Imagery.
  12. Optionally, if your computer has a GPU, under Processor Type, for Processor Type, choose GPU.
  13. Click Run.

    The tool will take several minutes to run, depending on your hardware and whether you are running on a CPU or GPU.

    Observe your results. You can try experimenting with the arguments to see how this impacts your results.

    Once you have arguments that yield good results, you'll detect palm trees across the entire image.

  14. On the Environments tab, for Processing Extent, choose Default.
  15. Click Run.

    Since the tool is running on the full imagery dataset, processing time will increase based on your computer's hardware.

    Note:

    If you do not run the model to detect the palm trees, a dataset of palm trees has been provided. To add the DetectedPalms feature class to the map, on the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Kolovai folder and to the Provided Results folder, open the Results geodatabase, and double-click the DetectedPalms feature class.

    When the tool finishes, observe your results. The color of your final results may differ from the image provided.

    Palm trees detected by deep learning tools.

    You'll notice that some of your palm trees have overlapping features. This means that many trees have been identified multiple times, leading to an erroneous count of the total number of trees. After you change the symbology to make this issue clearer, you'll remove these overlapping features with a geoprocessing tool.

    Features overlapping

  16. In the Contents pane, double-click the DetectedPalms layer's symbol.

    DetectedPalms symbol

    The Symbology pane appears.

  17. Click the Properties tab.

    Properties tab

  18. Under Appearance, set the following:

    • For Color, choose No color.
    • For Outline color, choose Solar yellow.
    • For Outline width, type 1.5.

    Layer appearance options

  19. Click Apply.

    Observe your results again now that the symbology has been changed.

    Updated symbology

    Next, you'll remove the duplicate polygons.

  20. Save the project.

Refine detected features

Ensuring an accurate count of palm trees is important. Since many trees have been counted multiple times, you'll use the Non Maximum Suppression tool to resolve this. However, you have to be careful: palm trees' canopies can overlap, so you'll remove features that are clearly duplicates of the same tree while ensuring that separate trees with some overlap are not removed.

  1. On the ribbon, in Command Search, type Non Maximum Suppression. Select Non Maximum Suppression.

    First, you'll choose your layer of palm trees created by the model.

  2. For Input Feature Class, choose DetectedPalms.

    Input Feature Class parameter

    Note:

    If you skipped the previous section, a dataset of palm trees has been provided. To add the DetectedPalms feature class to the map, on the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Kolovai folder and to the Provided Results folder, open the Results geodatabase, and double-click the DetectedPalms feature class.

    Each palm tree in this dataset has a confidence score to represent how accurately the model identified each feature. You'll enter this field into the tool.

  3. For Confidence Score Field, choose Confidence.

    Confidence Score Field parameter

    Each feature detected has also been marked with its appropriate class. Recall that this model had one class, Palm. This was recorded when you used the model.

  4. For Class Value Field, choose Class.

    Class Value Field parameter

  5. For Output Feature Class, type DetectedPalms_NMS.

    Output Feature Class parameter

    The Max Overlap Ratio determines how much overlap there can be between two features before they are considered the same feature. A higher value indicates that there can be more overlap between two features. The feature with the lower confidence will be removed. You'll set the tool to remove any trees with more than 50 percent overlap.

  6. For Max Overlap Ratio, type 0.5.

    Max Overlap Ratio parameter
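The idea behind non maximum suppression can be sketched in a few lines: sort detections by confidence, then discard any detection that overlaps an already-kept one by more than the threshold. This is a minimal illustration on axis-aligned boxes using one common overlap convention (intersection over the smaller box); it is not the exact geometry the ArcGIS tool uses.

```python
# Overlap ratio of two boxes (x1, y1, x2, y2, confidence), measured
# as intersection area over the smaller box's area.
def overlap_ratio(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / min(area_a, area_b)

# Keep detections in descending confidence order; drop any box that
# overlaps an already-kept box by more than max_overlap.
def nms(boxes, max_overlap=0.5):
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(overlap_ratio(box, k) <= max_overlap for k in kept):
            kept.append(box)
    return kept

# Two detections of the same tree plus one separate tree:
boxes = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.6), (20, 20, 30, 30, 0.8)]
print(len(nms(boxes)))  # 2
```

The lower-confidence duplicate is removed while the well-separated tree survives, which is exactly the behavior you want from the Max Overlap Ratio setting.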

  7. Click Run.

    A new layer is added in the Contents pane. It has the same symbology as the DetectedPalms layer.

  8. In the Contents pane, turn off the DetectedPalms layer.

    You'll see that there are fewer trees with overlap in the new layer.

    You can rerun the tool as needed with different Max Overlap Ratio values to achieve optimal results.

  9. Remove the DetectedPalms layer from the map.
  10. In the Contents pane, click DetectedPalms_NMS two times and rename it Detected Palm Trees.

    Rename the layer.

  11. Turn off the Detected Palm Trees layer.
  12. Save your project.

You've just trained and used a model to detect palm trees. Next, you'll use raster functions to obtain an estimate of vegetation health for each tree detected in your study area.

Note:

It is important to realize that your model's results might not be perfect the first time. Training and implementing a deep learning model is a process that can take several iterations to provide the best results. Better results can be achieved by doing the following:

  • Increasing your initial sample size of features
  • Ensuring that your training samples are accurately capturing the features you want to detect
  • Making sure your training samples include features of different sizes
  • Adjusting the geoprocessing tools' parameters
  • Retraining an existing model using the Train Deep Learning Model tool's advanced parameters


Estimate vegetation health

In the previous module, you used a deep learning model to extract coconut palm trees from imagery. In this module, you'll use the same imagery to estimate vegetation health by calculating a vegetation health index.

To assess vegetation health, you'll calculate the Visible Atmospherically Resistant Index (VARI), which was developed as an indirect measure of leaf area index (LAI) and vegetation fraction (VF) using only reflectance values from the visible wavelengths:

VARI = (Rg - Rr) / (Rg + Rr - Rb)

where Rr, Rg, and Rb are reflectance values for the red, green, and blue bands, respectively (Gitelson et al. 2002).

Typically, you would use reflectance values in both the visible and the near infrared (NIR) wavelength bands to estimate vegetation health, as with the normalized difference vegetation index (NDVI). However, the imagery you downloaded from OpenAerialMap is a multiband image with three bands, all in the visible electromagnetic spectrum, so you'll use the VARI instead.
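The VARI formula above is straightforward to evaluate for a single pixel. The sketch below shows the arithmetic with made-up reflectance values; the `vari` function is illustrative, not part of ArcGIS.

```python
# VARI for one pixel: (green - red) / (green + red - blue).
def vari(red, green, blue):
    return (green - red) / (green + red - blue)

# Healthy vegetation reflects more green than red, giving a high VARI:
print(round(vari(0.10, 0.30, 0.08), 3))  # 0.625
```

The Band Arithmetic raster function you'll use next applies this same calculation to every pixel in the image.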

Calculate VARI

The VARI measurement requires the input of the three bands within the OpenAerialMap imagery. To calculate VARI, you'll use the Band Arithmetic raster function. Raster functions are quicker than geoprocessing tools because they don't create a new raster dataset. Instead, they perform real-time analysis on pixels as you pan and zoom.

  1. On the ribbon, click the Imagery tab. In the Analysis group, click Raster Functions.

    Raster Functions button

    The Raster Functions pane appears.

  2. In the Raster Functions pane, search for and select the Band Arithmetic raster function.

    Band Arithmetic function

  3. In the Band Arithmetic Properties function, set the following parameters:

    • For Raster, choose the Imagery raster layer.
    • For Method, choose VARI. The function requires you to provide the band index number that corresponds to the input bands for the formula. The input underneath the Band Indexes parameter shows Red Green Blue, so you'll provide the band index numbers that correspond with the Red, Green, and Blue bands, in that order. Make sure to put a single space between each band.
    • For Band Indexes, type 1 2 3.

    Band Arithmetic parameters

  4. Click Create new layer.

    The VARI layer is added to the Contents pane as Band Arithmetic_Imagery. By zooming and panning around the area, you can see features such as the coastline, roads, buildings, and fields.

    VARI raster result

  5. In the Contents pane, make sure the Band Arithmetic_Imagery layer is selected.

    Next, you'll change how the raster draws on the map to make the VARI symbology clearer.

  6. On the ribbon, click the Raster Layer tab.
  7. In the Rendering group, click the Stretch Type drop-down menu and choose Standard Deviation.

    Change the raster stretch type.

  8. In the Contents pane, rename Band Arithmetic_Imagery to VARI.

    Change the layer name.

Extract VARI to Coconut Palms

Having a raster layer showing VARI is helpful, but not necessarily actionable. To figure out which trees need attention, you want to know the average VARI for each individual tree. You'll extract the underlying average VARI value for each tree and symbolize the results to show which trees are healthy and which need maintenance.

First, you'll convert the polygon features to points.

  1. On the ribbon, in Command Search, type Feature To Point. Choose Feature To Point.
  2. In the Feature To Point tool, enter the following parameters:

    • For Input Features, select the Detected Palm Trees layer.
    • For Output Feature Class, type PalmTree_Points.

    Feature To Point tool

  3. Click Run.

    You have a point feature class with a point at the centroid of each detected polygon. If you zoom in to various locations and use the Measure tool, you'll see that the palm trees have an average radius of roughly 3 meters. In the next step, you'll create a polygon layer with a 3-meter buffer around each point.

    Note:

    The Measure tool is found on the ribbon, on the Map tab, in the Inquiry group.

  4. On the ribbon, in Command Search, type Pairwise Buffer. Choose Pairwise Buffer.
  5. In the Pairwise Buffer tool, enter the following parameters:

    • For Input Features, choose PalmTree_Points.
    • For Output Feature Class, type PalmTreeBuffer.
    • For Distance, type 3 and choose Meters.

    Pairwise Buffer tool

  6. Click Run.

    You have a polygon feature class depicting the location and general shape of each palm tree canopy.

  7. In the Contents pane, turn off the VARI and PalmTree_Points layers.

    Your map shows the estimated canopies of the palm trees in the imagery.

    Buffered palm trees
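Conceptually, a buffer tool densifies a circle of the given radius around each point and stores it as a polygon. A minimal sketch of that idea (illustrative only; the actual tool handles projections, units, and dissolve options):

```python
import math

def buffer_point(x, y, radius, segments=32):
    """Approximate a circular buffer around a point as a polygon.

    Returns a list of (x, y) vertices evenly spaced around the circle;
    more segments give a smoother approximation of the true circle.
    """
    return [
        (x + radius * math.cos(2 * math.pi * i / segments),
         y + radius * math.sin(2 * math.pi * i / segments))
        for i in range(segments)
    ]

canopy = buffer_point(0.0, 0.0, 3.0)  # a 3-meter buffer, as in step 5
# Every vertex sits exactly 3 units from the tree point.
print(all(abs(math.hypot(px, py) - 3.0) < 1e-9 for px, py in canopy))  # True
```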

    Next, you'll extract the average VARI value for each polygon. The Zonal Statistics as Table tool goes through each polygon you created one at a time, finds all of the VARI pixels that fall within the polygon, and calculates the average VARI value for that polygon.

  8. On the ribbon, in Command Search, type Zonal Statistics as Table. Choose Zonal Statistics as Table.
  9. In the Zonal Statistics as Table tool, enter the following parameters:

    • For Input raster or feature zone data, choose PalmTreeBuffer.
    • For Zone Field, choose ORIG_FID.
    • For Input Value Raster, choose VARI.
    • For Output Table, type MeanVARI_per_Palm.
    • Ensure Ignore NoData in Calculations is checked.
    • For Statistics Type, choose Mean.

    Setting the Zone Field to ORIG_FID ensures that you get statistics for each tree separately. This attribute is the unique ID from the original Detected Palm Trees layer.

    Zonal Statistics as Table parameters

  10. Click Run.

    The output table is added to the bottom of the Contents pane. If you open it, you'll see the original FID value and a column called MEAN containing the average VARI value. You'll join this table to the PalmTreeBuffer layer to get a feature class with the average VARI for each detected palm tree.
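The zonal mean amounts to grouping VARI pixels by the zone they fall in and averaging each group while skipping NoData. A conceptual NumPy sketch, assuming the buffer polygons have been rasterized into a zone raster of ORIG_FID values (the actual tool works with polygon zones directly):

```python
import numpy as np

def zonal_mean(values, zones):
    """Mean of `values` within each zone id, ignoring NoData (NaN).

    values: 2D float raster (e.g. VARI); zones: 2D int raster where
    each buffered tree polygon has been rasterized to its ORIG_FID.
    Returns a {zone_id: mean} dictionary, like the output table's
    ORIG_FID and MEAN columns.
    """
    out = {}
    for zone in np.unique(zones):
        cells = values[zones == zone]
        out[int(zone)] = float(np.nanmean(cells))
    return out

vari_raster = np.array([[0.2, 0.4],
                        [np.nan, 0.6]])
zones = np.array([[1, 1],
                  [2, 2]])
# Zone 1 averages to 0.3; zone 2 to 0.6, with the NaN cell ignored.
print(zonal_mean(vari_raster, zones))
```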

  11. On the ribbon, in Command Search, type Join Field. Choose Join Field.
  12. In the Join Field tool, enter the following parameters:

    • For Input Table, choose PalmTreeBuffer.
    • For Input Join Field, choose ORIG_FID.
    • For Join Table, choose MeanVARI_per_Palm.
    • For Join Table Field, choose ORIG_FID.
    • For Transfer Fields, choose MEAN.

    Join Field tool parameters

  13. Click Run.

    The PalmTreeBuffer layer now has a field called MEAN added to it. You'll rename this layer and symbolize it for a better understanding of the data.
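Conceptually, Join Field is a keyed lookup from the join table into the target table. A minimal sketch with plain Python dicts (illustrative only; the field names match the parameters entered above):

```python
def join_field(target_rows, join_rows, key, transfer_field):
    """One-to-one attribute join, in the spirit of the Join Field tool.

    target_rows / join_rows: lists of dicts representing table rows.
    Target rows with no match in the join table receive None (null)
    for the transferred field.
    """
    lookup = {row[key]: row[transfer_field] for row in join_rows}
    for row in target_rows:
        row[transfer_field] = lookup.get(row[key])
    return target_rows

buffers = [{"ORIG_FID": 1}, {"ORIG_FID": 2}]
stats = [{"ORIG_FID": 1, "MEAN": 0.31}, {"ORIG_FID": 2, "MEAN": -0.05}]
print(join_field(buffers, stats, "ORIG_FID", "MEAN"))
```

Unlike Add Join, Join Field permanently writes the transferred field into the target table, which is why the MEAN field persists in the layer.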

  14. In the Contents pane, rename PalmTreeBuffer to Palm Trees VARI.

    Palm Trees VARI layer

  15. In the Contents pane, verify that Palm Trees VARI is selected. On the ribbon, on the Feature Layer tab, in the Drawing group, click Symbology.

    Symbology button

    The Symbology pane appears.

  16. For Primary symbology, choose Graduated Colors.

    Graduated color symbology

  17. For Field, choose MEAN.

    Field parameter

  18. If necessary, for Method, choose Natural Breaks (Jenks) and set Classes to 4.

    Method parameter

  19. For Color scheme, click the drop-down menu and check Show all and Show names. Scroll and select the Red-Yellow-Green (4 Classes) color scheme.

    Red-yellow-green color scheme

  20. Under Classes, click each label and rename the classes from top to bottom as follows: Needs Inspection, Declining Health, Moderate, and Healthy.

    Category labels

    You now have a map with a feature class showing the location and health for each palm tree in the image.

    Tree health by imagery
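Graduated colors symbology simply maps each MEAN value into one of the four classes by comparing it against the class break values. The sketch below uses hypothetical break values for illustration; Natural Breaks (Jenks) derives the real breaks from the distribution of MEAN values in your data:

```python
import bisect

# Hypothetical class breaks; Natural Breaks (Jenks) computes the
# actual upper bounds from your data's MEAN values.
BREAKS = [-0.05, 0.05, 0.15]
LABELS = ["Needs Inspection", "Declining Health", "Moderate", "Healthy"]

def classify(mean_vari):
    """Map a mean VARI value to one of the four health labels."""
    return LABELS[bisect.bisect_right(BREAKS, mean_vari)]

for value in (-0.1, 0.0, 0.1, 0.3):
    print(value, classify(value))
```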

  21. Save the project.

Optional: Assign field tasks and monitor project progress

One of the biggest benefits of using ArcGIS Pro for feature extraction and imagery analysis is that it integrates with the entire ArcGIS platform. In the last tutorial, you used the deep learning tools in ArcGIS Pro to identify coconut palm trees from imagery. The palm trees are stored as features in a feature class, ready for use across the GIS. To extend the workflow, you can publish your results to the cloud, configure a web application template for quality assurance, assign tree inspection tasks to workers in the field, and monitor the progress of the project using a dashboard.

Publish to ArcGIS Online

To use configurable apps with your data, you need to publish the palm trees as a feature service in ArcGIS Online or ArcGIS Enterprise. In ArcGIS Pro, in the Contents pane, right-click the Palm Trees VARI layer, point to Sharing, and choose Share As Web Layer. The layer will be published to your ArcGIS Online account.

Learn more about publishing a feature service

Use app templates to review deep learning accuracy

Deep learning tools provide results whose accuracy is proportional to the accuracy of the training samples and the quality of the trained model. In other words, the results are not always perfect. You can assess the quality of the model results by reviewing the trees whose Confidence score, stored in the deep learning result, falls below a given value. Instead of zooming to each record with an attribute filter in ArcGIS Pro, the Image Visit configurable web app template lets you quickly review the accuracy of your results in a web application.

Learn more about the Image Visit app

Use ArcGIS Workforce to perform field verification

ArcGIS Workforce is a mobile app solution that uses the location of features to coordinate your field workforce. You can use the Workforce app to assign tasks to members of your organization so that all the trees with a VARI score that is listed as Needs Inspection can be assigned to someone in the field, checked, and marked with a suggested treatment.

Learn more about ArcGIS Workforce

Use ArcGIS Dashboards to monitor project progress

Finally, you can monitor the progress of the assignments dispatched in your ArcGIS Workforce project using ArcGIS Dashboards. ArcGIS Dashboards is a configurable web app that provides visualization and analytics for a real-time operational view of people, services, and tasks.

Learn more about getting started with ArcGIS Dashboards

In this tutorial, you obtained open-source drone imagery and created training samples of palm trees in the image. Those training samples were exported as image chips and used by a trained deep learning model to extract more than 11,000 palm trees in the image.

You learned about deep learning and image analysis, as well as configurable apps across the ArcGIS system. You can apply this workflow to many tasks, provided you have suitable imagery and a working knowledge of deep learning models. For example, you can use these tools to assess structural damage after natural disasters, count vehicles in an urban area, or find structures near geological danger zones.

You can find more tutorials in the tutorial gallery.