Configure your system
Before you can run the deep learning model, you need to check your computer system, install the latest drivers for your graphics card, and ensure that you have installed the deep learning libraries for use with ArcGIS Pro.
Check your graphics card
The processes performed when using deep learning consume a large amount of memory on your computer. To run a deep learning process, an NVIDIA GPU with a minimum of 8 GB of dedicated memory is recommended. To check your graphics card's dedicated GPU memory, you will use the NVIDIA-SMI utility. Learn more about what a GPU is and how it works.
- Click the Start menu and type command. Click Command Prompt to open it.
The Command Prompt window appears. The first command you will run is to change directories to where the NVIDIA-SMI utility is located.
- In the Command Prompt window, type cd, add a space, type or copy and paste C:\Windows\system32, and press Enter.
The path of the executable can be different on your machine depending on how the graphics cards are installed. NVIDIA could also be installed to this folder: C:\Program Files\NVIDIA Corporation\NVSMI. You can search your C drive for nvidia-smi.exe and in the Command Prompt, change the directory to that folder, using the cd command. Once you are in the correct folder, you can run the nvidia-smi.exe file.
Next, you will run an executable to find the dedicated GPU memory.
- In the Command Prompt window, type or copy and paste nvidia-smi.exe and press Enter.
The command runs and shows the maximum amount of dedicated GPU memory for your computer.
Depending on your computer, the displayed information may be different. In this example, this computer meets the minimum recommendation for this lesson of 8 GB of dedicated memory in the GPU.
If you have more than one graphics card, the deep learning geoprocessing tools will automatically choose the best GPU for training and inferencing.
- Make a note of the name of the graphics card and close the Command Prompt window.
In this example, the name of the graphics card is Quadro RTX 4000.
Now that you know more about your graphics cards, you will download and install the latest drivers from NVIDIA.
Update NVIDIA drivers
Next, you will update the drivers for your graphics card. An out-of-date GPU driver will cause deep learning tools to fail, so it is good practice to check system requirements and update the drivers before running the tools.
- Go to the NVIDIA Driver Downloads page.
- In the NVIDIA Driver Downloads section, choose the options that match your graphics card. In this example, the graphics card is Quadro RTX 4000.
- Click Search, click Download, and click Agree & Download.
- On your computer, locate and run the installer file. Click Yes to allow changes to the system and click OK to run the installer.
- In the NVIDIA Installer window, click Agree and Continue, accept the default installation options, and click Next.
- When the installation is finished, click Close.
Now, you have the necessary drivers for processing imagery with deep learning in ArcGIS Pro. However, you also need to have the deep learning libraries installed on your computer for ArcGIS Pro.
Using the deep learning tools requires that you have the correct Deep Learning Libraries installed on your computer. If you do not have these files installed, save your project, close ArcGIS Pro, and follow the instructions to install deep learning frameworks for ArcGIS. Once installed, reopen your project and continue with the lesson.
Once these libraries are installed, you are ready to begin detecting features from imagery.
Create training samples
Inventorying and assessing the health of each palm tree on the Kolovai, Tonga, plantation would take a lot of time and a large workforce. To simplify the process, you'll use a deep learning model in ArcGIS Pro to identify trees, then calculate their health based on a measure of vegetation greenness. The first step is to find imagery that shows Kolovai, Tonga, and has a fine enough spatial and spectral resolution to identify trees. Once you have the imagery, you'll create training samples and convert them to a format that can be used by a deep learning model. For the model to recognize what it's tasked with finding, you need to provide labeled examples of palm trees so that it can learn their typical pixel values and sizes.
Download the imagery
Accurate and high-resolution imagery is essential when extracting features. The model will only be able to identify the palm trees if the pixel size is small enough to distinguish palm canopies. Additionally, to calculate tree health, you'll need an image with spectral bands that will enable you to generate a vegetation health index. You'll find and download the imagery for this study from OpenAerialMap, an open-source repository of high-resolution, multispectral imagery.
- Go to the OpenAerialMap website.
- Click Start Exploring.
In the interactive map view, you can zoom, pan, and search for imagery available anywhere on the planet. The map is broken up into grids. When you point to a grid box, a number appears. This number indicates the number of available images for that box.
- In the search box, type Kolovai and press Enter. In the list of results, click Kolovai.
The map zooms to Kolovai. This is a town on the main island of Tongatapu with a coconut plantation.
- If necessary, zoom out until you see the label for Kolovai on the map. Click the grid box directly over Kolovai.
- In the side pane, click Kolovai UAV4R Subset (OSM-Fit) by Cristiano Giovando.
- Click the download button to download the raw .tif file. Save the image to a location of your choice.
Because of the file size, download may take a few minutes.
The default name of the file is 5b1b6fb2-5024-4681-a175-9b667174f48c.
Explore the data
To begin the classification process, you'll download an ArcGIS Pro project containing a few bookmarks to guide you through the process of creating training samples.
- Download the Palm_Tree_Detection.zip file and extract its contents to a suitable location on your computer.
Because of the file size, download may take a few minutes.
- If necessary, open the extracted Palm_Tree_Detection folder. Open the Kolovai folder. Double-click the Kolovai ArcGIS project file.
If prompted, sign in to your ArcGIS Online or ArcGIS Enterprise account.
If you don't have an organizational account, you can sign up for an ArcGIS free trial.
The project opens with a blank map; you will add the imagery you downloaded.
- On the ribbon, on the Map tab, in the Layer group, click Add Data.
The Add Data window appears.
- In the Add Data window, under Computer, browse to the Kolovai image you downloaded from OpenAerialMap. Select the .tif file and click OK.
If the Calculate statistics window appears, click Yes.
The Kolovai image is added to your map. The layer is listed in the Contents pane by its unique identifier, which isn't meaningful. It's best practice to rename the layer to something you understand.
- In the Contents pane, click the imagery layer two times and type Imagery. Press Enter.
- Pan and zoom around the map to get an idea of what the palm farm looks like.
A large number of coconut palm trees are in this image. Counting them individually, in the field or by visually inspecting the image, would take days. To enable a deep learning model to do this work for you, you'll create a sample of palm trees to use for training your model.
Create training schema
Creating good training samples is essential when training a deep learning model, or any image classification model. It is also often the most time-consuming step in the process. To provide your deep learning model with the information it needs to extract all the palm trees in the image, you'll create features for a number of palm trees to teach the model what the size, shape, and spectral signature of coconut palms may be. These training samples are created and managed through the Label Objects for Deep Learning tool.
Creating a training dataset entails digitizing hundreds of features and can be time consuming. If you do not want to create the training samples, a dataset has been provided in the Results geodatabase in the Provided Results folder. You can advance to the Create image chips section.
- In the Contents pane, make sure the Imagery layer is selected.
- On the ribbon, click the Imagery tab. In the Image Classification group, click Classification Tools and choose Label Objects for Deep Learning.
The Image Classification pane appears with a blank schema. You'll create a schema with only one class because you're only interested in extracting coconut palm trees from the imagery.
- In the Image Classification pane, right-click New Schema and choose Edit Properties. For Name, type Coconut Palms.
- Click Save.
The schema is renamed in the Image Classification pane. You can now add classes to it.
- Right-click Coconut Palms and choose Add New Class.
The Add New Class pane appears. You will set the parameters for your new class that will train the model. First is the name of the new class.
- For Name, type Palm.
Next is the value, or the code used by the computer when you train a model. The palm trees will be given a value of 1.
- For Value, type 1.
Finally, you'll choose the color used when you identify features. The color selected is arbitrary, but since you are digitizing features on imagery that is mostly green, yellow is highly visible.
- For Color, choose a bright yellow, such as Solar Yellow.
To see the name of a color, point to the color square.
- Click OK.
The Palm class is added to the Coconut Palms schema in the Image Classification pane. You'll create features with the Palm class to train the deep learning model.
Create training samples
To make sure you're capturing a representative sample of trees in the area, you'll digitize features throughout the image. These features are read into the deep learning model in a specific format called image chips. Image chips are small blocks of imagery cut from the source image. Once you've created a sufficient number of features in the Image Classification pane, you'll export them as image chips with metadata.
- On the ribbon, click the Map tab. In the Navigate group, click Bookmarks and choose Training Location 1.
The map zooms to the first area of sample palm trees that you'll identify.
- In the Image Classification pane, select the Palm class and click the Circle tool.
You'll use this tool to draw circles around each palm tree in your current display. Circles are drawn from the center of the feature outward, measuring the radius of the feature.
- On the map, click the center of a palm tree and draw a circle around a single tree.
A new palm record is added in the Labeled Objects group of the Image Classification pane. You'll create a palm record for every tree you can to ensure there are many image chips with all the palm trees marked.
- Draw circles around each tree in the map display.
If you would like extra guidance to help you understand how to draw these circles, or if you would like to skip digitizing the trees, a training sample dataset is available in the folder you downloaded. On the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Databases folder and double-click the Results geodatabase. Click PalmTraining and click OK.
When you're finished with this first bookmark's extent, you'll have approximately 180 samples recorded in the Labeled Objects group of the Image Classification pane.
Here are a few details to help you as you identify the trees:
- You can zoom and pan around the map to make digitizing easier but be sure to digitize as many of the trees within the extent of the bookmark as you can.
- If you are not sure about the exact location of a tree, it is OK to skip it. You want to ensure that you create accurate training samples.
- It is OK if the circles you draw overlap.
- Your final model will take into account the size of the trees you identify, so be sure to mark both small and large palm trees.
- Create training samples for every palm tree on each of the six remaining Training Location bookmarks.
Digitizing training samples can be a time-consuming process, but it pays off to have a large number of samples. The more samples you provide the model with as training data, the more accurate the results will be.
As an example, the training dataset used to train the model provided with this lesson had more than 600 samples.
- When you're done creating samples, in the Image Classification pane, click Save.
- In the Save current training samples window, under Project, click Databases and double-click the default project geodatabase, Kolovai.gdb.
- Name the feature class PalmTraining and click Save.
- Close the Image Classification pane. If the Label Objects window appears, click Yes.
Although you saved the training samples to a geodatabase, you need to refresh the geodatabase to be able to access this dataset.
- On the ribbon, click the View tab. In the Windows group, click Catalog Pane.
The Catalog pane appears.
- Expand Databases. Right-click Kolovai and choose Refresh.
Your PalmTraining feature class is now visible.
- On the Quick Access Toolbar, click Save.
Create image chips
The last step before training the model is exporting your training samples to the correct format as image chips.
- On the ribbon, in Command Search, type Export Training Data for Deep Learning. Click Export Training Data for Deep Learning.
The Geoprocessing pane appears.
You'll set the parameters for creating image chips. First, you'll choose the imagery used for training.
- For Input Raster, choose Imagery.
Next, you'll create a folder to store the image chips.
- For Output Folder, type imagechips.
Next, you'll select the feature class containing the training samples you created.
- For Input Feature Class Or Classified Raster Or Table, browse to the Kolovai geodatabase. Click PalmTraining and click OK.
If you did not draw the training samples, a dataset has been provided for you to use. Browse to Databases and open the Results geodatabase. Select PalmTraining and click OK.
Next, you'll select the field from your training data that holds the class value for each feature you drew. Recall that your palm class value was 1.
- For Class Value Field, choose Classvalue.
Next, you'll choose the output format for your chips. The format you choose is based on the type of deep learning model you want to train.
- For Image Format, choose JPEG format.
Next, you'll set the size, in pixels, for each of your image chips. The image chip size is determined by the size of the features you are trying to detect. If a feature is larger than the tile's x and y dimensions, your model will not provide good results.
- For Tile Size X and Tile Size Y, type 448.
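As a quick sanity check, the ground footprint of a chip is the tile size in pixels multiplied by the cell size. The sketch below assumes a hypothetical cell size of 5 centimeters; check your imagery's actual cell size in the layer properties.

```python
# Sketch: confirm a feature fits inside one training chip.
cell_size_m = 0.05    # meters per pixel (assumed value, not from this imagery)
tile_size_px = 448    # Tile Size X and Tile Size Y

ground_footprint_m = tile_size_px * cell_size_m   # roughly 22.4 m per side
palm_canopy_diameter_m = 6.0                      # ~3 m radius, per this lesson

# The chip comfortably contains a full palm canopy.
print(ground_footprint_m, palm_canopy_diameter_m < ground_footprint_m)
```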
Now, you'll ensure that your output format is correct. This, too, is dependent on the type of deep learning model that you are creating.
- For Metadata Format, ensure that PASCAL Visual Object Classes is chosen.
Before you run the tool and create image chips, you'll set the tool's environments. In particular, you need to know the resolution of the imagery. It's a best practice to create image chips at the same resolution as your input imagery.
- Click the Environments tab.
- Under Raster Analysis, for Cell Size, choose Same as layer Imagery.
- Click Run.
Depending on your computer's hardware, the tool will take a few minutes to run.
The image chips are created and are ready to be used for training a deep learning model.
- Save your project.
In this module, you downloaded and added open-source imagery to a project, created training samples using the Image Classification pane, and exported them to a format compatible with a deep learning model for training. Next, you'll create a deep learning model and identify all the trees on the plantation.
Detect palm trees with a deep learning model
Before you can begin to detect palm trees, you need to train a model. Training a model entails taking your training sample data and putting it through a neural network over and over again. This computationally intensive process will be handled by a geoprocessing tool, but this is how the model will learn what a palm tree is and is not. Once you have a model, you'll apply it to your imagery to automatically identify trees.
Train a deep learning model
The Train Deep Learning Model geoprocessing tool uses the image chips you labeled to determine what combinations of pixels in a given image represent palm trees. You'll use these training samples to train a single-shot detector (SSD) deep learning model.
Depending on your computer's hardware, training the model can take more than an hour. It's recommended that your computer be equipped with a dedicated graphics processing unit (GPU). If you do not want to train the model, a deep learning model has been provided to you in the project's Provided Results folder. Optionally, you can skip ahead to the Palm tree detection section of this lesson.
- On the ribbon, in Command Search, type Train Deep Learning Model. Select Train Deep Learning Model.
The Geoprocessing pane appears.
First, you'll set the tool to use your training samples.
- In the Geoprocessing pane, for Input Training Data, browse to the Kolovai project folder. Select the imagechips folder and click OK.
The folder may take a few seconds to load.
The imagechips folder contains two folders, two text files, a .json and an .emd file that were created from the Export Training Data for Deep Learning tool. The esri_model_definition.emd file is a template that will be filled in by the data scientist who trained the model, with information such as the deep learning framework, the file path to the trained model, class names, model type, and image specifications of the image used for training. The .emd file is the bridge between the trained model and ArcGIS Pro.
Next, you'll create a folder to store your model.
- For Output Model, type classify_palms.
Next, you'll set the number of epochs that your model will run. An epoch is a full cycle through the training dataset. During each epoch, the training dataset you stored in the imagechips folder will be passed forward and backward through the neural network one time.
- For Max Epochs, type 50.
Next, you'll ensure that you are training the correct model type for detecting objects in imagery. The model type will determine the deep learning algorithm and neural network that you will use to train your model. In this case, you're using the single-shot detector method because it's optimized for object detection.
- Expand Model Parameters and make sure Model Type is set to Single Shot Detector.
Next, you'll set the batch size. This parameter determines the number of training samples that will be trained at a time.
- For Batch Size, type 8.
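The relationship between epochs, batch size, and training iterations can be sketched as follows. The chip count used here is a made-up number; yours depends on how many samples you digitized and how the chips were tiled.

```python
import math

num_chips = 600      # hypothetical number of image chips
batch_size = 8       # Batch Size parameter
max_epochs = 50      # Max Epochs parameter

# Each epoch passes every chip through the network once, batch by batch.
iterations_per_epoch = math.ceil(num_chips / batch_size)
total_iterations = iterations_per_epoch * max_epochs
print(iterations_per_epoch, total_iterations)  # 75 3750
```

This is also why reducing the batch size helps when GPU memory runs out: fewer chips are held in memory at once, at the cost of more iterations per epoch.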
Next, you'll ensure that the model runs for all 50 epochs.
- Expand Advanced and uncheck Stop when model stops improving.
- Accept the rest of the default parameters.
Model arguments, the parameter values used to train the model, vary based on the model type you choose, and can be customized. For more information about choosing model arguments, see the Train Deep Learning Model documentation.
Finally, if you have a GPU, you'll set this tool to run on your computer's GPU for faster processing. Otherwise, skip the next step.
- Optionally, if your computer has a GPU, click the Environments tab. Under Processor Type, for Processor Type, choose GPU.
- Click Run.
This tool can take more than an hour to run.
If the model fails to run, reducing the Batch Size parameter can help. You may have to set this parameter to 4 or 2 and rerun the tool. However, this may reduce the quality of your trained model's results.
Palm tree detection
The bulk of the work in extracting features from imagery is preparing the data, creating training samples, and training the model. Now that these steps have been completed, you'll use a trained model to detect palm trees throughout your imagery. Object detection is a process that typically requires multiple tests to achieve the best results. There are several parameters that you can alter to allow your model to perform best. To test these parameters quickly, you'll try detecting trees in a small section of the image. Once you're satisfied with the results, you'll extend the detection tools to the full image.
If you did not train a model in the previous section, a deep learning package has been provided for you in the Provided Results folder.
Classifying features is a GPU-intensive process and can take a while to complete depending on your computer's hardware. If you choose not to detect the palm trees, results have been provided and you may skip ahead to the Refine detected features section.
- On the ribbon, click the Map tab. In the Navigate group, click Bookmarks. Choose Detection Area.
- On the ribbon, in Command Search, type Detect Objects Using Deep Learning. Choose Detect Objects Using Deep Learning.
First, you'll set the imagery from which you want to detect features.
- In the Detect Objects Using Deep Learning tool, for Input Raster, choose Imagery.
Next, you'll name the feature class of detected objects.
- For Output Detected Objects, type DetectedPalms.
Next, you'll choose the model you created to detect the palm trees.
- For Model Definition, browse to the classify_palms folder. Click the classify_palms.dlpk deep learning model package file. Click OK.
If you did not train a deep learning model, browse to the project's folder. Open Provided Results. Open classify_palms. Click the classify_palms.dlpk deep learning model package file. Click OK.
Next, you'll set some of the model's arguments. Arguments are used to adjust how the model runs for optimal results.
When performing convolution in a convolutional neural network, you are essentially shrinking the data, and the pixels at the edges of the image are used much less during the analysis than the inner pixels. The padding parameter adds an additional boundary of pixels to the outside edges of the image, which reduces the loss of information from valid edge pixels caused by this shrinking. You'll leave this as the default.
The threshold argument is the confidence threshold—how much confidence is acceptable to label an object a palm tree? This number can be tweaked to achieve desired accuracy.
- For threshold, type 0.2.
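Conceptually, the threshold acts as a simple filter on each candidate detection's confidence score. The scores below are made up for illustration.

```python
# Sketch: what the threshold argument does. Each candidate detection carries
# a confidence score; anything below the threshold is discarded.
detections = [
    {"id": 1, "confidence": 0.92},
    {"id": 2, "confidence": 0.35},
    {"id": 3, "confidence": 0.12},  # below 0.2, so it is dropped
]

threshold = 0.2
kept = [d for d in detections if d["confidence"] >= threshold]
print([d["id"] for d in kept])  # [1, 2]
```

A lower threshold finds more trees but admits more false positives; a higher threshold does the opposite.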
Next, you'll set the nms_overlap argument. This controls how much detected features are allowed to intersect. A lower value means detections can overlap less before one of them is treated as a duplicate and removed.
- For nms_overlap, keep the default value of 0.1.
Next, you'll set the batch size.
- For batch_size, type 8.
Before running the tool, you'll set some environments.
- Click the Environments tab.
Next, you'll set a processing extent. This parameter forces the tool to only process the imagery that falls within the current map extent. Since the object detection process is hardware intensive, it is best to run the tool on a smaller area to test your parameters before running it on a full imagery dataset.
- Under Processing Extent, set Extent to Current Display Extent.
After you choose Current Display Extent, the coordinates of the extent's geographic bounding box are displayed.
- Under Raster Analysis, for Cell Size, choose Imagery.
- Optionally, if your computer has a GPU, under Processor Type, for Processor Type, choose GPU.
- Click Run.
The tool will take several minutes to run, depending on your hardware and whether you are running on the CPU or GPU.
Observe your results. You can try experimenting with the arguments to see how this impacts your results.
Once you have arguments that yield good results, you'll detect palm trees across the entire image.
- On the Environments tab, for Processing Extent, choose Default.
- Click Run.
Since the tool is running on the full imagery dataset, processing time will increase based on your computer's hardware.
If you do not run the model to detect the palm trees, a dataset of palm trees has been provided. To add the DetectedPalms feature class to the map, on the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Kolovai folder and to the Provided Results folder, open the Results geodatabase, and double-click the DetectedPalms feature class.
When the tool finishes, observe your results. The color of your final results may differ from the image provided.
You'll notice that some of your palm trees have overlapping features. This means that many trees have been identified multiple times, leading to an erroneous count of the total number of trees. After you change the symbology to make this issue clearer, you'll remove these overlapping features with a geoprocessing tool.
- In the Contents pane, double-click the DetectedPalms layer's symbol.
The Symbology pane appears.
- Click the Properties tab.
- Under Appearance, set the following:
- For Color, choose No color.
- For Outline color, choose Solar yellow.
- For Outline width, type 1.5.
- Click Apply.
Observe your results again now that the symbology has been changed.
Next, you'll remove the duplicate polygons.
- Save the project.
Refine detected features
Ensuring an accurate count of palm trees is important. Since many trees have been counted multiple times, you'll use the Non Maximum Suppression tool to resolve this. However, you have to be careful; palm tree canopies can overlap. So, you'll remove features that are clearly duplicates of the same tree while ensuring that separate trees with some overlap are not removed.
- On the ribbon, in Command Search, type Non Maximum Suppression. Select Non Maximum Suppression.
First, you'll choose your layer of palm trees created by the model.
- For Input Feature Class, choose DetectedPalms.
If you skipped the previous section, a dataset of palm trees has been provided. To add the DetectedPalms feature class to the map, on the ribbon, on the Map tab, in the Layer group, click Add Data. Browse to the Kolovai folder and to the Provided Results folder, open the Results geodatabase, and double-click the DetectedPalms feature class.
Each palm tree in this dataset has a confidence score to represent how accurately the model identified each feature. You'll enter this field into the tool.
- For Confidence Score Field, choose Confidence.
Each feature detected has also been marked with its appropriate class. Recall that this model had one class, Palm. This was recorded when you used the model.
- For Class Value Field, choose Class.
- For Output Feature Class, type DetectedPalms_NMS.
The Max Overlap Ratio determines how much overlap there can be between two features before they are considered the same feature. A higher value indicates that there can be more overlap between two features. The feature with the lower confidence will be removed. You'll set the tool to remove any trees with more than 50 percent overlap.
- For Max Overlap Ratio, type 0.5.
- Click Run.
A new layer is added in the Contents pane. It has the same symbology as the DetectedPalms layer.
- In the Contents pane, turn off the DetectedPalms layer.
You'll see that there are fewer trees with overlap in the new layer.
You can rerun the tool as needed with different Max Overlap Ratio values to achieve optimal results.
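A minimal sketch of the idea behind overlap-based suppression, under the assumption that overlap is measured as intersection over union and that the lower-confidence feature is removed. The actual tool works on the detected polygons; axis-aligned boxes are used here only to keep the geometry short.

```python
def overlap_ratio(a, b):
    """Intersection area divided by union area (IoU) of two boxes
    given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def suppress(boxes, scores, max_overlap=0.5):
    """Keep boxes greedily by descending confidence, dropping any box whose
    overlap with an already-kept box exceeds max_overlap."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(overlap_ratio(boxes[i], boxes[k]) <= max_overlap for k in kept):
            kept.append(i)
    return sorted(kept)

# Two near-duplicate detections of one tree, plus one separate tree.
boxes = [(0, 0, 6, 6), (1, 1, 7, 7), (20, 20, 26, 26)]
scores = [0.9, 0.6, 0.8]
print(suppress(boxes, scores, max_overlap=0.5))  # [0, 2]
```

The two boxes over the first tree overlap by about 53 percent, which exceeds the 0.5 ratio, so the lower-confidence duplicate is removed while the separate tree is kept.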
- Remove the DetectedPalms layer from the map.
- In the Contents pane, click DetectedPalms_NMS two times and rename it Detected Palm Trees.
- Turn off the Detected Palm Trees layer.
- Save your project.
You've just trained and used a model to detect palm trees. Next, you'll use raster functions to obtain an estimate of vegetation health for each tree detected in your study area.
It is important to realize that your model's results might not be perfect the first time. Training and implementing a deep learning model is a process that can take several iterations to provide the best results. Better results can be achieved by doing the following:
- Increasing your initial sample size of features
- Ensuring that your training samples are accurately capturing the features you want to detect
- Making sure your training samples include features of different sizes
- Adjusting the geoprocessing tools' parameters
- Retraining an existing model using the Train Deep Learning Model tool's advanced parameters
Estimate vegetation health
In the previous module, you used a deep learning model to extract coconut palm trees from imagery. In this module, you'll use the same imagery to estimate vegetation health by calculating a vegetation health index.
To assess vegetation health, you'll calculate the Visible Atmospherically Resistant Index (VARI), which was developed as an indirect measure of leaf area index (LAI) and vegetation fraction (VF) using only reflectance values from the visible wavelength:
VARI = (Rg - Rr) / (Rg + Rr - Rb)
where Rr, Rg, and Rb are reflectance values for the red, green, and blue bands, respectively (Gitelson et al. 2002).
Typically, you would use reflectance values in both the visible and the near infrared (NIR) wavelength bands to estimate vegetation health, as with the normalized difference vegetation index (NDVI). However, the imagery you downloaded from OpenAerialMap is a multiband image with three bands, all in the visible electromagnetic spectrum, so you'll use the VARI instead.
The VARI measurement requires the input of the three bands within the OpenAerialMap imagery. To calculate VARI, you'll use the Band Arithmetic raster function. Raster functions are quicker than geoprocessing tools because they don't create a new raster dataset. Instead, they perform real-time analysis on pixels as you pan and zoom.
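A small sketch of the calculation the Band Arithmetic function performs per pixel, using made-up reflectance values:

```python
def vari(r, g, b):
    """Visible Atmospherically Resistant Index: (g - r) / (g + r - b)."""
    return (g - r) / (g + r - b)

# Healthy vegetation reflects much more green than red, so VARI is positive.
print(round(vari(r=60, g=120, b=40), 3))   # 0.429
# Bare soil or roads reflect red and green similarly, so VARI is near zero.
print(round(vari(r=100, g=105, b=90), 3))  # 0.043
```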
- On the ribbon, click the Imagery tab. In the Analysis group, click Raster Functions.
The Raster Functions pane appears.
- In the Raster Functions pane, search for and select the Band Arithmetic raster function.
- In the Band Arithmetic Properties function, set the following parameters:
- For Raster, choose the Imagery raster layer.
- For Method, choose VARI. The function requires you to provide the band index number that corresponds to the input bands for the formula. The input underneath the Band Indexes parameter shows Red Green Blue, so you'll provide the band index numbers that correspond with the Red, Green, and Blue bands, in that order. Make sure to put a single space between each band.
- For Band Indexes, type 1 2 3.
- Click Create new layer.
The VARI layer is added to the Contents pane as Band Arithmetic_Imagery. By zooming and panning around the area, you can see features such as the coastline, roads, buildings, and fields.
- In the Contents pane, make sure the Band Arithmetic_Imagery layer is selected.
Next, you'll change how the raster draws on the map to make the VARI symbology clearer.
- On the ribbon, click the Appearance contextual tab.
- In the Rendering group, select the Stretch Type drop-down menu and choose Standard Deviation.
- In the Contents pane, rename Band Arithmetic_Imagery to VARI.
Extract VARI to Coconut Palms
Having a raster layer showing VARI is helpful, but not necessarily actionable. To figure out which trees need attention, you want to know the average VARI for each individual tree. To find it, you'll extract the underlying average VARI value for each tree and symbolize the values to show which trees are healthy and which need maintenance.
First, you'll convert the polygon features to points.
- On the ribbon, in Command Search, type Feature To Point. Choose Feature To Point.
- In the Feature To Point tool, enter the following parameters:
- For Input Features, select the Detected Palm Trees layer.
- For Output Feature Class, type PalmTree_Points.
- Click Run.
You now have a point feature class with a point at the centroid of each detected polygon. If you zoom in to various locations and use the Measure tool, you'll see that the palm trees have an average radius of roughly 3 meters. In the next step, you'll create a polygon layer with a 3-meter buffer around each point.
The Measure tool is found on the ribbon, on the Map tab, in the Inquiry group.
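Conceptually, Feature To Point places each output point at the polygon's centroid. A sketch of how a simple polygon's centroid can be computed, using the standard shoelace-based formula (the tool's internal implementation may differ):

```python
# Centroid of a simple polygon via the shoelace formula -- a sketch of the
# point that Feature To Point produces for each detected tree polygon.
def polygon_centroid(vertices):
    """Centroid of a simple polygon given as [(x, y), ...]."""
    area2 = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross          # twice the signed area
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3 * area2), cy / (3 * area2)

# Unit square: centroid at (0.5, 0.5).
print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
```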
- On the ribbon, in Command Search, type Pairwise Buffer. Choose Pairwise Buffer.
- In the Pairwise Buffer tool, enter the following parameters:
- For Input Features, choose PalmTree_Points.
- For Output Feature Class, type PalmTreeBuffer.
- For Distance, type 3 and choose Meters.
- Click Run.
You have a polygon feature class depicting the location and general shape of each palm tree canopy.
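Under the hood, a point buffer is essentially a circle of the given radius around each point, which tools typically approximate as a many-sided polygon. A minimal sketch of that idea (the vertex count of 32 is an arbitrary choice for illustration):

```python
import math

# Sketch of a circular point buffer approximated as a regular polygon of
# vertices at a fixed distance (here, 3 meters) around the point.
def buffer_ring(x, y, radius=3.0, segments=32):
    """Approximate a circular buffer as a list of (x, y) vertices."""
    return [(x + radius * math.cos(2 * math.pi * i / segments),
             y + radius * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]

ring = buffer_ring(0.0, 0.0)
# Every vertex lies exactly `radius` away from the center point.
print(max(abs(math.hypot(px, py) - 3.0) for px, py in ring))
```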
- In the Contents pane, turn off the VARI and PalmTree_Points layers.
Your map shows the estimated canopies of the palm trees in the imagery.
Next, you'll extract the average VARI value for each polygon. The Zonal Statistics as Table tool goes through each polygon you created one at a time, finds all of the VARI pixels that fall within the polygon, and calculates the average VARI value for that polygon.
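The zonal mean described above can be sketched in a few lines: group pixel values by zone ID, then average each group. The zone IDs and VARI values below are made up for illustration:

```python
from collections import defaultdict

# Sketch of the zonal mean that Zonal Statistics as Table computes.
def zonal_mean(zone_pixels):
    """zone_pixels: iterable of (zone_id, value) pairs -> {zone_id: mean}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for zone_id, value in zone_pixels:
        sums[zone_id] += value
        counts[zone_id] += 1
    return {z: sums[z] / counts[z] for z in sums}

pixels = [(1, 0.10), (1, 0.30), (2, 0.50), (2, 0.70), (2, 0.60)]
print(zonal_mean(pixels))  # zone 1 averages 0.2, zone 2 averages 0.6
```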
- On the ribbon, in Command Search, type Zonal Statistics as Table. Choose Zonal Statistics as Table.
- In the Zonal Statistics as Table tool, enter the following parameters:
- For Input raster or feature zone data, choose PalmTreeBuffer.
- For Zone field, choose ORIG_FID.
- For Input value raster, choose VARI.
- For Output table, type MeanVARI_per_Palm.
- Ensure Ignore NoData in calculations is checked.
- For Statistics type, choose Mean.
Setting the Zone field to ORIG_FID will ensure that you get statistics for each tree separately. This attribute is the unique ID carried over from the original detected palm tree features.
- Click Run.
The output table is added to the bottom of the Contents pane. If you open it, you'll see the original FID value and a column called MEAN containing the average VARI value. You'll join this table to the PalmTreeBuffer layer to get a feature class with the average VARI for each detected palm tree.
- On the ribbon, in Command Search, type Join Field. Choose Join Field.
- In the Join Field tool, enter the following parameters:
- For Input Table, choose PalmTreeBuffer.
- For Input Join Field, choose ORIG_FID.
- For Join Table, choose MeanVARI_per_Palm.
- For Join Table Field, choose ORIG_FID.
- For Transfer Fields, choose MEAN.
- Click Run.
The PalmTreeBuffer layer now has a field called MEAN added to it. You'll rename this layer and symbolize it for a better understanding of the data.
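The attribute join that Join Field performs amounts to looking up each feature's ORIG_FID in the statistics table and copying over its MEAN value. A sketch with illustrative records (not read from the real data):

```python
# Sketch of a Join Field-style attribute join: match on a key field and
# transfer one field's value onto each feature.
def join_field(features, stats, key="ORIG_FID", field="MEAN"):
    lookup = {row[key]: row[field] for row in stats}
    for feat in features:
        feat[field] = lookup.get(feat[key])  # None if no matching record
    return features

trees = [{"ORIG_FID": 1}, {"ORIG_FID": 2}]
table = [{"ORIG_FID": 1, "MEAN": 0.42}, {"ORIG_FID": 2, "MEAN": -0.05}]
print(join_field(trees, table))
```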
- In the Contents pane, rename PalmTreeBuffer to Palm Trees VARI.
- On the ribbon, on the Appearance tab, in the Drawing group, click Symbology.
The Symbology pane appears.
- For Primary symbology, choose Graduated Colors.
- For Field, choose MEAN.
- If necessary, for Method, choose Natural Breaks (Jenks) and set Classes to 4.
- For Color scheme, click the drop-down menu and check Show all and Show names. Scroll and select the Red-Yellow-Green (4 Classes) color scheme.
- Under Classes, click each label and rename the classes from top to bottom as follows: Needs Inspection, Declining Health, Moderate, and Healthy.
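Graduated-colors symbology sorts each tree into a class by comparing its MEAN value against the class break values. A sketch of that assignment; the break values below are hypothetical, whereas ArcGIS Pro computes the real ones from your data with the Natural Breaks (Jenks) method:

```python
import bisect

# Sketch of graduated-colors classification. BREAKS holds the upper bounds
# of the first three classes; these values are illustrative only.
BREAKS = [-0.05, 0.0, 0.05]
LABELS = ["Needs Inspection", "Declining Health", "Moderate", "Healthy"]

def classify(mean_vari):
    """Return the class label for a tree's mean VARI value."""
    return LABELS[bisect.bisect_right(BREAKS, mean_vari)]

print(classify(-0.10))  # falls in the lowest class: Needs Inspection
print(classify(0.20))   # above the last break: Healthy
```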
You now have a map with a feature class showing the location and health of each palm tree in the image.
- Save the project.
Optional: Assign field tasks and monitor project progress
One of the biggest benefits of using ArcGIS Pro for feature extraction and imagery analysis is that it can be integrated with the entire ArcGIS platform. In the last lesson, you used the deep learning tools in ArcGIS Pro to identify coconut palm trees from imagery. The palm trees are stored as features in a feature class that's ready for use in a GIS. To extend the workflow, you can publish your results to the cloud, configure a web application template for quality assurance, assign tree inspection tasks to workers in the field, and monitor the progress of the project using a dashboard.
Publish to ArcGIS Online
To use configurable apps to work with your data, you need to publish the palm trees as a feature service in ArcGIS Online or ArcGIS Enterprise. In ArcGIS Pro, right-click the Palm Trees VARI layer in the Contents pane, point to Sharing, and choose Share As Web Layer to publish it to your ArcGIS Online account.
Use app templates to review deep learning accuracy
Deep learning tools provide results with accuracy that is proportional to the accuracy of the training samples and the quality of the trained model. In other words, the results are not always perfect. You can assess the quality of the model results by reviewing the trees whose Confidence score, stored in the deep learning output, falls below a given threshold. Instead of zooming to each record with an attribute filter in ArcGIS Pro, you can use the Image Visit configurable web app template to quickly review the accuracy of your results in a web application.
Use ArcGIS Workforce to perform field verification
ArcGIS Workforce is a mobile app solution that uses the location of features to coordinate your field workforce. You can use the Workforce app to assign tasks to members of your organization, so that every tree whose VARI class is Needs Inspection can be assigned to someone in the field, checked, and marked with a suggested treatment.
Use ArcGIS Dashboards to monitor project progress
Finally, you can monitor the progress of the assignments dispatched in your ArcGIS Workforce project using ArcGIS Dashboards. ArcGIS Dashboards is a configurable web app that provides visualization and analytics for a real-time operational view of people, services, and tasks.
In this lesson, you obtained open-source drone imagery and created training samples of palm trees in the image. Those samples were provided to a data scientist as image chips and used to train a deep learning model that extracted more than 11,000 palm trees from the image.
You learned about deep learning and image analysis, as well as configurable apps across the ArcGIS system. You can apply this workflow to any number of tasks if you have the imagery and knowledge of deep learning models. For example, you can use these tools to assess structural damage resulting from natural disasters, count vehicles in an urban area, or find structures near geological danger zones.
You can find more lessons in the Learn ArcGIS Lesson Gallery.